Updates from: 08/17/2021 03:07:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Previously updated : 08/12/2021 Last updated : 08/16/2021 zone_pivot_groups: b2c-policy-type
Every new Azure AD B2C tenant comes with an initial domain name, <domainname&
Follow these steps to add a custom domain to your Azure AD B2C tenant:
1. [Add your custom domain name to Azure AD](../active-directory/fundamentals/add-custom-domain.md#add-your-custom-domain-name-to-azure-ad).
+ > [!IMPORTANT]
+ > For these steps, be sure to sign in to your **Azure AD B2C** tenant and select the **Azure Active Directory** service.
|login | TXT | MS=ms12345678 |
|account | TXT | MS=ms87654321 |
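If you host the zone yourself, the records above might look like the following BIND-style fragment. This is a sketch: the *contoso.com* zone and the `MS=...` values are the placeholders from the table, not real verification codes.

```
; hypothetical fragment of the contoso.com zone
login    IN TXT "MS=ms12345678"
account  IN TXT "MS=ms87654321"
```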
- The TXT record must be associated with the subdomain, or hostname of the domain. For example, the *login* part of the *contoso.com* domain. If the hostname is empty or `@`, Azure AD will not be able to verify the custom domain you added. In the following examples, both records are wrongly configured.
+ The TXT record must be associated with the subdomain, or hostname of the domain. For example, the *login* part of the *contoso.com* domain. If the hostname is empty or `@`, Azure AD will not be able to verify the custom domain you added. In the following examples, both records are configured incorrectly.
|Name (hostname) |Type |Data |
|---|---|---|
Follow these steps to add a custom domain to your Azure AD B2C tenant:
> [!TIP]
> You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use [Azure DNS zone](../dns/dns-getstarted-portal.md), or [App Service domains](../app-service/manage-custom-dns-buy-domain.md).
-1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. Verifying just the top-level domain isn't sufficient. For example, to be able to sign-in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*.
+1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. For example, to be able to sign-in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not the top-level domain *contoso.com*.
After the domain is verified, **delete** the DNS TXT record you created.
Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin
1. Under **Policies**, select **User flows (policies)**.
1. Select a user flow, and then select **Run user flow**.
1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Copy to clipboard**.
+1. Copy the URL under **Run user flow endpoint**.
![Screenshot demonstrates how to copy the authorization request URI.](./media/custom-domain/user-flow-run-now.png)
-1. In the **Run user flow endpoint** URL, replace the Azure AD B2C domain (_<tenant-name>_.b2clogin.com) with your custom domain.
+1. To simulate a sign-in with your custom domain, open a web browser and use the URL you copied. Replace the Azure AD B2C domain (_<tenant-name>_.b2clogin.com) with your custom domain.
+ For example, instead of:

```http
https://<tenant-name>.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
```

Use:
```http
https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
```
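As an illustrative sketch (not part of the documented steps), the host substitution can be automated. The tenant name `contoso` and the custom domain `login.contoso.com` below are the examples used in this article; the function name is hypothetical.

```python
from urllib.parse import urlsplit, urlunsplit

def to_custom_domain(url: str, custom_domain: str) -> str:
    """Swap the default *.b2clogin.com host in an authorize URL for a custom domain."""
    parts = urlsplit(url)
    if not parts.netloc.endswith(".b2clogin.com"):
        raise ValueError("URL does not use the default <tenant-name>.b2clogin.com domain")
    # Keep the path, query, and fragment; replace only the host
    return urlunsplit((parts.scheme, custom_domain) + parts[2:])

# Hypothetical run-user-flow URL copied from the portal
url = ("https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize"
       "?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339")
```

The policy name, query parameters, and everything after the host stay unchanged; only the domain is replaced.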
-1. Select **Run user flow**. Your Azure AD B2C policy should load.
-1. Sign-in with Azure AD B2C local account.
+
+1. Verify that the Azure AD B2C policy loads correctly. Then, sign in with a local account.
1. Repeat the test with the rest of your policies.

## Configure your identity provider
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/data-residency.md
Previously updated : 07/14/2021 Last updated : 08/16/2021
Azure AD B2C is available worldwide via the Azure public cloud. You can see exam
## Data residency
-Azure AD B2C stores user data in either United States, Europe, or the Asia Pacific region.
+Azure AD B2C stores user data in the United States, Europe, the Asia Pacific region, or Australia.
Data residency is determined by the country/region you select when you [create an Azure AD B2C tenant](tutorial-create-tenant.md):
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/quickstart-native-app-desktop.md
Previously updated : 09/12/2019 Last updated : 08/16/2021
Azure Active Directory B2C (Azure AD B2C) provides cloud identity management to
## Run the application in Visual Studio

1. In the sample application project folder, open the **active-directory-b2c-wpf.sln** solution in Visual Studio.
-2. Press **F5** to debug the application.
+2. [Restore the NuGet packages](/nuget/consume-packages/package-restore).
+3. Press **F5** to debug the application.
## Sign in using your account
active-directory How To Authentication Sms Supported Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-authentication-sms-supported-apps.md
For the same reason, Microsoft Office mobile apps (except Microsoft Teams, Intun
| Unsupported Microsoft apps | Examples |
|---|---|
| Native desktop Microsoft apps | Microsoft Teams, O365 apps, Word, Excel, etc.|
-| Native mobile Microsoft apps (except Microsoft Teams, Intune Company Portal, and Microsoft Azure) | Outlook, Edge, Power BI, Stream, Sharepoint, Power Apps, Word, etc.|
+| Native mobile Microsoft apps (except Microsoft Teams, Intune Company Portal, and Microsoft Azure) | Outlook, Edge, Power BI, Stream, SharePoint, Power Apps, Word, etc.|
| Microsoft 365 web apps (accessed directly on web) | [Outlook](https://outlook.live.com/owa/), [Word](https://office.live.com/start/Word.aspx), [Excel](https://office.live.com/start/Excel.aspx), [PowerPoint](https://office.live.com/start/PowerPoint.aspx), [OneDrive](https://onedrive.live.com/about/signin)|

## Support for Non-Microsoft apps
active-directory Active Directory Signing Key Rollover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-signing-key-rollover.md
Previously updated : 8/11/2020 Last updated : 8/16/2021
This article discusses what you need to know about the public keys that are used
## Overview of signing keys in the Microsoft identity platform

The Microsoft identity platform uses public-key cryptography built on industry standards to establish trust between itself and the applications that use it. In practical terms, this works in the following way: The Microsoft identity platform uses a signing key that consists of a public and private key pair. When a user signs in to an application that uses the Microsoft identity platform for authentication, the Microsoft identity platform creates a security token that contains information about the user. This token is signed by the Microsoft identity platform using its private key before it is sent back to the application. To verify that the token is valid and originated from the Microsoft identity platform, the application must validate the token's signature using the public keys exposed by the Microsoft identity platform, which are contained in the tenant's [OpenID Connect discovery document](https://openid.net/specs/openid-connect-discovery-1_0.html) or SAML/WS-Fed [federation metadata document](../azuread-dev/azure-ad-federation-metadata.md).
-For security purposes, the Microsoft identity platform's signing key rolls on a periodic basis and, in the case of an emergency, could be rolled over immediately. There is no set or guaranteed time between these key rolls - any application that integrates with the Microsoft identity platform should be prepared to handle a key rollover event no matter how frequently it may occur. If it doesn't, and your application attempts to use an expired key to verify the signature on a token, the sign-in request will fail. Checking every 24 hours for updates is a best practice, with throttled (once every five minutes at most) immediate refreshes of the key document if a token is encountered with an unknown key identifier.
+For security purposes, the Microsoft identity platform's signing key rolls on a periodic basis and, in the case of an emergency, could be rolled over immediately. There is no set or guaranteed time between these key rolls - any application that integrates with the Microsoft identity platform should be prepared to handle a key rollover event no matter how frequently it may occur. If your application doesn't handle sudden refreshes, and attempts to use an expired key to verify the signature on a token, your application will incorrectly reject the token. Checking every 24 hours for updates is a best practice, with throttled (once every five minutes at most) immediate refreshes of the key document if a token is encountered that doesn't validate with the keys in your application's cache.
-There is always more than one valid key available in the OpenID Connect discovery document and the federation metadata document. Your application should be prepared to use any and all of the keys specified in the document, since one key may be rolled soon, another may be its replacement, and so forth. The number of keys present can change over time based on the internal architecture of the Microsoft identity platform as we support new platforms, new clouds, or new authentication protocols. Neither the order of the keys in the JSON response nor the order in which they were exposed should be considered meaninful to your app.
+There is always more than one valid key available in the OpenID Connect discovery document and the federation metadata document. Your application should be prepared to use any and all of the keys specified in the document, since one key may be rolled soon, another may be its replacement, and so forth. The number of keys present can change over time based on the internal architecture of the Microsoft identity platform as we support new platforms, new clouds, or new authentication protocols. Neither the order of the keys in the JSON response nor the order in which they were exposed should be considered meaningful to your app.
-Applications that support only a single signing key, or those that require manual updates to the signing keys, are inherently less secure and reliable. They should be updated to use [standard libraries](reference-v2-libraries.md) to ensure that they are always using up-to-date signing keys, among other best practices.
+Applications that support only a single signing key, or those that require manual updates to the signing keys, are inherently less secure and less reliable. They should be updated to use [standard libraries](reference-v2-libraries.md) to ensure that they are always using up-to-date signing keys, among other best practices.
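The caching policy described above (a routine daily refresh, plus an immediate but throttled refresh when a token arrives signed with an unknown key identifier) can be sketched as follows. This is an illustrative cache, not the implementation of any Microsoft library; `fetch_keys` stands in for a call that downloads the keys from the discovery document.

```python
import time

REFRESH_INTERVAL = 24 * 60 * 60   # routine refresh: once every 24 hours
MIN_RETRY_INTERVAL = 5 * 60       # on-demand refreshes throttled to one per 5 minutes

class KeyCache:
    """Illustrative signing-key cache; `fetch_keys` returns a {kid: key} mapping."""

    def __init__(self, fetch_keys):
        self._fetch = fetch_keys
        self._keys = {}
        self._last_fetch = float("-inf")  # force a fetch on first use

    def _refresh(self, now):
        self._keys = self._fetch()
        self._last_fetch = now

    def get(self, kid, now=None):
        """Return the key for `kid`, or None if it stays unknown (reject the token)."""
        now = time.time() if now is None else now
        if now - self._last_fetch >= REFRESH_INTERVAL:
            self._refresh(now)  # routine daily refresh
        if kid not in self._keys and now - self._last_fetch >= MIN_RETRY_INTERVAL:
            self._refresh(now)  # unknown key identifier: throttled immediate refresh
        return self._keys.get(kid)
```

Because the cache holds every key from the document, a token signed with either the outgoing or the replacement key validates without a network round trip.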
## How to assess if your application will be affected and what to do about it

How your application handles key rollover depends on variables such as the type of application or what identity protocol and library was used. The sections below assess whether the most common types of applications are impacted by the key rollover and provide guidance on how to update the application to support automatic rollover or manually update the key.
How your application handles key rollover depends on variables such as the type
This guidance is **not** applicable for:
-* Applications added from Azure AD Application Gallery (including Custom) have separate guidance with regards to signing keys. [More information.](../manage-apps/manage-certificates-for-federated-single-sign-on.md)
+* Applications added from Azure AD Application Gallery (including Custom) have separate guidance with regard to signing keys. [More information.](../manage-apps/manage-certificates-for-federated-single-sign-on.md)
* On-premises applications published via application proxy don't have to worry about signing keys.

### <a name="nativeclient"></a>Native client applications accessing resources
active-directory Device Registration How It Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-registration-how-it-works.md
+
+ Title: How Azure AD device registration works
+description: Azure AD device registration flows for managed and federated domains
+ Last updated : 08/16/2021
+# How it works: Device registration
+
+Device Registration is a prerequisite to cloud-based authentication. Commonly, devices are Azure AD or hybrid Azure AD joined to complete device registration. This article provides details of how Azure AD join and hybrid Azure AD join work in managed and federated environments. For more information about how Azure AD authentication works on these devices, see the article [Primary refresh tokens](concept-primary-refresh-token.md#detailed-flows).
+
+## Azure AD joined in Managed environments
++
+| Phase | Description |
+| :-: | :-- |
+| A | The most common way Azure AD joined devices register is during the out-of-box-experience (OOBE), where the device loads the Azure AD join web application in the Cloud Experience Host (CXH) application. The application sends a GET request to the Azure AD OpenID configuration endpoint to discover authorization endpoints. Azure AD returns the OpenID configuration, which includes the authorization endpoints, to the application as a JSON document. |
+| B | The application builds a sign-in request for the authorization end point and collects user credentials. |
+| C | After the user provides their user name (in UPN format), the application sends a GET request to Azure AD to discover corresponding realm information for the user. This information determines if the environment is managed or federated. Azure AD returns the information in a JSON object. The application determines the environment is managed (non-federated).<br><br>The last step in this phase has the application create an authentication buffer and, if in OOBE, temporarily cache it for automatic sign-in at the end of OOBE. The application POSTs the credentials to Azure AD where they're validated. Azure AD returns an ID token with claims. |
+| D | The application looks for MDM terms of use (the mdm_tou_url claim). If present, the application retrieves the terms of use from the claim's value, presents the contents to the user, and waits for the user to accept the terms of use. This step is optional and skipped if the claim isn't present or if the claim value is empty. |
+| E | The application sends a device registration discovery request to the Azure Device Registration Service (ADRS). Azure DRS returns a discovery data document, which returns tenant-specific URIs to complete device registration. |
+| F | The application creates a TPM-bound (preferred) RSA 2048-bit key pair known as the device key (dkpub/dkpriv). The application creates a certificate request using dkpub and signs the certificate request using dkpriv. Next, the application derives a second key pair from the TPM's storage root key. This key is the transport key (tkpub/tkpriv). |
+| G | The application sends a device registration request to Azure DRS that includes the ID token, certificate request, tkpub, and attestation data. Azure DRS validates the ID token, creates a device ID, and creates a certificate based on the included certificate request. Azure DRS then writes a device object in Azure AD and sends the device ID and the device certificate to the client. |
+| H | Device registration completes by receiving the device ID and the device certificate from Azure DRS. The device ID is saved for future reference (viewable from `dsregcmd.exe /status`), and the device certificate is installed in the Personal store of the computer. With device registration complete, the process continues with MDM enrollment. |
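The managed-versus-federated decision in phase C can be illustrated with a small sketch. The `NameSpaceType`, `AuthURL`, and `Login` field names and the sample realm objects are assumptions about the realm JSON's shape, made for illustration only.

```python
def classify_realm(realm: dict) -> str:
    """Classify a user-realm discovery response; field names are assumed, not documented here."""
    namespace_type = realm.get("NameSpaceType", "")
    if namespace_type == "Managed":
        return "managed"    # collect credentials and POST them to Azure AD directly
    if namespace_type == "Federated":
        return "federated"  # redirect to the on-premises STS sign-in page (AuthURL)
    return "unknown"

# Hypothetical realm objects for illustration
managed_realm = {"NameSpaceType": "Managed", "Login": "user@contoso.com"}
federated_realm = {"NameSpaceType": "Federated", "AuthURL": "https://sts.contoso.com/adfs/ls/"}
```

In the managed flow above, the application proceeds to collect credentials itself; the federated flow in the next section branches at exactly this point.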
+
+## Azure AD joined in Federated environments
++
+| Phase | Description |
+| :-: | :-- |
+| A | The most common way Azure AD joined devices register is during the out-of-box-experience (OOBE), where the device loads the Azure AD join web application in the Cloud Experience Host (CXH) application. The application sends a GET request to the Azure AD OpenID configuration endpoint to discover authorization endpoints. Azure AD returns the OpenID configuration, which includes the authorization endpoints, to the application as a JSON document. |
+| B | The application builds a sign-in request for the authorization end point and collects user credentials. |
+| C | After the user provides their user name (in UPN format), the application sends a GET request to Azure AD to discover corresponding realm information for the user. This information determines if the environment is managed or federated. Azure AD returns the information in a JSON object. The application determines the environment is federated.<br><br>The application redirects to the AuthURL value (on-premises STS sign-in page) in the returned JSON realm object. The application collects credentials through the STS web page. |
+| D | The application POSTs the credentials to the on-premises STS, which may require extra factors of authentication. The on-premises STS authenticates the user and returns a token. The application POSTs the token to Azure AD for authentication. Azure AD validates the token and returns an ID token with claims. |
+| E | The application looks for MDM terms of use (the mdm_tou_url claim). If present, the application retrieves the terms of use from the claim's value, presents the contents to the user, and waits for the user to accept the terms of use. This step is optional and skipped if the claim isn't present or if the claim value is empty. |
+| F | The application sends a device registration discovery request to the Azure Device Registration Service (ADRS). Azure DRS returns a discovery data document, which returns tenant-specific URIs to complete device registration. |
+| G | The application creates a TPM-bound (preferred) RSA 2048-bit key pair known as the device key (dkpub/dkpriv). The application creates a certificate request using dkpub and signs the certificate request using dkpriv. Next, the application derives a second key pair from the TPM's storage root key. This key is the transport key (tkpub/tkpriv). |
+| H | The application sends a device registration request to Azure DRS that includes the ID token, certificate request, tkpub, and attestation data. Azure DRS validates the ID token, creates a device ID, and creates a certificate based on the included certificate request. Azure DRS then writes a device object in Azure AD and sends the device ID and the device certificate to the client. |
+| I | Device registration completes by receiving the device ID and the device certificate from Azure DRS. The device ID is saved for future reference (viewable from `dsregcmd.exe /status`), and the device certificate is installed in the Personal store of the computer. With device registration complete, the process continues with MDM enrollment. |
+
+## Hybrid Azure AD joined in Managed environments
++
+| Phase | Description |
+| :-: | -- |
+| A | The user signs in to a domain joined Windows 10 computer using domain credentials. This credential can be user name and password or smart card authentication. The user sign-in triggers the Automatic Device Join task. The Automatic Device Join task is triggered on domain join and retried every hour. It doesn't solely depend on the user sign-in. |
+| B | The task queries Active Directory using the LDAP protocol for the keywords attribute on the service connection point stored in the configuration partition in Active Directory (`CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,CN=Configuration,DC=corp,DC=contoso,DC=com`). The value returned in the keywords attribute determines if device registration is directed to Azure Device Registration Service (ADRS) or the enterprise device registration service hosted on-premises. |
+| C | For the managed environment, the task creates an initial authentication credential in the form of a self-signed certificate. The task writes the certificate to the userCertificate attribute on the computer object in Active Directory using LDAP. |
+| D | The computer can't authenticate to Azure DRS until a device object representing the computer that includes the certificate on the userCertificate attribute is created in Azure AD. Azure AD Connect detects an attribute change. On the next synchronization cycle, Azure AD Connect sends the userCertificate, object GUID, and computer SID to Azure DRS. Azure DRS uses the attribute information to create a device object in Azure AD. |
+| E | The Automatic Device Join task triggers with each user sign-in or every hour, and tries to authenticate the computer to Azure AD using the corresponding private key of the public key in the userCertificate attribute. Azure AD authenticates the computer and issues an ID token to the computer. |
+| F | The task creates a TPM-bound (preferred) RSA 2048-bit key pair known as the device key (dkpub/dkpriv). The task creates a certificate request using dkpub and signs the certificate request using dkpriv. Next, the task derives a second key pair from the TPM's storage root key. This key is the transport key (tkpub/tkpriv). |
+| G | The task sends a device registration request to Azure DRS that includes the ID token, certificate request, tkpub, and attestation data. Azure DRS validates the ID token, creates a device ID, and creates a certificate based on the included certificate request. Azure DRS then updates the device object in Azure AD and sends the device ID and the device certificate to the client. |
+| H | Device registration completes by receiving the device ID and the device certificate from Azure DRS. The device ID is saved for future reference (viewable from `dsregcmd.exe /status`), and the device certificate is installed in the Personal store of the computer. With device registration complete, the task exits. |
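The service connection point lookup in phase B returns keyword strings that the task parses to decide where to register. As a sketch, splitting those values is straightforward; the `azureADName:`/`azureADId:` value format and the sample tenant ID are illustrative assumptions, not a documented contract.

```python
def parse_scp_keywords(keywords):
    """Split "name:value" keyword strings from the SCP into a dict."""
    parsed = {}
    for keyword in keywords:
        name, _, value = keyword.partition(":")
        parsed[name] = value
    return parsed

# Hypothetical values read from the service connection point in the configuration partition
scp = parse_scp_keywords([
    "azureADName:contoso.com",
    "azureADId:11111111-2222-3333-4444-555555555555",
])
```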
+
+## Hybrid Azure AD joined in Federated environments
++
+| Phase | Description |
+| :-: | :-- |
+| A | The user signs in to a domain joined Windows 10 computer using domain credentials. This credential can be user name and password or smart card authentication. The user sign-in triggers the Automatic Device Join task. The Automatic Device Join task is triggered on domain join and retried every hour. It doesn't solely depend on the user sign-in. |
+| B | The task queries Active Directory using the LDAP protocol for the keywords attribute on the service connection point stored in the configuration partition in Active Directory (`CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,CN=Configuration,DC=corp,DC=contoso,DC=com`). The value returned in the keywords attribute determines if device registration is directed to Azure Device Registration Service (ADRS) or the enterprise device registration service hosted on-premises. |
+| C | For federated environments, the computer authenticates to the enterprise device registration endpoint using Windows Integrated Authentication. The enterprise device registration service creates and returns a token that includes claims for the object GUID, computer SID, and domain joined state. The task submits the token and claims to Azure AD where they're validated. Azure AD returns an ID token to the running task. |
+| D | The application creates a TPM-bound (preferred) RSA 2048-bit key pair known as the device key (dkpub/dkpriv). The application creates a certificate request using dkpub and signs the certificate request using dkpriv. Next, the application derives a second key pair from the TPM's storage root key. This key is the transport key (tkpub/tkpriv). |
+| E | To provide SSO for on-premises federated applications, the task requests an enterprise PRT from the on-premises STS. Windows Server 2016 running the Active Directory Federation Services role validates the request and returns it to the running task. |
+| F | The task sends a device registration request to Azure DRS that includes the ID token, certificate request, tkpub, and attestation data. Azure DRS validates the ID token, creates a device ID, and creates a certificate based on the included certificate request. Azure DRS then writes a device object in Azure AD and sends the device ID and the device certificate to the client. Device registration completes by receiving the device ID and the device certificate from Azure DRS. The device ID is saved for future reference (viewable from `dsregcmd.exe /status`), and the device certificate is installed in the Personal store of the computer. With device registration complete, the task exits. |
+| G | If Azure AD Connect device writeback is enabled, Azure AD Connect requests updates from Azure AD at its next synchronization cycle (device writeback is required for hybrid deployment using certificate trust). Azure AD correlates the device object with a matching synchronized computer object. Azure AD Connect receives the device object that includes the object GUID and computer SID and writes the device object to Active Directory. |
+
+## Next steps
+
+- [Azure AD joined devices](concept-azure-ad-join.md)
+- [Azure AD registered devices](concept-azure-ad-register.md)
+- [Hybrid Azure AD joined devices](concept-azure-ad-join-hybrid.md)
+- [What is a Primary Refresh Token?](concept-primary-refresh-token.md)
+- [Azure AD Connect: Device options](../hybrid/how-to-connect-device-options.md)
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Conditional Access is an essential tool for improving the security posture of yo
- Have a small set of core policies that can apply to multiple applications
- Define empty exception groups and add them to the policies to have an exception strategy
- Plan for [break glass](../roles/security-planning.md#break-glass-what-to-do-in-an-emergency) accounts without MFA controls
-- Ensure a consistent experience across Microsoft 365 client applications, for example, Teams, OneDrive, Outlook, etc.) by implementing the same set of controls for services such as Exchange Online and Sharepoint Online
+- Ensure a consistent experience across Microsoft 365 client applications, for example, Teams, OneDrive, Outlook, etc.) by implementing the same set of controls for services such as Exchange Online and SharePoint Online
- Assignment to policies should be implemented through groups, not individuals
- Do regular reviews of the exception groups used in policies to limit the time users are out of the security posture. If you own Azure AD P2, then you can use access reviews to automate the process
Below are the user and group settings that can be locked down if there isn't an
#### User settings

-- **External Users** - external collaboration can happen organically in the enterprise with services like Teams, Power BI, Sharepoint Online, and Azure Information Protection. If you have explicit constraints to control user-initiated external collaboration, it is recommended you enable external users by using [Azure AD Entitlement management](../governance/entitlement-management-overview.md) or a controlled operation such as through your help desk. If you don't want to allow organic external collaboration for services, you can [block members from inviting external users completely](../external-identities/delegate-invitations.md). Alternatively, you can also [allow or block specific domains](../external-identities/allow-deny-list.md) in external user invitations.
+- **External Users** - external collaboration can happen organically in the enterprise with services like Teams, Power BI, SharePoint Online, and Azure Information Protection. If you have explicit constraints to control user-initiated external collaboration, it is recommended you enable external users by using [Azure AD Entitlement management](../governance/entitlement-management-overview.md) or a controlled operation such as through your help desk. If you don't want to allow organic external collaboration for services, you can [block members from inviting external users completely](../external-identities/delegate-invitations.md). Alternatively, you can also [allow or block specific domains](../external-identities/allow-deny-list.md) in external user invitations.
- **App Registrations** - when App registrations are enabled, end users can onboard applications themselves and grant access to their data. A typical example of App registration is users enabling Outlook plug-ins, or voice assistants such as Alexa and Siri to read their email and calendar or send emails on their behalf. If the customer decides to turn off App registration, the InfoSec and IAM teams must be involved in the management of exceptions (app registrations that are needed based on business requirements), as they would need to register the applications with an admin account, and most likely require designing a process to operationalize the process.
- **Administration Portal** - organizations can lock down the Azure AD blade in the Azure portal so that non-administrators can't access Azure AD management in the Azure portal and get confused. Go to the user settings in the Azure AD management portal to restrict access:
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/deploy-access-reviews.md
External identities can be granted access to company resources through one of th
* Assigned a privileged role in Azure AD or in an Azure subscription
-See [sample script](https://github.com/microsoft/access-reviews-samples/tree/master/ExternalIdentityUse). The script will show where external identities invited into the tenant are used. You can see external userΓÇÖs group membership, role assignments, and application assignments in Azure AD. The script won't show any assignments outside of Azure AD, for example direct rights assignment to Sharepoint resources, without the use of groups.
+See [sample script](https://github.com/microsoft/access-reviews-samples/tree/master/ExternalIdentityUse). The script will show where external identities invited into the tenant are used. You can see external userΓÇÖs group membership, role assignments, and application assignments in Azure AD. The script won't show any assignments outside of Azure AD, for example direct rights assignment to SharePoint resources, without the use of groups.
When creating an Access Review for groups or applications, you can choose to let the reviewer focus on Everyone with access, or Guest users only. By selecting Guest users only, reviewers are given a focused list of external identities from Azure AD B2B that have access to the resource.
active-directory How To Connect Install Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-automatic-upgrade.md
ms.devlang: na
na Previously updated : 06/09/2020 Last updated : 08/11/2021
Automatic upgrade is using Azure AD Connect Health for the upgrade infrastructur
If the **Synchronization Service Manager** UI is running on the server, then the upgrade is suspended until the UI is closed.
+>[!NOTE]
+> Not all releases of Azure AD Connect are made available for auto upgrade. The release status indicates whether a release is available for auto upgrade or for download only. If auto upgrade was enabled on your Azure AD Connect server, that server will automatically upgrade to the latest version of Azure AD Connect released for auto upgrade, provided **your configuration is [eligible](#auto-upgrade-eligibility)** for auto upgrade. For more information, see the article [Azure AD Connect: Version release history](reference-connect-version-history.md).
+
+## Auto-upgrade eligibility
+To be eligible for an automatic upgrade, your configuration must not meet any of the following conditions:
+
+| Result Message | Description |
+| | |
+|UpgradeNotSupportedCustomizedSyncRules|You have added your own custom rules to the configuration.|
+|UpgradeNotSupportedInvalidPersistedState|The installation is not an Express settings or a DirSync upgrade.|
+|UpgradeNotSupportedNonLocalDbInstall|You are not using a SQL Server Express LocalDB database.|
+|UpgradeNotSupportedLocalDbSizeExceeded|Local DB size is greater than or equal to 8 GB.|
+|UpgradeNotSupportedAADHealthUploadDisabled|Health data uploads have been disabled from the portal.|
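The eligibility check above can be sketched as a simple lookup. This is a hypothetical illustration, not part of the Azure AD Connect tooling; only the result-message strings come from the table.

```python
# Illustrative sketch: classify an Azure AD Connect auto-upgrade result message.
# The message strings are the ones listed in the eligibility table above; the
# helper name and structure are invented for illustration.

BLOCKING_RESULTS = {
    "UpgradeNotSupportedCustomizedSyncRules": "You have added your own custom rules to the configuration.",
    "UpgradeNotSupportedInvalidPersistedState": "The installation is not an Express settings or a DirSync upgrade.",
    "UpgradeNotSupportedNonLocalDbInstall": "You are not using a SQL Server Express LocalDB database.",
    "UpgradeNotSupportedLocalDbSizeExceeded": "Local DB size is greater than or equal to 8 GB.",
    "UpgradeNotSupportedAADHealthUploadDisabled": "Health data uploads have been disabled from the portal.",
}

def is_auto_upgrade_eligible(result_message: str) -> bool:
    """A server is eligible only when its result message is not a blocking one."""
    return result_message not in BLOCKING_RESULTS

print(is_auto_upgrade_eligible("UpgradeNotSupportedLocalDbSizeExceeded"))  # False
```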
+
+## Troubleshooting
If your Connect installation does not upgrade itself as expected, then follow these steps to find out what could be wrong.
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-anyconnect.md
Previously updated : 09/09/2020 Last updated : 08/11/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Cisco AnyConnect supports **IDP** initiated SSO
+* Cisco AnyConnect supports **IDP** initiated SSO.
## Adding Cisco AnyConnect from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, enter the values for the following fields (note that the values are case-sensitive): 1. In the **Identifier** text box, type a URL using the following pattern:
- `https://*.YourCiscoServer.com/saml/sp/metadata/TGTGroup`
+ `https://<SUBDOMAIN>.YourCiscoServer.com/saml/sp/metadata/<Tunnel_Group_Name>`
1. In the **Reply URL** text box, type a URL using the following pattern:
- `https://YOUR_CISCO_ANYCONNECT_FQDN/+CSCOE+/saml/sp/acs?tgname=TGTGroup`
+ `https://<YOUR_CISCO_ANYCONNECT_FQDN>/+CSCOE+/saml/sp/acs?tgname=<Tunnel_Group_Name>`
> [!NOTE] > For clarification about these values, contact Cisco TAC support. Update these values with the actual Identifier and Reply URL provided by Cisco TAC. Contact the [Cisco AnyConnect Client support team](https://www.cisco.com/c/en/us/support/https://docsupdatetracker.net/index.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
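As a hedged illustration of how the two URL patterns above fit together, the sketch below composes them from an AnyConnect FQDN and tunnel group name. The helper name and sample values are invented placeholders; use the actual Identifier and Reply URL provided by Cisco TAC.

```python
# Illustrative only: compose the Identifier and Reply URL patterns shown above.
# "vpn.contoso.com" and "RemoteAccessGroup" are placeholder values.

def anyconnect_saml_urls(fqdn: str, tunnel_group: str) -> dict:
    return {
        # Identifier pattern: https://<SUBDOMAIN>.YourCiscoServer.com/saml/sp/metadata/<Tunnel_Group_Name>
        "identifier": f"https://{fqdn}/saml/sp/metadata/{tunnel_group}",
        # Reply URL pattern: https://<YOUR_CISCO_ANYCONNECT_FQDN>/+CSCOE+/saml/sp/acs?tgname=<Tunnel_Group_Name>
        "reply_url": f"https://{fqdn}/+CSCOE+/saml/sp/acs?tgname={tunnel_group}",
    }

urls = anyconnect_saml_urls("vpn.contoso.com", "RemoteAccessGroup")
print(urls["identifier"])
print(urls["reply_url"])
```

Note that each tunnel group yields a distinct Identifier/Reply URL pair, which is why onboarding multiple tunnel groups requires multiple application instances.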
Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png) > [!NOTE]
-> If you would like to on board multiple TGTs of the server then you need to add multiple instance of the Cisco AnyConnect application from the gallery. Also you can choose to upload your own certificate in Azure AD for all these application instances. That way you can have same certificate for the applications but you can configure different Identifier and Reply URL for every application.
+> If you would like to onboard multiple TGTs of the server, you need to add multiple instances of the Cisco AnyConnect application from the gallery. You can also choose to upload your own certificate in Azure AD for all these application instances. That way you can have the same certificate for the applications but configure a different Identifier and Reply URL for every application.
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
``` > [!NOTE]
- > There is a feature with the SAML IdP configuration - If you make changes to the IdP config you need to remove the saml identity-provider config from your Tunnel Group and re-apply it for the changes to become effective.
+ > There is a workaround for the SAML IdP configuration: if you make changes to the IdP configuration, you need to remove the saml identity-provider configuration from your Tunnel Group and re-apply it for the changes to take effect.
### Create Cisco AnyConnect test user
active-directory Freshservice Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/freshservice-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Freshservice Provisioning for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Freshservice Provisioning.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: e03ec65a-25ef-4c91-a364-36b2f007443c
+++
+ na
+ms.devlang: na
+ Last updated : 08/09/2021+++
+# Tutorial: Configure Freshservice Provisioning for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Freshservice Provisioning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Freshservice Provisioning](https://effy.co.in/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Freshservice Provisioning
+> * Remove users in Freshservice Provisioning when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Freshservice Provisioning
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [Freshservice account](https://www.freshservice.com) with the Organizational Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Freshservice Provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Freshservice Provisioning to support provisioning with Azure AD
+
+1. On your Freshservice account, install the **Azure Provisioning (SCIM)** app from the marketplace by navigating to **Freshservice Admin** > **Apps** > **Get Apps**.
+2. In the configuration screen, provide your **Freshservice Domain** (for example, `acme.freshservice.com`) and the **Organization Admin API key**.
+3. Click **Continue**.
+4. Highlight and copy the **Bearer Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Freshservice Provisioning application in the Azure portal.
+5. Click **Install** to complete the installation.
+6. The **Tenant URL** is `https://scim.freshservice.com/scim/v2`. This value will be entered in the **Tenant URL** field in the Provisioning tab of your Freshservice Provisioning application in the Azure portal.
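To make the relationship between the Tenant URL and the Bearer Token concrete, here is a minimal sketch of how a SCIM client would authenticate to that endpoint. This is illustrative only — the Azure AD provisioning service sends these requests for you once configured, and the helper function name is invented.

```python
# Sketch (not an official sample): a SCIM client authenticating to the
# Freshservice SCIM endpoint with the Bearer Token copied in step 4.
# The token value below is a placeholder.

SCIM_TENANT_URL = "https://scim.freshservice.com/scim/v2"

def scim_request_headers(bearer_token: str) -> dict:
    return {
        "Authorization": f"Bearer {bearer_token}",
        "Content-Type": "application/scim+json",
    }

headers = scim_request_headers("<paste-your-bearer-token-here>")
print(f"GET {SCIM_TENANT_URL}/Users")
```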
+
+## Step 3. Add Freshservice Provisioning from the Azure AD application gallery
+
+Add Freshservice Provisioning from the Azure AD application gallery to start managing provisioning to Freshservice Provisioning. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users to Freshservice Provisioning, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Freshservice Provisioning
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Freshservice Provisioning based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Freshservice Provisioning in Azure AD
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Freshservice Provisioning**.
+
+ ![The Freshservice Provisioning link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Freshservice Provisioning Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Freshservice Provisioning. If the connection fails, ensure your Freshservice Provisioning account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Freshservice Provisioning**.
+
+9. Review the user attributes that are synchronized from Azure AD to Freshservice Provisioning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Freshservice Provisioning for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Freshservice Provisioning API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |userName|String|&check;|
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |displayName|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |phoneNumbers[type eq "mobile"].value|String|
+ |addresses[type eq "work"].formatted|String|
+ |locale|String|
+ |title|String|
+ |timezone|String|
+ |externalId|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
+ |urn:ietf:params:scim:schemas:extension:freshservice:2.0:User:isAgent|String|
+
+> [!NOTE]
+> Custom extension attributes can be added to your schema to meet your application's needs by following these steps:
+> * Under Mappings, select **Provision Azure Active Directory Users**.
+> * At the bottom of the page, select **Show advanced options**.
+> * Select **Edit attribute list for Freshservice**.
+> * At the bottom of the attribute list, enter information about the custom attribute in the fields provided. The custom attribute urn namespace must follow the pattern shown in the following example. The **CustomAttribute** portion can be customized per your application's requirements, for example: urn:ietf:params:scim:schemas:extension:freshservice:2.0:User:**isAgent**.
+> * Select the appropriate data type for the custom attribute and click **Save**.
+> * Navigate back to the default mappings screen and click **Add New Mapping**. The custom attributes will show up in the **Target Attribute** list dropdown.
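For reference, the attribute mappings above correspond to a SCIM user resource shaped roughly like the following sketch. The field values are invented examples; only the attribute names and schema URNs come from the attribute table.

```python
# Illustrative SCIM user resource implied by the attribute-mapping table above.
# All values are made-up examples.
import json

user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
        "urn:ietf:params:scim:schemas:extension:freshservice:2.0:User",
    ],
    "userName": "b.simon@contoso.com",  # the matching attribute
    "active": True,
    "displayName": "B. Simon",
    "name": {"givenName": "B", "familyName": "Simon"},
    "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "department": "IT",
    },
    "urn:ietf:params:scim:schemas:extension:freshservice:2.0:User": {
        "isAgent": "true",  # typed as String in the mapping table
    },
}

print(json.dumps(user, indent=2))
```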
+
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for Freshservice Provisioning, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users that you would like to provision to Freshservice Provisioning by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Rhombus Systems Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/rhombus-systems-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Rhombus Systems | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Rhombus Systems.
++++++++ Last updated : 08/13/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Rhombus Systems
+
+In this tutorial, you'll learn how to integrate Rhombus Systems with Azure Active Directory (Azure AD). When you integrate Rhombus Systems with Azure AD, you can:
+
+* Control in Azure AD who has access to Rhombus Systems.
+* Enable your users to be automatically signed-in to Rhombus Systems with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Rhombus Systems single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Rhombus Systems supports **SP and IDP** initiated SSO.
+* Rhombus Systems supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Rhombus Systems from the gallery
+
+To configure the integration of Rhombus Systems into Azure AD, you need to add Rhombus Systems from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Rhombus Systems** in the search box.
+1. Select **Rhombus Systems** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Rhombus Systems
+
+Configure and test Azure AD SSO with Rhombus Systems using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Rhombus Systems.
+
+To configure and test Azure AD SSO with Rhombus Systems, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Rhombus Systems SSO](#configure-rhombus-systems-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Rhombus Systems test user](#create-rhombus-systems-test-user)** - to have a counterpart of B.Simon in Rhombus Systems that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Rhombus Systems** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you have a **Service Provider metadata file** and wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Upload metadata file](common/upload-metadata.png)
+
+ b. Click the **folder logo** to select the metadata file and click **Upload**.
+
+ ![choose metadata file](common/browse-upload-metadata.png)
+
+ c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values are automatically populated in the **Basic SAML Configuration** section.
+
+ > [!Note]
+ > If the **Identifier** and **Reply URL** values are not automatically populated, fill in the values manually according to your requirements.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://console.rhombussystems.com/login/`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Rhombus Systems** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Rhombus Systems.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Rhombus Systems**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Rhombus Systems SSO
+
+1. Log in to your Rhombus Systems company site as an administrator.
+
+1. Go to the **Settings** icon and click **Single Sign-On**.
+
+1. On the **Single Sign-On** page, perform the following steps.
+
+ ![Screenshot shows settings of SSO configuration.](./media/rhombus-systems-tutorial/settings.png "Account")
+
+ 1. Enable the **Use Single Sign-On** button.
+
+ 1. Enable the **Just-In-Time User Creation** button.
+
+ 1. Enter a valid **Team name** in the textbox.
+
+ 1. Select users from the **SSO Recovery Users** dropdown.
+
+ 1. Download the **SP Metadata** file and upload it into the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Copy the **Federation Metadata XML** from the Azure portal into Notepad and paste the content into the **IDP MetaData XML** textbox.
+
+ 1. Click **Save**.
+
+### Create Rhombus Systems test user
+
+In this section, a user called Britta Simon is created in Rhombus Systems. Rhombus Systems supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Rhombus Systems, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Rhombus Systems Sign-on URL, where you can initiate the login flow.
+
+* Go to the Rhombus Systems Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Rhombus Systems instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Rhombus Systems tile in My Apps, if the application is configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you are automatically signed in to the Rhombus Systems instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Rhombus Systems, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory User Help Auth App Add Work School Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-add-work-school-account.md
Previously updated : 07/26/2021 Last updated : 08/09/2021
To add an account by scanning a QR Code, do the following:
1. Open the Microsoft Authenticator app, select the plus icon ![Select the plus icon on either iOS or Android devices](media/user-help-auth-app-add-work-school-account/plus-icon.png) and select **Add account**, and then select **Work or school account,** followed by **Scan a QR Code**. If you don't have an account set up in the Authenticator app, you'll see a large blue button that says **Add account**.
-If you aren't prompted to use your camera to scan a QR Code, in your phone's settings, ensure that the Authenticator app has access to the phone camera.
+If you aren't prompted to use your camera to scan a QR Code, in your phone's settings, ensure that the Authenticator app has access to the phone camera. After you add your account using a QR code, you can set up phone sign-in. If you receive the message "You might be signing in from a location that is restricted by your admin," your admin hasn't enabled this feature for you and has probably set up a Security Information Registration Conditional Access policy. Contact the administrator for your work or school account to use this authentication method. If your admin *does* allow you to use phone sign-in with the Authenticator app, you'll be able to go through device registration to get set up for passwordless phone sign-in and Azure AD Multi-Factor Authentication.
+
+>[!Note]
+> For US government organizations, the only way that you can add a phone sign-in account is by adding it using the [Sign in with your credentials](#sign-in-with-your-credentials) option, instead of upgrading from a QR-code based account.
## Sign in on a remote computer
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-service-principal.md
In the following Azure PowerShell example, a service principal is not specified.
New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup ``` -
+> [!NOTE]
+> For error "Service principal clientID: 00000000-0000-0000-0000-000000000000 not found in Active Directory tenant 00000000-0000-0000-0000-000000000000", see [Additional considerations](#additional-considerations) to remove the `acsServicePrincipal.json` file.
+ ## Manually create a service principal ### [Azure CLI](#tab/azure-cli)
When using AKS and Azure AD service principals, keep the following consideration
- Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. - When you specify the service principal **Client ID**, use the value of the `ApplicationId`. - On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json`-- When you use the [New-AzAksCluster][new-azakscluster] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/aksServicePrincipal.json` on the machine used to run the command.-- If you do not specifically pass a service principal in additional AKS PowerShell commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used.-- You can also optionally remove the aksServicePrincipal.json file, and AKS will create a new service principal.
+- When you use the [New-AzAksCluster][new-azakscluster] command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/acsServicePrincipal.json` on the machine used to run the command.
+- If you do not specifically pass a service principal in additional AKS PowerShell commands, the default service principal located at `~/.azure/acsServicePrincipal.json` is used.
+- You can also optionally remove the acsServicePrincipal.json file, and AKS will create a new service principal.
- When you delete an AKS cluster that was created by [New-AzAksCluster][new-azakscluster], the service principal that was created automatically is not deleted. - To delete the service principal, query for your cluster *ServicePrincipalProfile.ClientId* and then delete with [Remove-AzADServicePrincipal][remove-azadserviceprincipal]. Replace the following resource group and cluster names with your own values:
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/managed-aad.md
There are some non-interactive scenarios, such as continuous integration pipelin
## Disable local accounts (preview)
-When deploying an AKS Cluster, local accounts are enabled by default. Even when enabling RBAC or Azure Active Directory integration, `--admin` access still exists, essentially as a non-auditable backdoor option. With this in mind, AKS offers users the ability to disable local accounts via a flag, `disable-local`. A field, `properties.disableLocalAccounts`, has also been added to the managed cluster API to indicate whether the feature has been enabled on the cluster.
+When deploying an AKS Cluster, local accounts are enabled by default. Even when enabling RBAC or Azure Active Directory integration, `--admin` access still exists, essentially as a non-auditable backdoor option. With this in mind, AKS offers users the ability to disable local accounts via a flag, `disable-local-accounts`. A field, `properties.disableLocalAccounts`, has also been added to the managed cluster API to indicate whether the feature has been enabled on the cluster.
> [!NOTE] > On clusters with Azure AD integration enabled, users belonging to a group specified by `aad-admin-group-object-ids` will still be able to gain access via non-admin credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to true, obtaining both user and admin credentials will fail.
az provider register --namespace Microsoft.ContainerService
### Create a new cluster without local accounts
-To create a new AKS cluster without any local accounts, use the [az aks create][az-aks-create] command with the `disable-local` flag:
+To create a new AKS cluster without any local accounts, use the [az aks create][az-aks-create] command with the `disable-local-accounts` flag:
```azurecli-interactive
-az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local
+az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts
``` In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to true:
Operation failed with status: 'Bad Request'. Details: Getting static credential
### Disable local accounts on an existing cluster
-To disable local accounts on an existing AKS cluster, use the [az aks update][az-aks-update] command with the `disable-local` flag:
+To disable local accounts on an existing AKS cluster, use the [az aks update][az-aks-update] command with the `disable-local-accounts` flag:
```azurecli-interactive
-az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local
+az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts
``` In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to true:
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/monitor-aks-reference.md
For more information on the schema of Activity Log entries, see [Activity Log s
## See also - See [Monitoring Azure AKS](monitor-aks.md) for a description of monitoring Azure AKS.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/monitor-aks.md
The logs for AKS control plane components are implemented in Azure as [resource
You need to create a diagnostic setting to collect resource logs. Create multiple diagnostic settings to send different sets of logs to different locations. See [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md) to create diagnostic settings for your AKS cluster.
-There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](/monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS and [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
+There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS and [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
If you're unsure about which resource logs to initially enable, use the recommendations in the following table which are based on the most common customer requirements. Enable the other categories if you later find that you require this information.
Use **Node** workbooks in Container Insights to analyze disk capacity and IO in
:::image type="content" source="media/monitor-aks/container-insights-node-workbooks.png" alt-text="Container insights node workbooks" lightbox="media/monitor-aks/container-insights-node-workbooks.png":::
-For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet but you can `kubectl debug` to SSH to the AKS nodes. See [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](/ssh.md) for details on this process.
+For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet, but you can use `kubectl debug` to SSH to the AKS nodes. See [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](ssh.md) for details on this process.
Azure Monitor and container insights don't yet provide full monitoring for the A
:::image type="content" source="media/monitor-aks/grafana-api-server.png" alt-text="Grafana API server" lightbox="media/monitor-aks/grafana-api-server.png":::
-Use the **Kubelet** workbook to view the health and performance of each kubelet. See [Resource Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#resource-monitoring-workbooks) for details on this workbooks. For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](/kubelet-logs.md).
+Use the **Kubelet** workbook to view the health and performance of each kubelet. See [Resource Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#resource-monitoring-workbooks) for details on these workbooks. For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](kubelet-logs.md).
:::image type="content" source="media/monitor-aks/container-insights-kubelet-workbook.png" alt-text="Container insights kubelet workbook" lightbox="media/monitor-aks/container-insights-kubelet-workbook.png":::
Monitor external components such as Service Mesh, Ingress, Egress with Prometheu
## Analyze metric data with metrics explorer Use metrics explorer when you want to perform custom analysis of metric data collected for your containers. Metrics explorer allows you to plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. Create a metrics alert to proactively notify you when a metric value crosses a threshold, and pin charts to dashboards for use by different members of your organization.
-See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this feature. For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](/monitor-aks-reference.md#metrics). When Container insights is enabled for a cluster, [addition metric values](../azure-monitor/containers/container-insights-update-metrics.md) are available.
+See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this feature. For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](monitor-aks-reference.md#metrics). When Container insights is enabled for a cluster, [additional metric values](../azure-monitor/containers/container-insights-update-metrics.md) are available.
:::image type="content" source="media/monitor-aks/metrics-explorer.png" alt-text="Metrics explorer" lightbox="media/monitor-aks/metrics-explorer.png":::
Use Log Analytics when you want to analyze resource logs or dig deeper into the
See [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md) for details on using log queries to analyze data collected by Container insights. See [Using queries in Azure Monitor Log Analytics](../azure-monitor/logs/queries.md) for information on using these queries and [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
-For a list of the tables collected for AKS that you can analyze in metrics explorer, see [Monitoring AKS data reference logs](/monitor-aks-reference.md#azure-monitor-logs-tables).
+For a list of the tables collected for AKS that you can analyze in Log Analytics, see [Monitoring AKS data reference logs](monitor-aks-reference.md#azure-monitor-logs-tables).
:::image type="content" source="media/monitor-aks/log-analytics-queries.png" alt-text="Log Analytics queries for Kubernetes" lightbox="media/monitor-aks/log-analytics-queries.png":::
-In addition to Container insights data, you can use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](/monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before that data will be collected.
+In addition to Container insights data, you can use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before that data will be collected.
For those conditions where Azure Monitor either doesn't have the data required f
## Next steps - See [Monitoring AKS data reference](monitor-aks-reference.md) for a reference of the metrics, logs, and other important values created by AKS.-
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
az aks create -g myResourceGroup -n myAKSCluster --enable-pod-identity --network
> * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): is a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](../virtual-machines/linux/instance-metadata-service.md?tabs=linux) on each node, redirect them to itself and validates if the pod has access to the identity it's requesting a token for and fetch the token from the Azure Active Directory tenant on behalf of the application. > 2. Managed Mode: In this mode, there is only NMI. The identity needs to be manually assigned and managed by the user. For more information, see [Pod Identity in Managed Mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/). >
->When you install the Azure Active Directory Pod Identity via Helm chart or YAML manifest as shown in the [Installation Guide](https://azure.github.io/aad-podidentity/docs/getting-started/installation/), you can choose between the `standard` and `managed` mode. If you instead decide to install the Azure Active Directory Pod Identity using the [AKS cluster add-on]() as shown in this article, the setup will use the `managed` mode.
+>When you install the Azure Active Directory Pod Identity via Helm chart or YAML manifest as shown in the [Installation Guide](https://azure.github.io/aad-pod-identity/docs/getting-started/installation/), you can choose between the `standard` and `managed` mode. If you instead decide to install the Azure Active Directory Pod Identity using the [AKS cluster add-on](/azure/aks/use-azure-ad-pod-identity) as shown in this article, the setup will use the `managed` mode.
Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your development computer.
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
It takes a few minutes for the *gpunodepool* to be successfully created.
When creating a node pool, you can add taints, labels, or tags to that node pool. When you add a taint, label, or tag, all nodes within that node pool also get that taint, label, or tag. > [!IMPORTANT]
-> Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. Applying taints, lablels, or tags to individual nodes in a node pool using `kubectl` is not recommended.
+> Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. Applying taints, labels, or tags to individual nodes in a node pool using `kubectl` is not recommended.
### Setting nodepool taints
api-management Api Management Howto Setup Delegation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-setup-delegation.md
na Previously updated : 10/15/2020 Last updated : 08/13/2021 # How to delegate user registration and product subscription
-Delegation allows you to use your existing website for handling developer sign in/sign up and subscription to products, as opposed to using the built-in functionality in the developer portal. It enables your website to own the user data and perform the validation of these steps in a custom way.
+Delegation enables your website to own the user data and perform custom validation. With delegation, you can handle developer sign-in/sign-up and product subscription using your existing website, instead of the developer portal's built-in functionality.
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)] ## <a name="delegate-signin-up"> </a>Delegating developer sign-in and sign-up
-To delegate developer, sign in and sign up to your existing website, you'll need to create a special delegation endpoint on your site. It needs to act as the entry-point for any such request initiated from the API Management developer portal.
+To delegate developer sign-in and sign-up to your existing website, create a special delegation endpoint on your site. This endpoint acts as the entry point for any sign-in/sign-up request initiated from the API Management developer portal.
-The final workflow will be as follows:
+The final workflow will be:
-1. Developer clicks on the sign in or sign up link at the API Management developer portal
-2. Browser is redirected to the delegation endpoint
-3. Delegation endpoint in return redirects to or presents UI asking user to sign in or sign up
-4. On success, the user is redirected back to the API Management developer portal page they started from
+1. Developer clicks on the sign-in or sign-up link at the API Management developer portal.
+2. Browser redirects to the delegation endpoint.
+3. Delegation endpoint redirects the user to, or presents the user with, a sign-in/sign-up UI.
+4. Upon successful sign-in/sign-up, user is redirected back to the API Management developer portal at the location they left.
-To begin, let's first set-up API Management to route requests via your delegation endpoint. In the Azure portal, search for **Security** in your API Management resource and then click the **Delegation** item. Click the checkbox to enable 'Delegate sign in & sign up'.
+### Set up API Management to route requests via delegation endpoint
-![Delegation page][api-management-delegation-signin-up]
+1. In the Azure portal, search for **Developer portal** in your API Management resource.
+2. Click the **Delegation** item.
+3. Click the checkbox to enable **Delegate sign-in & sign-up**.
-* Decide what the URL of your special delegation endpoint will be and enter it in the **Delegation endpoint URL** field.
-* Within the Delegation authentication key field, enter a secret that will be used to compute a signature provided to you for verification to ensure that the request is indeed coming from Azure API Management. You can click the **generate** button to have API Management randomly generate a key for you.
-Now you need to create the **delegation endpoint**. It has to perform a number of actions:
+4. Decide your special delegation endpoint's URL and enter it in the **Delegation endpoint URL** field.
+5. Within the **Delegation Validation Key** field, either:
+ * Enter a secret that will be used to compute a signature, letting you verify that the request originates from API Management.
+ * Click the **Generate** button for API Management to generate a random key for you.
+6. Click **Save**.
+
+### Create your delegation endpoint
+
+>[!NOTE]
+> While the following procedure provides examples of the **SignIn** operation, you can perform account management using any of the available operations with the steps below.
+
+Recommended steps for creating a new delegation endpoint to implement on your site:
1. Receive a request in the following form: > *http:\//www.yourwebsite.com/apimdelegation?operation=SignIn&returnUrl={URL of source page}&salt={string}&sig={string}*
- Query parameters for the sign in / sign up case:
+ Query parameters for the sign-in/sign-up case:
+
+ | Parameter | Description |
+ | | -- |
+ | **operation** | Identifies the delegation request type. Available operations: **SignIn**, **ChangePassword**, **ChangeProfile**, **CloseAccount**, and **SignOut**. |
+ | **returnUrl** | The URL of the page where the user clicked the sign-in or sign-up link. |
+ | **salt** | A special salt string used for computing a security hash. |
+ | **sig** | A computed security hash used for comparison to your own computed hash. |
- * **operation**: identifies what type of delegation request it is - it can only be **SignIn** in this case
- * **returnUrl**: the URL of the page where the user clicked on a sign in or sign up link
- * **salt**: a special salt string used for computing a security hash
- * **sig**: a computed security hash to be used for comparison to your own computed hash
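As an illustrative sketch (not part of the official samples), the incoming delegation request can be parsed in Python; the parameter names follow the table above, and the request URL and values here are hypothetical:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical delegation request sent by API Management.
request_url = ("http://www.yourwebsite.com/apimdelegation"
               "?operation=SignIn&returnUrl=%2F&salt=OFs2ZFcCqN&sig=abc123")

params = parse_qs(urlparse(request_url).query)
operation = params["operation"][0]   # "SignIn" for the sign-in/sign-up case
return_url = params["returnUrl"][0]  # page the user started from (URL-decoded)
salt = params["salt"][0]             # salt for computing the security hash
sig = params["sig"][0]               # signature to compare to your own hash
```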
-2. Verify that the request is coming from Azure API Management (optional, but highly recommended for security)
+3. Verify the request comes from Azure API Management (optional, but highly recommended for security).
- * Compute an HMAC-SHA512 hash of a string based on the **returnUrl** and **salt** query parameters ([example code provided below]):
+ * Compute an HMAC-SHA512 hash of a string based on the **returnUrl** and **salt** query parameters. For more details, check our [example code].
- > HMAC(**salt** + '\n' + **returnUrl**)
-
- * Compare the above-computed hash to the value of the **sig** query parameter. If the two hashes match, move on to the next step, otherwise deny the request.
-3. Verify that you are receiving a request for sign in/sign up: the **operation** query parameter will be set to "**SignIn**".
-4. Present the user with UI to sign in or sign up
-5. If the user is signing-up you have to create a corresponding account for them in API Management. [Create a user] with the API Management REST API. When doing so, ensure that you set the user ID to the same value as in your user store or to an ID that you can keep track of.
+ ```
+ HMAC(salt + '\n' + returnUrl)
+ ```
+ * Compare the above-computed hash to the value of the **sig** query parameter. If the two hashes match, move on to the next step. Otherwise, deny the request.
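The verification step can be sketched in Python as follows; this mirrors the official C# and NodeJS samples in the Example code section. It assumes the validation key is the base64-encoded value from the **Delegation** page, and all values shown are hypothetical:

```python
import base64
import hashlib
import hmac

def verify_signature(key_b64: str, salt: str, return_url: str, sig: str) -> bool:
    """Recompute HMAC-SHA512(salt + '\n' + returnUrl) and compare it to sig."""
    key = base64.b64decode(key_b64)                      # validation key is base64
    message = (salt + "\n" + return_url).encode("utf-8")
    digest = hmac.new(key, message, hashlib.sha512).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, sig)            # constant-time comparison

# Hypothetical values for illustration.
key_b64 = base64.b64encode(b"my-validation-key").decode()
salt, return_url = "OFs2ZFcCqN", "/signin"
good_sig = base64.b64encode(hmac.new(
    base64.b64decode(key_b64),
    (salt + "\n" + return_url).encode("utf-8"),
    hashlib.sha512).digest()).decode()
```

If `verify_signature` returns false, deny the request.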
+4. Verify you receive a request for sign-in/sign-up.
+ * The **operation** query parameter will be set to "**SignIn**".
+5. Present the user with sign-in/sign-up UI.
+ * If the user signs up, create a corresponding account for them in API Management.
+ * [Create a user] with the API Management REST API.
+ * Set the user ID to either the same value as in your user store or a new, easily tracked ID.
6. When the user is successfully authenticated:
- * [Request a shared access token] via the API Management REST API
- * Append a returnUrl query parameter to the SSO URL you have received from the API call above:
+ * [Request a shared access token] via the API Management REST API.
+ * Append a **returnUrl** query parameter to the SSO URL you received from the API call above. For example:
- > for example, `https://<developer portal domain, for example: contoso.developer.azure-api.net>/signin-sso?token=<URL-encoded token>&returnUrl=<URL-encoded URL, for example: %2Freturn%2Furl>`
+ > `https://contoso.developer.azure-api.net/signin-sso?token=<URL-encoded token>&returnUrl=%2Freturn%2Furl`
- * Redirect the user to the above produced URL
-
-In addition to the **SignIn** operation, you can also perform account management by following the previous steps and using one of the following operations:
+ * Redirect the user to the above-produced URL.
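The redirect URL from the final step can be assembled as in this Python sketch; the portal host and values are hypothetical, and both the token and the returnUrl must be URL-encoded:

```python
from urllib.parse import quote

def build_sso_url(portal_host: str, token: str, return_url: str) -> str:
    """Append URL-encoded token and returnUrl to the portal's SSO endpoint."""
    return (f"https://{portal_host}/signin-sso"
            f"?token={quote(token, safe='')}"
            f"&returnUrl={quote(return_url, safe='')}")

sso_url = build_sso_url("contoso.developer.azure-api.net",
                        "token&value", "/return/url")
```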
-* **ChangePassword**
-* **ChangeProfile**
-* **CloseAccount**
-* **SignOut**
+>[!NOTE]
+> For account management operations (**ChangePassword**, **ChangeProfile**, and **CloseAccount**), pass the following query parameters:
+>
+> | Parameter | Description |
+> | | -- |
+> | **operation** | Identifies the delegation request type. |
+> | **userId** | The user ID of the account you wish to manage. |
+> | **salt** | A special salt string used for computing a security hash. |
+> | **sig** | A computed security hash used for comparison to your own computed hash. |
-You must pass the following query parameters for account management operations.
+## <a name="delegate-product-subscription"> </a>Delegating product subscription
-* **operation**: identifies what type of delegation request it is (ChangePassword, ChangeProfile, or CloseAccount)
-* **userId**: the user ID of the account to manage
-* **salt**: a special salt string used for computing a security hash
-* **sig**: a computed security hash to be used for comparison to your own computed hash
+Delegating product subscriptions works similarly to delegating user sign-in/sign-up. The final workflow would be as follows:
-## <a name="delegate-product-subscription"> </a>Delegating product subscription
+1. Developer selects a product in the API Management developer portal and clicks on the **Subscribe** button.
+2. Browser redirects to the delegation endpoint.
+3. Delegation endpoint performs required product subscription steps, which you design. They may include:
+ * Redirecting to another page to request billing information.
+ * Asking additional questions.
+ * Storing the information and not requiring any user action.
-Delegating product subscription works similarly to delegating user sign in/-up. The final workflow would be as follows:
+### Enable the API Management functionality
-1. Developer selects a product in the API Management developer portal and clicks on the Subscribe button.
-2. Browser is redirected to the delegation endpoint.
-3. Delegation endpoint performs required product subscription steps. It's up to you to design the steps. They may include redirecting to another page to request billing information, asking additional questions, or simply storing the information and not requiring any user action.
+On the **Delegation** page, click **Delegate product subscription**.
-To enable the functionality, on the **Delegation** page click **Delegate product subscription**.
+### Create your delegation endpoint
-Next, ensure the delegation endpoint does the following actions:
+Recommended steps for creating a new delegation endpoint to implement on your site:
-1. Receive a request in the following form:
+1. Receive a request in the following form.
> *http:\//www.yourwebsite.com/apimdelegation?operation={operation}&productId={product to subscribe to}&userId={user making request}&salt={string}&sig={string}* > Query parameters for the product subscription case:
-
- * **operation**: identifies what type of delegation request it is. For product subscription requests the valid options are:
- * "Subscribe": a request to subscribe the user to a given product with provided ID (see below)
- * "Unsubscribe": a request to unsubscribe a user from a product
- * "Renew": a request to renew a subscription (for example, that may be expiring)
- * **productId**: on *Subscribe* - the ID of the product the user requested to subscribe to
- * **subscriptionId**: on *Unsubscribe* and *Renew* - the ID of the product subscription
- * **userId**: on *Subscribe* - the ID of the user the request is made for
- * **salt**: a special salt string used for computing a security hash
- * **sig**: a computed security hash to be used for comparison to your own computed hash
+
+ | Parameter | Description |
+ | | -- |
+ | **operation** | Identifies the delegation request type. Valid product subscription requests options are: <ul><li>**Subscribe**: a request to subscribe the user to a given product with provided ID (see below).</li><li>**Unsubscribe**: a request to unsubscribe a user from a product.</li><li>**Renew**: a request to renew a subscription (for example, that may be expiring)</li></ul> |
+ | **productId** | On *Subscribe*, the ID of the product the user requested to subscribe to. |
+ | **subscriptionId** | On *Unsubscribe* and *Renew*, the ID of the product subscription. |
+ | **userId** | On *Subscribe*, the requesting user's ID. |
+ | **salt** | A special salt string used for computing a security hash. |
+ | **sig** | A computed security hash used for comparison to your own computed hash. |
2. Verify that the request is coming from Azure API Management (optional, but highly recommended for security) * Compute an HMAC-SHA512 of a string based on the **productId**, **userId**, and **salt** query parameters:
-
- > HMAC(**salt** + '\n' + **productId** + '\n' + **userId**)
- >
- >
- * Compare the above-computed hash to the value of the **sig** query parameter. If the two hashes match, move on to the next step, otherwise deny the request.
-3. Process product subscription based on the type of operation requested in **operation** - for example, billing, further questions, etc.
-4. On successfully subscribing the user to the product on your side, subscribe the user to the API Management product by [calling the REST API for subscriptions].
+ ```
+ HMAC(salt + '\n' + productId + '\n' + userId)
+ ```
+ * Compare the above-computed hash to the value of the **sig** query parameter. If the two hashes match, move on to the next step. Otherwise, deny the request.
+3. Process the product subscription based on the operation type requested in **operation** (for example: billing, further questions, etc.).
+4. Upon successful user subscription to the product on your side, subscribe the user to the API Management product by [calling the REST API for subscriptions].
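The signature check in step 2 above is the sign-in check with the message changed to salt + '\n' + productId + '\n' + userId. A Python sketch, assuming the same base64-encoded validation key (all values hypothetical):

```python
import base64
import hashlib
import hmac

def verify_product_sig(key_b64, salt, product_id, user_id, sig):
    """HMAC-SHA512 over salt + '\n' + productId + '\n' + userId, compared to sig."""
    key = base64.b64decode(key_b64)
    message = "\n".join([salt, product_id, user_id]).encode("utf-8")
    expected = base64.b64encode(
        hmac.new(key, message, hashlib.sha512).digest()).decode()
    return hmac.compare_digest(expected, sig)

# Hypothetical values for illustration.
key_b64 = base64.b64encode(b"my-validation-key").decode()
salt, product_id, user_id = "OFs2ZFcCqN", "starter", "user-1"
good_sig = base64.b64encode(hmac.new(
    base64.b64decode(key_b64),
    "\n".join([salt, product_id, user_id]).encode("utf-8"),
    hashlib.sha512).digest()).decode()
```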
-## <a name="delegate-example-code"> </a> Example Code
+## <a name="delegate-example-code"> </a> Example code
These code samples show how to:
-* Take the *delegation validation key*, which is set in the Delegation screen of the publisher portal
-* Create an HMAC, which is then used to validate the signature, proving the validity of the passed returnUrl.
+* Take the *delegation validation key*, which is set in the **Delegation** screen of the publisher portal.
+* Create an HMAC, which is then used to validate the signature, proving the validity of the passed returnUrl.
-The same code works for the productId and userId with slight modification.
+With slight modification, you can use the same code for the **productId** and **userId**.
-**C# code to generate hash of returnUrl**
+### C# code to generate hash of returnUrl
```csharp using System.Security.Cryptography;
using (var encoder = new HMACSHA512(Convert.FromBase64String(key)))
} ```
-**NodeJS code to generate hash of returnUrl**
+### NodeJS code to generate hash of returnUrl
``` var crypto = require('crypto');
var signature = digest.toString('base64');
> You need to [republish the developer portal](api-management-howto-developer-portal-customize.md#publish) for the delegation changes to take effect. ## Next steps
-For more information on delegation, see the following video:
-
-> [!VIDEO https://channel9.msdn.com/Blogs/AzureApiMgmt/Delegating-User-Authentication-and-Product-Subscription-to-a-3rd-Party-Site/player]
->
->
+- [Learn more about the developer portal.](api-management-howto-developer-portal.md)
+- [Authenticate using Azure AD](api-management-howto-aad.md) or with [Azure AD B2C](api-management-howto-aad-b2c.md).
+- More developer portal questions? [Find answers in our FAQ](developer-portal-faq.md).
-[Delegating developer sign in and sign up]: #delegate-signin-up
+[Delegating developer sign-in and sign-up]: #delegate-signin-up
[Delegating product subscription]: #delegate-product-subscription [Request a shared access token]: /rest/api/apimanagement/2020-12-01/user/get-shared-access-token [create a user]: /rest/api/apimanagement/2020-12-01/user/create-or-update [calling the REST API for subscriptions]: /rest/api/apimanagement/2020-12-01/subscription/create-or-update [Next steps]: #next-steps
-[example code provided below]: #delegate-example-code
+[example code]: #delegate-example-code
[api-management-delegation-signin-up]: ./media/api-management-howto-setup-delegation/api-management-delegation-signin-up.png
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
To avoid having to set an imagePullSecret for every Pod, consider adding the ima
| AZURE_VOTE_IMAGE_REPO | The full path to the Azure Vote App repo, for example azurearctest.azurecr.io/azvote | | ENVIRONMENT_NAME | Dev | | MANIFESTS_BRANCH | `master` |
-| MANIFESTS_FOLDER | `azure-vote` |
-| MANIFESTS_REPO | `acr-cicd-demo-gitops` |
+| MANIFESTS_FOLDER | `azure-vote-manifests` |
+| MANIFESTS_REPO | `arc-cicd-demo-gitops` |
| ORGANIZATION_NAME | Name of Azure DevOps organization | | PROJECT_NAME | Name of GitOps project in Azure DevOps | | REPO_URL | Full URL for GitOps repo |
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/overview.md
Title: Azure Arc-enabled servers Overview description: Learn how to use Azure Arc-enabled servers to manage servers hosted outside of Azure like an Azure resource.
-keywords: azure automation, DSC, powershell, desired state configuration, update management, change tracking, inventory, runbooks, python, graphical, hybrid
Previously updated : 07/16/2021 Last updated : 08/12/2021 # What is Azure Arc-enabled servers?
-Azure Arc-enabled servers enables you to manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. This management experience is designed to be consistent with how you manage native Azure virtual machines. When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID, is included in a resource group, and benefits from standard Azure constructs such as Azure Policy and applying tags. Service providers who manage a customer's on-premises infrastructure can manage their hybrid machines, just like they do today with native Azure resources, across multiple customer environments, using [Azure Lighthouse](../../lighthouse/how-to/manage-hybrid-infrastructure-arc.md) with Azure Arc.
+Azure Arc-enabled servers enables you to manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. This management experience is designed to be consistent with how you manage native Azure virtual machines. When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group. Now you can benefit from standard Azure constructs, such as Azure Policy and applying tags. Service providers managing a customer's on-premises infrastructure can manage their hybrid machines, just like they do today with native Azure resources, across multiple customer environments using [Azure Lighthouse](../../lighthouse/how-to/manage-hybrid-infrastructure-arc.md).
-To deliver this experience with your hybrid machines hosted outside of Azure, the Azure Connected Machine agent needs to be installed on each machine that you plan on connecting to Azure. This agent does not deliver any other functionality, and it doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md). The Log Analytics agent for Windows and Linux is required when you want to proactively monitor the OS and workloads running on the machine, manage it using Automation runbooks or solutions like Update Management, or use other Azure services like [Azure Security Center](../../security-center/security-center-introduction.md).
+To deliver this experience with your hybrid machines, you need to install the Azure Connected Machine agent on each machine. This agent does not deliver any other functionality, and it doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md). The Log Analytics agent for Windows and Linux is required when:
+
+* You want to proactively monitor the OS and workloads running on the machine.
+* You want to manage the machine using Automation runbooks or solutions like Update Management.
+* You want to use other Azure services like [Azure Security Center](../../security-center/security-center-introduction.md).
>[!NOTE] > The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA), which is currently in preview, does not replace the Connected Machine agent. The Azure Monitor agent will replace the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. Review the Azure Monitor documentation about the new agent for more details.
-## Supported scenarios
-
-When you connect your machine to Azure Arc-enabled servers, it enables the ability to perform the following configuration management and monitoring tasks:
-- Assign [Azure Policy guest configurations](../../governance/policy/concepts/guest-configuration.md) using the same experience as policy assignment for Azure virtual machines. Today, most Guest Configuration policies do not apply configurations, they only audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/).
-
-- Report on configuration changes about installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers using Azure Automation [Change Tracking and Inventory](../../automation/change-tracking/overview.md) and [Azure Security Center File Integrity Monitoring](../../security-center/security-center-file-integrity-monitoring.md), for servers enabled with [Azure Defender for servers](../../security-center/defender-for-servers-introduction.md).
-
-- Monitor your connected machine guest operating system performance, and discover application components to monitor their processes and dependencies with other resources the application communicates using [VM insights](../../azure-monitor/vm/vminsights-overview.md).
+## Supported cloud operations
-- Simplify deployment using other Azure services like Azure Monitor Log Analytics workspace, using the supported [Azure VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. This includes performing post-deployment configuration or software installation using the Custom Script Extension.
+When you connect your machine to Azure Arc-enabled servers, you can perform the operational functions described in the following table.
-- Use [Update Management](../../automation/update-management/overview.md) in Azure Automation to manage operating system updates for your Windows and Linux servers
+|Operations function |Description |
+|--|--|
+|**Govern** ||
+| Azure Policy |Assign [Azure Policy guest configurations](../../governance/policy/concepts/guest-configuration.md) to audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/)|
+|**Protect** ||
+| Azure Security Center | Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/endpoint-defender), included through [Azure Defender](../../security-center/defender-for-servers-introduction.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats. Azure Security Center presents the alerts and remediation suggestions from the threats detected. |
+| Azure Sentinel | Machines connected to Arc-enabled servers can be [configured with Azure Sentinel](scenario-onboard-azure-sentinel.md) to collect security-related events and correlate them with other data sources. |
+|**Configure** ||
+| Azure Automation |Assess configuration changes about installed software, Microsoft services, Windows registry and files, and Linux daemons using [Change Tracking and Inventory](../../automation/change-tracking/overview.md).<br> Use [Update Management](../../automation/update-management/overview.md) to manage operating system updates for your Windows and Linux servers. |
+| Azure Automanage | Onboard a set of Azure services when you use [Automanage Machine for Arc-enabled servers](../../automanage/automanage-arc.md). |
+| VM extensions | Provides post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. |
+|**Monitor** ||
+| Azure Monitor | Monitor the connected machine guest operating system performance, and discover application components to monitor their processes and dependencies with other resources using [VM insights](../../azure-monitor/vm/vminsights-overview.md). Collect other log data, such as performance data and events, from the operating system or workload(s) running on the machine with the [Log Analytics agent](../../azure-monitor/agents/agents-overview.md#log-analytics-agent). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/design-logs-deployment.md). |
- > [!NOTE]
- > At this time, enabling Update Management directly from an Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and how to enable for your server.
-
-- Include your non-Azure servers for advanced threat detection and proactively monitor for potential security threats using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Defender](../../security-center/azure-defender.md).
-
-- Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint), included through [Azure Defender](../../security-center/azure-defender.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats.
+> [!NOTE]
+> At this time, enabling Update Management directly from an Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and how to enable for your server.
-Log data collected and stored in a Log Analytics workspace from the hybrid machine now contains properties specific to the machine, such as a Resource ID. This can be used to support [resource-context](../../azure-monitor/logs/design-logs-deployment.md#access-mode) log access.
+Log data collected and stored in a Log Analytics workspace from the hybrid machine now contains properties specific to the machine, such as a Resource ID, to support [resource-context](../../azure-monitor/logs/design-logs-deployment.md#access-mode) log access.
[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus.md
This section describes the global configuration settings available for this bind
}, "batchOptions": { "maxMessageCount": 1000,
- "operationTimeout": "00:01:00"
- "autoComplete": "true"
+ "operationTimeout": "00:01:00",
+ "autoComplete": true
} } }
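Applied together, the two fixes above (the missing comma after `"operationTimeout"` and the boolean written as the string `"true"`) yield a `batchOptions` section that parses as valid JSON. A minimal host.json sketch, assuming the values shown in the diff:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "batchOptions": {
        "maxMessageCount": 1000,
        "operationTimeout": "00:01:00",
        "autoComplete": true
      }
    }
  }
}
```

Note that `autoComplete` is a JSON boolean, not a quoted string; the Functions host expects `true` or `false` here.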
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
This section outlines variations and considerations when using **Azure Bot Servi
### [Azure Bot Service](/azure/bot-service/)
-The following Azure Bot Service **features are not currently available** in Azure Government:
+The following Azure Bot Service **features are not currently available** in Azure Government (updated 8/16/2021):
-- BotBuilder V3 Bot Templates
-- Channels
- - Cortana channel
- - Skype for Business Channel
+- Bot Framework Composer integration
+- Channels (due to availability of dependent services)
- Teams Channel
- - Slack Channel
- - Office 365 Email Channel
- - Facebook Messenger Channel
- - Telegram Channel
- - Kik Messenger Channel
- - GroupMe Channel
- - Skype Channel
-- Application Insights related capabilities including the Analytics Tab
-- Speech Priming Feature
-- Payment Card Feature
-
-Commonly used services in bot applications that are not currently available in Azure Government:
-
-- Application Insights
-- Speech Service
+ - Direct Line Speech Channel
+ - Telephony Channel (Preview)
+ - Microsoft Search Channel (Preview)
+ - Kik Channel (deprecated)
For more information, see [How do I create a bot that uses US Government data center](/azure/bot-service/bot-service-resources-faq-ecosystem#how-do-i-create-a-bot-that-uses-the-us-government-data-center).
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-design.md
Hub-spoke topologies can avoid the issue of DNS overrides by setting the Private
### Peered networks Network peering is used in various topologies, other than hub-spoke. Such networks can reach each other's IP addresses, and most likely share the same DNS. In such cases, our recommendation is similar to hub-spoke: select a single network that is reached by all other (relevant) networks and set the Private Link connection on that network. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS will apply.
-## Isolated networks
+### Isolated networks
If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. Once that's done, you can create a Private Link for one (or many) networks, without affecting traffic of other networks. That means creating a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
-### Testing with a local bypass: Edit your machine's hosts file instead of the DNS
+### Testing locally: Edit your machine's hosts file instead of the DNS
As a local bypass to the All or Nothing behavior, you can select not to update your DNS with the Private Link records, and instead edit the hosts files on select machines so only these machines would send requests to the Private Link endpoints. * Set up a Private Link, but when connecting to a Private Endpoint choose **not** to auto-integrate with the DNS (step 5b). * Configure the relevant endpoints on your machines' hosts files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your Endpoint's DNS settings](./private-link-configure.md#reviewing-your-endpoints-dns-settings).
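For example, the hosts file on a test machine might gain entries like the following. This is an illustrative sketch only: the IP addresses are placeholders, and the exact hostnames to map come from your own Private Endpoint's DNS settings, as described in the linked article.

```
# Hypothetical entries - replace the IPs with the private-endpoint IPs
# listed in your Private Endpoint's DNS configuration
10.1.0.5   api.loganalytics.io
10.1.0.6   <workspace-id>.ods.opinsights.azure.com
```

Because only machines with these entries resolve the endpoints to private IPs, all other machines continue to use the public endpoints.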
In the below diagram:
![Diagram of AMPLS limits](./media/private-link-security/ampls-limits.png)
-## Controlling network access to your resources
+## Control network access to your resources
Your Log Analytics workspaces or Application Insights components can be set to accept or block access from public networks, meaning networks not connected to the resource's AMPLS. That granularity allows you to set access according to your needs, per workspace. For example, you may accept ingestion only through Private Link connected networks (that is, specific VNets), but still choose to accept queries from all networks, public and private. Note that blocking queries from public networks means that clients (machines, SDKs, and so on) outside of the connected AMPLSs can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top, such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences that run outside the Azure portal and query Log Analytics data are also affected by this setting.
For more information on connecting your own storage account, see [Customer-owned
If you use Log Analytics solutions that require an Automation account, such as Update Management, Change Tracking, or Inventory, you should also set up a separate Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md). > [!NOTE]
-> Some products and Azure portal experiences query data through Azure Resource Manager and therefore won't be able to query data over a Private Link, unless Private Link settings are applied to the Resource Manager as well. To overcome this, you can configure your resources to accept queries from public networks as explained in [Controlling network access to your resources](./private-link-design.md#controlling-network-access-to-your-resources) (Ingestion can remain limited to Private Link networks).
+> Some products and Azure portal experiences query data through Azure Resource Manager and therefore won't be able to query data over a Private Link, unless Private Link settings are applied to the Resource Manager as well. To overcome this, you can configure your resources to accept queries from public networks as explained in [Controlling network access to your resources](./private-link-design.md#control-network-access-to-your-resources) (Ingestion can remain limited to Private Link networks).
We've identified the following products and experiences that query workspaces through Azure Resource Manager:
> * LogicApp connector
> * Update Management solution
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Title: "Azure Monitor docs: What's new for June 2021"
-description: "What's new in the Azure Monitor docs for June 2021."
+ Title: "What's new in Azure Monitor documentation"
+description: "What's new in Azure Monitor documentation"
Previously updated : 07/12/2021 Last updated : 08/15/2021
-# Azure Monitor docs: What's new for June, 2021
+# What's new in Azure Monitor documentation
-This article lists the significant changes to AzureMonitor docs during the month of June.
+This article lists significant changes to Azure Monitor documentation.
-## Agents
+## July, 2021
-### Updated articles
+### General
+
+**Updated articles**
+
+- [Azure Monitor Frequently Asked Questions](faq.yml)
+- [Deploy Azure Monitor at scale using Azure Policy](deploy-scale.md)
+
+### Agents
+
+**New articles**
+
+- [Migrating from Log Analytics agent](agents/azure-monitor-agent-migration.md)
+
+**Updated articles**
+
+- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)
+
+### Alerts
+
+**Updated articles**
+
+- [Common alert schema definitions](alerts/alerts-common-schema-definitions.md)
+- [Create a log alert with a Resource Manager template](alerts/alerts-log-create-templates.md)
+- [Resource Manager template samples for log alert rules in Azure Monitor](alerts/resource-manager-alerts-log.md)
+
+### Application Insights
+
+**New articles**
+
+- [Standard test](app/availability-standard-tests.md)
+
+**Updated articles**
+
+- [Use Azure Application Insights to understand how customers are using your application](app/tutorial-users.md)
+- [Application Insights cohorts](app/usage-cohorts.md)
+- [Discover how customers are using your application with Application Insights Funnels](app/usage-funnels.md)
+- [Impact analysis with Application Insights](app/usage-impact.md)
+- [Usage analysis with Application Insights](app/usage-overview.md)
+- [User retention analysis for web applications with Application Insights](app/usage-retention.md)
+- [Users, sessions, and events analysis in Application Insights](app/usage-segmentation.md)
+- [Troubleshooting Application Insights Agent (formerly named Status Monitor v2)](app/status-monitor-v2-troubleshoot.md)
+- [Monitor availability with URL ping tests](app/monitor-web-app-availability.md)
+
+### Containers
+
+**New articles**
+
+- [How to query logs from Container insights](containers/container-insights-log-query.md)
+- [Monitoring Azure Kubernetes Service (AKS) with Azure Monitor](../aks/monitor-aks.md)
+
+**Updated articles**
+
+- [How to create log alerts from Container insights](containers/container-insights-log-alerts.md)
+
+### Essentials
+
+**Updated articles**
+
+- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)
+- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)
+
+### Insights
+
+**Updated articles**
+
+- [Monitor Surface Hubs with Azure Monitor to track their health](insights/surface-hubs.md)
+
+### Logs
+
+**Updated articles**
+
+- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)
+- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)
+
+### Virtual Machines
+
+**Updated articles**
+
+- [Monitor virtual machines with Azure Monitor: Configure monitoring](vm/monitor-virtual-machine-configure.md)
+- [Monitor virtual machines with Azure Monitor: Security monitoring](vm/monitor-virtual-machine-security.md)
+- [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-workloads.md)
+- [Monitor virtual machines with Azure Monitor](vm/monitor-virtual-machine.md)
+- [Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md)
+- [Monitor virtual machines with Azure Monitor: Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)
+
+### Visualizations
+
+**Updated articles**
+
+- [Visualizing data from Azure Monitor](visualizations.md)
+## June, 2021
+### Agents
+
+**Updated articles**
- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md) - [Overview of Azure Monitor agents](agents/agents-overview.md) - [Configure data collection for the Azure Monitor agent (preview)](agents/data-collection-rule-azure-monitor-agent.md)
-## Alerts
+### Alerts
-### New articles
+**New articles**
- [Migrate Azure Monitor Application Insights smart detection to alerts (Preview)](alerts/alerts-smart-detections-migration.md)
-### Updated articles
+**Updated articles**
- [Create Metric Alerts for Logs in Azure Monitor](alerts/alerts-metric-logs.md) - [Troubleshoot log alerts in Azure Monitor](alerts/alerts-troubleshoot-log.md)
-## Application Insights
+### Application Insights
-### New articles
+**New articles**
- [Azure AD authentication for Application Insights (Preview)](app/azure-ad-authentication.md) - [Quickstart: Monitor an ASP.NET Core app with Azure Monitor Application Insights](app/dotnet-quickstart.md)
-### Updated articles
+**Updated articles**
- [Work Item Integration](app/work-item-integration.md) - [Azure AD authentication for Application Insights (Preview)](app/azure-ad-authentication.md)
This article lists the significant changes to AzureMonitor docs during the month
- [Memory leak detection (preview)](app/proactive-potential-memory-leak.md) - [Degradation in trace severity ratio (preview)](app/proactive-trace-severity.md)
-## Containers
+### Containers
-### Updated articles
+**Updated articles**
- [How to query logs from Container insights](containers/container-insights-log-query.md)
-## Essentials
+### Essentials
-### Updated articles
+**Updated articles**
- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md) - [Resource Manager template samples for diagnostic settings in Azure Monitor](essentials/resource-manager-diagnostic-settings.md)
-## General
-
-### New articles
-- [Azure Monitor Frequently Asked Questions](faq.yml)
-
-### Updated articles
-- [Deploy Azure Monitor at scale using Azure Policy](deploy-scale.md)
-- [Azure Monitor docs: What's new for May, 2021](whats-new.md)
-## Insights
+### Insights
-### Updated articles
+**Updated articles**
- [Enable SQL insights (preview)](insights/sql-insights-enable.md)
-## Logs
+### Logs
-### Updated articles
+**Updated articles**
- [Log Analytics tutorial](logs/log-analytics-tutorial.md) - [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md)
This article lists the significant changes to AzureMonitor docs during the month
- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md) - [Monitor health of Log Analytics workspace in Azure Monitor](logs/monitor-workspace.md)
-## Virtual Machines
+### Virtual Machines
-### New articles
+**New articles**
- [Monitoring virtual machines with Azure Monitor - Alerts](vm/monitor-virtual-machine-alerts.md) - [Monitoring virtual machines with Azure Monitor - Analyze monitoring data](vm/monitor-virtual-machine-analyze.md)
This article lists the significant changes to AzureMonitor docs during the month
- [Monitor virtual machines with Azure Monitor - Security monitoring](vm/monitor-virtual-machine-security.md) - [Monitoring virtual machines with Azure Monitor - Workloads](vm/monitor-virtual-machine-workloads.md) - [Monitoring virtual machines with Azure Monitor](vm/monitor-virtual-machine.md)
-- [VM insights Generally Available (GA) Frequently Asked Questions](vm/vminsights-ga-release-faq.yml)
-### Updated articles
+**Updated articles**
- [Troubleshoot VM insights guest health (preview)](vm/vminsights-health-troubleshoot.md) - [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na ms.devlang: na Previously updated : 06/14/2021 Last updated : 08/16/2021 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* You must have already set up a capacity pool. See [Set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md). * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
-## Requirements for Active Directory connections
+## <a name="requirements-for-active-directory-connections"></a>Requirements and considerations for Active Directory connections
* You can configure only one Active Directory (AD) connection per subscription and per region.
Several features of Azure NetApp Files require that you have an Active Directory
* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify.
+* If you change the password of the Active Directory user account that is used in Azure NetApp Files, be sure to update the password configured in the [Active Directory Connections](#create-an-active-directory-connection). Otherwise, you will not be able to create new volumes, and your access to existing volumes might also be affected depending on the setup.
+ * Proper ports must be open on the applicable Windows Active Directory (AD) server. The required ports are as follows:
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-introduction.md
na ms.devlang: na Previously updated : 07/12/2021 Last updated : 08/16/2021
Azure NetApp Files volume replication is supported between various [Azure region
* Japan East and Japan West * North Europe and West Europe * UK South and UK West
+* UAE North and UAE Central
+* Norway East and Norway West
### Azure regional non-standard pairs
azure-portal Capture Browser Trace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/capture-browser-trace.md
Title: Capture a browser trace for troubleshooting description: Capture network information from a browser trace to help troubleshoot issues with the Azure portal. Previously updated : 03/25/2021 Last updated : 08/16/2021
If you're troubleshooting an issue with the Azure portal, and you need to contact Microsoft support, we recommend you first capture a browser trace and some additional information. The information you collect can provide important details about the portal at the time the issue occurs. Follow the steps in this article for the developer tools in the browser you use: Google Chrome or Microsoft Edge (Chromium), Microsoft Edge (EdgeHTML), Apple Safari, or Firefox.
+> [!IMPORTANT]
+> Microsoft support uses these traces for troubleshooting purposes only. Please be mindful of who you share your traces with, as they may contain sensitive information about your environment.
+ ## Google Chrome and Microsoft Edge (Chromium) Google Chrome and Microsoft Edge (Chromium) are both based on the [Chromium open source project](https://www.chromium.org/Home). The following steps show how to use the developer tools, which are very similar in the two browsers. For more information, see [Chrome DevTools](https://developers.google.com/web/tools/chrome-devtools) and [Microsoft Edge (Chromium) Developer Tools](/microsoft-edge/devtools-guide-chromium).
-1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your sign-in.
+1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your sign-in.
1. Start recording the steps you take in the portal, using [Steps Recorder](https://support.microsoft.com/help/22878/windows-10-record-steps).
The following steps show how to use the developer tools in Apple Safari. For mor
The following steps show how to use the developer tools in Firefox. For more information, see [Firefox Developer Tools](https://developer.mozilla.org/docs/Tools).
-1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your sign-in.
+1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your sign-in.
1. Start recording the steps you take in the portal. Use [Steps Recorder](https://support.microsoft.com/help/22878/windows-10-record-steps) on Windows, or see [How to record the screen on your Mac](https://support.apple.com/HT208721).
The following steps show how to use the developer tools in Firefox. For more inf
## Next steps
-[Azure portal overview](azure-portal-overview.md)
+- Read more about the [Azure portal](azure-portal-overview.md).
+- Learn how to [open a support request](supportability/how-to-create-azure-support-request.md) in the Azure portal.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-resource.md
description: Describes the functions to use in a Bicep file to retrieve values a
Previously updated : 07/30/2021 Last updated : 08/16/2021 # Resource functions for Bicep
Built-in policy definitions are tenant level resources. For an example of deploy
Returns a secret from an Azure Key Vault. The `getSecret` function can only be called on a `Microsoft.KeyVault/vaults` resource. Use this function to pass a secret to a secure string parameter of a Bicep module. The function can be used only with a parameter that has the `@secure()` decorator.
+The key vault must have `enabledForTemplateDeployment` set to `true`. The user deploying the Bicep file must have access to the secret. For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md).
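As a sketch of that pattern (the vault name, secret name, module path, and the module's `adminPassword` parameter are illustrative assumptions, not fixed names):

```bicep
param vaultName string

// Reference the existing key vault; it must have
// enabledForTemplateDeployment set to true
resource kv 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
  name: vaultName
}

// Pass the secret to a module parameter that has the @secure() decorator
module sqlDeploy 'sql.bicep' = {
  name: 'deploySqlServer'
  params: {
    adminPassword: kv.getSecret('sqlAdminPassword')
  }
}
```

The secret value never appears in the deployment history because it flows directly into a secure parameter.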
+ ### Parameters | Parameter | Required | Type | Description |
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/resource-declaration.md
description: Describes how to declare resources to deploy in Bicep.
Previously updated : 07/30/2021 Last updated : 08/16/2021 # Resource declaration in Bicep
az provider show \
You can apply tags to a resource during deployment. Tags help you logically organize your deployed resources. For examples of the different ways you can specify the tags, see [ARM template tags](../management/tag-resources.md#arm-templates).
+## Set managed identities for Azure resources
+
+Some resources support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). Those resources have an identity object at the root level of the resource declaration.
+
+You can use either system-assigned or user-assigned identities.
+
+The following example shows how to configure a system-assigned identity for an Azure Kubernetes Service cluster.
+
+```bicep
+resource aks 'Microsoft.ContainerService/managedClusters@2020-09-01' = {
+ name: clusterName
+ location: location
+ tags: tags
+ identity: {
+ type: 'SystemAssigned'
+ }
+}
+```
+
+The next example shows how to configure a user-assigned identity for a virtual machine.
+
+```bicep
+param userAssignedIdentity string
+
+resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = {
+ name: vmName
+ location: location
+ identity: {
+ type: 'UserAssigned'
+ userAssignedIdentities: {
+ '${userAssignedIdentity}': {}
+ }
+ }
+}
+```
+
+ ## Set resource-specific properties The preceding properties are generic to most resource types. After setting those values, you need to set the properties that are specific to the resource type you're deploying.
azure-resource-manager Key Vault Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/key-vault-access.md
Title: Use Key Vault when deploying managed app description: Shows how to use access secrets in Azure Key Vault when deploying Managed Applications -+ Previously updated : 01/30/2019 Last updated : 08/16/2021 # Access Key Vault secret when deploying Azure Managed Applications
This article describes how to configure the Key Vault to work with Managed Appli
## Add service as contributor
-1. Select **Access control (IAM)**.
-
- ![Select access control](./media/key-vault-access/access-control.png)
-
-1. Select **Add role assignment**.
-
- ![Select add](./media/key-vault-access/add-access-control.png)
-
-1. Select **Contributor** for the role. Search for **Appliance Resource Provider** and select it from the available options.
-
- ![Search for provider](./media/key-vault-access/search-provider.png)
+Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope.
-1. Select **Save**.
+For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
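The same assignment can be scripted with the Azure CLI. This is a hedged sketch: the `<key-vault-resource-id>` placeholder is an assumption about your environment, and the `id` property in the query applies to recent (Microsoft Graph-based) CLI versions.

```azurecli
# Look up the object ID of the Appliance Resource Provider service principal
spId=$(az ad sp list --display-name "Appliance Resource Provider" \
  --query "[0].id" --output tsv)

# Assign the Contributor role at the key vault scope
az role assignment create \
  --assignee-object-id "$spId" \
  --assignee-principal-type ServicePrincipal \
  --role "Contributor" \
  --scope <key-vault-resource-id>
```

Looking the service principal up by object ID avoids the extra directory query the CLI would otherwise make on your behalf.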
## Reference Key Vault secret
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
description: Shows how to create an Azure managed application that is intended f
- Previously updated : 04/14/2020+ Last updated : 08/16/2021
Copy the storage account's resource ID. It will be used later when deploying the
### Set the role assignment for "Appliance Resource Provider" in your storage account
-Before your managed application definition can be deployed to your storage account, you must give contributor permissions to the **Appliance Resource Provider** role so that it can write the definition files to your storage account's container.
+Before your managed application definition can be deployed to your storage account, assign the **Contributor** role to the **Appliance Resource Provider** user at the storage account scope. This assignment lets the identity write definition files to your storage account's container.
-1. In the [Azure portal](https://portal.azure.com), navigate to your storage account.
-1. Select **Access control (IAM)** to display the access control settings for the storage account. Select the **Role assignments** tab to see the list of role assignments.
-1. In the **Add role assignment** window, select the **Contributor** role.
-1. From the **Assign access to** field, select **Azure AD user, group, or service principal**.
-1. Under **Select**, search for **Appliance Resource Provider** role and select it.
-1. Save the role assignment.
+For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
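The same assignment can be expressed in a template if you prefer deployment over portal steps. This Bicep fragment is a sketch under assumptions: `applianceResourceProviderObjectId` is a hypothetical parameter you fill in with the service principal's object ID from your tenant, and `b24988ac-6180-42a0-ab88-20f7382dd24c` is the built-in Contributor role ID.

```bicep
param storageAccountName string
// Assumption: the Appliance Resource Provider's object ID, looked up in your tenant.
param applianceResourceProviderObjectId string

var contributorRole = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
  name: storageAccountName
}

// Contributor at storage account scope lets the identity write definition files.
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(storageAccount.id, applianceResourceProviderObjectId, contributorRole)
  scope: storageAccount
  properties: {
    roleDefinitionId: contributorRole
    principalId: applianceResourceProviderObjectId
    principalType: 'ServicePrincipal'
  }
}
```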
### Deploy the managed application definition with an ARM template
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
Applying locks can lead to unexpected results because some operations that don't
- A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses POST](/rest/api/application-gateway/application-gateways/backend-health), which is blocked by the read-only lock.
+- A read-only lock on an **AKS cluster** prevents all users from accessing any cluster resources in the **Kubernetes resources** section of the AKS cluster's left pane in the Azure portal. Those operations require a POST request for authentication, which is blocked by the read-only lock.
+ ## Who can create or delete locks

To create or delete management locks, you must have access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Of the built-in roles, only **Owner** and **User Access Administrator** are granted those actions.
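As a minimal sketch, a delete lock can also be declared in a Bicep file and deployed to the scope you want to protect. The lock name and notes below are illustrative, not prescribed by the article:

```bicep
// Sketch: a CanNotDelete lock applied at the scope this template is deployed to.
// level can be 'CanNotDelete' or 'ReadOnly'.
resource deleteLock 'Microsoft.Authorization/locks@2016-09-01' = {
  name: 'doNotDelete'
  properties: {
    level: 'CanNotDelete'
    notes: 'Prevents accidental deletion. Only Owner or User Access Administrator can remove this lock.'
  }
}
```

Deploying this template requires the same `Microsoft.Authorization/locks/*` permissions described above.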
In the request, include a JSON object that specifies the properties for the lock
- To learn about logically organizing your resources, see [Using tags to organize your resources](tag-resources.md). - You can apply restrictions and conventions across your subscription with customized policies. For more information, see [What is Azure Policy?](../../governance/policy/overview.md).-- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
+- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
Title: Move Azure Networking resources to new subscription or resource group description: Use Azure Resource Manager to move virtual networks and other networking resources to a new resource group or subscription. Previously updated : 10/16/2019 Last updated : 08/16/2021 # Move guidance for networking resources This article describes how to move virtual networks and other networking resources for specific scenarios.
+During the move, your networking resources will operate without interruption.
+ ## Dependent resources

> [!NOTE]
> VPN gateways associated with Public IP Standard SKU addresses can't currently be moved between resource groups or subscriptions.
-When moving a resource, you must also move its dependent resources (e.g. public IP addresses, virtual network gateways, all associated connection resources). Local network gateways can be in a different resource group.
+When moving a resource, you must also move its dependent resources (for example, public IP addresses, virtual network gateways, and all associated connection resources). Local network gateways can be in a different resource group.
To move a virtual machine with a network interface card to a new subscription, you must move all dependent resources. Move the virtual network for the network interface card, all other network interface cards for the virtual network, and the VPN gateways.
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/syntax.md
Title: Template structure and syntax description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 06/22/2021 Last updated : 08/16/2021 # Understand the structure and syntax of ARM templates
You define resources with the following structure:
"<tag-name1>": "<tag-value1>", "<tag-name2>": "<tag-value2>" },
+ "identity": {
+   "type": "<system-assigned-or-user-assigned-identity>",
+   "userAssignedIdentities": {
+     "<resource-id-of-identity>": {}
+   }
+ },
"sku": { "name": "<sku-name>", "tier": "<sku-tier>",
You define resources with the following structure:
| location |Varies |Supported geo-locations of the provided resource. You can select any of the available locations, but typically it makes sense to pick one that is close to your users. Usually, it also makes sense to place resources that interact with each other in the same region. Most resource types require a location, but some types (such as a role assignment) don't require a location. See [Set resource location](resource-location.md). |
| dependsOn |No |Resources that must be deployed before this resource is deployed. Resource Manager evaluates the dependencies between resources and deploys them in the correct order. When resources aren't dependent on each other, they're deployed in parallel. The value can be a comma-separated list of resource names or resource unique identifiers. Only list resources that are deployed in this template. Resources that aren't defined in this template must already exist. Avoid adding unnecessary dependencies as they can slow your deployment and create circular dependencies. For guidance on setting dependencies, see [Define the order for deploying resources in ARM templates](./resource-dependency.md). |
| tags |No |Tags that are associated with the resource. Apply tags to logically organize resources across your subscription. |
+| identity | No | Some resources support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). Those resources have an identity object at the root level of the resource declaration. You can set whether the identity is user-assigned or system-assigned. For user-assigned identities, provide a list of resource IDs for the identities. Set the key to the resource ID and the value to an empty object. For more information, see [Configure managed identities for Azure resources on an Azure VM using templates](../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md). |
| sku | No | Some resources allow values that define the SKU to deploy. For example, you can specify the type of redundancy for a storage account. |
| kind | No | Some resources allow a value that defines the type of resource you deploy. For example, you can specify the type of Cosmos DB to create. |
| scope | No | The scope property is only available for [extension resource types](../management/extension-resource-types.md). Use it when specifying a scope that is different than the deployment scope. See [Setting scope for extension resources in ARM templates](scope-extension-resources.md). |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 08/09/2021 Last updated : 08/16/2021
The next example shows a list function that takes a parameter. In this case, the
`pickZones(providerNamespace, resourceType, location, [numberOfZones], [offset])`
-Determines whether a resource type supports zones for the specified location or region. This function only supports zonal resources, zone redundant services will return an empty array. For more information see [Azure Services that support Availability Zones](../../availability-zones/az-region.md). To use the pickZones function with zone redundant services, see the examples below.
+Determines whether a resource type supports zones for the specified location or region. This function supports only zonal resources; zone-redundant services return an empty array. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md). To use the pickZones function with zone-redundant services, see the examples below.
### Parameters
When the `numberOfZones` parameter is set to 3, it returns:
] ```
-When the resource type or region doesn't support zones an empty array is returned. An empty array is also returned for zone redundant services.
+When the resource type or region doesn't support zones, an empty array is returned. An empty array is also returned for zone redundant services.
```json [
When the resource type or region doesn't support zones an empty array is returne
### Remarks
-There are different categories for Azure Availability Zones, zonal and zone-redundant. The pickZones function can be used to return an availability zone number or numbers for a zonal resource. For zone redundant services (ZRS), the function will return an empty array. Zonal resources can typically be identified by the use of a `zones` property on the resource header. Zone redundant services have different ways for identifying and using availability zones per resource, use the documentation for a specific service to determine the category of support for availability zones. For more information see [Azure Services that support Availability Zones](../../availability-zones/az-region.md).
+There are two categories of Azure Availability Zone support: zonal and zone-redundant. The pickZones function can be used to return an availability zone number or numbers for a zonal resource. For zone-redundant services (ZRS), the function returns an empty array. Zonal resources can typically be identified by the use of a `zones` property on the resource header. Zone-redundant services identify and use availability zones differently per resource; use the documentation for a specific service to determine its category of availability-zone support. For more information, see [Azure Services that support Availability Zones](../../availability-zones/az-region.md).
-To determine if a given Azure region or location supports availability zones, simply call the pickZones() function with a zonal resource type, for example `Microsoft.Storage/storageAccounts`. If the response is non-empty, the region supports availability zones.
+To determine if a given Azure region or location supports availability zones, call the pickZones() function with a zonal resource type, for example `Microsoft.Storage/storageAccounts`. If the response is non-empty, the region supports availability zones.
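That check can be sketched in Bicep. This is an illustrative fragment, not taken from the article; the `Microsoft.Storage/storageAccounts` type is used here only as a representative zonal resource type:

```bicep
param location string = resourceGroup().location

// pickZones returns zone numbers for a zonal resource type, or an empty
// array for regions without zones and for zone-redundant services.
output storageZones array = pickZones('Microsoft.Storage', 'storageAccounts', location, 3)

// Non-empty result means the region supports availability zones for this type.
output regionSupportsZones bool = length(pickZones('Microsoft.Storage', 'storageAccounts', location)) > 0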
### pickZones example
To simplify the creation of any resource ID, use the `resourceId()` functions de
The pattern is:
-`"[reference(resourceId(<resource-provider-namespace>, <resource-name>, <API-version>, 'Full').Identity.propertyName]"`
+`"[reference(resourceId(<resource-provider-namespace>, <resource-name>), <API-version>, 'Full').Identity.propertyName]"`
For example, to get the principal ID for a managed identity that is applied to a virtual machine, use:
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/data-discovery-and-classification-overview.md
Previously updated : 02/17/2021 Last updated : 08/16/2021 tags: azure-synapse # Data Discovery & Classification
tags: azure-synapse
Data Discovery & Classification is built into Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. It provides basic capabilities for discovering, classifying, labeling, and reporting the sensitive data in your databases.
-Your most sensitive data might include business, financial, healthcare, or personal information. Discovering and classifying this data can play a pivotal role in your organization's information-protection approach. It can serve as infrastructure for:
+Your most sensitive data might include business, financial, healthcare, or personal information. Discovering and classifying this data can serve as infrastructure for:
- Helping to meet standards for data privacy and requirements for regulatory compliance.
- Various security scenarios, such as monitoring (auditing) access to sensitive data.
Your most sensitive data might include business, financial, healthcare, or perso
## <a id="what-is-dc"></a>What is Data Discovery & Classification?
-Data Discovery & Classification forms a new information-protection paradigm for SQL Database, SQL Managed Instance, and Azure Synapse, aimed at protecting the data and not just the database. Currently it supports the following capabilities:
+Data Discovery & Classification currently supports the following capabilities:
- **Discovery and recommendations:** The classification engine scans your database and identifies columns that contain potentially sensitive data. It then provides you with an easy way to review and apply recommended classification via the Azure portal. -- **Labeling:** You can apply sensitivity-classification labels persistently to columns by using new metadata attributes that have been added to the SQL Server database engine. This metadata can then be used for sensitivity-based auditing and protection scenarios.
+- **Labeling:** You can apply sensitivity-classification labels persistently to columns by using new metadata attributes that have been added to the SQL Server database engine. This metadata can then be used for sensitivity-based auditing scenarios.
- **Query result-set sensitivity:** The sensitivity of a query result set is calculated in real time for auditing purposes.
Data Discovery & Classification comes with a built-in set of sensitivity labels
You define and customize of your classification taxonomy in one central place for your entire Azure organization. That location is in [Azure Security Center](../../security-center/security-center-introduction.md), as part of your security policy. Only someone with administrative rights on the organization's root management group can do this task.
-As part of policy management for information protection, you can define custom labels, rank them, and associate them with a selected set of information types. You can also add your own custom information types and configure them with string patterns. The patterns are added to the discovery logic for identifying this type of data in your databases.
+As part of policy management, you can define custom labels, rank them, and associate them with a selected set of information types. You can also add your own custom information types and configure them with string patterns. The patterns are added to the discovery logic for identifying this type of data in your databases.
For more information, see [Customize the SQL information protection policy in Azure Security Center (Preview)](../../security-center/security-center-info-protection-policy.md).
After the organization-wide policy has been defined, you can continue classifyin
## <a id="audit-sensitive-data"></a>Audit access to sensitive data
-An important aspect of the information-protection paradigm is the ability to monitor access to sensitive data. [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) has been enhanced to include a new field in the audit log called `data_sensitivity_information`. This field logs the sensitivity classifications (labels) of the data that was returned by a query. Here's an example:
+An important aspect of classification is the ability to monitor access to sensitive data. [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) has been enhanced to include a new field in the audit log called `data_sensitivity_information`. This field logs the sensitivity classifications (labels) of the data that was returned by a query. Here's an example:
![Audit log](./media/data-discovery-and-classification-overview/11_data_classification_audit_log.png)
azure-sql Performance Improve Use Batching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/performance-improve-use-batching.md
Previously updated : 01/25/2019 Last updated : 06/22/2021 # How to use batching to improve Azure SQL Database and Azure SQL Managed Instance application performance [!INCLUDE[appliesto-sqldb-sqlmi](includes/appliesto-sqldb-sqlmi.md)]
This example demonstrates that even more complex database operations, such as ma
### UPSERT
-Another batching scenario involves simultaneously updating existing rows and inserting new rows. This operation is sometimes referred to as an "UPSERT" (update + insert) operation. Rather than making separate calls to INSERT and UPDATE, the MERGE statement is best suited to this task. The MERGE statement can perform both insert and update operations in a single call.
+Another batching scenario involves simultaneously updating existing rows and inserting new rows. This operation is sometimes referred to as an "UPSERT" (update + insert) operation. Rather than making separate INSERT and UPDATE calls, the MERGE statement can be a suitable replacement, because it performs both insert and update operations in a single call. However, the locking mechanics of MERGE differ from those of separate INSERT and UPDATE statements, so test your specific workloads before deploying to production.
Table-valued parameters can be used with the MERGE statement to perform updates and inserts. For example, consider a simplified Employee table that contains the following columns: EmployeeID, FirstName, LastName, SocialSecurityNumber:
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
The diagram shows the integrated monitoring architecture of integrated security
:::image type="content" source="media/azure-security-integration/azure-integrated-security-architecture.png" alt-text="Diagram showing the architecture of Azure Integrated Security." border="false":::
+The **Log Analytics agent** collects log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a **Log Analytics workspace**. Each workspace has its own data repository and configuration for storing data. After the logs are collected, **Azure Security Center** assesses the vulnerability status of Azure VMware Solution VMs and raises an alert for any critical vulnerability. Azure Security Center then forwards the vulnerability status to Azure Sentinel, which creates an incident and correlates it with other threats. Azure Security Center is connected to Azure Sentinel through the Azure Security Center connector.
## Prerequisites
Recommendations and assessments provide you with the security health details of
## Deploy an Azure Sentinel workspace
+Azure Sentinel provides security analytics, alert detection, and automated threat response across an environment. It's a cloud-native, security information and event management (SIEM) solution that's built on top of a Log Analytics workspace.
+ Since Azure Sentinel is built on top of a Log Analytics workspace, you only need to select the workspace you want to use.

1. In the Azure portal, search for **Azure Sentinel**, and select it.
azure-vmware Concepts Monitor Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-monitor-protection.md
- Title: Concepts - Monitor and protection
-description: Learn about the Azure native services that help secure and protect your Azure VMware Solution workloads.
- Previously updated : 06/14/2021--
-# Monitor and protect Azure VMware Solution workloads
-
-Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) on Azure VMware Solution and on-premises VMs. The Azure native services that you can integrate with Azure VMware Solution include:
--- **Log Analytics workspace** is a unique environment to store log data. Each workspace has its own data repository and configuration. Data sources and solutions are configured to store their data in a specific workspace. Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs. -- **Azure Security Center** is a unified infrastructure security management system. It strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises. It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Azure Security Center. For more information, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).-- **[Azure Monitor](../azure-monitor/vm/vminsights-enable-overview.md)** is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment. With Azure Monitor, you can monitor guest operating system performance and discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions. Collect data and logs to a single point and present that data to different Azure native services.-- **Azure Arc** extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. [Azure Arc enabled servers](../azure-arc/servers/overview.md) enables you to manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. 
You can attach a Kubernetes cluster hosted in your Azure VMware Solution environment using [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md). -- **[Azure Update Management](../automation/update-management/overview.md)** in Azure Automation manages operating system updates for your Windows and Linux machines in a hybrid environment. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
-
--
-## Topology
-
-The diagram shows the integrated monitoring architecture for Azure VMware Solution VMs.
--
-The Log Analytics agent enables collection of log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a Log Analytics workspace. You can deploy the Log Analytics agent using Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) for new and existing VMs.
-
-Once the logs are collected by the Log Analytics workspace, you can configure the Log Analytics workspace with Azure Security Center. Azure Security Center assesses the vulnerability status of Azure VMware Solution VMs and raises an alert for any critical vulnerability. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md).
-
-You can configure the Log Analytics workspace with Azure Sentinel for alert detection, threat visibility, hunting, and threat response. In the preceding diagram, Azure Security Center is connected to Azure Sentinel using Azure Security Center connector. Azure Security Center will forward the environment vulnerability to Azure Sentinel to create an incident and map with other threats. You can also create the scheduled rules query to detect unwanted activity and convert it to the incidents.
--
-## Next steps
-
-Now that you've covered Azure VMware Solution network and interconnectivity concepts, you may want to learn about:
--- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md)-- [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md)-- [Automation account authentication](../automation/automation-security-overview.md)-- [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md) and [Azure Monitor](../azure-monitor/overview.md)-- [Azure Security Center planning](../security-center/security-center-planning-and-operations-guide.md) and [Supported platforms for Security Center](../security-center/security-center-os-coverage.md)--
azure-vmware Create Placement Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/create-placement-policy.md
Title: Create a placement policy
+ Title: Create a placement policy (Preview)
description: Learn how to create a placement policy in Azure VMware Solution to control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal. Last updated 8/16/2021
Last updated 8/16/2021
-# Create a placement policy in Azure VMware Solution
+# Create a placement policy in Azure VMware Solution (Preview)
>[!IMPORTANT]
->Azure VMware Solution placement policy (Preview) is currently in preview. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>Azure VMware Solution placement policy (Preview) is currently in preview. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). To use the preview feature, [you'll need to register both the _DRS Placement Policy_ and _Early Access_ features](https://ms.portal.azure.com/#blade/Microsoft_Azure_Resources/PreviewFeaturesBlade). Search for and select the features, and then select **Register**.
In Azure VMware Solution, clusters in a private cloud are a managed resource. As a result, the cloudadmin role can't make certain changes to the cluster from the vSphere Client, including the management of Distributed Resource Scheduler (DRS) rules.
A placement policy has at least five required components:
## Prerequisites
-You must have _Contributor_ level access to the private cloud to manage placement policies.
+- You must have _Contributor_ level access to the private cloud to manage placement policies.
+- The _DRS Placement Policy_ and _Early Access_ features [are registered](https://ms.portal.azure.com/#blade/Microsoft_Azure_Resources/PreviewFeaturesBlade).
## Placement policy types
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/integrate-azure-native-services.md
Title: Integrate and deploy Azure native services
+ Title: Monitor and protect VMs with Azure native services
description: Learn how to integrate and deploy Microsoft Azure native tools to monitor and manage your Azure VMware Solution workloads. Previously updated : 06/15/2021 Last updated : 08/15/2021
-# Integrate and deploy Azure native services
+# Monitor and protect VMs with Azure native services
-Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) in a hybrid environment (Azure, Azure VMware Solution, and on-premises). The Azure native services that you can integrate with Azure VMware Solution include:
+Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) in a hybrid environment (Azure, Azure VMware Solution, and on-premises). In this article, you'll integrate Azure native services in your Azure VMware Solution private cloud. You'll also learn how to use the tools to manage your VMs throughout their lifecycle.
-- **Log Analytics workspace:** Each workspace has its own data repository and configuration for storing log data. Data sources and solutions are configured to store their data in a specific workspace. Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs. -- **Azure Security Center:** Unified infrastructure security management system that strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises. It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. To enable Azure Security Center, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).-- **Azure Sentinel:** A cloud-native, security information event management (SIEM) solution. It provides security analytics, alert detection, and automated threat response across an environment. Azure Sentinel is built on top of a Log Analytics workspace.-- **Azure Arc:** Extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. -- **Azure Update Management:** Manages operating system updates for your Windows and Linux machines in a hybrid environment.-- **Azure Monitor:** Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment.
+The Azure native services that you can integrate with Azure VMware Solution include:
+
+- **Azure Arc** extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. [Azure Arc enabled servers](../azure-arc/servers/overview.md) lets you manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or another cloud provider. You can attach a Kubernetes cluster hosted in your Azure VMware Solution environment using [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md).
+
+- **Azure Monitor** collects, analyzes, and acts on telemetry from your cloud and on-premises environments. It requires no deployment. You can monitor guest operating system performance to discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions.
+
+ With Azure Monitor, you can collect data from different [sources to monitor and analyze](../azure-monitor/agents/data-sources.md) and different types of [data for analysis, visualization, and alerting](../azure-monitor/data-platform.md). You can also create alert rules to identify issues in your environment, like high use of resources, missing patches, low disk space, and heartbeat of your VMs. You can set an automated response to detected events by sending an alert to IT Service Management (ITSM) tools. Alert detection notification can also be sent via email.
+
+- **Azure Security Center** strengthens data centers' security and provides advanced threat protection across hybrid workloads in the cloud or on-premises. It assesses Azure VMware Solution VMs' vulnerability, raises alerts as needed, and forwards them to Azure Monitor for resolution. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md). You can also define security policies in [Azure Security Center](azure-security-integration.md).
+
+- **Azure Update Management** manages operating system updates for your Windows and Linux machines in a hybrid environment in Azure Automation. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
+
+- **Log Analytics workspace** stores log data. Each workspace has its own data repository and configuration to store data. You can monitor Azure VMware Solution VMs through the Log Analytics agent. Machines connected to the Log Analytics Workspace use the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis. Use the Azure Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) to deploy Log Analytics agents on VMs.
+++
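Since the services above all converge on the Log Analytics workspace, a quick way to confirm that agents are reporting is to query the workspace's `Heartbeat` table. A minimal Azure CLI sketch, assuming the `log-analytics` CLI extension is installed and that the resource group and workspace names (`avs-rg`, `avs-law`) are placeholders for your own:

```shell
# Look up the workspace (customer) ID used by query operations.
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group avs-rg \
  --workspace-name avs-law \
  --query customerId --output tsv)

# Count heartbeats per machine over the last hour; machines missing
# from the result have not reported recently.
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query "Heartbeat | where TimeGenerated > ago(1h) | summarize Beats = count() by Computer" \
  --output table
```

The same `Heartbeat` query can back an alert rule for VM availability, as described later in this article.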
+## Benefits
+
+- Azure native services can be used to manage your VMs in a hybrid environment (Azure, Azure VMware Solution, and on-premises).
+- Integrated monitoring and visibility of your Azure, Azure VMware Solution, and on-premises VMs.
+- With Azure Update Management in Azure Automation, you can manage operating system updates for both your Windows and Linux machines.
+- Azure Security Center provides advanced threat protection, including:
+ - File integrity monitoring
+ - Fileless security alerts
+ - Operating system patch assessment
+ - Security misconfigurations assessment
+ - Endpoint protection assessment
+- Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs.
+- Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions. Collect data and logs to a single point and present that data to different Azure native services.
+- Added benefits of Azure Monitor include:
+ - Seamless monitoring
+ - Better infrastructure visibility
+ - Instant notifications
+ - Automatic resolution
+ - Cost efficiency
++
+## Topology
+
+The diagram shows the integrated monitoring architecture for Azure VMware Solution VMs.
++
+The Log Analytics agent enables collection of log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a Log Analytics workspace. You can deploy the Log Analytics agent using Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) for new and existing VMs.
+
+Once the logs are collected by the Log Analytics workspace, you can configure the Log Analytics workspace with Azure Security Center. Azure Security Center assesses the vulnerability status of Azure VMware Solution VMs and raises an alert for any critical vulnerability. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md).
+
+You can configure the Log Analytics workspace with Azure Sentinel for alert detection, threat visibility, hunting, and threat response. In the preceding diagram, Azure Security Center is connected to Azure Sentinel using the Azure Security Center connector. Azure Security Center forwards environment vulnerabilities to Azure Sentinel to create incidents and map them with other threats. You can also create scheduled rule queries to detect unwanted activity and convert it to incidents.
++
+## Before you start
+
+If you are new to Azure or unfamiliar with any of the services previously mentioned, review the following articles:
+
+- [Automation account authentication overview](../automation/automation-security-overview.md)
+- [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md) and [Azure Monitor](../azure-monitor/overview.md)
+- [Planning](../security-center/security-center-planning-and-operations-guide.md) and [Supported platforms](../security-center/security-center-os-coverage.md) for Azure Security Center
+- [Enable Azure Monitor for VMs overview](../azure-monitor/vm/vminsights-enable-overview.md)
+- [What is Azure Arc enabled servers?](../azure-arc/servers/overview.md) and [What is Azure Arc enabled Kubernetes?](../azure-arc/kubernetes/overview.md)
+- [Update Management overview](../automation/update-management/overview.md)
-In this article, you'll integrate Azure native services in your Azure VMware Solution private cloud. You'll also learn how to use the tools to manage your VMs throughout their lifecycle.
## Enable Azure Update Management [Azure Update Management](../automation/update-management/overview.md) in Azure Automation manages operating system updates for your Windows and Linux machines in a hybrid environment. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
-1. [Create an Azure Automation account](../automation/automation-create-standalone-account.md).
+1. Before you can add a Log Analytics workspace to Azure Update Management, you first need to [create an Azure Automation account](../automation/automation-create-standalone-account.md).
>[!TIP] >You can [use an Azure Resource Manager (ARM) template to create an Automation account](../automation/quickstart-create-automation-account-template.md). Using an ARM template takes fewer steps compared to other deployment methods.
-1. [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). It links your Log Analytics workspace to your automation account. It also enables Azure and non-Azure VMs in Update Management.
-
- - If you have a workspace, select **Update management**. Then select the Log Analytics workspace, and Automation account and select **Enable**. The setup takes up to 15 minutes to complete.
+1. [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md). If you prefer, you can also create a workspace via [CLI](../azure-monitor/logs/quick-create-workspace-cli.md), [PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md), or [Azure Resource Manager template](../azure-monitor/logs/resource-manager-workspace.md).
- - If you want to create a new Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md). You can also create a workspace with [CLI](../azure-monitor/logs/quick-create-workspace-cli.md), [PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md), or [Azure Resource Manager template](../azure-monitor/logs/resource-manager-workspace.md).
+1. [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). In the process, you'll link your Log Analytics workspace with your automation account.
1. Once you've enabled Update Management, you can [deploy updates on VMs and review the results](../automation/update-management/deploy-updates.md). +
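The setup steps above can be sketched with the Azure CLI. This is a hedged outline, not the only supported path: the resource names are placeholders, `az automation` requires the `automation` CLI extension, and you should verify the linked-service parameters against current CLI documentation before relying on them.

```shell
# Create the Automation account (requires the 'automation' CLI extension).
az automation account create \
  --resource-group avs-rg \
  --name avs-automation \
  --location westus2

# Create the Log Analytics workspace that Update Management will use.
az monitor log-analytics workspace create \
  --resource-group avs-rg \
  --workspace-name avs-law

# Link the workspace to the Automation account.
AUTOMATION_ID=$(az automation account show \
  --resource-group avs-rg --name avs-automation \
  --query id --output tsv)

az monitor log-analytics workspace linked-service create \
  --resource-group avs-rg \
  --workspace-name avs-law \
  --name Automation \
  --resource-id "$AUTOMATION_ID"
```

Enabling the Update Management solution itself still follows the portal steps in the linked article.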
+## Enable Azure Security Center
+
+Azure Security Center assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. You can forward these security alerts to Azure Monitor for resolution. For more information, see [Supported features for VMs](../security-center/security-center-services.md).
+
+Azure Security Center offers many features, including:
+- File integrity monitoring
+- Fileless attack detection
+- Operating system patch assessment
+- Security misconfigurations assessment
+- Endpoint protection assessment
+
+>[!NOTE]
+>Azure Security Center is a pre-configured tool that doesn't require deployment, but you'll need to enable it in the Azure portal.
++
+1. [Add Azure VMware Solution VMs to Security Center](azure-security-integration.md#add-azure-vmware-solution-vms-to-security-center).
+
+2. [Enable Azure Defender in Security Center](../security-center/enable-azure-defender.md). Security Center assesses the VMs for potential security issues. It also provides [security recommendations](../security-center/security-center-recommendations.md) in the Overview tab.
+
+3. [Define security policies](../security-center/tutorial-security-policy.md) in Azure Security Center.
+
+For more information, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
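Step 2 above (enabling Azure Defender) can also be done from the Azure CLI at the subscription level. A minimal sketch; the portal steps in the linked article remain the authoritative procedure:

```shell
# Turn on Azure Defender for servers for the current subscription.
az security pricing create --name VirtualMachines --tier Standard

# Confirm the current pricing tier.
az security pricing show --name VirtualMachines --query pricingTier
```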
+++ ## Onboard VMs to Azure Arc enabled servers
-Azure Arc extends Azure management to any infrastructure, including Azure VMware Solution and on-premises. [Azure Arc enabled servers](../azure-arc/servers/overview.md) lets you manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or another cloud provider.
+Extend Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms.
-For information on enabling Azure Arc enabled servers for multiple Windows or Linux VMs, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md).
+- For information on enabling Azure Arc enabled servers for multiple Windows or Linux VMs, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md).
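For a single VM, onboarding is done with the Connected Machine agent on the machine itself. A hedged sketch, assuming the agent (`azcmagent`) is already installed on the Azure VMware Solution VM and that the resource group, tenant, and subscription values are placeholders; at-scale onboarding with a service principal is covered in the linked article:

```shell
# Run on the VM after installing the Azure Connected Machine agent.
# This prompts for an interactive Azure sign-in.
azcmagent connect \
  --resource-group "avs-rg" \
  --tenant-id "<tenant-id>" \
  --location "westus2" \
  --subscription-id "<subscription-id>"
```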
-## Onboard hybrid Kubernetes clusters with Arc enabled Kubernetes
-[Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md) lets you attach a Kubernetes cluster hosted in your Azure VMware Solution environment.
-For more information, see [Create an Azure Arc-enabled onboarding Service Principal](../azure-arc/kubernetes/create-onboarding-service-principal.md).
+## Onboard hybrid Kubernetes clusters with Arc enabled Kubernetes
-## Deploy the Log Analytics agent
-You can monitor Azure VMware Solution VMs through the Log Analytics agent. Machines connected to the Log Analytics workspace use the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.
+Attach a Kubernetes cluster hosted in your Azure VMware Solution environment using Azure Arc enabled Kubernetes.
-Deploy the Log Analytics agent by using [Azure Arc enabled servers VM extension support](../azure-arc/servers/manage-vm-extensions.md).
+- For more information, see [Create an Azure Arc-enabled onboarding Service Principal](../azure-arc/kubernetes/create-onboarding-service-principal.md).
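Attaching the cluster can be sketched with the Azure CLI. This assumes the `connectedk8s` CLI extension and a local kubeconfig context pointing at the cluster running in your Azure VMware Solution environment; the cluster and resource group names are placeholders:

```shell
# Install the Arc enabled Kubernetes CLI extension.
az extension add --name connectedk8s

# Attach the cluster from the current kubeconfig context.
az connectedk8s connect \
  --name avs-k8s-cluster \
  --resource-group avs-rg
```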
-## Enable Azure Monitor
-[Azure Monitor](../azure-monitor/overview.md) is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Some of the added benefits of Azure Monitor include:
+## Deploy the Log Analytics agent
+
+Monitor Azure VMware Solution VMs through the Log Analytics agent. Machines connected to the Log Analytics workspace use the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.
- - Seamless monitoring
+Deploy the Log Analytics agent by using [Azure Arc enabled servers VM extension support](../azure-arc/servers/manage-vm-extensions.md).
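As a sketch of that VM extension deployment for an Arc enabled Linux server: the machine, workspace, and resource group names below are placeholders, the `connectedmachine` CLI extension is assumed, and the exact extension parameters should be verified against the linked VM extensions article.

```shell
# Gather the workspace ID and key the agent needs.
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group avs-rg --workspace-name avs-law \
  --query customerId --output tsv)
WORKSPACE_KEY=$(az monitor log-analytics workspace get-shared-keys \
  --resource-group avs-rg --workspace-name avs-law \
  --query primarySharedKey --output tsv)

# Deploy the Log Analytics agent extension to the Arc enabled server.
az connectedmachine extension create \
  --machine-name avs-vm-01 \
  --resource-group avs-rg \
  --name OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --type OmsAgentForLinux \
  --settings "{\"workspaceId\":\"$WORKSPACE_ID\"}" \
  --protected-settings "{\"workspaceKey\":\"$WORKSPACE_KEY\"}"
```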
- - Better infrastructure visibility
- - Instant notifications
- - Automatic resolution
- - Cost efficiency
+## Enable Azure Monitor
-You can collect data from different sources to monitor and analyze. For more information, see [Sources of monitoring data for Azure Monitor](../azure-monitor/agents/data-sources.md). You can also collect different types of data for analysis, visualization, and alerting. For more information, see [Azure Monitor data platform](../azure-monitor/data-platform.md).
+You can collect data from different [sources to monitor and analyze](../azure-monitor/agents/data-sources.md) and different types of [data for analysis, visualization, and alerting](../azure-monitor/data-platform.md). You can also create alert rules to identify issues in your environment, like high use of resources, missing patches, low disk space, and heartbeat of your VMs. You can set an automated response to detected events by sending an alert to IT Service Management (ITSM) tools. Alert detection notifications can also be sent via email.
-You can monitor guest operating system performance and discover and map application dependencies for Azure VMware Solution or on-premises VMs. You can also create alert rules to identify issues in your environment, like high use of resources, missing patches, low disk space, and heartbeat of your VMs. You can set an automated response to detected events by sending an alert to IT Service Management (ITSM) tools. Alert detection notification can also be sent via email.
+Monitor guest operating system performance to discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions.
1. [Design your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md)
You can monitor guest operating system performance and discover and map applicat
1. [Configure Log Analytics workspace for Azure Monitor for VMs](../azure-monitor/vm/vminsights-configure-workspace.md).
1. Create alert rules to identify issues in your environment:
   - [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md).
   - [Create, view, and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md).
   - [Action rules](../azure-monitor/alerts/alerts-action-rules.md) to set automated actions and notifications.
   - [Connect Azure to ITSM tools using IT Service Management Connector](../azure-monitor/alerts/itsmc-overview.md).
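The alert-rule step above can be sketched with the Azure CLI. A hedged example: the action group, alert names, and email address are placeholders, `<vm-resource-id>` stays unfilled, and `--action` accepting the action group name assumes it lives in the same resource group.

```shell
# Create an action group that emails the operations team.
az monitor action-group create \
  --resource-group avs-rg \
  --name ops-email \
  --action email ops ops@contoso.com

# Alert when average CPU on a VM exceeds 90% over a 5-minute window.
az monitor metrics alert create \
  --resource-group avs-rg \
  --name high-cpu \
  --scopes "<vm-resource-id>" \
  --condition "avg Percentage CPU > 90" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action ops-email
```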
+## Next steps
+
+Now that you've covered integrating Azure native services in your Azure VMware Solution private cloud, you may want to learn more about [integrating Azure Security Center with Azure VMware Solution](azure-security-integration.md).
cognitive-services Get Speech Devices Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-speech-devices-sdk.md
The Speech Devices SDK is a pretuned library designed to work with purpose-built development kits, and varying microphone array configurations.
-## Choose a Development kit
+## Choose a Development Kit
|Devices|Specification|Description|Scenarios| |--|--|--|--|
+|[Azure Percept Audio DK](/azure/azure-percept/overview-azure-percept-audio)<br>[Setup](/azure/azure-percept/quickstart-percept-dk-unboxing) / [Quickstart](/azure/azure-percept/quickstart-percept-audio-setup)![Azure Percept Audio DK](./media/speech-devices-sdk/azure-percept-audio.png)|4 mic linear array with XMOS Codec. <br> Linux| An accessory device that adds speech artificial intelligence (AI) capabilities to your edge device. It contains a pre-configured audio processor and a four-microphone linear array that enable the use of voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services. This device ships with Azure Percept DK, Azure Percept Studio, and other Azure edge management services to smoothly integrate with our most powerful and compact all-in-one speech devices SDK.|Conversation Transcription, Robotics, Smart Building, Manufacturing, Agriculture|
+|[Azure Kinect DK](https://azure.microsoft.com/services/kinect-dk/)<br>[Setup](../../kinect-dk/set-up-azure-kinect-dk.md) / [Quickstart](./speech-devices-sdk-quickstart.md?pivots=platform-windows%253fpivots%253dplatform-windows)![Azure Kinect DK](medi?pivots=platform-linux%253fpivots%253dplatform-linux)|A spatial computing developer kit with advanced artificial intelligence (AI) sensors for building sophisticated computer vision and speech models. It combines a best-in-class spatial microphone array and depth camera with a video camera and orientation sensor, all in one small device with multiple modes, options, and SDKs that seamlessly integrate with Azure Cognitive Services.|Conversation Transcription, Robotics, Smart Building|
|[Urbetter Dev Kit](http://www.urbetter.com/products_56/278.html)![URbetter DDK](media/speech-devices-sdk/device-urbetter.jpg)|7 Mic Array, ARM SOC, WIFI, Ethernet, HDMI, USB Camera. <br>Linux|An industry level Speech Devices SDK that adapts Microsoft Mic array and supports extended I/O such as HDMI/Ethernet and more USB peripherals <br> [Contact Urbetter](http://www.urbetter.com/products_56/278.html)|Conversation Transcription, Education, Hospital, Robots, OTT Box, Voice Agent, Drive Thru| |[Roobo Smart Audio Dev Kit](http://ddk.roobo.com)<br>[Setup](speech-devices-sdk-roobo-v1.md) / [Quickstart](./speech-devices-sdk-quickstart.md?pivots=platform-android%253fpivots%253dplatform-android)![Roobo Smart Audio Dev Kit](medi?pivots=platform-android%253fpivots%253dplatform-android)|The first Speech Devices SDK to adapt Microsoft Mic Array and front processing SDK, for developing high-quality transcription and speech scenarios|Conversation Transcription, Smart Speaker, Voice Agent, Wearable|
-|[Azure Kinect DK](https://azure.microsoft.com/services/kinect-dk/)<br>[Setup](../../kinect-dk/set-up-azure-kinect-dk.md) / [Quickstart](./speech-devices-sdk-quickstart.md?pivots=platform-windows%253fpivots%253dplatform-windows)![Azure Kinect DK](medi?pivots=platform-linux%253fpivots%253dplatform-linux)|A developer kit with advanced artificial intelligence (AI) sensors for building sophisticated computer vision and speech models. It combines a best-in-class spatial microphone array and depth camera with a video camera and orientation sensorΓÇöall in one small device with multiple modes, options, and SDKs to accommodate a range of compute types.|Conversation Transcription, Robotics, Smart Building|
|Roobo Smart Audio Dev Kit 2<br>[Setup](speech-devices-sdk-roobo-v2.md)<br>![Roobo Smart Audio Dev Kit 2](media/speech-devices-sdk/device-roobo-v2.jpg)|7 Mic Array, ARM SOC, WIFI, Bluetooth, IO. <br>Linux|The 2nd generation Speech Devices SDK that provides alternative OS and more features in a cost effective reference design.|Conversation Transcription, Smart Speaker, Voice Agent, Wearable|
Download the [Speech Devices SDK](./speech-devices-sdk.md).
## Next steps > [!div class="nextstepaction"]
-> [Get started with the Speech Devices SDK](./speech-devices-sdk-quickstart.md?pivots=platform-android)
+> [Get started with the Speech Devices SDK](./speech-devices-sdk-quickstart.md?pivots=platform-android)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Maltese (Malta) | `mt-MT` | Text | | | | Marathi (India) | `mr-IN` | Text | | | | Norwegian (Bokmål, Norway) | `nb-NO` | Text | Yes | |
+| Persian (Iran) | `fa-IR` | Text | | |
| Polish (Poland) | `pl-PL` | Text | Yes | | | Portuguese (Brazil) | `pt-BR` | Audio (20190620, 20201015)<br>Text<br>Pronunciation| Yes | | | Portuguese (Portugal) | `pt-PT` | Text<br>Pronunciation | Yes | |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Spanish (Uruguay) | `es-UY` | Text<br>Pronunciation | | | | Spanish (USA) | `es-US` | Text<br>Pronunciation | | | | Spanish (Venezuela) | `es-VE` | Text<br>Pronunciation | | |
+| Swahili (Kenya) | `sw-KE` | Text<br>Pronunciation | | |
| Swedish (Sweden) | `sv-SE` | Text | Yes | | | Tamil (India) | `ta-IN` | Text | | | | Telugu (India) | `te-IN` | Text | | |
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
The following list presents the set of features which are currently available in
| | Get notified when participants are actively typing a message in a chat thread | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ | | | Get all messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Send Unicode emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Add metadata to chat messages | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ✔️ |
-| | Add display name to typing indicator notification | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ✔️ |
+| | Add metadata to chat messages | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | ✔️ |
+| | Add display name to typing indicator notification | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | ✔️ |
|Real-time notifications (enabled by proprietary signaling package**)| Chat clients can subscribe to get real-time updates for incoming messages and other operations occurring in a chat thread. To see a list of supported updates for real-time notifications, see [Chat concepts](concepts.md#real-time-notifications) | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ | | Integration with Azure Event Grid | Use the chat events available in Azure Event Grid to plug custom notification services or post that event to a webhook to execute business logic like updating CRM records after a chat is finished | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | Reporting </br>(This info is available under Monitoring tab for your Communication Services resource on Azure portal) | Understand API traffic from your chat app by monitoring the published metrics in Azure Metrics Explorer and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
Title: Azure Communication Services Call Automation API overview description: Provides an overview of the Call Automation feature and APIs.-+ -+ Last updated 06/30/2021
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-recording.md
Title: Azure Communication Services Call Recording overview description: Provides an overview of the Call Recording feature and APIs.-+ -+ Last updated 06/30/2021
communication-services Call Automation Api Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-automation-api-sample.md
Title: Azure Communication Services Call Automation API quickstart description: Provides a quickstart sample for the Call Automation APIs.-+ -+ Last updated 06/30/2021
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
Title: Azure Communication Services Call Recording API quickstart
description: Provides a quickstart sample for the Call Recording APIs. -+ -+ Last updated 06/30/2021
communication-services Download Recording File Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/download-recording-file-sample.md
Title: Record and download calls with Event Grid - An Azure Communication Services quickstart description: In this quickstart, you'll learn how to record and download calls using Event Grid.-+ -+ Last updated 06/30/2021
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/high-availability.md
Multi-region accounts will experience different behaviors depending on the follo
| Write regions | Automatic failover | What to expect | What to do | | -- | -- | -- | -- |
-| Single write region | Not enabled | In case of outage in a read region, all clients will redirect to other regions. No read or write availability loss. No data loss. <p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistenvy level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Enabled | In case of outage in a read region, all clients will redirect to other regions. No read or write availability loss. No data loss. <p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistenvy level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Single write region | Not enabled | In case of outage in a read region, all clients will redirect to other regions. No read or write availability loss. No data loss. <p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
+| Single write region | Enabled | In case of outage in a read region, all clients will redirect to other regions. No read or write availability loss. No data loss. <p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
| Multiple write regions | Not applicable | No read or write availability loss. <p/> Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15 minutes. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |

## Next steps
Next you can read the following articles:
* [How to configure your Cosmos account with multiple write regions](how-to-multi-master.md)
-* [SDK behavior on multi-regional environments](troubleshoot-sdk-availability.md)
+* [SDK behavior on multi-regional environments](troubleshoot-sdk-availability.md)
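The table's guidance on automatic failover for single-write-region accounts maps to account-level settings you can sketch with the Azure CLI. The account and region names below are placeholders; priority 0 designates the write region:

```shell
# Enable automatic failover so a write-region outage promotes a
# read region without manual intervention.
az cosmosdb update \
  --resource-group contoso-rg \
  --name contoso-cosmos \
  --enable-automatic-failover true

# Set the preferred failover order (0 = current write region).
az cosmosdb failover-priority-change \
  --resource-group contoso-rg \
  --name contoso-cosmos \
  --failover-policies "West US 2=0" "East US 2=1"
```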
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
Previously updated : 12/3/2020 Last updated : 08/10/2021 # Connect Data Factory to Azure Purview (Preview)
-This article will explain how to connect Data Factory to Azure Purview and how to report data lineage of Azure Data Factory activities Copy data, Data flow and Execute SSIS package.
+[Azure Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. You can connect your data factory to Azure Purview. That connection allows you to use Azure Purview for capturing lineage data, as well as to discover and explore Azure Purview assets.
## Connect Data Factory to Azure Purview
-Azure Purview is a new cloud service for use by data users centrally manage data governance across their data estate spanning cloud and on-prem environments. You can connect your Data Factory to Azure Purview and the connection allows you to use Azure Purview for capturing lineage data of Copy, Data flow and Execute SSIS package.
-You have two ways to connect data factory to Azure Purview:
-### Register Azure Purview account to Data Factory
-1. In the ADF portal, go to **Manage** -> **Azure Purview**. Select **Connect to a Purview account**.
-2. You can choose **From Azure subscription** or **Enter manually**. **From Azure subscription**, you can select the account that you have access to.
-3. Once connected, you should be able to see the name of the Purview account in the tab **Purview account**.
-4. You can use the Search bar at the top center of Azure Data Factory portal to search for data.
+You have two options to connect data factory to Azure Purview:
-If you see warning in Azure Data Factory portal after you register Azure Purview account to Data Factory, follow below steps to fix the issue:
+- [Connect to Azure Purview account in Data Factory](#connect-to-azure-purview-account-in-data-factory)
+- [Register Data Factory in Azure Purview](#register-data-factory-in-azure-purview)
+### Connect to Azure Purview account in Data Factory
+
+To establish the connection, you need to have **Owner** or **Contributor** role on your data factory.
-1. Go to Azure portal and find your data factory. Choose section "Tags" and see if there is a tag named **catalogUri**. If not, please disconnect and reconnect the Azure Purview account in the ADF portal.
+1. In the ADF authoring UI, go to **Manage** -> **Azure Purview**, and select **Connect to a Purview account**.
+ :::image type="content" source="./media/data-factory-purview/register-purview-account.png" alt-text="Screenshot for registering a Purview account.":::
-2. Check if the permission is granted for registering an Azure Purview account to Data Factory. See [How to connect Azure Data Factory and Azure Purview](../purview/how-to-link-azure-data-factory.md#create-new-data-factory-connection)
+2. Choose **From Azure subscription** or **Enter manually**. **From Azure subscription**, you can select the account that you have access to.
+
+3. Once connected, you can see the name of the Purview account in the tab **Purview account**.
+
+When connecting a data factory to Purview, the ADF UI also tries to grant the data factory's managed identity the **Purview Data Curator** role on your Purview account. The managed identity is used to authenticate lineage push operations from the data factory to Purview. If you have the **Owner** or **User Access Administrator** role on the Purview account, this operation is done automatically. If not, you'll see a warning like the following:
++
+To fix the issue, go to the Azure portal -> your Purview account -> **Access control (IAM)**, and check whether the **Purview Data Curator** role is granted to the data factory's managed identity. If not, manually add the role assignment.
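The check described above can be sketched as follows. This is an illustrative sketch only, not an SDK call: the role-assignment shape mirrors `az role assignment list` output, and the principal IDs are invented for the example.

```python
# Illustrative sketch: verify the data factory's managed identity holds the
# "Purview Data Curator" role, given a list of role assignments shaped like
# `az role assignment list` output. Principal IDs here are made up.
def has_data_curator_role(assignments, principal_id):
    """Return True if the given principal holds the Purview Data Curator role."""
    return any(
        a["roleDefinitionName"] == "Purview Data Curator"
        and a["principalId"] == principal_id
        for a in assignments
    )

assignments = [
    {"roleDefinitionName": "Reader", "principalId": "1111"},
    {"roleDefinitionName": "Purview Data Curator", "principalId": "2222"},
]

print(has_data_curator_role(assignments, "2222"))  # True: lineage push can authenticate
print(has_data_curator_role(assignments, "3333"))  # False: add the role assignment
```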
### Register Data Factory in Azure Purview+ For how to register Data Factory in Azure Purview, see [How to connect Azure Data Factory and Azure Purview](../purview/how-to-link-azure-data-factory.md).
-## Report Lineage data to Azure Purview
-When customers run Copy, Data flow or Execute SSIS package activity in Azure Data Factory, customers could get the dependency relationship and have a high-level overview of whole workflow process among data sources and destination.
-For how to collect lineage from Azure Data Factory, see [data factory lineage](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities).
+## Report lineage data to Azure Purview
+
+After you connect the data factory to a Purview account, when you run a Copy, Data Flow, or Execute SSIS Package activity, you get the lineage between the datasets created by your data processes, along with a high-level overview of the whole workflow across data sources and destinations. For the supported capabilities in detail, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities). For an end-to-end walkthrough, see [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md).
+
+## Discover and explore data using Purview
+
+After you connect the data factory to a Purview account, you can use the search bar at the top center of the Azure Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md).
## Next steps
-[Catalog lineage user guide](../purview/catalog-lineage-user-guide.md)
-[Tutorial: Push Data Factory lineage data to Azure Purview](turorial-push-lineage-to-purview.md)
+[Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
+
+[Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md)
+
+[Azure Purview Data Catalog lineage user guide](../purview/catalog-lineage-user-guide.md)
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 08/06/2021 Last updated : 08/15/2021 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
For Copy activity, this Azure Synapse Analytics connector supports these functio
- Copy data by using SQL authentication and Azure Active Directory (Azure AD) Application token authentication with a service principal or managed identities for Azure resources. - As a source, retrieve data by using a SQL query or stored procedure. You can also choose to parallel copy from an Azure Synapse Analytics source, see the [Parallel copy from Azure Synapse Analytics](#parallel-copy-from-azure-synapse-analytics) section for details.-- As a sink, load data by using [PolyBase](#use-polybase-to-load-data-into-azure-synapse-analytics) or [COPY statement](#use-copy-statement) or bulk insert. We recommend PolyBase or COPY statement for better copy performance. The connector also supports automatically creating destination table if not exists based on the source schema.
+- As a sink, load data by using the [COPY statement](#use-copy-statement), [PolyBase](#use-polybase-to-load-data-into-azure-synapse-analytics), or bulk insert. We recommend the COPY statement or PolyBase for better copy performance. The connector also supports automatically creating the destination table, with DISTRIBUTION = ROUND_ROBIN, based on the source schema, if it doesn't already exist.
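As a sketch of how a Copy activity sink definition opts into each load method: the `SqlDWSink` type and the `allowCopyCommand`/`allowPolyBase` properties follow this connector's JSON conventions, but treat the exact payload as illustrative rather than a complete pipeline definition.

```python
# Builds an (illustrative) Copy activity sink definition for Azure Synapse
# Analytics. "allowCopyCommand" opts into the COPY statement, "allowPolyBase"
# into PolyBase; with neither set, the service falls back to bulk insert.
def synapse_sink(method="copy"):
    sink = {"type": "SqlDWSink"}
    if method == "copy":
        sink["allowCopyCommand"] = True
    elif method == "polybase":
        sink["allowPolyBase"] = True
    return sink

print(synapse_sink("copy"))  # {'type': 'SqlDWSink', 'allowCopyCommand': True}
print(synapse_sink("bulk"))  # {'type': 'SqlDWSink'}
```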
> [!IMPORTANT] > If you copy data by using an Azure Integration Runtime, configure a [server-level firewall rule](../azure-sql/database/firewall-configure.md) so that Azure services can access the [logical SQL server](../azure-sql/database/logical-servers.md).
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-discover-explore-purview-data.md
Previously updated : 01/15/2021 Last updated : 08/10/2021 # Discover and explore data in ADF using Purview
You can perform the following tasks in ADF:
- Connect those data to your data factory with linked services or datasets ## Prerequisites + - [Azure Purview account](../purview/create-catalog-portal.md) - [Data Factory](./quickstart-create-data-factory-portal.md) - [Connect an Azure Purview Account into Data Factory](./connect-data-factory-to-azure-purview.md)
You can directly create Linked Service, Dataset, or dataflow over the data you s
##  Next steps -- [Register and scan Azure Data Factory assets in Azure Purview](../purview/register-scan-azure-synapse-analytics.md)-- [How to Search Data in Azure Purview Data Catalog](../purview/how-to-search-catalog.md)
+[Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
+
+[Connect an Azure Purview Account into Data Factory](connect-data-factory-to-azure-purview.md)
+
+[How to Search Data in Azure Purview Data Catalog](../purview/how-to-search-catalog.md)
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
This article shows you how to use the Data Factory copy data tool to copy data f
1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**.
- ![Data Factory selection in the New pane](./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png)
+ ![Screenshot showing the Data Factory selection in the New pane.](./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png)
2. On the **New data factory** page, provide values for the fields that are shown in the following image:
- ![New Data factory page](./media/load-azure-data-lake-storage-gen2-from-gen1/new-azure-data-factory.png)
+ ![Screenshot showing the New Data factory page.](./media/load-azure-data-lake-storage-gen2-from-gen1/new-azure-data-factory.png)
* **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name \"LoadADLSDemo\" is not available," enter a different name for the data factory. For example, use the name _**yourname**_**ADFTutorialDataFactory**. Create the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md). * **Subscription**: Select your Azure subscription in which to create the data factory.
This article shows you how to use the Data Factory copy data tool to copy data f
4. Select **Azure Data Lake Storage Gen1** from the connector gallery, and select **Continue**.
- ![Source data store Azure Data Lake Storage Gen1 page](./media/load-azure-data-lake-storage-gen2-from-gen1/source-data-store-page-adls-gen1.png)
+ ![Screenshot showing the page of selecting the Azure Data Lake Storage Gen1 connection.](./media/load-azure-data-lake-storage-gen2-from-gen1/source-data-store-page-adls-gen1.png)
5. On the **New connection (Azure Data Lake Storage Gen1)** page, follow these steps: 1. Select your Data Lake Storage Gen1 for the account name, and specify or validate the **Tenant**.
This article shows you how to use the Data Factory copy data tool to copy data f
> [!IMPORTANT] > In this walk-through, you use a managed identity for Azure resources to authenticate your Azure Data Lake Storage Gen1. To grant the managed identity the proper permissions in Azure Data Lake Storage Gen1, follow [these instructions](connector-azure-data-lake-store.md#managed-identity).
- ![Specify Azure Data Lake Storage Gen1 account](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-adls-gen1-account.png)
+ ![Screenshot showing the configuration of the Azure Data Lake Storage Gen1 connection.](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-adls-gen1-account.png)
6. On the **Source data store** page, complete the following steps. 1. Select the newly created connection in the **Connection** section.
This article shows you how to use the Data Factory copy data tool to copy data f
7. On the **Destination data store** page, select **+ New connection** > **Azure Data Lake Storage Gen2** > **Continue**.
- ![Destination data store page](./media/load-azure-data-lake-storage-gen2-from-gen1/destination-data-store-page-adls-gen2.png)
+ ![Screenshot showing the page of selecting the Azure Data Lake Storage Gen2 connection.](./media/load-azure-data-lake-storage-gen2-from-gen1/destination-data-store-page-adls-gen2.png)
8. On the **New connection (Azure Data Lake Storage Gen2)** page, follow these steps: 1. Select your Data Lake Storage Gen2 capable account from the **Storage account name** drop-down list. 1. Select **Create** to create the connection.
- ![Specify Azure Data Lake Storage Gen2 account](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-adls-gen2-account.png)
+ ![Screenshot showing the configuration of the Azure Data Lake Storage Gen2 connection.](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-adls-gen2-account.png)
9. On the **Destination data store** page, complete the following steps. 1. Select the newly created connection in the **Connection** block.
This article shows you how to use the Data Factory copy data tool to copy data f
11. On the **Summary** page, review the settings, and select **Next**.
- ![Summary page](./media/load-azure-data-lake-storage-gen2-from-gen1/copy-summary.png)
+ ![Screenshot showing the Summary page.](./media/load-azure-data-lake-storage-gen2-from-gen1/copy-summary.png)
12. On the **Deployment page**, select **Monitor** to monitor the pipeline.
- ![Deployment page](./media/load-azure-data-lake-storage-gen2-from-gen1/deployment-page.png)
+ ![Screenshot showing the Deployment page.](./media/load-azure-data-lake-storage-gen2-from-gen1/deployment-page.png)
13. Notice that the **Monitor** tab on the left is automatically selected. The **Pipeline name** column includes links to view activity run details and to rerun the pipeline.
- ![Monitor pipeline runs](./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-pipeline-runs.png)
+ ![Screenshot showing the page of monitoring pipeline runs.](./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-pipeline-runs.png)
14. To view activity runs that are associated with the pipeline run, select the link in the **Pipeline name** column. There's only one activity (copy activity) in the pipeline, so you see only one entry. To switch back to the pipeline runs view, select the **All pipeline runs** link in the breadcrumb menu at the top. Select **Refresh** to refresh the list.
- ![Monitor activity runs](./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-activity-runs.png)
+ ![Screenshot showing the page of monitoring activity runs.](./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-activity-runs.png)
15. To monitor the execution details for each copy activity, select the **Details** link (eyeglasses image) under the **Activity name** column in the activity monitoring view. You can monitor details like the volume of data copied from the source to the sink, data throughput, execution steps with corresponding duration, and used configurations.
data-factory Turorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/turorial-push-lineage-to-purview.md
- Title: Push Data Factory lineage data to Azure Purview
-description: Learn about how to push Data Factory lineage data to Azure Purview
------ Previously updated : 12/3/2020--
-# Push Data Factory lineage data to Azure Purview (Preview)
--
-In this tutorial, you'll use the Data Factory user interface (UI) to create a pipeline that run activities and report lineage data to Azure Purview account. Then you can view all the lineage information in your Azure Purview account.
-
-## Prerequisites
-* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-* **Azure Data Factory**. If you don't have an Azure Data Factory, see [Create an Azure Data Factory](./quickstart-create-data-factory-portal.md).
-* **Azure Purview account**. The Purview account captures all lineage data generated by data factory. If you don't have an Azure Purview account, see [Create an Azure Purview](../purview/create-catalog-portal.md).
--
-## Run Data Factory activities and push lineage data to Azure Purview
-### Step 1: Connect Data Factory to your Purview account
-Log in to your Purview account in Purview portal, go to **Management Center**. Choose **Data Factory** in **External connections** and click **New** button to create a connection to a new Data Factory.
-[![Screenshot for creating a connection between Data Factory and Purview account](./media/data-factory-purview/connect-adf-to-purview.png) ](./media/data-factory-purview/connect-adf-to-purview.png#lightbox)
-
-In the popup page, you can choose the Data Factory you want to connect to this Purview account.
-![Screenshot for a new connection](./media/data-factory-purview/new-adf-purview-connection.png)
-
-You can check the status after creating the connection. **Connected** means the connection between Data Factory and this Purview is successfully connected.
-> [!NOTE]
-> You need to be assigned any of below roles in the Purview account and Data Factory **Contributor** role to create the connection between Data Factory and Azure Purview.
-> - Owner
-> - User Access Administrator
-
-### Step 2: Run Copy and Dataflow activities in Data Factory
-You can create pipelines, Copy activities and Dataflow activities in Data Factory. You don't need any additional configuration for lineage data capture. The lineage data will automatically be captured during the activities execution.
-![Screenshot of Copy and Dataflow activity](./media/data-factory-purview/adf-activities-for-lineage.png)
-If you don't know how to create Copy and Dataflow activities, see
-[Copy data from Azure Blob storage to a database in Azure SQL Database by using Azure Data Factory](./tutorial-copy-data-portal.md) and
-[Transform data using mapping data flows](./tutorial-data-flow.md).
-
-### Step 3: Run Execute SSIS Package activities in Data Factory
-You can create pipelines, Execute SSIS Package activities in Data Factory. You don't need any additional configuration for lineage data capture. The lineage data will automatically be captured during the activities execution.
-![Screenshot of Execute SSIS Package activity](./media/data-factory-purview/ssis-activities-for-lineage.png)
-
-If you don't know how to create Execute SSIS Package activities, see
-[Run SSIS Packages in Azure](./tutorial-deploy-ssis-packages-azure.md).
-
-### Step 4: View lineage information in your Purview account
-Go back to your Purview Account. In the home page, select **Browse assets**. Choose the asset you want, and click Lineage tab. You will see all the lineage information.
-[![Screenshot of Purview account](./media/data-factory-purview/view-dataset.png) ](./media/data-factory-purview/view-dataset.png#lightbox)
-
-You can see lineage data for Copy activity.
-[![Screenshot of Copy lineage](./media/data-factory-purview/copy-lineage.png) ](./media/data-factory-purview/copy-lineage.png#lightbox)
-
-You also can see lineage data for Dataflow activity.
-[![Screenshot of Dataflow lineage](./media/data-factory-purview/dataflow-lineage.png) ](./media/data-factory-purview/dataflow-lineage.png#lightbox)
-
-> [!NOTE]
-> For the lineage of Dataflow activity, we only support source and sink. The lineage for Dataflow transformation is not supported yet.
-
-You also can see lineage data for Execute SSIS Package activity.
-[![Screenshot of SSIS lineage](./media/data-factory-purview/ssis-lineage.png) ](./media/data-factory-purview/ssis-lineage.png#lightbox)
-
-> [!NOTE]
-> For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation is not supported yet.
-
-## Next steps
-[Catalog lineage user guide](../purview/catalog-lineage-user-guide.md)
-
-[Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-push-lineage-to-purview.md
+
+ Title: Push Data Factory lineage data to Azure Purview
+description: Learn how to push Data Factory lineage data to Azure Purview
++++++ Last updated : 08/10/2021++
+# Push Data Factory lineage data to Azure Purview (Preview)
++
+In this tutorial, you'll use the Data Factory user interface (UI) to create a pipeline that runs activities and reports lineage data to an Azure Purview account. You can then view all the lineage information in your Azure Purview account.
+
+Currently, lineage is supported for Copy, Data Flow, and Execute SSIS activities. For details on the supported capabilities, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities).
+
+## Prerequisites
+
+* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+* **Azure Data Factory**. If you don't have an Azure Data Factory, see [Create an Azure Data Factory](./quickstart-create-data-factory-portal.md).
+* **Azure Purview account**. The Purview account captures all lineage data generated by data factory. If you don't have an Azure Purview account, see [Create an Azure Purview](../purview/create-catalog-portal.md).
+
+## Run pipeline and push lineage data to Azure Purview
+
+### Step 1: Connect Data Factory to your Purview account
+
+You can establish the connection between Data Factory and Purview account by following the steps in [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md).
+
+### Step 2: Run pipeline in Data Factory
+
+You can create pipelines with Copy, Data Flow, and Execute SSIS Package activities in Data Factory. You don't need any additional configuration for lineage data capture; lineage data is automatically captured during activity execution.
+++
+To learn how to create Copy, Data Flow, and Execute SSIS Package activities, see
+[Copy data from Azure Blob storage to a database in Azure SQL Database by using Azure Data Factory](./tutorial-copy-data-portal.md), [Transform data using mapping data flows](./tutorial-data-flow.md), and [Run SSIS Packages in Azure](./tutorial-deploy-ssis-packages-azure.md).
+
+### Step 3: Monitor lineage reporting status
+
+After you run the pipeline, in the [pipeline monitoring view](monitor-visually.md#monitor-pipeline-runs), you can check the lineage reporting status by selecting the **Lineage status** button. The same information is also available in the activity output JSON, under the `reportLineageToPurview` section.
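For example, the status could be read from the activity output like this. The output shape below is a hypothetical sample for illustration, not a captured payload.

```python
import json

# Hypothetical activity-run output: lineage status appears in the activity
# output JSON under "reportLineageToPurview". The surrounding fields and the
# section's exact shape are assumptions for this sketch.
activity_output = json.loads("""
{
    "dataRead": 1024,
    "dataWritten": 1024,
    "reportLineageToPurview": {"status": "Succeeded"}
}
""")

status = activity_output.get("reportLineageToPurview", {}).get("status", "NotReported")
print(status)  # Succeeded
```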
++
+### Step 4: View lineage information in your Purview account
+
+In the Purview UI, you can browse assets and choose the "Azure Data Factory" type. You can also search the Data Catalog using keywords.
++
+On the activity asset, select the **Lineage** tab to see all the lineage information.
+
+- Copy activity:
+
+ :::image type="content" source="./media/data-factory-purview/copy-lineage.png" alt-text="Screenshot of the Copy activity lineage in Purview." lightbox="./media/data-factory-purview/copy-lineage.png":::
+
+- Data Flow activity:
+
+ :::image type="content" source="./media/data-factory-purview/dataflow-lineage.png" alt-text="Screenshot of the Data Flow lineage in Purview." lightbox="./media/data-factory-purview/dataflow-lineage.png":::
+
+ > [!NOTE]
+ > For the lineage of Dataflow activity, we only support source and sink. The lineage for Dataflow transformation is not supported yet.
+
+- Execute SSIS Package activity:
+
+ :::image type="content" source="./media/data-factory-purview/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Purview." lightbox="./media/data-factory-purview/ssis-lineage.png":::
+
+ > [!NOTE]
+ > For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation is not supported yet.
+
+## Next steps
+
+[Catalog lineage user guide](../purview/catalog-lineage-user-guide.md)
+
+[Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
databox Data Box Heavy Migrate Spo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-heavy-migrate-spo.md
Title: Use Azure Data Box Heavy to move file share content to SharePoint Online
-description: Use this tutorial to learn how to migrate file share content to Share Point Online using your Azure Data Box Heavy
+description: Use this tutorial to learn how to migrate file share content to SharePoint Online using your Azure Data Box Heavy
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/getting-started.md
The following table describes user access permissions to Azure Defender for IoT
| Permission | Security reader | Security administrator | Subscription contributor | Subscription owner |
|--|--|--|--|--|
| View details and access software, activation files and threat intelligence packages | ✓ | ✓ | ✓ | ✓ |
-| Onboard a sensor | | ✓ | ✓ | ✓ |
-| Update pricing | | | ✓ | ✓ |
-| Recover password | ✓ | ✓ | ✓ | ✓ |
+| Onboard sensors | | ✓ | ✓ | ✓ |
+| Onboard subscriptions and update committed devices | | | ✓ | ✓ |
+| Recover passwords | ✓ | ✓ | ✓ | ✓ |
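The permission table above can be encoded as data for a quick programmatic lookup. This is a sketch: the role and permission names are taken from the table, but the structure itself is illustrative.

```python
# Encodes the permission table as data so a role's capabilities can be queried.
ALL_ROLES = {"Security reader", "Security administrator",
             "Subscription contributor", "Subscription owner"}

PERMISSIONS = {
    "View details and access software": ALL_ROLES,
    "Onboard sensors": ALL_ROLES - {"Security reader"},
    "Onboard subscriptions and update committed devices":
        {"Subscription contributor", "Subscription owner"},
    "Recover passwords": ALL_ROLES,
}

def can(role, permission):
    """Return True if the given role grants the given permission."""
    return role in PERMISSIONS[permission]

print(can("Security reader", "Onboard sensors"))       # False
print(can("Subscription owner", "Recover passwords"))  # True
```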
## Identify the solution infrastructure
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Title: Manage subscriptions description: Subscriptions consist of managed committed devices and can be onboarded or offboarded as needed. Previously updated : 3/30/2021 Last updated : 08/10/2021
-# Manage a subscription
+# Manage Defender for IoT subscriptions
-Subscriptions are managed on a monthly basis. When you onboard a subscription, you will be billed for that subscription until the end of the month. Similarly if you when you offboard a subscription, you will be billed for the remainder of the month in which you offboarded that subscription.
+## About subscriptions
-## Onboard a subscription
+Your Defender for IoT deployment is managed through your Azure Defender for IoT account subscriptions.
+You can onboard, edit, and offboard your subscriptions to Defender for IoT from the [Azure Defender for IoT Portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+
+For each subscription, you will be asked to define a number of *committed devices*. Committed devices are the approximate number of devices that will be monitored in your enterprise.
+
+### Subscription billing
+
+You are billed based on the number of committed devices associated with each subscription.
+
+The billing cycle for Azure Defender for IoT follows a calendar month. Changes you make to committed devices during the month are implemented one hour after confirming your update, and are reflected in your monthly bill. Subscription *offboarding* also takes effect one hour after confirming the offboard.
+
+Your enterprise may have more than one paying entity. If this is the case, you can onboard more than one subscription.
+
+Before you subscribe, you should have a sense of how many devices you would like your subscriptions to cover.
+
+Users can also work with a trial subscription, which supports monitoring a limited number of devices for 30 days.
See [Azure Defender pricing](https://azure.microsoft.com/pricing/details/azure-defender/) for information on committed device prices.
-To get started with Azure Defender for IoT, you must have a Microsoft Azure subscription. If you do not have a subscription, you can sign up for a free account. If you already have access to an Azure subscription, but it isn't listed, check your account details, and confirm your permissions with the subscription owner.
+### Before you begin
-To onboard a subscription:
+Before you onboard a subscription, verify that:
+- Your Azure account is set up.
+- You have the required Azure user permissions.
+#### Azure account setup
-1. Navigate to the Azure portal's Pricing page.
+To get started with Azure Defender for IoT, you must have a Microsoft Azure subscription. If you do not have a subscription, you can sign up for a free account. If you already have access to an Azure subscription, but it isn't listed when subscribing, check your account details and confirm your permissions with the subscription owner.
- :::image type="content" source="media/how-to-manage-subscriptions/no-subscription.png" alt-text="Navigate to the Azure portal's Pricing page.":::
+- If you have an account: https://ms.portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade.
+- If you don't have an account: https://azure.microsoft.com/free/.
+
+#### User permission requirements
+
+Azure **Subscription Owners** and **Subscription Contributors** can onboard, update, and offboard Azure Defender for IoT subscriptions.
+
+## Onboard a trial subscription
+
+If you would like to evaluate Defender for IoT, you can use a trial subscription. The trial is valid for 30 days and supports 1000 committed devices. Using the trial lets you deploy one or more Defender for IoT sensors on your network. Use the sensors to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. The trial also allows you to download an on-premises management console to view aggregated information generated by sensors.
+
+This section describes how to create a trial subscription for a sensor.
+
+**To create a trial subscription:**
+
+1. Navigate to the [Azure Defender for IoT Portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+1. Select **Getting Started**.
1. Select **Onboard subscription**.
+1. In the Pricing page, select **Start with a Trial**.
+1. Select a subscription from the Onboard trial subscription pane and then select **Evaluate**.
+1. Confirm your evaluation.
+1. Onboard a sensor or set up a sensor, if required.
-1. In the **Onboard subscription** window select your subscription, and the number of committed devices from the drop-down menus.
+## Onboard a subscription
+
+This section describes how to onboard a subscription.
+
+**To onboard a subscription:**
+
+1. Navigate to the [Azure Defender for IoT Portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+1. Select **Getting Started**.
+1. Select **Onboard subscription**.
+1. In the Pricing page, select **Subscribe**.
+1. In the **Onboard subscription** pane, select a subscription and the number of committed devices from the drop-down menu.
:::image type="content" source="media/how-to-manage-subscriptions/onboard-subscription.png" alt-text="select your subscription and the number of committed devices.":::
-1. Select **Onboard**.
+1. Select **Subscribe**.
+1. Confirm your subscription.
+1. If you have not done so already, onboard or set up a sensor.
+
+## Update committed devices in a subscription
+
+You may need to update your subscription with more, or fewer, committed devices. More devices may require monitoring if, for example, you are increasing existing site coverage, you discover more devices than expected, or there are network changes such as adding switches.
+
+**To update a subscription:**
+1. Navigate to the [ Azure Defender for IoT Portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+1. Select **Onboard subscription**.
+1. Select the subscription, and then select the three dots (**...**).
+1. Select **Edit**.
+1. Update the committed devices and select **Save**.
+1. In the confirmation dialog box that opens, select **Confirm**.
+Changes in device commitment will take effect one hour after confirming the change. Billing for these changes will be reflected at the beginning of the month following confirmation of the change.
+You will need to upload a new activation file to your on-premises management console. The activation file reflects the new number of committed devices. See [Upload an activation file](how-to-manage-the-on-premises-management-console.md#upload-an-activation-file).
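The timing rules stated above amount to simple date arithmetic. The following sketch is pure illustration, not an Azure API.

```python
from datetime import datetime, timedelta

# Sketch of the timing rules: a committed-device change takes effect one hour
# after confirmation, and billing reflects it from the start of the following
# calendar month. Pure date arithmetic; not an Azure API.
def change_effective_at(confirmed_at):
    return confirmed_at + timedelta(hours=1)

def billed_from(confirmed_at):
    next_month = confirmed_at.month % 12 + 1
    year = confirmed_at.year + (1 if confirmed_at.month == 12 else 0)
    return datetime(year, next_month, 1)

confirmed = datetime(2021, 8, 10, 14, 30)
print(change_effective_at(confirmed))  # 2021-08-10 15:30:00
print(billed_from(confirmed))          # 2021-09-01 00:00:00
```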
## Offboard a subscription
-Subscriptions are managed on a monthly basis. When you offboard a subscription, you will be billed for that subscription until the end of the month.
+You may need to offboard a subscription, for example if you need to work with a new payment entity. Subscription offboarding takes effect one hour after confirming the offboard.
+Your upcoming monthly bill will reflect this change.
-Uninstall all sensors that are associated with the subscription prior to offboarding the subscription. For more information on how to delete a sensor, see [Delete a sensor](how-to-manage-sensors-on-the-cloud.md#delete-a-sensor).
+Remove all sensors that are associated with the subscription prior to offboarding. For more information on how to delete a sensor, see [Delete a sensor](how-to-manage-sensors-on-the-cloud.md#delete-a-sensor).
-To offboard a subscription:
+**To offboard a subscription:**
+
+1. Navigate to the [Azure Defender for IoT Portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+1. Select the subscription, and then select the three dots (**...**).
+
+1. Select **Offboard subscription**.
-1. Navigate to the **Pricing** page.
-1. Select the subscription, and then select the **delete** icon :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/delete-icon.png" border="false":::.
1. In the confirmation popup, select the checkbox to confirm you have deleted all sensors associated with the subscription. :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/offboard-popup.png" alt-text="Select the checkbox and select offboard to offboard your sensor.":::
-1. Select the **Offboard** button.
+1. Select **Offboard**.
+
+## Apply a new subscription
+
+Business considerations may require that you apply a different subscription to your deployment than the one currently being used. If you change the subscription, you will need to upload a new sensor activation file. The file contains information on subscription expiration dates.
+
+**To apply a new subscription:**
+
+1. Delete the subscription currently being used.
+1. Select a new subscription.
+1. Download an activation file for the sensor associated with the subscription.
+1. Upload the activation file to the sensor.
## Next steps
-[Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
+- [Manage sensors in the Defender for IoT portal](how-to-manage-sensors-on-the-cloud.md)
+
+- [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-azure-digital-twins-explorer.md
description: Understand the capabilities and purpose of Azure Digital Twins Explorer Previously updated : 4/28/2021 Last updated : 6/1/2021
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-ingress-egress.md
description: Understand the ingress and egress requirements for integrating Azure Digital Twins with other services. Previously updated : 3/16/2020 Last updated : 6/1/2021
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
description: Understand how Azure Digital Twins uses custom models to describe entities in your environment. Previously updated : 3/12/2020 Last updated : 6/1/2021
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies.md
description: Learn about DTDL industry ontologies for modeling in a certain domain Previously updated : 2/12/2021 Last updated : 6/1/2021
digital-twins Concepts Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-query-language.md
description: Understand the basics of the Azure Digital Twins query language. Previously updated : 4/22/2021 Last updated : 6/1/2021
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
description: Understand how to route events within Azure Digital Twins and to other Azure Services. Previously updated : 10/12/2020 Last updated : 6/1/2021
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-security.md
description: Understand security best practices with Azure Digital Twins. Previously updated : 3/18/2020 Last updated : 6/1/2021
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-twins-graph.md
description: Understand the concept of a digital twin, and how their relationships make a graph. Previously updated : 3/12/2020 Last updated : 6/1/2021
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
description: See how to create, edit, and delete a model within Azure Digital Twins. Previously updated : 7/13/2021 Last updated : 8/13/2021
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes.md
description: See how to set up and manage endpoints and event routes for Azure Digital Twins data Previously updated : 7/22/2020 Last updated : 7/30/2021
digital-twins How To Parse Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-parse-models.md
description: Learn how to use the parser library to parse DTDL models. Previously updated : 4/10/2020 Last updated : 8/13/2021
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-send-twin-to-twin-events.md
description: See how to create a function in Azure for propagating events through the twin graph. Previously updated : 8/5/2021 Last updated : 8/13/2021
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-code.md
description: Tutorial to write the minimal code for a client app, using the .NET (C#) SDK. Previously updated : 11/02/2020 Last updated : 04/28/2021
digital-twins Tutorial Command Line App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-app.md
description: Tutorial to build an Azure Digital Twins scenario using a sample command-line application Previously updated : 5/8/2020 Last updated : 6/1/2021
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-cli.md
description: Tutorial to build an Azure Digital Twins scenario using the Azure CLI Previously updated : 2/26/2021 Last updated : 6/1/2021
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
To complete this tutorial, you need to:
* Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html).
* The user must have the privileges to read data on the source database.
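For example, read access for a hypothetical migration user could be granted on the source server as follows (a sketch; the user name and host are placeholders, and you can narrow the scope to specific source databases):

```sql
-- Grant read-only access for the migration user (placeholder name)
GRANT SELECT ON *.* TO 'migrationuser'@'%';
FLUSH PRIVILEGES;
```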
+## Sizing the target Azure Database for MySQL instance
+
+To prepare the target Azure Database for MySQL server for faster data loads using the Azure Database Migration Service, the following server parameters and configuration changes are recommended.
+
+* max_allowed_packet - Set to 1073741824 (that is, 1 GB) to prevent any connection issues due to large rows.
+* slow_query_log - Set to OFF to turn off the slow query log. This eliminates the overhead caused by slow query logging during data loads.
+* query_store_capture_mode - Set to NONE to turn off the Query Store. This eliminates the overhead caused by Query Store sampling activities.
+* innodb_buffer_pool_size - This parameter can only be increased by scaling up compute for the Azure Database for MySQL server. Scale up the server to the 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase innodb_buffer_pool_size.
+* innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in the Azure portal to improve I/O utilization and optimize for migration speed.
+* innodb_read_io_threads & innodb_write_io_threads - Change to 4 from the Server parameters in the Azure portal to improve the speed of migration.
+* Scale up the storage tier - The IOPS for the Azure Database for MySQL server increase progressively with the storage tier.
+   * In the Single Server deployment option, for faster loads, we recommend increasing the storage tier to increase the IOPS provisioned.
+   * In the Flexible Server deployment option, you can scale IOPS (increase or decrease) irrespective of the storage size.
+   * Note that storage size can only be scaled up, not down.
+
+Once the migration is complete, you can revert the server parameters and configuration to the values required by your workload.
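+
+As an illustration, the server parameter changes above can be made from the command line with the Azure CLI (a sketch; the resource group `myresourcegroup` and server name `mytargetserver` are placeholders, and the compute and storage scale-up is still done from the portal):
+
+```azurecli
+az mysql server configuration set --resource-group myresourcegroup --server-name mytargetserver --name max_allowed_packet --value 1073741824
+az mysql server configuration set --resource-group myresourcegroup --server-name mytargetserver --name slow_query_log --value OFF
+az mysql server configuration set --resource-group myresourcegroup --server-name mytargetserver --name query_store_capture_mode --value NONE
+az mysql server configuration set --resource-group myresourcegroup --server-name mytargetserver --name innodb_io_capacity --value 9000
+az mysql server configuration set --resource-group myresourcegroup --server-name mytargetserver --name innodb_io_capacity_max --value 9000
+```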
++ ## Migrate database schema To transfer all the database objects like table schemas, indexes, and stored procedures, we need to extract the schema from the source database and apply it to the target database. To extract the schema, you can use mysqldump with the `--no-data` parameter. For this, you need a machine that can connect to both the source MySQL database and the target Azure Database for MySQL.
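For instance, a schema-only extraction from the source could look like the following sketch (the source server host name is a placeholder; the user and database names follow the examples in this article):

```
mysqldump -h mysqlssrc.mysql.database.azure.com -u docadmin@mysqlssrc -p --no-data migtestdb > d:\migtestdb.sql
```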
For example:
mysql.exe -h mysqlsstrgt.mysql.database.azure.com -u docadmin@mysqlsstrgt -p migtestdb < d:\migtestdb.sql ```
-If you have foreign keys in your schema, the parallel data load during migration will be handled by the migration task. There is no need to drop foreign keys during schema migration.
-
-If you have triggers in the database, it will enforce data integrity in the target ahead of full data migration from the source. The recommendation is to disable triggers on all the tables in the target during migration, and then enable the triggers after migration is done.
-
-Execute the following script in MySQL Workbench on the target database to extract the drop trigger script and add trigger script.
-
-```sql
-SELECT
- SchemaName,
- GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery,
- Concat('DELIMITER $$ \n\n', GROUP_CONCAT(AddQuery SEPARATOR '$$\n'), '$$\n\nDELIMITER ;') as AddQuery
-FROM
-(
-SELECT
- TRIGGER_SCHEMA as SchemaName,
- Concat('DROP TRIGGER `', TRIGGER_NAME, "`") as DropQuery,
- Concat('CREATE TRIGGER `', TRIGGER_NAME, '` ', ACTION_TIMING, ' ', EVENT_MANIPULATION,
- '\nON `', EVENT_OBJECT_TABLE, '`\n' , 'FOR EACH ', ACTION_ORIENTATION, ' ',
- ACTION_STATEMENT) as AddQuery
-FROM
- INFORMATION_SCHEMA.TRIGGERS
-ORDER BY EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_TIMING, EVENT_MANIPULATION, ACTION_ORDER ASC
-) AS Queries
-GROUP BY SchemaName
-```
-
-Run the generated drop trigger query (DropQuery column) in the result to drop triggers in the target database. The add trigger query can be saved, to be used post data migration completion.
+If you have foreign keys or triggers in your schema, the parallel data load during migration will be handled by the migration task. There is no need to drop foreign keys or triggers during schema migration.
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
education-hub About Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/azure-dev-tools-teaching/about-program.md
Students don't need to sign up for an Azure subscription to access their softwar
Microsoft does offer USD$100 in Azure credit plus access to free Azure services for students: students can sign up for the [Azure for Students offer](azure-students-program.md) without needing a credit card.
+## Why can't I purchase Azure Dev Tools for Teaching?
+Azure Dev Tools for Teaching is now available to redeem only if you have a Volume Licensing (VL) agreement with Microsoft. If you have a VL agreement with Microsoft and are still having issues redeeming, please contact support. For more information on Volume Licensing for Education, visit https://aka.ms/ees
+ ## Getting help [!INCLUDE [Subscription support](../../../includes/edu-dev-tools-program-support.md)]
event-hubs Event Hubs Capture Enable Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-capture-enable-through-portal.md
The default time window is 5 minutes. The minimum value is 1, the maximum 15. Th
## Capture data to Azure Data Lake Storage Gen 2
-1. Follow [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal#create-a-storage-account) article to create an Azure Storage account. Set **Hierarchical namespace** to **Enabled** on the **Advanced** tab to make it an Azure Data Lake Storage Gen 2 account.
+1. Follow [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal#create-a-storage-account) article to create an Azure Storage account. Set **Hierarchical namespace** to **Enabled** on the **Advanced** tab to make it an Azure Data Lake Storage Gen 2 account. The Azure Storage account must be in the same subscription as the event hub.
2. When creating an event hub, do the following steps: 1. Select **On** for **Capture**.
expressroute Expressroute Howto Macsec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-macsec.md
To start the configuration, sign in to your Azure account and select the subscri
$MACsecCAKSecret = Set-AzKeyVaultSecret -VaultName "your_key_vault_name" -Name "CAK_name" -SecretValue $CAK $MACsecCKNSecret = Set-AzKeyVaultSecret -VaultName "your_key_vault_name" -Name "CKN_name" -SecretValue $CKN ```
+ > [!NOTE]
+ > CKN must be an even-length string up to 64 hexadecimal digits (0-9, A-F).
+ >
+ > CAK length depends on cipher suite specified:
+ >
+ > * For GcmAes128, the CAK must be an even-length string up to 32 hexadecimal digits (0-9, A-F).
+ >
+ > * For GcmAes256, the CAK must be an even-length string up to 64 hexadecimal digits (0-9, A-F).
+ >
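+
+   As an illustration, hexadecimal strings of the required lengths can be generated in PowerShell (a sketch; convert the results to secure strings before storing them as Key Vault secrets, as in the earlier example):
+
+   ```azurepowershell-interactive
+   # 64 hex digits for the CKN; 32 hex digits for a GcmAes128 CAK (use 64 for GcmAes256)
+   $CKN = -join (1..64 | ForEach-Object { '{0:X}' -f (Get-Random -Maximum 16) })
+   $CAK = -join (1..32 | ForEach-Object { '{0:X}' -f (Get-Random -Maximum 16) })
+   ```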
+ 4. Assign the GET permission to the user identity. ```azurepowershell-interactive
firewall-manager Create Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/create-policy-powershell.md
+
+ Title: 'Quickstart: Create and update an Azure Firewall policy using Azure PowerShell'
+description: In this quickstart, you learn how to create and update an Azure Firewall policy using Azure PowerShell.
+++ Last updated : 08/16/2021++++
+# Quickstart: Create and update an Azure Firewall policy using Azure PowerShell
+
+In this quickstart, you use Azure PowerShell to create an Azure Firewall policy with network and application rules. You also update the existing policy by adding network and application rules.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
++
+## Sign in to Azure
+
+```azurepowershell
+Connect-AzAccount
+Select-AzSubscription -Subscription "<sub name>"
+```
+
+## Set up the network and policy
+
+First, create a resource group and a virtual network. Then create an Azure Firewall policy.
+
+### Create a resource group
+
+The resource group contains all the resources used in this procedure.
+
+```azurepowershell
+New-AzResourceGroup -Name Test-FWPolicy-RG -Location "East US"
+```
+
+### Create a virtual network
+
+```azurepowershell
+$ServerSubnet = New-AzVirtualNetworkSubnetConfig -Name subnet-1 -AddressPrefix 10.0.0.0/24
+$testVnet = New-AzVirtualNetwork -Name Test-FWPolicy-VNET -ResourceGroupName Test-FWPolicy-RG -Location "East US" -AddressPrefix 10.0.0.0/8 -Subnet $ServerSubnet
+```
+### Create a firewall policy
+
+```azurepowershell
+New-AzFirewallPolicy -Name EUS-Policy -ResourceGroupName Test-FWPolicy-RG -Location "East US"
+```
+
+## Create a network rule collection group and add new rules
+
+First, you create the rule collection group, then add the rule collection with the rules.
+
+### Create the network rule collection group
+
+```azurepowershell
+$firewallpolicy = Get-AzFirewallPolicy -Name EUS-Policy -ResourceGroupName Test-FWPolicy-RG
+$newnetworkrulecollectiongroup = New-AzFirewallPolicyRuleCollectionGroup -Name "NetworkRuleCollectionGroup" -Priority 200 -ResourceGroupName Test-FWPolicy-RG -FirewallPolicyName EUS-Policy
+$networkrulecollectiongroup = Get-AzFirewallPolicyRuleCollectionGroup -Name "NetworkRuleCollectionGroup" -ResourceGroupName Test-FWPolicy-RG -AzureFirewallPolicyName EUS-Policy
+```
+### Create network rules
+
+```azurepowershell
+$networkrule1= New-AzFirewallPolicyNetworkRule -Name NwRule1 -Description testRule1 -SourceAddress 10.0.0.0/24 -Protocol TCP -DestinationAddress 192.168.0.1/32 -DestinationPort 22
+$networkrule2= New-AzFirewallPolicyNetworkRule -Name NWRule2 -Description TestRule2 -SourceAddress 10.0.0.0/24 -Protocol UDP -DestinationAddress 192.168.0.10/32 -DestinationPort 1434
+```
+### Create a network rule collection and add new rules
+
+```azurepowershell
+$newrulecollectionconfig=New-AzFirewallPolicyFilterRuleCollection -Name myfirstrulecollection -Priority 1000 -Rule $networkrule1,$networkrule2 -ActionType Allow
+$newrulecollection = $networkrulecollectiongroup.Properties.RuleCollection.Add($newrulecollectionconfig)
+```
+
+### Update the network rule collection group
+
+```azurepowershell
+Set-AzFirewallPolicyRuleCollectionGroup -Name "NetworkRuleCollectionGroup" -Priority "200" -FirewallPolicyObject $firewallpolicy -RuleCollection $networkrulecollectiongroup.Properties.RuleCollection
+```
+
+### Output
+
+View the new rule collection and its rules:
+
+```azurepowershell
+$output = $networkrulecollectiongroup.Properties.GetRuleCollectionByName("myfirstrulecollection")
+Write-Output $output
+```
+
+## Add network rules to an existing rule collection
+
+Now that you have an existing rule collection, you can add more rules to it.
+
+### Get the existing network rule collection group
+
+```azurepowershell
+$firewallpolicy = Get-AzFirewallPolicy -Name EUS-Policy -ResourceGroupName Test-FWPolicy-RG
+$networkrulecollectiongroup = Get-AzFirewallPolicyRuleCollectionGroup -Name "NetworkRuleCollectionGroup" -ResourceGroupName Test-FWPolicy-RG -AzureFirewallPolicyName EUS-Policy
+```
+
+### Create new network rules
+
+```azurepowershell
+$newnetworkrule1 = New-AzFirewallPolicyNetworkRule -Name newNwRule01 -Description testRule01 -SourceAddress 10.0.0.0/24 -Protocol TCP -DestinationAddress 192.168.0.5/32 -DestinationPort 3389
+$newnetworkrule2 = New-AzFirewallPolicyNetworkRule -Name newNWRule02 -Description TestRule02 -SourceAddress 10.0.0.0/24 -Protocol UDP -DestinationAddress 192.168.0.15/32 -DestinationPort 1434
+```
+### Update the network rule collection and add new rules
+
+```azurepowershell
+$existingrulecollection = $networkrulecollectiongroup.Properties.RuleCollection | where {$_.Name -match "myfirstrulecollection"}
+$existingrulecollection.Rules.Add($newnetworkrule1)
+$existingrulecollection.Rules.Add($newnetworkrule2)
+```
+### Update network rule collection group
+
+```azurepowershell
+Set-AzFirewallPolicyRuleCollectionGroup -Name "NetworkRuleCollectionGroup" -FirewallPolicyObject $firewallpolicy -Priority 200 -RuleCollection $networkrulecollectiongroup.Properties.RuleCollection
+```
+### Output
+
+View the rules that you just added:
+
+```azurepowershell
+$output = $networkrulecollectiongroup.Properties.GetRuleCollectionByName("myfirstrulecollection")
+Write-Output $output
+```
+## Create an application rule collection and add new rules
+
+First, create the rule collection group, then add the rule collection with rules.
+
+### Create the application rule collection group
+
+```azurepowershell
+$firewallpolicy = Get-AzFirewallPolicy -Name EUS-Policy -ResourceGroupName Test-FWPolicy-RG
+$newapprulecollectiongroup = New-AzFirewallPolicyRuleCollectionGroup -Name "ApplicationRuleCollectionGroup" -Priority 300 -ResourceGroupName Test-FWPolicy-RG -FirewallPolicyName EUS-Policy
+```
+### Create new application rules
+
+```azurepowershell
+$apprule1 = New-AzFirewallPolicyApplicationRule -Name apprule1 -Description testapprule1 -SourceAddress 192.168.0.1/32 -TargetFqdn "*.contoso.com" -Protocol HTTPS
+$apprule2 = New-AzFirewallPolicyApplicationRule -Name apprule2 -Description testapprule2 -SourceAddress 192.168.0.10/32 -TargetFqdn "www.contosoweb.com" -Protocol HTTPS
+```
+### Create a new application rule collection with rules
+
+```azurepowershell
+$apprulecollectiongroup = Get-AzFirewallPolicyRuleCollectionGroup -Name "ApplicationRuleCollectionGroup" -ResourceGroupName Test-FWPolicy-RG -AzureFirewallPolicyName EUS-Policy
+$apprulecollection = New-AzFirewallPolicyFilterRuleCollection -Name myapprulecollection -Priority 1000 -Rule $apprule1,$apprule2 -ActionType Allow
+$newapprulecollection = $apprulecollectiongroup.Properties.RuleCollection.Add($apprulecollection)
+```
+
+### Update the application rule collection group
+
+```azurepowershell
+Set-AzFirewallPolicyRuleCollectionGroup -Name "ApplicationRuleCollectionGroup" -FirewallPolicyObject $firewallpolicy -Priority 300 -RuleCollection $apprulecollectiongroup.Properties.RuleCollection
+```
+
+### Output
+
+Examine the new rule collection group and its new rules:
+
+```azurepowershell
+$output = $apprulecollectiongroup.Properties.GetRuleCollectionByName("myapprulecollection")
+Write-Output $output
+```
+
+## Add application rules to an existing rule collection
+
+Now that you have an existing rule collection, you can add more rules to it.
+
+```azurepowershell
+# Create new application rules for the existing rule collection
+$newapprule1 = New-AzFirewallPolicyApplicationRule -Name newapprule01 -Description testapprule01 -SourceAddress 192.168.0.5/32 -TargetFqdn "*.contosoabc.com" -Protocol HTTPS
+$newapprule2 = New-AzFirewallPolicyApplicationRule -Name newapprule02 -Description testapprule02 -SourceAddress 192.168.0.15/32 -TargetFqdn "www.contosowebabcd.com" -Protocol HTTPS
+```
+
+### Update the application rule collection
+
+```azurepowershell
+$apprulecollection = $apprulecollectiongroup.Properties.RuleCollection | where {$_.Name -match "myapprulecollection"}
+$apprulecollection.Rules.Add($newapprule1)
+$apprulecollection.Rules.Add($newapprule2)
+
+# Update the application rule collection group
+Set-AzFirewallPolicyRuleCollectionGroup -Name "ApplicationRuleCollectionGroup" -FirewallPolicyObject $firewallpolicy -Priority 300 -RuleCollection $apprulecollectiongroup.Properties.RuleCollection
+```
+### Output
+
+View the new rules:
+
+```azurepowershell
+$output = $apprulecollectiongroup.Properties.GetRuleCollectionByName("myapprulecollection")
+Write-Output $output
+```
+
+## Clean up resources
+
+When you no longer need the resources that you created, delete the resource group. This removes all the created resources.
+
+To delete the resource group, use the `Remove-AzResourceGroup` cmdlet:
+
+```azurepowershell
+Remove-AzResourceGroup -Name Test-FWPolicy-RG
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure Firewall Manager deployment](deployment-overview.md)
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-migrate.md
Previously updated : 07/15/2021 Last updated : 08/16/2021
The following two examples show how to:
## Performance considerations
-Performance is a consideration when migrating from the standard SKU. IDPS and TLS inspection are compute intensive operations. The premium SKU uses a more powerful VM SKU which scales to a maximum throughput of 30Gbps comparable with the standard SKU. The 30 Gbps throughput is supported when configured with IDPS in alert mode. Use of IDPS in deny mode and TLS inspection increases CPU consumption. Degradation in max throughput might occur.
+Performance is a consideration when migrating from the standard SKU. IDPS and TLS inspection are compute intensive operations. The premium SKU uses a more powerful VM SKU which scales to a maximum throughput of 30 Gbps comparable with the standard SKU. The 30 Gbps throughput is supported when configured with IDPS in alert mode. Use of IDPS in deny mode and TLS inspection increases CPU consumption. Degradation in max throughput might occur.
The firewall throughput might be lower than 30 Gbps when you have one or more signatures set to **Alert and Deny** or application rules with **TLS inspection** enabled. Microsoft recommends customers perform full scale testing in their Azure deployment to ensure the firewall service performance meets your expectations.
+## Downtime
+
+Migrate your firewall during a planned maintenance time, as there will be some downtime during the migration.
+ ## Migrate an existing policy using Azure PowerShell `Transform-Policy.ps1` is an Azure PowerShell script that creates a new Premium policy from an existing Standard policy.
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-analytics.md
Title: Analyze device data in your Azure IoT Central application | Microsoft Doc
description: Analyze device data in your Azure IoT Central application. Previously updated : 11/27/2019 Last updated : 08/16/2021 - # This article applies to operators, builders, and administrators. # How to use analytics to analyze device data
-Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate various telemetries from your devices. To get started, visit **Analytics** on the left pane.
+Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate telemetry from your devices. To get started, select **Analytics** on the left pane.
-## Understanding the Analytics UI
-Analytics user interface is made of three main components:
-- **Data configuration panel:** On the configuration panel, start by selecting the device group for which you want to analyze the data. Next, select the telemetry that you want to analyze and select the aggregation method for each telemetry. **Split By** control helps to group the data by using the device properties as dimensions.
+## Understand the analytics UI
-- **Time control:** Time control is used to select the duration for which you want to analyze the data. You can drag either end of the time slider to select the time span. Time control also has an **Interval size** slider that controls the bucket or the interval size used to aggregate the data.
+The analytics user interface has three main components:
-- **Chart control:** Chart control visualizes the data as a line chart. You can toggle the visibility of specific lines by interacting with the chart legend.
+- **Data configuration panel:** On the configuration panel, select the device group for which you want to analyze the data. Next, select the telemetry that you want to analyze and select the aggregation method for each telemetry. The **Split By** control helps to group the data by using device properties as dimensions.
+- **Time control:** Use the time control to select the duration for which you want to analyze the data. You can drag either end of the time slider to select the time span. The time control also has an **Interval size** slider that controls the bucket or the interval size used to aggregate the data.
- ![Analytics UI Overview](media/howto-create-analytics/analyticsui.png)
+- **Chart control:** The chart control visualizes the data as a line chart. You can toggle the visibility of specific lines by interacting with the chart legend.
+ :::image type="content" source="media/howto-create-analytics/analytics-ui.png" alt-text="Screenshot that shows the three areas of the analytics UI.":::
-## Querying your data
+## Query your data
-You'll need to start by choosing a **device group**, and the telemetry that you want to analyze. Once you're done, select **Analyze** to start visualizing your data.
+Choose a **Device group** to get started and then the telemetry you want to analyze. When you're done, select **Analyze** to start visualizing your data:
-- **Device group:** A [device group](tutorial-use-device-groups.md) is a user-defined group of your devices. For example, all Refrigerators in Oakland, or All version 2.0 wind turbines.
+- **Device group:** A [device group](tutorial-use-device-groups.md) is a user-defined group of your devices. For example, **All Refrigerators in Oakland**, or **All version 2.0 wind turbines**.
-- **Telemetry:** Select the telemetry that you want to analyze and explore. You can select multiple telemetries to analyze together. Default aggregation method is set to Average for numerical and Count for string data-type respectively. Supported aggregation methods for Numeric data types are Average, Maximum, Minimum, Count and, Sum. Supported aggregation methods for string data type are count.
+- **Telemetry:** Select the telemetry that you want to analyze and explore. You can select multiple telemetry types to analyze together. The default aggregation method is set to **Average** for numerical data types and **Count** for strings. Aggregation methods for numeric data types are **Average**, **Maximum**, **Minimum**, **Count**, and **Sum**. **Count** is the only aggregation method for strings.
-- **Split by:** 'Split by' control helps to group the data by using the device properties as dimensions. Values of the device and cloud properties are joined along with the telemetry as and when it is sent by the device. If the cloud or device property has been updated, then you will see the telemetry grouped by different values on the chart.
+ > [!NOTE]
+  > Historic data points are only shown when the conditions of the query are true. For example, a device was upgraded from **Template1** to **Template2** yesterday. Today, if you query device groups that contain **Template1** devices, you see device data from yesterday and before. If you query device groups that contain **Template2** devices, you see the device and its data from the time it was upgraded onward.
+
+- **Split by:** The **Split by** control helps to group the data by using the device properties as dimensions. Device telemetry and properties are combined with cloud properties when the device sends data. If the cloud or device property is updated, then you see the telemetry grouped by different values on the chart.
> [!TIP]
- > To view data for each device separately, select Device Id in the 'Split by' control.
+ > To view data for each device separately, select **Device Id** in the **Split by** control.
-## Interacting with your data
+## Interact with your data
-Once you've queried your data, you can start visualizing it on the line chart. You can show/hide telemetry, change the time duration, view telemetry in a data grid.
+After you've queried your data, you can visualize it on the line chart. You can show or hide telemetry, change the time duration, or view the data in a grid.
-- **Time editor panel:** By default we'll retrieve data from the past one day. You can drag either end of the time slider to change the time duration. You can also use the calendar control to select one of the predefined time buckets or select a custom time range. Time control also has an **Interval size** slider that controls the bucket or the interval size used to aggregate the data.
+- **Time editor panel:** By default you see data from the last day. You can drag either end of the slider to change the time duration. You can also use the calendar control to select one of the predefined time buckets or select a custom time range. The time control also has an **Interval size** slider that controls the interval size used to aggregate the data.
- ![Time Editor](media/howto-create-analytics/timeeditorpanel.png)
+ :::image type="content" source="media/howto-create-analytics/time-editor-panel.png" alt-text="Screenshot that shows the time editor panel.":::
- - **Inner date range slider tool**: Use the two endpoint controls by dragging them over the time span you want. This inner date range is constrained by the outer date range slider control.
-
-
- - **Outer date range slider control**: Use the endpoint controls to select the outer date range, which will be available for your inner date range control.
+ - **Inner date range slider tool**: Use the two endpoint controls to highlight the time span you want. The inner date range is constrained by the outer date range slider control.
+
+ - **Outer date range slider control**: Use the endpoint controls to select the outer date range that's available for your inner date range control.
- - **Increase and decrease date range buttons**: Increase or decrease your time span by selecting either button for the interval you want.
+ - **Increase and decrease date range buttons**: Increase or decrease your time span by selecting either button for the interval you want.
- - **Interval-size slider**: Use it to zoom in and out of intervals over the same time span. This action provides more precise control of movement between large slices of time. You can use it to see granular, high-resolution views of your data, even down to milliseconds. The slider's default starting point is set as the most optimal view of the data from your selection, which balances resolution, query speed, and granularity.
-
- - **Date range picker**: With this web control, you can easily select the date and time ranges you want. You can also use the control to switch between different time zones. After you make the changes to apply to your current workspace, select Save.
+ - **Interval-size slider**: Use the slider to zoom in and out of intervals over the same time span. It gives you more precise control of movement between large slices of time. You can use it to see granular, high-resolution views of your data, even down to milliseconds. The default start point of the slider gives you an optimal view of the data from your selection. This view balances resolution, query speed, and granularity.
+
+ - **Date range picker**: Use this control to select the date and time ranges you want. You can also use the control to switch between different time zones. To apply your changes to the current workspace, select **Save**.
- > [!TIP]
- > Interval size is determined dynamically based on the selected time span. Smaller time spans will enable aggregating the data into very granular intervals of up to a few seconds.
+ > [!TIP]
+ > Interval size is determined dynamically based on the selected time span. Smaller time spans let you aggregate the data into very granular intervals of up to a few seconds.
+- **Chart Legend:** The chart legend shows the selected telemetry on the chart. Hover over an item on the legend to bring it into focus on the chart. When you use **Split by**, the telemetry is grouped by the values of the selected dimension. You can toggle the visibility of each telemetry type, or select the group name to toggle the visibility of the whole group.
-- **Chart Legend:** Chart legend shows the selected telemetry on the chart. You can hover over each item on the legend to bring it into focus on the chart. When using 'Split By', the telemetry is grouped by the respective values of the selected dimension. You can toggle the visibility of each specific telemetry or the whole group by clicking on the group name.
+- **Y-axis format control:** The y-axis mode cycles through the available y-axis view options. This control is available only when you're visualizing multiple telemetry types. The three modes are:
+ - **Stacked:** A graph for each telemetry type is stacked and each graph has its own y-axis. This mode is the default.
+ - **Shared:** A graph for each telemetry type is plotted against the same y-axis.
+ - **Overlap:** Use this mode to stack multiple lines on the same y-axis, with the y-axis data changing based on the selected line.
-- **Y-axis format control:** y-axis mode cycles through the available y-axis view options. This control is available only when different telemetries are being visualized. You can set the y-axis by choosing from one of three modes:
+ :::image type="content" source="media/howto-create-analytics/y-axis-control.png" alt-text="A screenshot that highlights the y-axis control.":::
- - **Stacked:** A graph for every telemetry is stacked and each of the graphs have their own y-axis. This mode is set as default.
- - **Shared:** A graph for every telemetry is plotted against the same y-axis.
- - **Overlap:** Use it to stack multiple lines on the same y-axis, with the y-axis data changing based on the selected line.
+- **Zoom control:** The zoom control lets you drill further into your data. If you find a time period you'd like to focus on within your result set, use your mouse pointer to highlight the area. Then right-click on the selected area and select **Zoom**.
- ![Arrange data across y-axis with different visualization modes](media/howto-create-analytics/yaxiscontrol.png)
+ :::image type="content" source="media/howto-create-analytics/zoom.png" alt-text="Screenshot that shows the use of the zoom control.":::
-- **Zoom control:** Zoom lets you drill further into your data. If you find a time period you'd like to focus on within your result set, use your mouse pointer to grab the area and then drag it to the endpoint of your choice. Then right click on the selected area and click Zoom.
+Select the ellipsis for more chart controls:
- ![Zoom into the data](media/howto-create-analytics/zoom.png)
+- **Display Grid:** Display your results in a table format that lets you view the value for each data point.
-Under the ellipsis, there are more chart controls to interact with the data:
+- **Download as CSV:** Export your results as a comma-separated values (CSV) file. The CSV file contains data for each device. Results are exported by using the interval and timeframe specified.
-- **Display Grid:** Your results are available in a table format, enabling you to view the specific value for each data point.
+- **Drop a Marker:** The **Drop Marker** control lets you anchor certain data points on the chart. It's useful when you're trying to compare data for multiple lines across different time periods.
-- **Download as CSV:** Your results are available to export as a comma-separated values (CSV) file. The CSV file contains data for each device. Results are exported by using the interval and timeframe specified.
+ :::image type="content" source="media/howto-create-analytics/additional-chart-controls.png" alt-text="Screenshot that shows how to access the additional chart controls.":::
-- **Drop a Marker:** The 'Drop Marker' control provides a way to anchor certain data points on the chart. It is useful when you are trying to compare data for multiple lines across different time periods.
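The interval aggregation described for the **Interval-size slider** can be sketched in code. This is an illustrative sketch of fixed-interval bucketing with a mean aggregate, not IoT Central's actual implementation:

```python
from datetime import datetime, timedelta
from statistics import mean

def aggregate(samples, interval):
    """Group (timestamp, value) samples into fixed buckets and average each.

    samples:  list of (datetime, float) telemetry points
    interval: timedelta bucket size (the "interval size" on the slider)
    """
    buckets = {}
    for ts, value in samples:
        # Snap each timestamp down to the start of its interval bucket.
        epoch = ts.timestamp()
        start = epoch - (epoch % interval.total_seconds())
        buckets.setdefault(start, []).append(value)
    # One aggregated point per bucket: (bucket start, mean of values).
    return [(datetime.fromtimestamp(s), mean(v)) for s, v in sorted(buckets.items())]

base = datetime(2021, 8, 16, 12, 0, 0)
samples = [(base + timedelta(seconds=i * 10), float(i)) for i in range(12)]
points = aggregate(samples, timedelta(minutes=1))  # 2 minutes of data -> 2 buckets
```

Dragging the slider to a smaller interval corresponds to passing a smaller `interval`, producing more, finer-grained points over the same time span.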
+## Next steps
- ![Showing the grid view for your analytics](media/howto-create-analytics/additionalchartcontrols.png)
+Now that you've learned how to visualize your data with the built-in analytics capabilities, a suggested next step is to learn how to [Export IoT data to cloud destinations using data export](howto-export-data.md).
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
If you haven't already done so, be sure to familiarize yourself with the basic [
## Review the generated import manifest
-Example:
+An example of the generated import manifest output is shown below. If you have questions about any of the items, see the complete [import manifest schema](import-schema.md).
```json { "updateId": {
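The `updateId` in the example above identifies an update by provider, name, and version. As a hedged illustration, here is a minimal sketch that checks a parsed manifest carries all three fields; the validation logic is illustrative, not part of the Device Update tooling:

```python
def validate_update_id(manifest: dict) -> list:
    """Return a list of problems with the manifest's updateId (empty if OK)."""
    required = ("provider", "name", "version")
    update_id = manifest.get("updateId", {})
    return [f"updateId.{field} is missing" for field in required
            if not update_id.get(field)]

# Hypothetical values for illustration only.
manifest = {"updateId": {"provider": "Contoso", "name": "Toaster", "version": "1.0"}}
problems = validate_update_id(manifest)   # [] for a complete updateId
```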
If you've just finished using the steps above to import via the Azure portal, sk
If you want to use the [Device Update for IoT Hub Update APIs](/rest/api/deviceupdate/updates) to import an update instead of importing via the Azure portal, note the following:
- You will need to upload your update file(s) to an Azure Blob Storage location before you call the Update APIs.
- You can reference this [sample API call](import-schema.md#example-import-request-body), which uses the import manifest you created above.
+ - If you reuse the same SAS URL while testing, you may encounter errors after the token expires. This applies both when you submit the import manifest and when you submit the update content itself.
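One way to avoid the token-expiry errors noted above is to check the SAS URL's `se` (signed expiry) query parameter before reusing it. A stdlib-only sketch, assuming the standard SAS query format; the account and blob names below are hypothetical:

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

def sas_is_expired(sas_url, now=None):
    """True if the SAS URL's 'se' (signed expiry) timestamp is in the past."""
    query = parse_qs(urlparse(sas_url).query)
    # The 'se' value is an ISO 8601 UTC timestamp ending in 'Z'.
    expiry = datetime.fromisoformat(query["se"][0].replace("Z", "+00:00"))
    return expiry <= (now or datetime.now(timezone.utc))

url = ("https://myaccount.blob.core.windows.net/updates/manifest.json"
       "?sv=2020-08-04&se=2021-08-16T00:00:00Z&sig=abc")
expired = sas_is_expired(url, now=datetime(2021, 8, 17, tzinfo=timezone.utc))  # True
```

If the check reports the token as expired, generate a fresh SAS URL before retrying the import.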
## Next Steps
lighthouse Manage Hybrid Infrastructure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/manage-hybrid-infrastructure-arc.md
As a service provider, you may have onboarded multiple customer tenants to [Azur
[Azure Arc](../../azure-arc/overview.md) helps simplify complex and distributed environments across on-premises, edge and multicloud, enabling deployment of Azure services anywhere and extending Azure management to any infrastructure.
-With [Azure Arc enabled servers](../../azure-arc/servers/overview.md), customers can manage any Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. By linking a hybrid machine to Azure, it becomes connected and is treated as a resource in Azure. Service providers can then manage these non-Azure machines along with their customers' Azure resources.
+With [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), customers can manage any Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. By linking a hybrid machine to Azure, it becomes connected and is treated as a resource in Azure. Service providers can then manage these non-Azure machines along with their customers' Azure resources.
-[Azure Arc enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters inside or outside of Azure. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal, with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
+[Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters inside or outside of Azure. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal, with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
This topic provides an overview of how service providers can use Azure Arc-enabled servers and Azure Arc-enabled Kubernetes in a scalable way to manage their customers' hybrid environments, with visibility across all managed customer tenants.

> [!TIP]
> Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md).
-## Manage hybrid servers at scale with Azure Arc enabled servers
+## Manage hybrid servers at scale with Azure Arc-enabled servers
As a service provider, you can manage on-premises Windows Server or Linux machines outside Azure that your customers have connected to their subscription using the [Azure Connected Machine agent](../../azure-arc/servers/agent-overview.md).
You can also monitor connected clusters with Azure Monitor, and [use Azure Polic
## Next steps - Explore the jumpstarts and samples in the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc).-- Learn about [supported scenarios for Azure Arc enabled servers](../../azure-arc/servers/overview.md#supported-scenarios).
+- Learn about [supported scenarios for Azure Arc enabled servers](../../azure-arc/servers/overview.md#supported-cloud-operations).
- Learn about [Kubernetes distributions supported by Azure Arc](../../azure-arc/kubernetes/overview.md#supported-kubernetes-distributions). - Learn how to [deploy a policy at scale](policy-at-scale.md). - Learn how to [use Azure Monitor Logs at scale](monitor-at-scale.md).
lighthouse Onboard Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/onboard-customer.md
Title: Onboard a customer to Azure Lighthouse description: Learn how to onboard a customer to Azure Lighthouse, allowing their resources to be accessed and managed by users in your tenant. Previously updated : 07/16/2021 Last updated : 08/16/2021
The template you choose will depend on whether you are onboarding an entire subs
If you want to include [eligible authorizations](create-eligible-authorizations.md#create-eligible-authorizations-using-azure-resource-manager-templates) (currently in public preview), select the corresponding template from the [delegated-resource-management-eligible-authorizations section of our samples repo](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-eligible-authorizations). > [!TIP]
-> While you can't onboard an entire management group in one deployment, you can [deploy a policy at the management group level](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/policy-delegate-management-groups). The policy uses the [deployIfNotExists effect](../../governance/policy/concepts/effects.md#deployifnotexists) to check if each subscription within the management group has been delegated to the specified managing tenant, and if not, will create the assignment based on the values you provide. You will then have access to all of the subscriptions in the management group, although you'll have to work on them as individual subscriptions (rather than taking actions on the management group as a whole).
+> While you can't onboard an entire management group in one deployment, you can deploy a policy to [onboard each subscription in a management group](onboard-management-group.md). You'll then have access to all of the subscriptions in the management group, although you'll have to work on them as individual subscriptions (rather than taking actions on the management group resource directly).
The following example shows a modified **delegatedResourceManagement.parameters.json** file that can be used to onboard a subscription. The resource group parameter files (located in the [rg-delegated-resource-management](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/rg-delegated-resource-management) folder) are similar, but also include an **rgName** parameter to identify the specific resource group(s) to be onboarded.
load-balancer Egress Only https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/egress-only.md
- Title: Outbound-only load balancer configuration-
-description: With this article, learn about how to create an internal load balancer with outbound NAT
------ Previously updated : 08/07/2020---
-# Outbound-only load balancer configuration
-
-Use a combination of internal and external standard load balancers to create outbound connectivity for VMs behind an internal load balancer.
-
-This configuration provides outbound NAT for an internal load balancer scenario, producing an "egress only" setup for your backend pool.
--
-<p align="center">
- <img src="./media/egress-only/load-balancer-egress-only.png" alt="Figure depicts a egress only load balancer configuration." width="500" title="Virtual Network NAT">
-</p>
--
-<!--
>--
-*Figure: Egress only load balancer configuration*
-
-The steps required are:
-
-1. Create a virtual network with a bastion host.
-2. Create a virtual machine with only a private IP.
-3. Create both internal and public standard load balancers.
-4. Add backend pools to both load balancers and place the VM into each pool.
-5. Connect to your VM through the bastion host and:
- 1. Test outbound connectivity,
- 2. Configure an outbound rule on the public load balancer.
- 3. Retest outbound connectivity.
-
-## Create virtual network and virtual machine
-
-Create a virtual network with two subnets:
-
-* Primary subnet
-* Bastion subnet
-
-Create a virtual machine in the new virtual network.
-
-### Create the virtual network
-
-1. [Sign in](https://portal.azure.com) to the Azure portal.
-
-2. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
-
-2. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> Enter **myResourceGroupLB**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **East US 2** |
-
-3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-4. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-5. Under **Subnet name**, select the word **default**.
-
-6. In **Edit subnet**, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-7. Select **Save**.
-
-8. Select the **Security** tab.
-
-9. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
--
-8. Select the **Review + create** tab or select the **Review + create** button.
-
-9. Select **Create**.
-
-### Create a virtual machine
-
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **myResourceGroupLB** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM** |
- | Region | Select **East US 2** |
- | Availability Options | Select **No infrastructure redundancy required** |
- | Image | Select **Windows Server 2019 Datacenter** |
- | Azure Spot instance | Select **No** |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **Allow selected ports** |
- | Select inbound ports | Select **RDP (3389)** |
-
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
-
- | Setting | Value |
- |--||
- | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **myBackendSubnet** |
- | Public IP | Select **None**. |
- | NIC network security group | Select **None**|
- | Place this virtual machine behind an existing load-balancing solution? | Select **No** |
-
-5. Select the **Management** tab, or select **Next** > **Management**.
-
-6. In the **Management** tab, select or enter:
-
- | Setting | Value |
- |-|-|
- | **Monitoring** | |
- | Boot diagnostics | Select **Off** |
-
-7. Select **Review + create**.
-
-8. Review the settings, and then select **Create**.
-
-## Create load balancers and test connectivity
-
-Use the Azure portal to create:
-
-* Internal load balancer
-* Public load balancer
-
-Add your created VM to the backend pool of each. You'll then set up a configuration to only permit outbound connectivity from your VM, testing before and after.
-
-### Create internal load balancer
-
-1. On the top left-hand side of the screen, select **Create a resource** > **Networking** > **Load Balancer**.
-
-2. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroupLB** created in the previous step.|
- | Name | Enter **myInternalLoadBalancer** |
- | Region | Select **East US 2**. |
- | Type | Select **Internal**. |
- | SKU | Select **Standard** |
- | Virtual network | Select **myVNet** created in the previous step. |
- | Subnet | Select **myBackendSubnet** created in the previous step. |
- | IP address assignment | Select **Dynamic**. |
-
-3. Accept the defaults for the remaining settings, and then select **Review + create**.
-
-4. In the **Review + create** tab, select **Create**.
-
-### Create public load balancer
-
-1. On the top left-hand side of the screen, select **Create a resource** > **Networking** > **Load Balancer**.
-
-2. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and enter **myResourceGroupLB** in the text box.|
- | Name | Enter **myPublicLoadBalancer** |
- | Region | Select **East US 2**. |
- | Type | Select **Public**. |
- | SKU | Select **Standard** |
- | Public IP address | Select **Create new**. |
- | Public IP address name | Enter **myFrontendIP** in the text box.|
- | Availability zone | Select **Zone-redundant** |
- | Add a public IPv6 address | Select **No**. |
-
-3. Accept the defaults for the remaining settings, and then select **Review + create**.
-
-4. In the **Review + create** tab, select **Create**.
-
-### Create internal backend address pool
-
-Create the backend address pool **myInternalBackendPool**:
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myInternalLoadBalancer** from the resources list.
-
-2. Under **Settings**, select **Backend pools**, then select **Add**.
-
-3. On the **Add a backend pool** page, for name, type **myInternalBackendPool**, as the name for your backend pool.
-
-4. In **Virtual network**, select **myVNet**.
-
-5. Under **Virtual machines**, select **Add** and choose **myVM**.
-
-6. select **Add**.
-
-### Create public backend address pool
-
-Create the backend address pool **myPublicBackendPool**:
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicLoadBalancer** from the resources list.
-
-2. Under **Settings**, select **Backend pools**, then select **Add**.
-
-3. On the **Add a backend pool** page, for name, type **myPublicBackendPool**, as the name for your backend pool.
-
-4. In **Virtual network**, select **myVNet**.
-
-5. Under **Virtual machines**, select **Add** and choose **myVM**.
-
-6. select **Add**.
-
-### Test connectivity before outbound rule
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM** that is located in the **myResourceGroupLB** resource group.
-
-2. On the **Overview** page, select **Connect**, then **Bastion**.
-
-4. Enter the username and password entered during VM creation.
-
-5. Select **Connect**.
-
-6. Open Internet Explorer.
-
-7. Enter **https://whatsmyip.org** in the address bar.
-
-8. The connection should fail. By default, standard public load balancer [doesn't allow outbound traffic without a defined outbound rule](load-balancer-overview.md#securebydefault).
-
-### Create a public load balancer outbound rule
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicLoadBalancer** from the resources list.
-
-2. Under **Settings**, select **Outbound rules**, then select **Add**.
-
-3. Use these values to configure the outbound rules:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myOutboundRule**. |
- | Frontend IP address | Select **LoadBalancerFrontEnd**.|
- | Idle timeout (minutes) | Move slider to **15 minutes**.|
- | TCP Reset | Select **Enabled**.|
- | Backend pool | Select **myPublicBackendPool**.|
- | Port allocation -> Port allocation | Select **Use the default number of outbound ports** |
-
-4. Select **Add**.
-
-### Test connectivity after outbound rule
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM** that is located in the **myResourceGroupLB** resource group.
-
-2. On the **Overview** page, select **Connect**, then **Bastion**.
-
-4. Enter the username and password entered during VM creation.
-
-5. Select **Connect**.
-
-6. Open Internet Explorer.
-
-7. Enter **https://whatsmyip.org** in the address bar.
-
-8. The connection should succeed.
-
-9. The IP address displayed should be the frontend IP address of **myPublicLoadBalancer**.
-
-## Clean up resources
-
-When no longer needed, delete the resource group, load Balancers, VM, and all related resources.
-
-To do so, select the resource group **myResourceGroupLB** and then select **Delete**.
-
-## Next steps
-
-In this tutorial, you created an "egress only" configuration with a combination of public and internal load balancers.
-
-This configuration allows you to load balance incoming internal traffic to your backend pool while still preventing any public inbound connections.
--- Learn about [Azure Load Balancer](load-balancer-overview.md).-- Learn about [outbound connections in Azure](load-balancer-outbound-connections.md).-- Load balancer [FAQs](load-balancer-faqs.yml).-- Learn about [Azure Bastion](../bastion/bastion-overview.md)
load-balancer Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/manage.md
Title: Azure Load Balancer portal settings description: Get started learning about Azure Load Balancer portal settings- Previously updated : 09/8/2020 Last updated : 08/16/2021
As you create Azure Load Balancer, information in this article will help you lea
## Create load balancer
-Azure Load Balancer is a network load balancer that distributes traffic across VM instances in the backend pool.
+Azure Load Balancer is a network load balancer that distributes traffic across VM instances in the backend pool.
+To create a load balancer in the portal, select the search box at the top of the page. Enter **Load balancer**. Select **Load balancers** in the search results. On the **Load balancers** page, select **+ Create**.
### Basics

In the **Basics** tab of the create load balancer portal page, you'll see the following information:

| Setting | Details |
| - | - |
| Subscription | Select your subscription. This selection is the subscription you want your load balancer to be deployed in. |
| Region | Select an Azure region you'd like to deploy your load balancer in. |
| Type | Load balancer has two types: </br> **Internal (Private)** </br> **Public (External)**. </br> An internal load balancer (ILB) routes traffic to backend pool members via a private IP address. </br> A public load balancer directs requests from clients over the internet to the backend pool. </br> Learn more about [load balancer types](components.md#frontend-ip-configuration-). |
| SKU | Select **Standard**. </br> Load balancer has two SKUs: **Basic** and **Standard**. </br> Basic has limited functionality. </br> **Standard** is recommended for production workloads. </br> Learn more about [SKUs](skus.md). |
+| Tier | Load balancer has two tiers: </br> **Regional** </br> **Global** </br> A regional load balancer is constrained to load balancing within a region. Global refers to a cross-region load balancer that load-balances across regions. </br> For more information on the **Global** tier, see [Cross-region load balancer (preview)](cross-region-overview.md). |
++
+### Frontend IP configuration
+
+In the **Frontend IP configuration** tab of the create load balancer portal page, select **+ Add frontend IP address** to open the creation page.
++
+#### **+ Add a frontend IP**
+##### Public
If you select **Public** as your type, you'll see the following information:
-| Setting | Details |
-| - | - |
-| Public IP address | Select **Create new** to create a public IP address for your public load balancer. </br> If you have an existing public IP, select **Use existing**. |
-| Public IP address name | The name of the public IP address.|
-| Public IP address SKU | Public IP addresses have two SKUs: **Basic** and **Standard**. </br> Basic doesn't support zone-resiliency and zonal attributes. </br> **Standard** is recommended for production workloads. </br> Load balancer and public IP address SKUs **must match**. |
+| Setting | Details |
+| - | - |
+| Name | The name of the frontend that will be added to the load balancer. |
+| IP version | **IPv4** </br> **IPv6** </br> Load balancer supports IPv4 and IPv6 frontends. </br> For more information, see [Load Balancer and IPv6](load-balancer-ipv6-overview.md). |
+| IP type | **IP address** </br> **IP prefix** </br> Load balancer supports an IP address or an IP prefix for the frontend IP address. For more information, see [Azure Public IP address prefix](../virtual-network/public-ip-address-prefix.md). |
++
+If you select **IP address** for **IP type**, you'll see the following information:
+
+| Setting | Details |
+| - | - |
+| Public IP address | Select **Create new** to create a public IP address for your public load balancer. </br> If you have an existing public IP, select it from the drop-down box. |
+| Name | The name of the public IP address resource. |
+| SKU | Public IP addresses have two SKUs: **Basic** and **Standard**. </br> Basic doesn't support zone-resiliency and zonal attributes. </br> **Standard** is recommended for production workloads. </br> Load balancer and public IP address SKUs **must match**. |
+| Tier | **Regional** </br> **Global** </br> The tier of the load balancer determines which option is selected: regional for a traditional load balancer, global for a cross-region load balancer. |
| Assignment | **Static** is automatically selected for Standard. </br> Basic public IPs have two types: **Dynamic** and **Static**. </br> Dynamic public IP addresses aren't assigned until creation. </br> IPs can be lost if the resource is deleted. </br> Static IP addresses are recommended. |
| Availability zone | Select **Zone-redundant** to create a resilient load balancer. </br> To create a zonal load balancer, select a specific zone from **1**, **2**, or **3**. </br> Standard load balancer and public IPs support zones. </br> Learn more about [load balancer and availability zones](load-balancer-standard-availability-zones.md). </br> You won't see zone selection for basic. Basic load balancer doesn't support zones. |
-| Add a public IPv6 address | Load balancer supports **IPv6** addresses for your frontend. </br> Learn more about [load Balancer and IPv6](load-balancer-ipv6-overview.md)
| Routing preference | Select **Microsoft Network**. </br> Microsoft Network means that traffic is routed via the Microsoft global network. </br> Internet means that traffic is routed through the internet service provider network. </br> Learn more about [Routing Preferences](../virtual-network/routing-preference-overview.md)| +
+If you select **IP prefix** for **IP type**, you'll see the following information:
+
+| Setting | Details |
+| - | - |
+| Public IP prefix | Select **Create new** to create a public IP prefix for your public load balancer. </br> If you have an existing public prefix, select it from the drop-down box. |
+| Name | The name of the public IP prefix resource. |
+| SKU | Public IP prefixes have one SKU, **Standard**. |
+| IP version | **IPv4** or **IPv6**. </br> The version displayed will correspond to the version chosen above. |
+| Prefix size | IPv4 or IPv6 prefixes are displayed depending on the selection above. </br> **IPv4** </br> /24 (256 addresses) </br> /25 (128 addresses) </br> /26 (64 addresses) </br> /27 (32 addresses) </br> /28 (16 addresses) </br> /29 (8 addresses) </br> /30 (4 addresses) </br> /31 (2 addresses) </br> **IPv6** </br> /124 (16 addresses) </br> /125 (8 addresses) </br> /126 (4 addresses) </br> /127 (2 addresses) |
+| Availability zone | Select **Zone-redundant** to create a resilient load balancer. </br> To create a zonal load balancer, select a specific zone from **1**, **2**, or **3**. </br> Standard load balancer and public IP prefixes support zones. </br> Learn more about [load balancer and availability zones](load-balancer-standard-availability-zones.md).
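The address counts in the prefix size table follow directly from the prefix length (2^(32 − length) for IPv4, 2^(128 − length) for IPv6). A quick check with Python's stdlib `ipaddress` module; the example prefixes are arbitrary:

```python
import ipaddress

def addresses_in_prefix(prefix: str) -> int:
    """Number of addresses in an IPv4 or IPv6 prefix, e.g. '10.0.0.0/28' -> 16."""
    return ipaddress.ip_network(prefix).num_addresses

# Matches the table: /28 -> 16, /31 -> 2 (IPv4); /124 -> 16 (IPv6).
sizes = {p: addresses_in_prefix(p)
         for p in ("10.0.0.0/28", "10.0.0.0/31", "2001:db8::/124")}
```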
-If you select **Internal** in Type, you'll see the following information:
+
+##### Internal
+
+If you select **Internal** as your type in the **Basics** tab, you'll see the following information:
| Setting | Details |
| - | - |
| Virtual network | The virtual network you want your internal load balancer to be part of. </br> The private frontend IP address you select for your internal load balancer will be from this virtual network. |
-| IP address assignment | Your options are **Static** or **Dynamic**. </br> Static ensures the IP doesn't change. A dynamic IP could change. |
+| Subnet | The subnets available for the IP address of the frontend IP are displayed here. |
+| Assignment | Your options are **Static** or **Dynamic**. </br> Static ensures the IP doesn't change. A dynamic IP could change. |
| Availability zone | Your options are: </br> **Zone redundant** </br> **Zone 1** </br> **Zone 2** </br> **Zone 3** </br> To create a load balancer that is highly available and resilient to availability zone failures, select a **zone-redundant** IP. |
+### Backend pools
+
+In the **Backend pools** tab of the create load balancer portal page, select **+ Add a backend pool** to open the creation page.
++
+#### **+ Add a backend pool**
+
+The following is displayed in the **+ Add a backend pool** creation page:
+
+| Setting | Details |
+| - | - |
+| Name | The name of your backend pool. |
+| Virtual network | The virtual network your backend instances are in. |
+| Backend pool configuration | Your options are: </br> **NIC** </br> **IP address** </br> NIC configures the backend pool to use the network interface card of the virtual machines. </br> IP address configures the backend pool to use the IP address of the virtual machines. </br> For more information on backend pool configuration, see [Backend pool management](backend-pool-management.md). |
+| IP version | Your options are **IPv4** or **IPv6**. |
+
+You can add virtual machines or virtual machine scale sets to the backend pool of your Azure Load Balancer. Create the virtual machines or virtual machine scale sets first.
++
+### Inbound rules
+
+There are two sections in the **Inbound rules** tab, **Load balancing rule** and **Inbound NAT rule**.
+
+In the **Inbound rules** tab of the create load balancer portal page, select **+ Add a load balancing rule** to open the creation page.
++
+#### **+ Add a load balancing rule**
+
+The following is displayed in the **+ Add a load balancing rule** creation page:
+
+| Setting | Details |
+| - | - |
+| Name | The name of the load balancer rule. |
+| IP Version | Your options are **IPv4** or **IPv6**. |
+| Frontend IP address | Select the frontend IP address. </br> The frontend IP address of your load balancer you want the load balancer rule associated to.|
+| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: **TCP** or **UDP**. |
+| Port | This setting is the port associated with the frontend IP that you want traffic to be distributed based on this load-balancing rule. |
+| Backend port | This setting is the port on the instances in the backend pool you would like the load balancer to send traffic to. This setting can be the same as the frontend port or different if you need the flexibility for your application. |
+| Backend pool | The backend pool you would like this load balancer rule to be applied on. |
+| Health probe | Select **Create new**, to create a new probe. </br> Only healthy instances will receive new traffic. |
+| Session persistence | Your options are: </br> **None** </br> **Client IP** </br> **Client IP and protocol** </br></br> Session persistence maintains traffic from a client to the same virtual machine in the backend pool for the duration of the session. </br> **None** specifies that successive requests from the same client may be handled by any virtual machine. </br> **Client IP** specifies that successive requests from the same client IP address will be handled by the same virtual machine. </br> **Client IP and protocol** ensures that successive requests from the same client IP address and protocol will be handled by the same virtual machine. </br> Learn more about [distribution modes](load-balancer-distribution-mode.md). |
+| Idle timeout (minutes) | Keep a **TCP** or **HTTP** connection open without relying on clients to send keep-alive messages. |
+| TCP reset | Load balancer can send **TCP resets** to help create a more predictable application behavior when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md). |
+| Floating IP | Floating IP is Azure's terminology for a portion of what is known as **Direct Server Return (DSR)**. </br> DSR consists of two parts: <br> 1. Flow topology </br> 2. An IP address-mapping scheme at a platform level. </br></br> Azure Load Balancer always operates in a DSR flow topology whether floating IP is enabled or not. </br> This operation means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin. </br> Without floating IP, Azure exposes a traditional load-balancing IP address-mapping scheme, the VM instances' IP. </br> Enabling floating IP changes the IP address mapping to the frontend IP of the load balancer to allow for additional flexibility. </br> For more information, see [Multiple frontends for Azure Load Balancer](load-balancer-multivip-overview.md). |
+| Outbound source network address translation (SNAT) | Your options are: </br> **(Recommended) Use outbound rules to provide backend pool members access to the internet.** </br> **Use implicit outbound rule. This is not recommended because it can cause SNAT port exhaustion.** </br> Select the **Recommended** option to prevent SNAT port exhaustion. A **NAT gateway** or **Outbound rules** are required to provide SNAT for the backend pool members. For more information on **NAT gateway**, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md). </br> For more information on outbound connections in Azure, see [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md). |
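Azure doesn't publish the exact hash it uses, but the session persistence modes above can be sketched conceptually: the load balancer hashes a key built from the flow's fields, and the persistence mode controls which fields go into the key. A hypothetical sketch (SHA-256 stands in for the real, unpublished hash; backend names are invented):

```python
import hashlib

def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, proto,
                 persistence="None"):
    """Conceptual sketch of layer-4 distribution modes.

    'None' hashes the full five-tuple, 'Client IP' only the source IP,
    and 'Client IP and protocol' the source IP plus protocol.
    """
    if persistence == "Client IP":
        key = src_ip
    elif persistence == "Client IP and protocol":
        key = f"{src_ip}|{proto}"
    else:
        # Five-tuple hash: a new source port may land on a different backend.
        key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}"
    digest = hashlib.sha256(key.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]
```

With **Client IP**, two connections from the same client IP map to the same backend regardless of source port; with **None**, each new flow is hashed independently.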
+
-## Frontend IP configuration
+#### Create health probe
+
+If you selected **Create new** in the health probe configuration of the load-balancing rule above, the following options are displayed:
+
+| Setting | Details |
+| - | - |
+| Name | The name of your health probe. |
+| Protocol | The protocol you select determines the type of check used to determine if the backend instance(s) are healthy. </br> Your options are: </br> **TCP** </br> **HTTPS** </br> **HTTP** </br> Ensure you're using the right protocol. This selection will depend on the nature of your application. </br> The configuration of the health probe and probe responses determines which backend pool instances will receive new flows. </br> You can use health probes to detect the failure of an application on a backend endpoint. </br> Learn more about [health probes](load-balancer-custom-probe-overview.md). |
+| Port | The destination port for the health probe. </br> This setting is the port on the backend instance the health probe will use to determine the instance's health. |
+| Interval | The number of seconds in between probe attempts. </br> The interval will determine how frequently the health probe will attempt to reach the backend instance. </br> If you select 5, the second probe attempt will be made after 5 seconds and so on. |
+| Unhealthy threshold | The number of consecutive probe failures that must occur before a VM is considered unhealthy. </br> If you select 2, no new flows will be sent to this backend instance after two consecutive failures. |
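The interval and unhealthy threshold together determine roughly how long a failing instance keeps receiving new flows: the probe must fail the configured number of consecutive times, spaced one interval apart. A rough estimate (it ignores probe timeout and propagation delay):

```python
def seconds_to_unhealthy(interval_seconds: int, unhealthy_threshold: int) -> int:
    """Approximate worst-case time before a backend instance stops
    receiving new flows: the probe must fail `unhealthy_threshold`
    consecutive times, `interval_seconds` apart."""
    return interval_seconds * unhealthy_threshold

print(seconds_to_unhealthy(5, 2))   # -> 10
print(seconds_to_unhealthy(15, 2))  # -> 30
```

So a 5-second interval with a threshold of 2 marks an instance unhealthy in roughly 10 seconds.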
++
+In the **Inbound rules** tab of the create load balancer portal page, select **+ Add an inbound NAT rule** to open the creation page.
+
+#### **+ Add an inbound NAT rule**
+
+The following is displayed in the **+ Add inbound NAT rule** creation page:
+
+| Setting | Details |
+| - | - |
+| Name | The name of your inbound NAT rule. |
+| Frontend IP address | Select the frontend IP address. </br> The frontend IP address of your load balancer you want the inbound NAT rule associated to. |
+| IP Version | Your options are IPv4 and IPv6. |
+| Service | The type of service you'll be running on Azure Load Balancer. </br> A selection here will update the port information appropriately. |
+| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: TCP or UDP. |
+| Idle timeout (minutes) | Keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. |
+| TCP Reset | Load balancer can send TCP resets to help create a more predictable application behavior when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) |
+| Port | This setting is the port associated with the frontend IP that you want traffic to be distributed based on this inbound NAT rule. |
+| Target virtual machine | The virtual machine part of the backend pool you would like this rule to be associated to. |
+| Port mapping | This setting can be default or custom based on your application preference. |
++
+### Outbound rules
+
+In the **Outbound rules** tab of the create load balancer portal page, select **+ Add an outbound rule** to open the creation page.
+
+> [!NOTE]
+> The outbound rules tab is only valid for a public standard load balancer. Outbound rules are not supported on an internal or basic load balancer. Azure Virtual Network NAT is the recommended way to provide outbound internet access for the backend pool. For more information on **Azure Virtual Network NAT** and the NAT gateway resource, see **[What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)**.
++
+#### **+ Add an outbound rule**
+
+The following is displayed in the **+ Add an outbound rule** creation page:
+
+| Setting | Details |
+| - | |
+| Name | The name of your outbound rule. |
+| Frontend IP address | Select the frontend IP address. </br> The frontend IP address of your load balancer you want the outbound rule to be associated to. |
+| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: **All**, **TCP**, or **UDP**. |
+| Idle timeout (minutes) | Keep a **TCP** or **HTTP** connection open without relying on clients to send keep-alive messages. |
+| TCP Reset | Load balancer can send **TCP resets** to help create a more predictable application behavior when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) |
+| Backend pool | The backend pool you would like this outbound rule to be applied on. |
+| **Port allocation** | |
+| Port allocation | Your choices are: </br> **Manually choose number of outbound ports** </br> **Use the default number of outbound ports** </br> It's recommended to select **Manually choose number of outbound ports** to prevent SNAT port exhaustion. If you choose **Use the default number of outbound ports**, the **Outbound ports** selection is disabled. |
+| Outbound ports | Your choices are: </br> **Ports per instance** </br> **Maximum number of backend instances**. </br> It's recommended to select **Ports per instance** and enter **10,000**. |
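To see why manual port allocation matters, it helps to run the arithmetic. Assuming each frontend IP contributes 64,000 SNAT ports (a commonly cited figure; verify against current Azure limits), a fixed per-instance allocation caps the number of backend instances the outbound rule can serve:

```python
def max_backend_instances(frontend_ips: int, ports_per_instance: int,
                          ports_per_frontend_ip: int = 64_000) -> int:
    """How many backend instances an outbound rule can cover when SNAT
    ports are manually allocated per instance. Assumes 64,000 usable
    SNAT ports per frontend IP (check current limits)."""
    return (frontend_ips * ports_per_frontend_ip) // ports_per_instance

print(max_backend_instances(1, 10_000))  # -> 6
print(max_backend_instances(2, 10_000))  # -> 12
```

With one frontend IP and 10,000 ports per instance, the rule covers at most 6 backend instances; add frontend IPs to scale further.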
++
+## Portal settings
+### Frontend IP configuration
The IP address of your Azure Load Balancer. It's the point of contact for clients.
-You can have one or many frontend IP configurations. If you went through the basics section above, you would have already created a frontend for your load balancer.
+You can have one or many frontend IP configurations. If you went through the create section above, you would have already created a frontend for your load balancer.
If you want to add a frontend IP configuration to your load balancer, go to your load balancer in the Azure portal, select **Frontend IP configuration**, and then select **+Add**.
If you want to add a frontend IP configuration to your load balancer, go to your
:::image type="content" source="./media/manage/frontend.png" alt-text="Create frontend ip configuration page." border="true":::
-## Backend pools
+### Backend pools
A backend address pool contains the IP addresses of the virtual network interfaces in the backend pool.
If you want to add a backend pool to your load balancer, go to your load balance
| - | - |
| Name | The name of your backend pool. |
| Virtual network | The virtual network your backend instances are in. |
+| Backend Pool Configuration | Your options are: </br> **NIC** </br> **IP address** </br> NIC configures the backend pool to use the network interface card of the virtual machines. </br> IP address configures the backend pool to use the IP address of the virtual machines. </br> For more information on backend pool configuration see, [Backend pool management](backend-pool-management.md). |
| IP version | Your options are **IPv4** or **IPv6**. |

You can add virtual machines or virtual machine scale sets to the backend pool of your Azure Load Balancer. Create the virtual machines or virtual machine scale sets first. Next, add them to the load balancer in the portal.

:::image type="content" source="./media/manage/backend.png" alt-text="Create backend pool page." border="true":::
-## Health probes
+### Health probes
A health probe is used to monitor the status of your backend VMs or instances. The health probe status determines when new connections are sent to an instance based on health checks.
If you want to add a health probe to your load balancer, go to your load balance
| Interval | The number of seconds in between probe attempts. </br> The interval will determine how frequently the health probe will attempt to reach the backend instance. </br> If you select 5, the second probe attempt will be made after 5 seconds and so on. |
| Unhealthy threshold | The number of consecutive probe failures that must occur before a VM is considered unhealthy. </br> If you select 2, no new flows will be sent to this backend instance after two consecutive failures. |
-## Load-balancing rules
+### Load-balancing rules
Defines how incoming traffic is distributed to all the instances within the backend pool. A load-balancing rule maps a given frontend IP configuration and port to multiple backend IP addresses and ports.
If you want to add a load balancer rule to your load balancer, go to your load b
| Idle timeout (minutes) | Keep a **TCP** or **HTTP** connection open without relying on clients to send keep-alive messages. |
| TCP reset | Load balancer can send **TCP resets** to help create a more predictable application behavior when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md). |
| Floating IP | Floating IP is Azure's terminology for a portion of what is known as **Direct Server Return (DSR)**. </br> DSR consists of two parts: <br> 1. Flow topology </br> 2. An IP address-mapping scheme at a platform level. </br></br> Azure Load Balancer always operates in a DSR flow topology whether floating IP is enabled or not. </br> This operation means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin. </br> Without floating IP, Azure exposes a traditional load-balancing IP address-mapping scheme, the VM instances' IP. </br> Enabling floating IP changes the IP address mapping to the frontend IP of the load balancer to allow for additional flexibility. </br> For more information, see [Multiple frontends for Azure Load Balancer](load-balancer-multivip-overview.md). |
-| Create implicit outbound rules | Select **No**. </br> Default: **disableOutboundSnat = false** </br> In this case outbound occurs via same frontend IP. </br></br> **disableOutboundSnat = true** </br>In this case, outbound rules are needed for outbound. |
+| Outbound source network address translation (SNAT) | Your options are: </br> **(Recommended) Use outbound rules to provide backend pool members access to the internet.** </br> **Use implicit outbound rule. This is not recommended because it can cause SNAT port exhaustion.** </br> Select the **Recommended** option to prevent SNAT port exhaustion. A **NAT gateway** or **Outbound rules** are required to provide SNAT for the backend pool members. For more information on **NAT gateway**, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md). </br> For more information on outbound connections in Azure, see [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md). |
-## Inbound NAT rules
+### Inbound NAT rules
An inbound NAT rule forwards incoming traffic sent to frontend IP address and port combination.
If you want to add an inbound nat rule to your load balancer, go to your load ba
| Target virtual machine | The virtual machine part of the backend pool you would like this rule to be associated to. |
| Port mapping | This setting can be default or custom based on your application preference. |
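Conceptually, an inbound NAT rule is a per-VM port-forwarding entry: one frontend port maps to exactly one target virtual machine and backend port, unlike a load-balancing rule, which distributes across a pool. A hypothetical sketch (the rule table, ports, and VM names are invented for illustration):

```python
# Conceptual model of inbound NAT rules: each frontend port forwards to
# exactly one (VM, backend port) pair. Entries here are hypothetical.
nat_rules = {
    4221: ("myVM1", 3389),  # e.g. RDP to the first VM
    4222: ("myVM2", 3389),  # e.g. RDP to the second VM
}

def forward(frontend_port: int):
    """Return the (vm, backend_port) destination for traffic arriving on
    the given frontend port, or raise if no rule matches."""
    if frontend_port not in nat_rules:
        raise KeyError(f"no inbound NAT rule for port {frontend_port}")
    return nat_rules[frontend_port]
```

This is why each inbound NAT rule names a single target virtual machine: the mapping is one-to-one, not load balanced.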
-## Outbound rules
+### Outbound rules
Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
If you want to add an outbound rule to your load balancer, go to your load balan
| TCP Reset | Load balancer can send **TCP resets** to help create a more predictable application behavior when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) |
| Backend pool | The backend pool you would like this outbound rule to be applied on. |
-### Port allocation
-
-| Setting | Details |
-| - | |
-| Port allocation | We recommend selecting **Manually choose number of outbound ports**.|
-
-### Outbound ports
-
-| Setting | Details |
-| - | |
-| Choose by | Select **Ports per instance** |
-| Ports per instance | Enter **10,000**. |
- ## Next Steps
load-balancer Tutorial Cross Region Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-cross-region-portal.md
Previously updated : 02/24/2021 Last updated : 08/02/2021 #Customer intent: As an administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
If you don't have an Azure subscription, create a [free account](https://azure
## Create cross-region load balancer
-In this section, you'll create a cross-region load balancer and public IP address.
+In this section, you'll create:
-1. Select **Create a resource**.
-2. In the search box, enter **Load balancer**. Select **Load balancer** in the search results.
-3. In the **Load balancer** page, select **Create**.
-4. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+* Cross-region load balancer
+* Frontend with a global public IP address
+* Backend pool with two regional load balancers
+
+> [!IMPORTANT]
+> To complete these steps, ensure that two regional load balancers with backend pools have been deployed in your subscription. For more information, see, **[Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md)**.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancer** in the search results.
+
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
| Setting | Value |
| --- | --- |
+ | **Project details** | |
| Subscription | Select your subscription. |
- | Resource group | Select **Create new** and enter **CreateCRLBTutorial-rg** in the text box.|
+ | Resource group | Select **Create new** and enter **CreateCRLBTutorial-rg** in the text box. |
+ | **Instance details** | |
| Name | Enter **myLoadBalancer-CR** |
| Region | Select **(US) West US**. |
| Type | Select **Public**. |
| SKU | Leave the default of **Standard**. |
| Tier | Select **Global** |
- | Public IP address | Select **Create new**.|
- | Public IP address name | Type **myPublicIP-CR** in the text box.|
- | Routing preference| Select **Microsoft network**. </br> For more information on routing preference, see [What is routing preference (preview)?](../virtual-network/routing-preference-overview.md). |
-
- > [!NOTE]
- > Cross region load-balancer can only be deployed in the following home regions: **East US 2, West US, West Europe, Southeast Asia, Central US, North Europe, East Asia**. For more information, see **https://aka.ms/homeregionforglb**.
--
-3. Accept the defaults for the remaining settings, and then select **Review + create**.
-
-4. In the **Review + create** tab, select **Create**.
:::image type="content" source="./media/tutorial-cross-region-portal/create-cross-region.png" alt-text="Create a cross-region load balancer" border="true":::
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
-## Create backend pool
-
-In this section, you'll add two regional standard load balancers to the backend pool of the cross-region load balancer.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-> [!IMPORTANT]
-> To complete these steps, ensure that two regional load balancers with backend pools have been deployed in your subscription. For more information, see, **[Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md)**.
-
-Create the backend address pool **myBackendPool-CR** to include the regional standard load balancers.
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer-CR** from the resources list.
+6. Enter **LoadBalancerFrontend** in **Name** in **Add frontend IP address**.
-2. Under **Settings**, select **Backend pools**, then select **Add**.
+7. Select **IPv4** or **IPv6** for **IP version**.
-3. On the **Add a backend pool** page, for name, type **myBackendPool-CR**.
+8. In **Public IP address**, select **Create new**. Enter **myPublicIP-cr** in **Name**. Select **OK**.
-4. Select **Add**.
+9. Select **Add**.
-4. Select **myBackendPool-CR**.
+10. Select **Next: Backend pools** at the bottom of the page.
-5. Under **Load balancers**, select the pull-down box under **Load balancer**.
+11. In **Backend pools**, select **+ Add a backend pool**.
-5. Select **myLoadBalancer-R1**, or the name of your load balancer in region 1.
+12. Enter **myBackendPool-cr** in **Name** in **Add backend pool**.
-6. Select the pull-down box under **Frontend IP configuration**. Choose **LoadBalancerFrontEnd**.
+13. In **Load balancers**, select **myLoadBalancer-r1** or your first regional load balancer in the **Load balancer** pull-down box. Verify the **Frontend IP configuration** and **IP address** correspond with **myLoadBalancer-r1**.
-7. Repeat steps 4-6 to add **myLoadBalancer-R2**.
+14. Select **myLoadBalancer-r2** or your second regional load balancer in the **Load balancer** pull-down box. Verify the **Frontend IP configuration** and **IP address** correspond with **myLoadBalancer-r2**.
-8. Select **Add**.
+15. Select **Add**.
- :::image type="content" source="./media/tutorial-cross-region-portal/add-to-backendpool.png" alt-text="Add regional load balancers to backendpool" border="true":::
+16. Select **Next: Inbound rules** at the bottom of the page.
-## Create a load balancer rule
+17. In **Inbound rules**, select **+ Add a load balancing rule**.
-In this section, you'll create a load balancer rule:
-
-* Named **myHTTPRule**.
-* In the frontend named **LoadBalancerFrontEnd**.
-* Listening on **Port 80**.
-* Directs load balanced traffic to the backend named **myBackendPool-CR** on **Port 80**.
-
- > [!NOTE]
- > Frontend port must match backend port and the frontend port of the regional load balancers in the backend pool.
+18. In **Add load balancing rule**, enter or select the following information:
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer-CR** from the resources list.
-
-2. Under **Settings**, select **Load-balancing rules**, then select **Add**.
-
-3. Use these values to configure the load-balancing rule:
-
| Setting | Value |
| - | -- |
- | Name | Enter **myHTTPRule**. |
- | IP Version | Select **IPv4** |
- | Frontend IP address | Select **LoadBalancerFrontEnd** |
+ | Name | Enter **myHTTPRule-cr**. |
+ | IP Version | Select **IPv4** or **IPv6**. |
+ | Frontend IP address | Select **LoadBalancerFrontend**. |
| Protocol | Select **TCP**. |
- | Port | Enter **80**.|
- | Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**.|
- | Idle timeout (minutes) | Move slider to **15**. |
+ | Port | Enter **80**. |
+ | Backend pool | Select **myBackendPool-cr**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or move the slider to **15**. |
| TCP reset | Select **Enabled**. |
+ | Floating IP | Leave the default of **Disabled**. |
+
+19. Select **Add**.
-4. Leave the rest of the defaults and then select **OK**.
+20. Select **Review + create** at the bottom of the page.
- :::image type="content" source="./media/tutorial-cross-region-portal/create-lb-rule.png" alt-text="Create load balancer rule" border="true":::
+21. Select **Create** in the **Review + create** tab.
+
+ > [!NOTE]
+ > A cross-region load balancer can only be deployed in the following home regions: **East US 2, West US, West Europe, Southeast Asia, Central US, North Europe, East Asia**. For more information, see **https://aka.ms/homeregionforglb**.
## Test the load balancer
load-balancer Tutorial Load Balancer Ip Backend Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md
Previously updated : 3/31/2021 Last updated : 08/06/2021
In this section, you'll create a NAT gateway and assign it to the subnet in the
10. Select **Create**.

## Create load balancer
-In this section, you'll create a Standard Azure Load Balancer.
+In this section, you'll create a zone-redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Create a resource**.
-3. In the search box, enter **Load balancer**. Select **Load balancer** in the search results.
-4. In the **Load balancer** page, select **Create**.
-5. On the **Create load balancer** page enter, or select the following information:
+During the creation of the load balancer, you'll configure:
+
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
| Setting | Value |
| --- | --- |
| **Project details** | |
| Subscription | Select your subscription. |
- | Resource group | Select **TutorPubLBIP-rg**.|
+ | Resource group | Select **TutorPubLBIP-rg**. |
| **Instance details** | |
| Name | Enter **myLoadBalancer** |
| Region | Select **(US) East US**. |
| Type | Select **Public**. |
| SKU | Leave the default **Standard**. |
| Tier | Leave the default **Regional**. |
- | **Public IP address** | |
- | Public IP address | Select **Create new**. </br> If you have an existing Public IP you would like to use, select **Use existing**. |
- | Public IP address name | Enter **myPublicIP-LB** in the text box.|
- | Availability zone | Select **Zone-redundant** to create a resilient load balancer. To create a zonal load balancer, select a specific zone from 1, 2, or 3 |
- | Add a public IPv6 address | Select **No**. </br> For more information on IPv6 addresses and load balancer, see [What is IPv6 for Azure Virtual Network?](../virtual-network/ipv6-overview.md) |
- | Routing preference | Leave the default of **Microsoft network**. </br> For more information on routing preference, see [What is routing preference (preview)?](../virtual-network/routing-preference-overview.md). |
-6. Accept the defaults for the remaining settings, and then select **Review + create**.
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
-7. In the **Review + create** tab, select **Create**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-## Create load balancer resources
+6. Enter **LoadBalancerFrontend** in **Name**.
-In this section, you configure:
+7. Select **IPv4** or **IPv6** for the **IP version**.
-* Load balancer settings for a backend address pool.
-* A health probe.
-* A load balancer rule.
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-### Create a backend pool
+8. Select **IP address** for the **IP type**.
-A backend address pool contains the IP addresses of the virtual (NICs) connected to the load balancer.
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/public-ip-address-prefix.md).
-Create the backend address pool **myBackendPool** to include virtual machines for load-balancing internet traffic.
+9. Select **Create new** in **Public IP address**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
-2. Under **Settings**, select **Backend pools**, then select **+ Add**.
+11. Select **Zone-redundant** in **Availability zone**.
-3. On the **Add a backend pool** page, enter or select the following information:
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
- | Setting | Value |
- | - | -- |
- | Name | Enter **myBackendPool**. |
- | Virtual network | Select **myVNet**. |
- | Backend Pool Configuration | Select **IP Address**. |
- | IP Version | Select **IPv4**. |
+12. Leave the default of **Microsoft Network** for **Routing preference**.
-4. Select **Add**.
+13. Select **OK**.
-### Create a health probe
+14. Select **Add**.
-The load balancer monitors the status of your app with a health probe.
+15. Select **Next: Backend pools** at the bottom of the page.
-The health probe adds or removes VMs from the load balancer based on their response to health checks.
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
-Create a health probe named **myHealthProbe** to monitor the health of the VMs.
+17. Enter **myBackendPool** for **Name** in **Add backend pool**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
-
-2. Under **Settings**, select **Health probes**, then select **+ Add**.
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHealthProbe**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**.|
- | Interval | Enter **15** for number of **Interval** in seconds between probe attempts. |
- | Unhealthy threshold | Select **2**. |
-
-3. Leave the rest the defaults and Select **Add**.
+18. Select **myVNet** in **Virtual network**.
-### Create a load balancer rule
+19. Select **IP Address** for **Backend Pool Configuration**.
-A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination port are defined in the rule.
+20. Select **IPv4** or **IPv6** for **IP version**.
-In this section, you'll create a load balancer rule:
+21. Select **Add**.
-* Named **myHTTPRule**.
-* In the frontend named **LoadBalancerFrontEnd**.
-* Listening on **Port 80**.
-* Directs load balanced traffic to the backend named **myBackendPool** on **Port 80**.
+22. Select the **Next: Inbound rules** button at the bottom of the page.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-2. Under **Settings**, select **Load-balancing rules**, then select **+ Add**.
+24. In **Add load balancing rule**, enter or select the following information:
-3. Enter or select the following information for the load balancer rule:
-
   | Setting | Value |
   | - | -- |
- | Name | Enter **myHTTPRule**. |
- | IP Version | Select **IPv4** |
- | Frontend IP address | Select **LoadBalancerFrontEnd** |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **LoadBalancerFrontend**. |
| Protocol | Select **TCP**. |
- | Port | Enter **80**.|
+ | Port | Enter **80**. |
| Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**.|
- | Health probe | Select **myHealthProbe**. |
- | Session persistence | Leave the default of **None**. |
- | Idle timeout (minutes) | Enter **15** minutes. |
+ | Backend pool | Select **myBackendPool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
   | TCP reset | Select **Enabled**. |
   | Floating IP | Select **Disabled**. |
- | Outbound source network address translation (SNAT) | Select **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+25. Select **Add**.
+
+26. Select the blue **Review + create** button at the bottom of the page.
+
+27. Select **Create**.
+
+ > [!NOTE]
+   > In this example, we created a NAT gateway to provide outbound internet access. The outbound rules tab in the configuration is bypassed because it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+   > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md).
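As a rough mental model of the outbound SNAT the note describes, each outbound flow from a backend's private address is rewritten to the NAT gateway's public IP with its own public source port. The sketch below is illustrative Python only; the addresses are examples, and a real NAT gateway allocates SNAT ports in preallocated blocks rather than one at a time:

```python
import itertools

# Illustrative-only model of source NAT: addresses are examples, and real
# NAT gateways allocate SNAT ports in blocks, not sequentially.
public_ip = "198.51.100.10"      # stands in for the NAT gateway's public IP
ports = itertools.count(1024)    # simplified sequential port allocator
snat_table = {}

def translate(private_ip, private_port):
    # Reuse the existing mapping for a known flow; allocate a new public
    # source port for a new flow.
    key = (private_ip, private_port)
    if key not in snat_table:
        snat_table[key] = (public_ip, next(ports))
    return snat_table[key]

print(translate("10.1.0.4", 33000))  # ('198.51.100.10', 1024)
print(translate("10.1.0.5", 33000))  # ('198.51.100.10', 1025)
print(translate("10.1.0.4", 33000))  # same flow -> same mapping: 1024 again
```

The key point the model captures is that backend pool members share one public IP for outbound traffic, with flows kept distinct by translated source port.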
-4. Leave the rest of the defaults and then select **Add**.
## Create virtual machines
These VMs are added to the backend pool of the load balancer that was created earlier.
   | Subnet | **myBackendSubnet** |
   | Public IP | Select **None**. |
   | NIC network security group | Select **Advanced**. |
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Within **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> In **Priority**, enter **100**. </br> Under **Name**, enter **myHTTPRule** </br> Select **Add** </br> Select **OK** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Within **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> In **Priority**, enter **100**. </br> Under **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
   | **Load balancing** | |
   | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
   | **Load balancing settings** | |
* Add a new iisstart.htm file that displays the name of the VM: ```powershell
- # install IIS server role
+ # Install IIS server role
Install-WindowsFeature -name Web-Server -IncludeManagementTools
- # remove default htm file
+ # Remove default htm file
   Remove-Item C:\inetpub\wwwroot\iisstart.htm
   # Add a new htm file that displays server name
load-balancer Tutorial Load Balancer Standard Public Zonal Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-load-balancer-standard-public-zonal-portal.md
Title: "Tutorial: Load Balancer VMs within a zone--Azure portal"
+ Title: "Tutorial: Load balance VMs within an availability zone - Azure portal"
description: This tutorial demonstrates how to create a Standard Load Balancer with zonal frontend to load balance VMs within an availability zone by using Azure portal - # Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines within a specific zone in a region. - Previously updated : 02/27/2019 Last updated : 08/15/2021
-# Tutorial: Load balance VMs within an availability zone with Standard Load Balancer by using the Azure portal
+# Tutorial: Load balance VMs within an availability zone by using the Azure portal
-This tutorial creates a public [Azure Standard Load Balancer instance](https://aka.ms/azureloadbalancerstandard) with a zonal frontend that uses a public IP standard address by using the Azure portal. In this scenario, you specify a particular zone for your frontend and backend instances, to align your data path and resources with a specific zone. You learn how to perform the following functions:
+This tutorial creates a public [load balancer](https://aka.ms/azureloadbalancerstandard) with a zonal IP. In the tutorial, you specify a zone for your frontend and backend instances.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Create a Standard Load Balancer instance with a zonal frontend.
-> * Create network security groups to define incoming traffic rules.
+> * Create a virtual network with an Azure Bastion host for management.
+> * Create a NAT gateway for outbound internet access of the resources in the virtual network.
+> * Create a load balancer with a health probe and traffic rules.
> * Create zonal virtual machines (VMs) and attach them to a load balancer.
-> * Create a load balancer health probe.
-> * Create a load balancer traffic rules.
> * Create a basic Internet Information Services (IIS) site.
-> * View a load balancer in action.
-
-For more information about using availability zones with Standard Load Balancer, see [Standard Load Balancer and Availability Zones](load-balancer-standard-availability-zones.md).
+> * Test the load balancer.
-If you prefer, use [Azure CLI](./quickstart-load-balancer-standard-public-cli.md) to complete this tutorial.
+For more information about availability zones and a standard load balancer, see [Standard load balancer and availability zones](load-balancer-standard-availability-zones.md).
## Prerequisites
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-## Create a public Standard Load Balancer instance
+## Create the virtual network
+
+In this section, you'll create a virtual network and subnet.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
+
+2. In **Virtual networks**, select **+ Create**.
+
+3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new**. </br> In **Name** enter **CreateZonalLBTutorial-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **(Europe) West Europe** |
+
+4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+5. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+6. Under **Subnet name**, select the word **default**.
+
+7. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+8. Select **Save**.
+
+9. Select the **Security** tab.
+
+10. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+
+11. Select the **Review + create** tab or select the **Review + create** button.
+
+12. Select **Create**.
+
+## Create NAT gateway
+
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+2. In **NAT gateways**, select **+ Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreateZonalLBTutorial-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Availability zone | Select **1**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
+
+6. Enter **myNATGatewayIP** in **Name** in **Add a public IP address**.
+
+7. Select **OK**.
+
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
+
+9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
+
+10. Select **myBackendSubnet** under **Subnet name**.
+
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
+
+12. Select **Create**.
+
+## Create load balancer
+
+In this section, you'll create a zonal load balancer that load balances virtual machines.
+
+During the creation of the load balancer, you'll configure:
+
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
-Standard Load Balancer only supports a standard public IP address. When you create a new public IP while creating the load balancer, it's automatically configured as a Standard SKU version. It's also automatically zone redundant.
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-1. On the upper left side of the screen, select **Create a resource** > **Networking** > **Load Balancer**.
-2. In the **Basics** tab of the **Create load balancer** page, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
   | Setting | Value |
   | - | -- |
+ | **Project details** | |
| Subscription | Select your subscription. |
- | Resource group | Select **Create new** and type *MyResourceGroupZLB* in the text box.|
- | Name | *myLoadBalancer* |
- | Region | Select **West Europe**. |
+ | Resource group | Select **CreateZonalLBTutorial-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **(Europe) West Europe**. |
| Type | Select **Public**. |
- | SKU | Select **Standard**. |
- | Public IP address | Select **Create new**. |
- | Public IP address name | Type *myPublicIP* in the text box. |
- |Availability zone| Select **1**. |
-3. In the **Review + create** tab, click **Create**.
+ | SKU | Leave the default **Standard**. |
+ | Tier | Leave the default **Regional**. |
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+
+6. Enter **LoadBalancerFrontend** in **Name**.
+
+7. Select **IPv4** or **IPv6** for the **IP version**.
+
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
+
+8. Select **IP address** for the **IP type**.
+
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/public-ip-address-prefix.md).
+
+9. Select **Create new** in **Public IP address**.
+
+10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
+
+11. Select **1** in **Availability zone**.
+
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+12. Leave the default of **Microsoft Network** for **Routing preference**.
+
+13. Select **OK**.
+
+14. Select **Add**.
+
+15. Select **Next: Backend pools** at the bottom of the page.
+
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+17. Enter **myBackendPool** for **Name** in **Add backend pool**.
+
+18. Select **myVNet** in **Virtual network**.
-## Create backend servers
+19. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-In this section, you create a virtual network. You also create two virtual machines in same zone (namely, zone 1) for the region to add to the backend pool of your load balancer. Then you install IIS on the virtual machines to help test the zone-redundant load balancer. If one VM fails, the health probe for the VM in the same zone fails. Traffic continues to be served by other VMs within the same zone.
+20. Select **IPv4** or **IPv6** for **IP version**.
-## Virtual network and parameters
+21. Select **Add**.
-In this section you'll need to replace the following parameters in the steps with the information below:
+22. Select the **Next: Inbound rules** button at the bottom of the page.
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroupZLB (Select existing resource group) |
-| **\<virtual-network-name>** | myVNet |
-| **\<region-name>** | West Europe |
-| **\<IPv4-address-space>** | 10.0.0.0\16 |
-| **\<subnet-name>** | myBackendSubnet |
-| **\<subnet-address-range>** | 10.0.0.0\24 |
+23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+24. In **Add load balancing rule**, enter or select the following information:
-## Create a network security group
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-1. On the upper left side of the screen, select **Create a resource**. In the search box, enter **Network Security Group**. In the network security group page, select **Create**.
-2. In the **Create network security group** page, enter these values:
- - **myNetworkSecurityGroup**, for the name of the network security group.
- - **myResourceGroupLBAZ**, for the name of the existing resource group.
+25. Select **Add**.
+
+26. Select the blue **Review + create** button at the bottom of the page.
+
+27. Select **Create**.
+
+ > [!NOTE]
+   > In this example, we created a NAT gateway to provide outbound internet access. The outbound rules tab in the configuration is bypassed because it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+   > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md).
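With **Session persistence** set to **None**, a standard load balancer distributes flows using a five-tuple hash (source IP, source port, destination IP, destination port, protocol). The sketch below illustrates the idea in Python; it is not Azure's actual hash function, and the addresses are examples:

```python
import hashlib

backends = ["myVM1", "myVM2", "myVM3"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto="TCP"):
    # Hash the five-tuple: the same flow always maps to the same backend,
    # while different client source ports spread new flows across the pool.
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

# A given flow is deterministic (example addresses):
a = pick_backend("203.0.113.7", 50000, "20.50.0.1", 80)
print(a == pick_backend("203.0.113.7", 50000, "20.50.0.1", 80))  # True
```

This is why refreshing a browser against the load balancer can land on different VMs: each new TCP connection uses a new source port, producing a new hash.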
+
+## Create virtual machines
+
+In this section, you'll create three VMs (**myVM1**, **myVM2**, and **myVM3**) in one zone (**Zone 1**).
+
+These VMs are added to the backend pool of the load balancer that was created earlier.
+
+1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine**.
- ![Create a network security group](./media/tutorial-load-balancer-standard-zonal-portal/create-network-security-group.png)
-
-### Create NSG rules
-
-In this section, you create NSG rules to allow inbound connections that use HTTP and Microsoft Remote Desktop Protocol (RDP) by using the Azure portal.
-
-1. In the Azure portal, select **All resources** in the leftmost menu. Then search for and select **myNetworkSecurityGroup**. It's located in the **myResourceGroupZLB** resource group.
-2. Under **Settings**, select **Inbound security rules**. Then select **Add**.
-3. Enter these values for the inbound security rule named **myHTTPRule** to allow for inbound HTTP connections that use port 80:
- - **Service Tag**, for **Source**.
- - **Internet**, for **Source service tag**.
- - **80**, for **Destination port ranges**.
- - **vTCP**, for **Protocol**.
- - **Allow**, for **Action**.
- - **100**, for **Priority**.
- - **myHTTPRule**, for **Name**.
- - **Allow HTTP**, for **Description**.
-4. Select **OK**.
-
- ![Create NSG rules](./media/load-balancer-standard-public-availability-zones-portal/8-load-balancer-nsg-rules.png)
-
-5. Repeat steps 2 to 4 to create another rule named **myRDPRule**. This rule allows for an inbound RDP connection that uses port 3389, with the following values:
- - **Service Tag**, for **Source**.
- - **Internet**, for **Source service tag**.
- - **3389**, for **Destination port ranges**.
- - **TCP**, for **Protocol**.
- - **Allow**, for **Action**.
- - **200**, for **Priority**.
- - **myRDPRule**, for **Name**.
- - **Allow RDP**, for **Description**.
-
- ![Create a RDP rule](./media/tutorial-load-balancer-standard-zonal-portal/create-rdp-rule.png)
-
-### Create virtual machines
-
-1. On the upper left side of the screen, select **Create a resource** > **Compute** > **Windows Server 2016 Datacenter**. Enter these values for the virtual machine:
- - **myVM1**, for the name of the virtual machine.
- - **azureuser**, for the administrator user name.
- - **myResourceGroupZLB**, for **Resource group**. Select **Use existing**, and then select **myResourceGroupZLB**.
-2. Select **OK**.
-3. Select **DS1_V2** for the size of the virtual machine. Choose **Select**.
-4. Enter these values for the VM settings:
- - **zone 1**, for the Availability zone where you place the VM.
- - **myVNet**. Ensure it's selected as the virtual network.
- - **myVM1PIP**, for the standard public IP address that you create. Select **Create new**. Then for name type, select **myVM1PIP**. For **Zone**, select **1**. The IP address SKU is standard by default.
- - **myBackendSubnet**. Make sure it's selected as the subnet.
- - **myNetworkSecurityGroup**, for the name of the network security group firewall that already exists.
-5. Select **Disabled** to disable boot diagnostics.
-6. Select **OK**. Review the settings on the summary page. Then select **Create**.
-7. Repeat steps 1 to 6 to create a second VM, named **myVM2**, in Zone 1. Make **myVnet** the virtual network. Make **myVM2PIP** the standard public IP address. Make **myBackendSubnet** the subnet. And make **myNetworkSecurityGroup** the network security group.
-
- ![Create virtual machines](./media/tutorial-load-balancer-standard-zonal-portal/create-virtual-machine.png)
-
-### Install IIS on VMs
-
-1. Select **All resources** in the leftmost menu. Then from the resources list, select **myVM1**. It's located in the **myResourceGroupZLB** resource group.
-2. On the **Overview** page, select **Connect** to use RDP to go to the VM.
-3. Sign in to the VM with the user name and password that you specified when you created the VM. To specify the credentials you entered when you created the VM, you might need to select **More choices**. Then select **Use a different account**. And then select **OK**. You might receive a certificate warning during the sign-in process. Select **Yes** to proceed with the connection.
-4. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
-6. In the **PowerShell** window, run the following commands to install the IIS server. These commands also remove the default iisstart.htm file and then add a new iisstart.htm file that displays the name of the VM:
-
- ```azurepowershell-interactive
- # install IIS server role
- Install-WindowsFeature -name Web-Server -IncludeManagementTools
- # remove default htm file
- remove-item C:\inetpub\wwwroot\iisstart.htm
- # Add a new htm file that displays server name
- Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from" + $env:computername)
- ```
-7. Close the RDP session with **myVM1**.
-8. Repeat steps 1 to 7 to install IIS on **myVM2**.
+2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **CreateZonalLBTutorial-rg** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1** |
+ | Region | Select **(Europe) West Europe** |
+ | Availability Options | Select **Availability zones** |
+ | Availability zone | Select **1** |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1** |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
+
+3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+4. In the Networking tab, select or enter:
+
+ | Setting | Value |
+ |-|-|
+ | **Network interface** | |
+ | Virtual network | **myVNet** |
+ | Subnet | **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**|
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | **Load balancing** |
+ | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
+ | **Load balancing settings** |
+ | Load-balancing options | Select **Azure load balancing** |
+ | Select a load balancer | Select **myLoadBalancer** |
+ | Select a backend pool | Select **myBackendPool** |
+
+7. Select **Review + create**.
+
+8. Review the settings, and then select **Create**.
-## Create load balancer resources
+9. Follow steps 1 through 8 to create two more VMs with the following values and all the other settings the same as **myVM1**:
-In this section, you configure load balancer settings for a backend address pool and a health probe. You also specify load balancer and network address translation rules.
+ | Setting | VM 2| VM 3|
+ | - | -- ||
+ | Name | **myVM2** |**myVM3**|
+ | Availability zone | **1** |**1**|
+ | Network security group | Select the existing **myNSG**| Select the existing **myNSG**|
-### Create a backend address pool
+## Install IIS
-To distribute traffic to the VMs, a backend address pool contains the IP addresses of the virtual network interface cards that are connected to the load balancer. Create the backend address pool **myBackendPool** to include **VM1** and **VM2**.
+1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM1**, which is located in the **CreateZonalLBTutorial-rg** resource group.
-1. Select **All resources** in the leftmost menu. Then select **myLoadBalancer** from the resources list.
-2. Under **Settings**, select **Backend pools**. Then select **Add**.
-3. On the **Add a backend pool** page, take the following actions:
- - For name, enter **myBackEndPool** as the name for your backend pool.
- - For **Virtual network**, in the drop-down menu, select **myVNet**.
- - For **Virtual machine** and **IP address**, add **myVM1** and **myVM2** and their corresponding public IP addresses.
-4. Select **Add**.
-5. Check to make sure your load balancer backend pool setting displays both the VMs: **myVM1** and **myVM2**.
-
- ![Create a backend pool](./media/tutorial-load-balancer-standard-zonal-portal/create-backend-pool.png)
+2. On the **Overview** page, select **Connect**, then **Bastion**.
-### Create a health probe
+3. Select **Use Bastion**.
-Use a health probe so the load balancer can monitor the status of your app. The health probe dynamically adds or removes VMs from the load balancer rotation based on their response to health checks. Create a health probe **myHealthProbe** to monitor the health of the VMs.
+4. Enter the username and password entered during VM creation.
-1. Select **All resources** in the leftmost menu. Then select **myLoadBalancer** from the resources list.
-2. Under **Settings**, select **Health probes**. Then select **Add**.
-3. Use these values to create the health probe:
- - **myHealthProbe**, for the name of the health probe.
- - **HTTP**, for the protocol type.
- - **80**, for the port number.
- - **15**, for number of **Interval** in seconds between probe attempts.
- - **2**, for number of **Unhealthy threshold** or consecutive probe failures that must occur before a VM is considered unhealthy.
-4. Select **OK**.
+5. Select **Connect**.
- ![Add a health probe](./media/load-balancer-standard-public-availability-zones-portal/4-load-balancer-probes.png)
+6. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
-### Create a load balancer rule
+7. In the PowerShell window, run the following commands to:
-A load balancer rule defines how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic, along with the required source and destination port. Create a load balancer rule **myLoadBalancerRuleWeb**, for listening to port 80 in the frontend **FrontendLoadBalancer**. The rule sends load-balanced network traffic to the backend address pool **myBackEndPool**, also by using port 80.
+ * Install the IIS server
+ * Remove the default iisstart.htm file
+ * Add a new iisstart.htm file that displays the name of the VM:
-1. Select **All resources** in the leftmost menu. Then select **myLoadBalancer** from the resources list.
-2. Under **Settings**, select **Load balancing rules**. Then select **Add**.
-3. Use these values to configure the load balancing rule:
- - **myHTTPRule**, for the name of the load balancing rule.
- - **TCP**, for the protocol type.
- - **80**, for the port number.
- - **80**, for the backend port.
- - **myBackendPool**, for the name of the backend pool.
- - **myHealthProbe**, for the name of the health probe.
-4. Select **OK**.
+ ```powershell
+ # Install IIS server role
+ Install-WindowsFeature -name Web-Server -IncludeManagementTools
+
+ # Remove default htm file
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
- ![Add a load-balancing rule](./media/tutorial-load-balancer-standard-zonal-portal/load-balancing-rule.png)
+ # Add a new htm file that displays server name
+ Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
+ ```
+
+8. Close the Bastion session with **myVM1**.
+
+9. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2** and **myVM3**.
## Test the load balancer
-1. Find the public IP address for the load balancer on the **Overview** screen. Select **All resources**. Then select **myPublicIP**.
-2. Copy the public IP address. Then paste it into the address bar of your browser. The default page that includes the name of the web server page is displayed on the browser.
+1. In the search box at the top of the page, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Find the public IP address for the load balancer on the **Overview** page under **Public IP address**.
+
+3. Copy the public IP address, and then paste it into the address bar of your browser. The custom VM page of the IIS Web server is displayed in the browser.
- ![IIS web server](./media/tutorial-load-balancer-standard-zonal-portal/load-balancer-test.png)
-3. To see the load balancer in action, force stop the VM that is displayed. Refresh the browser to see the other server name displayed on the browser.
+ :::image type="content" source="./media/tutorial-load-balancer-standard-zonal-portal/load-balancer-test.png" alt-text="Screenshot of load balancer test":::
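Each backend page created earlier ends with `Hello World from <VMNAME>`, so repeated refreshes against the load balancer's IP can be tallied by server name. A small helper like the following could do the counting; the `tally` function is hypothetical and not part of the tutorial:

```python
from collections import Counter

def tally(bodies):
    # Each IIS page created earlier ends with "Hello World from <VMNAME>",
    # so the VM name is whatever follows that phrase.
    return Counter(body.rsplit("Hello World from", 1)[-1].strip()
                   for body in bodies)

sample = ["Hello World from MYVM1",
          "Hello World from MYVM2",
          "Hello World from MYVM1"]
print(tally(sample))  # Counter({'MYVM1': 2, 'MYVM2': 1})
```

Over enough requests you would expect all three VM names to appear, confirming the load balancer is spreading traffic across the backend pool.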
## Clean up resources
-When they're no longer needed, delete the resource group, load balancer, and all related resources. Select the resource group that contains the load balancer. Then select **Delete**.
+When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group **CreateZonalLBTutorial-rg** that contains the resources and then select **Delete**.
## Next steps
-Advance to the next article to learn how to load balance VMs across availability zones..
+Advance to the next article to learn how to load balance VMs across availability zones:
> [!div class="nextstepaction"]
> [Load balance VMs across availability zones](tutorial-load-balancer-standard-public-zone-redundant-portal.md)
load-balancer Tutorial Load Balancer Standard Public Zone Redundant Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-load-balancer-standard-public-zone-redundant-portal.md
- Title: 'Tutorial: Load balance VMs across availability zones - Azure portal'-
-description: This tutorial demonstrates how to create a Standard Load Balancer with zone-redundant frontend to load balance VMs across availability zones using Azure portal
---
-# Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines across availability zones in a region, so that the customers can still access the web service if a datacenter is unavailable.
--- Previously updated : 02/27/2019----
-# Tutorial: Load balance VMs across availability zones with a Standard Load Balancer using the Azure portal
-
-Load balancing provides a higher level of availability by spreading incoming requests across multiple virtual machines. This tutorial steps through creating a public Standard Load Balancer that load balances VMs across availability zones. This helps to protect your apps and data from an unlikely failure or loss of an entire datacenter. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy. You learn how to:
-
-> [!div class="checklist"]
-> * Create a Standard Load Balancer
-> * Create network security groups to define incoming traffic rules
-> * Create zone-redundant VMs across multiple zones and attach to a load balancer
-> * Create load balancer health probe
-> * Create load balancer traffic rules
-> * Create a basic IIS site
-> * View a load balancer in action
-
-For more information about using Availability zones with Standard Load Balancer, see [Standard Load Balancer and Availability Zones](load-balancer-standard-availability-zones.md).
-
-If you prefer, you can complete this tutorial using the [Azure CLI](./quickstart-load-balancer-standard-public-cli.md).
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-* An Azure subscription
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Create a Standard Load Balancer
-
-Standard Load Balancer only supports a Standard Public IP address. When you create a new public IP while creating the load balancer, it is automatically configured as a Standard SKU version, and is also automatically zone-redundant.
-
-1. On the top left-hand side of the screen, click **Create a resource** > **Networking** > **Load Balancer**.
-2. In the **Basics** tab of the **Create load balancer** page, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
-
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and type *MyResourceGroupLBAZ* in the text box.|
- | Name | *myLoadBalancer* |
- | Region | Select **West Europe**. |
- | Type | Select **Public**. |
- | SKU | Select **Standard**. |
- | Public IP address | Select **Create new**. |
- | Public IP address name | Type *myPublicIP* in the text box. |
- |Availability zone| Select **Zone redundant**. |
-
-
-## Create backend servers
-
-In this section, you create a virtual network and virtual machines in different zones of the region, and then install IIS on the virtual machines to help test the zone-redundant load balancer. That way, if a zone fails, the health probes for the VMs in that zone fail, and traffic continues to be served by VMs in the other zones.
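The zone-failover behavior described above can be sketched in a few lines of Python. This is an illustrative model only; the names and logic are hypothetical, not Azure SDK calls:

```python
# Hypothetical sketch: how a zone-redundant load balancer keeps serving traffic
# when an entire zone's backends fail their health probes.
backends = {"myVM1": "zone1", "myVM2": "zone2", "myVM3": "zone3"}
failed_zones = {"zone1"}  # assume one availability zone becomes unavailable

def healthy_backends(backends, failed_zones):
    """Return the VMs the load balancer keeps in rotation."""
    return [vm for vm, zone in backends.items() if zone not in failed_zones]

print(healthy_backends(backends, failed_zones))  # myVM1 leaves rotation
```

As long as at least one zone remains healthy, the list of healthy backends is non-empty and the data path survives.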
-
-## Virtual network and parameters
-
-In this section you'll need to replace the following parameters in the steps with the information below:
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroupLBAZ (Select existing resource group) |
-| **\<virtual-network-name>** | myVNet |
-| **\<region-name>** | West Europe |
-| **\<IPv4-address-space>** | 10.0.0.0/16 |
-| **\<subnet-name>** | myBackendSubnet |
-| **\<subnet-address-range>** | 10.0.0.0/24 |
--
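As a quick sanity check on the values above, you can confirm that the backend subnet's range fits inside the virtual network's address space. This is an illustrative Python check, not part of the portal steps:

```python
import ipaddress

# myVNet address space and myBackendSubnet range from the parameter table above.
vnet = ipaddress.ip_network("10.0.0.0/16")    # <IPv4-address-space>
subnet = ipaddress.ip_network("10.0.0.0/24")  # <subnet-address-range>

# A subnet must be fully contained in its virtual network's address space.
print(subnet.subnet_of(vnet))  # True
```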
-## Create a network security group
-
-Create a network security group to define inbound connections to your virtual network.
-
-1. On the top left-hand side of the screen, click **Create a resource**, in the search box type *Network Security Group*, and in the network security group page, click **Create**.
-2. In the Create network security group page, enter these values:
- - *myNetworkSecurityGroup* - for the name of the network security group.
- - *myResourceGroupLBAZ* - for the name of the existing resource group.
-
-![Screenshot shows the Create network security group pane.](./media/load-balancer-standard-public-availability-zones-portal/create-nsg.png)
-
-### Create network security group rules
-
-In this section, you use the Azure portal to create network security group rules that allow inbound HTTP and RDP connections.
-
-1. In the Azure portal, click **All resources** in the left-hand menu, and then search and click **myNetworkSecurityGroup** that is located in the **myResourceGroupLBAZ** resource group.
-2. Under **Settings**, click **Inbound security rules**, and then click **Add**.
-3. Enter these values for the inbound security rule named *myHTTPRule* to allow inbound HTTP connections using port 80:
- - *Service Tag* - for **Source**.
- - *Internet* - for **Source service tag**
- - *80* - for **Destination port ranges**
- - *TCP* - for **Protocol**
- - *Allow* - for **Action**
- - *100* for **Priority**
- - *myHTTPRule* - for name of the load balancer rule.
- - *Allow HTTP* - for description of the load balancer rule.
-4. Click **OK**.
-
- ![Screenshot shows the Add inbound security rule pane.](./media/load-balancer-standard-public-availability-zones-portal/8-load-balancer-nsg-rules.png)
-5. Repeat steps 2 to 4 to create another rule named *myRDPRule* to allow for an inbound RDP connection using port 3389 with the following values:
- - *Service Tag* - for **Source**.
- - *Internet* - for **Source service tag**
- - *3389* - for **Destination port ranges**
- - *TCP* - for **Protocol**
- - *Allow* - for **Action**
- - *200* for **Priority**
- - *myRDPRule* for name
- - *Allow RDP* - for description
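The two rules above are evaluated by priority, lowest number first, and the first match decides the outcome. The following Python sketch models that evaluation; the rule shapes are simplified stand-ins, not the Azure resource schema:

```python
# Simplified model of NSG inbound rule processing: rules are evaluated in
# priority order, and the first rule that matches the destination port wins.
rules = [
    {"name": "myHTTPRule", "port": 80, "action": "Allow", "priority": 100},
    {"name": "myRDPRule", "port": 3389, "action": "Allow", "priority": 200},
    {"name": "DenyAllInBound", "port": "*", "action": "Deny", "priority": 65500},
]

def evaluate(rules, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] in ("*", port):
            return rule["action"]
    return "Deny"

print(evaluate(rules, 80), evaluate(rules, 22))  # Allow Deny
```

Traffic on ports 80 and 3389 is allowed; anything else falls through to the catch-all deny rule.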
-
-### Create virtual machines
-
-Create virtual machines in different zones (zone 1, zone 2, and zone 3) for the region that can act as backend servers to the load balancer.
-
-1. On the top left-hand side of the screen, click **Create a resource** > **Compute** > **Windows Server 2016 Datacenter** and enter these values for the virtual machine:
- - *myVM1* - for the name of the virtual machine.
- - *azureuser* - for the administrator user name.
- - *myResourceGroupLBAZ* - for **Resource group**, select **Use existing**, and then select *myResourceGroupLBAZ*.
-2. Click **OK**.
-3. Select **DS1_V2** for the size of the virtual machine, and click **Select**.
-4. Enter these values for the VM settings:
- - *zone 1* - for the zone where you place the VM.
- - *myVNet* - ensure it is selected as the virtual network.
- - *myBackendSubnet* - ensure it is selected as the subnet.
- - *myNetworkSecurityGroup* - for the name of network security group (firewall).
-5. Click **Disabled** to disable boot diagnostics.
-6. Click **OK**, review the settings on the summary page, and then click **Create**.
-7. Using steps 1-6, create a second VM named *myVM2* in Zone 2 and a third VM named *myVM3* in Zone 3, with *myVNet* as the virtual network, *myBackendSubnet* as the subnet, and *myNetworkSecurityGroup* as the network security group.
-
-### Install IIS on VMs
-
-1. Click **All resources** in the left-hand menu, and then from the resources list click **myVM1** that is located in the *myResourceGroupLBAZ* resource group.
-2. On the **Overview** page, click **Connect** to RDP into the VM.
-3. Log into the VM with username *azureuser*.
-4. On the server desktop, navigate to **Windows Administrative Tools**>**Windows PowerShell**.
-5. In the PowerShell Window, run the following commands to install the IIS server, remove the default iisstart.htm file, and then add a new iisstart.htm file that displays the name of the VM:
- ```azurepowershell-interactive
-
- # install IIS server role
- Install-WindowsFeature -name Web-Server -IncludeManagementTools
-
- # remove default htm file
- remove-item C:\inetpub\wwwroot\iisstart.htm
-
- # Add a new htm file that displays server name
- Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from" + $env:computername)
- ```
-6. Close the RDP session with *myVM1*.
-7. Repeat steps 1 to 6 to install IIS and the updated iisstart.htm file on *myVM2* and *myVM3*.
-
-## Create load balancer resources
-
-In this section, you configure load balancer settings for a backend address pool and a health probe, and specify load balancer and NAT rules.
--
-### Create a backend address pool
-
-To distribute traffic to the VMs, a backend address pool contains the IP addresses of the virtual network interface cards (NICs) connected to the load balancer. Create the backend address pool *myBackendPool* to include *myVM1*, *myVM2*, and *myVM3*.
-
-1. Click **All resources** in the left-hand menu, and then click **myLoadBalancer** from the resources list.
-2. Under **Settings**, click **Backend pools**, then click **Add**.
-3. On the **Add a backend pool** page, do the following:
- - For name, type *myBackEndPool*, as the name for your backend pool.
- - For **Virtual network**, in the drop-down menu, click **myVNet**
- - For **Virtual machine**, in the drop-down menu, click, **myVM1**.
- - For **IP address**, in the drop-down menu, click the IP address of myVM1.
-4. Click **Add new backend resource** to add the remaining virtual machines (*myVM2* and *myVM3*) to the backend pool of the load balancer.
-5. Click **Add**.
-
- ![Adding to the backend address pool](./media/load-balancer-standard-public-availability-zones-portal/add-backend-pool.png)
-
-6. Check to make sure your load balancer backend pool setting displays all three VMs - **myVM1**, **myVM2**, and **myVM3**.
-
-### Create a health probe
-
-To allow the load balancer to monitor the status of your app, you use a health probe. The health probe dynamically adds or removes VMs from the load balancer rotation based on their response to health checks. Create a health probe *myHealthProbe* to monitor the health of the VMs.
-
-1. Click **All resources** in the left-hand menu, and then click **myLoadBalancer** from the resources list.
-2. Under **Settings**, click **Health probes**, then click **Add**.
-3. Use these values to create the health probe:
- - *myHealthProbe* - for the name of the health probe.
- - **HTTP** - for the protocol type.
- - *80* - for the port number.
- - *15* - for number of **Interval** in seconds between probe attempts.
- - *2* - for number of **Unhealthy threshold** or consecutive probe failures that must occur before a VM is considered unhealthy.
-4. Click **OK**.
-
- ![Adding a probe](./media/load-balancer-standard-public-availability-zones-portal/4-load-balancer-probes.png)
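As a rough rule of thumb, the probe settings above determine how quickly an unresponsive VM leaves rotation: with the interval and unhealthy threshold configured here, detection takes about interval × threshold seconds. A minimal sketch of that arithmetic:

```python
# Probe settings from the steps above.
interval_seconds = 15     # time between probe attempts
unhealthy_threshold = 2   # consecutive failures before a VM is unhealthy

# Approximate worst-case time before a failed VM is removed from rotation.
detection_seconds = interval_seconds * unhealthy_threshold
print(detection_seconds)  # 30
```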
-
-### Create a load balancer rule
-
-A load balancer rule is used to define how traffic is distributed to the VMs. You define the front-end IP configuration for the incoming traffic and the back-end IP pool to receive the traffic, along with the required source and destination port. Create a load balancer rule *myLoadBalancerRuleWeb* for listening to port 80 in the frontend *FrontendLoadBalancer* and sending load-balanced network traffic to the backend address pool *myBackEndPool* also using port 80.
-
-1. Click **All resources** in the left-hand menu, and then click **myLoadBalancer** from the resources list.
-2. Under **Settings**, click **Load balancing rules**, then click **Add**.
-3. Use these values to configure the load balancing rule:
- - *myHTTPRule* - for the name of the load balancing rule.
- - **TCP** - for the protocol type.
- - *80* - for the port number.
- - *80* - for the backend port.
- - *myBackendPool* - for the name of the backend pool.
- - *myHealthProbe* - for the name of the health probe.
-4. Click **OK**.
-
-
- ![Adding a load balancing rule](./media/load-balancer-standard-public-availability-zones-portal/load-balancing-rule.png)
-
-## Test the load balancer
-1. Find the public IP address for the Load Balancer on the **Overview** screen. Click **All resources** and then click **myPublicIP**.
-
-2. Copy the public IP address, and then paste it into the address bar of your browser. The default page of the IIS web server is displayed in the browser.
-
- ![IIS Web server](./media/tutorial-load-balancer-standard-zonal-portal/load-balancer-test.png)
-
-To see the load balancer distribute traffic across the VMs in the different zones, force-refresh your web browser.
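If you script this test instead of refreshing manually, you can tally which backend served each response by parsing the page each VM generates. The helper below is hypothetical (not part of the tutorial) and assumes the iisstart.htm content created earlier, which concatenates "Hello World from" and the computer name without a space:

```python
# Hypothetical helper: extract the server name from the page each VM serves.
def server_name(page_body: str) -> str:
    prefix = "Hello World from"
    return page_body.split(prefix, 1)[1].strip() if prefix in page_body else ""

# Example responses collected from repeated fetches of the load balancer's IP.
responses = ["Hello World fromMYVM1", "Hello World fromMYVM2", "Hello World fromMYVM1"]
print(sorted({server_name(r) for r in responses}))  # more than one VM answered
```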
-
-## Clean up resources
-
-When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group that contains the load balancer and select **Delete**.
-
-## Next steps
-
-Learn more about load balancing a VM within a specific availability zone:
-> [!div class="nextstepaction"]
-> [Load balance VMs within an availability zone](tutorial-load-balancer-standard-public-zonal-portal.md)
load-balancer Tutorial Multi Availability Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-multi-availability-sets-portal.md
Previously updated : 04/21/2021 Last updated : 08/12/2021 # Tutorial: Create a load balancer with more than one availability set in the backend pool using the Azure portal
In this section, you'll create a NAT gateway for outbound connectivity of the virtual machines.
6. Select **Create a new public IP address** next to **Public IP addresses** in the **Outbound IP** tab.
-7. Enter **myPublicIP-nat** in **Name**.
+7. Enter **myNATgatewayIP** in **Name**.
8. Select **OK**.
In this section, you'll create a load balancer for the virtual machines.
-1. In the search box at the top of the portal, enter **Load balancer**.
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-2. Select **Load balancers** in the search results.
+2. In the **Load balancer** page, select **Create**.
-3. Select **+ Create**.
-
-4. In the **Basics** tab of **Create load balancer**, enter, or select the following information:
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
- | Setting | Value |
- | - | -- |
+ | Setting | Value |
+ | | |
| **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **TutorLBmultiAVS-rg**. |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorLBmultiAVS-rg**. |
| **Instance details** | |
- | Name | Enter **myLoadBalancer**. |
- | Region | Select **(US) West US 2**. |
- | Type | Leave the default of **Public**. |
- | SKU | Leave the default of **Standard**. |
- | Tier | Leave the default of **Regional**. |
- | **Public IP address** | |
- | Public IP address | Leave the default of **Create new**. |
- | Public IP address name | Enter **myPublicIP-lb**. |
- | Availability zone | Select **Zone-redundant**. |
- | Add a public IPv6 address | Leave the default of **No**. |
- | Routing preference | Leave the default of **Microsoft network**. |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **(US) West US 2**. |
+ | Type | Select **Public**. |
+ | SKU | Leave the default **Standard**. |
+ | Tier | Leave the default **Regional**. |
-5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
-6. Select **Create**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-### Configure load balancer settings
+6. Enter **LoadBalancerFrontend** in **Name**.
-In this section, you'll create a backend pool for **myLoadBalancer**.
+7. Select **IPv4** or **IPv6** for the **IP version**.
-You'll create a health probe to monitor **HTTP** and **Port 80**. The health probe will monitor the health of the virtual machines in the backend pool.
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-You'll create a load-balancing rule for **Port 80** with outbound SNAT disabled. The NAT gateway you created earlier will handle the outbound connectivity of the virtual machines.
+8. Select **IP address** for the **IP type**.
-1. In the search box at the top of the portal, enter **Load balancer**.
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/public-ip-address-prefix.md).
-2. Select **Load balancers** in the search results.
+9. Select **Create new** in **Public IP address**.
-3. Select **myLoadBalancer**.
+10. In **Add a public IP address**, enter **myPublicIP-lb** for **Name**.
-4. In **myLoadBalancer**, select **Backend pools** in **Settings**.
+11. Select **Zone-redundant** in **Availability zone**.
-5. Select **+ Add** in **Backend pools**.
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
-6. In **Add backend pool**, enter or select the following information:
+12. Leave the default of **Microsoft Network** for **Routing preference**.
- | Setting | Value |
- | - | -- |
- | Name | Enter **myBackendPool**. |
- | Virtual network | Select **myVNet**. |
- | Backend Pool Configuration | Leave the default of **NIC**. |
- | IP Version | Leave the default of **IPv4**. |
+13. Select **OK**.
-7. Select **Add**.
+14. Select **Add**.
-8. Select **Health probes**.
+15. Select **Next: Backend pools** at the bottom of the page.
-9. Select **+ Add**.
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
-10. In **Add health probe**, enter or select the following information:
+17. Enter **myBackendPool** for **Name** in **Add backend pool**.
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPProbe**. |
- | Protocol | Select **HTTP**. |
- | Port | Leave the default of **80**. |
- | Path | Leave the default of **/**. |
- | Interval | Leave the default of **5** seconds. |
- | Unhealthy threshold | Leave the default of **2** consecutive failures. |
+18. Select **myVNet** in **Virtual network**.
+
+19. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-11. Select **Add**.
+20. Select **IPv4** or **IPv6** for **IP version**.
-12. Select **Load-balancing rules**.
+21. Select **Add**.
-13. Select **+ Add**.
+22. Select the **Next: Inbound rules** button at the bottom of the page.
-14. Enter or select the following information in **Add load-balancing rule**:
+23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+24. In **Add load balancing rule**, enter or select the following information:
| Setting | Value | | - | -- |
- | Name | Enter **myHTTPRule**. |
- | IP Version | Leave the default of **IPv4**. |
- | Frontend IP address | Select **LoadBalancerFrontEnd**. |
- | Protocol | Select the default of **TCP**. |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Protocol | Select **TCP**. |
| Port | Enter **80**. | | Backend port | Enter **80**. | | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **myHTTPProbe**. |
- | Session persistence | Leave the default of **None**. |
- | Idle timeout (minutes) | Change the slider to **15**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
| TCP reset | Select **Enabled**. |
- | Floating IP | Leave the default of **Disabled**. |
+ | Floating IP | Select **Disabled**. |
| Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-5. Select **Add**.
+25. Select **Add**.
+
+26. Select the blue **Review + create** button at the bottom of the page.
+
+27. Select **Create**.
+
+ > [!NOTE]
+ > In this example we created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed because it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+ > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
## Create virtual machines
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
The following table lists the message size limits that apply to B2B protocols:
## Firewall configuration: IP addresses and service tags
-If your environment has strict network requirements or firewalls that limit traffic to specific IP addresses, your environment or firewall needs to allow access for *both* the [inbound](#inbound) and [outbound](#outbound) IP addresses used by the Azure Logic Apps service or runtime in the Azure region where your logic app resource exists. To set up this access, you can create Azure Firewall [rules](/firewall/rule-processing). *All* logic apps in the same region use the same IP address ranges.
+If your environment has strict network requirements or firewalls that limit traffic to specific IP addresses, your environment or firewall needs to allow access for *both* the [inbound](#inbound) and [outbound](#outbound) IP addresses used by the Azure Logic Apps service or runtime in the Azure region where your logic app resource exists. To set up this access, you can create [Azure Firewall rules](../firewall/rule-processing.md). *All* logic apps in the same region use the same IP address ranges.
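As an illustration of what such a firewall rule enforces, the following Python sketch checks whether a caller's IP address falls inside an allowed range. The CIDR range shown is a made-up example, not a published Logic Apps range:

```python
import ipaddress

# Hypothetical allow-list; replace with the published ranges for your region.
allowed_ranges = [ipaddress.ip_network("13.71.0.0/16")]

def is_allowed(ip: str) -> bool:
    """Return True if the IP falls inside any allowed range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in allowed_ranges)

print(is_allowed("13.71.4.2"), is_allowed("8.8.8.8"))  # True False
```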
> [!NOTE] > If you're using [Power Automate](/power-automate/getting-started), some actions, such as **HTTP** and **HTTP + OpenAPI**,
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
Title: Reference guide for functions in expressions
description: Reference guide to functions in expressions for Azure Logic Apps and Power Automate ms.suite: integration-+ Previously updated : 07/16/2021 Last updated : 08/16/2021 # Reference guide to using functions in expressions for Azure Logic Apps and Power Automate
Last updated 07/16/2021
For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/flow/getting-started), some [expressions](../logic-apps/logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference these values or process the values in these expressions, you can use *functions* provided by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md). > [!NOTE]
-> This reference page applies to both Azure Logic Apps and Power Automate,
-> but appears in the Azure Logic Apps documentation. Although this page refers
-> specifically to logic apps, these functions work for both flows and logic apps.
-> For more information about functions and expressions in Power Automate, see
-> [Use expressions in conditions](/flow/use-expressions-in-conditions).
+> This reference page applies to both Azure Logic Apps and Power Automate, but appears in the
+> Azure Logic Apps documentation. Although this page refers specifically to logic app workflows,
+> these functions work for both flows and logic app workflows. For more information about functions
+> and expressions in Power Automate, see [Use expressions in conditions](/flow/use-expressions-in-conditions).
For example, you can calculate values by using math functions, such as the [add()](../logic-apps/workflow-definition-language-functions-reference.md#add) function, when you want the sum from integers or floats. Here are other example tasks that you can perform with functions:
you get a combined string, for example, "SophiaOwen":
Either way, both examples assign the result to the `customerName` property.
-Here are some other notes about functions in expressions:
+## Considerations for using functions
* Function parameters are evaluated from left to right.
+* The designer doesn't evaluate runtime expressions that are used as function parameters at design time. The designer requires that all expressions can be fully evaluated at design time.
+ * In the syntax for parameter definitions, a question mark (?) that appears after a parameter means the parameter is optional. For example, see [getFutureTime()](#getFutureTime). The following sections organize functions based on their general purpose, or you can browse these functions in [alphabetical order](#alphabetical-list).
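As an analogy for the optional-parameter convention, here's a hedged Python sketch of a getFutureTime-like helper with a trailing optional format parameter. This mimics the calling convention only; it is not the Logic Apps runtime implementation:

```python
from datetime import datetime, timedelta, timezone

# Python analogue of getFutureTime(interval, timeUnit, format?): the trailing
# parameter is optional and has a sensible default, like the "?" parameters
# described above.
def get_future_time(interval: int, time_unit: str,
                    fmt: str = "%Y-%m-%dT%H:%M:%SZ") -> str:
    units = {"Minute": "minutes", "Hour": "hours", "Day": "days"}
    future = datetime.now(timezone.utc) + timedelta(**{units[time_unit]: interval})
    return future.strftime(fmt)

print(get_future_time(2, "Hour"))               # format parameter omitted
print(get_future_time(2, "Hour", "%Y-%m-%d"))   # format parameter supplied
```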
Logic Apps automatically or implicitly performs base64 encoding or decoding, so
* `decodeDataUri(<value>)` > [!NOTE]
-> If you manually add any of these functions to your workflow through the Logic App Designer, for example, by using the expression editor, navigate away
-> from the designer, and return to the designer, the function disappears from the designer, leaving behind only the parameter values. This behavior also
-> happens if you select a trigger or action that uses this function without editing the function's parameter values. This result affects only the function's
-> visibility and not the effect. In code view, the function is unaffected. However, if you edit the function's parameter values, the function and its effect
-> are both removed from code view, leaving behind only the function's parameter values.
+> If you manually add any of these functions while using the workflow designer, either directly to a trigger
+> or action or by using the expression editor, navigate away from the designer, and then return to the designer,
+> the function disappears from the designer, leaving behind only the parameter values. This behavior also happens
+> if you select a trigger or action that uses this function without editing the function's parameter values.
+> This result affects only the function's visibility and not the effect. In code view, the function is unaffected.
+> However, if you edit the function's parameter values, the function and its effect are both removed from code view,
+> leaving behind only the function's parameter values.
<a name="math-functions"></a>
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
You can schedule the automatic start and stop of a compute instance by using a R
```json "schedules": {
- "value": {
- "computeStartStop": [
- {
- "TriggerType": "Cron",
- "Cron": {
- "StartTime": "2021-03-10T21:21:07",
- "TimeZone": "Pacific Standard Time",
- "Expression": "0 18 * * *"
- },
- "Action": "Stop",
- "Status": "Enabled"
- },
- {
- "TriggerType": "Cron",
- "Cron": {
- "StartTime": "2021-03-10T21:21:07",
- "TimeZone": "Pacific Standard Time",
- "Expression": "0 8 * * *"
- },
- "Action": "Start",
- "Status": "Enabled"
- },
- {
-      {
-        "triggerType": "Recurrence",
-        "recurrence": {
-          "frequency": "Day",
-          "interval": 1,
-          "timeZone": "Pacific Standard Time",
-          "schedule": {
-            "hours": [18],
-            "minutes": [0],
-            "weekDays": [
-              "Saturday",
-              "Sunday"
-            ]
-          }
-        },
-        "Action": "Stop",
-        "Status": "Enabled"
-      }
- ]
+ "computeStartStop": [
+ {
+ "triggerType": "Cron",
+ "cron": {
+ "startTime": "2021-03-10T21:21:07",
+ "timeZone": "Pacific Standard Time",
+ "expression": "0 18 * * *"
+ },
+ "action": "Stop",
+ "status": "Enabled"
+ },
+ {
+ "triggerType": "Cron",
+ "cron": {
+ "startTime": "2021-03-10T21:21:07",
+ "timeZone": "Pacific Standard Time",
+ "expression": "0 8 * * *"
+ },
+ "action": "Start",
+ "status": "Enabled"
+ },
+ {
+      "triggerType": "Recurrence",
+      "recurrence": {
+        "frequency": "Day",
+        "interval": 1,
+        "timeZone": "Pacific Standard Time",
+        "schedule": {
+          "hours": [18],
+          "minutes": [0],
+          "weekDays": [
+            "Saturday",
+            "Sunday"
+          ]
+        }
+      },
+      "action": "Stop",
+      "status": "Enabled"
+    }
+ ]
+}
```
* Action can have a value of "Start" or "Stop".
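A quick way to validate a schedule fragment like the one above before submitting the template is to parse it and check the fields. This is an illustrative check using the corrected field names, not an Azure ML SDK call:

```python
import json

# Minimal schedule fragment with the field casing used in the sample above.
schedules = json.loads("""
{
  "computeStartStop": [
    {"triggerType": "Cron", "cron": {"expression": "0 18 * * *"}, "action": "Stop", "status": "Enabled"},
    {"triggerType": "Cron", "cron": {"expression": "0 8 * * *"}, "action": "Start", "status": "Enabled"}
  ]
}
""")

# Each entry needs a known trigger type and an action of "Start" or "Stop".
for entry in schedules["computeStartStop"]:
    assert entry["triggerType"] in ("Cron", "Recurrence")
    assert entry["action"] in ("Start", "Stop")

print(len(schedules["computeStartStop"]))  # 2
```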
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-networking-vnet.md
Last updated 8/6/2021
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-This article describes the private connectivity option for Azure MySQL Flexible Server. You will learn in detail the vitrual network concepts for Azure Database for MySQL Flexible server to create a server securely in Azure.
+This article describes the private connectivity option for Azure Database for MySQL - Flexible Server. You'll learn in detail about the virtual network concepts for Azure Database for MySQL - Flexible Server that help you create a server securely in Azure.
> [!IMPORTANT]
-> Azure Database for MySQL - Flexible server is in preview.
+> Azure Database for MySQL - Flexible Server is in preview.
## Private access (VNet Integration)
-Private access with virtual network (vnet) integration provides private and secure communication for your MySQL flexible server.
+[Azure Virtual Network (VNet)](../../virtual-network/virtual-networks-overview.md) is the fundamental building block for your private network in Azure. Virtual Network (VNet) integration with Azure Database for MySQL - Flexible Server brings Azure's benefits of network security and isolation.
+
+Virtual Network (VNet) integration for an Azure Database for MySQL - Flexible Server enables you to lock down access to the server to only your virtual network infrastructure. Your virtual network (VNet) can include all your application and database resources in a single virtual network, or it may stretch across different VNets in the same or different regions. Seamless connectivity between virtual networks can be established by [peering](../../virtual-network/virtual-network-peering-overview.md), which uses Microsoft's low-latency, high-bandwidth private backbone infrastructure. The virtual networks appear as one for connectivity purposes.
+
+Azure Database for MySQL - Flexible Server supports client connectivity from:
+
+* Virtual networks within the same Azure region (locally peered VNets).
+* Virtual networks across Azure regions (globally peered VNets).
+
+Subnets enable you to segment the virtual network into one or more sub-networks and allocate a portion of the virtual network's address space to which you can then deploy Azure resources. Azure Database for MySQL - Flexible Server requires a [delegated subnet](../../virtual-network/subnet-delegation-overview.md). A delegated subnet is an explicit identifier that a subnet can host only Azure Database for MySQL - Flexible Server instances. By delegating the subnet, the service gets explicit permission to create service-specific resources in the subnet to seamlessly manage your Azure Database for MySQL - Flexible Server.
+
+Azure Database for MySQL - Flexible Server integrates with Azure [Private DNS zones](../../dns/private-dns-privatednszone.md) to provide a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. A private DNS zone can be linked to one or more virtual networks by creating [virtual network links](../../dns/private-dns-virtual-network-links.md).
+ :::image type="content" source="./media/concepts-networking/vnet-diagram.png" alt-text="Flexible server MySQL VNET":::
In the above diagram,
2. Applications that are deployed on different subnets within the same vnet can access the Flexible servers directly. 3. Applications that are deployed on a different VNET **VNet-2** do not have direct access to flexible servers. You have to perform [private DNS zone VNET peering](#private-dns-zone-and-vnet-peering) before they can access the flexible server.
-### Virtual network concepts
+## Virtual network concepts
Here are some concepts to be familiar with when using virtual networks with MySQL flexible servers.
Virtual network peering enables you to seamlessly connect two or more virtual networks in Azure. The peered virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. Traffic between a client application and a flexible server in peered VNets is routed through Microsoft's private network and is isolated to that network.
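As a sketch, peering between two VNets can be established with the Azure CLI. The resource group and VNet names below are hypothetical, and peering must be created in both directions:

```shell
# Peer VNet1 to VNet2 so a client app in VNet1 can reach a
# flexible server in VNet2 over Microsoft's private backbone.
az network vnet peering create \
  --resource-group myResourceGroup \
  --name VNet1ToVNet2 \
  --vnet-name VNet1 \
  --remote-vnet VNet2 \
  --allow-vnet-access

# Create the reverse peering from VNet2 back to VNet1.
az network vnet peering create \
  --resource-group myResourceGroup \
  --name VNet2ToVNet1 \
  --vnet-name VNet2 \
  --remote-vnet VNet1 \
  --allow-vnet-access
```

Until both peerings exist, the connection state remains *Initiated* and traffic does not flow.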
-### Using Private DNS Zone
+## Using Private DNS Zone
* If you use the Azure portal or the Azure CLI to create flexible servers with a VNet, a new private DNS zone is auto-provisioned per server in your subscription using the server name provided. Alternatively, if you want to set up your own private DNS zone to use with the flexible server, see the [private DNS overview](../../dns/private-dns-overview.md) documentation.
* If you use the Azure API, an Azure Resource Manager template (ARM template), or Terraform, create private DNS zones that end with `mysql.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md).
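As an illustration of bringing your own zone, the flow might look like the following Azure CLI sketch. The server, zone, and network names are hypothetical, and the `--private-dns-zone` parameter assumes a recent version of the `az mysql flexible-server` command:

```shell
# Create a private DNS zone whose name ends with mysql.database.azure.com,
# as required for private access.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name myserver.private.mysql.database.azure.com

# Create a flexible server with private access that uses the zone.
az mysql flexible-server create \
  --resource-group myResourceGroup \
  --name myserver \
  --vnet myVNet \
  --subnet myDelegatedSubnet \
  --private-dns-zone myserver.private.mysql.database.azure.com
```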
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
-### Integration with custom DNS server
+## Integration with custom DNS server
If you are using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNet's DNS server setting. To learn more, see [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
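As an illustration, on a Linux-based custom DNS server running dnsmasq, a conditional forwarder to the Azure-provided resolver might look like this sketch (the config path and server name are hypothetical):

```shell
# Forward all queries for the flexible server domain to Azure DNS (168.63.129.16).
echo 'server=/mysql.database.azure.com/168.63.129.16' \
  | sudo tee /etc/dnsmasq.d/azure-mysql.conf
sudo systemctl restart dnsmasq

# From a client in the VNet, verify the server FQDN now resolves to a private IP.
nslookup myserver.mysql.database.azure.com
```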
-### Private DNS zone and VNET peering
+## Private DNS zone and VNET peering
Private DNS zone settings and VNET peering are independent of each other. Please refer to the [Using Private DNS Zone](concepts-networking-vnet.md#using-private-dns-zone) section above for more details on creating and using Private DNS zones.
If you want to connect to the flexible server from a client that is provisioned
> [!NOTE]
> Only private DNS zone names that end with `mysql.database.azure.com` can be linked.
-### Connecting from on-premises to flexible server in Virtual Network using ExpressRoute or VPN
+## Connecting from on-premises to flexible server in Virtual Network using ExpressRoute or VPN
For workloads requiring access to a flexible server in a virtual network from an on-premises network, you need [ExpressRoute](/azure/architecture/reference-architectures/hybrid-networking/expressroute/) or [VPN](/azure/architecture/reference-architectures/hybrid-networking/vpn/) and a virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/). With this setup in place, you need a DNS forwarder to resolve the flexible server name if you want to connect from a client application (like MySQL Workbench) running on the on-premises network. This DNS forwarder is responsible for resolving all the DNS queries via a server-level forwarder to the Azure-provided DNS service [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
To configure properly, you need the following resources:
You can then use the flexible server name (FQDN) to connect from the client application in the peered virtual network or on-premises network to the flexible server.
-### Unsupported virtual network scenarios
+## Unsupported virtual network scenarios
* Public endpoint (or public IP or DNS) - A flexible server deployed to a virtual network cannot have a public endpoint.
* After the flexible server is deployed to a virtual network and subnet, you cannot move it to another virtual network or subnet. You cannot move the virtual network into another resource group or subscription.
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| West US | :heavy_check_mark: | :x: |
| West US 2 | :heavy_check_mark: | :heavy_check_mark: |
| West Europe | :heavy_check_mark: | :heavy_check_mark: |
+| Australia Southeast | :heavy_check_mark: | :x: |
+| South Africa North | :heavy_check_mark: | :x: |
+| East Asia (Hong Kong) | :heavy_check_mark: | :x: |
+| Central India | :heavy_check_mark: | :x: |
+
## Contacts

For any questions or suggestions you might have on Azure Database for MySQL flexible server, send an email to the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
Previously updated : 08/12/2021 Last updated : 08/17/2021 # What's new in Azure Database for MySQL - Flexible Server (Preview)?
This article summarizes new releases and features in Azure Database for MySQL -
This release of Azure Database for MySQL - Flexible Server includes the following updates.

-- **High availability within a single zone using same-zone high availability**
+- **High availability within a single zone using Same-Zone High Availability**
- The service now provides customers with the flexibility to choose the preferred availability zone for their standby server when they enable high availability. With this feature, customers can place a standby server in the same zone as the primary server, which reduces the replication lag between primary and standby. This also provides for lower latencies between the application server and database server if placed within the same Azure zone. [Learn more](./concepts-high-availability.md).
+ The service now provides customers with the flexibility to choose the preferred availability zone for their standby server when they enable high availability. With this feature, customers can place a standby server in the same zone as the primary server, which reduces the replication lag between primary and standby. This also provides for lower latencies between the application server and database server if placed within the same Azure zone. [Learn more](https://aka.ms/mysql-ha-concept).
-- **Standby zone selection using zone redundant high availability**
+- **Standby zone selection using Zone-Redundant High Availability**
- The service now provides customers with the ability to choose the standby server zone location. Using this feature, customers can place their standby server in the zone of their choice. Co-locating the standby database servers and standby applications in the same zone reduces latencies and allows customers to better prepare for disaster recovery situations and "zone down" scenarios. [Learn more](./concepts-high-availability.md).
+ The service now provides customers with the ability to choose the standby server zone location. Using this feature, customers can place their standby server in the zone of their choice. Co-locating the standby database servers and standby applications in the same zone reduces latencies and allows customers to better prepare for disaster recovery situations and "zone down" scenarios. [Learn more](https://aka.ms/standby-selection).
- **Private DNS zone integration**
- The service now provides integration with an Azure private DNS zone. Integration with Azure private DNS zone allows seamless resolution of private DNS within the current VNet, or any peered VNet to which the private DNS Zone is linked to. [Learn more](./concepts-networking-vnet.md).
+ [Azure Private DNS](../../dns/private-dns-privatednszone.md) provides a reliable and secure DNS service (responsible for translating a service name to IP address) for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. This enables you to connect your application running on a virtual network to your flexible server running on a locally or globally peered virtual network. The Azure Database for MySQL - Flexible Server now provides integration with an Azure private DNS zone to allow seamless resolution of private DNS within the current VNet, or any peered VNet to which the private DNS zone is linked. With this integration, if the IP address of the backend flexible server changes during failover or any other event, your integrated private DNS zone will be updated automatically to ensure your application connectivity resumes automatically once the server is online. [Learn more](./concepts-networking-vnet.md).
- **Point-In-Time Restore for a server in a specified virtual network**
- **Point-In-Time Restore for a server in an availability zone**
- The Point-In-Time Restore experience for the service now enables customers to configure availability zone, Co-locating the database servers and standby applications in the same zone reduces latencies and allows customers to better prepare for disaster recovery situations and "zone down" scenarios. [Learn more](./concepts-high-availability.md).
+ The Point-In-Time Restore experience for the service now enables customers to configure the availability zone. Co-locating the database servers and standby applications in the same zone reduces latencies and allows customers to better prepare for disaster recovery situations and "zone down" scenarios. [Learn more](https://aka.ms/standby-selection).
- **Availability in four additional Azure regions**
- The service is now available in the following Azure regions:
+ The public preview of Azure Database for MySQL - Flexible Server is now available in the following Azure regions ([Learn more](overview.md#azure-regions)):
- Australia Southeast
- South Africa North
If you have questions about or suggestions for working with Azure Database for M
- Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/).
- Browse the [public documentation](index.yml) for Azure Database for MySQL – Flexible Server.
-- Review details on [troubleshooting common migration errors](../howto-troubleshoot-common-errors.md).
+- Review details on [troubleshooting common migration errors](../howto-troubleshoot-common-errors.md).
network-function-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-function-manager/faq.md
Check with your network function partner on the billing cycle for network functi
## Next steps
-For more information, see the [Overview](overview.md).
+For more information, see the [Overview](overview.md).
networking Azure For Network Engineers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/azure-for-network-engineers.md
Title: 'Azure ExpressRoute: Azure for Network Engineers'
+ Title: 'Azure for Network Engineers'
description: This page explains to traditional network engineers how networks work in Azure.
documentationcenter: na
-+
Learn about the [network security groups][network-security].
<!--Link References-->
[VNet]: ../virtual-network/tutorial-connect-virtual-networks-portal.md
[vnet-routing]: ../virtual-network/virtual-networks-udr-overview.md
-[network-security]: ../virtual-network/network-security-groups-overview.md
+[network-security]: ../virtual-network/network-security-groups-overview.md
object-anchors Get Started Hololens Directx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/quickstarts/get-started-hololens-directx.md
Previously updated : 02/02/2021 Last updated : 08/02/2021
To complete this quickstart, make sure you have:
[!INCLUDE [Create Account](../../../includes/object-anchors-get-started-create-account.md)]
+
## Open the sample project

[!INCLUDE [Clone Sample Repo](../../../includes/object-anchors-clone-sample-repository.md)]
Now, build the **AoaSampleApp** project by right-clicking the project and select
After compiling the sample project successfully, you can deploy the app to HoloLens.
-Power on the HoloLens device, sign in, and connect it to the PC using a USB cable. Make sure **Device** is the chosen deployment target (see above).
+Ensure the HoloLens device is powered on and connected to the PC through a USB cable. Make sure **Device** is the chosen deployment target (see above).
Right-click the **AoaSampleApp** project, then click **Deploy** from the pop-up menu to install the app. If no error shows up in Visual Studio's **Output Window**, the app will be installed on HoloLens.

:::image type="content" source="./media/vs-deploy-app.png" alt-text="Deploy app to HoloLens":::
-Before launching the app, you need to upload an object model. Follow the instructions in **Ingest object model and detect its instance** section below.
+Before launching the app, you need to have uploaded an object model, **chair.ou** for example, to the **3D Objects** folder on your HoloLens. If you haven't, follow the instructions in the ["Upload your model"](#upload-your-model) section.
-To launch and debug the app, select **Debug > Start debugging**. To stop the app, select either **Stop Debugging** or press **Shift + F5**.
+To launch and debug the app, select **Debug > Start debugging**.
## Ingest object model and detect its instance
-You'll need to create an object model to run the sample app. Assume you've already got either a CAD or scanned 3D mesh model of an object in your space. Refer to [Quickstart: Ingesting a 3D Model](./get-started-model-conversion.md) on how to create a model.
-
-Download that model, **chair.ou** in our case, to your computer. Then, from the HoloLens device portal, select **System > File explorer > LocalAppData > AoaSampleApp > LocalState** and select **Browse...**. Then select your model file, **chair.ou** for example, and select **Upload**. You should then see the model file in the local cache.
--
-From the HoloLens, launch the **AoaSampleApp** app (if it was already open, close it, and reopen it). Walk close (within 2-meter distance) to the target object (chair) and scan it by looking at it from multiple perspectives. You should see a pink bounding box around the object with some yellow points rendered close to object's surface, which indicates that it was detected.
+The **AoaSampleApp** app is now running on your HoloLens device. Walk close (within 2-meter distance) to the target object (chair) and scan it by looking at it from multiple perspectives. You should see a pink bounding box around the object with some yellow points rendered close to object's surface, which indicates that it was detected.
:::image type="content" source="./media/chair-detection.png" alt-text="Chair Detection":::
Figure: a detected chair rendered with its bounding box (pink), point cloud (yel
You can define a search space for the object in the app by finger clicking in the air with either your right or left hand. The search space will switch among a sphere of 2-meters radius, a 4 m^3 bounding box and a view frustum. For larger objects such as cars, the best choice will typically be to use the view frustum selection while standing facing a corner of the object at about a 2-meter distance. Each time the search area changes, the app will remove instances currently being tracked, and then try to find them again in the new search area.
-This app can track multiple objects at one time. To do that, upload multiple models to the **LocalState** folder and set a search area that covers all the target objects. It may take longer to detect and track multiple objects.
+This app can track multiple objects at one time. To do that, upload multiple models to the **3D Objects** folder of your device and set a search area that covers all the target objects. It may take longer to detect and track multiple objects.
The app aligns a 3D model to its physical counterpart closely. A user can air tap using their left hand to turn on the high precision tracking mode, which computes a more accurate pose. This is still an experimental feature, which consumes more system resources, and could result in higher jitter in the estimated pose. Air tap again with the left hand to switch back to the normal tracking mode.
object-anchors Get Started Unity Hololens Mrtk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/quickstarts/get-started-unity-hololens-mrtk.md
Previously updated : 03/02/2021 Last updated : 08/02/2021
You'll learn how to:
[!INCLUDE [Unity device setup](../../../includes/object-anchors-quickstart-unity-device-setup.md)]
+
## Open the sample project

[!INCLUDE [Clone Sample Repo](../../../includes/object-anchors-clone-sample-repository.md)]
When a "TMP Importer" dialog prompts you to import TextMesh Pro resources, selec
[!INCLUDE [Unity build and deploy](../../../includes/object-anchors-quickstart-unity-build-deploy.md)]
-### Run the sample app
-
-Turn on the device, select **All Apps**, then locate and start the app. After the Unity splash screen, you should see a white bounding box. You can use your hand to move, scale, or rotate the bounding box. Place the box to cover the object you want to detect.
+ After the Unity splash screen, you should see a white bounding box. You can use your hand to move, scale, or rotate the bounding box. Place the box to cover the object you want to detect.
Open the <a href="https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_HandMenu.html" target="_blank">hand menu</a> and select **Lock SearchArea** to prevent further movement of the bounding box. Select **Start Search** to start object detection. When the object is detected, a mesh will be rendered on the object. Details of a detected instance will show on the screen, such as updated timestamp and surface coverage ratio. Select **Stop Search** to stop tracking and all detected instances will be removed.
You can also do other actions using the <a href="https://microsoft.github.io/Mix
* **Tracker Settings** – Toggles activation of the tracker settings menu.
* **Search Area Settings** – Toggles activation of the search area settings menu.
* **Start Tracing** – Capture diagnostics data and save it to the device. See more detail in section **Debug Detection Issues and Capture Diagnostics**.
-* **Upload Tracing** – Upload diagnostics data to the Object Anchors service. A user must provide their subscription account in `subscription.json` and upload it to the `LocalState` folder. A sample `subscription.json` file can be found below.
+* **Upload Tracing** – Upload diagnostics data to the Object Anchors service.
:::image type="content" source="./media/mrtk-hand-menu-primary.png" alt-text="Unity primary hand menu":::
:::image type="content" source="./media/mrtk-hand-menu-search-area.png" alt-text="Unity search area hand menu":::
-Example `subscription.json`:
-
-```json
-{
- "AccountId": "<your account id>",
- "AccountKey": "<your account key>",
- "AccountDomain": "<your account domain>"
-}
-```
---
[!INCLUDE [Unity troubleshooting](../../../includes/object-anchors-quickstart-unity-troubleshooting.md)]

## Next steps
object-anchors Get Started Unity Hololens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/quickstarts/get-started-unity-hololens.md
Previously updated : 03/02/2021 Last updated : 08/02/2021
You'll learn how to:
[!INCLUDE [Unity device setup](../../../includes/object-anchors-quickstart-unity-device-setup.md)]
+
## Open the sample project

[!INCLUDE [Clone Sample Repo](../../../includes/object-anchors-clone-sample-repository.md)]
In Unity, open the `quickstarts/apps/unity/basic` project.
[!INCLUDE [Unity build and deploy](../../../includes/object-anchors-quickstart-unity-build-deploy.md)]
-### Run the sample app
-
-Turn on the device, select **All Apps**, then locate and start the app. After the Unity splash screen, you'll see a message indicating that the Object Observer has been initialized. However, you'll need to add your model to the app.
--
+After the Unity splash screen, you'll see a message indicating that the Object Observer has been initialized.
The app looks for objects in the current field of view and then tracks them once detected. An instance will be removed when it's 6 meters away from the user's location. The debug text shows details about an instance, like ID, updated timestamp and surface coverage ratio.
private-link Create Private Link Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/create-private-link-service-portal.md
# Customer intent: As someone with a basic network background who's new to Azure, I want to create an Azure Private Link service by using the Azure portal
Previously updated : 01/18/2021 Last updated : 08/18/2021
- -
# Quickstart: Create a Private Link service by using the Azure portal

Get started creating a Private Link service that refers to your service. Give Private Link access to your service or resource deployed behind an Azure Standard Load Balancer. Users of your service have private access from their virtual network.
In this section, you'll create a virtual network and an internal Azure Load Bala
In this section, you create a virtual network and subnet to host the load balancer that accesses your Private Link service.
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In **Create virtual network**, enter or select this information in the **Basics** tab:
+2. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+
+3. Select **Create**.
+
+4. In **Create virtual network**, enter or select this information in the **Basics** tab:
| **Setting** | **Value** |
||--|
| **Project Details** | |
| Subscription | Select your Azure subscription |
- | Resource Group | Select **CreatePrivLinkService-rg** |
+ | Resource Group | Select **Create new**. Enter **CreatePrivLinkService-rg**. </br> Select **OK**. |
| **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **East US 2** |
+ | Region | Select **(US) East US** |
-3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+5. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-4. In the **IP Addresses** tab, enter this information:
+6. In the **IP Addresses** tab, enter this information:
| Setting | Value |
|--|-|
| IPv4 address space | Enter **10.1.0.0/16** |
-5. Under **Subnet name**, select the word **default**.
+7. Under **Subnet name**, select the word **default**.
-6. In **Edit subnet**, enter this information:
+8. In **Edit subnet**, enter this information:
| Setting | Value |
|--|-|
- | Subnet name | Enter **mySubnet** |
+ | Subnet name | Enter **myBackendSubnet** |
| Subnet address range | Enter **10.1.0.0/24** |
-7. Select **Save**.
+9. Select **Save**.
-8. Select the **Review + create** tab or select the **Review + create** button.
+10. Select the **Review + create** tab or select the **Review + create** button.
-9. Select **Create**.
+11. Select **Create**.
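The portal steps above can also be sketched with the Azure CLI; this is an equivalent, not part of the quickstart, and it assumes the resource group does not yet exist:

```shell
# Create the resource group used throughout the quickstart.
az group create \
  --name CreatePrivLinkService-rg \
  --location eastus

# Create the virtual network with the backend subnet.
az network vnet create \
  --resource-group CreatePrivLinkService-rg \
  --name myVNet \
  --address-prefix 10.1.0.0/16 \
  --subnet-name myBackendSubnet \
  --subnet-prefix 10.1.0.0/24
```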
-### Create a standard load balancer
+### Create NAT gateway
-Use the portal to create a standard internal load balancer.
+In this section, you'll create a NAT gateway and assign it to the subnet in the virtual network you created previously.
-1. On the top left-hand side of the screen, select **Create a resource** > **Networking** > **Load Balancer**.
+1. On the upper-left side of the screen, select **Create a resource > Networking > NAT gateway** or search for **NAT gateway** in the search box.
-2. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+2. Select **Create**.
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreatePrivLinkService-rg** created in the previous step.|
- | Name | Enter **myLoadBalancer** |
- | Region | Select **East US 2**. |
- | Type | Select **Internal**. |
- | SKU | Select **Standard** |
- | Virtual network | Select **myVNet** created in the previous step. |
- | Subnet | Select **mySubnet** created in the previous step. |
- | IP address assignment | Select **Dynamic**. |
- | Availability zone | Select **Zone-redundant** |
+3. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
-3. Accept the defaults for the remaining settings, and then select **Review + create**.
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **CreatePrivLinkService-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myNATGateway** |
+ | Region | Select **(US) East US 2** |
+ | Availability Zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **10**. |
-4. In the **Review + create** tab, select **Create**.
+4. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
-## Create load balancer resources
+5. In the **Outbound IP** tab, enter or select the following information:
-In this section, you configure:
+ | **Setting** | **Value** |
+ | -- | |
+ | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myNATgatewayIP**. </br> Select **OK**. |
-* Load balancer settings for a backend address pool.
-* A health probe.
-* A load balancer rule.
+6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
-### Create a backend pool
+7. In the **Subnet** tab, select **myVNet** in the **Virtual network** pull-down.
-A backend address pool contains the IP addresses of the virtual (NICs) connected to the load balancer.
+8. Check the box next to **myBackendSubnet**.
-Create the backend address pool **myBackendPool** to include virtual machines for load-balancing internet traffic.
+9. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+10. Select **Create**.
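The NAT gateway steps above can be sketched with the Azure CLI as follows (names match the portal walkthrough; region flags omitted for brevity):

```shell
# Create the standard public IP for the NAT gateway.
az network public-ip create \
  --resource-group CreatePrivLinkService-rg \
  --name myNATgatewayIP \
  --sku Standard

# Create the NAT gateway with a 10-minute idle timeout.
az network nat gateway create \
  --resource-group CreatePrivLinkService-rg \
  --name myNATGateway \
  --public-ip-addresses myNATgatewayIP \
  --idle-timeout 10

# Associate the NAT gateway with the backend subnet.
az network vnet subnet update \
  --resource-group CreatePrivLinkService-rg \
  --vnet-name myVNet \
  --name myBackendSubnet \
  --nat-gateway myNATGateway
```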
-2. Under **Settings**, select **Backend pools**, then select **Add**.
+### Create load balancer
-3. On the **Add a backend pool** page, for name, type **myBackendPool**, as the name for your backend pool, and then select **Add**.
+In this section, you create a load balancer that load balances virtual machines.
-### Create a health probe
+During the creation of the load balancer, you'll configure:
-The load balancer monitors the status of your app with a health probe.
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
-The health probe adds or removes VMs from the load balancer based on their response to health checks.
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-Create a health probe named **myHealthProbe** to monitor the health of the VMs.
+2. In the **Load balancer** page, select **Create**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-2. Under **Settings**, select **Health probes**, then select **Add**.
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHealthProbe**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**.|
- | Interval | Enter **15** for number of **Interval** in seconds between probe attempts. |
- | Unhealthy threshold | Select **2** for number of **Unhealthy threshold** or consecutive probe failures that must occur before a VM is considered unhealthy.|
- | | |
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreatePrivLinkService-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **(US) East US 2**. |
+ | Type | Select **Internal**. |
+ | SKU | Leave the default **Standard**. |
-3. Leave the rest the defaults and Select **OK**.
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
-### Create a load balancer rule
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination port are defined in the rule.
+6. Enter **LoadBalancerFrontend** in **Name**.
-In this section, you'll create a load balancer rule:
+7. Select **myBackendSubnet** in **Subnet**.
-* Named **myHTTPRule**.
-* In the frontend named **LoadBalancerFrontEnd**.
-* Listening on **Port 80**.
-* Directs load balanced traffic to the backend named **myBackendPool** on **Port 80**.
+8. Select **Dynamic** for **Assignment**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+9. Select **Zone-redundant** in **Availability zone**.
-2. Under **Settings**, select **Load-balancing rules**, then select **Add**.
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+10. Select **Add**.
+
+11. Select **Next: Backend pools** at the bottom of the page.
+
+12. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+13. Enter **myBackendPool** for **Name** in **Add backend pool**.
+
+14. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+
+15. Select **IPv4** or **IPv6** for **IP version**.
+
+16. Select **Add**.
+
+17. Select the **Next: Inbound rules** button at the bottom of the page.
+
+18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+19. In **Add load balancing rule**, enter or select the following information:
-3. Use these values to configure the load-balancing rule:
-
| Setting | Value |
| - | -- |
- | Name | Enter **myHTTPRule**. |
- | IP Version | Select **IPv4** |
- | Frontend IP address | Select **LoadBalancerFrontEnd** |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **LoadBalancerFrontend**. |
| Protocol | Select **TCP**. |
- | Port | Enter **80**.|
+ | Port | Enter **80**. |
| Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**.|
- | Health probe | Select **myHealthProbe**. |
- | Idle timeout (minutes) | Move the slider to **15** minutes. |
+ | Backend pool | Select **myBackendPool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
| TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+
+20. Select **Add**.
-4. Leave the rest of the defaults and then select **OK**.
+21. Select the blue **Review + create** button at the bottom of the page.
+
+22. Select **Create**.
## Create a private link service
In this section, you'll create a Private Link service behind a standard load bal
| Resource Group | Select **CreatePrivLinkService-rg**. |
| **Instance details** | |
| Name | Enter **myPrivateLinkService**. |
- | Region | Select **East US 2**. |
+ | Region | Select **(US) East US 2**. |
6. Select the **Outbound settings** tab or select **Next: Outbound settings** at the bottom of the page.
In this section, you'll create a Private Link service behind a standard load bal
Your private link service is created and can receive traffic. If you want to see traffic flows, configure your application behind your standard load balancer.
-
## Create private endpoint

In this section, you'll map the private link service to a private endpoint. A virtual network contains the private endpoint for the private link service. This virtual network contains the resources that will access your private link service.
In this section, you'll map the private link service to a private endpoint. A vi
| Resource Group | Select **CreatePrivLinkService-rg** |
| **Instance details** | |
| Name | Enter **myVNetPE** |
- | Region | Select **East US 2** |
+ | Region | Select **(US) East US 2** |
3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
In this section, you'll map the private link service to a private endpoint. A vi
| Resource group | Select **CreatePrivLinkService-rg**. You created this resource group in the previous section.|
| **Instance details** | |
| Name | Enter **myPrivateEndpoint**. |
- | Region | Select **East US 2**. |
+ | Region | Select **(US) East US 2**. |
6. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page.
In this section, you'll find the IP address of the private endpoint that corresp
4. In the **Overview** page of **myPrivateEndpoint**, select the name of the network interface associated with the private endpoint. The network interface name begins with **myPrivateEndpoint.nic**.

5. In the **Overview** page of the private endpoint NIC, the IP address of the endpoint is displayed in **Private IP address**.
-
## Clean up resources

When you're done using the private link service, delete the resource group to clean up the resources used in this quickstart.

1. Enter **CreatePrivLinkService-rg** in the search box at the top of the portal, and select **CreatePrivLinkService-rg** from the search results.
-1. Select **Delete resource group**.
-1. In **TYPE THE RESOURCE GROUP NAME**, enter **CreatePrivLinkService-rg**.
-1. Select **Delete**.
+
+2. Select **Delete resource group**.
+
+3. In **TYPE THE RESOURCE GROUP NAME**, enter **CreatePrivLinkService-rg**.
+
+4. Select **Delete**.
## Next steps
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-lineage-user-guide.md
Previously updated : 11/29/2020 Last updated : 08/10/2021 # Azure Purview Data Catalog lineage user guide
One of the platform features of Azure Purview is the ability to show the lineage
### Data processing system

Data integration and ETL tools can push lineage into Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, and Azure Databricks belong to this category of data systems. The data processing systems reference datasets as sources from different databases and storage solutions to create target datasets. The data processing systems currently integrated with Purview for lineage are listed in the following table.
-
| Data processing system | Supported scope |
| - | |
-| Azure Data Factory | [Copy activity](how-to-link-azure-data-factory.md#data-factory-copy-activity-support) <br> [Data flow activity](how-to-link-azure-data-factory.md#data-factory-data-flow-support) <br> [Execute SSIS package activity](how-to-link-azure-data-factory.md#data-factory-execute-ssis-package-support) |
+| Azure Data Factory | [Copy activity](how-to-link-azure-data-factory.md#copy-activity-support) <br> [Data flow activity](how-to-link-azure-data-factory.md#data-flow-support) <br> [Execute SSIS package activity](how-to-link-azure-data-factory.md#execute-ssis-package-support) |
+| Azure Synapse Analytics | [Copy activity](how-to-lineage-azure-synapse-analytics.md#copy-activity-support) |
| Azure Data Share | [Share snapshot](how-to-link-azure-data-share.md) |

### Data storage systems
purview How To Lineage Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-azure-synapse-analytics.md
+
+ Title: Metadata and lineage from Azure Synapse Analytics
+description: This article describes how to connect Azure Synapse Analytics and Azure Purview to track data lineage.
+++++ Last updated : 08/10/2021+
+# How to get lineage from Azure Synapse Analytics into Azure Purview
+
+This document explains the steps required for connecting an Azure Synapse workspace with an Azure Purview account to track data lineage. The document also covers the coverage scope and supported lineage capabilities.
+
+## Supported Azure Synapse capabilities
+
+Currently, Azure Purview captures runtime lineage from the following Azure Synapse pipeline activities:
+
+- [Copy Data](../data-factory/copy-activity-overview.md)
+
+> [!IMPORTANT]
+> Azure Purview drops lineage if the source or destination uses an unsupported data storage system.
++
+## Bring Azure Synapse lineage into Purview
+
+### Step 1: Connect Azure Synapse workspace to your Purview account
+
+You can connect an Azure Synapse workspace to Purview, and the connection enables Azure Synapse to push lineage information to Purview. Follow the steps in [Connect an Azure Purview Account into Synapse](../synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md). Multiple Azure Synapse workspaces can connect to a single Azure Purview account for holistic lineage tracking.
+
+### Step 2: Run pipeline in Azure Synapse workspace
+
+You can create pipelines with a Copy activity in your Azure Synapse workspace. No additional configuration is needed for lineage data capture; lineage is captured automatically during activity execution.
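For illustration, a minimal Synapse pipeline with a single Copy activity might look like the following sketch. The pipeline, activity, and dataset names here are hypothetical, and the source and sink types depend on your data stores:

```json
{
  "name": "CopyPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopyFromBlobToSql",
        "type": "Copy",
        "inputs": [ { "referenceName": "SourceBlobDataset", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "SinkSqlDataset", "type": "DatasetReference" } ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```

When this pipeline runs, the Copy activity is the unit whose lineage is reported to Purview.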
+
+### Step 3: Monitor lineage reporting status
+
+After you run the Azure Synapse pipeline, you can check the lineage reporting status in the Synapse pipeline monitoring view by selecting the **Lineage status** button. The same information is also available in the `reportLineageToPurview` section of the activity output JSON.
++
+### Step 4: View lineage information in your Purview account
+
+In your Purview account, you can browse assets and choose type "Azure Synapse Analytics". You can also search the Data Catalog using keywords.
++
+Select the Synapse account -> pipeline -> activity to view the lineage information.
++
+## Next steps
+
+[Catalog lineage user guide](catalog-lineage-user-guide.md)
+
+[Link to Azure Data Share for lineage](how-to-link-azure-data-share.md)
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-link-azure-data-factory.md
Previously updated : 05/31/2021 Last updated : 08/10/2021 # How to connect Azure Data Factory and Azure Purview
This document explains the steps required for connecting an Azure Data Factory a
## View existing Data Factory connections
-Multiple Azure Data Factories can connect to a single Azure Purview Data Catalog to push lineage information. The current limit allows you to connect up ten Data Factory accounts at a time from the Purview management center. To show the list of Data Factory accounts connected to your Purview Data Catalog, do the following:
+Multiple Azure Data Factories can connect to a single Azure Purview Data Catalog to push lineage information. The current limit allows you to connect up to 10 Data Factory accounts at a time from the Purview management center. To show the list of Data Factory accounts connected to your Purview Data Catalog, do the following:
1. Select **Management** on the left navigation pane. 2. Under **Lineage connections**, select **Data Factory**.
Multiple Azure Data Factories can connect to a single Azure Purview Data Catalog
> > Besides, it requires the users to be the data factory's "Owner" or "Contributor".
-Follow the steps below to connect an existing Data Factory accounts to your Purview Data Catalog.
+Follow the steps below to connect an existing data factory to your Purview Data Catalog.
1. Select **Management** on the left navigation pane. 2. Under **Lineage connections**, select **Data Factory**.
Follow the steps below to connect an existing Data Factory accounts to your Purv
A warning message will be displayed if any of the selected data factories are already connected to another Purview account. By selecting OK, the Data Factory connection with the other Purview account will be disconnected. No additional confirmations are required.
-
:::image type="content" source="./media/how-to-link-azure-data-factory/warning-for-disconnect-factory.png" alt-text="Screenshot showing warning to disconnect Azure Data Factory." lightbox="./media/how-to-link-azure-data-factory/warning-for-disconnect-factory.png":::

>[!Note]
Follow the steps below to connect an existing Data Factory accounts to your Purv
### How does the authentication work?
-When a Purview user registers an Data Factory to which they have access to, the following happens in the backend:
+When a Purview user registers a data factory to which they have access, the following happens in the backend:
1. The **Data Factory managed identity** gets added to Purview RBAC role: **Purview Data Curator**.
To remove a data factory connection, do the following:
:::image type="content" source="./media/how-to-link-azure-data-factory/remove-data-factory-connection.png" alt-text="Screenshot showing how to select data factories to remove connection." lightbox="./media/how-to-link-azure-data-factory/remove-data-factory-connection.png":::
-## Configure a Self-hosted Integration Runtime to collect lineage
-
-Lineage for the Data Factory Copy activity is available for on-premises data stores like SQL databases. If you're running self-hosted integration runtime for the data movement with Azure Data Factory and want to capture lineage in Azure Purview, ensure the version is 5.0 or later. For more information about self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
- ## Supported Azure Data Factory activities Azure Purview captures runtime lineage from the following Azure Data Factory activities:
Azure Purview captures runtime lineage from the following Azure Data Factory act
The integration between Data Factory and Purview supports only a subset of the data systems that Data Factory supports, as described in the following sections.
-### Data Factory Copy activity support
-
-| Data store | Supported |
-| - | - |
-| Azure Blob Storage | Yes |
-| Azure Cognitive Search | Yes |
-| Azure Cosmos DB (SQL API) \* | Yes |
-| Azure Cosmos DB's API for MongoDB \* | Yes |
-| Azure Data Explorer \* | Yes |
-| Azure Data Lake Storage Gen1 | Yes |
-| Azure Data Lake Storage Gen2 | Yes |
-| Azure Database for Maria DB \* | Yes |
-| Azure Database for MySQL \* | Yes |
-| Azure Database for PostgreSQL \* | Yes |
-| Azure File Storage | Yes |
-| Azure SQL Database \* | Yes |
-| Azure SQL Managed Instance \* | Yes |
-| Azure Synapse Analytics \* | Yes |
-| Azure Table Storage | Yes |
-| Amazon S3 | Yes |
-| Hive \* | Yes |
-| SAP Table *(when used to connect to SAP ECC or SAP S/4HANA)* | Yes |
-| SQL Server \* | Yes |
-| Teradata \* | Yes |
-
-*\* Azure Purview currently doesn't support query or stored procedure for lineage or scanning. Lineage is limited to table and view sources only.*
-
-> [!Note]
-> The lineage feature has certain performance overhead in Data Factory copy activity. For those who setup data factory connections in Purview, you may observe certain copy jobs taking longer to complete. Mostly the impact is none to negligible. Please contact support with time comparison if the copy jobs take significantly longer to finish than usual.
-
-#### Known limitations on copy activity lineage
-
-Currently, if you use the following copy activity features, the lineage is not yet supported:
-- Copy data into Azure Data Lake Storage Gen1 using Binary format.
-- Copy data into Azure Synapse Analytics using PolyBase or COPY statement.
-- Compression setting for Binary, delimited text, Excel, JSON, and XML files.
-- Source partition options for Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, SQL Server, and SAP Table.
-- Source partition discovery option for file-based stores.
-- Copy data to file-based sink with setting of max rows per file.
-- Add additional columns during copy.
-
-In additional to lineage, the data asset schema (shown in Asset -> Schema tab) is reported for the following connectors:
-- CSV and Parquet files on Azure Blob, Azure File Storage, ADLS Gen1, ADLS Gen2, and Amazon S3
-- Azure Data Explorer, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, SQL Server, Teradata
-
-### Data Factory Data Flow support
+### Data Flow support
| Data store | Supported | | - | - |
In additional to lineage, the data asset schema (shown in Asset -> Schema tab) i
*\* Azure Purview currently doesn't support query or stored procedure for lineage or scanning. Lineage is limited to table and view sources only.*
-### Data Factory Execute SSIS Package support
+### Execute SSIS Package support
Refer to [supported data stores](how-to-lineage-sql-server-integration-services.md#supported-data-stores).
+## Bring Data Factory lineage into Purview
+
+For an end to end walkthrough, follow the [Tutorial: Push Data Factory lineage data to Azure Purview](../data-factory/turorial-push-lineage-to-purview.md).
## Supported lineage patterns

There are several patterns of lineage that Azure Purview supports. The generated lineage data is based on the type of source and sink used in the Data Factory activities. Although Data Factory supports over 80 sources and sinks, Azure Purview supports only a subset, as listed in [Supported Azure Data Factory activities](#supported-azure-data-factory-activities). To configure Data Factory to send lineage information, see [Get started with lineage](catalog-lineage-user-guide.md#get-started-with-lineage).
-Some additional ways of finding information in the lineage view, include the following:
+Some other ways of finding information in the lineage view include the following:
-- In the **Lineage** tab, hover on shapes to preview additional information about the asset in the tooltip .
+- In the **Lineage** tab, hover on shapes to preview additional information about the asset in the tooltip.
- Select the node or edge to see the asset type it belongs to or to switch assets.
- Columns of a dataset are displayed in the left side of the **Lineage** tab. For more information about column-level lineage, see [Dataset column lineage](catalog-lineage-user-guide.md#dataset-column-lineage).
In the following example, an Azure Data Lake Gen2 resource set is produced from
## Next steps
-- [Catalog lineage user guide](catalog-lineage-user-guide.md)
-- [Link to Azure Data Share for lineage](how-to-link-azure-data-share.md)
+[Tutorial: Push Data Factory lineage data to Azure Purview](../data-factory/turorial-push-lineage-to-purview.md)
+
+[Catalog lineage user guide](catalog-lineage-user-guide.md)
+
+[Link to Azure Data Share for lineage](how-to-link-azure-data-share.md)
purview Reference Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/reference-purview-glossary.md
Previously updated : 06/21/2021 Last updated : 08/16/2021 # Azure Purview product glossary
A report that shows key classification details about the scanned data.  
## Classification
A type of annotation used to identify an attribute of an asset or a column such as "Age", "Email Address", and "Street Address". These attributes can be assigned during scans or added manually.
## Classification rule
-A classification rule is a set of conditions that determine how scanned data should be classified when content matches the specified pattern.  
+A classification rule is a set of conditions that determine how scanned data should be classified when content matches the specified pattern.
+## Classified asset
+An asset where Azure Purview extracts schema and applies classifications during an automated scan. The scan rule set determines which assets get classified. If an asset is considered a candidate for classification but no classifications are applied during scan time, it is still considered a classified asset.
+## Column pattern
+A regular expression included in a classification rule that represents the column names that you want to match.
## Contact
An individual who is associated with an entity in the data catalog.
-## Control plane
+## Control plane operation
Operations that manage resources in your subscription, such as role-based access control and Azure Policy, which are sent to the Azure Resource Manager endpoint.
## Credential
A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group for the purpose of granting access to a data asset.
A verification of identity or tool used in an access control system. Credent
Azure Purview features that enable customers to view and manage the metadata for assets in your data estate.
## Data map
Azure Purview features that enable customers to manage their data estate, such as scanning, lineage, and movement.
+## Data pattern
+A regular expression that represents the data that is stored in a data field. For example, a data pattern for employee ID could be Employee{GUID}.
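Both column patterns and data patterns are ordinary regular expressions. A sketch of what the two might look like for the Employee{GUID} example (the exact patterns here are hypothetical, not Purview's built-in rules):

```python
import re

# Hypothetical column pattern: matches column names such as "EmployeeID" or "employee_id"
column_pattern = re.compile(r"employee[_\s]?id", re.IGNORECASE)

# Hypothetical data pattern for Employee{GUID}: "Employee" followed by a GUID
hex4 = "[0-9a-fA-F]{4}"
data_pattern = re.compile(
    rf"Employee[0-9a-fA-F]{{8}}-{hex4}-{hex4}-{hex4}-[0-9a-fA-F]{{12}}")

print(bool(column_pattern.fullmatch("EmployeeID")))  # True
print(bool(data_pattern.fullmatch(
    "Employee3f2504e0-4f89-11d3-9a0c-0305e82c3301")))  # True
```

The column pattern narrows which columns a classification rule even considers; the data pattern then tests the values inside those columns.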
+## Data plane operation
+An operation within a specific Azure Purview instance, such as editing an asset or creating a glossary term. Each instance has predefined roles, such as "data reader" and "data curator" that control which data plane operations a user can perform.
+## Discovered asset
+An asset that Azure Purview identifies in a data source during the scanning process. The number of discovered assets includes all files or tables before resource set grouping.
+## Distinct match threshold
+The total number of distinct data values that need to be found in a column before the scanner runs the data pattern on it. For example, a distinct match threshold of eight for employee ID requires that there are at least eight unique data values among the sampled values in the column that match the data pattern set for employee ID.
## Expert
-An individual within an organization who understands the full context of a data asset or glossary term. 
+An individual within an organization who understands the full context of a data asset or glossary term.
+## Full scan
+A scan that processes all assets within a selected scope of a data source.
## Fully Qualified Name (FQN)
A path that defines the location of an asset within its data source.
## Glossary term
-An entry in the Business glossary that defines a concept specific to an organization. Glossary terms can contain information on synonyms, acronyms, and related terms. 
+An entry in the Business glossary that defines a concept specific to an organization. Glossary terms can contain information on synonyms, acronyms, and related terms.
+## Incremental scan
+A scan that detects and processes assets which have been created, modified, or deleted since the previous successful scan. To run an incremental scan, at least one full scan must be completed on the source.
+## Ingested asset
+An asset that has been scanned, classified (when applicable), and added to the Azure Purview data map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse.
## Insights
An area within Azure Purview where you can view reports that summarize information about your data.
## Integration runtime
The compute infrastructure used to scan in a data source.
How data transforms and flows as it moves from its origin to its destination. Understanding this flow across the data estate helps organizations see the history of their data and aids in troubleshooting or impact analysis.
## Management Center
An area within Azure Purview where you can manage connections, users, roles, and credentials.
+## Minimum match threshold
+The minimum percentage of matches among the distinct data values in a column that must be found by the scanner for a classification to be applied.
+
+For example, a minimum match threshold of 60% for employee ID requires that 60% of all distinct values among the sampled data in a column match the data pattern set for employee ID. If the scanner samples 128 values in a column and finds 60 distinct values in that column, then at least 36 of the distinct values (60%) must match the employee ID data pattern for the classification to be applied.
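The arithmetic above can be modeled as a short sketch. This is illustrative only; the function name and data pattern are hypothetical, not Purview's implementation:

```python
import re

def classification_applies(sampled_values, data_pattern,
                           distinct_threshold=8, minimum_match=0.60):
    """Model how distinct-match and minimum-match thresholds interact."""
    distinct = set(sampled_values)
    if len(distinct) < distinct_threshold:
        return False  # too few distinct values to evaluate the pattern
    matching = sum(1 for v in distinct if re.fullmatch(data_pattern, v))
    return matching / len(distinct) >= minimum_match

# 60 distinct values sampled; 36 match the hypothetical employee ID pattern
sample = [f"EMP{i:04d}" for i in range(36)] + [f"guest-{i}" for i in range(24)]
print(classification_applies(sample, r"EMP\d{4}"))  # 36/60 = 60% -> True
```

With 30 matches out of the same 60 distinct values (50%), the classification would not be applied.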
## On-premises data
Data that is in a data center controlled by a customer, for example, not in the cloud or as software as a service (SaaS).
## Owner
An Azure Purview process that examines a source or set of sources and inge
A set of rules that define which data types and classifications a scan ingests into a catalog.
## Scan trigger
A schedule that determines the recurrence of when a scan runs.
+## Search
+A data discovery feature of Azure Purview that returns a list of assets that match a keyword.
## Search relevance The scoring of data assets that determines the order search results are returned. Multiple factors determine an asset's relevance score. ## Self-hosted integration runtime
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-defining-skillset.md
Within a skillset definition, the skills array specifies which skills to execute
### How built-in skills are structured
-Each skill is unique in terms of its input values and the parameters it takes. The documentation for each skill describes all of the properties of a given skill. Although there are difference, most skills share a common set of parameters and are similarly patterned. To illustrate several points, the [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md) provides an example:
+Each skill is unique in terms of its input values and the parameters it takes. The [documentation for each skill](cognitive-search-predefined-skills.md) describes all of the parameters and properties of a given skill. Although there are differences, most skills share a common set and are similarly patterned. To illustrate several points, the [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md) provides an example:
```json
{
Each skill is unique in terms of its input values and the parameters it takes. T
}
```
-+ Common parameters include `"odata.type"` which uniquely identifies the skill, `inputs`, and `outputs`. The other properties, namely`"categories"` and `"defaultLanguageCode"`, are examples of properties that are specific to Entity Recognition.
+Common parameters include "odata.type", "inputs", and "outputs". The other parameters, namely "categories" and "defaultLanguageCode", are examples of parameters that are specific to Entity Recognition.
-+ `"context"` is a node in an enrichment tree and it represents the level at which operations take place. All skills have this property. If the `"context"` field is not explicitly set, the default context is the document. In the example, the context is the whole document, meaning that the entity recognition skill is called once per document.
++ **"odata.type"** uniquely identifies each skill. You can find the type in the [skill reference documentation](cognitive-search-predefined-skills.md).
- The context also determines where outputs are also produced in the enrichment tree. In this example, the skill returns a property called `organizations`, captured as `orgs`, which is added as a child node of `"/document"`. In downstream skills, the path to this newly-created enrichment node is `"/document/orgs"`. For a particular document, the value of `"/document/orgs"` is an array of organizations extracted from the text (for example: `["Microsoft", "LinkedIn"]`).
++ **"context"** is a node in an enrichment tree and it represents the level at which operations take place. All skills have this property. If the "context" field is not explicitly set, the default context is `"/document"`. In the example, the context is the whole document, meaning that the entity recognition skill is called once per document.
-+ This skill has one input called "text", with a source input set to `"/document/content"`. An input's name is a valid value that's defined for the skill. The source is a path to a node in the enrichment tree. In this example, the skill operates on the *content* field of each document, which is a standard field created by the Azure Blob indexer.
 The context also determines where outputs are produced in the enrichment tree. In this example, the skill returns a property called `"organizations"`, captured as `orgs`, which is added as a child node of `"/document"`. In downstream skills, the path to this newly-created enrichment node is `"/document/orgs"`. For a particular document, the value of `"/document/orgs"` is an array of organizations extracted from the text (for example: `["Microsoft", "LinkedIn"]`). For more information about path syntax, see [Referencing annotations in a skillset](cognitive-search-concept-annotations-syntax.md).
-+ Outputs exist only during processing. To chain this output to a downstream skill's input, reference the output as `"/document/orgs"`. To send output to a field in a search index, [create an output field mapping](cognitive-search-output-field-mapping.md) in an indexer. To send output to a knowledge store, [create a projection](knowledge-store-projection-overview.md).
++ **"inputs"** specify the origin of the incoming data and how it will be used. In the case of Entity Recognition, one of the inputs is `"text"`, which is the content to be analyzed for entities. The content is sourced from the `"/document/content"` node in an enrichment tree. In an enrichment tree, `"/document"` is the root node. For documents retrieved using an Azure Blob indexer, the `content` field of each document is a standard field created by the indexer.
-Outputs from the one skill can conflict with outputs from a different skill. If you have multiple skills returning the same output, use the `targetName` for name disambiguation in enrichment node paths.
++ **"outputs"** represent the output of the skill. Each skill is designed to emit specific kinds of output, which are referenced by name in the skillset. In the case of Entity Recognition, `"organizations"` is one of the outputs it supports. The documentation for each skill describes the outputs it can produce.
+
+Outputs exist only during processing. To chain this output to a downstream skill's input, reference the output as `"/document/orgs"`. To send output to a field in a search index, [create an output field mapping](cognitive-search-output-field-mapping.md) in an indexer. To send output to a knowledge store, [create a projection](knowledge-store-projection-overview.md).
+
+Outputs from one skill can conflict with outputs from a different skill. If multiple skills return the same output, use `"targetName"` to disambiguate names in enrichment node paths.
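For instance, a skill's outputs array could rename its `organizations` output so it doesn't collide with another skill's output of the same name (the target name below is hypothetical):

```json
"outputs": [
  {
    "name": "organizations",
    "targetName": "orgsFromEntityRecognition"
  }
]
```

Downstream skills would then reference `"/document/orgsFromEntityRecognition"` instead of the default path.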
Some situations call for referencing each element of an array separately. For example, suppose you want to pass *each element* of `"/document/orgs"` separately to another skill. To do so, add an asterisk to the path: `"/document/orgs/*"`
-The second skill for sentiment analysis follows the same pattern as the first enricher. It takes `"/document/content"` as input, and returns a sentiment score for each content instance. Since you did not set the `"context"` field explicitly, the output (mySentiment) is now a child of `"/document"`.
+The second skill for sentiment analysis follows the same pattern as the first enricher. It takes `"/document/content"` as input, and returns a sentiment score for each content instance. Since you did not set the "context" field explicitly, the output (mySentiment) is now a child of `"/document"`.
```json
{
As each skill executes, its output is added as nodes in a document's enrichment
In the early stages of skillset evaluation, you'll want to check preliminary results with minimal effort. We recommend the search index because it's simpler to set up. For each skill output, [define an output field mapping](cognitive-search-output-field-mapping.md) in the indexer, and a field in the index. After running the indexer, you can use [Search Explorer](search-explorer.md) to return documents from the index and check the contents of each field to determine what the skillset detected or created. The following example shows the results of an entity recognition skill that detected persons, locations, organizations, and other entities in a chunk of text. Viewing the results in Search Explorer can help you determine whether a skill adds value to your solution.

## Next steps

Context and input source fields are paths to nodes in an enrichment tree. As a next step, learn more about the syntax for setting up paths to nodes in an enrichment tree.

> [!div class="nextstepaction"]
-> [Referencing annotations in a skillset](cognitive-search-concept-annotations-syntax.md).
+> [Referencing annotations in a skillset](cognitive-search-concept-annotations-syntax.md)
search Cognitive Search Predefined Skills https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-predefined-skills.md
Built-in skills are based on pre-trained models from Microsoft, which means you
The following table enumerates and describes the built-in skills.
-| Type | Description | Metered by |
+| OData type | Description | Metered by |
|-|-|-|
|[Microsoft.Skills.Text.CustomEntityLookupSkill](cognitive-search-skill-custom-entity-lookup.md) | Looks for text from a custom, user-defined list of words and phrases.| Azure Cognitive Search ([pricing](https://azure.microsoft.com/pricing/details/search/)) |
| [Microsoft.Skills.Text.KeyPhraseExtractionSkill](cognitive-search-skill-keyphrases.md) | This skill uses a pretrained model to detect important phrases based on term placement, linguistic rules, proximity to other terms, and how unusual the term is within the source data. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
security-center Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-container-registries-introduction.md
Title: Azure Defender for container registries - the benefits and features
description: Learn about the benefits and features of Azure Defender for container registries. Previously updated : 07/05/2021 Last updated : 08/16/2021
Azure Container Registry (ACR) is a managed, private Docker registry service tha
To protect the Azure Resource Manager based registries in your subscription, enable **Azure Defender for container registries** at the subscription level. Azure Defender will then scan all images when they're pushed to the registry, imported into the registry, or pulled within the last 30 days. You'll be charged for every image that gets scanned, once per image.
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|Generally available (GA)|
+|Pricing:|**Azure Defender for container registries** is billed as shown on [the pricing page](security-center-pricing.md)|
+|Supported registries and images:|Linux images in ACR registries accessible from the public internet with shell access<br>[ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)|
+|Unsupported registries and images:|Windows images<br>'Private' registries<br>Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images, or "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br>Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)|
+|Required roles and permissions:|**Security reader** and [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md)|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png" border="false"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png" border="false"::: US Gov and China Gov - Only the scan on push feature is currently supported. Learn more in [When are images scanned?](#when-are-images-scanned)|
+|||
## What are the benefits of Azure Defender for container registries?
security-center Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/deploy-vulnerability-assessment-vm.md
The vulnerability scanner extension works as follows:
>[!IMPORTANT]
> If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following IPs to your allow lists (via port 443 - the default for HTTPS):
>
- > - https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
+ > - `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center
>
- > - https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
+ > - `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center
>
> If your machine is in a European Azure region, its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
The Azure Security Center vulnerability assessment extension (powered by Qualys)
During setup, Security Center checks to ensure that the machine can communicate with the following two Qualys data centers (via port 443 - the default for HTTPS):

-- https://qagpublic.qg3.apps.qualys.com - Qualys' US data center
-- https://qagpublic.qg2.apps.qualys.eu - Qualys' European data center
+- `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center
+- `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center
The extension doesn't currently accept any proxy configuration details.
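Before enabling the extension, you can check from a target machine that outbound HTTPS to both data centers isn't blocked. A minimal sketch, assuming only standard-library Python on the machine (the helper names are ours, not part of the product, and this only tests TCP reachability on port 443 - it doesn't validate Qualys enrollment):

```python
import socket
from urllib.parse import urlparse

# The two Qualys data centers Security Center probes during setup.
QUALYS_ENDPOINTS = [
    "https://qagpublic.qg3.apps.qualys.com",  # US data center
    "https://qagpublic.qg2.apps.qualys.eu",   # European data center
]

def endpoint_host(url: str) -> str:
    """Extract the hostname to probe from an endpoint URL."""
    return urlparse(url).hostname

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `can_reach(endpoint_host(url))` for each URL in `QUALYS_ENDPOINTS` gives a quick signal on whether your allow lists need adjusting; remember the extension doesn't accept proxy configuration, so the connection must work directly.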
security-center Just In Time Explained https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/just-in-time-explained.md
JIT requires [Azure Defender for servers](defender-for-servers-introduction.md)
If you want to create custom roles that can work with JIT, you'll need the details from the table below.

> [!TIP]
-> To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/master/Powershell%20scripts/JIT%20Custom%20Role) from the Security Center GitHub community pages.
+> To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Security Center GitHub community pages.
| To enable a user to: | Permissions to set| | | |
If you want to create custom roles that can work with JIT, you'll need the detai
This page explained _why_ just-in-time (JIT) virtual machine (VM) access should be used. To learn about _how_ to enable JIT and request access to your JIT-enabled VMs, see the following: > [!div class="nextstepaction"]
-> [How to secure your management ports with JIT](security-center-just-in-time.md)
+> [How to secure your management ports with JIT](security-center-just-in-time.md)
security-center Prevent Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/prevent-misconfigurations.md
# Prevent misconfigurations with Enforce/Deny recommendations
-Security misconfigurations are a major cause of security incidents. Security Center now has the ability to help *prevent* misconfigurations of new resources with regard to specific recommendations.
+Security misconfigurations are a major cause of security incidents. Security Center can help *prevent* misconfigurations of new resources with regard to specific recommendations.
This feature can help keep your workloads secure and stabilize your secure score.
security-center Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/quickstart-onboard-machines.md
Security Center can monitor the security posture of your non-Azure computers, bu
You can connect your non-Azure computers in any of the following ways:

-- Using Azure Arc enabled servers (**recommended**)
+- Using Azure Arc-enabled servers (**recommended**)
- From Security Center's pages in the Azure portal (**Getting started** and **Inventory**)

Each of these is described on this page.
Each of these is described on this page.
## Add non-Azure machines with Azure Arc
-The preferred way of adding your non-Azure machines to Azure Security Center is with [Azure Arc enabled servers](../azure-arc/servers/overview.md).
+The preferred way of adding your non-Azure machines to Azure Security Center is with [Azure Arc-enabled servers](../azure-arc/servers/overview.md).
-A machine with Azure Arc enabled servers becomes an Azure resource and - when you've installed the Log Analytics agent on it - appears in Security Center with recommendations like your other Azure resources.
+A machine with Azure Arc-enabled servers becomes an Azure resource and - when you've installed the Log Analytics agent on it - appears in Security Center with recommendations like your other Azure resources.
-In addition, Azure Arc enabled servers provides enhanced capabilities such as the option to enable guest configuration policies on the machine, simplify deployment with other Azure services, and more. For an overview of the benefits, see [Supported scenarios](../azure-arc/servers/overview.md#supported-scenarios).
+In addition, Azure Arc-enabled servers provides enhanced capabilities such as the option to enable guest configuration policies on the machine, simplify deployment with other Azure services, and more. For an overview of the benefits, see [Supported cloud operations](../azure-arc/servers/overview.md#supported-cloud-operations).
> [!NOTE]
> Security Center's auto-deploy tools for deploying the Log Analytics agent don't support machines running Azure Arc. When you've connected your machines using Azure Arc, use the relevant Security Center recommendation to deploy the agent and benefit from the full range of protections offered by Security Center:
In addition, Azure Arc enabled servers provides enhanced capabilities such as th
> - [Log Analytics agent should be installed on your Linux-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1)
> - [Log Analytics agent should be installed on your Windows-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08)
-Learn more about [Azure Arc enabled servers](../azure-arc/servers/overview.md).
+Learn more about [Azure Arc-enabled servers](../azure-arc/servers/overview.md).
**To deploy Azure Arc:**

- For one machine, follow the instructions in [Quickstart: Connect hybrid machines with Azure Arc enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
-- To connect multiple machines at scale to Arc enabled servers, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md)
+- To connect multiple machines at scale to Azure Arc-enabled servers, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md)
> [!TIP]
> If you're onboarding machines running on Amazon Web Services (AWS), Security Center's connector for AWS transparently handles the Azure Arc deployment for you. Learn more in [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md).
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Previously updated : 08/10/2021 Last updated : 08/15/2021
Updates in August include:
- [Two new recommendations for managing endpoint protection solutions (in preview)](#two-new-recommendations-for-managing-endpoint-protection-solutions-in-preview)
- [Built-in troubleshooting and guidance for solving common issues](#built-in-troubleshooting-and-guidance-for-solving-common-issues)
- [Regulatory compliance dashboard's Azure Audit reports released for general availability (GA)](#regulatory-compliance-dashboards-azure-audit-reports-released-for-general-availability-ga)
-- [Deprecate recommendation 'Log Analytics agent health issues should be resolved on your machines'](#deprecated-recommendation-log-analytics-agent-health-issues-should-be-resolved-on-your-machines)
+- [Deprecated recommendation 'Log Analytics agent health issues should be resolved on your machines'](#deprecated-recommendation-log-analytics-agent-health-issues-should-be-resolved-on-your-machines)
+- [Azure Defender for container registries now scans for vulnerabilities in registries protected with Azure Private Link](#azure-defender-for-container-registries-now-scans-for-vulnerabilities-in-registries-protected-with-azure-private-link)
+- [Security Center can now auto provision the Azure Policy's Guest Configuration extension (in preview)](#security-center-can-now-auto-provision-the-azure-policys-guest-configuration-extension-in-preview)
+- [Recommendations to enable Azure Defender plans now support "Enforce"](#recommendations-to-enable-azure-defender-plans-now-support-enforce)
+- [CSV exports of recommendation data now limited to 20 MB](#csv-exports-of-recommendation-data-now-limited-to-20-mb)
+- [Recommendations page now includes multiple views](#recommendations-page-now-includes-multiple-views)
### Microsoft Defender for Endpoint for Linux now supported by Azure Defender for servers (in preview)
It's likely that this change will impact your secure scores. For most subscripti
> The [asset inventory](asset-inventory.md) page was also affected by this change as it displays the monitored status for machines (monitored, not monitored, or partially monitored - a state which refers to an agent with health issues).
+### Azure Defender for container registries now scans for vulnerabilities in registries protected with Azure Private Link
+Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md).
+
+To limit access to a registry hosted in Azure Container Registry, assign virtual network private IP addresses to the registry endpoints and use Azure Private Link as explained in [Connect privately to an Azure container registry using Azure Private Link](../container-registry/container-registry-private-link.md).
+
+As part of our ongoing efforts to support additional environments and use cases, Azure Defender now also scans container registries protected with [Azure Private Link](../private-link/private-link-overview.md).
++
+### Security Center can now auto provision the Azure Policy's Guest Configuration extension (in preview)
+Azure Policy can audit settings inside a machine, both for machines running in Azure and Arc connected machines. The validation is performed by the Guest Configuration extension and client. Learn more in [Understand Azure Policy's Guest Configuration](../governance/policy/concepts/guest-configuration.md).
+
+With this update you can now set Security Center to automatically provision this extension to all supported machines.
++
+### Recommendations to enable Azure Defender plans now support "Enforce"
+Security Center includes two features that help ensure newly created resources are provisioned in a secure manner: **enforce** and **deny**. When a recommendation offers these options, you can ensure your security requirements are met whenever someone attempts to create a resource:
+
+- **Deny** stops unhealthy resources from being created
+- **Enforce** automatically remediates non-compliant resources when they're created
+
+With this update, the enforce option is now available on the recommendations to enable Azure Defender plans (such as **Azure Defender for App Service should be enabled**, **Azure Defender for Key Vault should be enabled**, **Azure Defender for Storage should be enabled**).
+
+Learn more about these options in [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).
+
+### CSV exports of recommendation data now limited to 20 MB
+
+We're instituting a limit of 20 MB when exporting Security Center recommendations data.
++
+If you need to export larger amounts of data, use the available filters before selecting, or select subsets of your subscriptions and download the data in batches.
++
+Learn more about [performing a CSV export of your security recommendations](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations).
+++
+### Recommendations page now includes multiple views
+
+The recommendations page now has two tabs to provide alternate ways to view the recommendations relevant to your resources:
+
+- **Secure score recommendations** - Use this tab to view the list of recommendations grouped by security control. Learn more about these controls in [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations).
+- **All recommendations** - Use this tab to view the list of recommendations as a flat list. This tab is also great for understanding which initiative (including regulatory compliance standards) generated the recommendation. Learn more about initiatives and their relationship to recommendations in [What are security policies, initiatives, and recommendations?](security-policy-concept.md).
++
## July 2021
Updates in July include:
These are the alerts that were part of Azure Defender for Resource Manager, and
- ARM_VMAccessUnusualPasswordReset
- ARM_VMAccessUnusualSSHReset
-Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md).
+Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md) plans.
### Enhancements to recommendation to enable Azure Disk Encryption (ADE)
security-center Security Center Wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
Previously updated : 08/03/2021 Last updated : 08/16/2021
Confirm that your machine meets the necessary requirements for Defender for Endp
1. Ensure the machine is connected to Azure and the internet as required:
- - **Azure virtual machines (Windows or Linux)**:
- - Confirm that your target machines have the Log Analytics agent. Use the Security Center recommendation to deploy the Log Analytics agent where necessary: [Log Analytics agent should be installed on your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d1db3318-01ff-16de-29eb-28b344515626).
- - Configure the network settings described in configure device proxy and internet connectivity settings: [Windows](/windows/security/threat-protection/microsoft-defender-atp/configure-proxy-internet) or [Linux](/microsoft-365/security/defender-endpoint/linux-static-proxy-configuration)
+ - **Azure virtual machines (Windows or Linux)** - Configure the network settings described in configure device proxy and internet connectivity settings: [Windows](/windows/security/threat-protection/microsoft-defender-atp/configure-proxy-internet) or [Linux](/microsoft-365/security/defender-endpoint/linux-static-proxy-configuration).
- - **On-premises machines**:
- 1. Connect your target machines to Azure Arc as explained in [Connect hybrid machines with Azure Arc enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
- 1. Use the relevant Security Center recommendation to deploy the Log Analytics agent:<br>[Log Analytics agent should be installed on your **Linux-based** Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1)<br>[Log Analytics agent should be installed on your **Windows-based** Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08)
+ - **On-premises machines** - Connect your target machines to Azure Arc as explained in [Connect hybrid machines with Azure Arc enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
1. Enable **Azure Defender for servers**. See [Quickstart: Enable Azure Defender](enable-azure-defender.md).
Full instructions for switching from a non-Microsoft endpoint solution are avail
## Next steps

- [Platforms and features supported by Azure Security Center](security-center-os-coverage.md)
-- [Managing security recommendations in Azure Security Center](security-center-recommendations.md): Learn how recommendations help you protect your Azure resources.
+- [Learn how recommendations help you protect your Azure resources](security-center-recommendations.md)
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 08/10/2021 Last updated : 08/16/2021
If you're looking for the latest release notes, you'll find them in the [What's
## Planned changes
-| Planned change | Estimated date for change |
-|||
-| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | August 2021 |
-| [CSV exports to be limited to 20 MB](#csv-exports-to-be-limited-to-20-mb) | August 2021 |
-| [Enable Azure Defender security control to be included in secure score](#enable-azure-defender-security-control-to-be-included-in-secure-score) | Q3 2021 |
-| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | Q4 2021 |
-| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q1 2022 || | |
+| Planned change | Estimated date for change |
+|-||
+| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013)| August 2021|
+| [Changing prefix of some alert types from "ARM_" to "VM_"](#changing-prefix-of-some-alert-types-from-arm_-to-vm_) | October 2021|
+| [Enable Azure Defender security control to be included in secure score](#enable-azure-defender-security-control-to-be-included-in-secure-score) | Q3 2021 |
+| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | Q4 2021 |
+| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q1 2022 |
### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013
The legacy implementation of ISO 27001 will be removed from Security Center's re
:::image type="content" source="media/upcoming-changes/removing-iso-27001-legacy-implementation.png" alt-text="Security Center's regulatory compliance dashboard showing the message about the removal of the legacy implementation of ISO 27001." lightbox="media/upcoming-changes/removing-iso-27001-legacy-implementation.png":::
-### CSV exports to be limited to 20 MB
+### Changing prefix of some alert types from "ARM_" to "VM_"
+
+**Estimated date for change:** October 2021
+
+In July 2021 we announced a [logical reorganization of Azure Defender for Resource Manager alerts](release-notes.md#logical-reorganization-of-azure-defender-for-resource-manager-alerts).
+
+As part of a logical reorganization of some of the Azure Defender plans, we moved twenty-one alerts from [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) to [Azure Defender for servers](defender-for-servers-introduction.md).
+
+We're now planning to update the prefixes of these alerts to match this reassignment. We'll be replacing "ARM_" with "VM_" as shown in the following table.
+
+| Current name | After this change |
+||--|
+| ARM_AmBroadFilesExclusion | VM_AmBroadFilesExclusion |
+| ARM_AmDisablementAndCodeExecution | VM_AmDisablementAndCodeExecution |
+| ARM_AmDisablement | VM_AmDisablement |
+| ARM_AmFileExclusionAndCodeExecution | VM_AmFileExclusionAndCodeExecution |
+| ARM_AmTempFileExclusionAndCodeExecution | VM_AmTempFileExclusionAndCodeExecution |
+| ARM_AmTempFileExclusion | VM_AmTempFileExclusion |
+| ARM_AmRealtimeProtectionDisabled | VM_AmRealtimeProtectionDisabled |
+| ARM_AmTempRealtimeProtectionDisablement | VM_AmTempRealtimeProtectionDisablement |
+| ARM_AmRealtimeProtectionDisablementAndCodeExec | VM_AmRealtimeProtectionDisablementAndCodeExec |
+| ARM_AmMalwareCampaignRelatedExclusion | VM_AmMalwareCampaignRelatedExclusion |
+| ARM_AmTemporarilyDisablement | VM_AmTemporarilyDisablement |
+| ARM_UnusualAmFileExclusion | VM_UnusualAmFileExclusion |
+| ARM_CustomScriptExtensionSuspiciousCmd | VM_CustomScriptExtensionSuspiciousCmd |
+| ARM_CustomScriptExtensionSuspiciousEntryPoint | VM_CustomScriptExtensionSuspiciousEntryPoint |
+| ARM_CustomScriptExtensionSuspiciousPayload | VM_CustomScriptExtensionSuspiciousPayload |
+| ARM_CustomScriptExtensionSuspiciousFailure | VM_CustomScriptExtensionSuspiciousFailure |
+| ARM_CustomScriptExtensionUnusualDeletion | VM_CustomScriptExtensionUnusualDeletion |
+| ARM_CustomScriptExtensionUnusualExecution | VM_CustomScriptExtensionUnusualExecution |
+| ARM_VMAccessUnusualConfigReset | VM_VMAccessUnusualConfigReset |
+| ARM_VMAccessUnusualPasswordReset | VM_VMAccessUnusualPasswordReset |
+| ARM_VMAccessUnusualSSHReset | VM_VMAccessUnusualSSHReset |
+
+Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md) plans.
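Since only the leading prefix changes, any saved hunting queries or suppression rules that match on these alert type names can be updated mechanically. A small sketch (the helper name is ours; note that names like ARM_VMAccessUnusualSSHReset keep their inner "VM" - only the leading "ARM_" is replaced):

```python
def renamed_alert_type(alert_type: str) -> str:
    """Map an old "ARM_" alert type to its planned "VM_" name.

    Only the leading "ARM_" prefix is swapped for "VM_"; the rest of
    the name is left untouched. Names without the prefix pass through.
    """
    prefix = "ARM_"
    if alert_type.startswith(prefix):
        return "VM_" + alert_type[len(prefix):]
    return alert_type
```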
-**Estimated date for change:** August 2021
-
-When exporting Security Center recommendations data, there's currently no limit on the amount of data that you can download.
--
-With this change, we're instituting a limit of 20 MB.
-
-If you need to export larger amounts of data, use the available filters before selecting, or select subsets of your subscriptions and download the data in batches.
--
-Learn more about [performing a CSV export of your security recommendations](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations).
### Enable Azure Defender security control to be included in secure score
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/recover-from-identity-compromise.md
Azure Sentinel has many built-in resources to help in your investigation, such a
For more information, see:

- [Visualize and analyze your environment](../../sentinel/get-visibility.md)
-- [Detect threats out of the box](../../sentinel/detect-threats-built-in.md)
+- [Detect threats out of the box](../../sentinel/detect-threats-built-in.md).
### Monitoring with Microsoft 365 Defender
sentinel Audit Sentinel Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/audit-sentinel-data.md
Use Azure Sentinel's own features to monitor events and actions that occur withi
In Azure Sentinel, use the **Workspace audit** workbook to audit the activities in your SOC environment.
-For more information, see [Visualize and monitor your data](/azure/sentinel/articles/sentinel/monitor-your-data.md).
+For more information, see [Visualize and monitor your data](monitor-your-data.md).
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/best-practices-workspace-architecture.md
For more information, see [Extend Azure Sentinel across workspaces and tenants](
>[On-board Azure Sentinel](quickstart-onboard.md)

> [!div class="nextstepaction"]
->[Get visibility into alerts](/azure/sentinel/articles/sentinel/get-visibility.md)
+>[Get visibility into alerts](get-visibility.md)
sentinel Connect Agari Phishing Defense https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-agari-phishing-defense.md
In this document, you learned how to connect Agari Phishing Defense and Brand Pr
- Learn how to [get visibility into your data and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Ai Vectra Detect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-ai-vectra-detect.md
To learn more about Azure Sentinel, see the following articles:
- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Akamai Security Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-akamai-security-events.md
In this document, you learned how to connect Akamai Security Events to Azure Sen
- Learn how to [get visibility into your data and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Alcide Kaudit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-alcide-kaudit.md
To learn more about Azure Sentinel, see the following articles:
- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Alsid Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-alsid-active-directory.md
In this document, you learned how to connect Alsid for AD to Azure Sentinel. To
- Learn how to [get visibility into your data and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Apache Http Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-apache-http-server.md
In this document, you learned how to connect Apache HTTP Server to Azure Sentine
- Learn how to [get visibility into your data and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Aruba Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-aruba-clearpass.md
In this document, you learned how to connect Aruba ClearPass to Azure Sentinel.
- Learn how to [get visibility into your data and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Asc Iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-asc-iot.md
In this document, you learned how to connect Defender for IoT to Azure Sentinel.
- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-aws.md
You must have write permission on the Azure Sentinel workspace.
In this document, you learned how to connect AWS CloudTrail to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:

- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Barracuda Cloudgen Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-barracuda-cloudgen-firewall.md
In this document, you learned how to connect Barracuda CloudGen Firewall to Azur
- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Barracuda https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-barracuda.md
It may take up to 20 minutes until your logs start to appear in Log Analytics.
In this document, you learned how to connect Barracuda appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:

- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).
-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Besecure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-besecure.md
It may take up to 20 minutes until your logs start to appear in Log Analytics.
In this document, you learned how to connect beSECURE to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Better Mtd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-better-mtd.md
It may take up to 20 minutes until your logs start to appear in Log Analytics.
In this document, you learned how to connect BETTER Mobile Threat Defense (MTD) to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Broadcom Symantec Dlp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-broadcom-symantec-dlp.md
In this document, you learned how to connect Symantec DLP to Azure Sentinel. To
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cef Verify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-verify.md
In this document, you learned how to connect CEF appliances to Azure Sentinel. T
- Learn about [CEF and CommonSecurityLog field mapping](cef-name-mapping.md). - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](./detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Checkpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-checkpoint.md
Configure your Check Point appliance to forward Syslog messages in CEF format to
In this document, you learned how to connect Check Point appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - [Validate connectivity](connect-cef-verify.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cisco Ucs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cisco-ucs.md
In this document, you learned how to connect Cisco UCS to Azure Sentinel. To lea
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cisco Umbrella https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cisco-umbrella.md
In this document, you learned how to connect Cisco Umbrella data to Azure Sentin
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cisco https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cisco.md
Cisco ASA doesn't support CEF, so the logs are sent as Syslog and the Azure Sent
In this document, you learned how to connect Cisco ASA appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Citrix Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-citrix-analytics.md
Citrix Analytics (Security) integration with Azure Sentinel helps you to export
In this document, you learned how to connect Citrix Analytics (Security) to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Citrix Waf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-citrix-waf.md
To query the Citrix WAF logs in Log Analytics, enter `CommonSecurityLog` at the
In this document, you learned how to connect Citrix WAF to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cyberark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cyberark.md
To query the CyberArk EPV logs in Log Analytics, enter `CommonSecurityLog` at th
In this document, you learned how to connect CyberArk Enterprise Password Vault logs to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Extrahop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-extrahop.md
The ExtraHop Reveal(x) data connector lets you easily connect your Reveal(x) sys
In this document, you learned how to connect ExtraHop Reveal(x) to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect F5 Big Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-f5-big-ip.md
It may take up to 20 minutes until your logs start to appear in Log Analytics.
In this document, you learned how to connect F5 BIG-IP to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect F5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-f5.md
This article explains how to use the F5 ASM data connector to easily pull your F
In this document, you learned how to connect F5 ASM to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](./detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Forcepoint Casb Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-forcepoint-casb-ngfw.md
In this document, you learned how to connect Forcepoint products to Azure Sentin
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md). -- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Forcepoint Dlp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-forcepoint-dlp.md
In this document, you learned how to connect Forcepoint DLP to Azure Sentinel. T
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-fortinet.md
Configure Fortinet to forward Syslog messages in CEF format to your Azure worksp
In this article, you learned how to connect Fortinet appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Google Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-google-workspace.md
In this document, you learned how to connect Google Workspace to Azure Sentinel.
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Illusive Attack Management System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-illusive-attack-management-system.md
In this document, you learned how to connect Illusive Attack Management System t
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Imperva Waf Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-imperva-waf-gateway.md
In this document, you learned how to connect Imperva WAF Gateway to Azure Sentin
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Infoblox https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-infoblox.md
In this document, you learned how to connect Infoblox NIOS to Azure Sentinel. To
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Juniper Srx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-juniper-srx.md
In this document, you learned how to connect Juniper SRX to Azure Sentinel. To l
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Nxlog Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-nxlog-dns.md
In this document, you learned how to use NXLog to ingest Windows DNS logs into A
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Nxlog Linuxaudit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-nxlog-linuxaudit.md
In this document, you learned how to use NXLog LinuxAudit to ingest Linux securi
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Okta Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-okta-single-sign-on.md
In this document, you learned how to connect Okta Single Sign-On to Azure Sentin
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect One Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-one-identity.md
The One Identity Safeguard data connector enhances the standard Common Event For
In this document, you learned how to connect One Identity Safeguard to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Orca Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-orca-security-alerts.md
It may take up to 20 minutes until your logs start to appear in Log Analytics.
In this document, you learned how to connect Orca security alerts to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Paloalto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-paloalto.md
Configure Palo Alto Networks to forward Syslog messages in CEF format to your Az
In this document, you learned how to connect Palo Alto Networks appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Perimeter 81 Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-perimeter-81-logs.md
To learn more about Azure Sentinel, see the following articles:
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Proofpoint Pod https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-proofpoint-pod.md
In this document, you learned how to connect Proofpoint On Demand Email Security
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Proofpoint Tap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-proofpoint-tap.md
In this document, you learned how to connect Proofpoint TAP to Azure Sentinel us
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Pulse Connect Secure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-pulse-connect-secure.md
In this document, you learned how to connect Pulse Connect Secure to Azure Senti
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Qualys Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-qualys-vm.md
In this document, you learned how to connect Qualys VM to Azure Sentinel using A
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-salesforce-service-cloud.md
In this document, you learned how to connect Salesforce Service Cloud to Azure S
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Sophos Cloud Optix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-sophos-cloud-optix.md
In this document, you learned how to connect Sophos Cloud Optix to Azure Sentine
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Sophos Xg Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-sophos-xg-firewall.md
In this document, you learned how to connect Sophos XG Firewall to Azure Sentine
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Squadra Secrmm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-squadra-secrmm.md
In this document, you learned how to connect Squadra Technologies secRMM to Azur
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Squid Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-squid-proxy.md
In this document, you learned how to connect Squid Proxy to Azure Sentinel. To l
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Symantec Proxy Sg https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-symantec-proxy-sg.md
In this document, you learned how to connect Symantec ProxySG to Azure Sentinel.
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Symantec Vip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-symantec-vip.md
In this document, you learned how to connect Symantec VIP logs to Azure Sentinel
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Symantec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-symantec.md
It may take up to 20 minutes until your logs start to appear in Log Analytics.
In this document, you learned how to connect Symantec ICDx to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-syslog.md
This detection requires a specific configuration of the Syslog data connector:
In this document, you learned how to connect Syslog on-premises appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
{"mode":"full","isActive":false}
sentinel Connect Trend Micro Tippingpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-trend-micro-tippingpoint.md
In this document, you learned how to connect Trend Micro TippingPoint to Azure S
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Trend Micro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-trend-micro.md
The Trend Micro Deep Security connector lets you easily connect your Deep Securi
In this document, you learned how to connect Trend Micro Deep Security to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Vmware Carbon Black https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-vmware-carbon-black.md
It may take up to 20 minutes until your logs start to appear in Log Analytics.
In this document, you learned how to connect VMware Carbon Black Cloud Endpoint Standard to Azure Sentinel using Azure Function Apps. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Vmware Esxi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-vmware-esxi.md
In this document, you learned how to connect VMware ESXi to Azure Sentinel. To l
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Wirex Systems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-wirex-systems.md
See the **Next steps** tab in the connector page for more query samples.
In this document, you learned how to connect WireX Systems NFP to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Zimperium Mtd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-zimperium-mtd.md
In this document, you learned how to connect Zimperium Mobile Threat Defense to
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md). -- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
To learn more about Zimperium, see the following:
sentinel Connect Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-zscaler.md
This article explains how to connect your Zscaler Internet Access appliance to A
In this document, you learned how to connect Zscaler Internet Access to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](./detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/customer-managed-keys.md
Azure Sentinel does not support replacing a customer-managed key. You should use
In this document, you learned how to set up a customer-managed key in Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Azure Sentinel](./detect-threats-built-in.md).-- [Use workbooks](/azure/sentinel/articles/sentinel/monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel File Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/file-event-normalization-schema.md
For more information about normalization in Azure Sentinel, see [Normalization a
Azure Sentinel provides the following built-in, product-specific file event parsers: - **Sysmon file activity events** (Events 11, 23, and 26), collected using the Log Analytics Agent or Azure Monitor Agent.-- **Microsoft Office 365 Sharepoint and OneDrive events**, collected using the Office Activity connector.
+- **Microsoft Office 365 SharePoint and OneDrive events**, collected using the Office Activity connector.
- **Microsoft Defender for Endpoint file events** - **Azure Storage**, including Blob, File, Queue, and Table Storage.
The following Azure Sentinel **Analytics rules** work with any file activity th
- NOBELIUM - Domain, Hash and IP IOCs - May 2021 - SUNSPOT log file creation
-For more information, see [Create custom analytics rules to detect threats](/azure/sentinel/articles/sentinel/detect-threats-custom.md).
+For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
## Schema details
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
Title: Get started with Azure Service Bus queues (Azure.Messaging.ServiceBus)
description: In this tutorial, you create a .NET Core C# application to send messages to and receive messages from a Service Bus queue. dotnet Previously updated : 08/01/2021 Last updated : 08/16/2021 # Send messages to and receive messages from Azure Service Bus queues (.NET) This quickstart shows how to send messages to and receive messages from a Service Bus queue using the [Azure.Messaging.ServiceBus](https://www.nuget.org/packages/Azure.Messaging.ServiceBus/) .NET library.
+> [!NOTE]
+> You can find more .NET samples for Azure Service Bus in the [Azure SDK for .NET repository on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
## Prerequisites
This section shows you how to create a .NET Core console application to send mes
1. Replace the code in **Program.cs** with the following code. Here are the important steps from the code. 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
- 1. Invokes the `CreateSender` method on the `ServiceBusClient` object to create a `ServiceBusSender` object for the specific Service Bus queue.
- 1. Creates a `ServiceBusMessageBatch` object by using the `ServiceBusSender.CreateMessageBatchAsync` method.
- 1. Add messages to the batch using the `ServiceBusMessageBatch.TryAddMessage`.
- 1. Sends the batch of messages to the Service Bus queue using the `ServiceBusSender.SendMessagesAsync` method.
+ 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue.
+ 1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method.
+ 1. Adds messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage) method.
+ 1. Sends the batch of messages to the Service Bus queue using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
For more information, see code comments.
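The batching step above has one subtlety worth seeing in isolation: `TryAddMessage` signals a full batch by returning `false` rather than throwing. The sketch below models that behavior with a plain-Java stand-in under a hypothetical byte budget (the `MessageBatch` class and its 20-byte limit are illustrative, not the Azure SDK):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for ServiceBusMessageBatch: tryAdd() mirrors TryAddMessage by
// returning false, instead of throwing, when the byte budget would be exceeded.
class MessageBatch {
    private final int maxBytes;
    private final List<String> messages = new ArrayList<>();
    private int usedBytes = 0;

    MessageBatch(int maxBytes) { this.maxBytes = maxBytes; }

    boolean tryAdd(String body) {
        int size = body.getBytes(StandardCharsets.UTF_8).length;
        if (usedBytes + size > maxBytes) {
            return false;            // batch is full; send it and start a new one
        }
        messages.add(body);
        usedBytes += size;
        return true;
    }

    int count() { return messages.size(); }
}

public class BatchDemo {
    public static void main(String[] args) {
        MessageBatch batch = new MessageBatch(20); // hypothetical 20-byte budget
        System.out.println(batch.tryAdd("Message 1"));  // true  (9 bytes used)
        System.out.println(batch.tryAdd("Message 2"));  // true  (18 bytes used)
        System.out.println(batch.tryAdd("Message 3"));  // false (would exceed 20)
        System.out.println(batch.count());              // 2
    }
}
```

A `false` return is the caller's cue to send the current batch and retry the message in a fresh one.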
In this section, you'll add code to retrieve messages from the queue.
1. Replace the code in **Program.cs** with the following code. Here are the important steps from the code: 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
- 1. Invokes the `CreateProcessor` method on the `ServiceBusClient` object to create a `ServiceBusProcessor` object for the specified Service Bus queue.
- 1. Specifies handlers for the `ProcessMessageAsync` and `ProcessErrorAsync` events of the `ServiceBusProcessor` object.
- 1. Starts processing messages by invoking the `StartProcessingAsync` on the `ServiceBusProcessor` object.
- 1. When user presses a key to end the processing, invokes the `StopProcessingAsync` on the `ServiceBusProcessor` object.
+ 1. Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ 1. Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ 1. Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) method on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ 1. When the user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) method on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
For more information, see code comments.
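The processor lifecycle described above can be modeled with a plain-Java stand-in (the `Processor` class and its method names are illustrative, not the Azure SDK): both handlers must be registered before processing starts, handler exceptions are routed to the error handler, and nothing is dispatched after processing stops.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy stand-in for ServiceBusProcessor: register handlers first, then
// bracket processing with start()/stop().
class Processor {
    private Consumer<String> onMessage;
    private Consumer<Exception> onError;
    private boolean running = false;

    void processMessage(Consumer<String> handler) { this.onMessage = handler; }
    void processError(Consumer<Exception> handler) { this.onError = handler; }

    void start() {
        if (onMessage == null || onError == null)
            throw new IllegalStateException("register both handlers before starting");
        running = true;
    }

    void stop() { running = false; }

    // Simulates a message arriving from the queue.
    void deliver(String body) {
        if (!running) return;               // nothing is dispatched after stop()
        try {
            onMessage.accept(body);
        } catch (Exception e) {
            onError.accept(e);              // handler failures go to the error handler
        }
    }
}

public class ProcessorDemo {
    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        Processor p = new Processor();
        p.processMessage(seen::add);
        p.processError(e -> System.err.println("error: " + e.getMessage()));
        p.start();
        p.deliver("hello");
        p.stop();
        p.deliver("late");        // ignored: processing was stopped
        System.out.println(seen); // [hello]
    }
}
```

The same shape explains why the quickstart registers `ProcessMessageAsync` and `ProcessErrorAsync` before calling `StartProcessingAsync`.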
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
This article describes how to replicate, failover, and failback Azure virtual ma
>[!NOTE] >
->- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, Japan East, Australia East, UK South, West Europe, North Europe, France Central, Central US, South Central US, East US, East US 2, West US 2, and West US 3.
+>- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, Japan East, Australia East, UK South, West Europe, North Europe, France Central, Canada Central, Central US, South Central US, East US, East US 2, West US 2, and West US 3.
>- Site Recovery does not move or store customer data out of the region in which it is deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data.

Site Recovery service contributes to your business continuity and disaster recovery strategy by keeping your business apps up and running during planned and unplanned outages. It is the recommended disaster recovery option to keep your applications up and running if there are regional outages.
site-recovery Vmware Physical Secondary Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-physical-secondary-disaster-recovery.md
Scout Update 4 is a cumulative update. It includes all fixes from Update 1 to Up
* Red Hat Enterprise Linux (RHEL) 6.x
* Oracle Linux (OL) 6.x
* For Linux, all folder access permissions in the unified agent installation directory are now restricted to the local user only.
-* On Windows, a fix for a timing out issue that occurred when issuing common distributed consistency bookmarks, on heavily loaded distributed applications such as SQL Server and Share Point clusters.
+* On Windows, a fix for a timing out issue that occurred when issuing common distributed consistency bookmarks, on heavily loaded distributed applications such as SQL Server and SharePoint clusters.
* A log-related fix in the configuration server base installer.
* A download link to VMware vCLI 6.0 was added to the Windows master target base installer.
* Additional checks and logs were added for network configuration changes during failover and disaster recovery drills.
spatial-anchors Tutorial New Android App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/tutorials/tutorial-new-android-app.md
To complete this tutorial, make sure you have:
## Getting started
-Start Android Studio. In the **Welcome to Android Studio** window, click **Start a new Android Studio project**. Or, if you have a project already opened, select **File**->**New Project**.
+Start Android Studio. In the **Welcome to Android Studio** window, click **Start a new Android Studio project**.
+1. Select **File**->**New Project**.
+1. In the **Create New Project** window, under the **Phone and Tablet** section, choose **Empty Activity**, and click **Next**.
+1. In the **New Project - Empty Activity** window, change the following values:
+ - Change the **Name**, **Package name**, and **Save location** to your desired values
+ - Set **Language** to `Java`
+ - Set **Minimum API level** to `API 26: Android 8.0 (Oreo)`
+ - Leave the other options as they are
+ - Click **Finish**.
+1. The **Component Installer** will run. After some processing, Android Studio will open the IDE.
-In the **Create New Project** window, under the **Phone and Tablet** section, choose **Empty Activity**, and click **Next**. Then, under **Minimum API level**, choose `API 26: Android 8.0 (Oreo)`, and ensure the **Language** is set to `Java`. You may want to change the Project Name & Location, and the Package name. Leave the other options as they are. Click **Finish**. The **Component Installer** will run. Once it's done, click **Finish**. After some processing, Android Studio will open the IDE.
+![Android Studio - New Project](../../../includes/media/spatial-anchors-androidstudio/androidstudio-newproject.png)
+
## Trying it out
-To test out your new app, connect your developer-enabled device to your development machine with a USB cable. Click **Run**->**Run 'app'**. In the **Select Deployment Target** window, select your device, and click **OK**. Android Studio installs the app on your connected device and starts it. You should now see "Hello World!" displayed in the app running on your device. Click **Run**->**Stop 'app'**.
+To test out your new app, connect your developer-enabled device to your development machine with a USB cable. In the top right of Android Studio, select your connected device and click the **Run 'app'** icon. Android Studio installs the app on your connected device and starts it. You should now see "Hello World!" displayed in the app running on your device. Click **Run**->**Stop 'app'**.
+![Android Studio - Run](../../../includes/media/spatial-anchors-androidstudio/androidstudio-run.png)
+ ## Integrating _ARCore_
Modify `app\manifests\AndroidManifest.xml` to include the following entries insi
- It will configure the Google Play Store to download and install ARCore, if it isn't installed already, when your app is installed. ```xml
-<uses-permission android:name="android.permission.CAMERA" />
-<uses-feature android:name="android.hardware.camera.ar" />
+<manifest ...>
-<application>
- ...
- <meta-data android:name="com.google.ar.core" android:value="required" />
- ...
-</application>
-```
+ <uses-permission android:name="android.permission.CAMERA" />
+ <uses-feature android:name="android.hardware.camera.ar" />
-Modify `Gradle Scripts\build.gradle (Module: app)` to include the following entry. This code will ensure that your app targets ARCore version 1.8. After this change, you might get a notification from Gradle asking you to sync: click **Sync now**.
+ <application>
+ ...
+ <meta-data android:name="com.google.ar.core" android:value="required" />
+ ...
+ </application>
+</manifest>
```+
+Modify `Gradle Scripts\build.gradle (Module: app)` to include the following entry. This code will ensure that your app targets ARCore version 1.25. After this change, you might get a notification from Gradle asking you to sync: click **Sync now**.
+
+```gradle
dependencies { ...
- implementation 'com.google.ar:core:1.11.0'
+ implementation 'com.google.ar:core:1.25.0'
... } ```
dependencies {
[_Sceneform_](https://developers.google.com/sceneform/develop/) makes it simple to render realistic 3D scenes in Augmented Reality apps, without having to learn OpenGL.
-Modify `Gradle Scripts\build.gradle (Module: app)` to include the following entries. This code will allow your app to use language constructs from Java 8, which `Sceneform` requires. It will also ensure your app targets `Sceneform` version 1.8, since it should match the version of ARCore your app is using. After this change, you might get a notification from Gradle asking you to sync: click **Sync now**.
+Modify `Gradle Scripts\build.gradle (Module: app)` to include the following entries. This code will allow your app to use language constructs from Java 8, which `Sceneform` requires. It will also ensure your app targets `Sceneform` version 1.15. After this change, you might get a notification from Gradle asking you to sync: click **Sync now**.
-```
+```gradle
android { ...
android {
dependencies { ...
- implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.11.0'
+ implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
... } ```
-Open your `app\res\layout\activity_main.xml`, and replace the existing Hello Wolrd `<TextView>` element with the following ArFragment. This code will cause the camera feed to be displayed on your screen enabling ARCore to track your device position as it moves.
+Open your `app\res\layout\activity_main.xml`, and replace the existing Hello World `<TextView ... />` element with the following ArFragment. This code will cause the camera feed to be displayed on your screen, enabling ARCore to track your device position as it moves.
+ ```xml <fragment android:name="com.google.ar.sceneform.ux.ArFragment"
Open your `app\res\layout\activity_main.xml`, and replace the existing Hello Wol
android:layout_height="match_parent" /> ```
+> [!NOTE]
+> To see the raw XML of your main activity, click the **Code** or **Split** button in the top right of Android Studio.
+ [Redeploy](#trying-it-out) your app to your device to validate it once more. This time, you should be asked for camera permissions. Once approved, you should see your camera feed rendering on your screen.

## Place an object in the real world

Let's create & place an object using your app. First, add the following imports into your `app\java\<PackageName>\MainActivity`:
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=23-33)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=21-23,27-33,17-18)]
Then, add the following member variables into your `MainActivity` class:
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=52-57)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=47-52)]
Next, add the following code into your `app\java\<PackageName>\MainActivity` `onCreate()` method. This code will hook up a listener, called `handleTap()`, that will detect when the user taps the screen on your device. If the tap happens to be on a real world surface that has already been recognized by ARCore's tracking, the listener will run.
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=68-74,85&highlight=6-7)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=63-69,80&highlight=6-7)]
Finally, add the following `handleTap()` method, that will tie everything together. It will create a sphere, and place it on the tapped location. The sphere will initially be black, since `this.recommendedSessionProgress` is set to zero right now. This value will be adjusted later on.
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=151-159,171-172,175-183,199-200)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=159-167,179-180,183-192,209)]
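The sphere's shade tracks `this.recommendedSessionProgress`: black at zero, brightening toward white as the session collects frames. That mapping amounts to scaling a gray value by the progress fraction; here is a minimal sketch (the 0.0-1.0 progress range comes from the tutorial, while the ARGB packing and method name are our illustration, not code from the sample):

```java
public class ProgressColor {
    // Maps session progress (0.0 = black, 1.0 = white) to an opaque ARGB gray.
    static int progressToArgb(float progress) {
        float clamped = Math.max(0f, Math.min(1f, progress));
        int gray = Math.round(clamped * 255f);
        return (0xFF << 24) | (gray << 16) | (gray << 8) | gray;
    }

    public static void main(String[] args) {
        System.out.printf("0%%   -> %08X%n", progressToArgb(0f));   // FF000000 (black)
        System.out.printf("50%%  -> %08X%n", progressToArgb(0.5f)); // FF808080 (gray)
        System.out.printf("100%% -> %08X%n", progressToArgb(1f));   // FFFFFFFF (white)
    }
}
```

Because the progress starts at zero, the first sphere you place is black; the color updates arrive later, once the session reports progress.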
[Redeploy](#trying-it-out) your app to your device to validate it once more. This time, you can move around your device to get ARCore to start recognizing your environment. Then, tap the screen to create & place your black sphere over the surface of your choice. ## Attach a local Azure Spatial Anchor
-Modify `Gradle Scripts\build.gradle (Module: app)` to include the following entry. This sample code snippet targets Azure Spatial Anchors SDK version 2.10.0. Note that SDK version 2.7.0 is currenlty the minimum supported version, and referencing any more recent version of Azure Spatial Anchors should work as well. We recommend using the latest version of Azure Spatial Anchors SDK. You can find the SDK release notes [here.](https://github.com/Azure/azure-spatial-anchors-samples/releases)
+Modify `Gradle Scripts\build.gradle (Module: app)` to include the following entry. This sample code snippet targets Azure Spatial Anchors SDK version 2.10.2. Note that SDK version 2.7.0 is currently the minimum supported version, and referencing any more recent version of Azure Spatial Anchors should work as well. We recommend using the latest version of Azure Spatial Anchors SDK. You can find the SDK release notes [here.](https://github.com/Azure/azure-spatial-anchors-samples/releases)
-```
+```gradle
dependencies { ...
- implementation "com.microsoft.azure.spatialanchors:spatialanchors_jni:[2.10.0]"
- implementation "com.microsoft.azure.spatialanchors:spatialanchors_java:[2.10.0]"
+ implementation 'com.microsoft.azure.spatialanchors:spatialanchors_jni:[2.10.2]'
+ implementation 'com.microsoft.azure.spatialanchors:spatialanchors_java:[2.10.2]'
... } ```
-If you are targeting Azure Spatial Anchors SDK 2.10.0 or later, include the following entry in the repositories section of your project's build.gradle file. This will include the URL to the Maven package feed that hosts Azure Spatial Anchors Android packages for SDK 2.10.0 or later:
-```
-repositories {
+If you are targeting Azure Spatial Anchors SDK 2.10.0 or later, include the following entry in the repositories section of your project's `settings.gradle` file. This will include the URL to the Maven package feed that hosts Azure Spatial Anchors Android packages for SDK 2.10.0 or later:
+
+```gradle
+dependencyResolutionManagement {
...
- maven {
- url 'https://pkgs.dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging/Maven-packages/maven/v1'
+ repositories {
+ ...
+ maven {
+ url 'https://pkgs.dev.azure.com/aipmr/MixedReality-Unity-Packages/_packaging/Maven-packages/maven/v1'
+ }
+ ...
}
- ...
}- ```
-Right-click `app\java\<PackageName>`->**New**->**Java Class**. Set **Name** to _MyFirstApp_, and **Superclass** to _android.app.Application_. Leave the other options as they are. Click **OK**. A file called `MyFirstApp.java` will be created. Add the following import to it:
+Right-click `app\java\<PackageName>`->**New**->**Java Class**. Set **Name** to _MyFirstApp_, and select **Class**. A file called `MyFirstApp.java` will be created. Add the following import to it:
```java import com.microsoft.CloudServices; ```
+Define `android.app.Application` as its superclass.
+```java
+public class MyFirstApp extends android.app.Application {...
+```
+ Then, add the following code inside the new `MyFirstApp` class, which will ensure Azure Spatial Anchors is initialized with your application's context. ```java
Now, modify `app\manifests\AndroidManifest.xml` to include the following entry i
Back in `app\java\<PackageName>\MainActivity`, add the following imports into it:
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=33-40&highlight=2-8)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=17,16,18,24,26,35,36,38&highlight=2-8)]
Then, add the following member variables into your `MainActivity` class:
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=57-60&highlight=3-4)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=52-56&highlight=3-5)]
+
+Next, let's add the following `initializeSession()` method inside your `MainActivity` class. Once called, it will ensure an Azure Spatial Anchors session is created and properly initialized during the startup of your app. This code returns early to ensure that the SceneView session passed to the ASA session via the `cloudSession.setSession` call is not null.
-Next, let's add the following `initializeSession()` method inside your `mainActivity` class. Once called, it will ensure an Azure Spatial Anchors session is created and properly initialized during the startup of your app.
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=92-107,155)]
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=89-97,147)]
+Since `initializeSession()` could return early if the SceneView session is not yet set up (that is, if `sceneView.getSession()` is null), we add an `onUpdate` callback to make sure the ASA session gets initialized once the SceneView session is created.
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?name=scene_OnUpdate)]
-Now, let's hook your `initializeSession()` method into your `onCreate()` method. Also, we'll ensure that frames from your camera feed are sent to Azure Spatial Anchors SDK for processing.
+Now, let's hook your `initializeSession()` and `scene_OnUpdate(...)` method into your `onCreate()` method. Also, we'll ensure that frames from your camera feed are sent to Azure Spatial Anchors SDK for processing.
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=68-85&highlight=9-17)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=63-80&highlight=9-17)]
Finally, add the following code into your `handleTap()` method. It will attach a local Azure Spatial Anchor to the black sphere that we're placing in the real world.
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=151-159,171-183,199-200&highlight=12-13)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=159-167,179-192,209&highlight=12-13)]
[Redeploy](#trying-it-out) your app once more. Move around your device, tap the screen, and place a black sphere. This time, though, your code will be creating and attaching a local Azure Spatial Anchor to your sphere.
Before proceeding any further, you'll need to create an Azure Spatial Anchors ac
Once you have your Azure Spatial Anchors account Identifier, Key, and Domain, we can go back in `app\java\<PackageName>\MainActivity`, add the following imports into it:
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=40-45&highlight=3-6)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=38-43&highlight=3-6)]
Then, add the following member variables into your `MainActivity` class:
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=60-65&highlight=3-6)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=56-61&highlight=3-6)]
Now, add the following code into your `initializeSession()` method. First, this code will allow your app to monitor the progress that the Azure Spatial Anchors SDK makes as it collects frames from your camera feed. As it does, the color of your sphere will start changing from its original black, into grey. Then, it will turn white once enough frames are collected to submit your anchor to the cloud. Second, this code will provide the credentials needed to communicate with the cloud back-end. Here is where you'll configure your app to use your account Identifier, Key, and Domain. You copied them into a text editor when [setting up the Spatial Anchors resource](#create-a-spatial-anchors-resource).
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=89-120,142-148&highlight=11-37)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=92-130,151-155&highlight=17-43)]
Next, add the following `uploadCloudAnchorAsync()` method inside your `MainActivity` class. Once called, this method will asynchronously wait until enough frames are collected from your device. As soon as that happens, it will switch the color of your sphere to yellow, and then it will start uploading your local Azure Spatial Anchor into the cloud. Once the upload finishes, the code will return an anchor identifier.
Next, add the following `uploadCloudAnchorAsync()` method inside your `mainActiv
Finally, let's hook everything together. In your `handleTap()` method, add the following code. It will invoke your `uploadCloudAnchorAsync()` method as soon as your sphere is created. Once the method returns, the code below will perform one final update to your sphere, changing its color to blue.
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=151-159,171-200&highlight=24-37)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=159-167,179-209&highlight=26-39)]
+
+[Redeploy](#trying-it-out) your app once more. Move around your device, tap the screen, and place your sphere. This time, though, your sphere will change its color from black to white, as camera frames are collected. Once we have enough frames, the sphere will turn yellow, and the cloud upload will start. Make sure your phone is connected to the internet. Once the upload finishes, your sphere will turn blue. Optionally, you can monitor the `Logcat` window in Android Studio to view the log messages your app is sending. Examples of messages that would be logged include the session progress during the frames-capture and the anchor identifier that the cloud returns once the upload is completed.
+
+> [!NOTE]
+> If you are not seeing the value of `recommendedSessionProgress` (in your debug logs referred to as `Session progress`) change then make sure you are **both moving and rotating** your phone around the sphere you have placed.
-[Redeploy](#trying-it-out) your app once more. Move around your device, tap the screen, and place your sphere. This time, though, your sphere will change its color from black towards white, as camera frames are collected. Once we have enough frames, the sphere will turn into yellow, and the cloud upload will start. Once the upload finishes, your sphere will turn blue. Optionally, you could also use the `Logcat` window inside Android Studio to monitor the log messages your app is sending. For example, the session progress during frame captures, and the anchor identifier that the cloud returns once the upload is completed.
## Locate your cloud spatial anchor
-One your anchor is uploaded to the cloud, we're ready to attempt locating it again. First, let's add the following imports into your code.
+Once your anchor is uploaded to the cloud, we're ready to attempt locating it again. First, let's add the following imports into your code.
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=45-48&highlight=3-4)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?range=43,44,34,37&highlight=3-4)]
Then, let's add the following code into your `handleTap()` method. This code will:
Then, let's add the following code into your `handleTap()` method. This code wil
Now, let's hook the code that will be invoked when the anchor we're querying for is located. Inside your `initializeSession()` method, add the following code. This snippet will create & place a green sphere once the cloud spatial anchor is located. It will also enable screen tapping again, so you can repeat the whole scenario once more: create another local anchor, upload it, and locate it again.
-[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?name=initializeSession&highlight=34-53)]
+[!code-java[MainActivity](../../../includes/spatial-anchors-new-android-app-finished.md?name=initializeSession&highlight=40-59)]
That's it! [Redeploy](#trying-it-out) your app one last time to try out the whole scenario end to end. Move around your device, and place your black sphere. Then, keep moving your device to capture camera frames until the sphere turns yellow. Your local anchor will be uploaded, and your sphere will turn blue. Finally, tap your screen once more, so that your local anchor is removed, and then we'll query for its cloud counterpart. Continue moving your device around until your cloud spatial anchor is located. A green sphere should appear in the correct location, and you can rinse & repeat the whole scenario again.
+## Putting everything together
+
+Here is how the complete `MainActivity` class file should look, after all
+the different elements have been put together. You can use it as a reference to
+compare against your own file, and spot if you may have any differences left.
+ [!INCLUDE [Share Anchors Sample Prerequisites](../../../includes/spatial-anchors-new-android-app-finished.md)]+
+## Next steps
+
+In this tutorial, you've seen how to create a new Android app that integrates ARCore functionality with Azure Spatial Anchors. To learn more about the Azure Spatial Anchors library, continue to our guide on how to create and locate anchors.
+
+> [!div class="nextstepaction"]
+> [Create and locate anchors using Azure Spatial Anchors](../../../articles/spatial-anchors/create-locate-anchors-overview.md)
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-troubleshoot.md
description: Troubleshoot common issues in a deployment on Azure File Sync, whic
Previously updated : 4/20/2021 Last updated : 8/16/2021
Sync sessions may fail for various reasons including the server being restarted
[!INCLUDE [storage-sync-files-bad-connection](../../../includes/storage-sync-files-bad-connection.md)]
+> [!Note]
+> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+ <a id="-2134376372"></a>**The user request was throttled by the service.** | Error | Code |
By setting this registry value, the Azure File Sync agent will accept any locall
[!INCLUDE [storage-sync-files-bad-connection](../../../includes/storage-sync-files-bad-connection.md)]
+> [!Note]
+> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+ <a id="-2147012721"></a>**Sync failed because the server was unable to decode the response from the Azure File Sync service** | Error | Code |
This error occurs because the server endpoint deletion failed and the endpoint i
| **Error string** | ECS_E_NOT_ENOUGH_LOCAL_STORAGE | | **Remediation required** | Yes |
-Sync sessions fail with one of these errors because the volume on the server has filled up. This error commonly occurs because files outside the server endpoint are using up space on the volume. Free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on.
+Sync sessions fail with one of these errors because either the volume has insufficient free space or the disk quota limit has been reached. This error commonly occurs because files outside the server endpoint are using up space on the volume. Free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on. If a disk quota is configured on the volume using [File Server Resource Manager](https://docs.microsoft.com/windows-server/storage/fsrm/fsrm-overview) or [NTFS quota](https://docs.microsoft.com/windows-server/administration/windows-commands/fsutil-quota), increase the quota limit.
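While investigating, it can help to confirm how much usable space the volume actually reports. A quick, purely illustrative sketch (Azure File Sync itself doesn't involve Java; the path argument is a placeholder for the volume hosting your server endpoint):

```java
import java.io.File;

public class FreeSpaceCheck {
    // Returns the usable bytes on the volume containing the given path
    // (0 if the path does not name an existing partition).
    static long usableBytes(String path) {
        return new File(path).getUsableSpace();
    }

    public static void main(String[] args) {
        long bytes = usableBytes(".");  // e.g. replace "." with "D:\\" for your endpoint volume
        System.out.printf("Usable space: %.1f GiB%n", bytes / (1024.0 * 1024 * 1024));
    }
}
```

Note that `getUsableSpace()` already accounts for quotas the operating system enforces, so a small value here with plenty of raw disk free can itself point at a quota limit.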
<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service is not yet ready to sync with this server endpoint.**
This error occurs because the Cloud Tiering filter driver (StorageSync.sys) vers
This error occurs because the Azure File Sync service is unavailable. This error will auto-resolve when the Azure File Sync service is available again.
+> [!Note]
+> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+<a id="-2146233088"></a>**Sync failed due to an exception.**

| Error | Code |
If files fail to tier to Azure Files:
| 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. |
| 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database is not running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
| 0x80C80285 | -2160591493 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file cannot be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. |
+| 0x80C86050 | -2160615504 | ECS_E_REPLICA_NOT_READY_FOR_TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
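For the JET codes in the tables above, the decimal form is the hex HRESULT reinterpreted as a signed 32-bit integer (note that not every row follows the same convention). A small, illustrative Python helper for converting between the two notations when searching event logs:

```python
def hresult_to_signed(code: int) -> int:
    """Reinterpret an unsigned 32-bit HRESULT (e.g. 0x8E5E03FE) as a signed integer."""
    return code - (1 << 32) if code >= (1 << 31) else code

def signed_to_hresult(code: int) -> int:
    """Inverse: recover the unsigned 32-bit hex form from a signed decimal code."""
    return code & 0xFFFFFFFF

# Examples from the table above (JET_errDiskIO and JET_errInstanceUnavailable)
assert hresult_to_signed(0x8E5E03FE) == -1906441218
assert hresult_to_signed(0x8E5E0442) == -1906441150
assert hex(signed_to_hresult(-1906441218)) == "0x8e5e03fe"
```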
Unintended recalls also might occur in other scenarios, like when you are browsi
> [!NOTE]
> Use Event ID 9059 in the Telemetry event log to determine which applications are causing recalls. This event provides application recall distribution for a server endpoint and is logged once an hour.
+### Process exclusions for Azure File Sync
+
+If you want to configure your antivirus or other applications to skip scanning for files accessed by Azure File Sync, configure the following process exclusions:
+
+- C:\Program Files\Azure\StorageSyncAgent\AfsAutoUpdater.exe
+- C:\Program Files\Azure\StorageSyncAgent\FileSyncSvc.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentLauncher.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentHost.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentManager.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentCore.exe
+- C:\Program Files\Azure\StorageSyncAgent\MAAgent\Extensions\XSyncMonitoringExtension\AzureStorageSyncMonitor.exe
+### TLS 1.2 required for Azure File Sync

You can view the TLS settings at your server by looking at the [registry settings](/windows-server/security/tls/tls-registry-settings).
storage Storage Files Configure P2s Vpn Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-configure-p2s-vpn-linux.md
The Azure virtual network gateway can provide VPN connections using several VPN
> Verified with Ubuntu 18.10.

```bash
-sudo apt install strongswan strongswan-pki libstrongswan-extra-plugins curl libxml2-utils cifs-utils
+sudo apt update
+sudo apt install strongswan strongswan-pki libstrongswan-extra-plugins curl libxml2-utils cifs-utils unzip
installDir="/etc/"
```
Now that you have set up your Point-to-Site VPN, you can mount your Azure file s
## See also
- [Azure Files networking overview](storage-files-networking-overview.md)
- [Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files](storage-files-configure-p2s-vpn-windows.md)
-- [Configure a Site-to-Site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md)
+- [Configure a Site-to-Site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md)
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
Previously updated : 12/16/2020 Last updated : 08/06/2021

# QuickStart: Connect an Azure Purview Account to a Synapse workspace
-In this quickstart, you will register an Azure Purview Account to a Synapse workspace. That connection allows you to discover Azure Purview assets and interact with them through Synapse capabilities.
+In this quickstart, you will register an Azure Purview Account to a Synapse workspace. That connection allows you to discover Azure Purview assets, interact with them through Synapse capabilities, and push lineage information to Purview.
You can perform the following tasks in Synapse:
- Use the search box at the top to find Purview assets based on keywords
- Understand the data based on metadata, [lineage](../../purview/catalog-lineage-user-guide.md), annotations
- Connect those data to your workspace with linked services or integration datasets
- Analyze those datasets with Synapse Apache Spark, Synapse SQL, and Data Flow
+- Execute pipelines and [push lineage information to Purview](../../purview/how-to-lineage-azure-synapse-analytics.md)
## Prerequisites
- [Azure Purview account](../../purview/create-catalog-portal.md)
You can perform the following tasks in Synapse:
Go to [https://web.azuresynapse.net](https://web.azuresynapse.net) and sign in to your workspace.
-## Permissions for connecting an Azure Purview Account
+## Permissions for connecting an Azure Purview account
-- To connect an Azure Purview Account to a Synapse workspace, you need a **Contributor** role in Synapse workspace from Azure portal IAM and you need access to that Azure Purview Account. For more details, see [Azure Purview permissions](../../purview/catalog-permissions.md).
+- To connect an Azure Purview Account to a Synapse workspace, you need a **Contributor** role in Synapse workspace from Azure portal IAM and you need access to that Azure Purview Account. For more information, see [Azure Purview permissions](../../purview/catalog-permissions.md).
-## Connect an Azure Purview Account
+## Connect an Azure Purview account
-- In the Synapse workspace, go to **Manage** -> **Azure Purview**. Select **Connect to a Purview account**.
-- You can choose **From Azure subscription** or **Enter manually**. **From Azure subscription**, you can select the account that you have access to.
-- Once connected, you should be able to see the name of the Purview account in the tab **Azure Purview account**.
-- You can use the Search bar at the top center of the Synapse workspace to search for data.
+Follow these steps to connect a Purview account:
-## Next steps
+1. In the Synapse workspace, go to **Manage** -> **Azure Purview**. Select **Connect to a Purview account**.
+2. You can choose **From Azure subscription** or **Enter manually**. **From Azure subscription**, you can select the account that you have access to.
+3. Once connected, you can see the name of the Purview account in the tab **Azure Purview account**.
-[Register and scan Azure Synapse assets in Azure Purview](../../purview/register-scan-azure-synapse-analytics.md)
+When connecting a Synapse workspace with Purview, Synapse also tries to grant the Synapse workspace's managed identity the **Purview Data Curator** role on your Purview account. The managed identity is used to authenticate lineage push operations from Synapse to Purview. If you have the **Owner** or **User Access Administrator** role on the Purview account, this operation completes automatically.
-[Discover, connect and explore data in Synapse using Azure Purview](how-to-discover-connect-analyze-azure-purview.md)
+To make sure the connection is properly set up for the Synapse pipeline lineage push, go to the Azure portal -> your Purview account -> Access control (IAM), and check whether the **Purview Data Curator** role is granted to the Synapse workspace's managed identity. Manually add the role assignment as needed.
-[Connect Azure Data Factory and Azure Purview](../../purview/how-to-link-azure-data-factory.md)
+Once the connection is established, you can use the Search bar at the top center of the Synapse workspace to search for data, and the pipeline execution will push lineage information to the Purview account.
+
+## Next steps
-[Connect Azure Data Share and Azure Purview](../../purview/how-to-link-azure-data-share.md)
+[Discover, connect and explore data in Synapse using Azure Purview](how-to-discover-connect-analyze-azure-purview.md)
+
+[Metadata and lineage from Azure Synapse Analytics](../../purview/how-to-lineage-azure-synapse-analytics.md)
+
+[Register and scan Azure Synapse assets in Azure Purview](../../purview/register-scan-azure-synapse-analytics.md)
[Get lineage from Power BI into Azure Purview](../../purview/how-to-lineage-powerbi.md)
+
+[Connect Azure Data Share and Azure Purview](../../purview/how-to-link-azure-data-share.md)
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-automl.md
If you don't have an Azure subscription, [create a free account before you begin
## Prerequisites
- An [Azure Synapse Analytics workspace](../get-started-create-workspace.md). Ensure that it has an Azure Data Lake Storage Gen2 storage account configured as the default storage. For the Data Lake Storage Gen2 file system that you work with, ensure that you're the *Storage Blob Data Contributor*.
-- An Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
+- An Apache Spark pool (version 2.4) in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
- An Azure Machine Learning linked service in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a new Azure Machine Learning linked service in Azure Synapse Analytics](quickstart-integrate-azure-machine-learning.md).

## Sign in to the Azure portal
synapse-analytics Synapse Workspace Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
Previously updated : 04/15/2020 Last updated : 08/15/2021

# Azure Synapse Analytics IP firewall rules
IP firewall rules grant or deny access to your Synapse workspace based on the or
## Create and manage IP firewall rules
-There are two ways IP firewall rules are added to a Synapse workspace. To add an IP firewall to your workspace, select **Security + networking** and check **Allow connections from all IP addresses** during workspace creation.
+There are two ways to add IP firewall rules to an Azure Synapse workspace. To add an IP firewall to your workspace, select **Networking** and check **Allow connections from all IP addresses** during workspace creation.
-![Screenshot that highlights the Security + networking button.](./media/synpase-workspace-ip-firewall/ip-firewall-1.png)
+> [!IMPORTANT]
+> This feature is only available to Azure Synapse workspaces not associated with a Managed VNet.
+
-![Azure portal Synapse workspace IP configuration.](./media/synpase-workspace-ip-firewall/ip-firewall-2.png)
You can also add IP firewall rules to a Synapse workspace after the workspace is created. Select **Firewalls** under **Security** from Azure portal. To add a new IP firewall rule, give it a name, Start IP, and End IP. Select **Save** when done.
-![Azure Synapse workspace IP configuration in Azure portal.](./media/synpase-workspace-ip-firewall/ip-firewall-3.png)
## Connect to Synapse from your own network
synapse-analytics Synapse Workspace Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-managed-vnet.md
Previously updated : 01/18/2021 Last updated : 08/16/2021

# Azure Synapse Analytics Managed Virtual Network
If you leave the checkbox unchecked, then your workspace won't have a Virtual Ne
>[!IMPORTANT]
>You can only use private links in a workspace that has a Managed workspace Virtual Network.
-![Enable Managed workspace Virtual Network](./media/synapse-workspace-managed-vnet/enable-managed-vnet-1.png)
After you choose to associate a Managed workspace Virtual Network with your workspace, you can protect against data exfiltration by allowing outbound connectivity from the Managed workspace Virtual Network only to approved targets using [Managed private endpoints](./synapse-workspace-managed-private-endpoints.md). Select **Yes** to limit outbound traffic from the Managed workspace Virtual Network to targets through Managed private endpoints.
After you choose to associate a Managed workspace Virtual Network with your work
>[!IMPORTANT]
>Metastore is disabled in Synapse workspaces that have Managed Virtual Network with data exfiltration protection enabled. You will not be able to use Spark SQL in these workspaces.
-![Outbound traffic using Managed private endpoints](./media/synapse-workspace-managed-vnet/select-outbound-connectivity.png)
Select **No** to allow outbound traffic from the workspace to any target. You can also control the targets to which Managed private endpoints are created from your Azure Synapse workspace. By default, Managed private endpoints to resources in the same AAD tenant that your subscription belongs to are allowed. If you want to create a Managed private endpoint to a resource in an AAD tenant that is different from the one that your subscription belongs to, then you can add that AAD tenant by selecting **+ Add**. You can either select the AAD tenant from the dropdown or manually enter the AAD tenant ID.
-![Add additional AAD tenants](./media/synapse-workspace-managed-vnet/add-additional-azure-active-directory-tenants.png)
After the workspace is created, you can check whether your Azure Synapse workspace is associated to a Managed workspace Virtual Network by selecting **Overview** from Azure portal.
-![Workspace overview in Azure portal](./media/synapse-workspace-managed-vnet/enable-managed-vnet-2.png)
## Next steps
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
Publish or delete a notebook or job definition (including output) to the service
Commit changes to a notebook or job definition to the Git repo|Git permissions|none
PIPELINES, INTEGRATION RUNTIMES, DATAFLOWS, DATASETS & TRIGGERS|
Create, update, or delete an Integration runtime|Azure Owner or Contributor on the workspace|
-Monitor Integration runtime status|Synapse User|read, pipelines/viewOutputs
+Monitor Integration runtime status|Synapse Compute Operator|read, integrationRuntimes/viewLogs
Review pipeline runs|Synapse Artifact Publisher/Synapse Contributor|read, pipelines/viewOutputs
Create a pipeline |Synapse User</br>*Additional Synapse permissions are required to debug, add triggers, publish, or commit changes*|read
Create a dataflow or dataset |Synapse User</br>*Additional Synapse permissions are required to publish, or commit changes*|read
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
If the data set is valid, and the workaround cannot help, report a support ticke
The Azure team will investigate the content of the `delta_log` file and provide more information about the possible errors and the workarounds.
+## Security
+
+### AAD service principal login failures when SPI is creating a role assignment
+If you want to create a role assignment for a Service Principal Identifier (SPI)/AAD app using another SPI, or have already created one and it fails to sign in, you're probably receiving the following error:
+```
+Login error: Login failed for user '<token-identified principal>'.
+```
+For service principals, the login should be created with the Application ID as the SID (not the Object ID). There is a known limitation for service principals that prevents the Synapse service from fetching the Application ID from Azure AD Graph when creating a role assignment for another SPI/app.
+
+#### Solution #1
+Navigate to the Azure portal > Synapse Studio > **Manage** > **Access control** and manually add the **Synapse Administrator** or **Synapse SQL Administrator** role for the desired service principal.
+
+#### Solution #2
+You need to manually create a proper login through SQL code:
+```sql
+use master
+go
+CREATE LOGIN [<service_principal_name>] FROM EXTERNAL PROVIDER;
+go
+ALTER SERVER ROLE sysadmin ADD MEMBER [<service_principal_name>];
+go
+```
+
+#### Solution #3
+You can also set up a service principal as Synapse Administrator using PowerShell. You need to have the [Az.Synapse module](/powershell/module/az.synapse) installed.
+The solution is to use the New-AzSynapseRoleAssignment cmdlet and to provide the Application ID (instead of the Object ID) in the `-ObjectId` parameter, using workspace admin Azure service principal credentials. PowerShell script:
+```azurepowershell
+$spAppId = "<app_id_which_is_already_an_admin_on_the_workspace>"
+$SPPassword = "<application_secret>"
+$tenantId = "<tenant_id>"
+$secpasswd = ConvertTo-SecureString -String $SPPassword -AsPlainText -Force
+$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $spAppId, $secpasswd
+
+Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant $tenantId
+
+New-AzSynapseRoleAssignment -WorkspaceName "<workspaceName>" -RoleDefinitionName "Synapse Administrator" -ObjectId "<app_id_to_add_as_admin>" [-Debug]
+```
+
+#### Validation
+Connect to the serverless SQL endpoint and verify that the external login with SID `app_id_to_add_as_admin` is created:
+```sql
+select name, convert(uniqueidentifier, sid) as sid, create_date
+from sys.server_principals where type in ('E', 'X')
+```
+Or just try to sign in to the serverless SQL endpoint using the newly added admin app.
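On the validation query above: the `sid` of an external login is the identity's GUID stored in SQL Server's mixed-endian `uniqueidentifier` byte order. If you ever need to map a raw 16-byte `sid` back to an Application ID outside of SQL, a hypothetical Python sketch (the GUID below is a placeholder, not a real app ID, and the byte-order assumption is SQL Server's `uniqueidentifier` layout):

```python
import uuid

def sid_to_app_id(sid_bytes: bytes) -> str:
    """Interpret a 16-byte SQL `sid` as a uniqueidentifier (mixed-endian GUID)."""
    return str(uuid.UUID(bytes_le=sid_bytes))

def app_id_to_sid(app_id: str) -> bytes:
    """Inverse: the bytes SQL would store as the sid for this application ID."""
    return uuid.UUID(app_id).bytes_le

# Round-trip with a placeholder application ID
app_id = "11111111-2222-3333-4444-555555555555"
assert sid_to_app_id(app_id_to_sid(app_id)) == app_id
```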
+## Constraints

There are some general system constraints that may affect your workload:
virtual-desktop App Attach https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/app-attach.md
Dismount-DiskImage -ImagePath $vhdSrc -Confirm:$false
#endregion
```
+>[!NOTE]
+>You can shut down the device even if the **$volumeGuid** mount point remains after running the destage script.
+## Set up simulation scripts for the MSIX app attach agent

After you create the scripts, users can manually run them or set them up to run automatically as startup, logon, logoff, and shutdown scripts. To learn more about these types of scripts, see [Using startup, shutdown, logon, and logoff scripts in Group Policy](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn789196(v=ws.11)/).
Each of these automatic scripts runs one phase of the app attach scripts:
- The logoff script runs the deregister script.
- The shutdown script runs the destage script.
+>[!NOTE]
+>You can use Task Scheduler to run the stage script. To run the script, set the task trigger to **When the computer starts**, then enable **Run with highest privileges**.
+## Use packages offline

If you're using packages from the [Microsoft Store for Business](https://businessstore.microsoft.com/) or the [Microsoft Store for Education](https://educationstore.microsoft.com/) within your network or on devices that aren't connected to the internet, you need to get the package licenses from the Microsoft Store and install them on your device to successfully run the app. If your device is online and can connect to the Microsoft Store for Business, the required licenses should download automatically, but if you're offline, you'll need to set up the licenses manually.
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/authentication.md
Previously updated : 07/14/2021 Last updated : 08/16/2021
In this article, we'll give you a brief overview of what kinds of authentication you can use in Azure Virtual Desktop.
-## Session host authentication
+## Identities
+
+Azure Virtual Desktop supports different types of identities depending on which configuration you choose. This section explains which identities you can use for each configuration.
+
+### On-premises identity
+
+Since users must be discoverable through Azure Active Directory (Azure AD) to access Azure Virtual Desktop, user identities that exist only in Active Directory Domain Services (AD DS) are not supported. This includes standalone Active Directory deployments with Active Directory Federation Services (AD FS).
+
+### Hybrid identity
+
+Azure Virtual Desktop supports [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md) through Azure AD, including those federated using AD FS. You can manage these user identities in AD DS and sync them to Azure AD using [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md). You can also use Azure AD to manage these identities and sync them to [Azure AD Directory Services (Azure AD DS)](../active-directory-domain-services/overview.md).
+
+When accessing Azure Virtual Desktop using hybrid identities, sometimes the User Principal Name (UPN) or Security Identifier (SID) for the user in Active Directory (AD) and Azure AD don't match. For example, the AD account user@contoso.local may correspond to user@contoso.com in Azure AD. Azure Virtual Desktop only supports this type of configuration if either the UPN or SID for both your AD and Azure AD accounts match. SID refers to the user object property "ObjectSID" in AD and "OnPremisesSecurityIdentifier" in Azure AD.
+
+### Cloud-only identity
+
+Azure Virtual Desktop supports cloud-only identities when using [Azure AD-joined VMs](deploy-azure-ad-joined-vm.md).
+
+### External identity
+
+Azure Virtual Desktop currently doesn't support [external identities](../active-directory/external-identities/index.yml).
-Azure Virtual Desktop supports both NT LAN Manager (NTLM) and Kerberos for session host authentication. However, to use Kerberos, the client needs to get Kerberos security tickets from a Key Distribution Center (KDC) service running on a domain controller. To get tickets, the client needs a direct line of sight to the domain controller. You can get a direct line of sight by using your corporate network. You can also use a VPN connection to your corporate network or set up a [KDC Proxy server](key-distribution-center-proxy.md).
+## Service authentication
-These are the currently supported sign-in methods:
+To access Azure Virtual Desktop resources, you must first authenticate to the service by signing in to an Azure AD account. Authentication happens when subscribing to a workspace to retrieve your resources or every time you connect to apps or desktops. You can use [third-party identity providers](../active-directory/devices/azureadjoin-plan.md#federated-environment) as long as they federate with Azure AD.
+
+### Multifactor authentication
+
+Follow the instructions in [Set up multifactor authentication in Azure Virtual Desktop](set-up-mfa.md) to learn how to enable multifactor authentication (MFA) for your deployment. That article will also tell you how to configure how often your users are prompted to enter their credentials. When deploying Azure AD-joined VMs, follow the configuration guide in [Enabling MFA for Azure AD-joined VMs](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms).
+
+### Smart card authentication
+
+To use a smart card to authenticate to Azure AD, you must first [configure AD FS for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication).
+
+## Session host authentication
+
+If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host. These are the sign-in methods for the session host that the Azure Virtual Desktop clients currently support:
- Windows Desktop client
  - Username and password
These are the currently supported sign-in methods:
- macOS
  - Username and password
->[!NOTE]
->Smartcard and Windows Hello for Business can only use Kerberos to sign in. Signing in with Kerberos requires line-of-sight to the domain controller or a [KDC Proxy server](key-distribution-center-proxy.md).
+Azure Virtual Desktop supports both NT LAN Manager (NTLM) and Kerberos for session host authentication. Smart card and Windows Hello for Business can only use Kerberos to sign in. To use Kerberos, the client needs to get Kerberos security tickets from a Key Distribution Center (KDC) service running on a domain controller. To get tickets, the client needs a direct networking line-of-sight to the domain controller. You can get a line-of-sight by connecting directly within your corporate network, using a VPN connection or setting up a [KDC Proxy server](key-distribution-center-proxy.md).
-## Hybrid identity
+### Single sign-on (SSO)
-Azure Virtual Desktop supports [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md) through Azure Active Directory (Azure AD), including those federated using Active Directory Federation Services (ADFS). Since users must be discoverable through Azure AD, Azure Virtual Desktop doesn't support standalone Active Directory deployments with ADFS.
+Azure Virtual Desktop supports [SSO using Active Directory Federation Services (ADFS)](configure-adfs-sso.md) for the Windows and web clients. SSO allows you to skip the session host authentication.
-### UPN mismatch
-
-When accessing Active Directory joined or Hybrid Azure Active Directory joined VMs using hybrid identities, sometimes the User Principal Name (UPN) for the Active Directory (AD) and Azure AD don't match. For example, the AD account user@contoso.local may correspond to user@contoso.com in Azure AD. Azure Virtual Desktop only supports this type of configuration if the Security Identifier (SID) for both your AD and Azure AD accounts match.
-
-## Cloud-only identity
+Otherwise, the only way to avoid being prompted for your credentials for the session host is to save them in the client. We recommend you only do this with secure devices to prevent other users from accessing your resources.
-Azure Virtual Desktop supports cloud-only identities when using [Azure AD-joined VMs](deploy-azure-ad-joined-vm.md).
+## In-session authentication
-## External identity
+Once you're connected to your remote app or desktop, you may be prompted for authentication inside the session. This section explains how to use credentials other than username and password in this scenario.
-Azure Virtual Desktop currently doesn't support [external identities](../active-directory/external-identities/index.yml).
+### Smart cards
-## Single sign-on (SSO)
+To use a smart card in your session, make sure you've installed the smart card drivers on the session host and that [smart card redirection](configure-device-redirections.md#smart-card-redirection) is enabled. Review the [client comparison chart](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare#other-redirection-devices-etc) to make sure your client supports smart card redirection.
-Azure Virtual Desktop supports [SSO using Active Directory Federation Services (ADFS)](configure-adfs-sso.md) for the Windows and web clients.
+### FIDO2 and Windows Hello for Business
-Otherwise, the only way to avoid being prompted for your credentials for the session host is to save them in the client. We recommend you only do this with secure devices to prevent other users from accessing your resources.
+Azure Virtual Desktop doesn't currently support in-session authentication with FIDO2 or Windows Hello for Business.
## Next steps
-Curious about other ways to keep your deployment secure? Check out [Security best practices](security-guide.md).
+- Curious about other ways to keep your deployment secure? Check out [Security best practices](security-guide.md).
+- Having issues connecting to Azure AD-joined VMs? [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md).
+- Want to use smart cards from outside your corporate network? Review how to set up a [KDC Proxy server](key-distribution-center-proxy.md).
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
Previously updated : 07/27/2021 Last updated : 08/11/2021

# Deploy Azure AD joined virtual machines in Azure Virtual Desktop
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article will walk you through the process of deploying and accessing Azure Active Directory joined virtual machines in Azure Virtual Desktop. This removes the need to have line-of-sight from the VM to an on-premise or virtualized Active Directory Domain Controller (DC) or to deploy Azure AD Domain services (Azure AD DS). In some cases, it can remove the need for a DC entirely, simplifying the deployment and management of the environment. These VMs can also be automatically enrolled in Intune for ease of management.
+This article will walk you through the process of deploying and accessing Azure Active Directory joined virtual machines in Azure Virtual Desktop. Azure AD-joined VMs remove the need to have line-of-sight from the VM to an on-premises or virtualized Active Directory Domain Controller (DC) or to deploy Azure AD Domain Services (Azure AD DS). In some cases, they can remove the need for a DC entirely, simplifying the deployment and management of the environment. These VMs can also be automatically enrolled in Intune for ease of management.
> [!NOTE] > Azure Virtual Desktop (Classic) doesn't support this feature.
You can deploy Azure AD-joined VMs directly from the Azure portal when [creating
> - Host pools should only contain VMs of the same domain join type. For example, AD-joined VMs should only be with other AD VMs, and vice-versa.
> - The host pool VMs must be Windows 10 single-session or multi-session, version 2004 or later.
-After you've created the host pool, you must assign user access. For Azure AD-joined VMs, you'll need to do two things: give users access to both the App Group and VMs.
+After you've created the host pool, you must assign user access. For Azure AD-joined VMs, you'll need to do two things:
+
+- Add users to the App Group to give them access to the resources.
+- Grant users the Virtual Machine User Login role so they can sign in to the VMs.
Follow the instructions in [Manage app groups](manage-app-groups.md) to assign user access to apps and desktops. We recommend that you use user groups instead of individual users wherever possible.
To access Azure AD-joined VMs using the web, Android, macOS and iOS clients, you
### Enabling MFA for Azure AD joined VMs
-You can enable [multifactor authentication](set-up-mfa.md) for Azure AD joined VMs by setting a Conditional Access policy on the "Azure Virtual Desktop" app. Unless you want to restrict sign in to strong authentication methods like Windows Hello, you should exclude the "Azure Windows VM Sign-In" app from the list of cloud apps as described in the [MFA sign-in method requirements](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required) for Azure AD joined VMs. If you're using non-Windows clients, you must disable the MFA policy on "Azure Windows VM Sign-In".
+You can enable [multifactor authentication](set-up-mfa.md) for Azure AD joined VMs by setting a Conditional Access policy on the Azure Virtual Desktop app. For connections to succeed, [disable the legacy per-user multifactor authentication](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#using-conditional-access). If you don't want to restrict signing in to strong authentication methods like Windows Hello for Business, you'll also need to [exclude the Azure Windows VM Sign-In app](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required) from your Conditional Access policy.
## User profiles
virtual-desktop Troubleshoot Azure Ad Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-azure-ad-connections.md
Previously updated : 07/23/2021 Last updated : 08/11/2021 # Connections to Azure AD-joined VMs
If you come across an error saying **The logon attempt failed** on the Windows S
### The sign-in method you're trying to use isn't allowed
-If you come across an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have Conditional Access policies restricting the type of credentials that can be used to sign in to the VMs. Ensure you use the right credential type when signing in or update your [Conditional Access policies](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required).
+If you come across an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have Conditional Access policies restricting access. Follow the instructions in [Enable multifactor authentication](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) to enable multifactor authentication for your Azure AD-joined VMs.
## Web client
If you come across an error saying **Oops, we couldn't connect to NAME. Sign in
### We couldn't connect to the remote PC because of a security error
-If you come across an error saying **Oops, we couldn't connect to NAME. We couldn't connect to the remote PC because of a security error. If this keeps happening, ask your admin or tech support for help.**, you have Conditional Access policies restricting the type of credentials that can be used to sign in to the VMs. This isn't supported for this client. Follow the instructions to [enable multifactor authentication](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) for Azure AD joined VMs.
+If you come across an error saying **Oops, we couldn't connect to NAME. We couldn't connect to the remote PC because of a security error. If this keeps happening, ask your admin or tech support for help.**, you have Conditional Access policies restricting access. Follow the instructions in [Enable multifactor authentication](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) to enable multifactor authentication for your Azure AD-joined VMs.
## Android client
virtual-desktop Manage App Groups 2019 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/virtual-desktop-fall-2019/manage-app-groups-2019.md
Title: Manage app groups for Azure Virtual Desktop (classic) - Azure
description: Learn how to set up Azure Virtual Desktop (classic) tenants in Azure Active Directory (AD). Previously updated : 03/30/2020 Last updated : 08/16/2021
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
1. Run the following PowerShell cmdlet to create a new empty RemoteApp app group. ```powershell
- New-RdsAppGroup <tenantname> <hostpoolname> <appgroupname> -ResourceType "RemoteApp"
+ New-RdsAppGroup -TenantName <tenantname> -HostPoolName <hostpoolname> -Name <appgroupname> -ResourceType "RemoteApp"
``` 2. (Optional) To verify that the app group was created, you can run the following cmdlet to see a list of all app groups for the host pool. ```powershell
- Get-RdsAppGroup <tenantname> <hostpoolname>
+ Get-RdsAppGroup -TenantName <tenantname> -HostPoolName <hostpoolname>
``` 3. Run the following cmdlet to get a list of **Start** menu apps on the host pool's virtual machine image. Write down the values for **FilePath**, **IconPath**, **IconIndex**, and other important information for the application that you want to publish. ```powershell
- Get-RdsStartMenuApp <tenantname> <hostpoolname> <appgroupname>
+ Get-RdsStartMenuApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname>
``` 4. Run the following cmdlet to install the application based on `AppAlias`. `AppAlias` becomes visible when you run the output from step 3. ```powershell
- New-RdsRemoteApp <tenantname> <hostpoolname> <appgroupname> -Name <remoteappname> -AppAlias <appalias>
+ New-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -Name <remoteappname> -AppAlias <appalias>
``` 5. (Optional) Run the following cmdlet to publish a new RemoteApp program to the application group created in step 1. ```powershell
- New-RdsRemoteApp <tenantname> <hostpoolname> <appgroupname> -Name <remoteappname> -Filepath <filepath> -IconPath <iconpath> -IconIndex <iconindex>
+ New-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -Name <remoteappname> -Filepath <filepath> -IconPath <iconpath> -IconIndex <iconindex>
``` 6. To verify that the app was published, run the following cmdlet. ```powershell
- Get-RdsRemoteApp <tenantname> <hostpoolname> <appgroupname>
+ Get-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname>
``` 7. Repeat steps 1–5 for each application that you want to publish for this app group. 8. Run the following cmdlet to grant users access to the RemoteApp programs in the app group. ```powershell
- Add-RdsAppGroupUser <tenantname> <hostpoolname> <appgroupname> -UserPrincipalName <userupn>
+ Add-RdsAppGroupUser -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -UserPrincipalName <userupn>
``` ## Next steps
virtual-machines Constrained Vcpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/constrained-vcpu.md
# Constrained vCPU capable VM sizes
+> [!TIP]
+> Try the **[Virtual Machine selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ Some database workloads like SQL Server or Oracle require high memory, storage, and I/O bandwidth, but not a high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can constrain the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory, storage, and I/O bandwidth. The vCPU count can be constrained to one half or one quarter of the original VM size. These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier for you to identify.
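The constrained-vCPU naming convention described above can be parsed mechanically. A minimal Python sketch, assuming the documented `Standard_<family><total>-<active>...` pattern (for example, `Standard_E32-16s_v3`, where the number after the hyphen is the active vCPU count):

```python
import re

def parse_constrained_size(size_name):
    """Parse an Azure constrained-vCPU size name such as 'Standard_E32-16s_v3'.

    Returns (total_vcpus, active_vcpus), or None if the name has no
    constrained-vCPU suffix. Per the naming convention, the number after
    the hyphen is the count of active vCPUs.
    """
    m = re.match(r"Standard_[A-Za-z]+(\d+)-(\d+)", size_name)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

# Standard_E32-16s_v3 keeps the memory, storage, and I/O bandwidth of an
# E32 but exposes only 16 vCPUs, halving per-core licensing costs.
print(parse_constrained_size("Standard_E32-16s_v3"))  # (32, 16)
print(parse_constrained_size("Standard_M64-32ms"))    # (64, 32)
```

An unconstrained size such as `Standard_D4s_v3` has no hyphenated suffix and returns `None`.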
virtual-machines Dedicated Hosts Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-hosts-portal.md
A **host group** is a resource that represents a collection of dedicated hosts.
- Span across multiple availability zones. In this case, you are required to have a host group in each of the zones you wish to use. - Span across multiple fault domains which are mapped to physical racks.
-In either case, you are need to provide the fault domain count for your host group. If you do not want to span fault domains in your group, use a fault domain count of 1.
+In either case, you need to provide the fault domain count for your host group. If you do not want to span fault domains in your group, use a fault domain count of 1.
You can also decide to use both availability zones and fault domains.
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared-enable.md
description: Configure an Azure managed disk with shared disks so that you can s
Previously updated : 08/03/2021 Last updated : 08/16/2021 -+ # Enable shared disk
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 08/03/2021 Last updated : 08/16/2021 - # Share an Azure managed disk
virtual-machines Fx Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/fx-series.md
# FX-series
-The FX-series runs on the Intel® Xeon® Gold 6246R (Cascade Lake) processors. It features an all-core-turbo frequency of 4.0 GHz, 21 GB RAM per vCPU, up to 1 TB total RAM, and local temporary storage. The FX-series will benefit workloads that require a high CPU clock speed and high memory to CPU ratio, workloads with high per-core licensing costs, and applications requiring high a single-core performance. A typical use case for FX-series is the Electronic Design Automation (EDA) workload.
+The FX-series runs on the Intel® Xeon® Gold 6246R (Cascade Lake) processors. It features an all-core-turbo frequency of 4.0 GHz, 21 GB RAM per vCPU, up to 1 TB total RAM, and local temporary storage. The FX-series will benefit workloads that require a high CPU clock speed and high memory to CPU ratio, workloads with high per-core licensing costs, and applications requiring a high single-core performance. A typical use case for FX-series is the Electronic Design Automation (EDA) workload.
FX-series VMs feature [Intel® Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel® Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html), and [Intel® Advanced Vector Extensions 512 (Intel® AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html).
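As a quick illustration of the 21 GB-per-vCPU ratio stated above, the following Python sketch estimates total RAM for a given FX-series vCPU count. The `Standard_FX<n>mds` size names are assumptions based on the published FX-series table; treat them as illustrative:

```python
def fx_memory_gb(vcpus):
    """Approximate RAM for an FX-series size: 21 GB per vCPU,
    capped at the series maximum of about 1 TB (figures from this article)."""
    return min(vcpus * 21, 1008)

# Hypothetical walk over the FX-series vCPU counts
for vcpus in (4, 12, 24, 48):
    print(f"Standard_FX{vcpus}mds: {fx_memory_gb(vcpus)} GB RAM")
```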
virtual-machines Tutorial Manage Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/tutorial-manage-disks.md
Once a disk has been attached to the virtual machine, the operating system needs
Create an SSH connection with the virtual machine. Replace the example IP address with the public IP of the virtual machine. ```console
-ssh 10.101.10.10
+ssh azureuser@10.101.10.10
``` Partition the disk with `parted`.
virtual-machines Nv Series Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nv-series-migration-guide.md
+
+ Title: NV-series migration guide
+description: NV-series migration guide
++++ Last updated : 01/12/2020++
+# NV-series migration guide
+
+As more powerful GPU VM sizes become available in Microsoft Azure datacenters, we recommend assessing your workloads and migrating the virtual machines (VMs) in the NV and NV_Promo series. These legacy VMs can be migrated to newer VM series such as NVsv3 and NVasv4 for better performance at reduced cost. The NVsv3 VM series is powered by NVIDIA M60 GPUs and the NVasv4 series by AMD Radeon Instinct MI25 GPUs. The main differences between the NV and NV_Promo series and the newer NVsv3 and NVasv4 series are improved performance, support for premium storage, and the option to choose configurations ranging from a fractional GPU to multiple GPUs. Both the NVsv3 and NVasv4 series have more modern cores and greater capacity.
+
+The following sections summarize the differences between the legacy NV series and the NVsv3 and NVv4 series.
+
## NVsv3 series
+
+The NVv3-series virtual machines are powered by NVIDIA Tesla M60 GPUs and NVIDIA GRID technology with Intel E5-2690 v4 (Broadwell) CPUs and Intel Hyper-Threading Technology. These virtual machines are targeted for GPU accelerated graphics applications and virtual desktops where customers want to visualize their data, simulate results to view, work on CAD, or render and stream content. Additionally, these virtual machines can run single precision workloads such as encoding and rendering. NVv3 virtual machines support Premium Storage and come with twice the system memory (RAM) compared with the NV-series. For the most up-to-date specifications, see [GPU Accelerated Compute VM Sizes: NVsv3-series](nvv3-series.md).
+
+| Current VM Size | Target VM Size | Difference in Specification |
+||||
+|Standard_NV6 <br> Standard_NV6_Promo |Standard_NV12s_v3 | vCPU: 12 (+6) <br> Memory: GiB 112 (+56) <br> Temp Storage (SSD) GiB: 320 (-20) <br> Max data disks: 12 (-12) <br> Accelerated Networking: Yes <br> Premium Storage: Yes |
+|Standard_NV12 <br> Standard_NV12_Promo |Standard_NV24s_v3 | vCPU: 24 (+12) <br>Memory: GiB 224 (+112) <br>Temp Storage (SSD) GiB: 640 (-40)<br>Max data disks: 24 (-24)<br>Accelerated Networking: Yes <br>Premium Storage: Yes |
+|Standard_NV24 <br> Standard_NV24_Promo |Standard_NV48s_v3 | vCPU: 48 (+24) <br>Memory: GiB 448 (+224) <br>Temp Storage (SSD) GiB: 1280 (-160) <br>Max data disks: 32 (-32) <br>Accelerated Networking: Yes <br>Premium Storage: Yes |
+
+## NVsv4 series
+
+The NVv4-series virtual machines are powered by AMD Radeon Instinct MI25 GPUs and AMD EPYC 7V12 (Rome) CPUs. With the NVv4-series, Azure is introducing virtual machines with partial GPUs. Pick the right-sized virtual machine for GPU accelerated graphics applications and virtual desktops, starting at 1/8th of a GPU with a 2 GiB frame buffer up to a full GPU with a 16 GiB frame buffer. NVv4 virtual machines currently support only the Windows guest operating system. For the most up-to-date specifications, see [GPU Accelerated Compute VM Sizes: NVsv4-series](nvv4-series.md).
+
+| Current VM Size | Target VM Size | Difference in Specification |
+||||
+|Standard_NV6 <br> Standard_NV6_Promo |Standard_NV16as_v4 | vCPU: 16 (+10) <br>Memory: GiB 56 <br>Temp Storage (SSD) GiB: 352 (+12) <br>Max data disks: 16 (-8) <br>Accelerated Networking: Yes <br>Premium Storage: Yes |
+|Standard_NV12 <br> Standard_NV12_Promo |Standard_NV32as_v4 | vCPU: 32 (+20) <br>Memory: GiB 112 <br>Temp Storage (SSD) GiB: 704 (+24) <br>Max data disks: 32 (+16)<br>Accelerated Networking: Yes <br>Premium Storage: Yes |
+|Standard_NV24 <br> Standard_NV24_Promo |N/A | N/A |
+
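The source-to-target mapping in the two tables above can be encoded as a simple lookup. A minimal Python sketch; the fallback logic (preferring NVv4 only where a target exists) is illustrative, not an official recommendation:

```python
# Source -> target mapping taken from the NVsv3 and NVv4 tables in this guide.
# NV_Promo sizes map the same way as their NV counterparts.
NVSV3_TARGETS = {
    "Standard_NV6": "Standard_NV12s_v3",
    "Standard_NV12": "Standard_NV24s_v3",
    "Standard_NV24": "Standard_NV48s_v3",
}
NVV4_TARGETS = {
    "Standard_NV6": "Standard_NV16as_v4",
    "Standard_NV12": "Standard_NV32as_v4",
    "Standard_NV24": None,  # no NVv4 target per the table above
}

def migration_target(current_size, prefer_nvv4=False):
    """Return a recommended target size, falling back to NVsv3 when the
    preferred NVv4 family has no equivalent for the current size."""
    base = current_size.replace("_Promo", "")
    if prefer_nvv4 and NVV4_TARGETS.get(base):
        return NVV4_TARGETS[base]
    return NVSV3_TARGETS.get(base)

print(migration_target("Standard_NV6_Promo", prefer_nvv4=True))  # Standard_NV16as_v4
print(migration_target("Standard_NV24", prefer_nvv4=True))       # Standard_NV48s_v3
```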
+## Migration Steps
+
+
+General changes:
+
+1. Choose a series and size for migration.
+
+2. Get quota for the target VM series.
+
+3. Resize the current NV-series VM to the target size.
+
+   If the target size is NVv4, make sure to remove the NVIDIA GPU driver and install the AMD GPU driver.
+
+## Breaking Changes
+
+## Select target size for migration
+After assessing your current usage, decide what type of GPU VM you need. Depending on the workload requirements, you have a few different choices. Here's how to choose:
+
+If the workload is graphics/visualization and has a hard dependency on using an NVIDIA GPU, we recommend migrating to the NVsv3 series.
+
+If the workload is graphics/visualization and has no hard dependency on a specific type of GPU, we recommend the NVsv3 or NVasv4 series.
+> [!Note]
+>A best practice is to select a VM size based on both cost and performance.
+>The recommendations in this guide are based on a one-to-one comparison of performance metrics for the NV and NV_Promo sizes and the nearest match in another VM series.
+>Before deciding on the right size, get a cost comparison using the Azure Pricing Calculator.
+
+## Get quota for the target VM family
+
+Follow the guide to [request an increase in vCPU quota by VM family](../azure-portal/supportability/per-vm-quota-requests.md). Select NVSv3 Series or NVv4 Series as the VM family name depending on the target VM size you have selected for migration.
+## Resize the current virtual machine
+You can [resize the virtual machine through Azure portal or PowerShell](./windows/resize-vm.md). You can also [resize the virtual machine using Azure CLI](./linux/change-vm-size.md).
+
+## FAQ
+**Q:** Which GPU driver should I use for the target VM size?
+
+**A:** For NVsv3 series, use the [Nvidia GRID driver](./windows/n-series-driver-setup.md). For NVv4, use the [AMD GPU drivers](./windows/n-series-amd-driver-setup.md).
+
+**Q:** I use Nvidia GPU driver extension today. Will it work for the target VM size?
+
+**A:** The current [Nvidia driver extension](./extensions/hpccompute-gpu-windows.md) will work for NVsv3. Use the [AMD GPU driver extensions](./extensions/hpccompute-amd-gpu-windows.md) if the target VM size is NVv4.
+
+**Q:** Which target VM series should I use if I have dependency on CUDA?
+
**A:** NVv3 supports CUDA. The NVv4 VM series with AMD GPUs does not support CUDA.
virtual-machines Nv Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nv-series-retirement.md
+
+ Title: NV-series retirement
+description: NV-series retirement starting September 1, 2021
++++ Last updated : 01/12/2020++
+# Migrate your NV and NV_Promo series virtual machines by August 31, 2022
+As we continue to bring modern and optimized virtual machine instances to Azure using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
+With this in mind, we are retiring our NV-series Azure Virtual Machine sizes on September 01, 2022.
+
+## How does the NV-series migration affect me?
+
+After September 01, 2022, any NV and NV_Promo size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host, and will no longer be billed while deallocated.
+
+The current VM size retirement impacts only the VM sizes in the [NV-series](nv-series.md). It does not impact the [NVv3](nvv3-series.md) and [NVv4](nvv4-series.md) series virtual machines.
+
+## What actions should I take?
+
+You will need to resize or deallocate your NV virtual machines. We recommend moving your GPU visualization/graphics workloads to another [GPU Accelerated Virtual Machine size](sizes-gpu.md).
+
+[Learn more](nv-series-migration-guide.md) about migrating your workloads to other GPU Azure Virtual Machine sizes.
+
+If you have questions, contact us through customer support.
virtual-machines Sizes Compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-compute.md
# Compute optimized virtual machine sizes
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ Compute optimized VM sizes have a high CPU-to-memory ratio. These sizes are good for medium traffic web servers, network appliances, batch processes, and application servers. This article provides information about the number of vCPUs, data disks, and NICs. It also includes information about storage throughput and network bandwidth for each size in this grouping. - The [Fsv2-series](fsv2-series.md) runs on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake) processors and Intel® Xeon® Platinum 8168 (Skylake) processors. It features a sustained all core Turbo clock speed of 3.4 GHz and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions are new on Intel Scalable Processors. These instructions provide up to a 2X performance boost to vector processing workloads on both single and double precision floating point operations. In other words, they're really fast for any computational workload. At a lower per-hour list price, the Fsv2-series is the best value in price-performance in the Azure portfolio based on the Azure Compute Unit (ACU) per vCPU.
virtual-machines Sizes General https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-general.md
# General purpose virtual machine sizes
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ General purpose VM sizes provide balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. This article provides information about the offerings for general purpose computing. - The [Av2-series](av2-series.md) VMs can be deployed on a variety of hardware types and processors. A-series VMs have CPU performance and memory configurations best suited for entry level workloads like development and test. The size is throttled, based upon the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. Example use cases include development and test servers, low traffic web servers, small to medium databases, proof-of-concepts, and code repositories.
virtual-machines Sizes Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-gpu.md
# GPU optimized virtual machine sizes
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ GPU optimized VM sizes are specialized virtual machines available with single, multiple, or fractional GPUs. These sizes are designed for compute-intensive, graphics-intensive, and visualization workloads. This article provides information about the number and type of GPUs, vCPUs, data disks, and NICs. Storage throughput and network bandwidth are also included for each size in this grouping. - The [NCv3-series](ncv3-series.md) and [NC T4_v3-series](nct4-v3-series.md) sizes are optimized for compute-intensive GPU-accelerated applications. Some examples are CUDA and OpenCL-based applications and simulations, AI, and Deep Learning. The NC T4 v3-series is focused on inference workloads featuring NVIDIA's Tesla T4 GPU and AMD EPYC2 Rome processor. The NCv3-series is focused on high-performance computing and AI workloads featuring NVIDIA's Tesla V100 GPU.
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-hpc.md
# High performance computing VM sizes
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ Azure H-series virtual machines (VMs) are designed to deliver leadership-class performance, scalability, and cost efficiency for various real-world HPC workloads. [HBv3-series](hbv3-series.md) VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.675 GHz.
virtual-machines Sizes Memory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-memory.md
# Memory optimized virtual machine sizes
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ Memory optimized VM sizes offer a high memory-to-CPU ratio that is great for relational database servers, medium to large caches, and in-memory analytics. This article provides information about the number of vCPUs, data disks and NICs as well as storage throughput and network bandwidth for each size in this grouping. - [Dv2 and DSv2-series](dv2-dsv2-series-memory.md), a follow-on to the original D-series, features a more powerful CPU. The Dv2-series is about 35% faster than the D-series. It runs on the Intel&reg; Xeon&reg; 8171M 2.1 GHz (Skylake) or the Intel&reg; Xeon&reg; E5-2673 v4 2.3 GHz (Broadwell) or the Intel&reg; Xeon&reg; E5-2673 v3 2.4 GHz (Haswell) processors, and with the Intel Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk configurations as the D-series.
virtual-machines Sizes Previous Gen https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-previous-gen.md
# Previous generations of virtual machine sizes
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ This section provides information on previous generations of virtual machine sizes. These sizes can still be used, but there are newer generations available. ## F-series
virtual-machines Sizes Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-storage.md
# Storage optimized virtual machine sizes
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ Storage optimized VM sizes offer high disk throughput and IO, and are ideal for Big Data, SQL, NoSQL databases, data warehousing, and large transactional databases. Examples include Cassandra, MongoDB, Cloudera, and Redis. This article provides information about the number of vCPUs, data disks, and NICs as well as local storage throughput and network bandwidth for each optimized size. The [Lsv2-series](lsv2-series.md) features high throughput, low latency, directly mapped local NVMe storage running on the [AMD EPYC<sup>TM</sup> 7551 processor](https://www.amd.com/en/products/epyc-7000-series) with an all core boost of 2.55GHz and a max boost of 3.0GHz. The Lsv2-series VMs come in sizes from 8 to 80 vCPU in a simultaneous multi-threading configuration. There is 8 GiB of memory per vCPU, and one 1.92TB NVMe SSD M.2 device per 8 vCPUs, with up to 19.2TB (10x1.92TB) available on the L80s v2.
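The Lsv2 figures above (8 GiB of RAM per vCPU, one 1.92 TB NVMe M.2 device per 8 vCPUs) compose linearly, which makes capacity planning a quick calculation. A small illustrative Python sketch:

```python
def lsv2_local_storage_tb(vcpus):
    """Local NVMe capacity for an Lsv2 size: one 1.92 TB device
    per 8 vCPUs (figures from this article)."""
    return (vcpus // 8) * 1.92

def lsv2_memory_gib(vcpus):
    """Lsv2 sizes provide 8 GiB of RAM per vCPU."""
    return vcpus * 8

# The L80s v2 (80 vCPUs) gets 10 NVMe devices and 640 GiB of RAM.
print(lsv2_local_storage_tb(80), "TB,", lsv2_memory_gib(80), "GiB")
```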
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes.md
# Sizes for virtual machines in Azure
+> [!TIP]
+> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
+ This article describes the available sizes and options for the Azure virtual machines you can use to run your apps and workloads. It also provides deployment considerations to be aware of when you're planning to use these resources. :::image type="content" source="media/sizes/azurevmsthumb.jpg" alt-text="YouTube video for selecting the right size for your VM." link="https://youtu.be/zOSvnJFd3ZM":::
virtual-network Tutorial Nat Gateway Load Balancer Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md
Previously updated : 03/19/2021 Last updated : 08/04/2021
In this section, you'll create a virtual network and subnet.
| Resource Group | Select **Create new**. Enter **TutorIntLBNAT-rg**. </br> Select **OK**. | | **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **East US** |
+ | Region | Select **(US) East US** |
4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
In this section, you'll create a virtual network and subnet.
## Create load balancer
-In this section, you'll create a Standard Azure Load Balancer.
+In this section, you create a load balancer that load balances virtual machines.
-1. Select **Create a resource**.
+During the creation of the load balancer, you'll configure:
-2. In the search box, enter **Load balancer**. Select **Load balancer** in the search results.
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
-3. In the **Load balancer** page, select **Create**.
-4. On the **Create load balancer** page enter, or select the following information:
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
| Setting | Value | | | | | **Project details** | | | Subscription | Select your subscription. |
- | Resource group | Select **TutorIntLBNAT-rg**.|
- | **Instance details** | |
+ | Resource group | Select **TutorIntLBNAT-rg**. |
+ | **Instance details** | |
| Name | Enter **myLoadBalancer** | | Region | Select **(US) East US**. | | Type | Select **Internal**. | | SKU | Leave the default **Standard**. |
- | **Configure virtual network** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **myBackendSubnet**. |
- | IP address assignment | Select **Dynamic**. |
- | Availability zone | Select **Zone-redundant** to create a resilient load balancer. </br> To create a zonal load balancer, select a specific zone from 1, 2, or 3 |
-
-
-5. Accept the defaults for the remaining settings, and then select **Review + create**.
-
-6. In the **Review + create** tab, select **Create**.
-
-## Create load balancer resources
-
-In this section, you configure:
-
-* Load balancer settings for a backend address pool.
-* A health probe.
-* A load balancer rule.
-### Create a backend pool
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
-A backend address pool contains the IP addresses of the virtual (NICs) connected to the load balancer.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-Create the backend address pool **myBackendPool** to include virtual machines for load-balancing internet traffic.
+6. Enter **LoadBalancerFrontend** in **Name**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+7. Select **myBackendSubnet** in **Subnet**.
-2. Under **Settings**, select **Backend pools**, then select **Add**.
+8. Select **Dynamic** for **Assignment**.
-3. On the **Add a backend pool** page, for name, enter **myBackendPool**, as the name for your backend pool, and then select **Add**.
+9. Select **Zone-redundant** in **Availability zone**.
-### Create a health probe
+ > [!NOTE]
+ > In regions with [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../../availability-zones/az-overview.md).
-The load balancer monitors the status of your app with a health probe.
+10. Select **Add**.
-The health probe adds or removes VMs from the load balancer based on their response to health checks.
+11. Select **Next: Backend pools** at the bottom of the page.
-Create a health probe named **myHealthProbe** to monitor the health of the VMs.
+12. In the **Backend pools** tab, select **+ Add a backend pool**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+13. Enter **myBackendPool** for **Name** in **Add backend pool**.
-2. Under **Settings**, select **Health probes**, then select **Add**.
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHealthProbe**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**.|
- | Interval | Enter **15** for number of **Interval** in seconds between probe attempts. |
- | Unhealthy threshold | Select **2** for number of **Unhealthy threshold** or consecutive probe failures that must occur before a VM is considered unhealthy.|
-
-3. Leave the rest the defaults and Select **Add**.
-
-### Create a load balancer rule
+14. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination port are defined in the rule.
+15. Select **IPv4** or **IPv6** for **IP version**.
-In this section, you'll create a load balancer rule:
+16. Select **Add**.
-* Named **myHTTPRule**.
-* In the frontend named **LoadBalancerFrontEnd**.
-* Listening on **Port 80**.
-* Directs load balanced traffic to the backend named **myBackendPool** on **Port 80**.
+17. Select the **Next: Inbound rules** button at the bottom of the page.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-2. Under **Settings**, select **Load-balancing rules**, then select **Add**.
+19. In **Add load balancing rule**, enter or select the following information:
-3. Use these values to configure the load-balancing rule:
-
 | Setting | Value |
 | - | -- |
- | Name | Enter **myHTTPRule**. |
- | IP Version | Select **IPv4** |
- | Frontend IP address | Select **LoadBalancerFrontEnd** |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **LoadBalancerFrontend**. |
| Protocol | Select **TCP**. |
- | Port | Enter **80**.|
+ | Port | Enter **80**. |
| Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**.|
- | Health probe | Select **myHealthProbe**. |
- | Idle timeout (minutes) | Enter **15** minutes. |
+ | Backend pool | Select **myBackendPool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
| TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+
+20. Select **Add**.
-4. Leave the rest of the defaults and then select **OK**.
+21. Select the blue **Review + create** button at the bottom of the page.
+22. Select **Create**.
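Conceptually, the health probe created as part of the load-balancing rule behaves like the sketch below: a VM is taken out of rotation after the configured number of consecutive probe failures, and returned once a probe succeeds again. This is an illustrative model only (the class name, default interval, and threshold values are assumptions based on the tutorial settings), not Azure's implementation:

```python
# Illustrative sketch of unhealthy-threshold logic, not Azure's internals.
class HealthProbe:
    def __init__(self, interval_seconds=15, unhealthy_threshold=2):
        self.interval_seconds = interval_seconds
        self.unhealthy_threshold = unhealthy_threshold
        self.consecutive_failures = 0
        self.healthy = True

    def record(self, probe_succeeded: bool) -> bool:
        """Record one probe attempt and return the current health state."""
        if probe_succeeded:
            self.consecutive_failures = 0
            self.healthy = True
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.unhealthy_threshold:
                self.healthy = False  # VM is removed from rotation
        return self.healthy

probe = HealthProbe(interval_seconds=15, unhealthy_threshold=2)
print(probe.record(False))  # True  - one failure, still healthy
print(probe.record(False))  # False - second consecutive failure, unhealthy
print(probe.record(True))   # True  - success, back in rotation
```

The key point for the portal settings: a single failed probe does not remove a VM; only the configured run of consecutive failures does.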
## Create virtual machines
These VMs are added to the backend pool of the load balancer that was created earlier.
 | Resource Group | Select **TutorIntLBNAT-rg** |
 | **Instance details** | |
 | Virtual machine name | Enter **myVM1** |
- | Region | Select **East US** |
+ | Region | Select **(US) East US** |
 | Availability Options | Select **Availability zones** |
 | Availability zone | Select **1** |
 | Image | Select **Windows Server 2019 Datacenter** |
These VMs are added to the backend pool of the load balancer that was created earlier.
 | Subnet | **myBackendSubnet** |
 | Public IP | Select **None**. |
 | NIC network security group | Select **Advanced** |
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myHTTPRule** </br> Select **Add** </br> Select **OK** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
 | **Load balancing** | |
 | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
 | **Load balancing settings** | |
These VMs are added to the backend pool of the load balancer that was created earlier.
6. Review the settings, and then select **Create**.
-7. Follow the steps 1 to 7 to create a VM with the following values and all the other settings the same as **myVM1**:
+7. Follow steps 1 to 6 to create a VM with the following values and all the other settings the same as **myVM1**:
 | Setting | VM 2 |
 | - | -- |
In this section, you'll create a NAT gateway and assign it to the subnet in the virtual network you created earlier.
 | **Setting** | **Value** |
 | -- | -- |
- | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myPublicIP-NAT**. </br> Select **OK**. |
+ | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myNATgatewayIP**. </br> Select **OK**. |
6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
virtual-network Tutorial Nat Gateway Load Balancer Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal.md
In this tutorial, you learn how to:
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create load balancer
+## Create the virtual network
-In this section, you'll create a Standard Azure Load Balancer.
+In this section, you'll create a virtual network and subnet.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Create a resource**.
-3. In the search box, enter **Load balancer**. Select **Load balancer** in the search results.
-4. In the **Load balancer** page, select **Create**.
-5. On the **Create load balancer** page enter, or select the following information:
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and enter **TutorPubLBNAT-rg** in the text box. </br> Select **OK**.|
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **(US) East US**. |
- | Type | Select **Public**. |
- | SKU | Leave the default **Standard**. |
- | Tier | Leave the default **Regional**. |
- | **Public IP address** | |
- | Public IP address | Select **Create new**. </br> If you have an existing Public IP you would like to use, select **Use existing**. |
- | Public IP address name | Enter **myPublicIP-LB** in the text box.|
- | Availability zone | Select **Zone-redundant** to create a resilient load balancer. To create a zonal load balancer, select a specific zone from 1, 2, or 3 |
- | Add a public IPv6 address | Select **No**. </br> For more information on IPv6 addresses and load balancer, see [What is IPv6 for Azure Virtual Network?](../ipv6-overview.md) |
- | Routing preference | Leave the default of **Microsoft network**. </br> For more information on routing preference, see [What is routing preference (preview)?](../routing-preference-overview.md). |
+2. In **Virtual networks**, select **+ Create**.
+
+3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new**. </br> In **Name** enter **TutorPubLBNAT-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **(US) East US** |
+
+4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+5. In the **IP Addresses** tab, enter this information:
-6. Accept the defaults for the remaining settings, and then select **Review + create**.
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
-7. In the **Review + create** tab, select **Create**.
+6. Under **Subnet name**, select the word **default**.
-## Create load balancer resources
+7. In **Edit subnet**, enter this information:
-In this section, you configure:
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+8. Select **Save**.
+
+9. Select the **Security** tab.
-* Load balancer settings for a backend address pool.
-* A health probe.
-* A load balancer rule.
+10. Under **BastionHost**, select **Enable**. Enter this information:
-### Create a backend pool
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
-A backend address pool contains the IP addresses of the virtual (NICs) connected to the load balancer.
-Create the backend address pool **myBackendPool** to include virtual machines for load-balancing internet traffic.
+11. Select the **Review + create** tab or select the **Review + create** button.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+12. Select **Create**.
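The address plan used above (VNet **10.1.0.0/16**, **myBackendSubnet** at **10.1.0.0/24**, **AzureBastionSubnet** at **10.1.1.0/27**) can be sanity-checked with Python's standard `ipaddress` module. This is a verification sketch for the tutorial's values, not part of the portal procedure:

```python
import ipaddress

# The tutorial's address plan: both subnets must fit inside the VNet's
# IPv4 address space and must not overlap each other.
vnet = ipaddress.ip_network("10.1.0.0/16")
backend = ipaddress.ip_network("10.1.0.0/24")   # myBackendSubnet
bastion = ipaddress.ip_network("10.1.1.0/27")   # AzureBastionSubnet

assert backend.subnet_of(vnet) and bastion.subnet_of(vnet)
assert not backend.overlaps(bastion)
# AzureBastionSubnet needs at least a /27 (32 addresses) at the time
# of writing; larger prefixes (smaller prefix length) also work.
assert bastion.prefixlen <= 27
print(backend.num_addresses, bastion.num_addresses)  # 256 32
```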
-2. Under **Settings**, select **Backend pools**, then select **Add**.
+## Create load balancer
-3. On the **Add a backend pool** page, for name, type **myBackendPool**, as the name for your backend pool, and then select **Add**.
+In this section, you'll create a zone-redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
-### Create a health probe
+During the creation of the load balancer, you'll configure:
-The load balancer monitors the status of your app with a health probe.
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
-The health probe adds or removes VMs from the load balancer based on their response to health checks.
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-Create a health probe named **myHealthProbe** to monitor the health of the VMs.
+2. In the **Load balancer** page, select **Create**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-2. Under **Settings**, select **Health probes**, then select **Add**.
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHealthProbe**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**.|
- | Interval | Enter **15** for number of **Interval** in seconds between probe attempts. |
- | Unhealthy threshold | Select **2** for number of **Unhealthy threshold** or consecutive probe failures that must occur before a VM is considered unhealthy.|
-
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorPubLBNAT-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **(US) East US**. |
+ | Type | Select **Public**. |
+ | SKU | Leave the default **Standard**. |
+ | Tier | Leave the default **Regional**. |
-3. Leave the rest the defaults and Select **Add**.
-### Create a load balancer rule
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
-A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination port are defined in the rule.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-In this section, you'll create a load balancer rule:
+6. Enter **LoadBalancerFrontend** in **Name**.
-* Named **myHTTPRule**.
-* In the frontend named **LoadBalancerFrontEnd**.
-* Listening on **Port 80**.
-* Directs load balanced traffic to the backend named **myBackendPool** on **Port 80**.
+7. Select **IPv4** or **IPv6** for the **IP version**.
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-2. Under **Settings**, select **Load-balancing rules**, then select **Add**.
+8. Select **IP address** for the **IP type**.
-3. Use these values to configure the load-balancing rule:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPRule**. |
- | IP Version | Select **IPv4** |
- | Frontend IP address | Select **LoadBalancerFrontEnd** |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**.|
- | Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**.|
- | Health probe | Select **myHealthProbe**. |
- | Idle timeout (minutes) | Enter **15** minutes. |
- | TCP reset | Select **Enabled**. |
- | Outbound source network address translation (SNAT) | Select **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../../virtual-network/public-ip-address-prefix.md).
-4. Leave the rest of the defaults and then select **OK**.
+9. Select **Create new** in **Public IP address**.
-## Create the virtual network
+10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
-In this section, you'll create a virtual network and subnet.
+11. Select **Zone-redundant** in **Availability zone**.
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+ > [!NOTE]
+ > In regions with [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../../availability-zones/az-overview.md).
-2. Select **Create**.
+12. Leave the default of **Microsoft Network** for **Routing preference**.
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+13. Select **OK**.
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **TutorPubLBNAT-rg** |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **East US** |
+14. Select **Add**.
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+15. Select **Next: Backend pools** at the bottom of the page.
-5. In the **IP Addresses** tab, enter this information:
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
+17. Enter **myBackendPool** for **Name** in **Add backend pool**.
-6. Under **Subnet name**, select the word **default**.
+18. Select **myVNet** in **Virtual network**.
-7. In **Edit subnet**, enter this information:
+19. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
+20. Select **IPv4** or **IPv6** for **IP version**.
-8. Select **Save**.
+21. Select **Add**.
-9. Select the **Security** tab.
+22. Select the **Next: Inbound rules** button at the bottom of the page.
-10. Under **BastionHost**, select **Enable**. Enter this information:
+23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+24. In **Add load balancing rule**, enter or select the following information:
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-11. Select the **Review + create** tab or select the **Review + create** button.
+25. Select **Add**.
-12. Select **Create**.
+26. Select the blue **Review + create** button at the bottom of the page.
+
+27. Select **Create**.
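With **Session persistence** set to **None**, traffic for the rule is spread across the backend pool using a five-tuple hash (source IP, source port, destination IP, destination port, protocol), so packets of one flow always reach the same VM. The sketch below uses a toy hash to illustrate the idea; it is not Azure's actual algorithm, and the addresses are invented:

```python
import hashlib

# Backend pool from the tutorial.
backend_pool = ["myVM1", "myVM2"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol="TCP"):
    # Hash the five-tuple and map it onto the pool. Deterministic per flow:
    # the same five-tuple always selects the same backend VM.
    five_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}"
    digest = hashlib.sha256(five_tuple.encode()).digest()
    return backend_pool[int.from_bytes(digest[:4], "big") % len(backend_pool)]

# Packets of the same flow always land on the same VM:
vm = pick_backend("203.0.113.10", 50001, "20.0.0.1", 80)
assert vm == pick_backend("203.0.113.10", 50001, "20.0.0.1", 80)
print(vm in backend_pool)  # True
```

A new source port (a new flow) may hash to a different VM, which is how the rule balances load across the pool.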
## Create virtual machines
These VMs are added to the backend pool of the load balancer that was created earlier.
 | Resource Group | Select **TutorPubLBNAT-rg** |
 | **Instance details** | |
 | Virtual machine name | Enter **myVM1** |
- | Region | Select **East US** |
+ | Region | Select **(US) East US** |
 | Availability Options | Select **Availability zones** |
 | Availability zone | Select **1** |
 | Image | Select **Windows Server 2019 Datacenter** |
These VMs are added to the backend pool of the load balancer that was created earlier.
 | Subnet | **myBackendSubnet** |
 | Public IP | Select **None**. |
 | NIC network security group | Select **Advanced** |
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myHTTPRule** </br> Select **Add** </br> Select **OK** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> In **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
 | **Load balancing** | |
 | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
 | **Load balancing settings** | |
- | Load balancing options | Select **Azure load balancer** |
+ | Load-balancing options | Select **Azure load balancer** |
 | Select a load balancer | Select **myLoadBalancer** |
 | Select a backend pool | Select **myBackendPool** |
These VMs are added to the backend pool of the load balancer that was created earlier.
 | - | -- |
 | Name | **myVM2** |
 | Availability zone | **2** |
- | Network security group | Select the existing **myNSG**|
+ | Network security group | Select the existing **myNSG** |
## Create NAT gateway
In this section, you'll create a NAT gateway and assign it to the subnet in the virtual network you created earlier.
 | **Setting** | **Value** |
 | -- | -- |
- | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myPublicIP-NAT**. </br> Select **OK**. |
+ | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myNATgatewayIP**. </br> Select **OK**. |
6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
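The NAT gateway's role can be pictured with this sketch: outbound flows from VMs that have only private IPs are source-NATed to the gateway's public IP (**myNATgatewayIP** in this tutorial), with each flow getting its own SNAT port on that address. The class name, starting port, and allocation policy below are simplified assumptions for illustration, not Azure's internals:

```python
import itertools

# Toy model of SNAT: map (private source endpoint, destination) flows
# to (public IP, SNAT port) pairs, one port per flow.
class NatGateway:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self._ports = itertools.count(1024)  # assumed starting port
        self.flows = {}

    def translate(self, private_src, dest):
        key = (private_src, dest)
        if key not in self.flows:
            self.flows[key] = (self.public_ip, next(self._ports))
        return self.flows[key]

gw = NatGateway("52.0.0.10")  # hypothetical public IP
print(gw.translate(("10.1.0.4", 50000), ("93.184.216.34", 443)))  # ('52.0.0.10', 1024)
print(gw.translate(("10.1.0.5", 50000), ("93.184.216.34", 443)))  # ('52.0.0.10', 1025)
```

Two VMs using the same source port still get distinct SNAT ports, so return traffic can be mapped back to the right flow.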
virtual-wan Sd Wan Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/sd-wan-connectivity-architecture.md
In this model, some vendor proprietary traffic optimization based on real-time t
With Virtual WAN, users can get Azure Path Selection, which is policy-based path selection across multiple ISP links from the branch CPE to Virtual WAN VPN gateways. Virtual WAN allows for the setup of multiple links (paths) from the same SD-WAN branch CPE; each link represents a dual tunnel connection from a unique public IP of the SD-WAN CPE to two different instances of Azure Virtual WAN VPN gateway. SD-WAN vendors can implement the most optimal path to Azure, based on traffic policies set by their policy engine on the CPE links. On the Azure end, all connections coming in are treated equally.
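The policy-based path selection described above can be sketched as follows. Each link represents a tunnel pair from the SD-WAN CPE to Virtual WAN VPN gateway instances, and the CPE's policy engine picks the link that best satisfies the traffic class's policy. All names, metrics, and thresholds here are hypothetical, invented purely for illustration:

```python
# Hypothetical link state reported by an SD-WAN CPE for two ISP uplinks.
links = [
    {"name": "isp-a", "latency_ms": 35, "loss_pct": 0.1},
    {"name": "isp-b", "latency_ms": 20, "loss_pct": 2.0},
]

def select_link(traffic_class):
    # Example policy: voice traffic requires low loss; other traffic
    # simply prefers the lowest-latency link.
    if traffic_class == "voice":
        candidates = [l for l in links if l["loss_pct"] < 1.0] or links
        return min(candidates, key=lambda l: l["latency_ms"])["name"]
    return min(links, key=lambda l: l["latency_ms"])["name"]

print(select_link("voice"))  # isp-a (isp-b's loss exceeds the voice policy)
print(select_link("bulk"))   # isp-b (lowest latency)
```

On the Azure end all connections are treated equally, so path choice like this lives entirely in the CPE's policy engine.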
-## <a name="direct"></a>Direct Interconnect Model with NVA-in-VWAN-hub
+## <a name="direct-nva"></a>Direct Interconnect Model with NVA-in-VWAN-hub
:::image type="content" source="./media/sd-wan-connectivity-architecture/direct-nva.png" alt-text="Direct interconnect model with NVA-in-VWAN-hub":::