Updates from: 07/07/2022 01:15:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
After the first failure, the first retry happens within the next 2 hours (usuall
- The third retry happens 12 hours after the first failure.
- The fourth retry happens 24 hours after the first failure.
- The fifth retry happens 48 hours after the first failure.
-- The sixth retry happens 96 hours after the first failure.
-- The seventh retry happens 168 hours after the first failure.
+- The sixth retry happens 72 hours after the first failure.
+- The seventh retry happens 96 hours after the first failure.
+- The eighth retry happens 120 hours after the first failure.
-After the 7th failure, entry is flagged and no further retries are run.
+This cycle is repeated every 24 hours until the 30th day, when retries are stopped and the job is disabled.
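A short sketch of the retry cadence described above, assuming the listed intervals (the first two retries are omitted because their exact timing isn't stated here):

```python
# Sketch of the retry schedule above, in hours after the first failure.
# The third through eighth retries use the documented intervals; afterwards
# the cycle repeats every 24 hours until day 30, when the job is disabled.
def retry_schedule(max_days: int = 30) -> list[int]:
    hours = [12, 24, 48, 72, 96, 120]   # third through eighth retries
    while hours[-1] + 24 <= max_days * 24:
        hours.append(hours[-1] + 24)    # one retry per day thereafter
    return hours

print(retry_schedule()[:10])  # [12, 24, 48, 72, 96, 120, 144, 168, 192, 216]
```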
## How do I get my application out of quarantine?
active-directory Provision On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md
Previously updated : 06/30/2022 Last updated : 07/06/2022
There are currently a few known limitations to on-demand provisioning. Post your
> [!NOTE] > The following limitations are specific to the on-demand provisioning capability. For information about whether an application supports provisioning groups, deletions, or other capabilities, check the tutorial for that application.
-* Amazon Web Services (AWS) application does not support on-demand provisioning.
* On-demand provisioning of groups supports updating up to 5 members at a time.
* On-demand provisioning of roles isn't supported.
* On-demand provisioning supports disabling users that have been unassigned from the application. However, it doesn't support disabling or deleting users that have been disabled or deleted from Azure AD. Those users won't appear when you search for a user.
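On-demand provisioning is typically triggered from the portal, but it can also be invoked programmatically. The following is a hedged sketch using the Microsoft Graph beta `provisionOnDemand` action; treat the payload shape as an assumption to verify against the Graph documentation, and all IDs and the token as placeholders.

```python
# Hypothetical sketch: trigger provisioning on demand for one user via the
# Microsoft Graph beta API. All IDs and the token below are placeholders.
import requests

token = "<access-token>"  # needs appropriate synchronization permissions
url = ("https://graph.microsoft.com/beta/servicePrincipals/<sp-id>"
       "/synchronization/jobs/<job-id>/provisionOnDemand")

payload = {
    "parameters": [{
        "ruleId": "<rule-id>",
        "subjects": [{"objectId": "<user-object-id>",
                      "objectTypeName": "User"}],
    }]
}

resp = requests.post(url, json=payload,
                     headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.text[:200])
```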
active-directory Workday Retrieve Pronoun Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-retrieve-pronoun-information.md
+
+ Title: Retrieve pronoun information from Workday
+description: Learn how to retrieve pronoun information from Workday
+ Last updated : 07/05/2022
+# Configure Azure AD provisioning to retrieve pronoun information from Workday
+This article describes how you can customize the following two HR-driven provisioning apps to fetch pronoun information from Workday.
+
+* [Workday to on-premises Active Directory user provisioning](../saas-apps/workday-inbound-tutorial.md)
+* [Workday to Azure Active Directory user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md)
+
+## About pronoun information in Workday
+Workday introduced the ability for workers to [display pronoun information](https://community.workday.com/node/731178) in their worker profile in the Workday 2021 R1 release. The ability to fetch pronoun data using a Workday Web Services (WWS) API call was introduced in [Get_Workers API version 38.1](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v38.1/Get_Workers.html) in the Workday 2022 R1 release.
+
+>[!NOTE]
+>Links to certain Workday community notes and documents in this article require Workday community account credentials. Please check with your Workday administrator or partner to get the required access.
+
+## Enabling pronoun data in Workday
+This section describes steps required to enable pronoun data in Workday. We recommend engaging your Workday administrator to complete the steps listed below.
+1. Ensure that pronoun display and sharing preferences are enabled per Workday guidelines. Refer to these Workday documents:
+
+ [Steps: Set Up Gender Pronouns to Display on a Worker Profile * Human Capital Management * Reader * Administrator Guide (workday.com)](https://doc.workday.com/r/gJQvxHUyQOZv_31Vknf~3w/7gZPvVfbRhLiPissprv6lQ)
+
+ [Steps: Set Up Public Profile Preferences * Human Capital Management * Reader * Administrator Guide (workday.com)](https://doc.workday.com/r/gJQvxHUyQOZv_31Vknf~3w/FuENV1VTRTHWo_h93KIjJA)
+
+1. Use Workday **Maintain Pronouns** task to define preferred pronoun data (HE/HIM, SHE/HER, and THEY/THEM) in your Workday tenant.
+1. Use the Workday **Maintain Localization Settings** task > **Personal Information** area to activate pronoun data for different countries.
+1. Select the Workday Integration System Security Group used with your Azure AD integration. Update the [domain permissions for the security group](../saas-apps/workday-inbound-tutorial.md#configuring-domain-security-policy-permissions), so it has GET access for the Workday domain **Reports: Public Profile**.
+ >[!div class="mx-imgBorder"]
+ >![Screenshot of permissions to setup in Workday.](./media/workday-pronoun-data/workday-pronoun-permissions.png)
+1. Activate Pending Security Policy changes.
+1. Select a worker in your Workday tenant for testing purposes. Set pronoun information for this worker using the **Edit Personal Information** task. Ensure that the worker has enabled pronoun display to all in their public profile preference.
+
+ >[!div class="mx-imgBorder"]
+ >![Screenshot of enabling pronoun display option.](./media/workday-pronoun-data/enable-pronoun-display-preference.png)
+
+1. Use Workday Studio or Postman to invoke [Get_Workers API version 38.1](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v38.1/Get_Workers.html) for the test user using the Workday Azure AD integration system user. In the SOAP request header, specify the option **Include_Reference_Descriptors_In_Response** (if you'd rather script this call, see the sketch at the end of this section).
+ ```
+ <bsvc:Workday_Common_Header>
+ <bsvc:Include_Reference_Descriptors_In_Response>true</bsvc:Include_Reference_Descriptors_In_Response>
+ </bsvc:Workday_Common_Header>
+ ```
+1. In the Get_Workers response, you will now see pronoun information.
+
+ >[!div class="mx-imgBorder"]
+ >![Screenshot of Workday Get Workers API response.](./media/workday-pronoun-data/get-workers-response-with-pronoun.png)
+
+>[!NOTE]
+>If you are not able to retrieve pronoun data in the *Get_Workers* response, then troubleshoot Workday domain security permissions. Ensure your integration security group has permission to the segmented security group that grants access to the pronoun data.
+
+Once you confirm that pronoun data is available in the *Get_Workers* response, go to the next step of updating your Azure AD provisioning app configuration.
+
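If you prefer to script this verification instead of using Workday Studio or Postman, the following is a minimal sketch. The endpoint URL, tenant name, integration system user (ISU) credentials, and employee ID are placeholders, and the request shape is an assumption to check against the Get_Workers v38.1 schema.

```python
# Minimal sketch: call Get_Workers v38.1 for one worker and print the
# pronoun descriptor. URL, tenant, ISU credentials, and employee ID are
# placeholders -- substitute your own values.
import requests
import xml.etree.ElementTree as ET

WWS_URL = "https://<workday-host>/ccx/service/<tenant>/Human_Resources/v38.1"
ISU_USER = "ISU_AzureAD@<tenant>"
ISU_PASSWORD = "<password>"

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:bsvc="urn:com.workday/bsvc">
  <soapenv:Header>
    <bsvc:Workday_Common_Header>
      <bsvc:Include_Reference_Descriptors_In_Response>true</bsvc:Include_Reference_Descriptors_In_Response>
    </bsvc:Workday_Common_Header>
  </soapenv:Header>
  <soapenv:Body>
    <bsvc:Get_Workers_Request bsvc:version="v38.1">
      <bsvc:Request_References>
        <bsvc:Worker_Reference>
          <bsvc:ID bsvc:type="Employee_ID">21001</bsvc:ID>
        </bsvc:Worker_Reference>
      </bsvc:Request_References>
    </bsvc:Get_Workers_Request>
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(WWS_URL, data=envelope,
                     headers={"Content-Type": "text/xml"},
                     auth=(ISU_USER, ISU_PASSWORD), timeout=30)
resp.raise_for_status()

# The pronoun reference carries a wd:Descriptor attribute such as "HE/HIM".
root = ET.fromstring(resp.text)
for ref in root.iter("{urn:com.workday/bsvc}Pronoun_Reference"):
    print(ref.attrib.get("{urn:com.workday/bsvc}Descriptor"))
```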
+## Updating Azure AD provisioning app to retrieve pronouns
+
+To retrieve pronouns from Workday, you'll need to update your Azure AD provisioning app to query Workday using v38.1 of the Workday Web Services. We recommend testing this configuration first in your test/sandbox environment before implementing the change in production.
+
+1. Sign in to the Azure portal as an administrator.
+1. Open your *Workday to AD User provisioning* app or *Workday to Azure AD User provisioning* app.
+1. In the **Admin Credentials** section, update the **Tenant URL** to include the Workday Web Service version v38.1 as shown below.
+
+ >[!div class="mx-imgBorder"]
+ >![Screenshot of Azure portal provisioning app with Workday version.](./media/workday-pronoun-data/update-workday-version.png)
+
+1. Open the **Attribute mappings** blade. Scroll down and click **Show advanced options**. Click **Edit attribute list for Workday**.
+1. If your provisioning app is configured to use the default WWS API version v21.1, then [reference this article to review and update the XPATHs for each attribute](workday-attribute-reference.md#xpath-values-for-workday-web-services-wws-api-v30).
+1. Add a new attribute called **PreferredPronoun** with the XPATH
+
+ `/wd:Worker/wd:Worker_Data/wd:Personal_Data/wd:Personal_Information_Data/wd:Pronoun_Reference/@wd:Descriptor`
+
+1. Save your changes.
+1. You can now add a new attribute mapping to flow the Workday attribute **PreferredPronoun** to any attribute in AD/Azure AD.
+1. If you want to incorporate pronoun information into the display name, you can update the attribute mapping for the displayName attribute to use the expression below (a sketch of how it evaluates follows this list).
+
+ `Switch([PreferredPronoun], Join("", [PreferredNameData], " (", [PreferredPronoun], ")"), "", [PreferredNameData])`
+
+1. If worker *Aaron Hall* has set his pronoun information in Workday as `HE/HIM`, the above expression sets the display name in Azure AD to *Aaron Hall (HE/HIM)*.
+1. Save your changes.
+1. Test the configuration for one user with provisioning on demand.
+
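For clarity, here's a hypothetical Python equivalent of the `Switch`/`Join` expression in step 9, showing the value Azure AD computes for displayName:

```python
# Hypothetical re-implementation of the attribute-mapping expression above.
# Switch(...) returns PreferredNameData alone when PreferredPronoun is empty,
# and otherwise falls through to the Join(...) default branch.
def display_name(preferred_name: str, preferred_pronoun: str) -> str:
    if preferred_pronoun == "":
        return preferred_name
    return "".join([preferred_name, " (", preferred_pronoun, ")"])

print(display_name("Aaron Hall", "HE/HIM"))  # Aaron Hall (HE/HIM)
print(display_name("Aaron Hall", ""))        # Aaron Hall
```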
+## Next steps
+
+* [Learn how to configure Workday to Active Directory provisioning](../saas-apps/workday-inbound-tutorial.md)
+* [Learn how to configure write back to Workday](../saas-apps/workday-writeback-tutorial.md)
+* [Learn more about supported Workday Attributes for inbound provisioning](workday-attribute-reference.md)
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
This article describes how to onboard an Amazon Web Services (AWS) account on Pe
> [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md). -
-## View a training video on configuring and onboarding an AWS account
-
-To view a video on how to configure and onboard AWS accounts in Permissions Management, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
- ## Onboard an AWS account 1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
To add Permissions Management to your Azure AD tenant:
- You must have an Azure AD user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/). - You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you. -
-## View a training video on enabling Permissions Management in your Azure AD tenant
-
-To view a video on how to enable Permissions Management in your Azure AD tenant, select [Enable Permissions Management in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
- ## How to onboard an Azure subscription 1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
To enable Permissions Management in your organization:
> [!NOTE] > During public preview, Permissions Management doesn't perform a license check.
-## View a training video on enabling Permissions Management
--- To view a video on how to enable Permissions Management in your Azure AD tenant, select [Enable Permissions Management in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
-- To view a video on how to configure and onboard AWS accounts in Permissions Management, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
-- To view a video on how to configure and onboard GCP accounts in Permissions Management, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).

## How to enable Permissions Management on your Azure AD tenant 1. In your browser:
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
This article describes how to onboard a Google Cloud Platform (GCP) project on P
> [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
-## View a training video on configuring and onboarding a GCP account
-
-To view a video on how to configure and onboard GCP accounts in Permissions Management, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
-- ## Onboard a GCP project 1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Previously updated : 06/29/2022 Last updated : 07/06/2022
The sign-in frequency setting works with apps that have implemented OAuth2 or OI
- Dynamics CRM Online - Azure portal
-The sign-in frequency setting works with 3rd party SAML applications and apps that have implemented OAuth2 or OIDC protocols, as long as they don't drop their own cookies and are redirected back to Azure AD for authentication on regular basis.
+The sign-in frequency setting works with third-party SAML applications and apps that have implemented OAuth2 or OIDC protocols, as long as they don't drop their own cookies and are redirected back to Azure AD for authentication on a regular basis.
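As a sketch of how sign-in frequency is expressed as a session control, the following shows a hypothetical Conditional Access policy created through Microsoft Graph; the display name, group ID, and app targets are placeholders, and the schema should be verified against the Graph conditionalAccessPolicy documentation.

```python
# Hypothetical sketch: a Conditional Access policy with a sign-in frequency
# session control, created via Microsoft Graph. IDs and token are placeholders.
import requests

token = "<access-token>"  # needs Policy.ReadWrite.ConditionalAccess

policy = {
    "displayName": "Require sign-in every 4 hours (sketch)",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {"includeGroups": ["<group-id>"]},
        "applications": {"includeApplications": ["All"]},
    },
    "sessionControls": {
        "signInFrequency": {"value": 4, "type": "hours", "isEnabled": True}
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    json=policy, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)
```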
### User sign-in frequency and multifactor authentication
When administrators select **Every time**, it will require full reauthentication
> [!NOTE] > An early preview version included the option to prompt for Secondary authentication methods only at reauthentication. This option is no longer supported and should not be used.
+> [!WARNING]
+> Using require reauthentication every time with the sign-in risk grant control set to **No risk** isn't supported and will result in a poor user experience.
+ ## Persistence of browsing sessions A persistent browser session allows users to remain signed in after closing and reopening their browser window.
After administrators confirm your settings using [report-only mode](howto-condit
### Validation
-Use the What-If tool to simulate a login from the user to the target application and other conditions based on how you configured your policy. The authentication session management controls show up in the result of the tool.
+Use the What-If tool to simulate a sign-in from the user to the target application and other conditions based on how you configured your policy. The authentication session management controls show up in the result of the tool.
![Conditional Access What If tool results](media/howto-conditional-access-session-lifetime/conditional-access-what-if-tool-result.png)
active-directory Service Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/service-dependencies.md
Previously updated : 02/14/2022 Last updated : 07/06/2022
The below table lists some more service dependencies, where the client apps must
| | SharePoint | Late-bound |
| Outlook groups | Exchange | Early-bound |
| | SharePoint | Early-bound |
-| Power Apps | Microsoft Azure Management (portal and API) | Early-bound |
+| Power Apps | Microsoft Azure Management (portal and API) | Early-bound |
| | Windows Azure Active Directory | Early-bound |
| | SharePoint | Early-bound |
| | Exchange | Early-bound |
The below table lists some more service dependencies, where the client apps must
| | SharePoint | Early-bound |
| Microsoft To-Do | Exchange | Early-bound |
+## Troubleshooting service dependencies
+
+The Azure Active Directory sign-ins log is a valuable source of information when troubleshooting why and how a Conditional Access policy was applied in your environment. For more information about troubleshooting unexpected sign-in outcomes related to Conditional Access, see the article [Troubleshooting sign-in problems with Conditional Access](troubleshoot-conditional-access.md#service-dependencies).
+ ## Next steps To learn how to implement Conditional Access in your environment, see [Plan your Conditional Access deployment in Azure Active Directory](plan-conditional-access.md).
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Previously updated : 03/15/2022 Last updated : 07/06/2022
To find out which Conditional Access policy or policies applied and why do the f
### Policy details
-Selecting the ellipsis on the right side of the policy in a sign-in event brings up policy details. This gives administrators additional information about why a policy was successfully applied or not.
+Selecting the ellipsis on the right side of the policy in a sign-in event brings up policy details. This option gives administrators additional information about why a policy was successfully applied or not.
![Sign in event Conditional Access tab](./media/troubleshoot-conditional-access/image5.png)
Selecting the ellipsis on the right side of the policy in a sign-in event brings
The left side provides details collected at sign-in and the right side provides details of whether those details satisfy the requirements of the applied Conditional Access policies. Conditional Access policies only apply when all conditions are satisfied or not configured.
-If the information in the event isn't enough to understand the sign-in results or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md).
+If the information in the event isn't enough to understand the sign-in results, or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md).
If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the specific event you're concerned about.
If you need to submit a support incident, provide the request ID and time and da
| 53003 | BlockedByConditionalAccess | | 53004 | ProofUpBlockedDueToRisk |
+## Service dependencies
+
+In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources that are blocked by Conditional Access policy.
+
+To determine the service dependency, check the sign-ins log for the Application and Resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately, all the applications and resources should be similarly combined in the Conditional Access policy.
++ ## What to do if you're locked out of the Azure portal? If you're locked out of the Azure portal due to an incorrect setting in a Conditional Access policy:
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
To compute the assertion, you can use one of the many JWT libraries in the langu
| | | | `alg` | Should be **RS256** | | `typ` | Should be **JWT** |
-| `x5t` | Base64url-encoded SHA-1 thumbprint of the X.509 certificate thumbprint. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64url). |
+| `x5t` | Base64-encoded SHA-1 thumbprint of the X.509 certificate thumbprint. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64). |
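As a quick check of the example above, a few lines of standard-library Python reproduce the `x5t` value from the hex thumbprint:

```python
# Derive the x5t claim value from a hex SHA-1 certificate thumbprint.
import base64

def x5t_from_thumbprint(hex_thumbprint: str) -> str:
    return base64.b64encode(bytes.fromhex(hex_thumbprint)).decode("ascii")

print(x5t_from_thumbprint("84E05C1D98BCE3A5421D225B140B36E86A3D5534"))
# -> hOBcHZi846VCHSJbFAs26Go9VTQ=
```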
### Claims (payload)
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50001 | InvalidResource - The resource is disabled or doesn't exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you're trying to access. | | AADSTS50002 | NotAllowedTenant - Sign-in failed because of a restricted proxy access on the tenant. If it's your own tenant policy, you can change your restricted tenant settings to fix this issue. | | AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
+| AADSTS500022 | Access to '{tenant}' tenant is denied. AADSTS500022 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).|
| AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. To learn more, see the troubleshooting article for error [AADSTS50003](/troubleshoot/azure/active-directory/error-code-aadsts50003-cert-or-key-not-configured). If you still see issues, contact the app owner or an app admin. | | AADSTS50005 | DevicePolicyError - User tried to log in to a device from a platform that's currently not supported through Conditional Access policy. | | AADSTS50006 | InvalidSignature - Signature verification failed because of an invalid signature. |
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
In the following example, the `GraphBeta` section specifies these settings.
"AzureAd": { "Instance": "https://login.microsoftonline.com/", "ClientId": "[Client_id-of-web-api-eg-2ec40e65-ba09-4853-bcde-bcb60029e596]",
- "TenantId": "common"
-
- // To call an API
- "ClientSecret": "[Copy the client secret added to the app from the Azure portal]",
- "ClientCertificates": [
- ]
- },
- "GraphBeta": {
+ "TenantId": "common",
+
+ // To call an API
+ "ClientSecret": "[Copy the client secret added to the app from the Azure portal]",
+ "ClientCertificates": []
+ },
+ "GraphBeta": {
"BaseUrl": "https://graph.microsoft.com/beta", "Scopes": "user.read"
- }
+ }
} ```
Instead of a client secret, you can provide a client certificate. The following
"AzureAd": { "Instance": "https://login.microsoftonline.com/", "ClientId": "[Client_id-of-web-api-eg-2ec40e65-ba09-4853-bcde-bcb60029e596]",
- "TenantId": "common"
-
- // To call an API
- "ClientCertificates": [
+ "TenantId": "common",
+
+ // To call an API
+ "ClientCertificates": [
{ "SourceType": "KeyVault", "KeyVaultUrl": "https://msidentitywebsamples.vault.azure.net", "KeyVaultCertificateName": "MicrosoftIdentitySamplesCert" }
- ]
+ ]
}, "GraphBeta": { "BaseUrl": "https://graph.microsoft.com/beta",
using Microsoft.Identity.Web;
public class Startup {
- // ...
- public void ConfigureServices(IServiceCollection services)
- {
- // ...
- services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, Configuration.GetSection("AzureAd"))
- .EnableTokenAcquisitionToCallDownstreamApi()
- .AddInMemoryTokenCaches();
- // ...
- }
- // ...
+ // ...
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // ...
+ services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(Configuration, Configuration.GetSection("AzureAd"))
+ .EnableTokenAcquisitionToCallDownstreamApi()
+ .AddInMemoryTokenCaches();
+ // ...
+ }
+ // ...
} ```
using Microsoft.Identity.Web;
public class Startup {
- // ...
- public void ConfigureServices(IServiceCollection services)
- {
- // ...
- services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, Configuration.GetSection("AzureAd"))
- .EnableTokenAcquisitionToCallDownstreamApi()
- .AddMicrosoftGraph(Configuration.GetSection("GraphBeta"))
- .AddInMemoryTokenCaches();
- // ...
- }
- // ...
+ // ...
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // ...
+ services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(Configuration, Configuration.GetSection("AzureAd"))
+ .EnableTokenAcquisitionToCallDownstreamApi()
+ .AddMicrosoftGraph(Configuration.GetSection("GraphBeta"))
+ .AddInMemoryTokenCaches();
+ // ...
+ }
+ // ...
} ```
using Microsoft.Identity.Web;
public class Startup {
- // ...
- public void ConfigureServices(IServiceCollection services)
- {
- // ...
- services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, "AzureAd")
- .EnableTokenAcquisitionToCallDownstreamApi()
- .AddDownstreamWebApi("MyApi", Configuration.GetSection("GraphBeta"))
- .AddInMemoryTokenCaches();
- // ...
- }
- // ...
+ // ...
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // ...
+ services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(Configuration, "AzureAd")
+ .EnableTokenAcquisitionToCallDownstreamApi()
+ .AddDownstreamWebApi("MyApi", Configuration.GetSection("GraphBeta"))
+ .AddInMemoryTokenCaches();
+ // ...
+ }
+ // ...
} ```
active-directory Concept Azure Ad Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-register.md
# Azure AD registered devices
-The goal of Azure AD registered devices is to provide your users with support for bring your own device (BYOD) or mobile device scenarios. In these scenarios, a user can access your organizationΓÇÖs resources using a personal device.
+The goal of Azure AD registered devices (also known as Workplace joined devices) is to provide your users with support for bring your own device (BYOD) or mobile device scenarios. In these scenarios, a user can access your organization's resources using a personal device.
| Azure AD Registered | Description | | | |
active-directory Howto Hybrid Join Downlevel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-join-downlevel.md
If some of your domain-joined devices are Windows [downlevel devices](hybrid-azu
- Configure the local intranet settings for device registration - Install Microsoft Workplace Join for Windows downlevel computers
+- Configure AD FS (for federated domains) or Seamless SSO (for managed domains)
> [!NOTE] > Windows 7 support ended on January 14, 2020. For more information, [Support for Windows 7 has ended](https://support.microsoft.com/en-us/help/4057281/windows-7-support-ended-on-january-14-2020).
active-directory Road To The Cloud Establish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-establish.md
+
+ Title: Road to the cloud - Establish a footprint for moving identity and access management from AD to Azure AD
+description: Establish an Azure AD footprint as part of planning your migration of IAM from AD to Azure AD.
+documentationCenter: ''
+ Last updated : 06/03/2022
+# Establish an Azure AD footprint
+
+If you're using Microsoft Office 365, Exchange Online, or Teams, then you're already using Azure AD. If so, your next step is to establish more Azure AD capabilities.
+
+* Establish hybrid identity synchronization between AD and Azure AD using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect Cloud Sync](../cloud-sync/what-is-cloud-sync.md).
+
+* [Select authentication methods](../hybrid/choose-ad-authn.md). We strongly recommend password hash synchronization (PHS).
+
+* Secure your hybrid identity infrastructure by following [Secure your Azure AD identity infrastructure - Azure Active Directory](../../security/fundamentals/steps-secure-identity.md)
+
+## Optional tasks
+
+The following tasks aren't specific or mandatory to the transformation from AD to Azure AD, but they're recommended functions to incorporate into your environment. They're also recommended in the [Zero Trust](/security/zero-trust/) guidance.
+
+### Deploy Passwordless authentication
+
+In addition to the security benefits of [passwordless credentials](../authentication/concept-authentication-passwordless.md), going passwordless simplifies your environment because the management and registration experience is already native to the cloud. Azure AD provides different passwordless credentials that align with different use cases. Use the information in this document to plan your deployment: [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md)
+
+Once you roll out passwordless credentials to your users, consider reducing the use of password credentials. You can use the [reporting and Insights dashboard](../authentication/howto-authentication-methods-activity.md) to continue to drive use of passwordless credentials and reduce use of passwords in Azure AD.
+
+>[!IMPORTANT]
+>During your application discovery, you might find applications that have a dependency or assumptions around passwords. Users of these applications need to have access to their passwords until those applications are updated or migrated.
+
+### Configure hybrid Azure AD join for existing Windows clients
+
+You can configure hybrid Azure AD join for existing AD joined Windows clients to benefit from cloud-based security features such as [co-management](/mem/configmgr/comanage/overview), conditional access, and Windows Hello for Business. New devices should be Azure AD joined and not hybrid Azure AD joined.
+
+To learn more, see [Plan your hybrid Azure Active Directory join deployment](../devices/hybrid-azuread-join-plan.md).
+
+## Next steps
+
+[Introduction](road-to-the-cloud-introduction.md)
+
+[Cloud transformation posture](road-to-the-cloud-posture.md)
+
+[Implement a cloud-first approach](road-to-the-cloud-implement.md)
+
+[Transition to the cloud](road-to-the-cloud-migrate.md)
active-directory Road To The Cloud Implement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-implement.md
+
+ Title: Road to the cloud - Implementing a cloud-first approach when moving identity and access management from AD to Azure AD
+description: Implement a cloud-first approach as part of planning your migration of IAM from AD to Azure AD.
+documentationCenter: ''
+ Last updated : 06/03/2022
+# Implement a cloud-first approach
+
+This phase is mainly process- and policy-driven: stop, or limit as much as possible, the addition of new dependencies on AD, and implement a cloud-first approach for new IT solution demand.
+
+It's key at this point to identify the internal processes that would lead to adding new dependencies on AD. For example, most organizations would have a change management process that has to be followed before new scenarios/features/solutions are implemented. We strongly recommend making sure that these change approval processes are updated to include a step to evaluate whether the proposed change would add new dependencies on AD and request the evaluation of Azure AD alternatives when possible.
+
+## Users and groups
+
+You can enrich user attributes in Azure AD to make more user attributes available to applications and services. Examples of common scenarios that require rich user attributes include:
+
+* App provisioning - The data source of app provisioning is Azure AD, and the necessary user attributes must be available there.
+
+* Application authorization - Tokens issued by Azure AD can include claims generated from user attributes, so applications can make authorization decisions based on the claims in the token.
+
+* Group membership population and maintenance - Dynamic groups enable dynamic population of group membership based on user attributes such as department information.
+
+These two links provide guidance on making schema changes:
+
+* [Understand the Azure AD schema and custom expressions](../cloud-sync/concept-attributes.md)
+
+* [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md)
+
+These links provide additional information on this topic but are not specific to changing the schema:
+
+* [Use Azure AD schema extension attributes in claims - Microsoft identity platform](../develop/active-directory-schema-extensions.md)
+
+* [What are custom security attributes in Azure AD? (Preview) - Azure Active Directory](../fundamentals/custom-security-attributes-overview.md)
+
+* [Tutorial - Customize Azure Active Directory attribute mappings in Application Provisioning](../app-provisioning/customize-application-attributes.md)
+
+* [Provide optional claims to Azure AD apps - Microsoft identity platform](../develop/active-directory-optional-claims.md)
+
+* [Create or edit a dynamic group and get status - Azure AD](../enterprise-users/groups-create-rule.md)
+
+* Use dynamic groups for automated group management
+
+* Use self-service groups for user-initiated group management
+
+* For application access, consider using [scope provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) or [entitlement management](../governance/entitlement-management-overview.md)
+
+For more information on group types, see [Compare groups](/microsoft-365/admin/create-groups/compare-groups).
+
+* Use external identities for collaboration with other organizations - stop creating accounts of external users in on-premises directories
+
+You and your team might feel compelled to change your current employee provisioning to use cloud-only accounts at this stage. The effort is non-trivial and doesn't provide enough business value to justify it. We recommend you plan this transition for a different phase of your transformation.
+
+## Devices
+
+Client workstations are traditionally joined to AD and managed via group policy (GPO) and/or device management solutions such as Microsoft Endpoint Configuration Manager (MECM). Your teams will establish a new policy and process to prevent newly deployed workstations from being domain-joined going forward. Key points include:
+
+* Mandate [Azure AD join](../devices/concept-azure-ad-join.md) for new Windows client workstations to achieve "No more domain join"
+
+* Manage workstations from cloud by using Unified Endpoint Management (UEM) solutions such as [Intune](/mem/intune/fundamentals/what-is-intune)
+
+[Windows Autopilot](/mem/autopilot/windows-autopilot) is highly recommended to establish a streamlined onboarding and device provisioning, which can enforce these directives.
+
+For more information, see [Get started with cloud native Windows endpoints - Microsoft Endpoint Manager](/mem/cloud-native-windows-endpoints)
+
+## Applications
+
+Traditionally, application servers are often joined to an on-premises Active Directory domain so that they can utilize Windows Integrated Authentication (Kerberos or NTLM), directory queries using LDAP and server management using Group Policy or Microsoft Endpoint Configuration Manager (MECM).
+
+The organization has a process to evaluate Azure AD alternatives when considering new services/apps/infrastructure. Directives for a cloud-first approach to applications should be as follows (new on-premises/legacy applications should be a rare exception when no modern alternative exists):
+
+* Provide a recommendation to change procurement policy and application development policy to require modern protocols (OIDC/OAuth2 and SAML) and to authenticate using Azure AD. New apps should also support [Azure AD App Provisioning](../app-provisioning/what-is-hr-driven-provisioning.md) and have no dependency on LDAP queries. Exceptions require explicit review and approval.
+
+> [!IMPORTANT]
+> Depending on the anticipated demand for applications that require legacy protocols, you can choose to deploy [Azure AD Domain Services](../../active-directory-domain-services/overview.md) when more current alternatives aren't feasible.
+
+* Provide a recommendation to create a policy to prioritize use of cloud native alternatives. The policy should limit deployment of new application servers to the domain. Common cloud native scenarios to replace AD joined servers include:
+
+ * File servers
+
+ * SharePoint / OneDrive - Collaboration support across Microsoft 365 solutions and built-in governance, risk, security, and compliance.
+
+ * [Azure Files](../../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry standard SMB or NFS protocol. Customers can use native [Azure AD authentication to Azure Files](../../virtual-desktop/create-profile-container-azure-ad.md) over the internet without line of sight to a DC.
+
+ * Azure AD also works with third party applications in our [Application Gallery](/security/business/identity-access-management/integrated-apps-azure-ad)
+
+ * Print Servers
+
+ * Mandate to procure [Universal Print](/universal-print/) compatible printers - [Partner Integrations](/universal-print/fundamentals/universal-print-partner-integrations)
+
+ * Bridge with [Universal Print connector](/universal-print/fundamentals/universal-print-connector-overview) for non-compatible printers
+
+## Next steps
+
+[Introduction](road-to-the-cloud-introduction.md)
+
+[Cloud transformation posture](road-to-the-cloud-posture.md)
+
+[Establish an Azure AD footprint](road-to-the-cloud-establish.md)
+
+[Transition to the cloud](road-to-the-cloud-migrate.md)
active-directory Road To The Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-introduction.md
+
+ Title: Road to the cloud - Introduction to moving identity and access management from AD to Azure AD
+description: Introduction to planning your migration of IAM from AD to Azure AD.
+documentationCenter: ''
+ Last updated : 06/03/2022
+# Introduction
+
+This content provides guidance to move:
+
+ * **From** - Active Directory (AD) and other non-cloud based services, either hosted on-premises or Infrastructure-as-a-Service (IaaS), that provide identity management (IDM), identity and access management (IAM) and device management.
+
+* **To** - Azure Active Directory (Azure AD) and other Microsoft cloud native solutions for identity management (IDM), identity and access management (IAM), and device management.
+
+>[!NOTE]
+> In this content, when we refer to AD, we are referring to Windows Server Active Directory Domain Services.
+
+Some organizations set goals to remove AD, and their on-premises IT footprint. Others set goals to take advantage of some cloud-based capabilities, but not to completely remove their on-premises or IaaS environments. Transformation must be aligned with and achieve business objectives including increased productivity, reduced costs and complexity, and improved security posture. To better understand the costs vs. value of moving to the cloud, see [Forrester TEI for Microsoft Azure Active Directory](https://www.microsoft.com/security/business/forrester-tei-study) and other TEI reports and [Cloud economics](https://azure.microsoft.com/overview/cloud-economics/).
+
+## Next steps
+
+[Cloud transformation posture](road-to-the-cloud-posture.md)
+
+[Establish an Azure AD footprint](road-to-the-cloud-establish.md)
+
+[Implement a cloud-first approach](road-to-the-cloud-implement.md)
+
+[Transition to the cloud](road-to-the-cloud-migrate.md)
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
+
+ Title: Road to the cloud - Moving identity and access management from AD to Azure AD migration workstream
+description: Learn to plan your migration workstream of IAM from AD to Azure AD.
+documentationCenter: ''
+++++ Last updated : 06/03/2022++++
+# Transition to the cloud
+
+After aligning the organization towards halting growth of the AD footprint, you can focus on moving the existing on-premises workloads to Azure AD. This section describes the various migration workstreams. You can execute the workstreams in this section based on your priorities and resources.
+
+A typical migration workstream has the following stages:
+
+* **Discover**: find out what you currently have in your environment
+
+* **Pilot**: deploy new cloud capabilities to a small subset (of users, applications, or devices, depending on the workstream)
+
+* **Scale Out**: expand the pilot out to complete the transition of a capability to the cloud
+
+* **Cut-over (when applicable)**: stop using the old on-premises workload
+
+## Users and groups
+
+### Move password self-service
+
+We recommend a [passwordless environment](../authentication/concept-authentication-passwordless.md). Until then, you can migrate password self-service workflows from on-premises systems to Azure AD to simplify your environment. Azure AD [self-service password reset (SSPR)](../authentication/concept-sspr-howitworks.md) gives users the ability to change or reset their password, with no administrator or help desk involvement.
+
+To enable self-service capabilities, your authentication methods must be updated to a [level that's supported by self-service capabilities](../authentication/tutorial-enable-sspr.md). Once authentication methods are updated, you can enable user self-service password capability for your Azure AD authentication environment.
+
+### To evaluate and pilot SSPR
+
+* Enable [combined registration (multi-factor authentication (MFA) +SSPR)](../authentication/concept-registration-mfa-sspr-combined.md) for a target group of users
+
+* Deploy [SSPR](../authentication/tutorial-enable-sspr.md) for a target group of users
+
+* For that group of users with Azure AD and hybrid Azure AD joined devices (Windows 7, 8, 8.1, and 10), enable [Windows password reset](../authentication/howto-sspr-windows.md).
+
+* Deploy [Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md) in a subset of DCs with *Audit Mode* to gather information about the impact of modern policies. For more guidance, see [Enable on-premises Azure Active Directory Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md).
+
+### To scale out
+
+Gradually register and enable SSPR. For example, roll out by region, subsidiary, or department for all users. This enables both MFA and SSPR. Refer to [Sample SSPR rollout materials](/download/details.aspx?id=56768) for help with required end-user communications and evangelizing.
+
+**Key points:**
+
+* Use Azure AD password policies on the domain.
+
+* Go through a cycle of password change for all users to flush out weak passwords.
+
+* Once the cycle is complete, implement the policy expiration time.
+
+* Enable Windows 10 password reset ([Self-service password reset for Windows devices - Azure Active Directory](../authentication/howto-sspr-windows.md)) for all users. For Windows down-level devices, follow [these instructions](../authentication/howto-sspr-windows.md).
+
+* Add monitoring information, like workbooks, for reset activity ([Self-service password reset reports - Azure Active Directory](../authentication/howto-sspr-reporting.md)) and authentication methods insights and reporting ([Authentication Methods Activity - Azure Active Directory](../authentication/howto-authentication-methods-activity.md))
+
+* Switch the "Password Protection" configuration in the DCs that have "Audit Mode" set to "Enforced mode" ([Enable on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md))
+
+* For customers with Azure AD Identity Protection, enable [password reset as a control in Conditional Access policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) for risky users (users marked as risky through Identity Protection). See [Investigate risk with Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-investigate-risk.md)
+
+### Move groups management
+
+To transform groups and distribution lists:
+
+* For security groups, migrate your existing business logic that assigns users to security groups to Azure AD dynamic groups.
+
+* For self-managed group capabilities provided by Microsoft Identity Manager (MIM), replace the capability with self-service group management.
+
+* [Conversion of legacy distribution lists to Microsoft 365 groups](/microsoft-365/admin/manage/upgrade-distribution-lists) - You can upgrade distribution lists to Microsoft 365 groups in Outlook. This is a great way to give your organization's distribution lists all the features and functionality of Microsoft 365 groups.
+
+* Upgrade your [distribution lists to Microsoft 365 groups in Outlook](https://support.microsoft.com/office/7fb3d880-593b-4909-aafa-950dd50ce188) and [decommission your on-premises Exchange server](/exchange/decommission-on-premises-exchange).
+
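As an illustration of the first bullet, the following hedged sketch creates a dynamic security group through Microsoft Graph; the group name and membership rule are hypothetical, and the call assumes a token with Group.ReadWrite.All.

```python
# Hypothetical sketch: create an Azure AD dynamic security group via
# Microsoft Graph, mirroring department-based on-premises group logic.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # acquired via MSAL, for example

group = {
    "displayName": "Sales-Dynamic",
    "mailEnabled": False,
    "mailNickname": "salesdynamic",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": '(user.department -eq "Sales")',
    "membershipRuleProcessingState": "On",
}

resp = requests.post(f"{GRAPH}/groups", json=group,
                     headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["id"])  # object ID of the new group
```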
+### Move application provisioning
+
+This workstream will help you to simplify your environment by removing application provisioning flows from on-premises IDM systems such as Microsoft Identity Manager. Based on your application discovery, categorize your applications as follows:
+
+* Applications in your environment that have a provisioning integration with the [Azure AD Application Gallery](https://www.microsoft.com/security/business/identity-access-management/integrated-apps-azure-ad)
+
+* Applications that aren't in the gallery but support the SCIM 2.0 protocol are natively compatible with the Azure AD cloud provisioning service.
+
+* On-premises applications that have an ECMA connector available can be integrated with [Azure AD on-premises application provisioning](../app-provisioning/on-premises-application-provisioning-architecture.md)
+
+For more information, see [Plan an automatic user provisioning deployment for Azure Active Directory](../app-provisioning/plan-auto-user-provisioning.md)
+
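To make the SCIM compatibility point concrete, here's a minimal sketch of the kind of SCIM 2.0 user-creation request a provisioning service sends to a compatible app; the endpoint, token, and user values are placeholders.

```python
# Illustrative sketch of a SCIM 2.0 /Users create request, the shape of call
# the Azure AD provisioning service issues to a SCIM-compatible app.
import requests

SCIM_BASE = "https://app.example.com/scim/v2"  # placeholder endpoint
token = "<bearer-token>"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "aaron.hall@contoso.com",
    "name": {"givenName": "Aaron", "familyName": "Hall"},
    "active": True,
}

resp = requests.post(f"{SCIM_BASE}/Users", json=user,
                     headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.json().get("id"))
```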
+### Move to Cloud HR provisioning
+
+This workstream will reduce your on-premises footprint by moving the HR provisioning workflows from on-premises identity management (IDM) systems such as Microsoft Identity Manager (MIM) to Azure AD. Azure AD cloud HR provisioning can provision hybrid accounts or cloud-only accounts.
+
+* For new employees who are exclusively using applications that use Azure AD, you can choose to provision cloud-only accounts, which in turn helps you to contain the footprint of AD.
+
+* For new employees who need access to applications that have dependency on AD, you can provision hybrid accounts
+
+Azure AD Cloud HR provisioning can also manage AD accounts for existing employees. For more information, see: [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) and, specifically, [Plan the deployment project](../app-provisioning/plan-auto-user-provisioning.md).
+
+### Move external identity management
+
+If your organization provisions accounts in AD or other on-premises directories for external identities such as vendors, contractors, and consultants, you can simplify your environment by managing those third-party (3P) user objects natively in the cloud.
+
+* For new external users, use [Azure AD External Identities](../external-identities/external-identities-overview.md), which stops the growth of the AD footprint for these users.
+
+* For existing AD accounts that you provision for external identities, you can remove the overhead of managing local credentials (for example, passwords) by configuring them for B2B collaboration using the steps here: [Invite internal users to B2B collaboration](../external-identities/invite-internal-users.md).
+
+* Use [Azure AD Entitlement Management](../governance/entitlement-management-overview.md) to grant access to applications and resources. Most companies have dedicated on-premises systems and workflows for this purpose that you can now retire.
+
+* Use [Access Reviews](../governance/access-reviews-external-users.md) to remove access rights and/or external identities that are no longer needed.
+
+## Devices
+
+### Move Non-Windows OS workstations
+
+Non-Windows workstations can be integrated with Azure AD to enhance user experience and benefit from cloud-based security features such as conditional access.
+
+* macOS
+
+ * [Set up enrollment for macOS devices - Microsoft Intune](/mem/intune/enrollment/macos-enroll)
+
+ * Deploy [Microsoft Enterprise SSO plug-in for Apple devices - Microsoft identity platform | Azure](../develop/apple-sso-plugin.md)
+
+* Linux
+
+ * Consider Linux on Azure VM where possible
+
+ * [Sign in to a Linux VM with Azure Active Directory credentials - Azure Virtual Machines](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md)
+
+### Replace other Windows versions used as workstations
+
+If you have the following versions of Windows, consider replacing them with the latest Windows client version to benefit from cloud-native management (Azure AD join and UEM):
+
+* Windows 7 or 8.x
+
+* Windows Server OS used as a workstation
+
+### Virtual desktop infrastructure (VDI) solution
+
+This project has two primary initiatives. The first is to plan and implement a VDI environment for new deployments that isn't AD-dependent. The second is to plan a transition path for existing deployments that have AD-dependency.
+
+* **New deployments** - Deploy a cloud-managed VDI solution such as Windows 365 or Azure Virtual Desktop (AVD) that doesn't require on-premises AD.
+
+* **Existing deployments** - If your existing VDI deployment is dependent on AD, use business objectives and goals to determine whether you maintain the solution or migrate it to Azure AD.
+
+For more information, see:
+
+* [Deploy Azure AD joined VMs in Azure Virtual Desktop - Azure](/virtual-desktop/deploy-azure-ad-joined-vm)
+
+* [Windows 365 planning guide](/windows-365/enterprise/planning-guide)
+
+## Applications
+
+To help maintain a secure environment, Azure AD supports modern authentication protocols. To transition application authentication from AD to Azure AD, you must:
+
+* Determine which applications can migrate to Azure AD with no modification
+
+* Determine which applications have an upgrade path that enables you to migrate with an upgrade
+
+* Determine which applications require replacement or significant code changes to migrate
+
+The outcome of your application discovery initiative is to create a prioritized list used to migrate your application portfolio. The list also contains applications that:
+
+* Require an upgrade or update to the software - there's an upgrade path available
+
+* Require an upgrade or update to the software - there isn't an upgrade path available
+
+Using the list, you can further evaluate the applications that don't have an existing upgrade path.
+
+* Determine whether business value warrants updating the software or if it should be retired.
+
+* If retired, is a replacement needed or is the application no longer needed?
+
+Based on the results, you might redesign various aspects of your transformation from AD to Azure AD. While there are approaches you can use to extend on-premises AD to Azure IaaS (lift-and-shift) for applications with unsupported authentication protocols, we recommend you set a policy that requires an explicit exception to use this approach.
+
+### Application discovery
+
+Once you've segmented your app portfolio, you can prioritize migration based on business value and business priority. The following are types of applications you might use to categorize your portfolio, and some tools you can use to discover certain apps in your environment.
+
+When you think about application types, there are three main ways to categorize your apps:
+
+* **Modern authentication apps**: These applications use modern authentication protocols such as OIDC, OAuth2, SAML, or WS-Federation, often with a federation service such as AD FS.
+
+* **Web Access Management (WAM) tools**: These applications use headers, cookies, and similar techniques for SSO. These apps typically require a WAM identity provider such as Symantec SiteMinder.
+
+* **Legacy apps**: These applications use legacy protocols such as Kerberos, LDAP, RADIUS, Remote Desktop, and NTLM (not recommended).
+
+Azure AD can be used with each type of application, providing different levels of functionality that result in different migration strategies, complexity, and trade-offs. Some organizations have an application inventory that can be used as a discovery baseline (it's common for this inventory to be incomplete or outdated). You can use the following tools to create or refresh your inventory:
+
+To discover modern authentication apps:
+
+* If you're using AD FS, use the [AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md)
+
+* If you're using a different identity provider, you can use the logs and configuration.
+
+The following tools can help you to discover applications that use LDAP.
+
+* [Event1644Reader](/troubleshoot/windows-server/identity/event1644reader-analyze-ldap-query-performance): Sample tool for collecting data on LDAP queries made to domain controllers using Field Engineering logs.
+
+* [Microsoft Defender for Identity](/ATPDocs/monitored-activities.md): Utilize the sign-in operations monitoring capability (note: it captures binds using LDAP, but not Secure LDAP).
+
+* [PSLDAPQueryLogging](https://github.com/RamblingCookieMonster/PSLDAPQueryLogging): GitHub tool for reporting on LDAP queries.
+
+### Migrate AD FS / federation services
+
+When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and OpenID Connect) first. These apps can be reconfigured to authenticate with Azure AD either via a built-in connector from the Azure App Gallery, or by registering the application in Azure AD. To get started, see:
+
+* [Move application authentication to Azure Active Directory](../manage-apps/migrate-adfs-apps-to-azure.md)
+
+Once you have moved SaaS applications that were federated to Azure AD, there are a few steps to decommission the on-premises federation system. Verify you have completed migration of:
+
+* [Migrate from Azure AD Multi-Factor Authentication Server to Azure multi-factor authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md)
+
+* [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)
+
+* If you're using Web Application Proxy, [Move Remote Access to internal applications](#move-remote-access-to-internal-applications)
+
+>[!IMPORTANT]
+>If you are using other features, such as remote access, verify those services are relocated prior to decommissioning AD federated services.
+
+### Move WAM authentication apps
+
+This project focuses on migrating SSO capability from Web Access Management systems (such as Symantec SiteMinder) to Azure AD. To learn more, see [Migrating applications from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/resources/migrating-applications-from-symantec-siteminder-to-azure-active-directory/)
+
+### Define Application Server Management strategy
+
+In terms of infrastructure management, on-premises (using AD) environments often use a combination of group policy objects (GPOs) and Microsoft Endpoint Configuration Manager (MECM) features to segment management duties. For example, security policy management, update management, config management, and monitoring.
+
+Since AD was designed and built for on-premises IT environments and Azure AD was built for cloud-based IT environments, one-to-one parity of features isn't present here. Therefore, application servers can be managed in several different ways. For example, Azure Arc helps bring many of the features that exist in AD together into a single view when Azure AD is used for IAM. Azure AD DS can also be used to domain-join servers, especially those where it's desirable to use GPOs for specific business or technical reasons.
+
+Use the following table to determine what Azure-based tools you use to replace the on-premises or AD-based environment:
+
+| Management area | On-premises (AD) feature | Equivalent Azure AD feature |
+| - | - | -|
+| Security policy management| GPO, MECM| [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) |
+| Update management| MECM, WSUS| [Azure Automation Update Management](../../automation/update-management/overview.md) |
+| Configuration management| GPO, MECM| [Azure Automation State Configuration](../../automation/automation-dsc-overview.md) |
+| Monitoring| System Center Operations Manager| [Azure Monitor Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) |
+
+More tools and notes:
+
+* [Azure Arc](https://azure.microsoft.com/services/azure-arc/) extends the Azure features above to non-Azure VMs, for example Windows Server running on-premises or on AWS.
+
+* [Manage and secure your Azure VM environment](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/).
+
+* If you must wait to migrate or perform a partial migration, GPO can be used with [Azure AD Domain Services (Azure AD DS)](https://azure.microsoft.com/services/active-directory-ds/)
+
+If you require management of application servers with Microsoft Endpoint Configuration Manager (MECM), you can't achieve this using Azure AD DS. MECM isn't supported to run in an Azure AD DS environment. Instead, you'll need to extend your on-premises AD to a Domain Controller (DC) running on an Azure VM or deploy a new Active Directory (AD) to an Azure IaaS vNet.
+
+### Define Legacy Application migration strategy
+
+Legacy applications have different areas of dependencies to AD:
+
+* User Authentication and Authorization: Kerberos, NTLM, LDAP Bind, ACLs
+
+* Access to Directory Data: LDAP queries, schema extensions, read/write of directory objects
+
+* Server Management: As determined by the [server management strategy](#define-application-server-management-strategy)
+
+To reduce or eliminate the dependencies above, there are three main approaches, listed below in order of preference:
+
+**Approach 1** Replace with SaaS alternatives that use modern authentication. In this approach, undertake projects to migrate from legacy applications to SaaS alternatives that use modern authentication. Have the SaaS alternatives authenticate to Azure AD directly.
+
+**Approach 2** Replatform (for example, adopt serverless/PaaS) to support modern hosting without servers and/or update the code to support modern authentication. In this approach, undertake projects to update authentication code for applications that will be modernized or replatform on serverless/PaaS to eliminate the need for underlying server management. Enable the app to use modern authentication and integrate to Azure AD directly. [Learn about MSAL - Microsoft identity platform](../develop/msal-overview.md).
+
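+As a hedged illustration of Approach 2, the following sketch acquires an Azure AD token with the MSAL.PS PowerShell module. The module choice, client ID, tenant, and scope are assumptions for illustration; a modernized app would typically use the MSAL library for its own language instead.
+
+```powershell
+# Minimal sketch: acquire an Azure AD access token via MSAL.
+# Requires the MSAL.PS module: Install-Module MSAL.PS
+Import-Module MSAL.PS
+
+# Placeholder values - substitute your app registration and tenant.
+$token = Get-MsalToken -ClientId '00000000-0000-0000-0000-000000000000' `
+                       -TenantId 'contoso.onmicrosoft.com' `
+                       -Scopes 'User.Read' `
+                       -Interactive
+
+# Present the bearer token to the protected API.
+$headers = @{ Authorization = "Bearer $($token.AccessToken)" }
+Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/me' -Headers $headers
+```
+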
+**Approach 3** Leave the applications as legacy applications for the foreseeable future, or sunset them when the opportunity arises. We recommend this approach only as a last resort.
+
+Based on the app dependencies, you have three migration options:
+
+#### Migration option #1
+
+* Utilize Azure AD Domain Services if the dependencies are aligned with [Common deployment scenarios for Azure AD Domain Services](../../active-directory-domain-services/scenarios.md).
+
+* To validate whether Azure AD DS is a good fit, you might use tools like [Service Map in the Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [Automatic Dependency Mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867).
+
+* Validate that your SQL Server instances can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service runs in virtual machines, [use this guidance](/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
+
+##### Option 1 steps
+
+1. Deploy Azure AD Domain Services into an Azure virtual network (a deployment sketch follows these steps)
+
+2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to Azure AD Domain Services
+
+3. Publish legacy apps to the cloud using Azure AD App Proxy or a [Secure Hybrid Access](../manage-apps/secure-hybrid-access.md) partner
+
+4. As legacy apps retire through attrition, eventually decommission Azure AD Domain Services running in the Azure virtual network
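+
+A hedged sketch of step 1 using the generic Az PowerShell resource cmdlet follows. The property shape for Microsoft.AAD/DomainServices shown here is an assumption based on the ARM schema; for production, prefer the portal or the documented deployment templates.
+
+```powershell
+# Sketch: create an Azure AD DS managed domain with the generic ARM cmdlet.
+# Assumes the virtual network and a dedicated subnet already exist.
+$rg     = 'rg-aadds'           # placeholder resource group
+$domain = 'aadds.contoso.com'  # placeholder managed domain name
+$subnet = '/subscriptions/<sub-id>/resourceGroups/rg-aadds/providers/Microsoft.Network/virtualNetworks/vnet-aadds/subnets/aadds-subnet'
+
+New-AzResource -ResourceGroupName $rg `
+    -ResourceType 'Microsoft.AAD/DomainServices' `
+    -ResourceName $domain `
+    -Location 'eastus' `
+    -Properties @{
+        domainName  = $domain
+        replicaSets = @(@{ location = 'eastus'; subnetId = $subnet })
+    } -Force
+```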
+
+#### Migration option #2
+
+Extend on-premises AD to Azure IaaS. Use this option if migration option #1 isn't possible and an application has a strong dependency on AD.
+
+##### Option 2 steps
+
+1. Connect an Azure virtual network to the on-premises network via VPN or ExpressRoute
+
+2. Deploy new Domain Controllers for the on-premises AD as virtual machines into the Azure virtual network (a promotion sketch follows these steps)
+
+3. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined
+
+4. Publish legacy apps to the cloud using Azure AD App Proxy or a [Secure Hybrid Access](../manage-apps/secure-hybrid-access.md) partner
+
+5. Eventually, decommission the on-premises AD infrastructure and run the Active Directory in the Azure virtual network entirely
+
+6. As legacy apps retire through attrition, eventually decommission the Active Directory running in the Azure virtual network
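+
+A hedged sketch of step 2 follows: promoting a domain-joined Azure VM to an additional domain controller for the existing on-premises forest. All names are placeholders, and the cmdlets come from the standard ADDSDeployment module.
+
+```powershell
+# Sketch: promote an Azure IaaS VM to an additional DC for the existing forest.
+Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
+
+Import-Module ADDSDeployment
+$params = @{
+    DomainName                    = 'corp.contoso.com'  # existing on-premises domain
+    InstallDns                    = $true
+    Credential                    = (Get-Credential)    # domain admin credentials
+    SafeModeAdministratorPassword = (Read-Host -AsSecureString 'DSRM password')
+}
+Install-ADDSDomainController @params
+```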
+
+#### Migration option #3
+
+Deploy a new AD to Azure IaaS. Use this option if migration option #1 isn't possible and an application has a strong dependency on AD. This approach enables you to decouple the app from the existing AD to reduce surface area.
+
+##### Option 3 steps
+
+1. Deploy a new Active Directory as virtual machines into an Azure virtual network (a sketch follows these steps)
+
+2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to the new Active Directory
+
+3. Publish legacy apps to the cloud using Azure AD App Proxy or a [Secure Hybrid Access](../manage-apps/secure-hybrid-access.md) partner
+
+4. As legacy apps retire through attrition, eventually decommission the Active Directory running in the Azure virtual network
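+
+A hedged sketch of step 1 follows: standing up the new, independent forest on a VM in the Azure virtual network. Names are placeholders.
+
+```powershell
+# Sketch: create a brand-new forest, decoupled from the existing AD.
+Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
+
+Import-Module ADDSDeployment
+$params = @{
+    DomainName                    = 'azure.contoso.com'  # new, independent domain
+    DomainNetbiosName             = 'AZURECONTOSO'
+    InstallDns                    = $true
+    SafeModeAdministratorPassword = (Read-Host -AsSecureString 'DSRM password')
+}
+Install-ADDSForest @params
+```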
+
+#### Comparison of strategies
+
+| Strategy | Option 1: Azure AD Domain Services | Option 2: Extend AD to IaaS | Option 3: Independent AD in IaaS |
+| - | - | - | - |
+| De-coupled from on-premises AD| Yes| No| Yes |
+| Allows Schema Extensions| No| Yes| Yes |
+| Full administrative control| No| Yes| Yes |
+| Potential reconfiguration of apps required (ACLs, authorization etc.)| Yes| No| Yes |
+
+### Move Virtual private network (VPN) authentication
+
+This project focuses on moving your VPN authentication to Azure AD. It's important to know that there are different configurations available for VPN gateway connections. You need to determine which configuration best fits your needs. For more information on designing a solution, see [VPN Gateway design](../../vpn-gateway/design.md). Some key points about usage of Azure AD for VPN authentication:
+
+* Check if your VPN providers support modern authentication. For example:
+
+* [Tutorial: Azure Active Directory single sign-on (SSO) integration with Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)
+
+* [Tutorial: Azure Active Directory single sign-on (SSO) integration with Palo Alto Networks - GlobalProtect](../saas-apps/palo-alto-networks-globalprotect-tutorial.md)
+
+* For Windows 10 devices, consider integrating [Azure AD support into the built-in VPN client](/windows-server/remote/remote-access/vpn/ad-ca-vpn-connectivity-windows10)
+
+* After evaluating this scenario, you can implement a solution to remove the dependency on on-premises infrastructure for VPN authentication
+
+### Move Remote Access to internal applications
+
+To simplify your environment, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) or [Secure hybrid access partners](../manage-apps/secure-hybrid-access.md) to provide remote access. This will allow you to remove the dependency on on-premises reverse proxy solutions.
+
+It's important to call out that enabling remote access to an application using the aforementioned technologies is an interim step; more work is needed to completely decouple the application from AD.
+
+Azure AD Domain Services allows you to migrate application servers to cloud IaaS and decouple them from AD, while using Azure AD App Proxy to enable remote access. To learn more about this scenario, see [Deploy Azure AD Application Proxy for Azure AD Domain Services](../../active-directory-domain-services/deploy-azure-app-proxy.md).
+
+## Next steps
+
+[Introduction](road-to-the-cloud-introduction.md)
+
+[Cloud transformation posture](road-to-the-cloud-posture.md)
+
+[Establish an Azure AD footprint](road-to-the-cloud-establish.md)
+
+[Implement a cloud-first approach](road-to-the-cloud-implement.md)
active-directory Road To The Cloud Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md
+
+ Title: Road to the cloud - Determine cloud transformation posture when moving identity and access management from AD to Azure AD
+description: Determine cloud transformation posture when planning your migration of IAM from AD to Azure AD.
+documentationCenter: ''
+++++ Last updated : 06/03/2022++++
+# Cloud transformation posture
+
+Active Directory (AD), Azure Active Directory (Azure AD), and other Microsoft tools are at the core of identity and access management. For example, device management in AD is provided by Active Directory Domain Services (AD DS) and Microsoft Endpoint Configuration Manager (MECM). In the cloud, the same capability is provided by Azure AD and Intune.
+
+As part of most modernization, migration, or zero-trust initiatives, identity and access management (IAM) activities are shifted from using on-premises or Infrastructure-as-a-Service (IaaS) solutions to using built-for-the-cloud solutions. For an IT environment with Microsoft products and services, Active Directory (AD) and Azure Active Directory (Azure AD) play a role.
+
+Many companies migrating from Active Directory (AD) to Azure Active Directory (Azure AD) start with an environment similar to the following diagram. The diagram also overlays three pillars:
+
+* **Applications**: This pillar includes the applications, resources, and their underlying domain-joined servers.
+
+* **Devices**: This pillar focuses on domain-joined client devices.
+
+* **Users and Groups**: This pillar represents the human and non-human identities and attributes that access resources from different devices as specified.
++
+Microsoft has modeled five states of transformation that commonly align with the business goals of our customers. As the goals of customers mature, it's typical for them to shift from one state to the next at a pace that suits their resourcing and culture. This approach closely follows [Active Directory in Transition: Gartner Survey | Results and Analysis](https://www.gartner.com/en/documents/4006741).
+
+The five states have exit criteria to help you determine where your environment resides today. Some projects, such as application migration, span all five states, while others span a single state.
+
+The content then provides more detailed guidance, organized to help you make intentional changes to people, process, and technology, to:
+
+* Establish Azure AD capabilities
+
+* Implement a cloud-first approach
+
+* Start to migrate out of your AD environment
+
+Guidance is organized by user management, device management, and application management, per the pillars above.
+
+Organizations that are formed in Azure AD rather than in AD don't have the legacy on-premises environment that more established organizations must contend with. For them, or customers that are completely recreating their IT environment in the cloud, becoming 100% cloud-centric can be accomplished as the new IT environment is established.
+
+For customers with established on-premises IT capability, the transformation process introduces complexity that requires careful planning. Additionally, since AD and Azure AD are separate products targeted at different IT environments, there aren't like-for-like features. For example, Azure AD doesn't have the notion of AD domain and forest trusts.
+
+## Five States of transformation
+
+In enterprise-sized organizations, IAM transformation, or even transformation from AD to Azure AD is typically a multi-year effort with multiple states. You analyze your environment to determine your current state, and then set a goal for your next state. Your goal might remove the need for AD entirely, or you might decide not to migrate some capability to Azure AD and leave it in place. The states are meant to logically group initiatives into projects towards completing a transformation. During the state transitions, interim solutions are put in place. The interim solutions enable the IT environment to support IAM operations in both AD and Azure AD. The interim solutions must also enable the two environments to interoperate. The following diagram shows the five states:
+
+[ ![Diagram that shows five elements, each depicting a possible network architecture. Options include cloud attached, hybrid, cloud first, AD minimized, and 100% cloud.](media/road-to-cloud-posture/road-to-the-cloud-five-states.png) ](media/road-to-cloud-posture/road-to-the-cloud-five-states.png#lightbox)
+
+**State 1 Cloud attached** - In this state, organizations have created an Azure AD tenant to enable user productivity and collaboration tools, and the tenant is fully operational. Most companies that use Microsoft products and services in their IT environment are already in or beyond this state. In this state, operational costs may be higher because there's both an on-premises environment and a cloud environment to maintain and make interoperable. Also, people must have expertise in both environments to support their users and the organization. In this state:
+
+* Devices are joined to AD and managed using group policy and/or on-premises device management tools.
+* Users are managed in AD, provisioned via on-premises IDM systems, and synchronized to Azure AD with Azure AD Connect.
+* Apps are authenticated by AD, federation servers like AD FS, Web Access Management (WAM) tools, Microsoft 365, or other tools such as SiteMinder and Oracle Access Manager (OAM).
+
+**State 2 Hybrid** - In this state, the organizations start to enhance their on-premises environment through cloud capabilities. The solutions can be planned to reduce complexity, increase security posture, and reduce the footprint of the on-premises environment. During transition and operating in this state, organizations grow the skills and expertise using Azure AD for IAM solutions. Since user accounts and device attachments are relatively easy and a common part of day-to-day IT operations, this is the approach most organizations have used. In this state:
+
+* Windows clients are hybrid Azure AD joined.
+
+* Non-Microsoft SaaS-based apps start being integrated with Azure AD, for example Salesforce and ServiceNow.
+
+* Legacy apps are authenticating to Azure AD via App Proxy or secure hybrid access partner solutions.
+
+* Self-service password reset (SSPR) and password protection for users are enabled.
+
+* Some legacy apps are authenticated in the cloud using Azure AD Domain Services (Azure AD DS) and App Proxy.
+
+**State 3 Cloud first** - In this state, the teams across the organization build a track record of success and start planning to move more challenging workloads to Azure AD. Organizations typically spend the most time in this state of transformation. Complexity and the number of workloads grow over time, so the longer an organization has used Active Directory (AD), the greater the effort and number of initiatives needed to shift to the cloud. In this state:
+
+* New Windows clients are joined to Azure AD and are managed with Intune.
+* ECMA connectors are used to provision users and groups for on-premises apps.
+* All apps that were previously using an AD DS-integrated federated identity provider, such as Active Directory Federation Services (AD FS), are updated to use Azure AD for authentication. And, if you were using password-based authentication via that identity provider for Azure AD, that is migrated to Password Hash Synchronization (PHS).
+* Plans to shift file and print services to Azure AD are being developed.
+* Collaboration capability is provided by Azure AD B2B.
+* New groups are created and managed in Azure AD.
+
+**State 4 AD minimized** - Most IAM capability is provided by Azure AD while edge cases and exceptions continue to utilize on-premises AD. This state is more difficult to achieve, especially for larger organizations with significant on-premises technical debt. Azure AD continues to evolve as your organization's transformation matures, bringing new features and tools that you can utilize. Organizations are required to deprecate capabilities or build new ones to provide replacements. In this state:
+
+* New users provisioned using the HR provisioning capability are created directly in Azure AD.
+
+* A plan to move apps that depend on AD and are part of the vision for the future state Azure AD environment is being executed. A plan to replace services that won't move (file, print, fax services) is in place.
+
+* On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, and Cloud Print. SQL Server is replaced by Azure SQL Managed Instance (SQL MI). Workloads that rely on Kerberos are being migrated to Azure AD Kerberos.
+
+**State 5 100% cloud** - In this state, IAM capability is all provided by Azure AD and other Azure tools. This is the long-term aspiration for many organizations. In this state:
+
+* No on-premises IAM footprint required.
+
+* All devices are managed in Azure AD and a cloud solution such as Intune.
+
+* User identity lifecycle is managed using Azure AD.
+
+* All users and groups are cloud native.
+
+* Network services that rely on AD are relocated.
+
+The transformation between the states is similar to moving locations:
+
+* **Establish new location** - You purchase your destination and establish connectivity between the current location and the new location. This enables you to maintain your productivity and ability to operate. In this content, the activities are described in **[Establish Azure AD capabilities](road-to-the-cloud-establish.md)**. The results transition you to State 2.
+
+* **Limit new items in old location** - You stop investing in the old location and set policy to stage new items in the new location. In this content, the activities are described in **[Implement cloud-first approach](road-to-the-cloud-implement.md)**. The activities set the foundation to migrate at scale and reach State 3.
+
+* **Move existing items to new location** - You move items from the old location to the new location. You assess the business value of the items to determine if you'll move them as-is, upgrade them, replace them, or deprecate them. In this content, the activities are described in **[Transition to the cloud](road-to-the-cloud-migrate.md)**. These activities enable you to complete State 3 and reach State 4 and State 5. Based on your business objectives, you decide what end state you want to target.
+
+Transformation to the cloud isn't only the identity team's responsibility. Coordination across teams is required to define policies that go beyond technology and include people and process change. Using a coordinated approach helps to ensure consistent progress and reduces the risk of regressing to on-premises solutions. Involve teams that manage:
+
+* Device/endpoint
+* Networking
+* Security/risk
+* Application owners
+* Human resources
+* Collaboration
+* Procurement
+* Operations
+
+As a migration of IAM to Azure AD is started, organizations must determine the prioritization of efforts based on their specific needs. Teams of operational staff and support staff must be trained to perform their jobs in the new environment. The following shows the high-level journey for AD to Azure AD migration:
++
+## Establish Azure AD capabilities
+
+* **Initialize tenant** - Create your new Azure AD tenant that supports the vision for your end-state deployment.
+
+* **Secure tenant** - Adopt a [Zero Trust](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/) approach and a security model that [protects your tenant from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md) early in your journey.
+
+## Implement cloud-first approach
+Establish a policy that mandates all new devices, apps, and services should be cloud-first. New applications and services that use legacy protocols (NTLM, Kerberos, LDAP, and so on) should be allowed by exception only.
+
+## Transition to the cloud
+Shift the management and integration of users, apps and devices away from on-premises and over to cloud-first alternatives. Optimize user provisioning by taking advantage of [cloud-first provisioning capabilities](../governance/what-is-provisioning.md) that integrate with Azure AD.
+
+The transformation changes how users accomplish tasks and how support teams provide end-user support. Initiatives or projects should be designed and implemented in a manner that minimizes the impact on user productivity. As part of the transformation, self-service IAM capabilities are introduced. Some portions of the workforce more easily adapt to the self-service user environment prevalent in cloud-based businesses.
+
+Aging applications might require updating or replacing to operate well in cloud-based IT environments. Application updates or replacements can be costly and time-consuming. The planning and stages must also take into account the age and capability of the applications an organization uses.
+
+## Next steps
+
+* [Introduction](road-to-the-cloud-introduction.md)
+
+* [Establish an Azure AD footprint](road-to-the-cloud-establish.md)
+
+* [Implement a cloud-first approach](road-to-the-cloud-implement.md)
+
+* [Transition to the cloud](road-to-the-cloud-migrate.md)
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
ms.assetid: ef2797d7-d440-4a9a-a648-db32ad137494
Previously updated : 3/25/2022 Last updated : 7/6/2022
If you want all the latest features and updates, check this page and install wha
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
+## 2.1.15.0
+
+### Release status
+7/6/2022: Released for download; it will be made available for auto-upgrade soon.
+
+> [!IMPORTANT]
+> We have discovered a security vulnerability in the Azure AD Connect Admin Agent. If you have installed the Admin Agent previously, it is important that you update your Azure AD Connect server(s) to this version to mitigate the vulnerability.
+
+### Functional changes
+ - We have removed the public preview functionality for the Admin Agent from Azure AD Connect. We will not provide this functionality going forward.
+ - We added support for two new attributes: employeeOrgDataCostCenter and employeeOrgDataDivision.
+ - We added the CertificateUserIds attribute to the AAD Connector static schema.
+ - The AAD Connect wizard now aborts if the "write event logs" permission is missing.
+ - We updated the AADConnect health endpoints to support the US government clouds.
+ - We added new cmdlets "Get-ADSyncToolsDuplicateUsersSourceAnchor" and "Set-ADSyncToolsDuplicateUsersSourceAnchor" to fix bulk "source anchor has changed" errors. When a new forest is added to AADConnect with duplicate user objects, the objects run into bulk "source anchor has changed" errors because of a mismatch between msDsConsistencyGuid and ImmutableId. More information about this module and the new cmdlets can be found in [this article](https://docs.microsoft.com/azure/active-directory/hybrid/reference-connect-adsynctools). A usage sketch follows this list.
+
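+A hedged usage sketch of the new cmdlets follows. The cmdlet names come from this release, but the parameter-less invocation is an assumption; confirm the actual signatures with Get-Help on your Azure AD Connect server.
+
+```powershell
+# Sketch only: cmdlet names are from this release; usage details are assumptions.
+# Run on the Azure AD Connect server.
+Import-Module ADSyncTools
+
+# Inspect users whose msDsConsistencyGuid no longer matches their ImmutableId.
+Get-ADSyncToolsDuplicateUsersSourceAnchor | Format-Table
+
+# After reviewing the output, apply the fix with the companion cmdlet:
+# Set-ADSyncToolsDuplicateUsersSourceAnchor
+```
+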
+### Bug fixes
+ - We fixed a bug that prevented localDB upgrades in some Locales.
+ - We fixed a bug to prevent database corruption when using localDB.
+ - We added timeout and size limit errors to the connection log.
+ - We fixed a bug where group membership failed if a child domain has a user with the same name as a parent domain user who happens to be an enterprise admin.
+ - We updated the expressions used in the "In from AAD - Group SOAInAAD" rule to limit the description attribute to 448 characters.
+ - We made a change to set extended rights for "Unexpire Password" for Password Reset.
+ - We modified the AD connector upgrade to refresh the schema; we no longer show constructed and non-replicated attributes in the wizard during upgrade.
+ - We fixed a bug in ADSyncConfig functions ConvertFQDNtoDN and ConvertDNtoFQDN - If a user decides to set variables called '$dn' or '$fqdn', these variables will no longer be used inside the script scope.
+ - We made the following accessibility fixes:
+ - We fixed a bug where focus was lost during keyboard navigation on the Domain and OU Filtering page.
+ - We updated the accessible name of the Clear Runs drop-down.
+ - We fixed a bug where the tooltip of the "Help" button wasn't accessible through the keyboard when navigating with arrow keys.
+ - We fixed a bug where the underline of hyperlinks was missing on the Welcome page of the wizard.
+ - We fixed a bug in Sync Service Manager's About dialog where the screen reader wasn't announcing the information appearing under the "About" dialog box.
+ - We fixed a bug where the Management Agent name wasn't mentioned in logs when an error occurred while validating the MA name.
+ - We fixed several accessibility issues with keyboard navigation and custom control types: the tooltip of the "Help" button didn't collapse when pressing the "Esc" key, there was an illogical keyboard focus on the User Sign In radio buttons, and there was an invalid control type on the help popups.
+ - We fixed a bug where an empty label was causing an accessibility error.
+ ## 2.1.1.0 ### Release status
This is a bug fix release. There are no functional changes in this release.
## Next steps
-Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Subscription Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/subscription-requirements.md
Title: License requirements to use Privileged Identity Management - Azure Active
description: Describes the licensing requirements to use Azure AD Privileged Identity Management (PIM). documentationcenter: ''--++ editor: '' ms.assetid: 34367721-8b42-4fab-a443-a2e55cdbf33d
na Previously updated : 06/27/2022-- Last updated : 07/06/2022++
You will need an Azure AD license to use PIM and all of its settings. Currently
## Licenses you must have
-Ensure that your directory has at least as many Azure AD Premium P2 licenses as you have employees that will be performing the following tasks:
+Ensure that your directory has Azure AD Premium P2 licenses for the following categories of users:
-- Users assigned as eligible to Azure AD or Azure roles managed using PIM-- Users who are assigned as eligible members or owners of privileged access groups
+- Users with eligible and/or time-bound assignments to Azure AD or Azure roles managed using PIM
+- Users with eligible and/or time-bound assignments as members or owners of privileged access groups
- Users able to approve or reject activation requests in PIM - Users assigned to an access review - Users who perform access reviews
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
If you want to know for how long the activity data is stored in a Premium tenant
* To send audit logs to the Log Analytics workspace, select the **AuditLogs** check box. * To send sign-in logs to the Log Analytics workspace, select the **SignInLogs** check box. * To send non-interactive user sign-in logs to the Log Analytics workspace, select the **NonInteractiveUserSignInLogs** check box.
- * To send service principle sign-in logs to the Log Analytics workspace, select the **ServicePrincipleSignInLogs** check box.
+ * To send service principal sign-in logs to the Log Analytics workspace, select the **ServicePrincipalSignInLogs** check box.
* To send managed identity sign-in logs to the Log Analytics workspace, select the **ManagedIdentitySignInLogs** check box. * To send provisioning logs to the Log Analytics workspace, select the **ProvisioningLogs** check box. * To send Active Directory Federation Services (ADFS) sign-in logs to the Log Analytics workspace, select **ADFSSignInLogs**.
active-directory Sap Analytics Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure SAP Analytics Cloud to support provisioning with Azure AD
-1. Sign into [SAP Identity Provisioning admin console](https://ips-xlnk9v890j.dispatcher.us1.hana.ondemand.com/) with your administrator account and then select **Proxy Systems**.
+1. Sign into the SAP Identity Provisioning admin console with your administrator account and then select **Proxy Systems**.
![SAP Proxy Systems](./media/sap-analytics-cloud-provisioning-tutorial/sap-proxy-systems.png)
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
Previously updated : 06/22/2022 Last updated : 07/06/2022 #Customer intent: As an administrator, I am looking for information to help me create verifiable credentials for ID tokens.
The JSON attestation definition should contain the **idTokens** name, the [OIDC
The claims mapping in the following example requires that you configure the token as explained in the [Claims in the ID token from the identity provider](#claims-in-the-id-token-from-the-identity-provider) section. + ```json { "attestations": {
The claims mapping in the following example requires that you configure the toke
"required": false } ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
} } ``` + ## Application registration The clientId attribute is the application ID of a registered application in the OIDC identity provider. For Azure Active Directory, you create the application by doing the following:
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
Previously updated : 06/22/2022 Last updated : 07/06/2022 #Customer intent: As a verifiable credentials administrator, I want to create a verifiable credential for self-asserted claims scenario.
The JSON attestation definition should contain the **selfIssued** name and the c
], "required": false }
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
} } ```
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
Previously updated : 06/16/2022 Last updated : 07/06/2022 #Customer intent: As a verifiable credentials administrator, I want to create a verifiable credential for the ID token hint scenario.
The expected JSON for the rules definitions is the inner content of the rules at
"required": false } ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
} } ``` + ## Configure the samples to issue and verify your custom credential To configure your sample code to issue and verify by using custom credentials, you need:
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
FIPS-enabled nodes are currently supported on Linux-based node pools. Fo
AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. AKS only modifies the network interfaces NSGs settings. If you're using CNI, you also must ensure the security rules in the NSGs allow traffic between the node and pod CIDR ranges. If you're using kubenet, you must also ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more information, see [Network security groups](concepts-network.md#network-security-groups).
+## How does time synchronization work in AKS?
+
+AKS nodes run the "chrony" service, which pulls time from the localhost; the localhost, in turn, syncs its time with ntp.ubuntu.com. Containers running in pods get their time from the AKS nodes. Applications launched inside a container use the time of the pod's container.
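+
+If you want to verify the time source on a node, one hedged approach (assuming a recent kubectl and permission to create debug pods; the node name is a placeholder) is:
+
+```powershell
+# Sketch: open a debug pod on an AKS node and query chrony on the host.
+kubectl get nodes   # find a node name
+kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=ubuntu
+
+# Inside the debug pod, the node's filesystem is mounted under /host:
+#   chroot /host
+#   chronyc tracking   # current time source and offset
+#   chronyc sources    # configured NTP sources (for example, ntp.ubuntu.com)
+```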
+ <!-- LINKS - internal --> [aks-upgrade]: ./upgrade-cluster.md
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
The OSM project was originated by Microsoft and has since been donated and is go
OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep]. The OSM add-on provides a fully supported installation of OSM that is integrated with AKS. > [!IMPORTANT]
-> The OSM add-on installs version *1.0.0* of OSM on your cluster.
+> The OSM add-on installs version *1.1.1* of OSM on clusters running Kubernetes version 1.23.5 and higher. The OSM add-on installs version *1.0.0* on clusters running a Kubernetes version below 1.23.5.
## Capabilities and features
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md
zone_pivot_groups: client-operating-system
This article will discuss how to download the OSM client library to be used to operate and configure the OSM add-on for AKS, and how to configure the binary for your environment.
+> [!WARNING]
+> If you are using a Kubernetes version below 1.23.5, the OSM add-on installs version *1.0.0* of OSM on your cluster, and you must use the OSM client library version *1.0.0* with the following commands.
+ ::: zone pivot="client-operating-system-linux" [!INCLUDE [Linux - download and install client binary](includes/servicemesh/osm/open-service-mesh-binary-install-linux.md)]
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
This article shows you how to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster and verify that it's installed and running. > [!IMPORTANT]
-> The OSM add-on installs version *1.0.0* of OSM on your cluster.
+> The OSM add-on installs version *1.1.1* of OSM on clusters running Kubernetes version 1.23.5 and higher. The OSM add-on installs version *1.0.0* on clusters running a Kubernetes version below 1.23.5.
## Prerequisites
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS) by using a [Bicep](../azure-resource-manager/bicep/index.yml) template. > [!IMPORTANT]
-> The OSM add-on installs version *1.0.0* of OSM on your cluster.
+> The OSM add-on installs version *1.1.1* of OSM on clusters running Kubernetes version 1.23.5 and higher. The OSM add-on installs version *1.0.0* on clusters running a Kubernetes version below 1.23.5.
[Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language that uses declarative syntax to deploy Azure resources. You can use Bicep in place of creating [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to deploy your infrastructure-as-code Azure resources. ## Prerequisites - Azure CLI version 2.20.0 or later-- OSM version 0.11.1 or later - An SSH public key used for deploying AKS - [Visual Studio Code](https://code.visualstudio.com/) with a Bash terminal - The Visual Studio Code [Bicep extension](../azure-resource-manager/bicep/install.md)
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
+ [Set or edit policies](set-edit-policies.md) + [Policy expressions](api-management-policy-expressions.md)
+> [!IMPORTANT]
+> [Limit call rate by subscription](api-management-access-restriction-policies.md#LimitCallRate) and [Set usage quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) have a dependency on the subscription key. A subscription key isn't required when using other policies.
++ ## [Access restriction policies](api-management-access-restriction-policies.md) - [Check HTTP header](api-management-access-restriction-policies.md#CheckHTTPHeader) - Enforces existence and/or value of an HTTP Header. - [Get authorization context](api-management-access-restriction-policies.md#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
The feature consists of two parts, management and runtime:
For public preview the following limitations exist: -- Authorizations feature will be available in the Consumption tier in the coming weeks. - Authorizations feature is not supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.-- Supported identity providers: Azure AD, DropBox, Generic OAuth 2.0, GitHub, Google, LinkedIn, Spotify
+- Supported identity providers can be found in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository.
- Maximum configured number of authorization providers per API Management instance: 50 - Maximum configured number of authorizations per authorization provider: 500 - Maximum configured number of access policies per authorization: 100
Authorization provider configuration includes which identity provider and grant
* An authorization provider configuration can only have one grant type. * One authorization provider configuration can have multiple authorizations.-
-The following identity providers are supported for public preview:
--- Azure AD, DropBox, Generic OAuth 2.0, GitHub, Google, LinkedIn, Spotify-
+* You can find the supported identity providers for public preview in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository.
With the Generic OAuth 2.0 provider, other identity providers that support the standards of OAuth 2.0 flow can be used.
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
Open the Azure AD B2C blade in the portal and do the following steps.
``` > [!TIP]
- > The c# script function code you just pasted simply logs a line to the functions logs, and returns the text "Hello World" with some dynamic data (the date and time).
+ > The C# script function code you just pasted simply logs a line to the functions logs, and returns the text "Hello World" with some dynamic data (the date and time).
1. Select "Integration" from the left-hand blade, then click the http (req) link inside the 'Trigger' box. 1. From the 'Selected HTTP methods' dropdown, uncheck the http POST method, leaving only GET selected, then click Save.
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
This article provides a reference for required and optional settings that are us
| telemetry.metrics.local.statsd.endpoint | StatsD endpoint. | Yes, if `telemetry.metrics.local` is set to `statsd`; otherwise no. | N/A | | telemetry.metrics.local.statsd.sampling | StatsD metrics sampling rate. Value must be between 0 and 1, for example, 0.5. | No | N/A | | telemetry.metrics.local.statsd.tag-format | StatsD exporter [tagging format](https://github.com/prometheus/statsd_exporter#tagging-extensions). Value is one of the following: `ibrato`, `dogStatsD`, `influxDB`. | No | N/A |
-| telemetry.metrics.cloud | Whether or not to [enable emitting metrics to Azure Monitor](how-to-configure-cloud-metrics-logs.md). | No | `true` |
-| observability.opentelemetry.enabled | Whether or not to enable [emitting metrics to an OpenTelemetry collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) on Kubernetes. | No | `false` |
+| telemetry.metrics.cloud | Indication of whether or not to [enable emitting metrics to Azure Monitor](how-to-configure-cloud-metrics-logs.md). | No | `true` |
+| observability.opentelemetry.enabled | Indication of whether or not to enable [emitting metrics to an OpenTelemetry collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) on Kubernetes. | No | `false` |
| observability.opentelemetry.collector.uri | URI of the OpenTelemetry collector to send metrics to. | Yes, if `observability.opentelemetry.enabled` is set to `true`; otherwise no. | N/A | | observability.opentelemetry.histogram.buckets | Histogram buckets in which OpenTelemetry metrics should be reported. Format: "*x,y,z*,...". | No | "5,10,25,50,100,250,500,1000,2500,5000,10000" |
This article provides a reference for required and optional settings that are us
| telemetry.logs.local.journal.endpoint | Journal endpoint. |Yes if `telemetry.logs.local` is set to `journal`; otherwise no. | N/A | | telemetry.logs.local.json.endpoint | UDP endpoint that accepts JSON data, specified as file path, IP:port, or hostname:port. | Yes if `telemetry.logs.local` is set to `json`; otherwise no. | 127.0.0.1:8888 |
-## Ciphers
+## Security
| Name | Description | Required | Default | | - | - | - | -|
+| certificates.local.ca.enabled | Indication of whether or not the self-hosted gateway should use local CA certificates that are mounted. This requires the self-hosted gateway to run as root or with user ID 1001. | No | `false` |
| net.server.tls.ciphers.allowed-suites | Comma-separated list of ciphers to use for TLS connection between API client and the self-hosted gateway. | No | N/A | | net.client.tls.ciphers.allowed-suites | Comma-separated list of ciphers to use for TLS connection between the self-hosted gateway and the backend. | No | N/A |
applied-ai-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/encryption.md
+ Last updated 07/02/2021
applied-ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/glossary.md
+ Last updated 09/14/2020
applied-ai-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/alerts.md
+ Last updated 09/14/2020
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
# Microsoft Azure Attestation
-Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service supports attestation of the platforms backed by Trusted Platform Modules (TPMs) alongside the ability to attest to the state of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves, [Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) enclaves, [Trusted Platform Modules (TPMs)](/windows/security/information-protection/tpm/trusted-platform-module-overview), [Trusted launch for Azure VMs](../virtual-machines/trusted-launch.md) and [Azure confidential VMs](../confidential-computing/confidential-vm-overview.md).
+Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service supports attestation of the platforms backed by Trusted Platform Modules (TPMs) alongside the ability to attest to the state of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves, [Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) enclaves, [Trusted Platform Modules (TPMs)](/windows/security/information-protection/tpm/trusted-platform-module-overview), [Trusted launch for Azure VMs](../virtual-machines/trusted-launch.md) and [Azure confidential VMs](../confidential-computing/confidential-vm-overview.md).
Attestation is a process for demonstrating that software binaries were properly instantiated on a trusted platform. Remote relying parties can then gain confidence that only such intended software is running on trusted hardware. Azure Attestation is a unified customer-facing service and framework for attestation.
Azure Attestation provides comprehensive attestation services for multiple envir
### SGX enclave attestation
-[Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) refers to hardware-grade isolation, which is supported on certain Intel CPUs models. SGX enables code to run in sanitized compartments known as SGX enclaves. Access and memory permissions are then managed by hardware to ensure a minimal attack surface with proper isolation.
+[Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) refers to hardware-grade isolation, which is supported on certain Intel CPU models. SGX enables code to run in sanitized compartments known as SGX enclaves. Access and memory permissions are then managed by hardware to ensure a minimal attack surface with proper isolation.
Client applications can be designed to take advantage of SGX enclaves by delegating security-sensitive tasks to take place inside those enclaves. Such applications can then make use of Azure Attestation to routinely establish trust in the enclave and its ability to access sensitive data.
-Intel® Xeon® Scalable processors only support [ECDSA-based attestation solutions](https://software.intel.com/content/www/us/en/develop/topics/software-guard-extensions/attestation-services.html#Elliptic%20Curve%20Digital%20Signature%20Algorithm%20(ECDSA)%20Attestation) for remotely attesting SGX enclaves. Utilizing ECDSA based attestation model, Azure Attestation supports validation of Intel® Xeon® E3 processors and Intel® Xeon® Scalable processor-based server platforms.
+Intel® Xeon® Scalable processors only support [ECDSA-based attestation solutions](https://software.intel.com/content/www/us/en/develop/topics/software-guard-extensions/attestation-services.html#Elliptic%20Curve%20Digital%20Signature%20Algorithm%20(ECDSA)%20Attestation) for remotely attesting SGX enclaves. Utilizing ECDSA based attestation model, Azure Attestation supports validation of Intel® Xeon® E3 processors and Intel® Xeon® Scalable processor-based server platforms.
> [!NOTE] > To perform attestation of Intel® Xeon® Scalable processor-based server platforms using Azure Attestation, users are expected to install [Azure DCAP version 1.10.0](https://github.com/microsoft/Azure-DCAP-Client) or higher.
OE standardizes specific requirements for verification of an enclave evidence. T
### TPM attestation
-[Trusted Platform Modules (TPM)](/windows/security/information-protection/tpm/trusted-platform-module-overview) based attestation is critical to provide proof of a platformsΓÇÖ state. TPM acts as the root of trust and the security coprocessor to provide cryptographic validity to the measurements(evidence). Devices with a TPM, can rely on attestation to prove that boot integrity is not compromised along with using the claims to detect feature states enablementΓÇÖs during boot.
+[Trusted Platform Modules (TPM)](/windows/security/information-protection/tpm/trusted-platform-module-overview) based attestation is critical to provide proof of a platform's state. A TPM acts as the root of trust and the security coprocessor to provide cryptographic validity to the measurements (evidence). Devices with a TPM can rely on attestation to prove that boot integrity is not compromised and use the claims to detect feature state enablement during boot.
Client applications can be designed to take advantage of TPM attestation by delegating security-sensitive tasks to only take place after a platform has been validated to be secure. Such applications can then make use of Azure Attestation to routinely establish trust in the platform and its ability to access sensitive data.
Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (
### Trusted Launch attestation
-Azure customers can [prevent bootkit and rootkit infections](https://www.youtube.com/watch?v=CQqu_rTSi0Q) by enabling [Trusted launch](../virtual-machines/trusted-launch.md)) for their virtual machines (VMs). When the VM is Secure Boot and vTPM enabled with guest attestation extension installed, vTPM measurements get submitted to Azure Attestation periodically for monitoring of boot integrity. An attestation failure indicates potential malware, which is surfaced to customers via Microsoft Defender for Cloud, through Alerts and Recommendations.
+Azure customers can [prevent bootkit and rootkit infections](https://www.youtube.com/watch?v=CQqu_rTSi0Q) by enabling [trusted launch](../virtual-machines/trusted-launch.md) for their virtual machines (VMs). When the VM is Secure Boot and vTPM enabled with guest attestation extension installed, vTPM measurements get submitted to Azure Attestation periodically for monitoring boot integrity. An attestation failure indicates potential malware, which is surfaced to customers via Microsoft Defender for Cloud, through Alerts and Recommendations.
## Azure Attestation runs in a TEE
attestation View Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/view-logs.md
+
+ Title: Azure Attestation logs
+description: Describes the full logs that are collected for Azure Attestation
++++ Last updated : 11/23/2020+++++
+# Azure Attestation logging
+
+If you create one or more Azure Attestation resources, you'll want to monitor how and when your attestation instance is accessed, and by whom. You can do this by enabling logging for Microsoft Azure Attestation, which saves information in an Azure storage account you provide.
+
+Logging information will be available up to 10 minutes after the operation occurred (in most cases, it will be quicker than this). Since you provide the storage account, you can secure your logs via standard Azure access controls and delete logs you no longer want to keep in your storage account.
+
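+A minimal sketch of enabling logging with Az PowerShell follows. The category names are inferred from the container names described below and should be treated as assumptions; Set-AzDiagnosticSetting is the classic Az.Monitor cmdlet, and newer module versions use New-AzDiagnosticSetting instead.
+
+```powershell
+# Sketch: route Azure Attestation logs to a storage account you own.
+# Resource IDs are placeholders.
+$attestation = '/subscriptions/<sub-id>/resourceGroups/rg-attest/providers/Microsoft.Attestation/attestationProviders/myattest'
+$storage     = '/subscriptions/<sub-id>/resourceGroups/rg-attest/providers/Microsoft.Storage/storageAccounts/mylogsaccount'
+
+Set-AzDiagnosticSetting -ResourceId $attestation `
+    -StorageAccountId $storage `
+    -Enabled $true `
+    -Category AuditEvent, Operational, NotProcessed
+```
+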
+## Interpret your Azure Attestation logs
+
+When logging is enabled, up to three containers may be automatically created for you in your specified storage account: **insights-logs-auditevent**, **insights-logs-operational**, and **insights-logs-notprocessed**. It is recommended to use only **insights-logs-operational** and **insights-logs-notprocessed**. **insights-logs-auditevent** was created to provide early access to logs for customers using VBS. Future enhancements to logging will occur in **insights-logs-operational** and **insights-logs-notprocessed**.
+
+**Insights-logs-operational** contains generic information across all TEE types.
+
+**Insights-logs-notprocessed** contains requests which the service was unable to process, typically due to malformed HTTP headers, incomplete message bodies, or similar issues.
+
+Individual blobs are stored as text, formatted as a JSON blob. Let's look at an example log entry:
++
+```json
+{
+ "Time": "2021-11-03T19:33:54.3318081Z",
+ "resourceId": "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Attestation/attestationProviders/<instance name>",
+ "region": "EastUS",
+ "operationName": "AttestSgxEnclave",
+ "category": "Operational",
+ "resultType": "Succeeded",
+ "resultSignature": "400",
+ "durationMs": 636,
+ "callerIpAddress": "::ffff:24.17.183.201",
+ "traceContext": "{\"traceId\":\"e4c24ac88f33c53f875e5141a0f4ce13\",\"parentId\":\"0000000000000000\",}",
+ "identity": "{\"callerAadUPN\":\"deschuma@microsoft.com\",\"callerAadObjectId\":\"6ab02abe-6ca2-44ac-834d-42947dbde2b2\",\"callerId\":\"deschuma@microsoft.com\"}",
+ "uri": "https://deschumatestrp.eus.test.attest.azure.net:443/attest/SgxEnclave?api-version=2018-09-01-preview",
+ "level": "Informational",
+ "location": "EastUS",
+ "properties":
+ {
+ "failureResourceId": "",
+ "failureCategory": "None",
+ "failureDetails": "",
+ "infoDataReceived":
+ {
+ "Headers":
+ {
+ "User-Agent": "PostmanRuntime/7.28.4"
+ },
+ "HeaderCount": 10,
+ "ContentType": "application/json",
+ "ContentLength": 6912,
+ "CookieCount": 0,
+ "TraceParent": ""
+ }
+ }
+ }
+```
+
+Most of these fields are documented in the [Top-level common schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). The following table lists the field names and descriptions for the entries not included in the top-level common schema:
+
+| Field Name | Description |
+||--|
+| traceContext | JSON blob representing the W3C trace-context |
+| uri | Request URI |
+
+The properties contain additional Azure Attestation-specific context:
+
+| Field Name | Description |
+||--|
+| failureResourceId | Resource ID of component which resulted in request failure |
+| failureCategory | Broad category indicating category of a request failure. Includes categories such as AzureNetworkingPhysical, AzureAuthorization etc. |
+| failureDetails | Detailed information about a request failure, if available |
+| infoDataReceived | Information about the request received from the client. Includes some HTTP headers, the number of headers received, the content type and content length |
+
+## Next steps
+- [How to enable Microsoft Azure Attestation logging](azure-diagnostic-monitoring.md)
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure App Service: App Service Environment](migrate-app-service-environment.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Bastion](../bastion/bastion-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Batch](../batch/create-pool-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Cache for Redis](migrate-cache-redis.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Cognitive Search](../search/search-performance-optimization.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Container Instances](../container-instances/container-instances-region-availability.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
availability-zones Migrate Cache Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-cache-redis.md
+
+ Title: Migrate an Azure Cache for Redis instance to availability zone support
+description: Learn how to migrate an Azure Cache for Redis instance to availability zone support.
+++ Last updated : 06/23/2022++++
+
+# Migrate an Azure Cache for Redis instance to availability zone support
+
+This guide describes how to migrate your Azure Cache for Redis instance from non-availability zone support to availability zone support.
+
+Azure Cache for Redis supports zone redundancy in its Premium, Enterprise, and Enterprise Flash tiers. A zone-redundant cache runs on VMs spread across multiple availability zones to provide high resilience and availability.
+
+Currently, the only way to convert a resource from non-availability zone support to availability zone support is to redeploy your current cache.
+
+## Prerequisites
+
+To migrate to availability zone support, you must have an Azure Cache for Redis resource in either the Premium, Enterprise, or Enterprise Flash tiers.
+
+## Downtime requirements
+
+There are multiple ways to migrate data to a new cache. Many of them require some downtime.
+
+## Migration guidance: redeployment
+
+### When to use redeployment
+
+Azure Cache for Redis currently doesn't allow adding availability zone support to an existing cache. The best way to convert a non-zone redundant cache to a zone redundant cache is to deploy a new cache using the availability zone configuration you need, and then migrate your data from the current cache to the new cache.
+
+### Redeployment considerations
+
+Running multiple caches simultaneously while you migrate your data to the new cache incurs extra cost.
+
+### How to redeploy
+
+1. To create a new zone redundant cache that meets your requirements, follow the steps in [Enable zone redundancy for Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md). A PowerShell sketch follows these steps.
+
+>[!TIP]
+>To ease the migration process, we recommend that you create the new cache using the same tier, SKU, and region as your current cache.
+
+1. Migrate your data from the current cache to the new zone redundant cache. To learn the most common ways to migrate based on your requirements and constraints, see [Cache migration guide - Migration options](../azure-cache-for-redis/cache-migration-guide.md).
+
+1. Configure your application to point to the new zone redundant cache.
+
+1. Delete your old cache.
+
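+As an illustration of the first step, the following Azure PowerShell sketch creates a Premium cache spread across three availability zones. The resource group, cache name, and region are hypothetical placeholders; match the tier, size, and region of your current cache:
+
+```powershell
+# Create a new Premium cache replicated across three availability zones
+New-AzRedisCache -ResourceGroupName "myResourceGroup" `
+    -Name "my-zone-redundant-cache" `
+    -Location "East US 2" `
+    -Sku "Premium" `
+    -Size "P1" `
+    -Zone @("1", "2", "3")
+```
+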
+## Next Steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Regions and Availability Zones in Azure](az-overview.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 06/06/2022 Last updated : 07/05/2022
Metadata information about a connected machine is collected after the Connected
* Cluster resource ID (for Azure Stack HCI nodes) * Hardware manufacturer * Hardware model
+* CPU logical core count
* Cloud provider * Amazon Web Services (AWS) metadata, when running in AWS: * Account ID
Metadata information about a connected machine is collected after the Connected
* Instance ID * Image * Machine type
- * OS
* Project ID * Project number * Service accounts
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 06/06/2022 Last updated : 07/06/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.15 - February 2022
+
+### Known issues
+
+- The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected. This issue will be fixed in an upcoming release.
+
+### New features
+
+- Network check improvements during onboarding:
+ - Added TLS 1.2 check
+ - Azure Arc network endpoints are now required; onboarding will abort if they are not accessible
+ - New `--skip-network-check` flag to override the new network check behavior
+ - On-demand network check now available using `azcmagent check`
+- [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints.
+- Oracle Linux 8 is now supported
+
+### Fixed
+
+- Improved reliability when disconnecting the agent from Azure
+- Improved reliability when installing and uninstalling the agent on Active Directory Domain Controllers
+- Extended the device login timeout to 5 minutes
+- Removed resource constraints for Azure Monitor Agent to support high throughput scenarios
+ ## Version 1.14 - January 2022 ### Fixed
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 06/06/2022 Last updated : 07/05/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.20 - July 2022
+
+### Known issues
+
+- Some systems may incorrectly report their cloud provider as Azure Stack HCI.
+
+### New features
+
+- Added support for Debian 10
+- Updates to the [instance metadata](agent-overview.md#instance-metadata) collected on each machine:
+ - GCP VM OS is no longer collected
+ - CPU logical core count is now collected
+- Improved error messages and colorization
+
+### Fixed
+
+- Agents configured to use private endpoints will now download extensions over the private endpoint
+- The `--use-private-link` flag on [azcmagent check](manage-agent.md#check) has been renamed to `--enable-pls-check` to more accurately represent its function
+ ## Version 1.19 - June 2022
+### Known issues
+
+- Agents configured to use private endpoints will incorrectly try to download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality.
+- Some systems may incorrectly report their cloud provider as Azure Stack HCI.
+ ### New features - When installed on a Google Compute Engine virtual machine, the agent will now detect and report Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata.
This page is updated monthly, so revisit it regularly. If you're looking for ite
### Fixed -- `systemd` is now an official prerequisite on Linux and your package manger will alert you if you try to install the Azure Connected Machine agent on a server without systemd.
+- `systemd` is now an official prerequisite on Linux and your package manager will alert you if you try to install the Azure Connected Machine agent on a server without systemd.
- Guest configuration policies no longer create unnecessary files in the `/tmp` directory on Linux servers - Improved reliability when extracting extensions and guest configuration policy packages - Improved reliability for guest configuration policies that have child processes
This page is updated monthly, so revisit it regularly. If you're looking for ite
- The "Arc" proxy bypass keyword no longer includes Azure Active Directory endpoints on Linux. Azure Storage endpoints for extension downloads are now included with the "Arc" keyword.
-## Version 1.15 - February 2022
-
-### Known issues
--- The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected. This issue will be fixed in an upcoming release.-
-### New features
--- Network check improvements during onboarding:
- - Added TLS 1.2 check
- - Azure Arc network endpoints are now required, onboarding will abort if they are not accessible
- - New `--skip-network-check` flag to override the new network check behavior
- - On-demand network check now available using `azcmagent check`
-- [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints.-- Oracle Linux 8 is now supported-
-### Fixed
--- Improved reliability when disconnecting the agent from Azure-- Improved reliability when installing and uninstalling the agent on Active Directory Domain Controllers-- Extended the device login timeout to 5 minutes-- Removed resource constraints for Azure Monitor Agent to support high throughput scenarios- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
When running a network connectivity check, you must provide the name of the Azur
`azcmagent check --location <regionName> --verbose`
-If you expect your server to communicate with Azure through an Azure Arc Private Link Scope, use the `--use-private-link` parameter to run additional tests that verify the hostnames and IP addresses resolved for the Azure Arc services are private endpoints.
+If you expect your server to communicate with Azure through an Azure Arc Private Link Scope, use the `--enable-pls-check` (`--use-private-link` on versions 1.17-1.19) parameter to run additional tests that verify the hostnames and IP addresses resolved for the Azure Arc services are private endpoints.
-`azcmagent check --location <regionName> --use-private-link --verbose`
+`azcmagent check --location <regionName> --enable-pls-check --verbose`
### connect
azure-arc Onboard Group Policy Service Principal Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy-service-principal-encryption.md
+
+ Title: Connect machines at scale using Group Policy with service principal encryption
+description: In this article, you learn how to create a Group Policy Object to onboard Active Directory-joined Windows machines to Azure Arc-enabled servers.
Last updated : 07/06/2022++++
+# Create a Group Policy Object for onboarding with DPAPI encryption of service principal secret
+
+You can onboard Active Directory-joined Windows machines to Azure Arc-enabled servers at scale using Group Policy.
+
+You'll first need to set up a remote network share with the Connected Machine agent and modify a script specifying the Arc-enabled server's landing zone within Azure. You'll then run a script that generates a Group Policy Object to onboard a group of machines to Azure Arc-enabled servers. This Group Policy Object can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy Object will be onboarded to Azure Arc-enabled servers.
+
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prepare a remote share and create a service principal
+
+The Group Policy to onboard Azure Arc-enabled servers requires a remote share with the Connected Machine agent. You will need to:
+
+1. Prepare a remote share to host the Azure Connected Machine agent package for Windows and the configuration file. You need to be able to add files to the distributed location. The network share should provide Domain Controllers, Domain Computers, and Domain Admins with Change permissions.
+
+1. Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale).
+
+ * Assign the Azure Connected Machine Onboarding role to your service principal and limit the scope of the role to the target Azure landing zone.
+ * Make a note of the Service Principal Secret; you'll need this value later.
+
+1. For each of the scripts below, click to go to its GitHub directory and download the raw script to your local share using your browser's **Save as** function:
+ * [`EnableAzureArc.ps1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/EnableAzureArc.ps1)
+ * [`DeployGPO.ps1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/DeployGPO.ps1)
+ * [`AzureArcDeployment.psm1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/AzureArcDeployment.psm1)
+
+ > [!NOTE]
+ > The ArcGPO folder must be in the same directory as the downloaded script files above. The ArcGPO folder contains the files that define the Group Policy Object that's created when the DeployGPO script is run. When running the DeployGPO script, make sure you're in the same directory as the ps1 files and ArcGPO folder.
+
+1. Modify the script `EnableAzureArc.ps1` by providing values for the servicePrincipalClientId, tenantId, subscriptionId, ResourceGroup, Location, Tags, and ReportServerFQDN parameter declarations (see the example after these steps).
+
+1. Execute the deployment script `DeployGPO.ps1`, modifying the run parameters for the DomainFQDN, ReportServerFQDN, ArcRemoteShare, AgentProxy (if applicable), and Service Principal secret:
+
+ ```
+ .\DeployGPO.ps1 -DomainFQDN <INSERT Domain FQDN> -ReportServerFQDN <INSERT Domain FQDN of Network Share> -ArcRemoteShare <INSERT Name of Network Share> -Spsecret <INSERT SPN SECRET> [-AgentProxy $AgentProxy]
+ ```
+
+1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
+
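+As a reference for the `EnableAzureArc.ps1` modification step above, the parameter declarations might look like the following after you fill them in. Every value shown is a hypothetical placeholder; substitute the IDs and names from your own environment, and keep the declaration format used by the script itself:
+
+```powershell
+# Hypothetical values for the EnableAzureArc.ps1 parameter declarations
+$servicePrincipalClientId = "00000000-0000-0000-0000-000000000000"   # App ID of the onboarding service principal
+$tenantId                 = "00000000-0000-0000-0000-000000000000"   # Azure AD tenant ID
+$subscriptionId           = "00000000-0000-0000-0000-000000000000"   # Target subscription for the Arc resources
+$ResourceGroup            = "arc-servers-rg"                         # Resource group (the Azure landing zone)
+$Location                 = "eastus"                                 # Azure region
+$Tags                     = @{ Environment = "Prod" }                # Optional tags (format assumed; check the script)
+$ReportServerFQDN         = "fileserver.contoso.com"                 # FQDN of the server hosting the remote share
+```
+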
+## Apply the Group Policy Object
+
+In the Group Policy Management Console (GPMC), right-click the desired Organizational Unit and select the option to link an existing GPO. Choose the Group Policy Object defined in the Scheduled Task. After 10 or 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Azure AD Domain Services](../../active-directory-domain-services/manage-group-policy.md).
+
+After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+
+## Next steps
+
+- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn more about [Group Policy](/troubleshoot/windows-server/group-policy/group-policy-overview).
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 05/24/2022 Last updated : 07/05/2022
This topic describes the basic requirements for installing the Connected Machine
## Supported environments
-Azure Arc-enabled servers supports the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like:
+Azure Arc-enabled servers support the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like:
* VMware * Azure Stack HCI * Other cloud environments
-Azure Arc-enabled servers does not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure.
+Azure Arc-enabled servers do not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure.
## Supported operating systems
The following versions of the Windows and Linux operating system are officially
* Windows IoT Enterprise * Azure Stack HCI * Ubuntu 16.04, 18.04, and 20.04 LTS
+* Debian 10
* CentOS Linux 7 and 8 * SUSE Linux Enterprise Server (SLES) 12 and 15 * Red Hat Enterprise Linux (RHEL) 7 and 8
The following versions of the Windows and Linux operating system are officially
* Oracle Linux 7 and 8 > [!NOTE]
-> On Linux, Azure Arc-enabled servers installs several daemon processes. We only support using systemd to manage these processes. In some environments, systemd may not be installed or available, in which case Arc-enabled servers is not supported, even if the distribution is otherwise supported. These environments include **Windows Subsystem for Linux** (WSL) and most container-based systems, such as Kubernetes or Docker. The Azure Connected Machine agent can be installed on the node that runs the containers but not inside the containers themselves.
+> On Linux, Azure Arc-enabled servers install several daemon processes. We only support using systemd to manage these processes. In some environments, systemd may not be installed or available, in which case Arc-enabled servers are not supported, even if the distribution is otherwise supported. These environments include **Windows Subsystem for Linux** (WSL) and most container-based systems, such as Kubernetes or Docker. The Azure Connected Machine agent can be installed on the node that runs the containers but not inside the containers themselves.
> [!WARNING] > If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md). > [!NOTE]
-> While Azure Arc-enabled servers supports Amazon Linux, the following features are not supported by this distribution:
+> While Azure Arc-enabled servers support Amazon Linux, the following features are not supported by this distribution:
> > * The Dependency agent used by Azure Monitor VM insights > * Azure Automation Update Management
The following Azure built-in roles are required for different aspects of managin
## Azure subscription and service limits
-Azure Arc-enabled servers supports up to 5,000 machine instances in a resource group.
+Azure Arc-enabled servers support up to 5,000 machine instances in a resource group.
Before configuring your machines with Azure Arc-enabled servers, review the Azure Resource Manager [subscription limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits) and [resource group limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits) to plan for the number of machines to be connected.
azure-cache-for-redis Cache Best Practices Memory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-memory-management.md
Last updated 03/22/2022 + + # Memory management ## Eviction policy
Configure your [maxmemory-reserved setting](cache-configure.md#memory-policies)
- The `maxfragmentationmemory-reserved` setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The allowed range for `maxfragmentationmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they are re-evaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. -- One thing to consider when choosing a new memory reservation value (`maxmemory-reserved` or `maxfragmentationmemory-reserved`) is how this change might affect a cache with large amounts of data in it that is already running. For instance, if you have a 53-GB cache with 49 GB of data and then change the reservation value to 8 GB, the max available memory for the system will drop to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system must evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
+- One thing to consider when choosing a new memory reservation value (`maxmemory-reserved` or `maxfragmentationmemory-reserved`) is how this change might affect a cache with large amounts of data in it that is already running. For instance, if you have a 53-GB cache with 49 GB of data and then change the reservation value to 8 GB, the max available memory for the system will drop to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system must evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Create your own metrics](cache-how-to-monitor.md#create-your-own-metrics).
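+A minimal sketch of adjusting these reservations with Azure PowerShell follows; the cache and resource group names are hypothetical placeholders, and the values are megabytes expressed as strings:
+
+```powershell
+# Reserve roughly 10% of a 6-GB (P1) cache for non-cache operations and fragmentation
+Set-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" `
+    -RedisConfiguration @{
+        "maxmemory-reserved"              = "614"
+        "maxfragmentationmemory-reserved" = "614"
+    }
+```
+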
## Next steps
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
You can view and configure the following settings using the **Resource Menu**. T
### Activity log
-Select **Activity log** to view actions done to your cache. You can also use filtering to expand this view to include other resources. For more information on working with audit logs, see [Audit operations with Resource Manager](../azure-monitor/essentials/activity-log.md). For more information on monitoring Azure Cache for Redis events, see [alerts](cache-how-to-monitor.md#alerts).
+Select **Activity log** to view actions done to your cache. You can also use filtering to expand this view to include other resources. For more information on working with audit logs, see [Audit operations with Resource Manager](../azure-monitor/essentials/activity-log.md). For more information on monitoring Azure Cache for Redis events, see [Create alerts](cache-how-to-monitor.md#create-alerts).
### Access control (IAM)
The **maxmemory-reserved** setting configures the amount of memory in MB per ins
The **maxfragmentationmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
-When choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**), consider how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change drops the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
+When choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**), consider how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change drops the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Create your own metrics](cache-how-to-monitor.md#create-your-own-metrics).
> [!IMPORTANT] > The **maxmemory-reserved** and **maxfragmentationmemory-reserved** settings are available only for Standard and Premium caches.
Redis keyspace notifications are configured on the **Advanced settings** on the
For more information, see [Redis Keyspace Notifications](https://redis.io/topics/notifications). For sample code, see the [KeySpaceNotifications.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/KeySpaceNotifications.cs) file in the [Hello world](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample.
-<a name="recommendations"></a>
- ## Azure Cache for Redis Advisor The **Azure Cache for Redis Advisor** on the left displays recommendations for your cache. During normal operations, no recommendations are displayed.
Further information can be found on the **Recommendations** on the left.
:::image type="content" source="media/cache-configure/redis-cache-recommendations.png" alt-text="Recommendations":::
-You can monitor these metrics on the [Monitoring charts](cache-how-to-monitor.md#monitoring-charts) and [Usage charts](cache-how-to-monitor.md#usage-charts) sections of the **Azure Cache for Redis** on the left.
+You can monitor these metrics on the [Monitoring](cache-how-to-monitor.md) section of the Resource menu on the left.
Each pricing tier has different limits for client connections, memory, and bandwidth. If your cache approaches maximum capacity for these metrics over a sustained period of time, a recommendation is created. For more information about the metrics and limits reviewed by the **Recommendations** tool, see the following table:
Each pricing tier has different limits for client connections, memory, and bandw
| | | | Network bandwidth usage |[Cache performance - available bandwidth](./cache-planning-faq.yml#azure-cache-for-redis-performance) | | Connected clients |[Default Redis server configuration - max clients](#maxclients) |
-| Server load |[Usage charts - Redis Server Load](cache-how-to-monitor.md#usage-charts) |
+| Server load |[Redis Server Load](cache-how-to-monitor.md#view-cache-metrics) |
| Memory usage |[Cache performance - size](./cache-planning-faq.yml#azure-cache-for-redis-performance) | To upgrade your cache, select **Upgrade now** to change the pricing tier and [scale](#scale) your cache. For more information on choosing a pricing tier, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier)
Select **Redis metrics** to [view metrics](cache-how-to-monitor.md#view-cache-me
### Alert rules
-Select **Alert rules** to configure alerts based on Azure Cache for Redis metrics. For more information, see [Alerts](cache-how-to-monitor.md#alerts).
+Select **Alert rules** to configure alerts based on Azure Cache for Redis metrics. For more information, see [Create alerts](cache-how-to-monitor.md#create-alerts).
### Diagnostics
-By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, select **Diagnostics** to [configure the storage account](cache-how-to-monitor.md#export-cache-metrics) used to store cache diagnostics.
+By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, select **Diagnostics** to [configure the storage account](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics) used to store cache diagnostics.
>[!NOTE] >In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Last updated 05/06/2022
# Monitor Azure Cache for Redis
-Azure Cache for Redis uses [Azure Monitor](../azure-monitor/index.yml) to provide several options for monitoring your cache instances. These tools enable you to monitor the health of your Azure Cache for Redis instances and help you manage your caching applications.
+Azure Cache for Redis uses [Azure Monitor](../azure-monitor/index.yml) to provide several options for monitoring your cache instances. Use these tools to monitor the health of your Azure Cache for Redis instances and to help you manage your caching applications.
Use Azure Monitor to: - view metrics-- pin metrics charts to the Startboard
+- pin metrics charts to the dashboard
- customize the date and time range of monitoring charts - add and remove metrics from the charts - and set alerts when certain conditions are met
-Metrics for Azure Cache for Redis instances are collected using the Redis [INFO](https://redis.io/commands/info) command. Metrics are collected approximately two times per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
+Metrics for Azure Cache for Redis instances are collected using the Redis [`INFO`](https://redis.io/commands/info) command. Metrics are collected approximately two times per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
-To configure a different retention policy, see [Export cache metrics](#export-cache-metrics).
+To configure a different retention policy, see [Use a storage account to export cache metrics](#use-a-storage-account-to-export-cache-metrics).
-For more information about the different INFO values used for each cache metric, see [Available metrics and reporting intervals](#available-metrics-and-reporting-intervals).
+For more information about the different `INFO` values used for each cache metric, see [Create your own metrics](#create-your-own-metrics).
## View cache metrics
-To view cache metrics, [browse](cache-configure.md#configure-azure-cache-for-redis-settings) to your cache instance in the [Azure portal](https://portal.azure.com). Azure Cache for Redis provides some built-in charts on the left using **Overview** and **Redis metrics**. Each chart can be customized by adding or removing metrics and changing the reporting interval.
+The Resource menu shows some simple metrics in two places: **Overview** and **Monitoring**.
+To view basic cache metrics, [find your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com). On the left, select **Overview**. You see the following predefined monitoring charts: **Memory Usage** and **Redis Server Load**. These charts are useful summaries that allow you to take a quick look at the state of your cache.
-## View pre-configured metrics charts
-On the left, **Overview** has the following pre-configured monitoring charts.
+For more in-depth information, you can see more metrics under the **Monitoring** section of the Resource menu. Select **Metrics** to see, create, or customize a chart by adding metrics, removing metrics, and changing the reporting interval.
-- [Monitoring charts](#monitoring-charts)-- [Usage charts](#usage-charts)
-### Monitoring charts
+The other options under **Monitoring** provide other ways to view and use the metrics for your caches.
-The **Monitoring** section in **Overview** on the left has **Hits and Misses**, **Gets and Sets**, **Connections**, and **Total Commands** charts.
+|Selection | Description |
+|--|--|
+| [**Insights**](#use-insights-for-predefined-charts) | A group of predefined tiles and charts to use as starting point for your cache metrics. |
+| [**Alerts**](#create-alerts) | Configure alerts based on metrics and activity logs. |
+| [**Metrics**](#create-your-own-metrics) | Create your own custom chart to track the metrics you want to see. |
+| [**Advisor Recommendations**](cache-configure.md#azure-cache-for-redis-advisor) | Helps you follow best practices to optimize your Azure deployments. |
+| [**Workbooks**](#organize-with-workbooks) | Organize your metrics into groups that provide the information in a coherent way. |
+## View metrics charts with Azure Monitor for Azure Cache for Redis
-### Usage charts
+Use [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources. View metrics in a customizable, unified, and interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) article.
-The **Usage** section in **Overview** on the left has **Redis Server Load**, **Memory Usage**, **Network Bandwidth**, and **CPU Usage** charts, and also displays the **Pricing tier** for the cache instance.
+While you can access Azure Monitor features from the Monitor menu in the Azure portal, they can also be accessed directly from the Resource menu for an Azure Cache for Redis resource. For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
+For scenarios where you don't need the full flexibility of Azure Monitor for Azure Cache for Redis, you can instead view metrics and create custom charts using **Metrics** from the Resource menu for your cache, and customize your chart using your preferred metrics, reporting interval, chart type, and more. For more information, see [Create your own metrics](#create-your-own-metrics).
-The **Pricing tier** displays the cache pricing tier, and can be used to [scale](cache-how-to-scale.md) the cache to a different pricing tier.
+## Use Insights for predefined charts
-## View metrics charts for all your caches with Azure Monitor for Azure Cache for Redis
+The **Monitoring** section in the Resource menu contains **Insights**. When you select **Insights**, you see groupings of three types of charts: **Overview**, **Performance** and **Operations**.
-Use [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) (preview) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources. View metrics in a customizable, unified, and interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) article.
-## View metrics with Azure Monitor metrics explorer
+Each tab contains status tiles and charts. These tiles and charts are a starting point for your metrics. If you wish to expand beyond **Insights**, you can define your own alerts, metrics, diagnostic settings and workbooks.
-For scenarios where you don't need the full flexibility of Azure Monitor for Azure Cache for Redis, you can instead view metrics and create custom charts using the Azure Monitor metrics explorer. Select **Metrics** from the **Resource menu**, and customize your chart using your preferred metrics, reporting interval, chart type, and more.
+## Use a storage account to export cache metrics
-In the left navigation pane of contoso55, Metrics is an option under Monitoring and is highlighted. On Metrics, is a list of metrics. Cache hits and Cache misses are selected.
+By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, you can use a [storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) and specify a **Retention (days)** policy that meets your requirements.
+Configure a storage account to store your metrics. The storage account must be in the same region as the caches. After you've created the storage account, configure it for your cache metrics:
-For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
+1. In the **Azure Cache for Redis** page, under the **Monitoring** heading, select **Diagnostics settings**.
-## Export cache metrics
-
-By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, you can [designate a storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) and specify a **Retention (days)** policy for your cache metrics.
-
-To configure a storage account for your cache metrics:
-
-1. In the **Azure Cache for Redis** page, under the **Monitoring** heading, select **Diagnostics**.
1. Select **+ Add diagnostic setting**.+ 1. Name the settings.+ 1. Check **Archive to a storage account**. You'll be charged normal data rates for storage and transactions when you send diagnostics to a storage account.+ 1. Select **Configure** to choose the storage account in which to store the cache metrics.+ 1. Under the table heading **metric**, check the box beside the line items you want to store, such as **AllMetrics**. Specify a **Retention (days)** policy. The maximum retention you can specify is **365 days**. However, if you want to keep the metrics data forever, set **Retention (days)** to **0**.
-1. Select **Save**.
+1. Select **Save**.
+ :::image type="content" source="./media/cache-how-to-monitor/cache-diagnostics.png" alt-text="Redis diagnostics":::
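+If you prefer to script the preceding portal steps, here's a sketch using the `Set-AzDiagnosticSetting` cmdlet from the Az.Monitor module. The resource IDs are hypothetical placeholders, and newer module versions may direct you to `New-AzDiagnosticSetting` instead:
+
+```powershell
+# Hypothetical resource IDs for the cache and the destination storage account
+$cacheId   = "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Cache/Redis/myCache"
+$storageId = "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
+
+# Archive all cache metrics to the storage account and keep them for 90 days
+Set-AzDiagnosticSetting -ResourceId $cacheId `
+    -StorageAccountId $storageId `
+    -Enabled $true `
+    -MetricCategory "AllMetrics" `
+    -RetentionEnabled $true `
+    -RetentionInDays 90
+```
+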
>[!NOTE]
->In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/rest-api-walkthrough.md#retrieve-metric-values).
+>In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to a Log Analytics workspace](../azure-monitor/essentials/rest-api-walkthrough.md#retrieve-metric-values).
>
-To access your metrics, you can view them in the Azure portal as previously described in this article. You can also access them using the [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
+To access your metrics, you view them in the Azure portal as previously described in this article. You can also access them using the [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
> [!NOTE] > If you change storage accounts, the data in the previously configured storage account remains available for download, but it is not displayed in the Azure portal. >
-## Available metrics and reporting intervals
+## Create your own metrics
-Cache metrics are reported using several reporting intervals, including **Past hour**, **Today**, **Past week**, and **Custom**. On the left, you find the **Metric** selection for each metrics chart displays the average, minimum, and maximum values for each metric in the chart, and some metrics display a total for the reporting interval.
+You can create your own custom chart to track the metrics you want to see. Cache metrics are reported using several reporting intervals, including **Past hour**, **Today**, **Past week**, and **Custom**. On the left, select **Metrics** in the **Monitoring** section. Each metrics chart displays the average, minimum, and maximum values for each metric in the chart, and some metrics display a total for the reporting interval.
-Each metric includes two versions. One metric measures performance for the entire cache, and for caches that use [clustering](cache-how-to-premium-clustering.md), a second version of the metric that includes `(Shard 0-9)` in the name measures performance for a single shard in a cache. For example if a cache has four shards, `Cache Hits` is the total number of hits for the entire cache, and `Cache Hits (Shard 3)` is just the hits for that shard of the cache.
+Each metric includes two versions: one measures performance for the entire cache, and, for caches that use [clustering](cache-how-to-premium-clustering.md), a second version, which includes `(Shard 0-9)` in the name, measures performance for a single shard in the cache. For example, if a cache has four shards, `Cache Hits` is the total number of hits for the entire cache, and `Cache Hits (Shard 3)` measures just the hits for that shard of the cache.
-> [!NOTE]
-> When you're seeing the aggregation type :
->
-> - "Count" show 2, it indicates the metric received 2 data points for your time granularity (1 minute).
-> - "Max" shows the maximum value of a data point in the time granularity,
-> - "Min" shows the minimum value of a data point in the time granularity,
-> - "Average" shows the average value of all data points in the time granularity.
-> - "Sum" shows the sum of all data points in the time granularity and may be misleading depending on the specific metric.
-> Under normal conditions, "Average" and "Max" will be very similar because only one node emits these metrics (the master node). In a scenario where the number of connected clients changes rapidly, "Max," "Average," and "Min" would show very different values and this is also expected behavior.
->
-> Generally, "Average" will show you a smooth chart of your desired metric and reacts well to changes in time granularity. "Max" and "Min" may hide large changes in the metric if the time granularity is large but can be used with a small time granularity to help pinpoint exact times when large changes occur in the metric.
->
-> "Count" and "Sum" may be misleading for certain metrics (connected clients included).
->
-> Hence, we suggested you to have a look at the Average metrics and not the Sum metrics.
+In the Resource menu on the left, select **Metrics** under **Monitoring**. Here, you design your own chart for your cache, defining the metric type and aggregation type.
++
+### Aggregation types
+
+Here's how to interpret each aggregation type:
+
+- **Count** showing 2 indicates that the metric received 2 data points for your time granularity (1 minute).
+- **Max** shows the maximum value of a data point in the time granularity.
+- **Min** shows the minimum value of a data point in the time granularity.
+- **Average** shows the average value of all data points in the time granularity.
+- **Sum** shows the sum of all data points in the time granularity and may be misleading depending on the specific metric.
+
+Under normal conditions, **Average** and **Max** are similar because only one node emits these metrics (the primary node). In a scenario where the number of connected clients changes rapidly, **Max**, **Average**, and **Min** would show different values and this is also expected behavior.
+
+Generally, **Average** shows you a smooth chart of your desired metric and reacts well to changes in time granularity. **Max** and **Min** can hide large changes in the metric if the time granularity is large but can be used with a small time granularity to help pinpoint exact times when large changes occur in the metric.
+
+The types **Count** and **Sum** can be misleading for certain metrics (connected clients included). Instead, we suggest you look at the **Average** metrics and not the **Sum** metrics.
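+For example, here's a sketch of retrieving the **Average** aggregation for a metric with the `Get-AzMetric` cmdlet from the Az.Monitor module. The resource ID is a hypothetical placeholder, and the metric name is assumed to correspond to the Cache Hits entry in the table below:
+
+```powershell
+# Hypothetical resource ID of the cache to query
+$cacheId = "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Cache/Redis/myCache"
+
+# Average cache hits per minute over the last hour
+Get-AzMetric -ResourceId $cacheId `
+    -MetricName "cachehits" `
+    -StartTime (Get-Date).AddHours(-1) `
+    -EndTime (Get-Date) `
+    -TimeGrain 00:01:00 `
+    -AggregationType Average
+```
+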
> [!NOTE]
-> Even when the cache is idle with no connected active client applications, you may see some cache activity, such as connected clients, memory usage, and operations being performed. This activity is normal during the operation of an Azure Cache for Redis instance.
->
+> Even when the cache is idle with no connected active client applications, you might see some cache activity, such as connected clients, memory usage, and operations being performed. This activity is normal during the operation of an instance of Azure Cache for Redis.
> | Metric | Description |
-| | |
+| -|-|
| Cache Hits |The number of successful key lookups during the specified reporting interval. This number maps to `keyspace_hits` from the Redis [INFO](https://redis.io/commands/info) command. |
-| Cache Latency (Preview) | The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. |
+| Cache Latency | The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. |
| Cache Misses |The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`. | | Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](./cache-planning-faq.yml#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** | | Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
Each metric includes two versions. One metric measures performance for the entir
| Used Memory Percentage | The % of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. | | Used Memory RSS |The amount of cache memory used in MB during the specified reporting interval, including fragmentation and metadata. This value maps to `used_memory_rss` from the Redis INFO command. |
-## Alerts
+## Create alerts
You can receive alerts based on metrics and activity logs. Azure Monitor allows you to configure an alert to do the following when it triggers:
You can configure to receive alerts based on metrics and activity logs. Azure Mo
- Call a webhook - Invoke an Azure Logic App
-To configure Alert rules for your cache, select **Alert rules** from the **Resource menu**.
+To configure alerts for your cache, select **Alerts** under **Monitoring** on the Resource menu.
For more information about configuring and using Alerts, see [Overview of Alerts](../azure-monitor/alerts/alerts-classic-portal.md).+
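+For instance, the following sketch creates a metric alert with Azure PowerShell that fires when average server load exceeds 80%. The resource, action group, metric name, and threshold are hypothetical placeholders:
+
+```powershell
+# Hypothetical IDs for the cache and an existing action group
+$cacheId = "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Cache/Redis/myCache"
+$actionGroupId = "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup"
+
+# Fire when the average server load stays above 80% over a 5-minute window
+$condition = New-AzMetricAlertRuleV2Criteria -MetricName "serverLoad" `
+    -TimeAggregation Average -Operator GreaterThan -Threshold 80
+
+Add-AzMetricAlertRuleV2 -Name "HighServerLoad" `
+    -ResourceGroupName "myResourceGroup" `
+    -TargetResourceId $cacheId `
+    -Condition $condition `
+    -WindowSize 00:05:00 `
+    -Frequency 00:01:00 `
+    -Severity 2 `
+    -ActionGroupId $actionGroupId
+```
+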
+## Organize with workbooks
+
+Once you've defined a metric, you can send it to a workbook. Workbooks provide a way to organize your metrics into groups that present the information in a coherent way.
+
+For information on creating a metric, see [Create your own metrics](#create-your-own-metrics).
+
+## Next steps
+
+- [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md)
+- [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
+- [`INFO`](https://redis.io/commands/info)
azure-cache-for-redis Cache How To Premium Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
Title: Configure Redis clustering - Premium Azure Cache for Redis description: Learn how to create and manage Redis clustering for your Premium tier Azure Cache for Redis instances - Last updated 02/28/2022+ # Configure Redis clustering for a Premium Azure Cache for Redis instance
Clustering is enabled **New Azure Cache for Redis** on the left during cache cr
:::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering-selected.png" alt-text="Clustering toggle selected.":::
- Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#export-cache-metrics), metrics are captured separately for each shard and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
+ Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics), metrics are captured separately for each shard and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
1. Select the **Next: Tags** tab or select the **Next: Tags** button at the bottom of the page.
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-server.md
If the `used_memory_rss` value is higher than 1.5 times the `used_memory` metric
If a cache is fragmented and is running under high memory pressure, the system does a failover to try recovering Resident Set Size (RSS) memory.
-Redis exposes two stats, `used_memory` and `used_memory_rss`, through the [INFO](https://redis.io/commands/info) command that can help you identify this issue. You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) using the portal.
+Redis exposes two stats, `used_memory` and `used_memory_rss`, through the [INFO](https://redis.io/commands/info) command that can help you identify this issue. You can [view these metrics](cache-how-to-monitor.md#view-cache-metrics) using the portal.
Validate that the `maxmemory-reserved` and `maxfragmentationmemory-reserved` values are set appropriately.
There are several possible changes you can make to help keep memory usage health
- [Configure a memory policy](cache-configure.md#memory-policies) and set expiration times on your keys. This policy may not be sufficient if you have fragmentation. - [Configure a maxmemory-reserved value](cache-configure.md#memory-policies) that is large enough to compensate for memory fragmentation.-- [Create alerts](cache-how-to-monitor.md#alerts) on metrics like used memory to be notified early about potential impacts.
+- [Create alerts](cache-how-to-monitor.md#create-alerts) on metrics like used memory to be notified early about potential impacts.
- [Scale](cache-how-to-scale.md) to a larger cache size with more memory capacity. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml). For recommendations on memory management, see [Best practices for memory management](cache-best-practices-memory-management.md).
azure-cache-for-redis Cache Troubleshoot Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md
For more information on failovers, see [Failover and patching for Azure Cache fo
High server load means the Redis server is unable to keep up with the requests, leading to timeouts. The server might be slow to respond and unable to keep up with request rates.
-[Monitor metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) such as server load. Watch for spikes in `Server Load` usage that correspond with timeouts. [Create alerts](cache-how-to-monitor.md#alerts) on metrics on server load to be notified early about potential impacts.
+[Monitor metrics](cache-how-to-monitor.md#monitor-azure-cache-for-redis) such as server load. Watch for spikes in `Server Load` usage that correspond with timeouts. [Create alerts](cache-how-to-monitor.md#create-alerts) on metrics on server load to be notified early about potential impacts.
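As an illustration, the following sketch pulls recent Server Load values so spikes can be correlated with timeouts. It assumes the `Azure.Monitor.Query` library; the resource ID is a placeholder and `serverLoad` is our assumption for the metric name. The same pattern applies to the cache read/write bandwidth metrics discussed below.

```csharp
// Hedged sketch: query the Server Load metric for a cache resource.
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

class ServerLoadCheck
{
    static void Main()
    {
        var client = new MetricsQueryClient(new DefaultAzureCredential());
        string resourceId =
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Cache/Redis/<cache-name>";

        MetricsQueryResult result = client.QueryResource(
            resourceId,
            new[] { "serverLoad" }, // assumed metric name for "Server Load"
            new MetricsQueryOptions { TimeRange = new QueryTimeRange(TimeSpan.FromHours(1)) }).Value;

        foreach (MetricResult metric in result.Metrics)
            foreach (MetricTimeSeriesElement series in metric.TimeSeries)
                foreach (MetricValue point in series.Values)
                    Console.WriteLine($"{point.TimeStamp}: max={point.Maximum}");
    }
}
```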
There are several changes you can make to mitigate high server load:
event_no_wait_count:1
Different cache sizes have different network bandwidth capacities. If the server exceeds the available bandwidth, then data won't be sent to the client as quickly. Client requests could time out because the server can't push data to the client fast enough.
-The "Cache Read" and "Cache Write" metrics can be used to see how much server-side bandwidth is being used. You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) in the portal. [Create alerts](cache-how-to-monitor.md#alerts) on metrics like cache read or cache write to be notified early about potential impacts.
+The "Cache Read" and "Cache Write" metrics can be used to see how much server-side bandwidth is being used. You can [view these metrics](cache-how-to-monitor.md#view-cache-metrics) in the portal. [Create alerts](cache-how-to-monitor.md#create-alerts) on metrics like cache read or cache write to be notified early about potential impacts.
To mitigate situations where network bandwidth usage is close to maximum capacity:
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
These two new metrics can help identify whether Azure Cache for Redis clients ar
- Connections Created Per Second - Connections Closed Per Second
-For more information, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
+For more information, see [View cache metrics](cache-how-to-monitor.md#view-cache-metrics).
### Default cache change
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
The following example shows a function that sends a message using the output bin
```cs [FunctionName("SendMessage")] public static Task SendMessage(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
[SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages) { return signalRMessages.AddAsync(
- new SignalRMessage
+ new SignalRMessage
{
- Target = "newMessage",
- Arguments = new [] { message }
+ Target = "newMessage",
+ Arguments = new [] { message }
}); } ``` # [Isolated process](#tab/isolated-process)
-The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The *target* is the name of the method to be invoked on each client.
--
-The `MyConnectionInfo` and `MyMessage` classes are defined as follows:
-
+The following example shows a function that sends a message using the output binding to all connected clients. The *newMessage* is the name of the method to be invoked on each client.
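As a sketch of what that isolated-process example might look like, assuming the `Microsoft.Azure.Functions.Worker` SignalR extension (the `SignalROutput` attribute and `SignalRMessageAction` type are assumptions here, not confirmed by this change):

```csharp
// Hedged sketch: broadcast to all clients from an isolated-process function.
using System.IO;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class BroadcastFunction
{
    [Function("SendMessage")]
    [SignalROutput(HubName = "chat")]
    public static SignalRMessageAction SendMessage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        using var reader = new StreamReader(req.Body);
        return new SignalRMessageAction("newMessage")
        {
            // No UserId, GroupName, or ConnectionId: the message goes to all clients.
            Arguments = new object[] { reader.ReadToEnd() }
        };
    }
}
```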
# [C# Script](#tab/csharp-script)
public static Task Run(
IAsyncCollector<SignalRMessage> signalRMessages) { return signalRMessages.AddAsync(
- new SignalRMessage
+ new SignalRMessage
{
- Target = "newMessage",
- Arguments = new [] { message }
+ Target = "newMessage",
+ Arguments = new [] { message }
}); } ``` ### Broadcast to all clients
module.exports = async function (context, req) {
}; ```
-
+ Complete PowerShell examples are pending. Here's the Python code:
You can send a message only to connections that have been authenticated to a use
```cs [FunctionName("SendMessage")] public static Task SendMessage(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
[SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages) { return signalRMessages.AddAsync(
- new SignalRMessage
+ new SignalRMessage
{ // the message will only be sent to this user ID UserId = "userId1",
public static Task SendMessage(
# [Isolated process](#tab/isolated-process)
-Not supported for isolated process.
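A hedged sketch of the isolated-process variant, under the same assumptions, usings, and enclosing class as the broadcast sketch above, with `UserId` set so the message reaches only that user:

```csharp
// Hedged sketch: send only to a specific user ID (isolated process model).
[Function("SendMessageToUser")]
[SignalROutput(HubName = "chat")]
public static SignalRMessageAction SendMessageToUser(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
{
    using var reader = new StreamReader(req.Body);
    return new SignalRMessageAction("newMessage")
    {
        UserId = "userId1", // the message is only sent to this user ID
        Arguments = new object[] { reader.ReadToEnd() }
    };
}
```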
# [C# Script](#tab/csharp-script)
public static Task Run(
IAsyncCollector<SignalRMessage> signalRMessages) { return signalRMessages.AddAsync(
- new SignalRMessage
+ new SignalRMessage
{ // the message will only be sent to this user ID UserId = "userId1",
- Target = "newMessage",
- Arguments = new [] { message }
+ Target = "newMessage",
+ Arguments = new [] { message }
}); } ``` ### Send to a user
Example function.json:
``` ::: zone-end Here's the JavaScript code: ```javascript
module.exports = async function (context, req) {
}; ```
-
+ Complete PowerShell examples are pending. Here's the Python code:
public static Task SendMessage(
``` # [Isolated process](#tab/isolated-process)
-Example not available for isolated process.
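Likewise, a hedged sketch for the isolated process model, same assumptions as above, with `GroupName` set so the message reaches only members of that group:

```csharp
// Hedged sketch: send only to the members of a group (isolated process model).
[Function("SendMessageToGroup")]
[SignalROutput(HubName = "chat")]
public static SignalRMessageAction SendMessageToGroup(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
{
    using var reader = new StreamReader(req.Body);
    return new SignalRMessageAction("newMessage")
    {
        GroupName = "myGroup", // the message is only sent to this group
        Arguments = new object[] { reader.ReadToEnd() }
    };
}
```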
# [C# Script](#tab/csharp-script)
public static Task Run(
IAsyncCollector<SignalRMessage> signalRMessages) { return signalRMessages.AddAsync(
- new SignalRMessage
+ new SignalRMessage
{ // the message will be sent to the group with this name GroupName = "myGroup",
- Target = "newMessage",
- Arguments = new [] { message }
+ Target = "newMessage",
+ Arguments = new [] { message }
}); } ``` ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell" ### Send to a group
Example function.json:
} ``` Here's the JavaScript code: ```javascript
module.exports = async function (context, req) {
}; ```
-
+ Complete PowerShell examples are pending. Here's the Python code:
public SignalRMessage sendMessage(
::: zone pivot="programming-language-csharp" ### Group management
-SignalR Service allows users to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage a user's group membership. The following example adds a user to a group.
+SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage groups.
# [In-process](#tab/in-process)
+
+Specify `GroupAction` to add or remove a member. The following example adds a user to a group.
+ ```csharp [FunctionName("addToGroup")] public static Task AddToGroup(
public static Task AddToGroup(
} ```
-The following example removes a user from a group.
-
-```csharp
-[FunctionName("removeFromGroup")]
-public static Task RemoveFromGroup(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post")]HttpRequest req,
- ClaimsPrincipal claimsPrincipal,
- [SignalR(HubName = "chat")]
- IAsyncCollector<SignalRGroupAction> signalRGroupActions)
-{
- var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier);
- return signalRGroupActions.AddAsync(
- new SignalRGroupAction
- {
- UserId = userIdClaim.Value,
- GroupName = "myGroup",
- Action = GroupAction.Remove
- });
-}
-```
- # [Isolated process](#tab/isolated-process)
-Example not available for isolated process.
+Specify `SignalRGroupActionType` to add or remove a member. The following example adds a user to a group.
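A hedged sketch of that group-management example, assuming the Worker extension exposes a `SignalRGroupAction` type alongside the `SignalRGroupActionType` enum (same usings and enclosing class as the earlier isolated-process sketches):

```csharp
// Hedged sketch: add a user to a group from an isolated-process function.
[Function("AddToGroup")]
[SignalROutput(HubName = "chat")]
public static SignalRGroupAction AddToGroup(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
{
    return new SignalRGroupAction(SignalRGroupActionType.Add)
    {
        GroupName = "myGroup",
        UserId = "userId1" // use SignalRGroupActionType.Remove to remove the member
    };
}
```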
+ # [C# Script](#tab/csharp-script)
public static Task Run(
> [!NOTE] > In order to get the `ClaimsPrincipal` correctly bound, you must have configured the authentication settings in Azure Functions. ### Group management
-SignalR Service allows users to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage a user's group membership. The following example adds a user to a group.
+SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage groups.
Example *function.json* that defines the output binding:
module.exports = async function (context, req) {
}; ```
-
+ Complete PowerShell examples are pending. The following example adds a user to a group.
def main(req: func.HttpRequest, action: func.Out[str]) -> func.HttpResponse:
### Group management
-SignalR Service allows users to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage a user's group membership.
+SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage groups.
The following example adds a user to a group.
The following table explains the binding configuration properties that you set i
-
+ ## Annotations The following table explains the supported settings for the `SignalROutput` annotation.
The following table explains the supported settings for the `SignalROutput` anno
|**hubName**|This value must be set to the name of the SignalR hub for which the connection information is generated.| |**connectionStringSetting**|The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. | ## Configuration The following table explains the binding configuration properties that you set in the *function.json* file.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
cloud: gov Previously updated : 01/19/2022 Last updated : 07/05/2022 # Azure Government authorized reseller list
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Agile IT](https://www.agileit.com/)| |[Airnet Group](https://www.airnetgroup.com/)| |[AIS Network](https://www.aisn.net/)|
+|[AlasConnect](https://www.alasconnect.com)|
|[Alcala Consulting Inc.](https://www.alcalaconsulting.com/)| |[Alliance Enterprises, Inc.](https://www.allianceenterprises.com)| |[Alvarez Technology Group](https://www.alvareztg.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Ambonare](https://redriver.com/press-release/austinacquisition)| |[American Technology Services LLC](https://networkats.com/)| |[Anautics](https://anautics.com)|
+|[Anika Systems Inc.](https://www.anikasystems.com)|
|[APEX TECHNOLOGY MANAGEMENT INC](https://www.apex.com)| |[Applied Information Sciences, Inc.](https://www.appliedis.com)| |[Apollo Information Systems Corp.](https://www.apollo-is.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Apps4Rent](https://www.apps4rent.com)| |[Apptus](https://apttus.com)| |[ArcherPoint, Inc.](https://www.archerpoint.com)|
+|[Ardalyst Federal LLC](https://ardalyst.com)|
|[ArdentMC](https://www.ardentmc.com)| |[Army of Quants](https://www.armyofquants.com/)| |[Ascent Innovations LLC](https://www.ascent365.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Blueforce Development Corporation](https://www.blueforcedev.com/)| |[Booz Allen Hamilton](https://www.boozallen.com/)| |[Bridge Partners LLC](https://www.bridgepartnersllc.com)|
+|[C2 Technology Solutions](https://www.c2techsol.com)|
|[CACI Inc - Federal](https://www.caci.com/)| |[Cambria Solutions, Inc.](https://www.cambriasolutions.com/)| |[Capgemini Government Solutions LLC](https://www.capgemini.com/us-en/service/capgemini-government-solutions/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[CGI Technologies and Solutions Inc.](https://www.cgi.com)| |[Ciellos Inc.](https://www.ciellos.com/)| |[Ciracom Inc.](https://www.ciracomcloud.com)|
+|[ciyis](https://ciyis.net)|
|[Clients First Business Solutions LLC](https://www.clientsfirst-us.com)| |[ClearShark](https://clearshark.com/)| |[CloudFit Software, LLC](https://www.cloudfitsoftware.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Competitive Innovations, LLC](https://www.cillc.com)| |[Computer Solutions Inc.](http://cs-inc.co/)| |[Computex Technology Solutions](http://www.computex-inc.com/)|
+|[Communication Square LLC](https://www.communicationsquare.com)|
|[ConvergeOne](https://www.convergeone.com)| |[Copper River Technologies, LLC](http://www.copperrivertech.com/)| |[Coretek Services](https://www.coretekservices.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Delphi Technology Solutions](https://delphi-ts.com/)| |[Developing Today LLC](https://www.developingtoday.net/)| |[DevHawk, LLC](https://www.devhawk.io)|
+|[Diamond Capture Associates LLC]|
|[Diffeo, Inc.](https://diffeo.com)| |[DirectApps, Inc. D.B.A. Direct Technology](https://directtechnology.com)| |[DominionTech Inc.](https://www.dominiontech.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Enterprise Infrastructure Partners, LLC](http://www.entisp.com/)| |[Enterprise Technology International](https://enterpriseti.com)| |[Envistacom](https://www.envistacom.com)|
+|[EPI-USE Labs LCC](https://www.epiuselabs.com)|
|[Epic Systems Inc](http://epicinfotech.com/)| |[EpochConcepts](https://epochconcepts.com)| |[Equilibrium IT Solutions, Inc. (Ntiva)](https://www.ntiva.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[FourPoints Technology](https://www.4points.com)| |[For The Record LTD](https://www.fortherecord.com/)| |[Fujitsu America Inc.](https://www.fujitsu.com/us/)|
+|[G2-Ops, Inc.](https://g2-ops.com)|
|[General Dynamics Information Technology](https://gdit.com/)| |[Giga-Green Technologies](https://giga-green.com)| |[Gimmal](https://www.gimmal.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Global Tech Inc.](https://www.eglobaltech.com)| |[Globalscape, Inc.](https://www.globalscape.com)| |[Go Full Cloud](https://www.gofullcloud.com/)|
+|[GoCloudOffice Inc.](https://www.gocloudoffice.com)|
|[Golden Five LLC](https://www.goldenfiveconsulting.com/)| |[GovPlace](https://www.govplace.com/)| |[Gov4Miles](https://www.milestechnologies.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Integration Partners Corp.](https://integrationpartners.com/)| |[Intelice Solutions, LLC](https://www.intelice.com/)| |[InterVision Systems LLC](https://intervision.com)|
+|[Intrinium](https://www.intrinium.com/)|
|[Invoke, LLC](https://invokellc.com)| |[It1 Source LLC](https://www.it1.com)| |[ITInfra](https://itinfra.biz/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Liquid Mercury Solutions](https://www.liquid-hg.com/)| |[Logicalis, Inc.](https://www.us.logicalis.com/)| |[Lucidius Group LLC](http://www.lucidiusgrp.com)|
+|[Lumen](https://www.lumen.com/)|
|[M2 Technology, Inc.](http://www.m2ti.com/)| |[Magenium Solutions, LLC](https://www.magenium.com)| |[Mainstay Technologies](https://www.mstech.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[mindSHIFT Technologies, Inc.](https://www.mindshift.com/)| |[MIS Sciences Corp](https://www.mis-sciences.com/)| |[Mission Cyber LLC](https://missioncyber.com/b/)|
+|[MNSG Acquisition Company LLC]()|
|[Mobomo, LLC](https://www.mobomo.com)| |[Nanavati Consulting, Inc.](https://www.nanavaticonsulting.com)| |[Navisite LLC](https://www.navisite.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[NeoTech Solutions Inc.](https://neotechreps.com)| |[Neovera Inc.](https://www.neovera.com)| |[NetData Consulting Services Inc.](https://www.netdatacs.com)|
+|[NetCentrics Corp.](https://netcentrics.com)|
|[Netwize](https://www.netwize.com)| |[NewWave Telecom & Technologies, Inc](https://www.newwave.io)| |[NexusTek](https://www.nexustek.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Nubelity LLC](http://www.nubelity.com)| |[NuSoft Solutions (Atrio Systems, Inc.)](https://nusoftsolutions.com)| |[NWN Corporation](https://www.nwnit.com)|
+|[Oakwood Systems Group, Inc.](https://oakwoodsys.com)|
|[OCH Technologies LLC](https://www.ochtechnologies.com)| |[Olive + Goose](https://www.oliveandgoose.com/)| |[Om Group, Inc.](http://www.omgroupinc.us/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Onyx Point, Inc.](https://www.onyxpoint.com)| |[Opsgility](https://www.opsgility.com)| |[OpsPro](https://opspro.com/)|
+|[Optuminsight Inc.](https://www.optum.com)|
|[Orion Communications, Inc.](https://www.orioncom.com)| |[Outlook Insight, LLC](http://outlookinsight.com/)| |[PA-Group](https://pa-group.us/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[PCM](https://www.pcm.com/)| |[Peerless Tech Solutions](https://www.getpeerless.com)| |[People Services Inc. DBA CATCH Intelligence](https://catchintelligence.com)|
+|[Perizer Corp.](https://perizer.com)|
|[Perrygo Consulting Group, LLC](https://perrygo.com)| |[Perspecta](https://perspecta.com/)| |[Phacil (By Light)](https://www.bylight.com/phacil/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Protected Trust](https://www.proarch.com/)| |[Protera Technologies](https://www.protera.com)| |[Pueo Business Solutions, LLC](https://www.pueo.com/)|
+|[Quad M Tech](https://www.quadmtech.com/)|
|[Quality Technology Services LLC](https://www.qtsdatacenters.com/)| |[Quest Media & Supplies Inc.](https://www.questsys.com/)| |[Quisitive](https://quisitive.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Science Applications International Corporation](https://www.saic.com)| |[Secure-24](https://www.secure-24.com)| |[Selex Galileo Inc](http://www.selexgalileo.com/)|
+|[Sentinel Blue](https://www.sentinelblue.com/)|
|[Sev1Tech](https://www.sev1tech.com/)| |[SEV Technologies](http://sevtechnologies.com/)| |[Sevatec Inc.](https://www.sevatec.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[StoneFly, Inc.](https://stonefly.com)| |[Strategic Communications](https://stratcomminc.com)| |[Stratus Solutions](https://stratussolutions.com)|
+|[Stratum Technology Management, LLC](https://www.stratumtechnology.com)|
|[Strongbridge LLC](https://www.sb-llc.com)| |[Summit 7 Systems, Inc.](https://www.summit7.us/)| |[Sumo Logic](https://www.sumologic.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Trigent Solutions Inc.](http://trigentsolutions.com/)| |[Triple Point Security Incorporated](https://www.triplepointsecurity.com)| |[Trusted Tech Team](https://www.trustedtechteam.com)|
+|[TSAChoice Inc.](https://www.tsachoice.com)|
+|[Turnkey Technologies, Inc.](https://www.turnkeytec.com)|
|[U2Cloud LLC](https://www.u2cloud.com)| |[UDRI - SSG](https://udayton.edu/udri/_resources/docs/ssg_v8.pdf)| |[Unisys Corp / Blue Bell](https://www.unisys.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Wintellisys, Inc.](https://wintellisys.com)| |[Withum](https://www.withum.com/service/cyber-information-security-services/)| |[Workspot, Inc.](https://workspot.com)|
-|[Wovenware CA, Inc.](https://www.wovenware.com)|
+|[WorkMagic LLC](https://www.workmagic.com)|
+|[Wovenware US, Inc.](https://www.wovenware.com)|
|[WCC Global](https://wwcglobal.com)| |[WWT](https://www2.wwt.com)| |[Xantrion Incorporated](https://www.xantrion.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|-| |[12:34 MicroTechnologies Inc.](https://1234micro.com/)| |[Accenture Federal Service](https://www.accenture.com/us-en/industries/afs-index)|
+|[Accenture LLP](https://www.accenture.com)|
|[Agile IT, Inc](https://www.agileit.com)| |[American Technology Services LLC](https://networkats.com)| |[Applied Information Sciences](https://www.appliedis.com)|
+|[Applied Insight LLC](https://www.applied-insight.com)|
|[Arctic Information Technology, Inc.](https://arcticit.com)| |[Booz Allen Hamilton](https://www.boozallen.com/)| |[C3 Integrated Solutions, Inc.](https://www.c3isit.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Conquest Cyber](https://conquestcyber.com/)| |[CyberSheath](https://cybersheath.com)| |[Daymark Solutions, Inc.](https://www.daymarksi.com/)|
+|[DLT](https://www.dlt.com/)|
|[Dox Electronics Inc.](https://www.doxnet.com)|
+|[ECF Data, LLC](https://www.ecfdata.com)|
|[Enlighten IT Consulting](https://www.eitccorp.com/)| |[eTrepid Inc.](https://www.etrepid.com/)| |[F1 Solutions Inc](https://www.f1networks.com)| |[Four Points Technology, LLC](https://www.4points.com)|
+|[G2 Ops, Inc.](https://g2-ops.com)|
|[General Dynamics Information Technology](https://www.gdit.com)| |[Golden Five LLC](https://www.goldenfiveconsulting.com/)| |[Hypori, Inc.](https://hypori.com/)|
+|[Imager Software, Inc dba ISC]|
+|[Impact Networking, LLC](https://www.impactmybiz.com/)|
+|[IBM Corp.](https://www.ibm.com/industries/government)|
|[Jackpine Technologies](https://www.jackpinetech.com)| |[Jasper Solutions](https://www.jaspersolutions.com/)| |[Johnson Technology Systems Inc](https://www.jtsusa.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Nimbus Logic, LLC](https://www.nimbus-logic.com/)| |[Northrop Grumman](https://www.northropgrumman.com/)| |[Novetta](https://www.novetta.com)|
+|[PAX 8](https://www.pax8.com)|
|[Permuta Technologies, Inc.](http://www.permuta.com/)| |[Perspecta](https://perspecta.com)| |[Planet Technologies, Inc.](https://go-planet.com)| |[Progeny Systems](https://www.progeny.net/)|
+|[Project Hosts](https://www.projecthosts.com)|
|[Quiet Professionals, LLC](https://quietprofessionalsllc.com)| |[R3, LLC](https://www.r3-it.com/)| |[Red River](https://www.redriver.com)|
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 6/6/2022 Last updated : 7/6/2022
We strongly recommend updating to the latest version at all times, or opt in
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li></ul> | 1.5.0.0 | Coming soon |
+| May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query-related Windows events fail; other data types continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li><li>Linux support for Debian 11 distro</li><li>Fixed issue to list mount paths instead of device names for Linux disk metrics</li></ul> | 1.5.0.0 | 1.21.0 |
| April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows and Linux</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li><li>Fixed Linux CEF syslog forwarding for Sentinel</li><li>Removed 'error' message for Azure MSI token retrieval failure on Arc to show as 'Info' instead</li><li>Support added for Ubuntu 22.04, AlmaLinux and RockyLinux distros</li></ul> | 1.4.1.0<sup>Hotfix</sup> | 1.19.3 | | March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 | | February 2022 | <ul><li>Bugfixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 |
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
+
+ Title: Migration tools for legacy agent to Azure Monitor agent
+description: This article describes various migration tools and helpers available for migrating from the existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR).
+++ Last updated : 7/6/2022 ++++
+# Migration tools for Log Analytics agent to Azure Monitor Agent
+The [Azure Monitor agent (AMA)](azure-monitor-agent-overview.md) collects monitoring data from the guest operating system of Azure virtual machines, scale sets, on-premises and multi-cloud servers, and Windows client devices. It uploads the data to Azure Monitor destinations where it can be used by different features, insights, and other services such as [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). All of the data collection configuration is handled via [Data Collection Rules](../essentials/data-collection-rule-overview.md).
+The Azure Monitor agent is meant to replace the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines. By comparison, it is more **secure, cost-effective, performant, manageable and reliable**. You must migrate from the Log Analytics agent to the Azure Monitor agent before **August 2024**. To make this process easier and more automated, use the migration tools described in this article.
++
+## AMA Migration Helper (preview)
+The AMA Migration Helper is a workbook-based solution in Azure Monitor that helps you discover **what to migrate** and **track progress** as you move from legacy Log Analytics agents to Azure Monitor agent on the virtual machines, scale sets, and on-premises and Arc-enabled servers in your subscriptions. Use this single-pane-of-glass view to expedite your agent migration journey.
+
+The workbook is available in the Azure Monitor section of the community GitHub repo, linked [here](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook).
+
+1. Open the Azure portal > **Monitor** > **Workbooks**.
+2. Select **+ New**.
+3. Select the **Advanced Editor** (</>) button.
+4. Copy and paste the workbook JSON content.
+5. Select **Apply** to load the workbook, then select **Done Editing**. You're now ready to use the workbook.
+6. Use the subscription and workspace drop-downs to view the relevant information.
++
+## DCR Config Generator (preview)
+The Azure Monitor agent relies only on [Data Collection rules](../essentials/data-collection-rule-overview.md) for configuration, whereas the legacy agent pulls all its configuration from Log Analytics workspaces. Use this tool to parse legacy agent configuration from your workspaces and automatically generate corresponding rules. You can then associate the rules to machines running the new agent using built-in association policies.
+
+> [!NOTE]
+> Additional configuration for [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features) dependent on the agent is not yet supported in this tool.
++
+1. **Prerequisites**
+ - PowerShell version 7.1.3 or higher is recommended (minimum version 5.1)
+ - Primarily uses the `Az` PowerShell module to pull workspace agent configuration information
+ - You must have read access for the specified workspace resource
+ - `Connect-AzAccount` and `Select-AzSubscription` are used to set the context for the script, so proper Azure credentials are needed
+2. [Download the PowerShell script](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/DCR%20Config%20Generator)
+2. Run the script using one of the options below; a complete end-to-end invocation sketch follows these steps:
+ - Option 1
+ # [PowerShell](#tab/ARMAgentPowerShell)
+ ```powershell
+ .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
+ ```
+ - Option 2 (if you just want the DCR payload JSON)
+ # [PowerShell](#tab/ARMAgentPowerShell)
+ ```powershell
+ $dcrJson = Get-DCRJson -ResourceGroupName $rgName -WorkspaceName $workspaceName -PlatformType $platformType
+ $dcrJson | ConvertTo-Json -Depth 10 | Out-File "<filepath>\OutputFiles\dcr_output.json"
+ ```
+
+ **Parameters**
+
+ | Parameter | Required? | Description |
+ ||||
+ | SubscriptionId | Yes | Subscription ID that contains the target workspace |
+ | ResourceGroupName | Yes | Resource Group that contains the target workspace |
+ | WorkspaceName | Yes | Name of the target workspace |
+ | DCRName | Yes | Name of the new generated DCR to create |
+ | Location | Yes | Region location for the new DCR |
+ | FolderPath | No | Local path to store the output. Current directory will be used if nothing is provided |
+
+3. Review the output data collection rule(s). Two separate ARM templates can be produced, based on the agent configuration of the target workspace:
+ - Windows ARM template and parameter files: created if the target workspace contains Windows performance counters and/or Windows events
+ - Linux ARM template and parameter files: created if the target workspace contains Linux performance counters and/or Linux Syslog events
+
+4. Use the rule association built-in policies and other available methods to associate generated rules with machines running the new agent. [Learn more](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association)
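Putting the steps together, a minimal end-to-end invocation might look like the following; every ID, name, and path below is a placeholder:

```powershell
# Hedged end-to-end sketch: all IDs, names, and paths below are placeholders.
Connect-AzAccount
Select-AzSubscription -SubscriptionId "00000000-0000-0000-0000-000000000000"

.\WorkspaceConfigToDCRMigrationTool.ps1 `
    -SubscriptionId "00000000-0000-0000-0000-000000000000" `
    -ResourceGroupName "my-resource-group" `
    -WorkspaceName "my-workspace" `
    -DCRName "my-workspace-dcr" `
    -Location "eastus" `
    -FolderPath "C:\temp\dcr-output"
```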
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Last updated 02/09/2022
- # Migrate to Azure Monitor agent from Log Analytics agent
The [Azure Monitor agent (AMA)](azure-monitor-agent-overview.md) collects monito
## Why should I migrate to the Azure Monitor agent? - **Security and performance**
- - AMA uses Managed Identity or AAD tokens (for clients) which are much more secure than the legacy authentication methods.
+ - AMA uses Managed Identity or Azure Active Directory (Azure AD) tokens (for clients) which are much more secure than the legacy authentication methods.
 - AMA can provide higher events per second (EPS) upload rate compared to legacy agents - **Cost savings** via efficient data collection [using Data Collection Rules](data-collection-rule-azure-monitor-agent.md). This is one of the most useful advantages of using AMA. - DCRs allow granular targeting of machines connected to a workspace to collect data from as compared to the "all or nothing" mode that legacy agents have.
The [Azure Monitor agent (AMA)](azure-monitor-agent-overview.md) collects monito
- **Simpler management** of data collection, including ease of troubleshooting - **Multihoming** on both Windows and Linux is easily possible - Every action across the data collection lifecycle, from onboarding/setup to deployment to updates and changes over time, is significantly easier and scalable thanks to agent configuration becoming centralized and "in the cloud" as compared to configuring things on every machine.
- - Enabling/disabling of additional capabilities or services (Sentinel, Defender for Cloud, VM Insights, etc) is more transparent and controlled, using the extensibility architecture of AMA.
+ - Enabling/disabling of additional capabilities or services (Sentinel, Defender for Cloud, VM Insights, etc.) is more transparent and controlled, using the extensibility architecture of AMA.
- **A single agent** that will consolidate all the features necessary to address all telemetry data collection needs across servers and client devices (running Windows 10, 11) as compared to running various different monitoring agents. This is the eventual goal, though AMA is currently converging with the Log Analytics agents. ## When should I migrate to the Azure Monitor agent?
azure-monitor Autoscale Custom Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md
Title: Autoscale in Azure using a custom metric
-description: Learn how to scale your resource by custom metric in Azure.
- Previously updated : 05/07/2017
+ Title: How to autoscale in Azure using a custom metric
+description: Learn how to scale your web app by custom metric in the Azure portal
+++ + Last updated : 06/22/2022 +
+# Customer intent: As a user or DevOps administrator, I want to use the portal to set up autoscale so I can scale my resources.
+
-# Get started with auto scale by custom metric in Azure
-This article describes how to scale your resource by a custom metric in Azure portal.
-
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), [Azure Data Explorer Cluster](https://azure.microsoft.com/services/data-explorer/) ,
-Integration Service Environment and [API Management services](../../api-management/api-management-key-concepts.md).
-
-## Lets get started
-This article assumes that you have a web app with application insights configured. If you don't have one already, you can [set up Application Insights for your ASP.NET website][1]
--- Open [Azure portal][2]-- Click on Azure Monitor icon in the left navigation pane.
- ![Launch Azure Monitor][3]
-- Click on Autoscale setting to view all the resources for which auto scale is applicable, along with its current autoscale status
- ![Discover auto scale in Azure monitor][4]
-- Open 'Autoscale' blade in Azure Monitor and select a resource you want to scale
- > Note: The steps below use an app service plan associated with a web app that has app insights configured.
-- In the scale setting blade for the resource, notice that the current instance count is 1. Click on 'Enable autoscale'.
- ![Scale setting for new web app][5]
-- Provide a name for the scale setting, and the click on "Add a rule". Notice the scale rule options that opens as a context pane in the right hand side. By default, it sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70%. Change the metric source at the top to "Application Insights", select the app insights resource in the 'Resource' dropdown and then select the custom metric based on which you want to scale.
- ![Scale by custom metric][6]
-- Similar to the step above, add a scale rule that will scale in and decrease the scale count by 1 if the custom metric is below a threshold.
- ![Scale based on cpu][7]
-- Set the instance limits. For example, if you want to scale between 2-5 instances depending on the custom metric fluctuations, set 'minimum' to '2', 'maximum' to '5' and 'default' to '2'
- > Note: In case there is a problem reading the resource metrics and the current capacity is below the default capacity, then to ensure the availability of the resource, Autoscale will scale out to the default value. If the current capacity is already higher than default capacity, Autoscale will not scale in.
-- Click on 'Save'-
-Congratulations. You now successfully created your scale setting to auto scale your web app based on a custom metric.
-
-> Note: The same steps are applicable to get started with a VMSS or cloud service role.
-
-<!--Reference-->
-[1]: ../app/asp-net.md
-[2]: https://portal.azure.com
-[3]: ./media/autoscale-custom-metric/azure-monitor-launch.png
-[4]: ./media/autoscale-custom-metric/discover-autoscale-azure-monitor.png
-[5]: ./media/autoscale-custom-metric/scale-setting-new-web-app.png
-[6]: ./media/autoscale-custom-metric/scale-by-custom-metric.png
-[7]: ./media/autoscale-custom-metric/autoscale-setting-custom-metrics-ai.png
+# How to autoscale a web app using custom metrics
+
+This article describes how to set up autoscale for a web app using a custom metric in the Azure portal.
+
+Autoscale allows you to add and remove resources to handle increases and decreases in load. In this article, we'll show you how to set up autoscale for a web app, using one of the Application Insights metrics to scale the web app in and out.
+
+Azure Monitor autoscale applies to:
++ [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/)
++ [Cloud Services](https://azure.microsoft.com/services/cloud-services/)
++ [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/)
++ [Azure Data Explorer Cluster](https://azure.microsoft.com/services/data-explorer/)
++ Integration Service Environment and [API Management services](../../api-management/api-management-key-concepts.md).
+
+## Prerequisites
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+
+## Overview
+To create an autoscaled web app, follow the steps below.
+1. If you do not already have one, [Create an App Service Plan](#create-an-app-service-plan). Note that you can't set up autoscale for free or basic tiers.
+1. If you do not already have one, [Create a web app](#create-a-web-app) using your service plan.
+1. [Configure autoscaling](#configure-autoscale) for your service plan.
+
+
+## Create an App Service Plan
+
+An App Service plan defines a set of compute resources for a web app to run on.
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Search for and select **App Service plans**.
+
+ :::image type="content" source="media\autoscale-custom-metric\search-app-service-plan.png" alt-text="Screenshot of the search bar, searching for app service plans.":::
+
+1. Select **Create** from the **App Service plan** page.
+1. Select a **Resource group** or create a new one.
+1. Enter a **Name** for your plan.
+1. Select an **Operating system** and **Region**.
+1. Select a **Sku and size**.
+ > [!NOTE]
+ > You cannot use autoscale with free or basic tiers.
+
+1. Select **Review + create**, then **Create**.
+
+ :::image type="content" source="media\autoscale-custom-metric\create-app-service-plan.png" alt-text="Screenshot of the Basics tab of the Create App Service Plan screen that you configure the App Service plan on.":::
+
+## Create a web app
+
+1. Search for and select *App services*.
+
+ :::image type="content" source="media\autoscale-custom-metric\search-app-services.png" alt-text="Screenshot of the search bar, searching for app service.":::
+
+1. Select **Create** from the **App Services** page.
+1. On the **Basics** tab, enter a **Name** and select a **Runtime stack**.
+1. Select the **Operating System** and **Region** that you chose when defining your App Service plan.
+1. Select the **App Service plan** that you created earlier.
+1. Select the **Monitoring** tab from the menu bar.
+
+ :::image type="content" source="media\autoscale-custom-metric\create-web-app.png" alt-text="Screenshot of the Basics tab of the Create web app page where you set up a web app.":::
+
+1. On the **Monitoring** tab, select **Yes** to enable Application Insights.
+1. Select **Review + create**, then **Create**.
+
+ :::image type="content" source="media\autoscale-custom-metric\enable-application-insights.png" alt-text="Screenshot of the Monitoring tab of the Create web app page where you enable Application Insights.":::
++
+## Configure autoscale
+Configure the autoscale settings for your App Service plan.
+
+1. Search for and select *autoscale* in the search bar, or select **Autoscale** under **Monitor** in the side menu.
+1. Select your App Service plan. You can only configure production plans.
+
+ :::image type="content" source="media\autoscale-custom-metric\autoscale-overview-page.png" alt-text="A screenshot of the autoscale landing page where you select the resource to set up autoscale for.":::
+
+### Set up a scale out rule
+Set up a scale out rule so that Azure spins up an additional instance of the web app when your web app is handling more than 70 sessions per instance.
+
+1. Select **Custom autoscale**.
+1. In the **Rules** section of the default scale condition, select **Add a rule**.
+
+ :::image type="content" source="media/autoscale-custom-metric/autoscale-settings.png" alt-text="A screenshot of the autoscale settings page where you set up the basic autoscale settings.":::
+
+1. From the **Metric source** dropdown, select **Other resource**.
+1. From **Resource Type**, select **Application Insights**.
+1. From the **Resource** dropdown, select your web app.
+1. Select a **Metric name** to base your scaling on, for example *Sessions*.
+1. Select **Enable metric divide by instance count** so that the number of sessions per instance is measured.
+1. From the **Operator** dropdown, select **Greater than**.
+1. Enter the **Metric threshold to trigger the scale action**, for example, *70*.
+1. Under **Actions**, set the **Operation** to *Increase count* and set the **Instance count** to *1*.
+1. Select **Add**.
+
+ :::image type="content" source="media/autoscale-custom-metric/scale-out-rule.png" alt-text="A screenshot of the Scale rule page where you configure the scale out rule.":::
++
+### Set up a scale in rule
+Set up a scale in rule so Azure spins down one of the instances when the number of sessions your web app is handling is less than 60 per instance. Azure will reduce the number of instances each time this rule is run until the minimum number of instances is reached.
+1. In the **Rules** section of the default scale condition, select **Add a rule**.
+1. From the **Metric source** dropdown, select **Other resource**.
+1. From **Resource Type**, select **Application Insights**.
+1. From the **Resource** dropdown, select your web app.
+1. Select a **Metric name** to base your scaling on, for example *Sessions*.
+1. Select **Enable metric divide by instance count** so that the number of sessions per instance is measured.
+1. From the **Operator** dropdown, select **Less than**.
+1. Enter the **Metric threshold to trigger the scale action**, for example, *60*.
+1. Under **Actions**, set the **Operation** to **Decrease count** and set the **Instance count** to *1*.
+1. Select **Add**.
+
+ :::image type="content" source="media/autoscale-custom-metric/scale-in-rule.png" alt-text="A screenshot of the Scale rule page where you configure the scale in rule.":::
+
+### Limit the number of instances
+
+1. Set the maximum number of instances that can be spun up in the **Maximum** field of the **Instance limits** section, for example, *4*.
+1. Select **Save**.
+
+ :::image type="content" source="media/autoscale-custom-metric/autoscale-instance-limits.png" alt-text="A screenshot of the autoscale settings page where you set up instance limits.":::
+
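If you prefer to script the same configuration instead of using the portal, here's a hedged sketch using the classic Az.Monitor cmdlets (`New-AzAutoscaleRule`, `New-AzAutoscaleProfile`, `Add-AzAutoscaleSetting`). The resource IDs are placeholders, the `Sessions` metric mirrors the portal steps above, and the divide-by-instance-count option is assumed to be set in the portal, since these cmdlets don't express it:

```powershell
# Hedged sketch: the resource IDs and names below are placeholders.
$appInsightsId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/components/<app-insights-name>"
$planId        = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/serverfarms/<plan-name>"

# Scale out when average Sessions exceed 70; scale in below 60.
$scaleOut = New-AzAutoscaleRule -MetricName "Sessions" -MetricResourceId $appInsightsId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 70 `
    -TimeGrain 00:01:00 -TimeWindow 00:05:00 `
    -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionValue 1

$scaleIn = New-AzAutoscaleRule -MetricName "Sessions" -MetricResourceId $appInsightsId `
    -Operator LessThan -MetricStatistic Average -Threshold 60 `
    -TimeGrain 00:01:00 -TimeWindow 00:05:00 `
    -ScaleActionCooldown 00:05:00 -ScaleActionDirection Decrease -ScaleActionValue 1

# Between 1 and 4 instances, matching the limits set in the portal above.
$autoscaleProfile = New-AzAutoscaleProfile -Name "default" -DefaultCapacity 1 `
    -MinimumCapacity 1 -MaximumCapacity 4 -Rule $scaleOut, $scaleIn

Add-AzAutoscaleSetting -Name "custom-metric-autoscale" -ResourceGroupName "<rg>" `
    -Location "eastus" -TargetResourceId $planId -AutoscaleProfile $autoscaleProfile
```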
+## Clean up resources
+
+If you're not going to continue to use this application, delete the
+resources with the following steps:
+1. From the App service overview page, select **Delete**.
+
+ :::image type="content" source="media/autoscale-custom-metric/delete-web-app.png" alt-text="A screenshot of the App Service page where you can Delete the web app.":::
+
+1. From The App Service Plan page, select **Delete**. The autoscale settings are deleted along with the App Service plan.
+
+ :::image type="content" source="media/autoscale-custom-metric/delete-service-plan.png" alt-text="A screenshot of the App Service plan page where you can Delete the app service plan.":::
+
+## Next steps
+Learn more about autoscale by referring to the following articles:
+- [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
+- [Overview of autoscale](./autoscale-overview.md)
+- [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
+- [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md)
+- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on commi
### Optimize workspace configuration As your monitoring environment becomes more complex, you will need to consider whether to create additional Log Analytics workspaces. This may happen as you place resources in additional regions or as you implement additional services that use workspaces, such as Azure Sentinel and Microsoft Defender for Cloud.
-There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from . See [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel) and [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) for a description of these implications and guidance on determining the most cost-effective solution for your environment.
+There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. See [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel) and [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) for a description of these implications and guidance on determining the most cost-effective solution for your environment.
## Configure tables in each workspace Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You may be collecting data though that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by configuring Basic Logs and by optimizing your data retention and archiving.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Container insights supports the following environments:
## Supported Kubernetes versions The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
+>[!NOTE]
+> Container insights support for the Windows Server 2022 operating system is in public preview.
+ ## Prerequisites Before you start, make sure that you've met the following requirements:
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Container insights is a feature designed to monitor the performance of container
Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Docker, Moby, and any CRI compatible runtime such as CRI-O and ContainerD.
+>[!NOTE]
+> Container insights support for the Windows Server 2022 operating system is in public preview.
+ Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications. Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
azure-monitor Redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/redis-cache-insights-overview.md
When you select **Failures** at the top of the page, the **Failures** table of t
### Metric definitions
-For a full list of the metric definitions that form these workbooks, check out the [article on available metrics and reporting intervals](../../azure-cache-for-redis/cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
+For a full list of the metric definitions that form these workbooks, check out the [article on available metrics and reporting intervals](../../azure-cache-for-redis/cache-how-to-monitor.md#create-your-own-metrics).
## View from an Azure Cache for Redis resource
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-p
> You can switch a table's plan once a week. The Basic Logs feature is not available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers). ## Which tables support Basic Logs?
-All tables in your Log Analytics are Analytics tables, by default. You can configure particular tables to use Basic Logs. You can't configure a table for Basic Logs if Azure Monitor relies on that table for specific features.
-
+By default, all tables in your Log Analytics workspace are Analytics tables, and are available for query and alerts.
You can currently configure the following tables for Basic Logs: - All tables created with the [Data Collection Rule (DCR)-based custom logs API.](custom-logs-overview.md)
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D
```json {
- "query": "ContainerLog | where LogEntry has \"some value\"\n",
+ "query": "ContainerLogV2 | where Computer == \"some value\"\n",
} ```
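For illustration, here's a hedged sketch of issuing that request from C#; the bearer token is a placeholder, and the workspace name `testWS` comes from the URL above:

```csharp
// Hedged sketch: POST the Basic Logs search query shown above.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class BasicLogsSearch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>"); // placeholder token

        var body = "{ \"query\": \"ContainerLogV2 | where Computer == \\\"some value\\\"\" }";
        var response = await http.PostAsync(
            "https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D",
            new StringContent(body, Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```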
azure-monitor Workbook Templates Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbook-templates-move-region.md
ibiza Previously updated : 08/12/2020 Last updated : 07/05/2022 #Customer intent: As an Azure service administrator, I want to move my resources to another Azure region
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-automate.md
ibiza Previously updated : 04/30/2020 Last updated : 07/05/2022 # Programmatically manage workbooks
The example below demonstrates the customization of an exported Workbook Azure R
} ```
-In this example, the following steps facilitated the customization of an exported Azure Resource Manager template:
+In this example, the following steps facilitate the customization of an exported Azure Resource Manager template:
1. Export the Workbook as an Azure Resource Manager template as explained in the above section 2. In the template's `variables` section: 1. Parse the `serializedData` value into a JSON object variable, which creates a JSON structure including an array of items that represent the content of the Workbook.
azure-monitor Workbooks Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-bring-your-own-storage.md
ibiza Previously updated : 12/11/2020 Last updated : 07/05/2022 # Bring your own storage to save workbooks
azure-monitor Workbooks Chart Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
ibiza Previously updated : 09/04/2020 Last updated : 07/05/2022 # Chart visualizations
Azure Monitor logs gives resources owners detailed information about the working
The example below shows the trend of requests to an app over the previous days.
-1. Switch the workbook to edit mode by selecting the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Use the **Add query** link to add a log query control to the workbook. 3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target. 4. Use the Query editor to enter the [KQL](/azure/kusto/query/) for your analysis (for example, trend of requests).
Most Azure resources emit metric data about state and health (for example, CPU u
The following example will show the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior.
-1. Switch the workbook to edit mode by selecting the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Use the **Add metric** link to add a metric control to the workbook. 3. Select a resource type (for example, Storage Account), the resources to target, the metric namespace and name, and the aggregation to use. 4. Set other parameters if needed - like time range, split-by, visualization, size, and color palette.
azure-monitor Workbooks Composite Bar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-composite-bar.md
ibiza Previously updated : 9/04/2020 Last updated : 07/05/2022 # Composite bar renderer
Composite bar renderer is supported for grids, tiles, and graphs visualizations.
## Adding composite bar renderer
-1. Switch the workbook to edit mode by selecting *Edit* toolbar item.
-2. Select *Add* then *Add query*.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
+2. Select **Add** and then **Add query**.
3. Set *Data source* to "JSON" and *Visualization* to "Grid". 4. Add the following JSON data.
azure-monitor Workbooks Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-configurations.md
ibiza Previously updated : 07/20/2020 Last updated : 07/05/2022 # Workbook Configuration Options
The workbooks settings has these tabs to help you configure your workbook.
|Resources|This tab contains the resources that appear as default selections in this workbook.<br>The resource marked as the **Owner** resource is where the workbook will be saved, and the location of the workbooks and templates you'll see when browsing. The owner resource can't be removed.<br> You can add a default resource by selecting **Add Resources**. You can remove resources by selecting a resource or several resources, and selecting **Remove Selected Resources**. When you're done adding and removing resources, select **Apply Changes**.| |Versions| This tab contains a list of all the available versions of this workbook. Select a version and use the toolbar to compare, view, or restore versions. Previous workbook versions are available for 90 days.<br><ul><li>**Compare**: Compare the JSON of the previous workbook to the most recently saved version.</li><li>**View**: Opens the selected version of the workbook in a context pane.</li><li>**Restore**: Saves a new copy of the workbook with the contents of the selected version and overwrites any existing current content. You'll be prompted to confirm this action.</li></ul><br>| |Style |In this tab, you can set a padding and spacing style for the whole workbook. The possible options are `Wide`, `Standard`, `Narrow`, `None`. `Standard` is the default style setting.|
-|Pin |While in pin mode, you can select **Pin Workbook** to pin an item from this workbook to a dashboard. Select **Link to Workbook**, to pin a static link to this workbook on your dashboard. You can choose a specific item in your workbook to pin.|
+|Pin |While in pin mode, you can select **Pin Workbook** to pin a component from this workbook to a dashboard. Select **Link to Workbook** to pin a static link to this workbook on your dashboard. You can choose a specific component in your workbook to pin.|
|Trusted hosts |In this tab, you can enable a trusted source or mark this workbook as trusted in this browser. See [trusted hosts](#trusted-hosts) for detailed information. |
> [!NOTE]
There are several ways that you can create interactive reports and experiences i
- **Grid, tile, and chart selections**: You can construct scenarios where clicking a row in a grid updates subsequent charts based on the content of the row. For example, if you have a grid that shows a list of requests and some statistics like failure counts, you can set it up so that if you click on the row of a request, the detailed charts below update to show only that request. Learn how to [set up a grid row click](#set-up-a-grid-row-click).
- **Grid Cell Clicks**: You can add interactivity with a special type of grid column renderer called a [link renderer](#link-renderer-actions). A link renderer converts a grid cell into a hyperlink based on the contents of the cell. Workbooks support many kinds of link renderers including renderers that open resource overview blades, property bag viewers, App Insights search, usage, transaction tracing, etc. Learn how to [set up a grid cell click](#set-up-grid-cell-clicks).
- **Conditional Visibility**: You can make controls appear or disappear based on the values of parameters. This allows you to have reports that look different based on user input or telemetry state. For example, you can show consumers a summary when there are no issues, and show detailed information when there's something wrong. Learn how to [set up conditional visibility](#set-conditional-visibility).
+ - **Export parameters with multi-selections**: You can export parameters from query and metrics workbook components when a row or multiple rows are selected. Learn how to [set up multi-selects in grids and charts](#set-up-multi-selects-in-grids-and-charts).
### Set up a grid row click
The following image shows a more elaborate interactive report in read mode based
### Set conditional visibility
-1. Follow the steps in the [Setting up interactivity on grid row click](#set-up-a-grid-row-click) section to set up two interactive controls.
+1. Follow the steps in the [setting up interactivity on grid row click](#set-up-a-grid-row-click) section to set up two interactive controls.
1. Add a new parameter with these values:
   - Name: `ShowDetails`
   - Parameter type: `Drop down`
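For a `Drop down` parameter like `ShowDetails`, workbooks typically accept a JSON array of value/label objects as the input for fixed choices; a minimal sketch (the exact values are illustrative):

```json
[
    { "value": "Yes", "label": "Yes" },
    { "value": "No", "label": "No", "selected": true }
]
```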
The following image shows a more elaborate interactive report in read mode based
The following image shows the case where `ShowDetails` is `Yes`:
- :::image type="content" source="media/workbooks-configurations/workbooks-conditional-visibility-visible.png" alt-text="Screenshot showing a workbook with a conditional item that is visible.":::
+ :::image type="content" source="media/workbooks-configurations/workbooks-conditional-visibility-visible.png" alt-text="Screenshot showing a workbook with a conditional component that is visible.":::
-The image below shows the hidden case where `ShowDetails` is `No`
+The image below shows the hidden case where `ShowDetails` is `No`:
### Set up multi-selects in grids and charts
-Query and metrics items can export parameters when a row or multiple rows are selected.
+Query and metrics components can export parameters when a row or multiple rows are selected.
:::image type="content" source="media/workbooks-configurations/workbooks-export-parameters.png" alt-text="Screenshot showing the workbooks export parameters settings with multiple parameters.":::
-1. In the query step displaying the grid, select **Advanced settings**.
+1. In the query component displaying the grid, select **Advanced settings**.
2. Select the `When items are selected, export parameters` checkbox.
3. Select the `allow selection of multiple values` checkbox.
   - The displayed visualization allows multi-selecting and the exported parameter's values will be arrays of values, like when using multi-select dropdown parameters.
When single selection is enabled, you can specify which field of the original data to export.
When multi-selection is enabled, you specify which field of the original data to export. Fields include parameter name, parameter type, quote with, and delimiter. The quote with and delimiter values are used when turning array values into text when being replaced in a query. In multi-selection, if no values are selected, the default value is an empty array.
> [!NOTE]
-> For multi select, only unique values are exported. For example, you will not see output array values like " 1,1,2,1". The array output will be get "1,2".
+> For multi-select, only unique values are exported. For example, you will not see output array values like "1,1,2,1". The array output will be "1,2".
If you leave the `Field to export` setting empty in the export settings, all the available fields in the data will be exported as a stringified JSON object of key:value pairs. For grids and tiles, the string includes the fields in the grid. For charts, the available fields are x, y, series, and label (depending on the type of chart).
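For example, leaving `Field to export` empty on a grid row might export a stringified object like the following (field names here are hypothetical):

```json
{ "Computer": "web-01", "AvgCPU": 73.2, "Status": "warning" }
```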
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
This is a metric chart in edit mode:
## Adding links
-You can use links to create links to other views, workbooks, other items inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs.
+You can use links to link to other views, workbooks, and components inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs.
:::image type="content" source="media/workbooks-create-workbook/workbooks-empty-links.png" alt-text="Screenshot of adding a link to a workbook.":::
### Link styles
You can apply styles to the link element itself and to individual links.
|Style |Sample |Notes |
||||
-|Bullet List | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-bullet.png" alt-text="Screenshot of bullet style workbook link."::: | The default, links, appears as a bulleted list of links, one on each line. The **Text before link** and **Text after link** fields can be used to add more text before or after the link items. |
+|Bullet List | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-bullet.png" alt-text="Screenshot of bullet style workbook link."::: | The default; links appear as a bulleted list, one on each line. The **Text before link** and **Text after link** fields can be used to add more text before or after the link components. |
|List |:::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-list.png" alt-text="Screenshot of list style workbook link."::: | Links appear as a list of links, with no bullets. |
|Paragraph | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-paragraph.png" alt-text="Screenshot of paragraph style workbook link."::: |Links appear as a paragraph of links, wrapped like a paragraph of text. |
|Navigation | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-navigation.png" alt-text="Screenshot of navigation style workbook link."::: | Links appear as links, with vertical dividers, or pipes (`|`) between each link. |
Links can use all of the link actions available in [link actions](workbooks-link-actions.md).
| Action | Description |
|:- |:-|
|Set a parameter value | A parameter can be set to a value when selecting a link, button, or tab. Tabs are often configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value.|
-|Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
+|Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another component visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
### Using tabs
-Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create 2 tabs, where selecting either tab will set a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
+Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links component configured to create 2 tabs, where selecting either tab will set a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
:::image type="content" source="media/workbooks-create-workbook/workbooks-creating-tabs.png" alt-text="Screenshot of creating tabs in workbooks.":::
-You can then add other items in the workbook that are conditionally visible if the **selectedTab** parameter value is "1" by using the advanced settings:
+You can then add other components in the workbook that are conditionally visible if the **selectedTab** parameter value is "1" by using the advanced settings:
:::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab.png" alt-text="Screenshot of conditionally visible tab in workbooks.":::
-The first tab is selected by default, initially setting **selectedTab** to 1, and making that step visible. Selecting the second tab will change the value of the parameter to "2", and different content will be displayed:
+The first tab is selected by default, initially setting **selectedTab** to 1, and making that component visible. Selecting the second tab will change the value of the parameter to "2", and different content will be displayed:
:::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab2.png" alt-text="Screenshot of workbooks with content displayed when selected tab is 2.":::
A sample workbook with the above tabs is available in [sample Azure Workbooks with links](workbooks-sample-links.md).
### Tabs limitations
 - URL links aren't supported in tabs. A URL link in a tab appears as a disabled tab.
+ - No component styling is supported in tabs. Components are displayed as tabs, and only the tab name (link text) field is displayed. Fields that aren't used in tab style are hidden while in edit mode.
 - The first tab is selected by default, invoking whatever action that tab has specified. If the first tab's action opens another view, as soon as the tabs are created, a view appears.
 - You can use tabs to open other views, but this functionality should be used sparingly, since most users won't expect to navigate by selecting a tab. Keep in mind that if other tabs are setting a parameter to a specific value, a tab that opens a view wouldn't change that value, so the rest of the workbook content will continue to show the view/data for the previous tab.
A sample workbook with toolbars, global parameters, and ARM Actions is available in [sample Azure Workbooks with links](workbooks-sample-links.md).
## Adding groups
-A group item in a workbook allows you to logically group a set of steps in a workbook.
+A group component in a workbook allows you to logically group a set of components in a workbook.
Groups in workbooks are useful for several things:
- - **Layout**: When you want items to be organized vertically, you can create a group of items that will all stack up and set the styling of the group to be a percentage width instead of setting percentage width on all the individual items.
- - **Visibility**: When you want several items to hide or show together, you can set the visibility of the entire group of items, instead of setting visibility settings on each individual item. This can be useful in templates that use tabs, as you can use a group as the content of the tab, and the entire group can be hidden/shown based on a parameter set by the selected tab.
- - **Performance**: When you have a large template with many sections or tabs, you can convert each section into its own subtemplate, and use groups to load all the subtemplates within the top-level template. The content of the subtemplates won't load or run until a user makes those groups visible. Learn more about [how to split a large template into many templates](#splitting-a-large-template-into-many-templates).
+ - **Layout**: When you want components to be organized vertically, you can create a group of components that will all stack up and set the styling of the group to be a percentage width instead of setting percentage width on all the individual components.
+ - **Visibility**: When you want several components to hide or show together, you can set the visibility of the entire group of components, instead of setting visibility settings on each individual component. This can be useful in templates that use tabs, as you can use a group as the content of the tab, and the entire group can be hidden/shown based on a parameter set by the selected tab.
+ - **Performance**: When you have a large template with many sections or tabs, you can convert each section into its own sub-template, and use groups to load all the sub-templates within the top-level template. The content of the sub-templates won't load or run until a user makes those groups visible. Learn more about [how to split a large template into many templates](#splitting-a-large-template-into-many-templates).
### Add a group to your workbook
Groups in workbooks are useful for several things:
- Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add group**.
 :::image type="content" source="media/workbooks-create-workbook/workbooks-add-group.png" alt-text="Screenshot showing selecting adding a group to a workbook. ":::
-1. Select items for your group.
+1. Select components for your group.
1. Select **Done editing.**
- This is a group in read mode with two items inside: a text item and a query item.
+ This is a group in read mode with two components inside: a text component and a query component.
:::image type="content" source="media/workbooks-create-workbook/workbooks-groups-view.png" alt-text="Screenshot showing a group in read mode in a workbook.":::
- In edit mode, you can see those two items are actually inside a group item. In the screenshot below, the group is in edit mode. The group contains two items inside the dashed area. Each item can be in edit or read mode, independent of each other. For example, the text step is in edit mode while the query step is in read mode.
+ In edit mode, you can see those two components are actually inside a group component. In the screenshot below, the group is in edit mode. The group contains two components inside the dashed area. Each component can be in edit or read mode, independent of each other. For example, the text component is in edit mode while the query component is in read mode.
:::image type="content" source="media/workbooks-create-workbook/workbooks-groups-edit.png" alt-text="Screenshot of a group in edit mode in a workbook.":::
A group is treated as a new scope in the workbook. Any parameters created in the
You can specify which type of group to add to your workbook. There are two types of groups:
+ - **Editable**: The group in the workbook allows you to add, remove, or edit the contents of the components in the group. This is most commonly used for layout and visibility purposes.
+ - **From a template**: The group in the workbook loads from the contents of another workbook by its ID. The content of that workbook is loaded and merged into the workbook at runtime. In edit mode, you can't modify any of the contents of the group, as they will just load again from the template next time the component loads. When loading a group from a template, use the full Azure Resource ID of an existing workbook.
### Load types
You can specify how and when the contents of a group are loaded.
#### Lazy loading
-Lazy loading is the default. In lazy loading, the group is only loaded when the item is visible. This allows a group to be used by tab items. If the tab is never selected, the group never becomes visible and therefore the content isn't loaded.
+Lazy loading is the default. In lazy loading, the group is only loaded when the component is visible. This allows a group to be used by tab components. If the tab is never selected, the group never becomes visible and therefore the content isn't loaded.
-For groups created from a template, the content of the template isn't retrieved and the items in the group aren't created until the group becomes visible. Users see progress spinners for the whole group while the content is retrieved.
+For groups created from a template, the content of the template isn't retrieved and the components in the group aren't created until the group becomes visible. Users see progress spinners for the whole group while the content is retrieved.
#### Explicit loading
In **Always** mode, the content of the group is always loaded and created as soon as the workbook loads.
When a group is configured to load from a template, by default, that content will be loaded in lazy mode, and it will only load when the group is visible.
-When a template is loaded into a group, the workbook attempts to merge any parameters declared in the template with parameters that already exist in the group. Any parameters that already exist in the workbook with identical names will be merged out of the template being loaded. If all parameters in a parameter step are merged out, the entire parameters step will disappear.
+When a template is loaded into a group, the workbook attempts to merge any parameters declared in the template with parameters that already exist in the group. Any parameters that already exist in the workbook with identical names will be merged out of the template being loaded. If all parameters in a parameter component are merged out, the entire parameters component will disappear.
#### Example 1: All parameters have identical names
Suppose you have a template that has two parameters at the top, a time range parameter and a text parameter named "Filter":
:::image type="content" source="media/workbooks-create-workbook/workbooks-groups-top-level-params.png" alt-text="Screenshot showing top level parameters in a workbook.":::
-Then a group item loads a second template that has its own two parameters and a text step, where the parameters are named the same:
+Then a group component loads a second template that has its own two parameters and a text component, where the parameters are named the same:
:::image type="content" source="media/workbooks-create-workbook/workbooks-groups-merged-away.png" alt-text="Screenshot of a workbook template with top level parameters.":::
-When the second template is loaded into the group, the duplicate parameters are merged out. Since all of the parameters are merged away, the inner parameters step is also merged out, resulting in the group containing only the text step.
+When the second template is loaded into the group, the duplicate parameters are merged out. Since all of the parameters are merged away, the inner parameters component is also merged out, resulting in the group containing only the text component.
### Example 2: One parameter has an identical name
Suppose you have a template that has two parameters at the top, a **time range** parameter and a text parameter named "**FilterB**":
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away.png" alt-text="Screenshot of a group item with the result of parameters merged away.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away.png" alt-text="Screenshot of a group component with the result of parameters merged away.":::
-When the group's item's template is loaded, the **TimeRange** parameter is merged out of the group. The workbook contains the initial parameters step with **TimeRange** and **Filter**, and the group's parameter only includes **FilterB**.
+When the group's component's template is loaded, the **TimeRange** parameter is merged out of the group. The workbook contains the initial parameters component with **TimeRange** and **Filter**, and the group's parameter only includes **FilterB**.
:::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away-result.png" alt-text="Screenshot of workbook group where parameters won't merge away.":::
-If the loaded template had contained **TimeRange** and **Filter** (instead of **FilterB**), then the resulting workbook would have a parameters step and a group with only the text step remaining.
+If the loaded template had contained **TimeRange** and **Filter** (instead of **FilterB**), then the resulting workbook would have a parameters component and a group with only the text component remaining.
### Splitting a large template into many templates
To improve performance, it's helpful to break up a large template into multiple smaller templates that load some content in lazy mode or on demand by the user. This makes the initial load faster since the top-level template can be much smaller.
-When splitting a template into parts, you'll basically need to split the template into many templates (subtemplates) that all work individually. If the top-level template has a **TimeRange** parameter that other items use, the subtemplate will need to also have a parameters item that defines a parameter with same exact name. The subtemplates will work independently and can load inside larger templates in groups.
+When splitting a template into parts, you'll basically need to split the template into many templates (sub-templates) that all work individually. If the top-level template has a **TimeRange** parameter that other components use, the sub-template will need to also have a parameters component that defines a parameter with the exact same name. The sub-templates will work independently and can load inside larger templates in groups.
-To turn a larger template into multiple subtemplates:
+To turn a larger template into multiple sub-templates:
-1. Create a new empty group near the top of the workbook, after the shared parameters. This new group will eventually become a subtemplate.
-1. Create a copy of the shared parameters step, and then use **move into group** to move the copy into the group created in step 1. This parameter allows the subtemplate to work independently of the outer template, and will get merged out when loaded inside the outer template.
+1. Create a new empty group near the top of the workbook, after the shared parameters. This new group will eventually become a sub-template.
+1. Create a copy of the shared parameters component, and then use **move into group** to move the copy into the group created in step 1. This parameter allows the sub-template to work independently of the outer template, and will get merged out when loaded inside the outer template.
> [!NOTE]
- > Subtemplates don't technically need to have the parameters that get merged out if you never plan on the sub-templates being visible by themselves. However, if the sub-templates do not have the parameters, it will make them very hard to edit or debug if you need to do so later.
+ > Sub-templates don't technically need to have the parameters that get merged out if you never plan on the sub-templates being visible by themselves. However, if the sub-templates do not have the parameters, it will make them very hard to edit or debug if you need to do so later.
-1. Move each item in the workbook you want to be in the subtemplate into the group created in step 1.
-1. If the individual steps moved in step 3 had conditional visibilities, that will become the visibility of the outer group (like used in tabs). Remove them from the items inside the group and add that visibility setting to the group itself. Save here to avoid losing changes and/or export and save a copy of the json content.
+1. Move each component in the workbook you want to be in the sub-template into the group created in step 1.
+1. If the individual components moved in step 3 had conditional visibilities, that will become the visibility of the outer group (as used in tabs). Remove them from the components inside the group and add that visibility setting to the group itself. Save here to avoid losing changes, and/or export and save a copy of the JSON content.
1. If you want that group to be loaded from a template, you can use the **Edit** toolbar button in the group. This will open just the content of that group as a workbook in a new window. You can then save it as appropriate and close this workbook view (don't close the browser, just that view to go back to the previous workbook you were editing).
-1. You can then change the group step to load from template and set the template ID field to the workbook/template you created in step 5. To work with workbooks IDs, the source needs to be the full Azure Resource ID of a shared workbook. Press *Load* and the content of that group will now be loaded from that subtemplate instead of saved inside this outer workbook.
+1. You can then change the group component to load from template and set the template ID field to the workbook/template you created in step 5. To work with workbook IDs, the source needs to be the full Azure Resource ID of a shared workbook. Press *Load* and the content of that group will now be loaded from that sub-template instead of saved inside this outer workbook.
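The full Azure Resource ID mentioned in the last step follows the standard workbook resource format; a placeholder sketch (substitute your own IDs):

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/workbooks/<workbook-id>
```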
azure-monitor Workbooks Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-criteria.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-monitor Workbooks Dropdowns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md
ibiza Previously updated : 10/23/2019 Last updated : 07/05/2022 # Workbook drop down parameters
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
Once you start creating your own workbook template, you may want to share it wit
## Pin a visualization
-Use the pin button next to a text, query, or metrics steps in a workbook can be pinned by using the pin button on those items while the workbook is in pin mode, or if the workbook author has enabled settings for that element to make the pin icon visible.
+Text, query, and metrics components in a workbook can be pinned by using the pin button on those components while the workbook is in pin mode, or if the workbook author has enabled settings for that element to make the pin icon visible.
To access pin mode, select **Edit** to enter editing mode, and select the blue pin icon in the top bar. An individual pin icon will then appear above each corresponding workbook part's *Edit* box on the right-hand side of your screen.
Pinned workbook query parts will respect the dashboard's time range if the pinned part uses a time range parameter.
Additionally, pinned workbook parts using a time range parameter will auto refresh at a rate determined by the dashboard's time range. The last time the query ran will appear in the subtitle of the pinned part.
-If a pinned step has an explicitly set time range (does not use a time range parameter), that time range will always be used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part will not show the dashboard's time range, and the query will not auto-refresh on the dashboard. The subtitle will show the last time the query executed.
+If a pinned component has an explicitly set time range (does not use a time range parameter), that time range will always be used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part will not show the dashboard's time range, and the query will not auto-refresh on the dashboard. The subtitle will show the last time the query executed.
> [!NOTE]
> Queries using the *merge* data source are not currently supported when pinning to dashboards.
Clicking on the Auto-Refresh button opens a list of intervals to let the user pick the interval for auto-refresh.
:::image type="content" source="media/workbooks-getting-started/workbooks-auto-refresh-interval.png" alt-text="Screenshot of workbooks with auto-refresh with interval set.":::
## Next Steps
+ - [Azure workbooks data sources](workbooks-data-sources.md)
azure-monitor Workbooks Graph Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md
ibiza Previously updated : 09/04/2020 Last updated : 07/05/2022 # Graph visualizations
azure-monitor Workbooks Grid Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-grid-visualizations.md
Title: Azure Monitor workbook grid visualizations
description: Learn about all the Azure Monitor workbook grid visualizations. Previously updated : 06/22/2022 Last updated : 07/05/2022
The example below shows a grid that combines icons, heatmaps, and spark-bars to
## Adding a log-based grid
-1. Switch the workbook to edit mode by clicking on the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Select **Add query** to add a log query control to the workbook.
3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target.
4. Use the Query editor to enter the KQL for your analysis (for example, VMs with memory below a threshold).
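The KQL itself is elided in this digest. A minimal sketch of the kind of query step 4 describes, assuming a standard Log Analytics `Perf` table:

```kusto
// VMs whose average available memory over the time range drops below 1 GB
Perf
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| summarize avgAvailableMB = avg(CounterValue) by Computer
| where avgAvailableMB < 1024
| order by avgAvailableMB asc
```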
azure-monitor Workbooks Honey Comb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-honey-comb.md
ibiza Previously updated : 09/18/2020 Last updated : 07/05/2022 # Honey comb visualizations
azure-monitor Workbooks Jsonpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-jsonpath.md
ibiza Previously updated : 05/06/2020 Last updated : 07/05/2022 # How to use JSONPath to transform JSON data in workbooks
By using JSONPath transformation, workbook authors are able to convert JSON into
## Using JSONPath
-1. Switch the workbook to edit mode by clicking on the *Edit* toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Use the *Add* > *Add query* link to add a query control to the workbook.
3. Select the data source as *JSON*.
4. Use the JSON editor to enter the following JSON snippet.
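The JSON snippet is elided in this digest. An illustrative payload (hypothetical fields), together with the JSONPath entries that would flatten it:

```json
{
    "store": {
        "books": [
            { "title": "Book A", "price": 8.95 },
            { "title": "Book B", "price": 12.99 }
        ]
    }
}
```

With a JSONPath table entry of `$.store.books` and column paths such as `$.title` and `$.price`, the result renders as a two-column grid.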
azure-monitor Workbooks Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-limits.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
This table lists the limits of specific data visualizations.
|Visualization|Limits |
|||
-|Grid|By default, grids only display the first 250 rows of data. This setting can be changed in the query step's advanced settings to display up to 10,000 rows. Any further items will be ignored, and a warning will be displayed.|
+|Grid|By default, grids only display the first 250 rows of data. This setting can be changed in the query component's advanced settings to display up to 10,000 rows. Any further items are ignored, and a warning will be displayed.|
|Charts|Charts are limited to 100 series.<br>Charts are limited to 10000 data points. |
-|Tiles|Tiles is limited to displaying 100 tiles. Any further items will be ignored, and a warning will be displayed.|
-|Maps|Maps are limited to displaying 100 points. Any further items will be ignored, and a warning will be displayed.|
+|Tiles|Tiles are limited to displaying 100 tiles. Any further items are ignored, and a warning will be displayed.|
+|Maps|Maps are limited to displaying 100 points. Any further items are ignored, and a warning will be displayed.|
|Text|Text visualization only displays the first cell of data returned by a query. Any other data is ignored.|
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
Title: Azure Monitor Workbooks link actions description: How to use link actions in Azure Monitor Workbooks Previously updated : 06/23/2022 Last updated : 07/05/2022 # Link actions
-Link actions can be accessed through Workbook link steps or through column settings of [grids](../visualize/workbooks-grid-visualizations.md), [titles](../visualize/workbooks-tile-visualizations.md), or [graphs](../visualize/workbooks-graph-visualizations.md).
+Link actions can be accessed through workbook link components or through column settings of [grids](../visualize/workbooks-grid-visualizations.md), [tiles](../visualize/workbooks-tile-visualizations.md), or [graphs](../visualize/workbooks-graph-visualizations.md).
## General link actions
When the workbook link is opened, the new workbook view will be passed all of th
|Column| When selected, another field will be displayed to let the author select another column in the grid. The value of that column for the row will be used in the link value. This is commonly used to enable each row of a grid to open a different template, by setting the **Template Id** field to **column**, or to open up the same workbook template for different resources, if the **Workbook resources** field is set to a column that contains an Azure Resource ID. |
|Parameter| When selected, another field will be displayed to let the author select a parameter. The value of that parameter will be used for the value when the link is clicked |
|Static value| When selected, another field will be displayed to let the author type in a static value that will be used in the linked workbook. This is commonly used when all of the rows in the grid will use the same value for a field. |
-|Step| Use the value set in the current step of the workbook. This is common in query and metrics steps to set the workbook resources in the linked workbook to those used *in the query/metrics step*, not the current workbook. |
+|Component| Use the value set in the current component of the workbook. This is common in query and metrics components to set the workbook resources in the linked workbook to those used in the query/metrics component, not the current workbook. |
|Workbook| Use the value set in the current workbook. |
|Default| Use the default value that would be used if no value was specified. This is common for Gallery Type, where the default gallery would be set by the type of the owner resource. |
azure-monitor Workbooks Map Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-map-visualizations.md
ibiza Previously updated : 11/25/2020 Last updated : 07/05/2022 # Map visualization
Map can be visualized if the underlying data/metrics has Latitude/Longitude info
### Using Azure location
-1. Switch the workbook to edit mode by selecting on the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Select **Add**, then *Add query*.
3. Change the *Data Source* to `Azure Resource Graph`, then pick any subscription that has a storage account.
4. Enter the query below for your analysis and then select **Run Query**.
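The query text is elided in this digest. An illustrative Azure Resource Graph query for step 4 counts storage accounts per Azure region, which gives the map something to plot:

```kusto
// Count storage accounts by Azure region (location)
resources
| where type =~ 'microsoft.storage/storageaccounts'
| summarize count() by location
```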
Map can be visualized if the underlying data/metrics has Latitude/Longitude info
### Using Azure resource
-1. Switch the workbook to edit mode by selecting on the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Select **Add**, then *Add Metric*.
3. Use a subscription that has storage accounts.
4. Change *Resource Type* to `storage account` and in *Resource* select multiple storage accounts.
Map can be visualized if the underlying data/metrics has Latitude/Longitude info
### Using country/region
-1. Switch the workbook to edit mode by selecting on the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Select **Add**, then *Add query*.
3. Change the *Data source* to `Log`.
4. Select *Resource type* as `Application Insights`, then pick any Application Insights resource that has pageViews data.
Map can be visualized if the underlying data/metrics has Latitude/Longitude info
### Using latitude/location
-1. Switch the workbook to edit mode by selecting on the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Select **Add**, then *Add query*.
3. Change the *Data source* to `JSON`.
4. Enter the JSON data below in the query editor and select **Run Query**.
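The JSON data is elided in this digest. A minimal illustrative dataset with the latitude/longitude fields the map visualization expects (values are hypothetical):

```json
[
    { "latitude": 47.6062, "longitude": -122.3321, "city": "Seattle", "count": 3 },
    { "latitude": 51.5074, "longitude": -0.1278, "city": "London", "count": 5 }
]
```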
azure-monitor Workbooks Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-move-region.md
ibiza Previously updated : 08/12/2020 Last updated : 07/05/2022 #Customer intent: As an Azure service administrator, I want to move my resources to another Azure region
azure-monitor Workbooks Multi Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-multi-value.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-monitor Workbooks Options Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-options-group.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-monitor Workbooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-overview.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-monitor Workbooks Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-parameters.md
ibiza Previously updated : 10/23/2019 Last updated : 07/05/2022 # Workbook parameters
azure-monitor Workbooks Renderers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-renderers.md
Title: Azure Workbook rendering options
description: Learn about all the Azure Monitor workbook rendering options. Previously updated : 06/22/2022 Last updated : 07/05/2022
azure-monitor Workbooks Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-resources.md
description: Learn how to use resource parameters to allow picking of resources
ibiza Previously updated : 10/23/2019 Last updated : 07/05/2022 # Workbook resource parameters
azure-monitor Workbooks Sample Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-sample-links.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-monitor Workbooks Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-templates.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-monitor Workbooks Text Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text-visualizations.md
ibiza Previously updated : 09/04/2020 Last updated : 07/05/2022 # Text visualizations
Edit Mode:
Preview Mode:
-![Screenshot of a text step in edit mode on the preview tab.](./media/workbooks-text-visualizations/text-edit-mode-preview.png)
+![Screenshot of a text component in edit mode on the preview tab.](./media/workbooks-text-visualizations/text-edit-mode-preview.png)
## Add a text control
-1. Switch the workbook to edit mode by clicking on the **Edit** toolbar item.
+1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
2. Use the **Add text** link to add a text control to the workbook.
3. Add Markdown in the editor field.
4. Use the *Text Style* option to switch between plain markdown and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
5. Use the **Preview** tab to see how your content will look. While editing, the preview will show the content inside a scrollbar area to limit its size; however, at runtime the markdown content will expand to fill whatever space it needs, with no scrollbars.
-6. Select the **Done Editing** button to complete editing the step.
+6. Select the **Done Editing** button to complete editing the component.
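As a quick illustration of step 3, any standard Markdown works in the editor field; for example:

```markdown
## Deployment health
This section summarizes **request failures** over the selected time range.
- See the [troubleshooting guide](https://example.com) for next steps.
```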
> [!TIP]
> Use this [Markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to learn about different formatting options.
## Text styles
-The following text styles are available for text step:
+The following text styles are available for text components:
| Style | Explanation |
|--|-|
azure-monitor Workbooks Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text.md
ibiza Previously updated : 07/02/2021 Last updated : 07/05/2022 # Workbook text parameters
azure-monitor Workbooks Tile Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tile-visualizations.md
ibiza Previously updated : 09/04/2020 Last updated : 07/05/2022 # Tile visualizations
azure-monitor Workbooks Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-time.md
ibiza Previously updated : 10/23/2019 Last updated : 07/05/2022 # Workbook time parameters
azure-monitor Workbooks Tree Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tree-visualizations.md
ibiza Previously updated : 09/04/2020 Last updated : 07/05/2022 # Tree visualizations
azure-monitor Workbooks Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-visualizations.md
Previously updated : 05/30/2022 Last updated : 07/05/2022
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
This function returns the object that is passed during deployment. The properties in the returned object differ based on whether you are:
* deploying a local Bicep file.
* deploying to a resource group or deploying to one of the other scopes ([Azure subscription](deploy-to-subscription.md), [management group](deploy-to-management-group.md), or [tenant](deploy-to-tenant.md)).
-When deploying a local Bicep file to a resource group: the function returns the following format:
+When deploying a local Bicep file to a resource group, the function returns the following format:
```json
{
When deploying a local Bicep file to a resource group: the function returns the
}
```
-When you deploy to an Azure subscription, management group, or tenant, the return object includes a `location` property. The location property is included when deploying a local Bicep file. The format is:
+When you deploy to an Azure subscription, management group, or tenant, the return object includes a `location` property. The `location` property is not included when deploying a local Bicep file. The format is:
```json
{
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/data-types.md
Title: Data types in Bicep description: Describes the data types that are available in Bicep Previously updated : 09/30/2021 Last updated : 07/06/2022 # Data types in Bicep
Within a Bicep file, you can use these data types:
## Arrays
-Arrays start with a left bracket (`[`) and end with a right bracket (`]`). In Bicep, an array must be declared in multiple lines. Don't use commas between values.
+Arrays start with a left bracket (`[`) and end with a right bracket (`]`). In Bicep, an array can be declared in a single line or in multiple lines. Commas (`,`) are used between values in single-line declarations, but aren't used in multiple-line declarations. You can mix and match single-line and multiple-line declarations. The multiple-line declaration requires **Bicep version 0.7.4 or later**.
+
+```bicep
+var multiLineArray = [
+ 'abc'
+ 'def'
+ 'ghi'
+]
+
+var singleLineArray = ['abc', 'def', 'ghi']
+
+var mixedArray = ['abc', 'def'
+ 'ghi']
+```
In an array, each item is represented by the [any type](bicep-functions-any.md). You can have an array where each item is the same data type, or an array that holds different data types.
Floating point, decimal or binary formats aren't currently supported.
## Objects
-Objects start with a left brace (`{`) and end with a right brace (`}`). In Bicep, an object must be declared in multiple lines. Each property in an object consists of key and value. The key and value are separated by a colon (`:`). An object allows any property of any type. Don't use commas to between properties.
+Objects start with a left brace (`{`) and end with a right brace (`}`). In Bicep, an object can be declared in a single line or in multiple lines. Each property in an object consists of a key and a value. The key and value are separated by a colon (`:`). An object allows any property of any type. Commas (`,`) are used between properties in single-line declarations, but aren't used in multiple-line declarations. You can mix and match single-line and multiple-line declarations. The multiple-line declaration requires **Bicep version 0.7.4 or later**.
```bicep
-param exampleObject object = {
+param singleLineObject object = {name: 'test name', id: '123-abc', isCurrent: true, tier: 1}
+
+param multiLineObject object = {
 name: 'test name'
 id: '123-abc'
 isCurrent: true
 tier: 1
 }
+
+param mixedObject object = {name: 'test name', id: '123-abc', isCurrent: true
+ tier: 1}
```
In Bicep, quotes are optionally allowed on object property keys:
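The example that followed this sentence is elided in the digest; a minimal sketch of quoted and unquoted keys:

```bicep
var settings = {
  'my-key': 'value1'   // quoted key, needed when the key isn't a valid identifier
  environment: 'prod'  // unquoted key
}
```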
azure-resource-manager File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/file.md
Title: Bicep file structure and syntax description: Describes the structure and properties of a Bicep file using declarative syntax. Previously updated : 11/17/2021 Last updated : 07/06/2022 # Understand the structure and syntax of Bicep files
The preceding example is equivalent to the following JSON.
}
```
+## Multiple-line declarations
+
+You can now use multiple lines in function, array, and object declarations. This feature requires **Bicep version 0.7.4 or later**.
+
+In the following example, the `resourceGroup()` function call is broken into multiple lines.
+
+```bicep
+var foo = resourceGroup(
+ mySubscription,
+ myRgName)
+```
+
+See [Arrays](./data-types.md#arrays) and [Objects](./data-types.md#objects) for multiple-line declaration samples.
+ ## Known limitations
-- No support for the concept of apiProfile, which is used to map a single apiProfile to a set apiVersion for each resource type.
-- No support for user-defined functions.
-- Some Bicep features require a corresponding change to the intermediate language (Azure Resource Manager JSON templates). We announce these features as available when all of the required updates have been deployed to global Azure. If you're using a different environment, such as Azure Stack, there may be a delay in the availability of the feature. The Bicep feature is only available when the intermediate language has also been updated in that environment.
+* No support for the concept of apiProfile, which is used to map a single apiProfile to a set apiVersion for each resource type.
+* No support for user-defined functions.
+* Some Bicep features require a corresponding change to the intermediate language (Azure Resource Manager JSON templates). We announce these features as available when all of the required updates have been deployed to global Azure. If you're using a different environment, such as Azure Stack, there may be a delay in the availability of the feature. The Bicep feature is only available when the intermediate language has also been updated in that environment.
## Next steps
azure-sql-edge Create External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/create-external-stream-transact-sql.md
WITH ( <with_options> )
The staging area for high-throughput data ingestion into Azure Synapse Analytics - Reserved for future usage. Does not apply to Azure SQL Edge.
-For more information about supported input and output options corresponding to the data source type, see [Azure Stream Analytics - Input Overview](../../stream-analytics/stream-analytics-add-inputs.md) and [Azure Stream Analytics - Outputs Overview](../../stream-analytics/stream-analytics-define-outputs.md) respectively.
+For more information about supported input and output options corresponding to the data source type, see [Azure Stream Analytics - Input Overview](../stream-analytics/stream-analytics-add-inputs.md) and [Azure Stream Analytics - Outputs Overview](../stream-analytics/stream-analytics-define-outputs.md) respectively.
## Examples
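The examples themselves are elided in this digest. A minimal sketch of an input stream definition, assuming a data source named `EdgeHubInput` and a file format named `JsonFormat` were created beforehand (both names are hypothetical):

```sql
-- Illustrative only: EdgeHubInput and JsonFormat are assumed to already exist
CREATE EXTERNAL STREAM TemperatureSensorInput
WITH (
    DATA_SOURCE = EdgeHubInput,
    FILE_FORMAT = JsonFormat,
    LOCATION = N'temperature-topic'
);
```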
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
+
+ Title: Azure Video Indexer accounts
+description: This article gives an overview of Azure Video Indexer accounts and provides links to other articles for more details.
+ Last updated : 06/22/2022+++
+# Azure Video Indexer account types
+
+This article gives an overview of Azure Video Indexer accounts and provides links to other articles for more details.
+
+## Differences between classic, ARM, and trial accounts
+
+Classic and ARM (Azure Resource Manager) accounts are both paid accounts with similar data plane capabilities and pricing. The main difference is that the control plane of classic accounts is managed by Azure Video Indexer, while the control plane of ARM accounts is managed by Azure Resource Manager.
+Going forward, ARM accounts will support more Azure native features and integrations, such as Azure Monitor, private endpoints, service tags, and CMK (customer-managed keys).
+**The recommended paid account type is the ARM-based account.**
+
+| | ARM-based |Classic| Trial|
+|||||
+|Get access token | [ARM REST API](https://aka.ms/avam-arm-api) |[Get access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)|Same as classic|
+|Share account| [Azure RBAC (role-based access control)](../role-based-access-control/overview.md)| [Invite users](invite-users.md) |Same as classic|
++
+A trial Azure Video Indexer account has limitations on the number of videos, support, and SLA.
+
+### Indexing
+
+* Free trial account: up to 10 hours of free indexing, and up to 40 hours of free indexing for API registered users.
+* Paid unlimited account: for larger scale indexing, create a new Video Indexer account connected to a paid Microsoft Azure subscription.
+
+For more details, see [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
+
+### Create accounts
+
+* ARM accounts: [Get started with Azure Video Indexer in Azure portal](create-account-portal.md)
+
+ * Upgrade a trial account to an ARM-based account and [**import** your content for free](connect-to-azure.md#import-your-content-from-the-trial-account).
+* Classic accounts: [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account).
+* Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md).
+
+## Limited access features
+
+This section talks about limited access features in Azure Video Indexer.
+
+|When did I create the account?|Trial Account (Free)| Paid Account <br/>(classic or ARM-based)|
+||||
+|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization, and celebrity recognition until June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMkZIOUE1R0YwMkU0M1NMUTA0QVNXVDlKNiQlQCN0PWcu). Based on the eligibility criteria, we will enable the features also after the grace period. |Able to access face identification, customization, and celebrity recognition until June 2023\*.<br/><br/>**Recommended**: Fill in the [intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMkZIOUE1R0YwMkU0M1NMUTA0QVNXVDlKNiQlQCN0PWcu). Based on the eligibility criteria, we will enable the features also after the grace period. <br/><br/>We proactively sent emails to these customers + AEs.|
+|New VI accounts <br/><br/>created after June 21, 2022 |Not able to access face identification, customization, and celebrity recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMkZIOUE1R0YwMkU0M1NMUTA0QVNXVDlKNiQlQCN0PWcu). Based on the eligibility criteria, we will enable the features (after max 10 days).|Azure Video Indexer disables access to face identification, customization, and celebrity recognition by default as of today, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMkZIOUE1R0YwMkU0M1NMUTA0QVNXVDlKNiQlQCN0PWcu). Based on the eligibility criteria, we will enable the features (after max 10 days).|
+
+\*In Brazil South, we also disabled face detection.
+
+For more information, see [Azure Video Indexer limited access features](limited-access-features.md).
+
+## Next steps
+
+[Pricing](https://azure.microsoft.com/pricing/details/video-indexer/)
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
In this article, we will go through options on connecting your **existing** Azur
## Prerequisites
+Before creating a new account, review [Account types](accounts-overview.md).
+
1. Unlimited paid Azure Video Indexer account (classic account).
1. To perform the connect to the ARM (Azure Resource Manager) action, you should have owner's permissions on the Azure Video Indexer classic account.
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
# Create an Azure Video Indexer account
-When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Indexer offers two types of accounts: classic accounts(General Availability), and ARM-based accounts(Public Preview). Main difference between the two is account management platform. While classic accounts are built on the API Management, ARM-based accounts management is built on Azure, enables to apply access control to all services with role-based access control (Azure RBAC) natively.
+When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Indexer offers two types of accounts: classic accounts (General Availability) and ARM-based accounts (Public Preview). The main difference between the two is the account management platform. While classic accounts are built on the API Management, ARM-based account management is built on Azure, which enables applying access control to all services with role-based access control (Azure RBAC) natively.
+
+> [!NOTE]
+> Before creating a new account, review [Account types](accounts-overview.md).
* You can create an Azure Video Indexer **classic** account through our [API](https://aka.ms/avam-dev-portal).
* You can create an Azure Video Indexer **ARM-based** account through one of the following:
To read more on how to create a **new ARM-Based** Azure Video Indexer account, r
For more details, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
## How to create classic accounts
+
This article shows how to create an Azure Video Indexer classic account. The topic provides steps for connecting to Azure using the automatic (default) flow. It also shows how to connect to Azure manually (advanced). If you are moving from a *trial* to *paid ARM-Based* Azure Video Indexer account, you can choose to copy all of the videos and model customization to the new account, as discussed in the [Import your content from the trial account](#import-your-content-from-the-trial-account) section.
The article also covers [Linking an Azure Video Indexer account to Azure Governm
Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the "Registered" state, click **Register**. It takes a couple of minutes to register.
- :::image type="content" alt-text="Screenshot that shows how to select an event grid subscription." source="./media/create-account/event-grid.png":::
+ :::image type="content" alt-text="Screenshot that shows how to select an Event Grid subscription." source="./media/create-account/event-grid.png":::
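If you'd rather script the registration above, a minimal sketch with the Azure SDK for Python (azure-identity and azure-mgmt-resource; the subscription ID is a placeholder) might look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID; authentication uses whatever DefaultAzureCredential finds.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Register both resource providers required by Azure Video Indexer.
for namespace in ("Microsoft.Media", "Microsoft.EventGrid"):
    provider = client.providers.register(namespace)
    print(namespace, provider.registration_state)
```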
## Connect to Azure manually (advanced option)
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
To start using Azure Video Indexer, you'll need to create an Azure Video Indexer
## Prerequisites
+### Account types
+
+Before creating a new account, review [Account types](accounts-overview.md).
+ ### Azure level * This user should be a member of your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. A user can be added twice, with two roles: once with Contributor and once with User Access Administrator. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Azure Video Indexer is a cloud application, part of Azure Applied AI Services, b
Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure Video Indexer in the background. > [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Azure Video Indexer flow diagram":::
+> :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Diagram of Azure Video Indexer flow.":::
To start extracting insights with Azure Video Indexer, you need to [create an account](connect-to-azure.md) and upload videos; see the [how can I get started](#how-can-i-get-started-with-azure-video-indexer) section below.
The following list shows the insights you can retrieve from your videos using Az
* **Animated characters detection** (preview): Detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). For more information, see [Animated character detection](animated-characters-recognition.md). * **Editorial shot type detection**: Tagging shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection). * **Observed People Tracking** (preview): detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
- * **People's detected clothing**: detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing are associated with the people wearing it and the exact timestamp (start,end) along with a confidence level for the detection are provided.
+ * **People's detected clothing**: detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants, and skirt or dress. The detected clothing is associated with the person wearing it, and the exact timestamp (start, end) is provided along with a confidence level for the detection.
* **Matched person**: matches between people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contain a confidence level. ### Audio insights
When indexing by one channel, partial results for those models will be available.
## How can I get started with Azure Video Indexer?
+### Prerequisite
+
+Before creating a new account, review [Account types](accounts-overview.md).
+
+### Start using Azure Video Indexer
+ You can access Azure Video Indexer capabilities in three ways:
-* Azure Video Indexer portal: An easy to use solution that lets you evaluate the product, manage the account, and customize models.
+* Azure Video Indexer portal: An easy-to-use solution that lets you evaluate the product, manage the account, and customize models.
For more information about the portal, see [Get started with the Azure Video Indexer website](video-indexer-get-started.md).
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
For example, the following rule is set to Match External Address, and this setti
If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM. For more information on the NSX-T Gateway Firewall, see the [NSX-T Gateway Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html)
-The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see [NSX-T Distributed Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html)git status.
+The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see the [NSX-T Distributed Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
## Next steps [Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md)
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-create-private-cloud.md
Last updated 09/29/2021
# Tutorial: Deploy an Azure VMware Solution private cloud
-The Azure VMware Solution private gives you the ability to deploy a vSphere cluster in Azure. For each private cloud created, there's one vSAN cluster by default. You can add, delete, and scale clusters. The minimum number of hosts per cluster is three. More hosts can be added one at a time, up to a maximum of 16 hosts per cluster. The maximum number of clusters per private cloud is four. The initial deployment of Azure VMware Solution has three hosts.
+The Azure VMware Solution private cloud gives you the ability to deploy a vSphere cluster in Azure. For each private cloud created, there's one vSAN cluster by default. You can add, delete, and scale clusters. The minimum number of hosts per cluster is three. More hosts can be added one at a time, up to a maximum of 16 hosts per cluster. The maximum number of clusters per private cloud is 12. The initial deployment of Azure VMware Solution has three hosts.
You use vCenter Server and NSX-T Manager to manage most other aspects of cluster configuration or operation. All local storage of each host in a cluster is under the control of vSAN. >[!TIP]
->You can always extend the cluster and add additional clusters later if you need to go beyond the initial deployment number.
+>You can always extend the cluster and add more clusters later if you need to go beyond the initial deployment number.
Because Azure VMware Solution doesn't allow you to manage your private cloud with your cloud vCenter Server at launch, you'll need to do additional steps for the configuration. This tutorial covers these steps and related prerequisites.
azure-web-pubsub Quickstart Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-bicep-template.md
-# Quickstart: Use Bicep to deploy Azure Web PubSub Service
+# Quickstart: Use Bicep to deploy Azure Web PubSub service
This quickstart describes how to use Bicep to create an Azure Web PubSub service using Azure CLI or PowerShell.
az group delete --name exampleRG
```azurepowershell-interactive Remove-AzResourceGroup -Name exampleRG ```-+ ## Next steps For a step-by-step tutorial that guides you through the process of creating a Bicep file using Visual Studio Code, see:
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 05/16/2022 Last updated : 07/06/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
The following table summarizes the different backup alerts currently available (
| **Alert Category** | **Alert Name** | **Supported workload types / vault types** | **Description** | | | - | | -- |
-| Security | Delete Backup Data | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - DPM <br><br> - Azure Backup Server <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | This alert is fired when a user stops backup and deletes backup data (Note - If soft-delete feature is disabled for the vault, Delete Backup Data alert is not received) |
+| Security | Delete Backup Data | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM <br><br> - Azure Backup Agent <br><br> - DPM <br><br> - Azure Backup Server <br><br> - Azure Database for PostgreSQL Server <br><br> - Azure Blobs <br><br> - Azure Managed Disks | The Stop protection with delete data alert is generated only if the soft-delete feature is enabled for the vault. If soft delete is disabled for a vault, a single alert is sent to notify the user that soft delete has been disabled; subsequent deletion of backup data for any item does not raise an alert. |
| Security | Upcoming Purge | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | For all workloads which support soft-delete, this alert is fired when the backup data for an item is 2 days away from being permanently purged by the Azure Backup service | | Security | Purge Complete | - Azure Virtual Machine <br><br> - SQL in Azure VM (non-AG scenarios) <br><br> - SAP HANA in Azure VM | Delete Backup Data | | Security | Soft Delete Disabled for Vault | Recovery Services vaults | This alert is fired when the soft-deleted backup data for an item has been permanently deleted by the Azure Backup service |
cdn Cdn Sas Storage Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-sas-storage-support.md
This option is the simplest and uses a single SAS token, which is passed from Az
For example: ```
- https://demoendpoint.azureedge.net/container1/demo.jpg/?sv=2017-07-29&ss=b&srt=c&sp=r&se=2027-12-19T17:35:58Z&st=2017-12-19T09:35:58Z&spr=https&sig=kquaXsAuCLXomN7R00b8CYM13UpDbAHcsRfGOW3Du1M%3D
+ https://demoendpoint.azureedge.net/container1/demo.jpg?sv=2017-07-29&ss=b&srt=c&sp=r&se=2027-12-19T17:35:58Z&st=2017-12-19T09:35:58Z&spr=https&sig=kquaXsAuCLXomN7R00b8CYM13UpDbAHcsRfGOW3Du1M%3D
``` 3. Fine-tune the cache duration either by using caching rules or by adding `Cache-Control` headers at the origin server. Because Azure CDN treats the SAS token as a plain query string, as a best practice you should set up a caching duration that expires at or before the SAS expiration time. Otherwise, if a file is cached for a longer duration than the SAS is active, the file may be accessible from the Azure CDN origin server after the SAS expiration time has elapsed. If this situation occurs, and you want to make your cached file inaccessible, you must perform a purge operation on the file to clear it from the cache. For information about setting the cache duration on Azure CDN, see [Control Azure CDN caching behavior with caching rules](cdn-caching-rules.md).
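To illustrate the preceding step, here's a hedged Python sketch (azure-storage-blob package; the account name, key, and one-hour lifetime are placeholder assumptions) that generates a read-only SAS and sets a matching `Cache-Control` max-age on the origin blob:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (BlobClient, BlobSasPermissions, ContentSettings,
                                generate_blob_sas)

ACCOUNT, KEY = "demostorage", "<account-key>"  # placeholders
lifetime = timedelta(hours=1)                  # SAS lifetime
expiry = datetime.now(timezone.utc) + lifetime

# Read-only SAS for the blob served through the CDN endpoint.
sas = generate_blob_sas(account_name=ACCOUNT, container_name="container1",
                        blob_name="demo.jpg", account_key=KEY,
                        permission=BlobSasPermissions(read=True), expiry=expiry)

# Cache-Control max-age matching the SAS lifetime, so the CDN cache
# expires at (or before) the time the SAS stops being valid.
blob = BlobClient(f"https://{ACCOUNT}.blob.core.windows.net",
                  "container1", "demo.jpg", credential=KEY)
blob.set_http_headers(content_settings=ContentSettings(
    cache_control=f"public, max-age={int(lifetime.total_seconds())}"))

print(f"https://demoendpoint.azureedge.net/container1/demo.jpg?{sas}")
```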
For more information about SAS, see the following articles:
- [Using shared access signatures (SAS)](../storage/common/storage-sas-overview.md) - [Shared Access Signatures, Part 2: Create and use a SAS with Blob storage](../storage/common/storage-sas-overview.md)
-For more information about setting up token authentication, see [Securing Azure Content Delivery Network assets with token authentication](./cdn-token-auth.md).
+For more information about setting up token authentication, see [Securing Azure Content Delivery Network assets with token authentication](./cdn-token-auth.md).
certification How To Test Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-device-update.md
+
+ Title: How to test Device Update for IoT Hub
+description: A guide describing how to test Device Update for IoT Hub on a Linux host in preparation for Edge Secured-core certification.
++++ Last updated : 06/20/2022+++
+# How to test Device Update for IoT Hub
+The [Device Update for IoT Hub](..\iot-hub-device-update\understand-device-update.md) test exercises your device's ability to receive an update from IoT Hub. The following steps will guide you through the process to test Device Update for IoT Hub when attempting device certification.
+
+## Prerequisites
+* Device must be capable of running a Linux [IoT Edge supported container](..\iot-edge\support.md).
+* Your device must be capable of receiving an [.SWU update](https://swupdate.org/) and be able to return to a running and connected state after the update is applied.
+* The update package and manifest must be applicable to the device under test. (Example: If the device is running "Version 1.0", the update should be "Version 2.0".)
+* Upload your .SWU file to a blob storage location of your choice.
+* Create a SAS URL for accessing the uploaded .SWU file, as sketched after this list.
+
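For the last two prerequisites, a minimal Python sketch using the azure-storage-blob package (the storage account, container, blob name, key, and seven-day lifetime are placeholder assumptions) to produce the SAS URL:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT, CONTAINER, BLOB = "mystorage", "updates", "myupdate.swu"  # placeholders

sas = generate_blob_sas(account_name=ACCOUNT, container_name=CONTAINER,
                        blob_name=BLOB, account_key="<account-key>",
                        permission=BlobSasPermissions(read=True),
                        # Long enough to cover the whole certification test run.
                        expiry=datetime.now(timezone.utc) + timedelta(days=7))

print(f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas}")
```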
+## Testing the device
+ 1. On the Connect + test page, select **"Yes"** for the **"Are you able to test Device Update for IoT Hub?"** question.
+ > [!Note]
+ > If you are not able to test Device Update and select No, you will still be able to run all other Secured-core tests, but your product will not be eligible for certification.
++
+2. Proceed with connecting your device to the test infrastructure.
+
+3. Select **"Upload"** to upload the ".manifest.json" file.
+
+4. On the Select Requirement Validation step, select the **"Upload"** button at the bottom of the page.
+
+ :::image type="content" source="./media/how-to-adu/select-tests.png" alt-text="Dialog that shows the selected tests that will be validated.":::
+
+5. Upload your .importmanifest.json file by selecting the **Choose File** button. Select your file and then select the **Upload** button.
+ > [!Note]
+ > The file extension must be .importmanifest.json.
+
+ :::image type="content" source="./media/how-to-adu/upload-swu.png" alt-text="Dialog that shows how the SWU file can be uploaded.":::
+
+6. Copy and paste the SAS URL to the location of your .SWU file in the provided input box, then select the **Validate** button.
+ :::image type="content" source="./media/how-to-adu/validate-swu.png" alt-text="Dialog that shows how the SAS url is applied.":::
+
+7. Once we've validated that our service can reach the provided URL, select **Import**.
+ :::image type="content" source="./media/how-to-adu/staging-complete.png" alt-text="Dialog that shows the staging process is complete.":::
+
+ > [!Note]
+ > If you receive an "Invalid SAS URL" message, generate a new SAS URL from your storage blob and try again.
+
+8. Select **Continue** to proceed.
+
+9. Congratulations! You're now ready to proceed with Edge Secured-core testing.
+
+10. Select the **Run tests** button to begin the testing process. Your device will be updated as the final step in our Edge Secured-core testing.
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-edge-secured-core.md
description: Edge Secured-core Certification program requirements
Previously updated : 05/15/2021 Last updated : 06/21/2021 zone_pivot_groups: app-service-platform-windows-linux
-# Azure Certified Device - Edge Secured-core (Preview) #
+# Azure Certified Device - Edge Secured-core #
## Edge Secured-Core certification requirements ##
Edge Secured-core is an incremental certification in the Azure Certified Device
6. Built in security agent and hardening ## Preview Program Support
-While in public preview, we are supporting a small number of partners to pre-validate devices against the Edge Secured-core program requirements. If you would like participate in the Edge Secured-core public preview, please contact iotcert@microsoft.com
+While in public preview, we are supporting a small number of partners to pre-validate devices against the Edge Secured-core program requirements. If you would like to participate in the Edge Secured-core public preview, please contact iotcert@microsoft.com.
Overview content ::: zone pivot="platform-windows"
cloud-services-extended-support Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-networkconfiguration.md
# Azure Cloud Services (extended support) config networkConfiguration schema
-The `NetworkConfiguration` element of the service configuration file specifies Virtual Network and DNS values. These settings are optional for Cloud Services.
+The `NetworkConfiguration` element of the service configuration file specifies Virtual Network and DNS values. These settings are optional for Cloud Services (classic).
You can use the following resource to learn more about Virtual Networks and the associated schemas:
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
You can use startup tasks to perform operations before a role starts. Installing
REM ***** To install .NET 4.7 set the variable netfx to "NDP47" ***** REM ***** To install .NET 4.7.1 set the variable netfx to "NDP471" ***** https://go.microsoft.com/fwlink/?LinkId=852095 REM ***** To install .NET 4.7.2 set the variable netfx to "NDP472" ***** https://go.microsoft.com/fwlink/?LinkId=863262
- set netfx="NDP472"
REM ***** To install .NET 4.8 set the variable netfx to "NDP48" ***** https://dotnet.microsoft.com/download/thank-you/net48
-
+ set netfx="NDP48"
+
REM ***** Set script start timestamp ***** set timehour=%time:~0,2% set timestamp=%date:~-4,4%%date:~-10,2%%date:~-7,2%-%timehour: =0%%time:~3,2%
You can use startup tasks to perform operations before a role starts. Installing
set TEMP=%PathToNETFXInstall% REM ***** Setup .NET filenames and registry keys *****
+ if %netfx%=="NDP48" goto NDP48
if %netfx%=="NDP472" goto NDP472 if %netfx%=="NDP471" goto NDP471 if %netfx%=="NDP47" goto NDP47
You can use startup tasks to perform operations before a role starts. Installing
:NDP472 set "netfxinstallfile=NDP472-KB4054531-Web.exe"
- set netfxregkey="0x70BF6"
+ set netfxregkey="0x70BF0"
+ goto logtimestamp
+
+ :NDP48
+ set "netfxinstallfile=NDP48-Web.exe"
+ set netfxregkey="0x80EA8"
goto logtimestamp :logtimestamp
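For context, the `netfxregkey` values above are thresholds for the `Release` DWORD that Windows writes under `HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full`: 0x70BF0 (461808) corresponds to .NET 4.7.2 and 0x80EA8 (528040) to .NET 4.8 (exact values vary slightly by OS build). A small Python sketch to verify the installed value on a role instance:

```python
# Quick check (run on the Windows role instance) that the installed .NET
# Framework Release value meets the thresholds the startup task compares against.
import winreg

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full")
release, _ = winreg.QueryValueEx(key, "Release")
print(f"Release = {release} (0x{release:X}) -> .NET 4.8 present: {release >= 0x80EA8}")
```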
cognitive-services Anomaly Detection Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/anomaly-detection-best-practices.md
+ Last updated 01/22/2021
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
Last updated 06/07/2022+ keywords: anomaly detection, machine learning, algorithms
cognitive-services Multivariate Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/multivariate-architecture.md
Last updated 04/01/2021 + keywords: anomaly detection, machine learning, algorithms
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
Title: What is Multivariate Anomaly Detector?
+ Title: What is Multivariate Anomaly Detection?
description: Overview of new Anomaly Detector preview multivariate APIs.
- Previously updated : 01/16/2022+ Last updated : 07/06/2022 keywords: anomaly detection, machine learning, algorithms
keywords: anomaly detection, machine learning, algorithms
# What is Multivariate Anomaly Detector? (Public Preview)
-The new **multivariate anomaly detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
+The **multivariate anomaly detection** APIs enable developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
![Multiple time series line graphs for variables of: rotation, optical filter, pressure, bearing with anomalies highlighted in orange](./media/multivariate-graph.png) Imagine 20 sensors from an auto engine generating 20 different signals like rotation, fuel pressure, bearing, etc. The readings of those signals individually may not tell you much about system level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data such that it understands the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools.
-## When to use **multivariate** versus **univariate**
-
-If your goal is to detect anomalies out of a normal pattern on each individual time series purely based on their own historical data, use univariate anomaly detection APIs. For example, you want to detect daily revenue anomalies based on revenue data itself, or you want to detect a CPU spike purely based on CPU data.
-
-If your goal is to detect system level anomalies from a group of time series data, use multivariate anomaly detection APIs. Particularly, when any individual time series won't tell you much, and you have to look at all signals (a group of time series) holistically to determine a system level issue. For example, you have an expensive physical asset like aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of different types of sensors. You would have to look at all those time series signals from those sensors to decide whether there is system level issue.
## Sample Notebook
See the following technical documents for information about the algorithms used:
## Join the Anomaly Detector community -- Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin)
+Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
## Next steps - [Tutorial](./tutorials/learn-multivariate-anomaly-detection.md): This article is an end-to-end tutorial of how to use the multivariate APIs. - [Quickstarts](./quickstarts/client-libraries-multivariate.md).-- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
+- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
cognitive-services Overview Univariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview-univariate.md
+
+ Title: What is the Univariate Anomaly Detector?
+
+description: Use the Anomaly Detector univariate API's algorithms to apply anomaly detection on your time series data.
++++++ Last updated : 07/06/2022+
+keywords: anomaly detection, machine learning, algorithms
+++
+# What is Univariate Anomaly Detector?
+
+The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
+
+![Detect pattern changes in service requests](./media/anomaly_detection2.png)
+
+Using the Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes.
++
+## Features
+
+With the Univariate Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
+
+|Feature |Description |
+|||
+|Anomaly detection in real-time. | Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. |
+|Detect anomalies throughout your data set as a batch. | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
+|Detect change points throughout your data set as a batch. | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
+
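As an illustrative sketch of the batch ("entire series") mode, assuming the preview azure-ai-anomalydetector Python client library (the endpoint, key, and toy series are placeholders):

```python
from datetime import datetime, timedelta, timezone
from azure.ai.anomalydetector import AnomalyDetectorClient
from azure.ai.anomalydetector.models import (DetectRequest, TimeGranularity,
                                             TimeSeriesPoint)
from azure.core.credentials import AzureKeyCredential

client = AnomalyDetectorClient(AzureKeyCredential("<key>"),
                               "https://<resource>.cognitiveservices.azure.com/")

# A toy daily series: flat values with one obvious spike at day 20.
start = datetime(2022, 1, 1, tzinfo=timezone.utc)
series = [TimeSeriesPoint(timestamp=start + timedelta(days=i),
                          value=100.0 if i != 20 else 500.0)
          for i in range(30)]

result = client.detect_entire_series(
    DetectRequest(series=series, granularity=TimeGranularity.daily))
anomalies = [i for i, flag in enumerate(result.is_anomaly) if flag]
print("Anomalous indices:", anomalies)  # expect day 20 to be flagged
```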
+## Demo
+
+Check out this [interactive demo](https://aka.ms/adDemo) to understand how Anomaly Detector works.
+To run the demo, you need to create an Anomaly Detector resource and get the API key and endpoint.
+
+## Notebook
+
+To learn how to call the Anomaly Detector API, try this [Notebook](https://aka.ms/adNotebook). This Jupyter Notebook shows you how to send an API request and visualize the result.
+
+To run the Notebook, you should get a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
+
+<!-- ## Workflow
+
+The Anomaly Detector API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON.
+++
+After signing up:
+
+1. Take your time series data and convert it into a valid JSON format. Use [best practices](concepts/anomaly-detection-best-practices.md) when preparing your data to get the best results.
+1. Send a request to the Anomaly Detector API with your data.
+1. Process the API response by parsing the returned JSON message.
++
+## Algorithms
+
+* See the following technical blogs for information about the algorithms used:
+ * [Introducing Azure Anomaly Detector API](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162)
+ * [Overview of SR-CNN algorithm in Azure Anomaly Detector](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798)
+
+You can read the paper [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019) to learn more about the SR-CNN algorithms developed by Microsoft.
+
+> [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM]
+
+## Service availability and redundancy
+
+### Is the Anomaly Detector service zone resilient?
+
+Yes. The Anomaly Detector service is zone-resilient by default.
+
+### How do I configure the Anomaly Detector service to be zone-resilient?
+
+No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Anomaly Detector resources is available by default and managed by the service itself.
+
+## Deploy on premises using Docker containers
+
+[Use Univariate Anomaly Detector containers](anomaly-detector-container-howto.md) to deploy API features on-premises. Docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
+
+## Join the Anomaly Detector community
+
+Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
+
+## Next steps
+
+* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detector](quickstarts/client-libraries.md)
+* [What's multivariate anomaly detection?](./overview-multivariate.md)
+* The Anomaly Detector API [online demo](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook)
+* The Anomaly Detector [REST API reference](https://aka.ms/anomaly-detector-rest-api-ref)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview.md
Title: What is the Univariate Anomaly Detector?
+ Title: What is Anomaly Detector?
description: Use the Anomaly Detector API's algorithms to apply anomaly detection on your time series data.
Previously updated : 02/16/2021 Last updated : 07/06/2022 keywords: anomaly detection, machine learning, algorithms
-# What is Univariate Anomaly Detector?
+# What is Anomaly Detector?
-The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
+Anomaly Detector is an AI service with a set of APIs that enables you to monitor and detect anomalies in your time series data with little machine learning knowledge, using either batch validation or real-time inference.
-![Detect pattern changes in service requests](./media/anomaly_detection2.png)
-
-Using the Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes.
This documentation contains the following types of articles: * The [quickstarts](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This documentation contains the following types of articles:
## Features
-With the Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
+With the Anomaly Detector, you can either detect anomalies in one variable using Univariate Anomaly Detector, or detect anomalies in multiple variables with Multivariate Anomaly Detector.
|Feature |Description | |||
-|Anomaly detection in real-time. | Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. |
-|Detect anomalies throughout your data set as a batch. | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
-|Detect change points throughout your data set as a batch. | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
-| Get additional information about your data. | Get useful details about your data and any observed anomalies, including expected values, anomaly boundaries, and positions. |
-| Adjust anomaly detection boundaries. | The Anomaly Detector API automatically creates boundaries for anomaly detection. Adjust these boundaries to increase or decrease the API's sensitivity to data anomalies, and better fit your data. |
+|Univariate Anomaly Detector | Detect anomalies in one variable, like revenue, cost, etc. The model is selected automatically based on your data pattern. |
+|Multivariate Anomaly Detector| Detect anomalies in multiple variables with correlations, which are usually gathered from equipment or other complex systems. The underlying model used is a graph attention network.|
+
+### When to use **Univariate Anomaly Detector** vs. **Multivariate Anomaly Detector**
+
+If your goal is to detect anomalies out of a normal pattern on each individual time series purely based on their own historical data, use univariate anomaly detection APIs. For example, you want to detect daily revenue anomalies based on revenue data itself, or you want to detect a CPU spike purely based on CPU data.
+
+If your goal is to detect system-level anomalies from a group of time series data, use multivariate anomaly detection APIs. This is particularly useful when any individual time series won't tell you much, and you have to look at all signals (a group of time series) holistically to determine a system-level issue. For example, you have an expensive physical asset like aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of different types of sensors. You would have to look at all those time series signals from those sensors to decide whether there is a system-level issue.
+ ## Demo
To learn how to call the Anomaly Detector API, try this [Notebook](https://aka.m
To run the Notebook, you should get a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
-## Workflow
-
-The Anomaly Detector API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON.
---
-After signing up:
-
-1. Take your time series data and convert it into a valid JSON format. Use [best practices](concepts/anomaly-detection-best-practices.md) when preparing your data to get the best results.
-1. Send a request to the Anomaly Detector API with your data.
-1. Process the API response by parsing the returned JSON message.
--
-## Algorithms
-
-* See the following technical blogs for information about the algorithms used:
- * [Introducing Azure Anomaly Detector API](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162)
- * [Overview of SR-CNN algorithm in Azure Anomaly Detector](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798)
-
-You can read the paper [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019) to learn more about the SR-CNN algorithms developed by Microsoft.
-
-> [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM]
- ## Service availability and redundancy ### Is the Anomaly Detector service zone resilient?
Yes. The Anomaly Detector service is zone-resilient by default.
No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Anomaly Detector resources is available by default and managed by the service itself.
-## Deploy on premises using Docker containers
-
-[Use Anomaly Detector containers](anomaly-detector-container-howto.md) to deploy API features on-premises. Docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
-
-## Join the Anomaly Detector community
-
-* Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin)
## Next steps
-* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](quickstarts/client-libraries.md)
-* The Anomaly Detector API [online demo](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook)
-* The Anomaly Detector [REST API reference](https://aka.ms/anomaly-detector-rest-api-ref)
+* [What is Univariate Anomaly Detector?](./overview-univariate.md)
+* [What is Multivariate Anomaly Detector?](./overview-multivariate.md)
+* Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
cognitive-services Tutorial Bing Video Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/tutorial-bing-video-search-single-page-app.md
A renderer function can accept the following parameters:
The `index` and `count` parameters can be used to number results, to generate special HTML for the beginning or end of a collection, to insert line breaks after a certain number of items, and so on. If a renderer does not need this functionality, it does not need to accept these two parameters.
-The `video` renderer is shown in the following javascript excerpt. Using the Videos endpoint, all results are of type `Videos`. The `searchItemRenderers` are shown in the following code segment.
+The `video` renderer is shown in the following JavaScript excerpt. Using the Videos endpoint, all results are of type `Videos`. The `searchItemRenderers` are shown in the following code segment.
```javascript // render functions for various types of search results
cognitive-services Bing Insights Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/bing-insights-usage.md
To get started with your first request, see the quickstarts:
* [Java](quickstarts/java.md)
-* [node.js](quickstarts/nodejs.md)
+* [Node.js](quickstarts/nodejs.md)
* [Python](quickstarts/python.md)
cognitive-services Default Insights Tag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/default-insights-tag.md
To get started quickly with your first request, see the quickstarts:
* [Java](quickstarts/java.md)
-* [node.js](quickstarts/nodejs.md)
+* [Node.js](quickstarts/nodejs.md)
* [Python](quickstarts/python.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/overview.md
To get started quickly with your first request, see the quickstarts:
* [Java](quickstarts/java.md)
-* [node.js](quickstarts/nodejs.md)
+* [Node.js](quickstarts/nodejs.md)
* [Python](quickstarts/python.md)
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services -+ Last updated 06/06/2022
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/evaluation-metrics.md
The model would have the following entity-level evaluation, for the *city* entit
| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. | | False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
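The same arithmetic as a tiny Python check:

```python
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# One true positive, one false positive, one false negative (the city entity above).
print(precision_recall_f1(1, 1, 1))  # (0.5, 0.5, 0.5)
```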
### Model-level evaluation for the collective model
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Previously updated : 06/14/2022 Last updated : 07/06/2022 ms.devlang: csharp, java, javascript, python
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
Title: Quickstart - Add RAW media access to your app (Android)
+ Title: Quickstart - Add RAW media access to your app
description: In this quickstart, you'll learn how to add raw media access calling capabilities to your app using Azure Communication Services.-+ - Previously updated : 06/09/2022+ Last updated : 06/30/2022
+zone_pivot_groups: acs-plat-android-web
-# Raw Video
+# QuickStart: Add raw media access to your app
-In this quickstart, you'll learn how to implement raw media access using the Azure Communication Services Calling SDK for Android.
-The Azure Communication Services Calling SDK offers APIs allowing apps to generate their own video frames to send to remote participants.
-This quick start builds upon [QuickStart: Add 1:1 video calling to your app](./get-started-with-video-calling.md?pivots=platform-android) for Android.
+## Next steps
+For more information, see the following articles:
-
-## Virtual Video Stream Overview
-
-Since the app will be generating the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats the app is capable of generating. This is required to allow the Azure Communication Services Calling SDK to pick the best video format configuration given the network conditions at any giving time.
-
-The app must register a delegate to get notified about when it should start or stop producing video frames. The delegate event will inform the app which video format is more appropriate for the current network conditions.
-
-### Supported Video Resolutions
-
-| Aspect Ratio | Resolution | Maximum FPS |
-| :--: | :-: | :-: |
-| 16x9 | 1080p | 30 |
-| 16x9 | 720p | 30 |
-| 16x9 | 540p | 30 |
-| 16x9 | 480p | 30 |
-| 16x9 | 360p | 30 |
-| 16x9 | 270p | 15 |
-| 16x9 | 240p | 15 |
-| 16x9 | 180p | 15 |
-| 4x3 | VGA (640x480) | 30 |
-| 4x3 | 424x320 | 15 |
-| 4x3 | QVGA (320x240) | 15 |
-| 4x3 | 212x160 | 15 |
-
-The following is an overview of the steps required to create a virtual video stream.
-
-1. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `VideoFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list doesn't influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
-
- ```java
- ArrayList<VideoFormat> videoFormats = new ArrayList<VideoFormat>();
-
- VideoFormat format = new VideoFormat();
- format.setWidth(1280);
- format.setHeight(720);
- format.setPixelFormat(PixelFormat.RGBA);
- format.setVideoFrameKind(VideoFrameKind.VIDEO_SOFTWARE);
- format.setFramesPerSecond(30);
- format.setStride1(1280 * 4); // It is times 4 because RGBA is a 32-bit format.
-
- videoFormats.add(format);
- ```
-
-2. Create `RawOutgoingVideoStreamOptions` and set `VideoFormats` with the previously created object.
-
- ```java
- RawOutgoingVideoStreamOptions rawOutgoingVideoStreamOptions = new RawOutgoingVideoStreamOptions();
- rawOutgoingVideoStreamOptions.setVideoFormats(videoFormats);
- ```
-
-3. Subscribe to `RawOutgoingVideoStreamOptions::addOnOutgoingVideoStreamStateChangedListener` delegate. This delegate will inform the state of the current stream, it's important that you don't send frames if the state is no equal to `OutgoingVideoStreamState.STARTED`.
-
- ```java
- private OutgoingVideoStreamState outgoingVideoStreamState;
-
- rawOutgoingVideoStreamOptions.addOnOutgoingVideoStreamStateChangedListener(event -> {
-
- outgoingVideoStreamState = event.getOutgoingVideoStreamState();
- });
- ```
-
-4. Make sure the `RawOutgoingVideoStreamOptions::addOnVideoFrameSenderChangedListener` delegate is defined. This delegate will inform its listener about events requiring the app to start or stop producing video frames. In this quick start, `mediaFrameSender` is used as trigger to let the app know when it's time to start generating frames. Feel free to use any mechanism in your app as a trigger.
-
- ```java
- private VideoFrameSender mediaFrameSender;
-
- rawOutgoingVideoStreamOptions.addOnVideoFrameSenderChangedListener(event -> {
-
- mediaFrameSender = event.getVideoFrameSender();
- });
- ```
-
-5. Create an instance of `VirtualRawOutgoingVideoStream` using the `RawOutgoingVideoStreamOptions` we created previously
-
- ```java
- private VirtualRawOutgoingVideoStream virtualRawOutgoingVideoStream;
-
- virtualRawOutgoingVideoStream = new VirtualRawOutgoingVideoStream(rawOutgoingVideoStreamOptions);
- ```
-
-7. Once outgoingVideoStreamState is equal to `OutgoingVideoStreamState.STARTED` create and instance of `FrameGenerator` class this will start a non-UI thread and will send frames, call `FrameGenerator.SetVideoFrameSender` each time we get an updated `VideoFrameSender` on the previous delegate, cast the `VideoFrameSender` to the appropriate type defined by the `VideoFrameKind` property of `VideoFormat`. For example, cast it to `SoftwareBasedVideoFrameSender` and then call the `send` method according to the number of planes defined by the VideoFormat.
-After that, create the ByteBuffer backing the video frame if needed. Then, update the content of the video frame. Finally, send the video frame to other participants with the `sendFrame` API.
-
- ```java
- public class FrameGenerator implements VideoFrameSenderChangedListener {
-
- private VideoFrameSender videoFrameSender;
- private Thread frameIteratorThread;
- private final Random random;
- private volatile boolean stopFrameIterator = false;
-
- public FrameGenerator() {
-
- random = new Random();
- }
-
- public void FrameIterator() {
-
- ByteBuffer plane = null;
- while (!stopFrameIterator && videoFrameSender != null) {
-
- plane = GenerateFrame(plane);
- }
- }
-
- private ByteBuffer GenerateFrame(ByteBuffer plane) {
-
- try {
-
- VideoFormat videoFormat = videoFrameSender.getVideoFormat();
- if (plane == null || videoFormat.getStride1() * videoFormat.getHeight() != plane.capacity()) {
-
- plane = ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight());
- plane.order(ByteOrder.nativeOrder());
- }
-
- int bandsCount = random.nextInt(15) + 1;
- int bandBegin = 0;
- int bandThickness = videoFormat.getHeight() * videoFormat.getStride1() / bandsCount;
-
- for (int i = 0; i < bandsCount; ++i) {
-
- byte greyValue = (byte) random.nextInt(254);
- java.util.Arrays.fill(plane.array(), bandBegin, bandBegin + bandThickness, greyValue);
- bandBegin += bandThickness;
- }
-
- if (videoFrameSender instanceof SoftwareBasedVideoFrameSender) {
- SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
-
- long timeStamp = sender.getTimestampInTicks();
- sender.sendFrame(plane, timeStamp).get();
- } else {
-
- HardwareBasedVideoFrameSender sender = (HardwareBasedVideoFrameSender) videoFrameSender;
-
- int[] textureIds = new int[1];
- int targetId = GLES20.GL_TEXTURE_2D;
-
- GLES20.glEnable(targetId);
- GLES20.glGenTextures(1, textureIds, 0);
- GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
- GLES20.glBindTexture(targetId, textureIds[0]);
- GLES20.glTexImage2D(targetId,
- 0,
- GLES20.GL_RGB,
- videoFormat.getWidth(),
- videoFormat.getHeight(),
- 0,
- GLES20.GL_RGB,
- GLES20.GL_UNSIGNED_BYTE,
- plane);
-
- long timeStamp = sender.getTimestampInTicks();
- sender.sendFrame(targetId, textureIds[0], timeStamp).get();
- }
-
- Thread.sleep((long) (1000.0f / videoFormat.getFramesPerSecond()));
- } catch (InterruptedException ex) {
-
- Log.d("FrameGenerator", String.format("FrameGenerator.GenerateFrame, %s", ex.getMessage()));
- } catch (ExecutionException ex2) {
-
- Log.d("FrameGenerator", String.format("FrameGenerator.GenerateFrame, %s", ex2.getMessage()));
- }
-
- return plane;
- }
-
- private void StartFrameIterator() {
-
- frameIteratorThread = new Thread(this::FrameIterator);
- frameIteratorThread.start();
- }
-
- public void StopFrameIterator() {
-
- try {
-
- if (frameIteratorThread != null) {
-
- stopFrameIterator = true;
- frameIteratorThread.join();
- frameIteratorThread = null;
- stopFrameIterator = false;
- }
- } catch (InterruptedException ex) {
-
- Log.d("FrameGenerator", String.format("FrameGenerator.StopFrameIterator, %s", ex.getMessage()));
- }
- }
- ```
-
-## Screen Share Video Stream Overview
-
-Repeat steps `1 to 4` from the previous VirtualRawOutgoingVideoStream tutorial.
-
-Since the Android system generates the frames, you must implement your own foreground service to capture the frames and send them through using our Azure Communication Services Calling API
-
-### Supported Video Resolutions
-
-| Aspect Ratio | Resolution | Maximum FPS |
-| :--: | :-: | :-: |
-| Anything | Anything | 30 |
-
-The following is an overview of the steps required to create a screen share video stream.
-
-1. Add this permission to your `Manifest.xml` file inside your Android project
-
- ```xml
- <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
- ```
-
-2. Create an instance of `ScreenShareRawOutgoingVideoStream` using the `RawOutgoingVideoStreamOptions` we created previously
-
- ```java
- private ScreenShareRawOutgoingVideoStream screenShareRawOutgoingVideoStream;
-
- screenShareRawOutgoingVideoStream = new ScreenShareRawOutgoingVideoStream(rawOutgoingVideoStreamOptions);
- ```
-
-3. Request needed permissions for screen capture on Android, once this method is called Android will call automatically `onActivityResult` containing the request code we've sent and the result of the operation, expect `Activity.RESULT_OK` if the permission has been provided by the user if so attach the screenShareRawOutgoingVideoStream to the call and start your own foreground service to capture the frames.
-
- ```java
- public void GetScreenSharePermissions() {
-
- try {
-
- MediaProjectionManager mediaProjectionManager = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
- startActivityForResult(mediaProjectionManager.createScreenCaptureIntent(), Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE);
- } catch (Exception e) {
-
- String error = "Could not start screen share due to failure to startActivityForResult for mediaProjectionManager screenCaptureIntent";
- Log.d("FrameGenerator", error);
- }
- }
-
- @Override
- protected void onActivityResult(int requestCode, int resultCode, Intent data) {
-
- super.onActivityResult(requestCode, resultCode, data);
-
- if (requestCode == Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE) {
-
- if (resultCode == Activity.RESULT_OK && data != null) {
-
- // Attach the screenShareRawOutgoingVideoStream to the call
- // Start your foreground service
- } else {
-
- String error = "user cancelled, did not give permission to capture screen";
- }
- }
- }
- ```
-
-4. Once you receive a frame on your foreground service send it through using the `VideoFrameSender` provided
-
- ````java
- public void onImageAvailable(ImageReader reader) {
-
- Image image = reader.acquireLatestImage();
- if (image != null) {
-
- final Image.Plane[] planes = image.getPlanes();
- if (planes.length > 0) {
-
- Image.Plane plane = planes[0];
- final ByteBuffer buffer = plane.getBuffer();
- try {
-
- SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
- sender.sendFrame(buffer, sender.getTimestamp()).get();
- } catch (Exception ex) {
-
- Log.d("MainActivity", "MainActivity.onImageAvailable trace, failed to send Frame");
- }
- }
-
- image.close();
- }
- }
- ````
+- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](https://aka.ms/acsstorybook)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
Microsoft Graph enables event management platforms to empower organizers to sche
1. Authorize application to use Graph APIs on behalf of service account. This authorization is required in order to have the application use credentials to interact with your tenant to schedule events and register attendees.
- 1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and which will receive notifications for it. We recommend to not user a personal production account given the overhead it might incur in the form of remainders.
+ 1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and receive notifications for them. We recommend not using a personal production account, given the overhead it might incur in the form of reminders.
1. As part of the application setup, the service account is used to login into the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](../../active-directory/develop/access-tokens.md). and [refresh tokens](../../active-directory/develop/refresh-tokens.md).
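For illustration only, a sketch of persisting those tokens with the azure-keyvault-secrets Python package (the vault URL and secret names are assumptions; `access_token` and `refresh_token` stand in for the values returned by the sign-in):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

secrets = SecretClient("https://<vault-name>.vault.azure.net", DefaultAzureCredential())

# Persist the tokens obtained from the service account's one-time sign-in.
secrets.set_secret("graph-access-token", access_token)    # short-lived
secrets.set_secret("graph-refresh-token", refresh_token)  # used to mint new access tokens
```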
Microsoft Graph enables event management platforms to empower organizers to sche
>[!NOTE] >Authorization is required by both developers for testing and organizers who will be using your event platform to set up their events.
-2. Organizer logins to Contoso platform to create an event and generate a registration URL. To enable these capabilities developers should use:
+2. Organizer logs in to Contoso platform to create an event and generate a registration URL. To enable these capabilities developers should use:
 1. The [Create Calendar Event API](/graph/api/user-post-events?tabs=http&view=graph-rest-1.0) to POST the new event to be created. The Event object returned will contain the join URL required for the next step. Need to set the following parameters: `isOnlineMeeting: true` and `onlineMeetingProvider: "teamsForBusiness"`. Set a time zone for the event, using the `Prefer` header. A sketch of this request follows below.
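A minimal sketch of that request using Python's requests library (the service account ID, times, time zone, and `access_token` are placeholders; the response's `onlineMeeting.joinUrl` is the join URL mentioned above):

```python
import requests

event = {
    "subject": "Contoso product launch",
    "start": {"dateTime": "2022-08-01T17:00:00", "timeZone": "Pacific Standard Time"},
    "end":   {"dateTime": "2022-08-01T18:00:00", "timeZone": "Pacific Standard Time"},
    "isOnlineMeeting": True,
    "onlineMeetingProvider": "teamsForBusiness",
}

# access_token: obtained earlier for the service account (placeholder).
resp = requests.post(
    "https://graph.microsoft.com/v1.0/users/<service-account-id>/events",
    headers={"Authorization": f"Bearer {access_token}",
             # The Prefer header pins the time zone of the returned event.
             "Prefer": 'outlook.timezone="Pacific Standard Time"'},
    json=event,
)
join_url = resp.json()["onlineMeeting"]["joinUrl"]  # used for the registration step
```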
Event management platforms can use a custom registration flow to register attend
### Communicate with your attendees using Azure Communication Services
-Through Azure Communication Services, developers can use SMS and Email capabilities to send remainders to attendees for the event they have registered. Communication can also include confirmation for the event as well as information for joining and participating.
+Through Azure Communication Services, developers can use SMS and Email capabilities to send reminders to attendees for the events they've registered for. Communication can also include confirmation for the event as well as information for joining and participating.
- [SMS capabilities](../quickstarts/sms/send.md) enable you to send text messages to your attendees. - [Email capabilities](../quickstarts/email/send-email.md) support direct communication to your attendees using custom domains.
Attendee experience can be directly embedded into an application or platform usi
>[!NOTE]
->Azure Communication Services is a consumption-based service billed through Azure. For more information on pricing visit our resources.
+>Azure Communication Services is a consumption-based service billed through Azure. For more information on pricing, visit our resources.
confidential-ledger Authenticate Ledger Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authenticate-ledger-nodes.md
Azure confidential ledger nodes can be authenticated by code samples and by user
## Code samples
-When initializing, code samples get the node certificate by querying Identity Service. After retrieving the node certificate, a code sample will query the ledger network to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to ledger operations.
+When initializing, code samples get the node certificate by querying the Identity Service. After retrieving the node certificate, a code sample will query the ledger to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to ledger operations.
## Users
-Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their ledger's enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, Steps 1-2 help build confidence in that Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger.
+Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their ledger's enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, steps 1 and 2 are important confidence-building mechanisms for users of the Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger.
-- **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a network cert and thus helps verify that the ledger node is presenting a cert endorsed/signed by the network cert for that specific instance. Similar to PKI-based HTTPS, a server’s cert is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA cert is returned by an Identity service in the form of a network cert. This is an important confidence building measure for users of confidential ledger. If this node cert isn’t signed by the returned network cert, the client connection should fail (as implemented in the sample code).-- **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a Quote, a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote is structured in a way that enables easy verification. It contains claims that help identify various properties of the enclave and the application that it’s running. This is an important confidence building mechanism for users of the confidential ledger. This can be accomplished by calling a functional workflow API to get an enclave quote. The client connection should fail if the quote is invalid. The retrieved quote can then be validated with the open_enclaves Host_Verify tool. More details about this can be found [here](https://github.com/openenclave/openenclave/tree/master/samples/host_verify).
+1. **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a service certificate and thus helps verify that the ledger node is presenting a certificate endorsed/signed by the service certificate for that specific instance. Using PKI-based HTTPS, a server's certificate is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA certificate is returned by the Identity Service in the form of the service certificate. If this node certificate isn't signed by the returned service certificate, the client connection should fail (as implemented in the sample code).
+
+2. **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a remote attestation report (or quote), a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote contains claims that help identify various properties of the enclave and the application that it’s running. In particular, it contains the SHA-256 hash of the public key contained in the confidential ledger node's certificate. The quote of a confidential ledger node can be retrieved by calling a functional workflow API. The retrieved quote can then be validated following the steps described [here](https://microsoft.github.io/CCF/main/use_apps/verify_quote.html).
## Next steps
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Container images are stored inside of container registries. You can easily creat
3) Choose the subscription you would like to use to create your container registry and build your image, and then press enter to continue.
-4) Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to step 7.
+4) Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to creating and deploying to the container app.
5) Enter a unique name for the new registry such as *msdocscapps123*, where 123 are unique numbers of your own choosing, and then press enter. Container registry names must be globally unique across all of Azure.
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
Run the [az acr repository show-manifests][az-acr-repository-show-manifests] com
```azurecli az acr manifest list-metadata \ --registry $ACR_NAME \
- --name helm/hello-world --detail
+ --name helm/hello-world
``` Output, abbreviated in this example, shows a `configMediaType` of `application/vnd.cncf.helm.config.v1+json`:
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
+
+ Title: Change log for Azure Cosmos DB API for MongoDB
+description: Notifies customers of minor and medium updates that were pushed to the Azure Cosmos DB API for MongoDB.
+++ Last updated : 06/22/2022++++
+# Change log for Azure Cosmos DB API for MongoDB
+The change log for the API for MongoDB informs you about feature updates. This document covers more granular updates and complements [Azure Updates](https://azure.microsoft.com/updates/).
+
+## Cosmos DB's API for MongoDB updates
+
+### Azure Data Studio MongoDB extension for Azure Cosmos DB (Preview)
+You can now use this free and lightweight tool to manage and query your MongoDB resources using the mongo shell. The Azure Data Studio MongoDB extension for Azure Cosmos DB allows you to manage multiple accounts in one view by:
+1. Connecting your Mongo resources
+2. Configuring the database settings
+3. Performing create, read, update, and delete (CRUD) across Windows, macOS, and Linux.
+
+[Learn more](https://aka.ms/cosmosdb-ads)
++
+### Linux emulator with Azure Cosmos DB API for MongoDB
+The Azure Cosmos DB Linux emulator with API for MongoDB support provides a local environment that emulates the Azure Cosmos DB service for development purposes on Linux and macOS. Using the emulator, you can develop and test your MongoDB applications locally, without creating an Azure subscription or incurring any costs.
+
+[Learn more](https://aka.ms/linux-emulator-mongo)
++
+### 16-MB limit per document in API for MongoDB (Preview)
+The 16-MB document limit in the Azure Cosmos DB API for MongoDB gives developers the flexibility to store more data per document. This ease-of-use feature speeds up development when your application works with larger documents.
+
+[Learn more](./mongodb-introduction.md)
++
+### Azure Cosmos DB API for MongoDB data plane Role-Based Access Control (RBAC) (Preview)
+The API for MongoDB now offers built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using RBAC gives you more options for control, security, and auditability of your database account data.
+
+[Learn more](./how-to-setup-rbac.md)
++
+### Unique partial indexes in Azure Cosmos DB API for MongoDB
+The unique partial indexes feature gives you more flexibility to specify exactly which fields in which documents you'd like to index, all while enforcing uniqueness of that field's value. The unique constraint is then applied only to the documents that meet the specified filter expression.
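As an illustration, here's a minimal sketch of creating such an index with the .NET MongoDB.Driver package; the connection string variable, database, collection, and field names are assumptions for this example:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
var users = client.GetDatabase("store").GetCollection<BsonDocument>("users");

// Enforce uniqueness of "email" only for documents that actually contain the field.
var indexModel = new CreateIndexModel<BsonDocument>(
    Builders<BsonDocument>.IndexKeys.Ascending("email"),
    new CreateIndexOptions<BsonDocument>
    {
        Unique = true,
        PartialFilterExpression = Builders<BsonDocument>.Filter.Exists("email")
    });

await users.Indexes.CreateOneAsync(indexModel);
```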
+
+[Learn more](./feature-support-42.md)
++
+### Azure Cosmos DB API for MongoDB unique index reindexing (Preview)
+Previously, you could create unique indexes in Azure Cosmos DB only while the collection was empty and didn't contain documents. The unique index reindexing feature provides you with more flexibility by letting you create unique indexes whenever you want to, meaning there's no need to plan unique indexes ahead of time before inserting any data into the collection.
+
+[Learn more](./mongodb-indexing.md) and enable the feature today by [submitting a support ticket request](https://azure.microsoft.com/support/create-ticket/)
++
+### Azure Cosmos DB API for MongoDB supports version 4.2
+The Azure Cosmos DB API for MongoDB version 4.2 includes new aggregation functionality and improved security features such as client-side field encryption. These features help you accelerate development by applying the new functionality instead of developing it yourself.
+
+[Learn more](./feature-support-42.md)
++
+### Support $expr in Mongo 3.6+
+`$expr` allows the use of [aggregation expressions](https://www.mongodb.com/docs/manual/meta/aggregation-quick-reference/#std-label-aggregation-expressions) within the query language.
+`$expr` can be used to build query expressions that compare fields from the same document in a `$match` stage.
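For example, here's a minimal sketch with the .NET MongoDB.Driver package that finds documents whose `spent` field exceeds their `budget` field; the connection string variable and the collection and field names are illustrative assumptions:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
var sales = client.GetDatabase("store").GetCollection<BsonDocument>("sales");

// $expr lets the filter compare two fields of the same document.
var filter = new BsonDocument("$expr",
    new BsonDocument("$gt", new BsonArray { "$spent", "$budget" }));

var overBudget = await sales.Find(filter).ToListAsync();
Console.WriteLine($"Documents over budget: {overBudget.Count}");
```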
+
+[Learn more](https://www.mongodb.com/docs/manual/reference/operator/query/expr/)
++
+### Role-Based Access Control for $merge stage
+* Added Role-Based Access Control (RBAC) for the `$merge` stage.
+* `$merge` writes the results of the aggregation pipeline to a specified collection. The `$merge` operator must be the last stage in the pipeline, as shown in the sketch below.
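Here's a minimal sketch of such a pipeline with the .NET MongoDB.Driver package; the connection string variable, collection names, and match condition are illustrative assumptions:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
var orders = client.GetDatabase("store").GetCollection<BsonDocument>("orders");

var pipeline = new BsonDocument[]
{
    new BsonDocument("$match", new BsonDocument("status", "complete")),
    // $merge must be the final stage; it writes the results into "completedOrders".
    new BsonDocument("$merge", new BsonDocument
    {
        { "into", "completedOrders" },
        { "whenMatched", "replace" }
    })
};

// AggregateToCollection executes pipelines that end in $out or $merge.
orders.AggregateToCollection<BsonDocument>(pipeline);
```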
+
+[Learn more](https://www.mongodb.com/docs/manual/reference/operator/aggregation/merge/)
++
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB API for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB API for MongoDB.
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Create Mongodb Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-nodejs.md
- Title: 'Quickstart: Connect a Node.js MongoDB app to Azure Cosmos DB'
-description: This quickstart demonstrates how to connect an existing MongoDB app written in Node.js to Azure Cosmos DB.
----- Previously updated : 04/26/2022--
-# Quickstart: Migrate an existing MongoDB Node.js web app to Azure Cosmos DB
-
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](create-mongodb-python.md)
-> * [Java](create-mongodb-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Golang](create-mongodb-go.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB for Mongo DB API account by using the Azure Cloud Shell, and with a MEAN (MongoDB, Express, Angular, and Node.js) app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
--- An Azure account with an active subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `.mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.--- [Node.js](https://nodejs.org/), and a working knowledge of Node.js.--- [Git](https://git-scm.com/downloads).---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--
-## Clone the sample application
-
-Run the following commands to clone the sample repository. This sample repository contains the default [MEAN.js](https://meanjs.org/) application.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- mkdir "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/prashanthmadi/mean
- ```
-
-## Run the application
-
-This MongoDB app written in Node.js connects to your Azure Cosmos DB database, which supports MongoDB client. In other words, it is transparent to the application that the data is stored in an Azure Cosmos DB database.
-
-Install the required packages and start the application.
-
-```bash
-cd mean
-npm install
-npm start
-```
-The application will try to connect to a MongoDB source and fail, go ahead and exit the application when the output returns "[MongoError: connect ECONNREFUSED 127.0.0.1:27017]".
-
-## Sign in to Azure
-
-If you are using an installed Azure CLI, sign in to your Azure subscription with the [az login](/cli/azure/reference-index#az-login) command and follow the on-screen directions. You can skip this step if you're using the Azure Cloud Shell.
-
-```azurecli
-az login
-```
-
-## Add the Azure Cosmos DB module
-
-If you are using an installed Azure CLI, check to see if the `cosmosdb` component is already installed by running the `az` command. If `cosmosdb` is in the list of base commands, proceed to the next command. You can skip this step if you're using the Azure Cloud Shell.
-
-If `cosmosdb` is not in the list of base commands, reinstall [Azure CLI](/cli/azure/install-azure-cli).
-
-## Create a resource group
-
-Create a [resource group](../../azure-resource-manager/management/overview.md) with the [az group create](/cli/azure/group#az-group-create). An Azure resource group is a logical container into which Azure resources like web apps, databases and storage accounts are deployed and managed.
-
-The following example creates a resource group in the West Europe region. Choose a unique name for the resource group.
-
-If you are using Azure Cloud Shell, select **Try It**, follow the onscreen prompts to login, then copy the command into the command prompt.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location "West Europe"
-```
-
-## Create an Azure Cosmos DB account
-
-Create a Cosmos account with the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command.
-
-In the following command, please substitute your own unique Cosmos account name where you see the `<cosmosdb-name>` placeholder. This unique name will be used as part of your Cosmos DB endpoint (`https://<cosmosdb-name>.documents.azure.com/`), so the name needs to be unique across all Cosmos accounts in Azure.
-
-```azurecli-interactive
-az cosmosdb create --name <cosmosdb-name> --resource-group myResourceGroup --kind MongoDB
-```
-
-The `--kind MongoDB` parameter enables MongoDB client connections.
-
-When the Azure Cosmos DB account is created, the Azure CLI shows information similar to the following example.
-
-> [!NOTE]
-> This example uses JSON as the Azure CLI output format, which is the default. To use another output format, see [Output formats for Azure CLI commands](/cli/azure/format-output-azure-cli).
-
-```json
-{
- "databaseAccountOfferType": "Standard",
- "documentEndpoint": "https://<cosmosdb-name>.documents.azure.com:443/",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Document
-DB/databaseAccounts/<cosmosdb-name>",
- "kind": "MongoDB",
- "location": "West Europe",
- "name": "<cosmosdb-name>",
- "readLocations": [
- {
- "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
- "failoverPriority": 0,
- "id": "<cosmosdb-name>-westeurope",
- "locationName": "West Europe",
- "provisioningState": "Succeeded"
- }
- ],
- "resourceGroup": "myResourceGroup",
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "writeLocations": [
- {
- "documentEndpoint": "https://<cosmosdb-name>-westeurope.documents.azure.com:443/",
- "failoverPriority": 0,
- "id": "<cosmosdb-name>-westeurope",
- "locationName": "West Europe",
- "provisioningState": "Succeeded"
- }
- ]
-}
-```
-
-## Connect your Node.js application to the database
-
-In this step, you connect your MEAN.js sample application to the Azure Cosmos DB database account you just created.
-
-<a name="devconfig"></a>
-## Configure the connection string in your Node.js application
-
-In your MEAN.js repository, open `config/env/local-development.js`.
-
-Replace the content of this file with the following code. Be sure to also replace the two `<cosmosdb-name>` placeholders with your Cosmos account name.
-
-```javascript
-'use strict';
-
-module.exports = {
- db: {
- uri: 'mongodb://<cosmosdb-name>:<primary_master_key>@<cosmosdb-name>.documents.azure.com:10255/mean-dev?ssl=true&sslverifycertificate=false'
- }
-};
-```
-
-## Retrieve the key
-
-In order to connect to a Cosmos database, you need the database key. Use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command to retrieve the primary key.
-
-```azurecli-interactive
-az cosmosdb keys list --name <cosmosdb-name> --resource-group myResourceGroup --query "primaryMasterKey"
-```
-
-The Azure CLI outputs information similar to the following example.
-
-```json
-"RUayjYjixJDWG5xTqIiXjC..."
-```
-
-Copy the value of `primaryMasterKey`. Paste this over the `<primary_master_key>` in `local-development.js`.
-
-Save your changes.
-
-### Run the application again.
-
-Run `npm start` again.
-
-```bash
-npm start
-```
-
-A console message should now tell you that the development environment is up and running.
-
-Go to `http://localhost:3000` in a browser. Select **Sign Up** in the top menu and try to create two dummy users.
-
-The MEAN.js sample application stores user data in the database. If you are successful and MEAN.js automatically signs into the created user, then your Azure Cosmos DB connection is working.
--
-## View data in Data Explorer
-
-Data stored in a Cosmos database is available to view and query in the Azure portal.
-
-To view, query, and work with the user data created in the previous step, login to the [Azure portal](https://portal.azure.com) in your web browser.
-
-In the top Search box, enter **Azure Cosmos DB**. When your Cosmos account blade opens, select your Cosmos account. In the left navigation, select **Data Explorer**. Expand your collection in the Collections pane, and then you can view the documents in the collection, query the data, and even create and run stored procedures, triggers, and UDFs.
---
-## Deploy the Node.js application to Azure
-
-In this step, you deploy your Node.js application to Cosmos DB.
-
-You may have noticed that the configuration file that you changed earlier is for the development environment (`/config/env/local-development.js`). When you deploy your application to App Service, it will run in the production environment by default. So now, you need to make the same change to the respective configuration file.
-
-In your MEAN.js repository, open `config/env/production.js`.
-
-In the `db` object, replace the value of `uri` as show in the following example. Be sure to replace the placeholders as before.
-
-```javascript
-'mongodb://<cosmosdb-name>:<primary_master_key>@<cosmosdb-name>.documents.azure.com:10255/mean?ssl=true&sslverifycertificate=false',
-```
-
-> [!NOTE]
-> The `ssl=true` option is important because of Cosmos DB requirements. For more information, see [Connection string requirements](connect-mongodb-account.md#connection-string-requirements).
->
->
-
-In the terminal, commit all your changes into Git. You can copy both commands to run them together.
-
-```bash
-git add .
-git commit -m "configured MongoDB connection string"
-```
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account using the Azure Cloud Shell, and create and run a MEAN.js app to add users to the account. You can now import additional data to your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
+
+ Title: Quickstart - Azure Cosmos DB MongoDB API for .NET with the MongoDB driver
+description: Learn how to build a .NET app to manage Azure Cosmos DB MongoDB API account resources in this quickstart.
+++++
+ms.devlang: dotnet
+ Last updated : 07/06/2022+++
+# Quickstart: Azure Cosmos DB MongoDB API for .NET with the MongoDB driver
++
+Get started with the MongoDB .NET driver to create databases, collections, and docs within your Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-dotnet-samples) are available on GitHub as a .NET project.
+
+[MongoDB API reference documentation](https://www.mongodb.com/docs/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver)
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [.NET 6.0](https://dotnet.microsoft.com/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+
+### Prerequisite check
+
+* In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable Az`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+
+## Setting up
+
+This section walks you through creating an Azure Cosmos account and setting up a project that uses the MongoDB NuGet packages.
+
+### Create an Azure Cosmos DB account
+
+This quickstart will create a single Azure Cosmos DB account using the MongoDB API.
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
++++
+### Get MongoDB connection string
+
+#### [Azure CLI](#tab/azure-cli)
++
+#### [PowerShell](#tab/azure-powershell)
++
+#### [Portal](#tab/azure-portal)
++++
+### Create a new .NET app
+
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) command to create a new console app.
+
+```console
+dotnet new console -o <app-name>
+```
+
+### Install the NuGet package
+
+Add the [MongoDB.Driver](https://www.nuget.org/packages/MongoDB.Driver) NuGet package to the new .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
+
+```console
+dotnet add package MongoDB.Driver
+```
+
+### Configure environment variables
++
+## Object model
+
+Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, collections, and docs.
+
+ Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child collection nodes. The other database node includes a single child collection node. That single collection node has three child doc nodes.
+
+You'll use the following MongoDB classes to interact with these resources:
+
+* [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm) - This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.
+* [``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Collection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
+
+## Code examples
+
+* [Authenticate the client](#authenticate-the-client)
+* [Create a database](#create-a-database)
+* [Create a collection](#create-a-collection)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
+
+The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+
+### Authenticate the client
+
+From the project directory, open the *Program.cs* file. In your editor, add a using directive for ``MongoDB.Driver``.
++
+Define a new instance of the ``MongoClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariable) to read the connection string you set earlier.
++
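A minimal sketch of this setup is shown below; the environment variable name ``COSMOS_CONNECTION_STRING`` is an assumption for illustration, so use whatever name you configured in the previous step:

```csharp
using System;
using MongoDB.Driver;

// Read the connection string saved earlier; COSMOS_CONNECTION_STRING is an
// illustrative name, use the variable you actually configured.
string connectionString = Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING");

MongoClient client = new(connectionString);
```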
+### Create a database
+
+Use the [``MongoClient.GetDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/M_MongoDB_Driver_MongoClient_GetDatabase.htm) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database.
++
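As a sketch, continuing from the client created above and using the ``adventureworks`` database name from the sample:

```csharp
// GetDatabase returns a reference; the database itself is materialized
// server-side the first time it's used.
IMongoDatabase database = client.GetDatabase("adventureworks");
```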
+### Create a collection
+
+The [``MongoDatabase.GetCollection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/M_MongoDB_Driver_MongoDatabase_GetCollection.htm) method will create a new collection if it doesn't already exist and return a reference to the collection.
++
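Continuing the sketch with the ``products`` collection name from the sample and the `Product` record defined in the next section:

```csharp
// GetCollection also returns a lazy reference that is created on first use.
IMongoCollection<Product> collection = database.GetCollection<Product>("products");
```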
+### Create an item
+
+The easiest way to create a new item in a collection is to create a C# [class](/dotnet/csharp/language-reference/keywords/class) or [record](/dotnet/csharp/language-reference/builtin-types/record) type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a *category* field for the partition key, and extra *name*, *quantity*, and *sale* fields.
+
+```csharp
+public record Product(
+ string Id,
+ string Category,
+ string Name,
+ int Quantity,
+ bool Sale
+);
+```
+
+Create an item in the collection using the `Product` record by calling [``IMongoCollection<TDocument>.InsertOne``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm).
++
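A sketch of that call, shown here with the async variant; the field values are illustrative, with the product name matching the sample output later in this article:

```csharp
// Insert a single product document into the collection.
Product item = new("68719518391", "gear-surf-surfboards", "Yamba Surfboard", 12, false);
await collection.InsertOneAsync(item);
```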
+### Get an item
+
+In Azure Cosmos DB, you can retrieve items by composing queries using LINQ. In the SDK, call [``IMongoCollection.FindAsync<>``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_FindAsync__1.htm) and pass in a C# expression to filter the results.
++
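A sketch of that lookup, continuing with the illustrative product inserted above:

```csharp
// FindAsync takes a C# expression and returns a cursor over matching items.
using IAsyncCursor<Product> cursor =
    await collection.FindAsync(p => p.Name == "Yamba Surfboard");

Product product = (await cursor.ToListAsync()).FirstOrDefault();
Console.WriteLine($"Single product name:\n{product?.Name}");
```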
+### Query items
+
+After you insert an item, you can run a query to get all items that match a specific filter by treating the collection as an `IQueryable`. This example uses an expression to filter products by category. Once the call to `AsQueryable` is made, call [``MongoQueryable.Where``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/M_MongoDB_Driver_Linq_MongoQueryable_Where__1.htm) to retrieve a set of filtered items.
++
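A sketch of the queryable form; the category value is an illustrative assumption:

```csharp
using MongoDB.Driver.Linq;

// AsQueryable lets LINQ operators such as Where translate into server-side filters.
var products = collection.AsQueryable()
    .Where(p => p.Category == "gear-surf-surfboards")
    .ToList();

foreach (Product p in products)
{
    Console.WriteLine(p.Name);
}
```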
+## Run the code
+
+This app creates an Azure Cosmos DB MongoDB API database and collection. The example then creates an item and reads the same item back. Finally, the example creates a second item and runs a query that should return multiple items. With each step, the example outputs metadata to the console about the steps it has performed.
+
+To run the app, use a terminal to navigate to the application directory and run the application.
+
+```dotnetcli
+dotnet run
+```
+
+The output of the app should be similar to this example:
+
+```output
+Single product name:
+Yamba Surfboard
+Multiple products:
+Yamba Surfboard
+Sand Surfboard
+```
+
+## Clean up resources
+
+When you no longer need the Azure Cosmos DB MongoDB API account, you can delete the corresponding resource group.
+
+### [Azure CLI / Resource Manager template](#tab/azure-cli)
+
+Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
+
+```azurecli-interactive
+az group delete --name $resourceGroupName
+```
+
+### [PowerShell](#tab/azure-powershell)
+
+Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
+
+```azurepowershell-interactive
+$parameters = @{
+ Name = $RESOURCE_GROUP_NAME
+}
+Remove-AzResourceGroup @parameters
+```
+
+### [Portal](#tab/azure-portal)
+
+1. Navigate to the resource group you previously created in the Azure portal.
+
+ > [!TIP]
+ > In this quickstart, we recommended the name ``msdocs-cosmos-quickstart-rg``.
+1. Select **Delete resource group**.
+
+ :::image type="content" source="media/delete-account-portal/delete-resource-group-option.png" lightbox="media/delete-account-portal/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
+
+1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
+
+ :::image type="content" source="media/delete-account-portal/delete-confirmation.png" lightbox="media/delete-account-portal/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
++
cosmos-db Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-javascript.md
Title: Quickstart - Azure Cosmos DB MongoDB API for JavaScript with MongoDB drie
description: Learn how to build a JavaScript app to manage Azure Cosmos DB MongoDB API account resources in this quickstart. + ms.devlang: javascript Previously updated : 06/21/2022 Last updated : 07/06/2022 # Quickstart: Azure Cosmos DB MongoDB API for JavaScript with MongoDB driver+ [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)] Get started with the MongoDB npm package to create databases, collections, and docs within your Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
Get started with the MongoDB npm package to create databases, collections, and d
## Setting up
-This section walks you through creating an Azure Cosmos account and setting up a project that uses the MongoDB npm package.
+This section walks you through creating an Azure Cosmos account and setting up a project that uses the MongoDB npm package.
### Create an Azure Cosmos DB account
This quickstart will create a single Azure Cosmos DB account using the MongoDB A
### Create a new JavaScript app
-Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
+Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
```console npm init
npm install mongodb dotenv
Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, collections, and docs.
- Hierarchical diagram showing an Azure Cosmos D B account at the top. The account has two child database nodes. One of the database nodes includes two child collection nodes. The other database node includes a single child collection node. That single collection node has three child doc nodes.
+ Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child collection nodes. The other database node includes a single child collection node. That single collection node has three child doc nodes.
:::image-end::: You'll use the following MongoDB classes to interact with these resources: -- [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.-- [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.-- [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.-
+* [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.
+* [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+* [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
## Code examples -- [Authenticate the client](#authenticate-the-client)-- [Get database instance](#get-database-instance)-- [Get collection instance](#get-collection-instance)-- [Chained instances](#chained-instances)-- [Create an index](#create-an-index)-- [Create a doc](#create-a-doc)-- [Get an doc](#get-a-doc)-- [Query docs](#query-docs)
+* [Authenticate the client](#authenticate-the-client)
+* [Get database instance](#get-database-instance)
+* [Get collection instance](#get-collection-instance)
+* [Chained instances](#chained-instances)
+* [Create an index](#create-an-index)
+* [Create a doc](#create-a-doc)
+* [Get a doc](#get-a-doc)
+* [Query docs](#query-docs)
The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-For this procedure, the database will not use sharding.
+For this procedure, the database won't use sharding.
### Authenticate the client
-1. From the project directory, create an *index.js* file. In your editor, add requires statements to reference the MongoDB and DotEnv npm packages.
+1. From the project directory, create an *index.js* file. In your editor, add require statements to reference the MongoDB and DotEnv npm packages.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="package_dependencies":::
main()
.catch(console.error) .finally(() => client.close()); ```
-
+ The following code snippets should be added into the *main* function in order to handle the async/await syntax. ### Connect to the database
-Use the [``MongoClient.connect``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) method to connect to your Cosmos DB API for MongoDB resource. This method returns a reference to the database.
+Use the [``MongoClient.connect``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) method to connect to your Cosmos DB API for MongoDB resource. The connect method returns a reference to the database.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="connect_client"::: ### Get database instance
-Use the [``MongoClient.db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#db) gets a reference to a database.
+Use the [``MongoClient.db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#db) method to get a reference to a database.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_database" :::
The [``MongoClient.Db.collection``](https://mongodb.github.io/node-mongodb-nativ
### Chained instances
-You can chain the client, database, and collection together. This is more convenient if you need to access multiple databases or collections.
+You can chain the client, database, and collection together. Chaining is more convenient if you need to access multiple databases or collections.
```javascript const db = await client.db(`adventureworks`).collection('products').updateOne(query, update, options)
Use the [``Collection.createIndex``](https://mongodb.github.io/node-mongodb-nati
Create a doc with the *product* properties for the `adventureworks` database: * An _id property for the unique identifier of the product.
-* A *category* property. This can be used as the logical partition key.
+* A *category* property. This property can be used as the logical partition key.
* A *name* property. * An inventory *quantity* property. * A *sale* property, indicating whether the product is on sale.
Create a doc in the collection by calling [``Collection.UpdateOne``](https://mongo
### Get a doc
-In Azure Cosmos DB, you can perform a less-expensive [point read](https://devblogs.microsoft.com/cosmosdb/point-reads-versus-queries/) operation by using both the unique identifier (``_id``) and partition key (``category``).
+In Azure Cosmos DB, you can perform a less-expensive [point read](https://devblogs.microsoft.com/cosmosdb/point-reads-versus-queries/) operation by using both the unique identifier (``_id``) and partition key (``category``).
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="read_doc" :::
The output of the app should be similar to this example:
When you no longer need the Azure Cosmos DB MongoDB API account, you can delete the corresponding resource group.
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delet
az group delete --name $resourceGroupName ```
-#### [PowerShell](#tab/azure-powershell)
+### [PowerShell](#tab/azure-powershell)
Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
$parameters = @{
Remove-AzResourceGroup @parameters ```
-#### [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
1. Navigate to the resource group you previously created in the Azure portal.
cosmos-db How To Configure Cosmos Db Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-configure-cosmos-db-trigger.md
description: Learn how to configure logging and connection policy used by Azure
Previously updated : 05/09/2022 Last updated : 07/06/2022
If your Azure Functions project is working with Azure Functions V1 runtime, the
} ```
+## Customizing the user agent
+
+The Azure Functions trigger for Cosmos DB performs requests to the service that are reflected in your [monitoring](../monitor-cosmos-db.md). You can customize the user agent used for the requests from an Azure Function by changing the `userAgentSuffix` in the `host.json` [extra settings](../../azure-functions/functions-bindings-cosmosdb-v2.md?tabs=extensionv4#hostjson-settings):
+
+```js
+{
+ "cosmosDB": {
+ "userAgentSuffix": "MyUniqueIdentifier"
+ }
+}
+```
> [!NOTE] > When hosting your function app in a Consumption plan, each instance has a limit on the number of socket connections that it can maintain. When working with Direct / TCP mode, by design more connections are created and can hit the [Consumption plan limit](../../azure-functions/manage-connections.md#connection-limit), in which case you can either use Gateway mode or host your function app in a [Premium plan](../../azure-functions/functions-premium-plan.md) or a [Dedicated (App Service) plan](../../azure-functions/dedicated-plan.md).
cosmos-db How To Dotnet Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-container.md
ms.devlang: csharp Previously updated : 06/08/2022 Last updated : 07/06/2022
cosmos-db How To Dotnet Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-database.md
ms.devlang: csharp Previously updated : 06/08/2022 Last updated : 07/06/2022
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-create-item.md
ms.devlang: csharp Previously updated : 06/15/2022 Last updated : 07/06/2022
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-get-started.md
ms.devlang: csharp Previously updated : 06/08/2022 Last updated : 07/06/2022
The most common constructor for **CosmosClient** has two parameters:
1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
- :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos D B SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
+ :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
- :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
+ :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL A P I account.":::
As you build your application, your code will primarily interact with four types
The following diagram shows the relationship between these resources.
- Hierarchical diagram showing an Azure Cosmos D B account at the top. The account has two child database nodes. One of the database nodes includes two child container nodes. The other database node includes a single child container node. That single container node has three child item nodes.
+ Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child container nodes. The other database node includes a single child container node. That single container node has three child item nodes.
:::image-end::: Each type of resource is represented by one or more associated .NET classes. Here's a list of the most common classes:
The following guides show you how to use each of these classes to build your app
|--|| | [Create a database](how-to-dotnet-create-database.md) | Create databases | | [Create a container](how-to-dotnet-create-container.md) | Create containers |
+| [Read an item](how-to-dotnet-read-item.md) | Point read a specific item |
+| [Query items](how-to-dotnet-query-items.md) | Query multiple items |
## See also
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-read-item.md
ms.devlang: csharp Previously updated : 06/15/2022 Last updated : 07/06/2022
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
ms.devlang: csharp Previously updated : 06/08/2022 Last updated : 07/06/2022
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](../index.yml).
- :::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
+ :::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos DB.":::
1. On the **Create Azure Cosmos DB Account** page, enter the following information:
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. Select **Go to resource** to go to the Azure Cosmos DB account page.
- :::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
+ :::image type="content" source="media/create-account-portal/cosmos-deployment-complete.png" lightbox="media/create-account-portal/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos DB SQL A P I resource.":::
1. From the Azure Cosmos DB SQL API account page, select the **Keys** navigation menu option.
- :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos D B SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
+ :::image type="content" source="media/get-credentials-portal/cosmos-keys-option.png" lightbox="media/get-credentials-portal/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL A P I account page. The Keys option is highlighted in the navigation menu.":::
1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values in a later step.
- :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
+ :::image type="content" source="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" lightbox="media/get-credentials-portal/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL A P I account.":::
#### [Resource Manager template](#tab/azure-resource-manager)
export COSMOS_KEY="<cosmos-account-PRIMARY-KEY>"
Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB. Azure Cosmos DB has a specific object model used to create and access resources. Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, containers, and items.
- Hierarchical diagram showing an Azure Cosmos D B account at the top. The account has two child database nodes. One of the database nodes includes two child container nodes. The other database node includes a single child container node. That single container node has three child item nodes.
+ Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child container nodes. The other database node includes a single child container node. That single container node has three child item nodes.
:::image-end::: For more information about the hierarchy of different resources, see [working with databases, containers, and items in Azure Cosmos DB](../account-databases-containers-items.md).
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/samples-dotnet.md
ms.devlang: csharp Previously updated : 06/08/2022 Last updated : 07/06/2022 # Examples for Azure Cosmos DB SQL API SDK for .NET+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
-> * [.NET](quickstart-dotnet.md)
+>
+> * [.NET](samples-dotnet.md)
+>
The [cosmos-db-sql-api-dotnet-samples](https://github.com/Azure-Samples/cosmos-db-sql-api-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB SQL API resources.
The sample projects are all self-contained and are designed to be ran individual
Dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources. > [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB SQL API and .NET >](how-to-dotnet-get-started.md)
+> [Get started with Azure Cosmos DB SQL API and .NET](how-to-dotnet-get-started.md)
cost-management-billing Quick Create Budget Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-bicep.md
+
+ Title: Quickstart - Create an Azure budget with Bicep
+description: Quickstart showing how to create a budget with Bicep.
+++++ Last updated : 07/06/2022+++
+# Quickstart: Create a budget with Bicep
+
+Budgets in Cost Management help you plan for and drive organizational accountability. With budgets, you can account for the Azure services you consume or subscribe to during a specific period. They help you inform others about their spending to proactively manage costs and monitor how spending progresses over time. When the budget thresholds you've created are exceeded, notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. This quickstart shows you how to create a budget named 'MyBudget' using Bicep.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+If you have a new subscription, you can't immediately create a budget or use other Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
+
+Budgets are supported for the following types of Azure account types and scopes:
+
+- Azure role-based access control (Azure RBAC) scopes
+ - Management groups
+ - Subscription
+- Enterprise Agreement scopes
+ - Billing account
+ - Department
+ - Enrollment account
+- Individual agreements
+ - Billing account
+- Microsoft Customer Agreement scopes
+ - Billing account
+ - Billing profile
+ - Invoice section
+ - Customer
+- AWS scopes
+ - External account
+ - External subscription
+
+To view budgets, you need at least read access for your Azure account.
+
+For Azure EA subscriptions, you must have read access to view budgets. To create and manage budgets, you must have contributor permission.
+
+The following Azure permissions, or scopes, are supported per subscription for budgets by user and group. For more information about scopes, see [Understand and work with scopes](understand-work-scopes.md).
+
+- Owner: Can create, modify, or delete budgets for a subscription.
+- Contributor and Cost Management contributor: Can create, modify, or delete their own budgets. Can modify the budget amount for budgets created by others.
+- Reader and Cost Management reader: Can view budgets that they have permission to.
+
+For more information about assigning permission to Cost Management data, see [Assign access to Cost Management data](assign-access-acm-data.md).
+
+## No filter
+
+### Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-budget-simple).
++
+One Azure resource is defined in the Bicep file:
+
+- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+
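+A minimal sketch of such a budget resource, assuming the `2021-10-01` API version and the parameter names used in the deploy step below, might look like the following. The quickstart template itself may declare more parameters and notifications, and the amount and threshold shown here are assumptions.
+
+```bicep
+// Hedged sketch of a subscription-scoped budget; not the exact quickstart template.
+targetScope = 'subscription'
+
+param startDate string      // first of a month, YYYY-MM-DD
+param endDate string        // YYYY-MM-DD
+param contactEmails array   // addresses notified when a threshold is exceeded
+
+resource budget 'Microsoft.Consumption/budgets@2021-10-01' = {
+  name: 'MyBudget'
+  properties: {
+    category: 'Cost'
+    amount: 1000 // assumed budget amount
+    timeGrain: 'Monthly'
+    timePeriod: {
+      startDate: startDate
+      endDate: endDate
+    }
+    notifications: {
+      actualGreaterThan90Percent: {
+        enabled: true
+        operator: 'GreaterThan'
+        threshold: 90
+        contactEmails: contactEmails
+      }
+    }
+  }
+}
+```
+
+Because the budget targets the subscription scope, the deploy step that follows uses `az deployment sub create` and `New-AzSubscriptionDeployment` rather than resource-group deployment commands.
+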
+### Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ myContactEmails='("user1@contoso.com", "user2@contoso.com")'
+
+ az deployment sub create --name demoSubDeployment --location centralus --template-file main.bicep --parameters startDate=<start-date> endDate=<end-date> contactEmails=$myContactEmails
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ $myContactEmails = @("user1@contoso.com", "user2@contoso.com")
+
+ New-AzSubscriptionDeployment -Name demoSubDeployment -Location centralus -TemplateFile ./main.bicep -startDate "<start-date>" -endDate "<end-date>" -contactEmails $myContactEmails
+ ```
+
+
+
+ You need to enter the following parameters:
+
+ - **startDate**: Replace **\<start-date\>** with the start date. It must be the first of the month in YYYY-MM-DD format. The start date can't be more than three months in the future, and a past start date must fall within the current timegrain period.
+ - **endDate**: Replace **\<end-date\>** with the end date in YYYY-MM-DD format. If not provided, it defaults to ten years from the start date.
+ - **contactEmails**: First create a variable that holds your emails and then pass that variable. Replace the sample emails with the email addresses to send the budget notification to when the threshold is exceeded.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## One filter
+
+### Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-budget-onefilter).
++
+One Azure resource is defined in the Bicep file:
+
+- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+
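+The one-filter template adds a `filter` property to the same resource. A hedged sketch, reusing the shape from the no-filter section and the parameter names used in the deploy step below:
+
+```bicep
+// Hedged sketch; parameter names match the deploy step, amount is assumed.
+targetScope = 'subscription'
+
+param startDate string
+param endDate string
+param contactEmails array
+param resourceGroupFilterValues array // resource group names to scope the budget to
+
+resource budget 'Microsoft.Consumption/budgets@2021-10-01' = {
+  name: 'MyBudget'
+  properties: {
+    category: 'Cost'
+    amount: 1000 // assumed budget amount
+    timeGrain: 'Monthly'
+    timePeriod: {
+      startDate: startDate
+      endDate: endDate
+    }
+    // Limit the budget to the given resource groups.
+    filter: {
+      dimensions: {
+        name: 'ResourceGroupName'
+        operator: 'In'
+        values: resourceGroupFilterValues
+      }
+    }
+    notifications: {
+      actualGreaterThan90Percent: {
+        enabled: true
+        operator: 'GreaterThan'
+        threshold: 90
+        contactEmails: contactEmails
+      }
+    }
+  }
+}
+```
+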
+### Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ myContactEmails='("user1@contoso.com", "user2@contoso.com")'
+ myRgFilterValues='("resource-group-01", "resource-group-02")'
+
+ az deployment sub create --name demoSubDeployment --location centralus --template-file main.bicep --parameters startDate=<start-date> endDate=<end-date> contactEmails=$myContactEmails resourceGroupFilterValues=$myRgFilterValues
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ $myContactEmails = @("user1@contoso.com", "user2@contoso.com")
+ $myRgFilterValues = @("resource-group-01", "resource-group-02")
+
+ New-AzSubscriptionDeployment -Name demoSubDeployment -Location centralus -TemplateFile ./main.bicep -startDate "<start-date>" -endDate "<end-date>" -contactEmails $myContactEmails -resourceGroupFilterValues $myRgFilterValues
+ ```
+
+
+
+ You need to enter the following parameters:
+
+ - **startDate**: Replace **\<start-date\>** with the start date. It must be the first of the month in YYYY-MM-DD format. The start date can't be more than three months in the future, and a past start date must fall within the current timegrain period.
+ - **endDate**: Replace **\<end-date\>** with the end date in YYYY-MM-DD format. If not provided, it defaults to ten years from the start date.
+ - **contactEmails**: First create a variable that holds your emails and then pass that variable. Replace the sample emails with the email addresses to send the budget notification to when the threshold is exceeded.
+ - **resourceGroupFilterValues**: First create a variable that holds your resource group filter values and then pass that variable. Replace the sample filter values with the set of values for your resource group filter.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Two or more filters
+
+### Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-budget).
++
+One Azure resource is defined in the Bicep file:
+
+- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+
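+With two or more filters, the `filter` property combines the clauses with a logical `and`. A hedged sketch with the parameter names used in the deploy step below:
+
+```bicep
+// Hedged sketch; parameter names match the deploy step, amount is assumed.
+targetScope = 'subscription'
+
+param startDate string
+param endDate string
+param contactEmails array
+param contactGroups array // action group resource IDs
+param resourceGroupFilterValues array
+param meterCategoryFilterValues array
+
+resource budget 'Microsoft.Consumption/budgets@2021-10-01' = {
+  name: 'MyBudget'
+  properties: {
+    category: 'Cost'
+    amount: 1000 // assumed budget amount
+    timeGrain: 'Monthly'
+    timePeriod: {
+      startDate: startDate
+      endDate: endDate
+    }
+    // Both clauses must match: resource group AND meter category.
+    filter: {
+      and: [
+        {
+          dimensions: {
+            name: 'ResourceGroupName'
+            operator: 'In'
+            values: resourceGroupFilterValues
+          }
+        }
+        {
+          dimensions: {
+            name: 'MeterCategory'
+            operator: 'In'
+            values: meterCategoryFilterValues
+          }
+        }
+      ]
+    }
+    notifications: {
+      actualGreaterThan90Percent: {
+        enabled: true
+        operator: 'GreaterThan'
+        threshold: 90
+        contactEmails: contactEmails
+        contactGroups: contactGroups
+      }
+    }
+  }
+}
+```
+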
+### Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ myContactEmails='("user1@contoso.com", "user2@contoso.com")'
+ myContactGroups='("action-group-resource-id-01", "action-group-resource-id-02")'
+ myRgFilterValues='("resource-group-01", "resource-group-02")'
+ myMeterCategoryFilterValues='("meter-category-01", "meter-category-02")'
+
+ az deployment sub create --name demoSubDeployment --location centralus --template-file main.bicep --parameters startDate=<start-date> endDate=<end-date> contactEmails=$myContactEmails contactGroups=$myContactGroups resourceGroupFilterValues=$myRgFilterValues meterCategoryFilterValues=$myMeterCategoryFilterValues
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ $myContactEmails = @("user1@contoso.com", "user2@contoso.com")
+ $myContactGroups = @("action-group-resource-id-01", "action-group-resource-id-02")
+ $myRgFilterValues = @("resource-group-01", "resource-group-02")
+ $myMeterCategoryFilterValues = @("meter-category-01", "meter-category-02")
++
+ New-AzSubscriptionDeployment -Name demoSubDeployment -Location centralus -TemplateFile ./main.bicep -startDate "<start-date>" -endDate "<end-date>" -contactEmails $myContactEmails -contactGroups $myContactGroups -resourceGroupFilterValues $myRgFilterValues -meterCategoryFilterValues $myMeterCategoryFilterValues
+ ```
+
+
+
+ You need to enter the following parameters:
+
+ - **startDate**: Replace **\<start-date\>** with the start date. It must be the first of the month in YYYY-MM-DD format. The start date can't be more than three months in the future, and a past start date must fall within the current timegrain period.
+ - **endDate**: Replace **\<end-date\>** with the end date in YYYY-MM-DD format. If not provided, it defaults to ten years from the start date.
+ - **contactEmails**: First create a variable that holds your emails and then pass that variable. Replace the sample emails with the email addresses to send the budget notification to when the threshold is exceeded.
+ - **contactGroups**: First create a variable that holds your contact groups and then pass that variable. Replace the sample contact groups with the list of action groups to send the budget notification to when the threshold is exceeded.
+ - **resourceGroupFilterValues**: First create a variable that holds your resource group filter values and then pass that variable. Replace the sample filter values with the set of values for your resource group filter.
+ - **meterCategoryFilterValues**: First create a variable that holds your meter category filter values and then pass that variable. Replace the sample filter values within parentheses with the set of values for your meter category filter.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az consumption budget list
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzConsumptionBudget
+```
+++
+## Clean up resources
+
+When you no longer need the budget, use the Azure portal, Azure CLI, or Azure PowerShell to delete it:
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az consumption budget delete --budget-name MyBudget
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzConsumptionBudget -Name MyBudget
+```
+++
+## Next steps
+
+In this quickstart, you created an Azure budget and deployed it using Bicep. To learn more about Cost Management and Billing and Bicep, continue on to the articles below.
+
+- Read the [Cost Management and Billing](../cost-management-billing-overview.md) overview.
+- [Create budgets](tutorial-acm-create-budgets.md) in the Azure portal.
+- Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md).
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
On 1 October 2021, automatic payments in India may block some credit card transa
[Learn more about the Reserve Bank of India directive; Processing of e-mandate on cards for recurring transactions](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=11668&Mode=0)
-On 1 July 2022, Microsoft and other online merchants will no longer be storing credit card information. To comply with this regulation Microsoft will be removing all stored card details from Microsoft Azure. To avoid service interruption, you will need to add a payment method and make a one-time payment for all invoices.
+On 30 September 2022, Microsoft and other online merchants will no longer be storing credit card information. To comply with this regulation, Microsoft will be removing all stored card details from Microsoft Azure. To avoid service interruption, you'll need to add and verify your payment method to make a payment in the Azure portal for all invoices.
-[Learn about the Reserve Bank of India directive; Restriction on storage of actual card data ](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12211)
+[Learn about the Reserve Bank of India directive; Restriction on storage of actual card data](https://rbidocs.rbi.org.in/rdocs/notification/PDFs/DPSSC09B09841EF3746A0A7DC4783AC90C8F3.PDF)
## Pay by default payment method
data-factory Concepts Data Flow Performance Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-transformations.md
If your data is not evenly partitioned after a transformation, you can use the [
> [!TIP] > If you repartition your data, but have downstream transformations that reshuffle your data, use hash partitioning on a column used as a join key.
+> [!NOTE]
+> Transformations inside your data flow (with the exception of the Sink transformation) do not modify the file and folder partitioning of data at rest. Partitioning in each transformation repartitions data inside the data frames of the temporary serverless Spark cluster that ADF manages for each of your data flow executions.
++ ## Next steps - [Data flow performance overview](concepts-data-flow-performance.md)
data-factory Data Flow Assert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-assert.md
Previously updated : 06/09/2022 Last updated : 06/23/2022 # Assert transformation in mapping data flow
By default, the assert transformation will include NULLs in row assertion evalua
## Direct assert row failures
-When an assertion fails, you can optionally direct those error rows to a file in Azure by using the "Errors" tab on the sink transformation.
+When an assertion fails, you can optionally direct those error rows to a file in Azure by using the "Errors" tab on the sink transformation. You can also choose to ignore error rows on the sink transformation so that rows with assertion failures aren't output at all.
## Examples
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
Previously updated : 03/25/2022 Last updated : 06/23/2022 # Sink transformation in mapping data flow
Below is a video tutorial on how to use database error row handling automaticall
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4IWne]
-For assert failure rows, you can use the Assert transformation upstream in your data flow and then redirect failed assertions to an output file here in the sink errors tab.
+For assert failure rows, you can use the Assert transformation upstream in your data flow and then redirect failed assertions to an output file here in the sink errors tab. You can also choose to ignore rows with assertion failures so that those rows aren't output to the sink destination data store at all.
:::image type="content" source="media/data-flow/assert-errors.png" alt-text="Assert failure rows":::
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
In Settings tab, you will find three more options to optimize delta sink transfo
* When **Auto compact** is enabled, after an individual write, the transformation checks whether files can be compacted further, and runs a quick OPTIMIZE job (with 128 MB file sizes instead of 1 GB) to further compact files for partitions that have the largest number of small files. Auto compaction helps coalesce a large number of small files into a smaller number of large files. Auto compaction only kicks in when there are at least 50 files. Once a compaction operation is performed, it creates a new version of the table and writes a new file containing the data of several previous files in a compact, compressed form.
-* When **Optimize write** is enabled, sink transformation dynamically optimizes partition sizes based on the actual data by attempting to write out 128 MB files for each table partition. This is an approximate size and can vary depending on dataset characteristics. Optimized writes improve the overall efficiency of the *writes and subsequent reads*. It organizes partitions such that the performance of subsequent reads will improve.
+* When **Optimize write** is enabled, the sink transformation dynamically optimizes partition sizes based on the actual data by attempting to write out 128 MB files for each table partition. This is an approximate size and can vary depending on dataset characteristics. Optimized writes improve the overall efficiency of the *writes and subsequent reads*. It organizes partitions such that the performance of subsequent reads will improve.
+> [!TIP]
+> The optimized write process will slow down your overall ETL job because the Sink will issue the Spark Delta Lake Optimize command after your data is processed. It is recommended to use Optimized Write sparingly. For example, if you have an hourly data pipeline, execute a data flow with Optimized Write daily.
### Known limitations
data-factory How To Develop Azure Ssis Ir Licensed Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-develop-azure-ssis-ir-licensed-components.md
Title: Install licensed components for Azure-SSIS integration runtime
-description: Learn how an ISV can develop and install paid or licensed custom components for the Azure-SSIS integration runtime
+description: Learn how an ISV can develop and install paid or licensed custom components for the Azure-SSIS integration runtime, including when a self-hosted integration runtime is used as a proxy
Last updated 02/17/2022
[!INCLUDE[appliesto-adf-asa-preview-md](includes/appliesto-adf-asa-preview-md.md)]
-This article describes how an ISV can develop and install paid or licensed custom components for SQL Server Integration Services (SSIS) packages that run in Azure in the Azure-SSIS integration runtime.
+This article describes how an ISV can develop and install paid or licensed custom components for SQL Server Integration Services (SSIS) packages that run in Azure in the Azure-SSIS integration runtime, and how to enable those components when a self-hosted integration runtime is used as a proxy.
-## The problem
+## Install paid or licensed custom components for the Azure-SSIS integration runtime
+### The problem
The nature of the Azure-SSIS integration runtime presents several challenges, which make the typical licensing methods used for the on-premises installation of custom components inadequate. As a result, the Azure-SSIS IR requires a different approach.
The nature of the Azure-SSIS integration runtime presents several challenges, wh
- You can also scale the Azure-SSIS IR in or out, so that the number of nodes can shrink or expand at any time.
-## The solution
+### The solution
As a result of the limitations of traditional licensing methods described in the previous section, the Azure-SSIS IR provides a new solution. This solution uses Windows environment variables and SSIS system variables for the license binding and validation of third-party components. ISVs can use these variables to obtain unique and persistent info for an Azure-SSIS IR, such as Cluster ID and Cluster Node Count. With this info, ISVs can then bind the license for their component to an Azure-SSIS IR *as a cluster*. This binding uses an ID that doesn't change when customers start or stop, scale up or down, scale in or out, or reconfigure the Azure-SSIS IR in any way.
The following diagram shows the typical installation, activation and license bin
} ```
+## Enable custom/3rd party data flow components using self-hosted IR as a proxy
+
+To enable your custom/3rd party data flow components to access data on premises using self-hosted IR as a proxy for Azure-SSIS IR, follow these instructions:
+
+1. Install your custom/3rd party data flow components targeting SQL Server 2017 on Azure-SSIS IR via [standard/express custom setups](./how-to-configure-azure-ssis-ir-custom-setup.md).
+
+1. Create the following DTSPath registry keys on self-hosted IR if they don't exist already:
+ 1. `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\140\SSIS\Setup\DTSPath` set to `C:\Program Files\Microsoft SQL Server\140\DTS\`
+ 1. `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Microsoft SQL Server\140\SSIS\Setup\DTSPath` set to `C:\Program Files (x86)\Microsoft SQL Server\140\DTS\`
+
+1. Install your custom/3rd party data flow components targeting SQL Server 2017 on self-hosted IR under the DTSPath above and ensure that your installation process:
+
+ 1. Creates `<DTSPath>`, `<DTSPath>/Connections`, `<DTSPath>/PipelineComponents`, and `<DTSPath>/UpgradeMappings` folders if they don't exist already.
+
+ 1. Creates your own XML file for extension mappings in `<DTSPath>/UpgradeMappings` folder.
+
+ 1. Installs all assemblies referenced by your custom/3rd party data flow component assemblies in the global assembly cache (GAC).
+
+Here are examples from our partners, [Theobald Software](https://kb.theobald-software.com/xtract-is/XIS-for-Azure-SHIR) and [Aecorsoft](https://www.aecorsoft.com/blog/2020/11/8/using-azure-data-factory-to-bring-sap-data-to-azure-via-self-hosted-ir-and-ssis-ir), who have adapted their data flow components to use our express custom setup and self-hosted IR as a proxy for Azure-SSIS IR.
++ ## ISV partners You can find a list of ISV partners who have adapted their components and extensions for the Azure-SSIS IR at the end of this blog post - [Enterprise Edition, Custom Setup, and 3rd Party Extensibility for SSIS in ADF](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Enterprise-Edition-Custom-Setup-and-3rd-Party-Extensibility-for/ba-p/388360).
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
The on-premises staging tasks and Execute SQL/Process Tasks that run on your sel
The cloud staging tasks that run on your Azure-SSIS IR are not be billed separately, but your running Azure-SSIS IR is billed as specified in the [Azure-SSIS IR pricing](https://azure.microsoft.com/pricing/details/data-factory/ssis/) article.
-## Enable custom/3rd party data flow components
-
-To enable your custom/3rd party data flow components to access data on premises using self-hosted IR as a proxy for Azure-SSIS IR, follow these instructions:
-
-1. Install your custom/3rd party data flow components targeting SQL Server 2017 on Azure-SSIS IR via [standard/express custom setups](./how-to-configure-azure-ssis-ir-custom-setup.md).
-
-1. Create the following DTSPath registry keys on self-hosted IR if they don't exist already:
- 1. `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\140\SSIS\Setup\DTSPath` set to `C:\Program Files\Microsoft SQL Server\140\DTS\`
- 1. `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Microsoft SQL Server\140\SSIS\Setup\DTSPath` set to `C:\Program Files (x86)\Microsoft SQL Server\140\DTS\`
-
-1. Install your custom/3rd party data flow components targeting SQL Server 2017 on self-hosted IR under the DTSPath above and ensure that your installation process:
-
- 1. Creates `<DTSPath>`, `<DTSPath>/Connections`, `<DTSPath>/PipelineComponents`, and `<DTSPath>/UpgradeMappings` folders if they don't exist already.
-
- 1. Creates your own XML file for extension mappings in `<DTSPath>/UpgradeMappings` folder.
-
- 1. Installs all assemblies referenced by your custom/3rd party data flow component assemblies in the global assembly cache (GAC).
-
-Here are examples from our partners, [Theobald Software](https://kb.theobald-software.com/xtract-is/XIS-for-Azure-SHIR) and [Aecorsoft](https://www.aecorsoft.com/blog/2020/11/8/using-azure-data-factory-to-bring-sap-data-to-azure-via-self-hosted-ir-and-ssis-ir), who have adapted their data flow components to use our express custom setup and self-hosted IR as a proxy for Azure-SSIS IR.
- ## Enforce TLS 1.2 If you need to access data stores that have been configured to use only the strongest cryptography/most secure network protocol (TLS 1.2), including your Azure Blob Storage for staging, you must enable only TLS 1.2 and disable older SSL/TLS versions at the same time on your self-hosted IR. To do so, you can download and run the *main.cmd* script that we provide in the *CustomSetupScript/UserScenarios/TLS 1.2* folder of our public preview blob container. Using [Azure Storage Explorer](https://storageexplorer.com/), you can connect to our public preview blob container by entering the following SAS URI:
databox-online Azure Stack Edge Gpu Overview Gpu Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-overview-gpu-virtual-machines.md
Previously updated : 06/28/2022 Last updated : 07/05/2022 #Customer intent: As an IT admin, I need to understand how to deploy and manage GPU-accelerated VM workloads on my Azure Stack Edge Pro GPU devices.
GPU-accelerated workloads on an Azure Stack Edge Pro GPU device require a GPU vi
## About GPU VMs
-Your Azure Stack Edge devices may be equipped with 1 or 2 of Nvidia's Tesla T4 GPU. To deploy GPU-accelerated VM workloads on these devices, use GPU-optimized VM sizes. For example, the NC T4 v3-series should be used to deploy inference workloads featuring T4 GPUs. For more information, see [NC T4 v3-series VMs](../virtual-machines/nct4-v3-series.md).
+Your Azure Stack Edge devices may be equipped with 1 or 2 of Nvidia's Tesla T4 or Tensor Core A2 GPU. To deploy GPU-accelerated VM workloads on these devices, use GPU-optimized VM sizes. The GPU VM chosen should match with the make of the GPU on your Azure Stack Edge device. For more information, see [Supported N series GPU optimized VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized).
To take advantage of the GPU capabilities of Azure N-series VMs, Nvidia GPU drivers must be installed. The Nvidia GPU driver extension installs appropriate Nvidia CUDA or GRID drivers. You can [install the GPU extensions using templates or via the Azure portal](#gpu-vm-deployment).
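For illustration, here's a minimal Bicep sketch of attaching the Nvidia GPU driver extension to an existing Linux VM. The publisher and type shown are the standard Nvidia driver extension identifiers, but the handler version, parameter names, and API version are assumptions, so verify them against the current extension documentation before deploying.

```bicep
// Hedged sketch: install Nvidia GPU drivers on an existing Linux VM through the
// VM extension model. typeHandlerVersion is an assumption; verify before use.
param location string = resourceGroup().location
param vmName string

resource vm 'Microsoft.Compute/virtualMachines@2022-03-01' existing = {
  name: vmName
}

resource nvidiaDriver 'Microsoft.Compute/virtualMachines/extensions@2022-03-01' = {
  parent: vm
  name: 'NvidiaGpuDriverLinux'
  location: location
  properties: {
    publisher: 'Microsoft.HpcCompute'
    type: 'NvidiaGpuDriverLinux'
    typeHandlerVersion: '1.6'
    autoUpgradeMinorVersion: true
  }
}
```

A Windows counterpart of the extension (`NvidiaGpuDriverWindows`) follows the same pattern.
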
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
Title: Container security architecture in Microsoft Defender for Cloud description: Learn about the architecture of Microsoft Defender for Containers for each container platform-- Previously updated : 05/31/2022 Last updated : 06/19/2022 # Defender for Containers architecture
To protect your Kubernetes containers, Defender for Containers receives and anal
When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and frictionless.
-The **Defender profile (preview)** deployed to each node provides the runtime protections and collects signals from nodes using [eBPF technology](https://ebpf.io/).
+The **Defender profile** deployed to each node provides the runtime protections and collects signals from nodes using [eBPF technology](https://ebpf.io/).
The **Azure Policy add-on for Kubernetes** collects cluster and workload configuration for admission control policies as explained in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
-> [!NOTE]
-> Defender for Containers **Defender profile** is a preview feature.
- :::image type="content" source="./media/defender-for-containers/architecture-aks-cluster.png" alt-text="Diagram of high-level architecture of the interaction between Microsoft Defender for Containers, Azure Kubernetes Service, and Azure Policy." lightbox="./media/defender-for-containers/architecture-aks-cluster.png"::: ### Defender profile component details
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Defender for Containers protects your clusters whether they're running in:
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md).
-You can learn more by watching these video from the Defender for Cloud in the Field video series:
+You can learn more by watching these videos from the Defender for Cloud in the Field video series:
- [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md) - [Protect Containers in GCP with Defender for Containers](episode-ten.md)
A full list of supported alerts is available in the [reference table of all Defe
[!INCLUDE [Remove the extension](./includes/defender-for-containers-remove-extension.md)] ::: zone-end + ::: zone-end ::: zone pivot="defender-for-container-aks"
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
You can learn more by watching this video from the Defender for Cloud in the Fie
| Aspect | Details | |--|--|
-| Release state: | General availability (GA)<br> Certain features are in preview, for a full list see the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section. |
+| Release state: | General availability (GA)<br> Certain features are in preview, for a full list see the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section. |
| Feature availability | Refer to the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section for additional information on feature release state and availability.| | Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) | | Required roles and permissions: | • To auto provision the required components, see the [permissions for each of the components](enable-data-collection.md?tabs=autoprovision-containers)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
You can learn more by watching this video from the Defender for Cloud in the Fie
Defender for Containers helps with the core aspects of container security: -- [**Environment hardening**](#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats.
+- [**Environment hardening**](#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.
- [**Vulnerability assessment**](#vulnerability-assessment) - Vulnerability assessment and management tools for images **stored** in ACR registries and **running** in Azure Kubernetes Service.
No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale
### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?
-No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension isn't needed and may result in additional charges.
+No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension isn't needed and may result in extra charges.
## Learn More
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md
Title: Microsoft Defender for open-source relational databases - the benefits and features description: Learn about the benefits and features of Microsoft Defender for open-source relational databases such as PostgreSQL, MySQL, and MariaDB Previously updated : 01/17/2022 Last updated : 06/19/2022
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Defender for Cloud provides real-time threat protection for your Azure Kubernetes Service (AKS) containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the analysis of the Kubernetes audit logs.
-Host-level threat detection for your Linux AKS nodes is available if you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on an Azure Kubernetes Service virtual machine scale set, the Log Analytics agent is not currently supported.
+Host-level threat detection for your Linux AKS nodes is available if you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on an Azure Kubernetes Service virtual machine scale set, the Log Analytics agent isn't currently supported.
## Availability
Our global team of security researchers constantly monitor the threat landscape.
In addition, Microsoft Defender for Kubernetes provides **cluster-level threat protection** by monitoring your clusters' logs. This means that security alerts are only triggered for actions and deployments that occur *after* you've enabled Defender for Kubernetes on your subscription.
-Examples of security events that Microsoft Defender for Kubernetes monitors include:
+Examples of security events that Microsoft Defender for Kubernetes monitors include:
- Exposed Kubernetes dashboards - Creation of high privileged roles
No. Subscriptions that have either Microsoft Defender for Kubernetes or Microsof
### Does the new plan reflect a price increase?
-The new comprehensive Container security plan combines Kubernetes protection and container registry image scanning, and removes the previous dependency on the (paid) Defender for Servers plan. Pricing is dependant on your container architecture and coverage. For example, your price may change depending on the number of images in your Container Registry, or the number of Kubernetes nodes among other reasons.
+The new comprehensive Container security plan combines Kubernetes protection and container registry image scanning, and removes the previous dependency on the (paid) Defender for Servers plan. Pricing is dependent on your container architecture and coverage. For example, your price may change depending on the number of images in your Container Registry, or the number of Kubernetes nodes among other reasons.
### How can I calculate my potential price change? In order to help you understand your costs, Defender for Cloud offers the Price Estimation workbook as part of its published Workbooks. The Price Estimation workbook allows you to estimate the expected price for Defender for Cloud plans before enabling them.
-Your price is dependant on your container architecture and coverage. For example, your price may change depending on the number of images in your Container Registry, or the number of Kubernetes nodes among other reasons.
+Your price is dependent on your container architecture and coverage. For example, your price may change depending on the number of images in your Container Registry, or the number of Kubernetes nodes among other reasons.
You can learn [how to enable and use](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-price-estimation-dashboard/ba-p/3247622) the Price Estimation workbook.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
Title: Auto-deploy agents for Microsoft Defender for Cloud | Microsoft Docs
+ Title: Auto-deploy agents for Microsoft Defender for Cloud | Microsoft Docs
description: This article describes how to set up auto provisioning of the Log Analytics agent and other agents and extensions used by Microsoft Defender for Cloud -- Previously updated : 04/28/2022 Last updated : 07/06/2022 # Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud
By default, auto provisioning is enabled when you enable Defender for Containers
| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters | ||-||
-| Release state: | • Defender profile: Preview<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
+| Release state: | • Defender profile: GA<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | | Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | | Supported destinations: | The AKS Defender profile only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [See Kubernetes distributions supported for Arc-enabled Kubernetes](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#kubernetes-distributions-and-configurations) |
Any of the agents and extensions described on this page *can* be installed manua
We recommend enabling auto provisioning, but it's disabled by default. ## How does auto provisioning work?
-Defender for Cloud's auto provisioning settings have a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type.
+Defender for Cloud's auto provisioning settings page has a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type.
> [!TIP] > Learn more about Azure Policy effects including deploy if not exists in [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
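As a hedged illustration of the mechanism (not the exact built-in assignment that Defender for Cloud creates), a subscription-scoped **Deploy if not exists** policy assignment in Bicep might look like the following. The definition GUID is a placeholder, the names are illustrative, and a DINE assignment's managed identity still needs role assignments before it can remediate resources.

```bicep
targetScope = 'subscription'

// Placeholder GUID: substitute the built-in policy definition you want to assign.
param policyDefinitionGuid string = '00000000-0000-0000-0000-000000000000'

resource autoProvision 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: 'autoProvisionExtension'
  location: deployment().location
  identity: {
    type: 'SystemAssigned' // DeployIfNotExists policies need an identity to deploy resources
  }
  properties: {
    policyDefinitionId: tenantResourceId('Microsoft.Authorization/policyDefinitions', policyDefinitionGuid)
    displayName: 'Auto provision monitoring extension'
  }
}
```
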
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 06/26/2022 Last updated : 07/05/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## July 2022
+
+Updates in July include:
+
+- [General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection](#general-availability-ga-of-the-cloud-native-security-agent-for-kubernetes-runtime-protection)
+### General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection
+
+We're excited to share that the Cloud-native security agent for Kubernetes runtime protection is now generally available (GA)!
+
+The production deployments of Kubernetes clusters continue to grow as customers containerize more of their applications. To assist with this growth, the Defender for Containers team has developed a cloud-native, Kubernetes-oriented security agent.
+
+The new security agent is a Kubernetes DaemonSet based on eBPF technology, and it's fully integrated into AKS clusters as part of the AKS Security Profile.
+
+You can enable the security agent through auto provisioning, the recommendations flow, the AKS RP, or at scale by using Azure Policy.
+
+You can [deploy the Defender profile](/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-aks#deploy-the-defender-profile) today on your AKS clusters.
+
+With this announcement, the runtime protection - threat detection (workload) is now also generally available.
+
+Learn more about the Defender for Container's [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+
+You can also review [all available alerts](alerts-reference.md#alerts-k8scluster).
+
+If you're using the preview version, note that the `AKS-AzureDefender` feature flag is no longer required.
+ ## June 2022 Updates in June include:
You can now also group your alerts by resource group to view all of your alerts
Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution&preserve-view=true) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows servers 2012R2 and 2016 used the MDE legacy solution dependent on Log Analytics agent.
-Now, the new unified solution is available for all machines in both plans, for both Azure subscriptions and multi-cloud connectors. For Azure subscriptions with Servers plan 2 that enabled MDE integration *after* 06-20-2022, the unified solution is enabled by default for all machines Azure subscriptions with the Defender for Servers Plan 2 enabled with MDE integration *before* 06-20-2022 can now enable unified solution installation for Windows servers 2012R2 and 2016 through the dedicated button in the Integrations page:
+Now, the new unified solution is available for all machines in both plans, for both Azure subscriptions and multi-cloud connectors. For Azure subscriptions with Servers plan 2 that enabled MDE integration *after* June 20, 2022, the unified solution is enabled by default for all machines. Azure subscriptions with the Defender for Servers Plan 2 enabled with MDE integration *before* June 20, 2022 can now enable unified solution installation for Windows servers 2012R2 and 2016 through the dedicated button in the Integrations page:
:::image type="content" source="media/integration-defender-for-endpoint/enable-unified-solution.png" alt-text="The integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint, is enabled." lightbox="media/integration-defender-for-endpoint/enable-unified-solution.png":::
These alerts inform you of an access denied anomaly, is detected for any of your
| Alert (alert type) | Description | MITRE tactics | Severity | |--|--|--|--| | **Unusual access denied - User accessing high volume of key vaults denied**<br>(KV_DeniedAccountVolumeAnomaly) | A user or service principal has attempted access to anomalously high volume of key vaults in the last 24 hours. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. We recommend further investigations. | Discovery | Low |
-| **Unusual access denied - Unusual user accessing key vault denied**<br>(KV_UserAccessDeniedAnomaly) | A key vault access was attempted by a user that does not normally access it, this anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. | Initial Access, Discovery | Low |
+| **Unusual access denied - Unusual user accessing key vault denied**<br>(KV_UserAccessDeniedAnomaly) | A key vault access was attempted by a user that doesn't normally access it. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access to the key vault and the secrets contained within it. | Initial Access, Discovery | Low |
## May 2022
Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.
### JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)
-When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatically evaluate the network configuration of your instance's security groups and recommend which instances need protection for their exposed management ports. This is similar to how JIT works with Azure. When you onboard unprotected EC2 instances, JIT will block public access to the management ports and only open them with authorized requests for a limited time frame.
+When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatically evaluate the network configuration of your instance's security groups and recommend which instances need protection for their exposed management ports. This is similar to how JIT works with Azure. When you onboard unprotected EC2 instances, JIT will block public access to the management ports, and only open them with authorized requests for a limited time frame.
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws)
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
In contrast, when working with locally managed sensors:
- Sensor names can be updated in the sensor console.
-### Devices monitored by Defender for IoT
+### What is a Defender for IoT committed device?
[!INCLUDE [devices-inventoried](includes/devices-inventoried.md)]
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
If you're working with an OT network, we recommend that you identify system requ
- Research your own network architecture and monitor bandwidth. Check requirements for creating certificates and other network details, and clarify the sensor appliances you'll need for your own network load.
- Calculate the approximate number of devices you'll be monitoring. Devices can be added in intervals of **100**, such as **100**, **200**, **300**. The numbers of monitored devices are called *committed devices*.
+ Calculate the approximate number of devices you'll be monitoring. Devices can be added in intervals of **100**, such as **100**, **200**, **300**. The numbers of monitored devices are called *committed devices*. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
Microsoft Defender for IoT supports both physical and virtual deployments. For physical deployments, you'll be able to purchase certified, preconfigured appliances, or download software to install yourself.
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
You can view device information from connected sensors by using the *device inventory* in the on-premises management console. This feature gives you a comprehensive view of all network information. Use import, export, and filtering tools to manage this information. The status information about the connected sensor versions also appears.
-For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
## View the device inventory from an on-premises management console
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
Options are available to:
- Create groups for display in the device map.
-For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
## View device attributes in the inventory
Deleting inactive devices helps:
- Better evaluate committed devices when managing subscriptions - Reduce clutter on your screen
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
+ ### View inactive devices You can filter the inventory to display devices that are inactive:
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Title: Manage Defender for IoT plans on Azure subscriptions description: Manage Defender for IoT plans on your Azure subscriptions. Previously updated : 11/09/2021 Last updated : 07/06/2022
Before you add a plan or services, we recommend that you have a sense of how man
Users can also work with a trial commitment, which supports monitoring a limited number of devices for 30 days. For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
-### What's a device?
-- ## Prerequisites Before you onboard a plan, verify that:
If you already have access to an Azure subscription, but it isn't listed when ad
Azure **Security admin**, **Subscription owners** and **Subscription contributors** can onboard, update, and remove Defender for IoT. For more information on user permissions, see [Defender for IoT user permissions](getting-started.md#permissions).
-### Calculate the number of devices you need to monitor
+### Defender for IoT committed devices
When onboarding or editing your Defender for IoT plan, you'll need to know how many devices you want to monitor.
-**To calculate the number of devices you need to monitor**:
-Collect the total number of devices in your network and remove:
+**To calculate the number of devices you need to monitor**:
-- **Duplicate devices that have the same IP or MAC address**. When detected, the duplicates are automatically removed by Defender for IoT.
+We recommend making an initial estimate of your committed devices when onboarding your Defender for IoT plan.
-- **Duplicate devices that have the same ID**. These are the same devices, seen by the same sensor, with different field values. For such devices, check the last time each device had activity and use the latest device only.
+1. Collect the total number of devices in your network.
-- **Inactive devices**, with no traffic for more than 60 days.
+1. Remove any devices that are *not* considered committed devices by Defender for IoT.
-- **Broadcast / multicast devices**. These represent unique addresses but not unique devices.
+ If you are also a Defender for Endpoint customer, you can identify devices managed by Defender for Endpoint in the Defender for Endpoint **Device inventory** page. In the **Endpoints** tab, filter for devices by **Onboarding status**. For more information, see [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery).
-For more information, see [What's a device?](#whats-a-device)
+After you've set up your network sensor and have full visibility into all devices, you can [edit your plan](#edit-a-plan) to update the number of committed devices as needed.
## Onboard a Defender for IoT plan to a subscription
-This procedure describes how to add a Defender for IoT plan to an Azure subscription.
+This procedure describes how to add a Defender for IoT plan to an Azure subscription.
**To onboard a Defender for IoT plan to a subscription:**
defender-for-iot How To Work With Device Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-device-notifications.md
The following table describes the notification event types you might receive, al
| Type | Description | Responses | |--|--|--| | New IP detected | A new IP address is associated with the device. Five scenarios might be detected: <br /><br /> An additional IP address was associated with a device. This device is also associated with an existing MAC address.<br /><br /> A new IP address was detected for a device that's using an existing MAC address. Currently the device does not communicate by using an IP address.<br /> <br /> A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> A new IP address was detected for a device that's using a virtual IP address. | **Set Additional IP to Device** (merge devices) <br /> <br />**Replace Existing IP** <br /> <br /> **Dismiss**<br /> Remove the notification. |
-| Inactive devices | Traffic wasn't detected on a device for more than 60 days. | **Delete** <br /> If this device isn't part of your network, remove it. <br /><br />**Dismiss** <br /> Remove the notification if the device is part of your network. If the device is inactive (for example, because it's mistakenly disconnected from the network), dismiss the notification and reconnect the device. |
+| Inactive devices | Traffic wasn't detected on a device for more than 60 days. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device) | **Delete** <br /> If this device isn't part of your network, remove it. <br /><br />**Dismiss** <br /> Remove the notification if the device is part of your network. If the device is inactive (for example, because it's mistakenly disconnected from the network), dismiss the notification and reconnect the device. |
| New OT devices | A subnet includes an OT device that's not defined in an ICS subnet. <br /><br /> Each subnet that contains at least one OT device can be defined as an ICS subnet. This helps differentiate between OT and IT devices on the map. | **Set as ICS Subnet** <br /> <br /> **Dismiss** <br />Remove the notification if the device isn't part of the subnet. | | No subnets configured | No subnets are currently configured in your network. <br /><br /> Configure subnets for better representation in the map and the ability to differentiate between OT and IT devices. | **Open Subnets Configuration** and configure subnets. <br /><br />**Dismiss** <br /> Remove the notification. | | Operating system changes | One or more new operating systems have been associated with the device. | Select the name of the new OS that you want to associate with the device.<br /><br /> **Dismiss** <br /> Remove the notification. |
defender-for-iot Pre Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/pre-deployment-checklist.md
Review your industrial network architecture to define the proper location for th
:::image type="content" source="media/how-to-set-up-your-network/backbone-switch.png" alt-text="Diagram of the industrial OT environment for the global network."::: > [!NOTE]
- > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
+ > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
-1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You'll need this information when onboarding your subscription to Defender for IoT in the Azure portal. During the onboarding process, you'll be prompted to enter the number of devices in increments of 1000.
+1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You'll need this information when onboarding your subscription to Defender for IoT in the Azure portal. During the onboarding process, you'll be prompted to enter the number of devices in increments of 1000. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
1. **(Optional) Subnet list** - Provide a subnet list for the production networks and a description (optional).
For more information, see:
- [Quickstart: Get started with Defender for IoT](getting-started.md) - [Best practices for planning your OT network monitoring](best-practices/plan-network-monitoring.md)-- [Prepare your network for Microsoft Defender for IoT](how-to-set-up-your-network.md)
+- [Prepare your network for Microsoft Defender for IoT](how-to-set-up-your-network.md)
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
This glossary provides a brief description of important terms and concepts for t
|--|--|--| | **Data mining** | Generate comprehensive and granular reports about your network devices:<br /><br />- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.<br /><br />- **Forensics**: Reports based on historical data for investigative reports.<br /><br />- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.<br /><br />- **visibility**: Reports that cover all query items to view all baseline parameters of your network.<br /><br />Save data-mining reports for read-only users to view. | **[Baseline](#b)<br /><br />[Reports](#r)** | | **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **[Sensor](#s)<br /><br />[On-premises management console](#o)** |
-| **Device inventories** | Defender for IoT considers any of the following as single and unique network devices:<br><br>- Managed or un-managed standalone IT/OT/IoT devices, with one or more NICs<br>- Devices with multiple backplane components, including all racks, slots, or modules<br>- Devices that provide network infrastructure, such as switches or routers with multiple NICs<br><br>Monitored devices are listed in the **Device inventory** pages on the Azure portal, sensor console, and the on-premises management console. Data integration features let you enhance device data with details from other enterprise resources, such as CMDBs, DNS, firewalls, and Web APIs. <br><br>The following items are not monitored as devices, and do not appear in the Defender for IoT device inventories: <br>- Public internet IP addresses<br>- Multi-cast groups<br>- Broadcast groups<br><br>Devices that are inactive for more than 60 days are classified as *inactive* inventory devices.<br>The data integration capabilities of the on-premises management console let you enhance the data in the device inventory with information from other enterprise resources. Example resources are CMDBs, DNS, firewalls, and Web APIs.| [**Device map**](#d)|
+| **Device inventories** | Device inventory data is available from Defender for IoT in the Azure portal, the OT sensor, and the on-premises management console. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)| [**Device map**](#d)|
| **Device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:<br /><br />- Retrieve and control critical device information.<br /><br />- Analyze network slices.<br /><br />- Export device details and summaries. | **[Purdue layer group](#p)** | ## E
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
## July 2022 -- [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga)-
-**Sensor software version**: 22.2.3
--- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)-- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)-- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)-- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)-
-To update to version 22.2.3:
--- From version 22.1.x, update directly to version 22.2.3-- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3
+|Service area |Updates |
+|||
+|**Enterprise IoT networks** | - [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga) |
+|**OT networks** |Sensor software version 22.2.3:<br><br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
+|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) |
-For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
### Enterprise IoT purchase experience and Defender for Endpoint integration in GA
For more information, see:
Starting in sensor version [22.1.1](#new-support-diagnostics-log), you've been able to download a diagnostic log from the sensor console to send to support when you open a ticket.
-Now, for locally-managed sensors, you can upload that diagnostic log directly on the Azure portal.
+Now, for locally managed sensors, you can upload that diagnostic log directly on the Azure portal.
:::image type="content" source="media/how-to-manage-sensors-on-the-cloud/upload-diagnostics-log.png" alt-text="Screenshot of the Send diagnostic files to support option." lightbox="media/how-to-manage-sensors-on-the-cloud/upload-diagnostics-log.png":::
This version of the sensor provides an improved security for uploading proprieta
For more information, see [Manage proprietary protocols with Horizon plugins](resources-manage-proprietary-protocols.md).
+### Microsoft Sentinel incident synch with Defender for IoT alerts
+
+The **IoT OT Threat Monitoring with Defender for IoT** solution now ensures that alerts in Defender for IoT are updated with any related incident **Status** changes from Microsoft Sentinel.
+
+This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that each alert's status matches that of the related incident.
+
+Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use the latest synchronization support, including the new **AD4IoT-AutoAlertStatusSync** playbook. After updating the solution, make sure that you also take the [required steps](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended#update-alert-statuses-in-defender-for-iot) to ensure that the new playbook works as expected.
+
+For more information, see:
+
+- [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+ ## June 2022 **Sensor software version**: 22.1.5
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Known issues and limitations that are associated with online migrations from SQL
SQL Managed Instance is a PaaS service with automatic patching and version updates. During migration of your SQL Managed Instance, non-critical updates are held for up to 36 hours. Afterwards (and for critical updates), if the migration is disrupted, the process resets to a full restore state.
- Migration cutover can only be called after the full backup is restored and catches up with all log backups. If your production migration cutovers are affected, contact the [Azure DMS Feedback alias](mailto:dmsfeedback@microsoft.com).
+ Migration cutover can only be called after the full backup is restored and catches up with all log backups. If your production migration cutovers are affected by unexpected issues, [open a support ticket to get assistance](https://azure.microsoft.com/support/create-ticket/).
+
+ You can submit ideas, suggestions for improvement, and other feedback (including bugs) in the [Azure Community forum - Azure Database Migration Service](https://feedback.azure.com/d365community/forum/2dd7eb75-ef24-ec11-b6e6-000d3a4f0da0).
## SMB file share connectivity
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
After the service is created, locate it within the Azure portal, open it, and th
## Select databases for migration
-Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS provides you with the expected migration time for selected databases. If the migration downtimes are acceptable continue with the migration. If the migration downtimes are not acceptable, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md) or contacting the [DMS team](mailto:DMSFeedback@microsoft.com) for other options.
+Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS provides you with the expected migration time for selected databases. If the migration downtimes are acceptable, continue with the migration. If they're not, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md), or submit ideas, suggestions for improvement, and other feedback in the [Azure Community forum - Azure Database Migration Service](https://feedback.azure.com/d365community/forum/2dd7eb75-ef24-ec11-b6e6-000d3a4f0da0).
1. Choose the database(s) you want to migrate from the list of available databases. 1. Review the expected downtime. If it's acceptable, select **Next: Select target >>**
event-grid Webhook Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/webhook-event-delivery.md
Title: WebHook event delivery description: This article describes WebHook event delivery and endpoint validation when using webhooks. Previously updated : 10/13/2021 Last updated : 07/06/2022
-# Webhook Event delivery
+# Webhook event delivery
Webhooks are one of the many ways to receive events from Azure Event Grid. When a new event is ready, Event Grid service POSTs an HTTP request to the configured endpoint with the event in the request body.
-Like many other services that support webhooks, Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. When you use any of the three Azure services listed below, the Azure infrastructure automatically handles this validation:
+Like many other services that support webhooks, Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events.
+
+## Endpoint validation with Event Grid events
+When you use any of the three Azure services listed below, the Azure infrastructure automatically handles this validation:
- Azure Logic Apps with [Event Grid Connector](/connectors/azureeventgrid/) - Azure Automation via [webhook](../event-grid/ensure-tags-exists-on-new-virtual-machines.md) - Azure Functions with [Event Grid Trigger](../azure-functions/functions-bindings-event-grid.md)
-## Endpoint validation with Event Grid events
- If you're using any other type of endpoint, such as an HTTP trigger based Azure function, your endpoint code needs to participate in a validation handshake with Event Grid. Event Grid supports two ways of validating the subscription. - **Synchronous handshake**: At the time of event subscription creation, Event Grid sends a subscription validation event to your endpoint. The schema of this event is similar to any other Event Grid event. The data portion of this event includes a `validationCode` property. Your application verifies that the validation request is for an expected event subscription, and returns the validation code in the response synchronously. This handshake mechanism is supported in all Event Grid versions. -- **Asynchronous handshake**: In certain cases, you can't return the ValidationCode in response synchronously. For example, if you use a third-party service (like [`Zapier`](https://zapier.com) or [IFTTT](https://ifttt.com/)), you can't programmatically respond with the validation code.
+- **Asynchronous handshake**: In certain cases, you can't return the `validationCode` in response synchronously. For example, if you use a third-party service (like [`Zapier`](https://zapier.com) or [IFTTT](https://ifttt.com/)), you can't programmatically respond with the validation code.
- Starting with version 2018-05-01-preview, Event Grid supports a manual validation handshake. If you're creating an event subscription with an SDK or tool that uses API version 2018-05-01-preview or later, Event Grid sends a `validationUrl` property in the data portion of the subscription validation event. To complete the handshake, find that URL in the event data and do a GET request to it. You can use either a REST client or your web browser.
+ Event Grid supports a manual validation handshake. If you're creating an event subscription with an SDK or tool that uses API version 2018-05-01-preview or later, Event Grid sends a `validationUrl` property in the data portion of the subscription validation event. To complete the handshake, find that URL in the event data and do a GET request to it. You can use either a REST client or your web browser.
The provided URL is valid for **5 minutes**. During that time, the provisioning state of the event subscription is `AwaitingManualAction`. If you don't complete the manual validation within 5 minutes, the provisioning state is set to `Failed`. You'll have to create the event subscription again before starting the manual validation.
To prove endpoint ownership, echo back the validation code in the `validationRes
} ```
-You must return an **HTTP 200 OK** response status code. **HTTP 202 Accepted** isn't recognized as a valid Event Grid subscription validation response. The HTTP request must complete within 30 seconds. If the operation doesn't finish within 30 seconds, then the operation will be canceled and it may be reattempted after 5 seconds. If all the attempts fail, then it will be treated as validation handshake error.
+And, follow one of these steps:
+
+- You must return an **HTTP 200 OK** response status code. **HTTP 202 Accepted** isn't recognized as a valid Event Grid subscription validation response. The HTTP request must complete within 30 seconds. If the operation doesn't finish within 30 seconds, it's canceled, and it may be reattempted after 5 seconds. If all the attempts fail, it's treated as a validation handshake error. For a minimal handler sketch, see the example after this list.
+
+ The fact that your application is prepared to handle and return the validation code indicates that you created the event subscription and expected to receive the event. Imagine a scenario where handshake validation isn't supported and a hacker learns your application URL. The hacker could create a topic and an event subscription with your application's URL, and then conduct a denial-of-service (DoS) attack against your application by sending a large number of events. The handshake validation prevents that from happening.
+
+ Imagine that you already have the validation implemented in your app because you created your own event subscriptions. Even if a hacker creates an event subscription with your app URL, a correct implementation of the validation request handler checks the `aeg-subscription-name` header in the request to verify that it's an event subscription that you recognize.
+
+ Even with that correct handshake implementation, a hacker can still flood your app (which has already validated the event subscription) by replaying a request that appears to come from Event Grid. To prevent that, secure your webhook with Azure Active Directory (Azure AD) authentication. For more information, see [Deliver events to Azure Active Directory protected endpoints](secure-webhook-delivery.md).
+- Or, you can manually validate the subscription by sending a GET request to the validation URL. The event subscription stays in a pending state until it's validated. The validation URL uses **port 553**. If your firewall rules block port 553, you'll need to update them for a successful manual handshake.
-Or, you can manually validate the subscription by sending a GET request to the validation URL. The event subscription stays in a pending state until validated. The validation Url uses port 553. If your firewall rules block port 553 then rules may need to be updated for successful manual handshake.
+ In your validation of the subscription validation event, if you identify that it isn't an event subscription for which you're expecting events, don't return a 200 response (or return no response at all). The validation will then fail.
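To make the synchronous handshake concrete, here's a minimal sketch of a webhook that echoes the validation code; it uses only the Python standard library, and the port and class name are illustrative rather than part of any SDK:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventGridWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        events = json.loads(self.rfile.read(length))
        for event in events:
            # Event Grid sends this event type when the subscription is created.
            if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
                body = json.dumps(
                    {"validationResponse": event["data"]["validationCode"]}
                ).encode()
                self.send_response(200)  # Must be 200 OK; 202 isn't accepted.
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        # Process regular events here, then acknowledge them.
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), EventGridWebhook).serve_forever()
```

In a real deployment, the handler would also check the `aeg-subscription-name` header and secure the endpoint as described above.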
For an example of handling the subscription validation handshake, see a [C# sample](https://github.com/Azure-Samples/event-grid-dotnet-publish-consume-events/blob/master/EventGridConsumer/EventGridConsumer/Function1.cs). ## Endpoint validation with CloudEvents v1.0
-CloudEvents v1.0 implements its own [abuse protection semantics](webhook-event-delivery.md) using the **HTTP OPTIONS** method. You can read more about it [here](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When using the CloudEvents schema for output, Event Grid uses with the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism.
+CloudEvents v1.0 implements its own abuse protection semantics using the **HTTP OPTIONS** method. You can read more about it [here](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When you use the CloudEvents schema for output, Event Grid uses the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism.
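As a sketch of that handshake, the endpoint answers the **HTTP OPTIONS** validation request by echoing the caller's `WebHook-Request-Origin` header back in `WebHook-Allowed-Origin`; the class name and port here are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CloudEventsWebhook(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Abuse-protection validation request: approve delivery by echoing the
        # origin. In production, check it against an allow-list first.
        origin = self.headers.get("WebHook-Request-Origin", "")
        self.send_response(200)
        self.send_header("WebHook-Allowed-Origin", origin)
        self.end_headers()

HTTPServer(("", 8080), CloudEventsWebhook).serve_forever()
```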
## Event schema compatibility When a topic is created, an incoming event schema is defined. And, when a subscription is created, an outgoing event schema is defined. The following table shows you the compatibility allowed when creating a subscription.
event-hubs Store Captured Data Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/store-captured-data-data-warehouse.md
-# Tutorial: Migrate captured Event Hubs data to Azure Synapse Analytics using Event Grid and Azure Functions
+# Tutorial: Migrate captured Event Hubs Avro data to Azure Synapse Analytics using Event Grid and Azure Functions
Azure Event Hubs [Capture](./event-hubs-capture-overview.md) enables you to automatically capture the streaming data in Event Hubs in an Azure Blob storage or Azure Data Lake Storage. This tutorial shows you how to migrate captured Event Hubs data from Storage to Azure Synapse Analytics by using an Azure function that's triggered by [Event Grid](../event-grid/overview.md). [!INCLUDE [event-grid-event-hubs-functions-synapse-analytics.md](../event-grid/includes/event-grid-event-hubs-functions-synapse-analytics.md)]
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 06/29/2022 Last updated : 07/05/2022 ++ # Azure Policy built-in policy definitions
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-managed-identity](../../../../includes/policy/reference/bycat/policies-managed-identity.md)]
-## Managed Labs
-- ## Maps [!INCLUDE [azure-policy-reference-policies-maps](../../../../includes/policy/reference/bycat/policies-maps.md)]
healthcare-apis Events Consume Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md
Previously updated : 05/26/2022 Last updated : 07/06/2022 # Consume events with Logic Apps
-This tutorial shows how to use Azure Logic Apps to process Azure Health Data Services FHIR events. Logic Apps create and run automated workflows to process event data from other applications. You will learn how to register a FHIR event with your Logic App, meet a specified event criteria, and perform a service operation.
+This tutorial shows how to use Azure Logic Apps to process Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) events. Logic Apps create and run automated workflows to process event data from other applications. You will learn how to register a FHIR event with your Logic App, meet a specified event criteria, and perform a service operation.
Here's an example of a Logic App workflow:
For more information about FHIR events, see
>[!div class="nextstepaction"] >[What are Events?](./events-overview.md)
-(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
Title: Deploy Events in the Azure portal - Azure Health Data Services
+ Title: Deploy Events using the Azure portal - Azure Health Data Services
description: This article describes how to deploy the Events feature in the Azure portal. Previously updated : 03/21/2022 Last updated : 07/06/2022
-# Deploy Events in the Azure portal
+# Deploy Events using the Azure portal
In this quickstart, you'll learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send Fast Healthcare Interoperability Resources (FHIR&#174;) event messages.
To learn how to export Event Grid system diagnostic logs and metrics, see
>[!div class="nextstepaction"] >[How to export Events diagnostic logs and metrics](./events-display-metrics.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: Disable Events and delete workspaces - Azure Health Data Services
+ Title: Disable events and delete workspaces - Azure Health Data Services
description: This article provides resources on how to disable Events and delete workspaces. Previously updated : 03/22/2022 Last updated : 07/06/2022
-# Disable Events and delete workspaces
+# Disable events and delete workspaces
-In this article, you'll learn how to disable Events and delete workspaces in Azure Health Data Services.
+In this article, you'll learn how to disable the Events feature and delete workspaces in Azure Health Data Services.
-## Disable Events
+## Disable events
-To disable Events from sending event messages for a single Event Subscription, the Event Subscription must be deleted.
+To disable events from sending event messages for a single Event Subscription, the Event Subscription must be deleted.
1. Select the Event Subscription to be deleted. In this example, we'll be selecting an Event Subscription named **fhir-events**.
For more information about how to troubleshoot Events, see
>[!div class="nextstepaction"] >[Troubleshoot Events](./events-troubleshooting-guide.md)
-(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-display-metrics.md
Previously updated : 03/22/2022 Last updated : 07/06/2022
To learn how to export Events Azure Event Grid system diagnostic logs and metric
>[!div class="nextstepaction"] >[Configure Events diagnostic logs and metrics exporting](./events-export-logs-metrics.md)
-(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Export Logs Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-export-logs-metrics.md
Previously updated : 03/22/2022 Last updated : 07/06/2022
After they're configured, Event Grid system topics diagnostic logs and metrics w
|View a list of currently captured Event Grid system topics metrics.|[Event Grid system topic metrics](../../azure-monitor/essentials/metrics-supported.md#microsofteventgridsystemtopics)| |More information about how to work with diagnostics logs.|[Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)|
-> [!Note]
+> [!NOTE]
> It might take up to 15 minutes for the first Events diagnostic logs and metrics to display in the destination of your choice. ## Next steps
To learn how to display Events metrics in the Azure portal, see
>[!div class="nextstepaction"] >[How to display Events metrics](./events-display-metrics.md)
-(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Previously updated : 03/02/2022 Last updated : 07/06/2022
The following are some of the frequently asked questions about Events.
### Can I use Events with a different FHIR service other than the Azure Health Data Services FHIR service?
-No. The Azure Health Data Services Events feature only currently supports the Azure Health Data Services FHIR service.
+No. The Azure Health Data Services Events feature currently only supports the Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) service.
### What FHIR resource events does Events support?
The throughput of FHIR events is governed by the throughput of the FHIR service
### How am I charged for using Events?
-There are no extra charges for using Azure Health Data Services Events. However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) might be assessed against your Azure subscription.
-
+There are no extra charges for using [Azure Health Data Services Events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) will be assessed against your Azure subscription.
### How do I subscribe to multiple FHIR services in the same workspace separately?
You can use the Event Grid filtering feature. There are unique identifiers in th
:::image type="content" source="media\event-grid\event-grid-filters.png" alt-text="Screenshot of the Event Grid filters tab." lightbox="media\event-grid\event-grid-filters.png"::: - ### Can I use the same subscriber for multiple workspaces or multiple FHIR accounts?
-Yes. We recommend that you use different subscribers for each individual FHIR accounts to process in isolated scopes.
+Yes. We recommend that you use different subscribers for each individual FHIR account to process in isolated scopes.
### Is Event Grid compatible with HIPAA and HITRUST compliance obligations? Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/). - ### What is the expected time to receive an Events message? On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service or [Event Grid](../../event-grid/quotas-limits.md) has been met.
On average, you should receive your event message within one second after a succ
Yes. The Event Grid guarantees at least one Events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid will consider that as a delivery failure and will resend the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md). - Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID or the combination of all fields in the ```data``` property of the message content are unique per each event. The developer can rely on them to de-duplicate. ## More frequently asked questions
Generally, we recommend that developers ensure idempotency for the event subscri
[FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Message Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md
Previously updated : 03/22/2022 Last updated : 07/06/2022
In this article, you'll learn about the Events message structure, required and non-required elements, and you'll be provided with samples of Events message payloads. > [!IMPORTANT]
-> Events currently supports only the following FHIR resource operations:
+> Events currently supports only the following Fast Healthcare Interoperability Resources (FHIR&#174;) resource operations:
> > - **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully. >
In this article, you'll learn about the Events message structure, required and n
> > For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md) - ## Events message structure |Name|Type|Required|Description|
For more information about deploying Events, see
>[!div class="nextstepaction"] >[Deploying Events in the Azure portal](./events-deploy-portal.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
Previously updated : 04/06/2022 Last updated : 07/06/2022
Use Events to send FHIR resource change messages to services like [Azure Event H
## Secure
-Built on a platform that supports protected health information (PHI) and personal identifiable information (PII) data compliance with privacy, safety, and security in mind, the Events messages do not transmit sensitive data as part of the message payload.
+Built on a platform that supports protected health information and customer content data compliance, and designed with privacy, safety, and security in mind, Events messages don't transmit sensitive data as part of the message payload.
Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the Events message receiving endpoints of your choice.
Use [Azure Managed identities](../../active-directory/managed-identities-azure-r
For more information about deploying Events, see >[!div class="nextstepaction"]
->[Deploying Events in the Azure portal](./events-deploy-portal.md)
+>[Deploying Events using the Azure portal](./events-deploy-portal.md)
For frequently asked questions (FAQs) about Events, see
For Events troubleshooting resources, see
>[!div class="nextstepaction"] >[Events troubleshooting guide](./events-troubleshooting-guide.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md
Previously updated : 03/14/2022 Last updated : 07/06/2022 # Troubleshoot Events
This article provides guides and resources to troubleshoot Events.
> [!IMPORTANT] >
-> FHIR resource change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past FHIR resource changes or when the feature is turned off.
+> Fast Healthcare Interoperability Resources (FHIR&#174;) resource change data is only written and event messages are sent when the Events feature is turned on. The Events feature doesn't send messages on past FHIR resource changes or when the feature is turned off.
:::image type="content" source="media/events-overview/events-overview-flow.png" alt-text="Diagram of data flow from users to a FHIR service and then into the Events pipeline" lightbox="media/events-overview/events-overview-flow.png":::
Use this resource to learn about the Events message structure, required and non-
### How to Use this resource to learn how to deploy Events in the Azure portal:
-* [How to deploy Events in the Azure portal](./events-deploy-portal.md)
+* [How to deploy Events using the Azure portal](./events-deploy-portal.md)
>[!Important] >The Event Subscription requires access to whichever endpoint you chose to send Events messages to. For more information, see [Enable managed identity for a system topic](../../event-grid/enable-identity-system-topics.md).
To learn about frequently asked questions (FAQs) about Events, see
>[!div class="nextstepaction"] >[Frequently asked questions about Events](./events-faqs.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Previously updated : 07/05/2022 Last updated : 07/06/2022
-# Deploy the MedTech service in the Azure portal
+# Deploy the MedTech service using the Azure portal
In this quickstart, you'll learn how to deploy the MedTech service in the Azure portal using two different methods: with a [quickstart template](#deploy-the-medtech-service-with-a-quickstart-template) or [manually](#deploy-the-medtech-service-manually). The MedTech service will enable you to ingest data from Internet of Things (IoT) into your Fast Healthcare Interoperability Resources (FHIR&#174;) service.
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
This checklist is a starting point for firewall rules:
| FQDN (\* = wildcard) | Outbound TCP Ports | Usage | | -- | -- | -- | | `mcr.microsoft.com` | 443 | Microsoft Container Registry |
- | `\*.data.mcr.microsoft.com` | 443 | Data endpoint providing content delivery. |
+ | `\*.data.mcr.microsoft.com` | 443 | Data endpoint providing content delivery |
+ | `\*.cdn.azcr.io` | 443 | Deploy modules from the Marketplace to devices |
| `global.azure-devices-provisioning.net` | 443 | [Device Provisioning Service](../iot-dps/about-iot-dps.md) access (optional) | | `\*.azurecr.io` | 443 | Personal and third-party container registries | | `\*.blob.core.windows.net` | 443 | Download Azure Container Registry image deltas from blob storage |
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-direct-methods.md
Direct method invocations on a device are HTTPS calls that are made up of the fo
} ```
-The value provided as `responseTimeoutInSeconds` in the request is the amount of time that IoT Hub service must await for completion of a direct method execution on a device. Set this timeout to be at least as long as the expected execution time of a direct method by a device. If timeout is not provided, it the default value of 30 seconds is used. The minimum and maximum values for `responseTimeoutInSeconds` are 5 and 300 seconds, respectively.
+The value provided as `responseTimeoutInSeconds` in the request is the amount of time that the IoT Hub service waits for a direct method execution on a device to complete. Set this timeout to be at least as long as the expected execution time of a direct method by a device. If no timeout is provided, the default value of 30 seconds is used. The minimum and maximum values for `responseTimeoutInSeconds` are 5 and 300 seconds, respectively.
The value provided as `connectTimeoutInSeconds` in the request is the amount of time, upon invocation of a direct method, that the IoT Hub service waits for a disconnected device to come online. The default value is 0, meaning that devices must already be online upon invocation of a direct method. The maximum value for `connectTimeoutInSeconds` is 300 seconds.
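For illustration, a direct method request body that sets both timeouts might look like the following; the method name and payload are placeholders:

```json
{
    "methodName": "reboot",
    "responseTimeoutInSeconds": 200,
    "connectTimeoutInSeconds": 5,
    "payload": {
        "delaySeconds": 10
    }
}
```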
key-vault Key Vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/key-vault-recovery.md
For more information about soft-delete, see [Azure Key Vault soft-delete overvie
1. Log in to the Azure portal. 1. Click on the search bar at the top of the page.
-1. Under "Recent Services" click "Key Vault". Do not click an individual key vault.
+1. Search for the "Key Vault" service. Do not click an individual key vault.
1. At the top of the screen click the option to "Manage deleted vaults" 1. A context pane will open on the right side of your screen. 1. Select your subscription.
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-portal.md
Azure Key Vault is a cloud service that provides a secure store for [keys](../ke
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - In this quickstart, you create a key vault with the [Azure portal](https://portal.azure.com). ## Sign in to Azure
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
Tags :
You can now use the [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret) cmdlet with the `VaultName` and `Name` parameters to view the contents of that secret. ```azurepowershell-interactive
-$secret = Get-AzKeyVaultSecret -VaultName <YourKeyVaultName> -Name <SecretName>
-$ssPtr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secret.SecretValue)
-try {
- $secretValueText = [System.Runtime.InteropServices.Marshal]::PtrToStringBSTR($ssPtr)
-} finally {
- [System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($ssPtr)
-}
+$secretValueText = Get-AzKeyVaultSecret -VaultName <YourKeyVaultName> -Name <SecretName> -AsPlainText
Write-Output $secretValueText ```
The output of this command will show your SAS definition string.
## Next steps - [Managed storage account key samples](https://github.com/Azure-Samples?utf8=%E2%9C%93&q=key+vault+storage&type=&language=)-- [Key Vault PowerShell reference](/powershell/module/az.keyvault/#key_vault)
+- [Key Vault PowerShell reference](/powershell/module/az.keyvault/#key_vault)
lab-services How To Connect Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md
Last updated 2/11/2022
-# Connect to your virtual network in Azure Lab Services
+# Use advanced networking (virtual network injection) to connect to your virtual network in Azure Lab Services
[!INCLUDE [preview focused article](./includes/lab-services-new-update-focused-article.md)]
See the following articles:
- As an admin, [attach a compute gallery to a lab plan](how-to-attach-detach-shared-image-gallery.md). - As an admin, [configure automatic shutdown settings for a lab plan](how-to-configure-auto-shutdown-lab-plans.md).-- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
+- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
machine-learning Boosted Decision Tree Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/boosted-decision-tree-regression.md
This article describes a component in Azure Machine Learning designer.
Use this component to create an ensemble of regression trees using boosting. *Boosting* means that each tree is dependent on prior trees. The algorithm learns by fitting the residual of the trees that preceded it. Thus, boosting in a decision tree ensemble tends to improve accuracy with some small risk of less coverage.
-This component is based LightGBM algorithm.
+This component is based on the LightGBM algorithm.
This regression method is a supervised learning method, and therefore requires a *labeled dataset*. The label column must contain numerical values.
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
traits:
transformations: - read_delimited: encoding: ascii
- header: all_files_have_same_headers
+ header: all_files_same_headers
delimiter: " " - keep_columns: ["store_location", "zip_code", "date", "amount", "x", "y", "z"] - convert_column_types:
Just like `uri_file` and `uri_folder`, you can create a data asset with `mltable
- [Create datastores](how-to-datastore.md#create-datastores) - [Create data assets](how-to-create-register-data-assets.md#create-data-assets) - [Read and write data in a job](how-to-read-write-data-v2.md#read-and-write-data-in-a-job)-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Azure Machine Learning uses MLflow Tracking for metric logging and artifact stor
## Model Registries with MLflow
-Azure Machine Learning supports MLflow for model management. This represents a convenient way to support the entire model lifecycle for users familiar with the MLFlow client. The following article describes the different capabilities and how it compares with other options.
+Azure Machine Learning supports MLflow for model management. This represents a convenient way to support the entire model lifecycle for users familiar with the MLflow client.
To learn more about how you can manage models using the MLflow API in Azure Machine Learning, view [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
| __\*.kusto.windows.net__<br>__\*.table.core.windows.net__<br>__\*.queue.core.windows.net__ | https:443 | Required to upload system logs to Kusto. |**&check;**|**&check;**| | __\*.azurecr.io__ | https:443 | Azure container registry, required to pull docker images used for machine learning workloads.|**&check;**|**&check;**| | __\*.blob.core.windows.net__ | https:443 | Azure blob storage, required to fetch machine learning project scripts,data or models, and upload job logs/outputs.|**&check;**|**&check;**|
-| __\*.workspace.\<region\>.api.azureml.ms__<br>__\<region\>.experiments.azureml.net__<br>__\<region\>.api.azureml.ms__ | https:443 | Azure mahince learning service API.|**&check;**|**&check;**|
+| __\*.workspace.\<region\>.api.azureml.ms__<br>__\<region\>.experiments.azureml.net__<br>__\<region\>.api.azureml.ms__ | https:443 | Azure Machine Learning service API.|**&check;**|**&check;**|
| __pypi.org__ | https:443 | Python package index, to install pip packages used for training job environment initialization.|**&check;**|N/A| | __archive.ubuntu.com__<br>__security.ubuntu.com__<br>__ppa.launchpad.net__ | http:80 | Required to download the necessary security patches. |**&check;**|N/A|
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
In this article, you can learn about steps to configure an existing Kubernetes c
* Install or upgrade Azure CLI to version 2.24.0 or higher. * Install or upgrade Azure CLI extension ```k8s-extension``` to version 1.2.3 or higher.
+## Limitations
+
+- [Using a service principal with AKS](../aks/kubernetes-service-principal.md) is **not supported** by Azure Machine Learning. The AKS cluster must use a managed identity instead.
+- [Disabling local accounts](../aks/managed-aad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When deploying an AKS Cluster, local accounts are enabled by default.
+- If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions. Without access to the API server, the machine learning pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
## Deploy AzureML extension to Kubernetes cluster
For AzureML extension deployment configurations, use ```--config``` or ```--conf
| ```inferenceRouterServiceType``` |```loadBalancer```, ```nodePort``` or ```clusterIP```. **Required** if ```enableInference=True```. | N/A| **&check;** | **&check;** | | ```internalLoadBalancerProvider``` | This config is only applicable for Azure Kubernetes Service(AKS) cluster now. Set to ```azure``` to allow the inference router using internal load balancer. | N/A| Optional | Optional | |```sslSecret```| The name of Kubernetes secret in `azureml` namespace to store `cert.pem` (PEM-encoded SSL cert) and `key.pem` (PEM-encoded SSL key), required for inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. You can find a sample YAML definition of sslSecret [here](./reference-kubernetes.md#sample-yaml-definition-of-kubernetes-secret-for-tlsssl). Use this config or combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional |
- |```sslCname``` |A SSL CName used by inference HTTPS endpoint. **Required** if ```allowInsecureConnections=True``` | N/A | Optional | Optional|
+ |```sslCname``` |The SSL CName used by the inference HTTPS endpoint. **Required** if ```allowInsecureConnections=False``` | N/A | Optional | Optional|
| ```inferenceRouterHA``` |```True``` or ```False```, default ```True```. By default, AzureML extension will deploy 3 inference router replicas for high availability, which requires at least 3 worker nodes in a cluster. Set to ```False``` if your cluster has fewer than 3 worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional | |```nodeSelector``` | By default, the deployed kubernetes resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional | |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```False```. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, AzureML extension deployment will not install NVIDIA Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this setting to ```True```, to install it, but make sure to fulfill [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional |
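As an illustration of how these settings combine, a deployment to an AKS (managed) cluster that enables inference might look like the following sketch; the extension name, cluster name, and resource group are placeholders:

```azurecli
az k8s-extension create --name azureml-extension \
   --extension-type Microsoft.AzureML.Kubernetes \
   --cluster-type managedClusters \
   --cluster-name <your-cluster-name> \
   --resource-group <your-resource-group> \
   --scope cluster \
   --config enableInference=True inferenceRouterServiceType=loadBalancer allowInsecureConnections=True
```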
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
For more information about the YAML schema, see the [online endpoint YAML refere
> [!NOTE] > To use Kubernetes instead of managed endpoints as a compute target: > 1. Create and attach your Kubernetes cluster as a compute target to your Azure Machine Learning workspace by using [Azure Machine Learning studio](how-to-attach-kubernetes-anywhere.md?&tabs=studio#attach-a-kubernetes-cluster-to-an-azure-ml-workspace).
-> 1. Use the [endpoint YAML](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/managed/sample/endpoint.yml) to target Kubernetes instead of the managed endpoint YAML. You'll need to edit the YAML to change the value of `target` to the name of your registered compute target. You can use this [deployment.yaml](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/managed/sample/blue-deployment.yml) that has additional properties applicable to Kubernetes deployment.
+> 1. Use the [endpoint YAML](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/kubernetes/kubernetes-endpoint.yml) to target Kubernetes instead of the managed endpoint YAML. You'll need to edit the YAML to change the value of `target` to the name of your registered compute target. You can use this [deployment.yaml](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/kubernetes/kubernetes-blue-deployment.yml) that has additional properties applicable to Kubernetes deployment.
> > All the commands that are used in this article (except the optional SLA monitoring and Azure Log Analytics integration) can be used either with managed endpoints or with Kubernetes endpoints.
The preceding YAML uses a general-purpose type (`Standard_F2s_v2`) and a non-GPU
For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers).
+> [!NOTE]
+> To use Kubernetes instead of managed endpoints as a compute target, see [Create and use instance types for Kubernetes compute](./how-to-attach-kubernetes-anywhere.md#create-and-use-instance-types-for-efficient-compute-resource-utilization).
+ ### Use more than one model Currently, you can specify only one model per deployment in the YAML. If you have more than one model, when you register the model, copy all the models as files or subdirectories into a folder that you use for registration. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure is retained.
The `update` command also works with local deployments. Use the same `az ml onli
> [!TIP] > With the `update` command, you can use the [`--set` parameter in the Azure CLI](/cli/azure/use-cli-effectively#generic-update-arguments) to override attributes in your YAML *or* to set specific attributes without passing the YAML file. Using `--set` for single attributes is especially valuable in development and test scenarios. For example, to scale up the `instance_count` value for the first deployment, you could use the `--set instance_count=2` flag. However, because the YAML isn't updated, this technique doesn't facilitate [GitOps](https://www.atlassian.com/git/tutorials/gitops).+ > [!Note]
-> The above is an example of inplace rolling update: i.e. the same deployment is updated with the new configuration, with 20% nodes at a time. If the deployment has 10 nodes, 2 nodes at a time will be updated. For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-managed-endpoints.md), which offers a safer alternative.
+> The above is an example of an in-place rolling update.
+> * For managed online endpoints, the same deployment is updated with the new configuration, 20% of the nodes at a time. That is, if the deployment has 10 nodes, 2 nodes are updated at a time.
+> * For Kubernetes online endpoints, the system iteratively creates a new deployment instance with the new configuration and deletes the old one.
+> * For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-managed-endpoints.md), which offers a safer alternative.
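For example, an in-place update that only scales out a deployment could look like this sketch; the deployment, endpoint, resource group, and workspace names are placeholders:

```azurecli
az ml online-deployment update --name blue \
   --endpoint-name my-endpoint \
   --resource-group my-resource-group \
   --workspace-name my-workspace \
   --set instance_count=2
```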
### (Optional) Configure autoscaling
Autoscale automatically runs the right amount of resources to handle the load on
### (Optional) Monitor SLA by using Azure Monitor
-To view metrics and set alerts based on your SLA, complete the steps that are described in [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
+To view metrics and set alerts based on your SLA, complete the steps that are described in [Monitor online endpoints](how-to-monitor-online-endpoints.md).
### (Optional) Integrate with Log Analytics
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Title: Log & view metrics and log files
+ Title: Log & view parameters, metrics and files
description: Enable logging on your ML training runs to monitor real-time run metrics with MLflow, and to help diagnose errors and warnings.
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"] > * [v1](./v1/how-to-log-view-metrics.md)
-> * [v2 (preview)](how-to-log-view-metrics.md)
+> * [v2 (current)](how-to-log-view-metrics.md)
-Log real-time information using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, and artifacts with MLflow as it supports local mode to cloud portability.
+Azure Machine Learning supports logging and tracking experiments using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, parameters, and artifacts with MLflow, which supports local-to-cloud portability.
> [!IMPORTANT]
-> Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the SDK v2 preview.
+> Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the Azure Machine Learning SDK for Python (v2). If you were using the Azure Machine Learning SDK v1 before, we recommend that you start using MLflow for tracking experiments. See [Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md) for specific guidance.
Logs can help you diagnose errors and warnings, or track performance metrics like parameters and model performance. In this article, you learn how to enable logging in the following scenarios:
Logs can help you diagnose errors and warnings, or track performance metrics lik
* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). * You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-* You must have the `aureml-core`, `mlflow`, and `azure-mlflow` packages installed. If you don't, use the following command to install them in your development environment:
+* You must have the `mlflow` and `azureml-mlflow` packages installed. If you don't, use the following command to install them in your development environment:
```bash
- pip install azureml-core mlflow azureml-mlflow
+ pip install mlflow azureml-mlflow
```
-## Data types
+> [!IMPORTANT]
+> If you're running outside of Azure Machine Learning compute and you want to do remote tracking (running your training routine on other compute while tracking in Azure Machine Learning), you must configure MLflow to track to your workspace. See [Setup your tracking environment](how-to-use-mlflow-cli-runs.md?#set-up-tracking-environment) for more details.
+
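+The following is a minimal sketch of pointing MLflow at a workspace for remote tracking, assuming the `azure-ai-ml` and `azureml-mlflow` packages are installed and the placeholders are replaced with your own values:
+
+```python
+import mlflow
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# Placeholder values; substitute your own subscription details
+ml_client = MLClient(credential=DefaultAzureCredential(),
+                     subscription_id="<SUBSCRIPTION_ID>",
+                     resource_group_name="<RESOURCE_GROUP>")
+
+# Point MLflow at the workspace's tracking URI
+workspace = ml_client.workspaces.get("<WORKSPACE_NAME>")
+mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)
+```
+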
+## Logging parameters
+
+MLflow supports logging the parameters used by your experiments. Parameters can be of any type, and can be logged using the following syntax:
-The following table describes how to log specific value types:
+```python
+mlflow.log_param("num_epochs", 20)
+```
+
+MLflow also offers a convenient way to log multiple parameters at once by passing them as a dictionary. Several frameworks also pass parameters to models as dictionaries, so this is a convenient way to log them in the experiment.
+
+```python
+params = {
+ "num_epochs": 20,
+ "dropout_rate": .6,
+ "objective": "binary_crossentropy"
+}
+
+mlflow.log_params(params)
+```
+
+> [!NOTE]
+> The Azure ML SDK v1 logging APIs can't log parameters. We recommend using MLflow for tracking experiments, as it offers a richer set of features.
+
+## Logging metrics
+
+Metrics, as opposed to parameters, are always numeric. The following table describes how to log specific numeric types:
|Logged Value|Example code| Notes|
|-|-|-|
-|Log a numeric value (int or float) | `mlflow.log_metric('my_metric', 1)`| |
-|Log a boolean value | `mlflow.log_metric('my_metric', 0)`| 0 = True, 1 = False|
-|Log a string | `mlflow.log_text('foo', 'my_string')`| Logged as an artifact|
-|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`||
-|Log matlotlib plot or image file|` mlflow.log_figure(fig, "figure.png")`||
+|Log a numeric value (int or float) | `mlflow.log_metric("my_metric", 1)`| |
+|Log a numeric value (int or float) over time | `mlflow.log_metric("my_metric", 1, step=1)`| Use parameter `step` to indicate the step at which you are logging the metric value. It can be any integer number. It defaults to zero. |
+|Log a boolean value | `mlflow.log_metric("my_metric", 0)`| 0 = False, 1 = True|
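+
+For example, the following is a minimal sketch that logs a training loss at each epoch using the `step` parameter (the metric name and loss values are illustrative; for many values, prefer the batching approaches described below):
+
+```python
+import mlflow
+
+with mlflow.start_run():
+    for epoch, loss in enumerate([0.9, 0.6, 0.4, 0.3]):
+        # `step` orders the values so they render as a curve over epochs
+        mlflow.log_metric("loss", loss, step=epoch)
+```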
-## Log a training job with MLflow
+> [!IMPORTANT]
+> __Performance considerations:__ If you need to log multiple metrics (or multiple values for the same metric), avoid making calls to `mlflow.log_metric` in loops. Better performance can be achieved by logging batches of metrics. Use the method `mlflow.log_metrics`, which accepts a dictionary with all the metrics you want to log at once, or use `MlflowClient.log_batch`, which accepts multiple types of elements for logging.
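+
+As a minimal sketch of the batching approach, `mlflow.log_metrics` accepts a dictionary and logs all of its entries in one call (the metric names and values are illustrative):
+
+```python
+import mlflow
+
+with mlflow.start_run():
+    # One call instead of three separate mlflow.log_metric calls
+    mlflow.log_metrics({"precision": 0.91, "recall": 0.87, "f1": 0.89})
+```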
-To set up for logging with MLflow, import `mlflow` and set the tracking URI:
+### Logging curves or lists of values
-> [!TIP]
-> You do not need to set the tracking URI when using a notebook running on an Azure Machine Learning compute instance.
+Curves (or lists of numeric values) can be logged with MLflow by logging the same metric multiple times. The following example shows how to do it:
```python
-from azureml.core import Workspace
-import mlflow
+import mlflow
+import time
+from mlflow.entities import Metric
+from mlflow.tracking import MlflowClient
+
+list_to_log = [1, 2, 3, 2, 1, 2, 3, 2, 1]
-ws = Workspace.from_config()
-# Set the tracking URI to the Azure ML backend
-# Not needed if running on Azure ML compute instance
-# or compute cluster
-mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+client = MlflowClient()
+client.log_batch(mlflow.active_run().info.run_id,
+ metrics=[Metric(key="sample_list", value=val, timestamp=int(time.time() * 1000), step=0) for val in list_to_log])
```
-### Interactive jobs
+## Logging images
+
+MLflow supports two ways of logging images:
+
+|Logged Value|Example code| Notes|
+|-|-|-|
+|Log numpy arrays or PIL image objects|`mlflow.log_image(img, "figure.png")`| `img` should be an instance of `numpy.ndarray` or `PIL.Image.Image`. `figure.png` is the name of the artifact that will be generated inside of the run. It doesn't have to be an existing file.|
+|Log matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`| `figure.png` is the name of the artifact that will be generated inside of the run. It doesn't have to be an existing file. |
+
+## Logging other types of data
+
+|Logged Value|Example code| Notes|
+|-|-|-|
+|Log text in a text file | `mlflow.log_text("text string", "notes.txt")`| Text is persisted inside of the run in a text file with the name `notes.txt`. |
+|Log dictionaries as `JSON` and `YAML` files | `mlflow.log_dict(dictionary, "file.yaml")` | `dictionary` is a dictionary object containing all the structure that you want to persist as a `JSON` or `YAML` file. |
+|Log an existing file | `mlflow.log_artifact("path/to/file.pkl")`| Files are always logged in the root of the run. If `artifact_path` is provided, then the file is logged in a folder as indicated in that parameter. |
+|Log all the artifacts in an existing folder | `mlflow.log_artifacts("path/to/folder")`| Folder structure is copied to the run, but the root folder indicated is not included. |
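+
+The following is a short sketch combining a couple of these calls (the file names and dictionary contents are illustrative):
+
+```python
+import mlflow
+
+config = {"learning_rate": 0.01, "layers": [128, 64]}
+
+with mlflow.start_run():
+    # Free-form notes, persisted as notes.txt inside the run
+    mlflow.log_text("Trained on the June snapshot of the data.", "notes.txt")
+    # Dictionary persisted as a YAML artifact
+    mlflow.log_dict(config, "config.yaml")
+```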
+
+## Logging models
+
+MLflow introduces the concept of "models" as a way to package all the artifacts required for a given model to function. Models in MLflow are always a folder with an arbitrary number of files, depending on the framework used to generate the model. Logging models has the advantage of tracking all the elements of the model as a single entity that can be __registered__ and then __deployed__. On top of that, MLflow models enjoy the benefit of [no-code deployment](how-to-deploy-mlflow-models.md) and can be used with the [Responsible AI dashboard](how-to-responsible-ai-dashboard.md) in studio.
+
+To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
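+
+As an illustrative sketch with scikit-learn (assuming the `scikit-learn` package is installed; the model and data are placeholders):
+
+```python
+import mlflow
+import mlflow.sklearn
+from sklearn.datasets import load_iris
+from sklearn.ensemble import RandomForestClassifier
+
+X, y = load_iris(return_X_y=True)
+
+with mlflow.start_run():
+    model = RandomForestClassifier(n_estimators=10).fit(X, y)
+    # Packages the model as an MLflow model under the "model" folder of the run
+    mlflow.sklearn.log_model(model, artifact_path="model")
+```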
+
+## Automatic logging
+
+With Azure Machine Learning and MLflow, users can log metrics, model parameters, and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported.
+
+To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging), insert the following code before your training code:
+
+```Python
+mlflow.autolog()
+```
+
+> [!TIP]
+> You can control what gets automatically logged with autolog. For instance, if you indicate `mlflow.autolog(log_models=False)`, MLflow logs everything but models for you. Such control is useful in cases where you want to log models manually but still enjoy automatic logging of metrics and parameters. Also notice that some frameworks may disable automatic logging of models if the trained model goes beyond specific boundaries. Such behavior depends on the flavor used, and we recommend that you review its documentation if this is your case.
+
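+For instance, here is a sketch that keeps automatic logging of metrics and parameters while opting out of automatic model logging (again assuming scikit-learn is installed):
+
+```python
+import mlflow
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+
+# Autolog metrics and parameters, but skip automatic model logging
+mlflow.autolog(log_models=False)
+
+X, y = load_iris(return_X_y=True)
+with mlflow.start_run():
+    LogisticRegression(max_iter=200).fit(X, y)  # fit() triggers autologging
+```
+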
+[Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
+
+## Configuring experiments and runs in Azure Machine Learning
+
+MLflow organizes information into experiments and runs (in Azure Machine Learning, runs are called jobs). How you configure them depends on how you run your code:
+
+# [Training interactively](#tab/interactive)
When training interactively, such as in a Jupyter Notebook, use the following pattern:
1. Use logging methods to log metrics and other information.
1. End the job.
-For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a job
+For example, the following code snippet demonstrates configuring the experiment and then logging during a job:
```python
-from mlflow.tracking import MlflowClient
-
-# Create a new experiment if one doesn't already exist
-mlflow.create_experiment("mlflow-experiment")
+import mlflow
+mlflow.set_experiment("mlflow-experiment")
# Start the run, log metrics, end the run
mlflow_run = mlflow.start_run()
mlflow.end_run()
```

You can also use the context manager paradigm:

```python
-from mlflow.tracking import MlflowClient
-
-# Create a new experiment if one doesn't already exist
-mlflow.create_experiment("mlflow-experiment")
+import mlflow
+mlflow.set_experiment("mlflow-experiment")
# Start the run, log metrics, end the run
with mlflow.start_run() as run:
    pass
```
+When you start a new run with `mlflow.start_run`, it may be useful to set the parameter `run_name`, which becomes the name of the run in the Azure Machine Learning user interface and helps you identify the run more quickly:
+
+```python
+with mlflow.start_run(run_name="iris-classifier-random-forest") as run:
+ mlflow.log_metric('mymetric', 1)
+ mlflow.log_metric('anothermetric',1)
+```
+ For more information on MLflow logging APIs, see the [MLflow reference](https://www.mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifact).
-### Remote runs
+# [Training with jobs](#tab/jobs)
-For remote training runs, the tracking URI and experiment are set automatically. Otherwise, the options for logging the run are the same as for interactive logging:
+When running training jobs in Azure Machine Learning, you don't need to configure the MLflow tracking URI as it is already configured for you. On top of that, you don't need to call `mlflow.start_run`, as runs are automatically started. Hence, you can use MLflow tracking capabilities directly in your training scripts:
-* Call `mlflow.start_run()`, log information, and then call `mlflow.end_run()`.
-* Use the context manager paradigm with `mlflow.start_run()`.
-* Call a logging API such as `mlflow.log_metric()`, which will start a run if one doesn't already exist.
+```python
+import mlflow
-## Log a model
+mlflow.set_experiment("my-experiment")
-To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
+mlflow.autolog()
+
+mlflow.log_metric('mymetric', 1)
+mlflow.log_metric('anothermetric',1)
+```
+
+> [!TIP]
+> When submitting jobs using Azure ML CLI v2, you can set the experiment name using the property `experiment_name` in the YAML definition of the job. You don't have to configure it on your training script. See [YAML: display name, experiment name, description, and tags](reference-yaml-job-command.md#yaml-display-name-experiment-name-description-and-tags) for details.
+
+
## View job information
You can view the metrics, parameters, and tags for the run in the data field of
```python
metrics = finished_mlflow_run.data.metrics
-tags = finished_mlflow_run.data.tags
params = finished_mlflow_run.data.params
+tags = finished_mlflow_run.data.tags
```

>[!NOTE]
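
One way to obtain the `finished_mlflow_run` object used above is through the MLflow client; the following is a minimal sketch, where `"<RUN_ID>"` is a placeholder for the identifier of the job you want to inspect:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
# Retrieve a finished run by its ID
finished_mlflow_run = client.get_run("<RUN_ID>")

metrics = finished_mlflow_run.data.metrics
params = finished_mlflow_run.data.params
tags = finished_mlflow_run.data.tags
```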
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
In this article you learn how to:
## View metrics
-Use the following steps to view metrics for a managed endpoint or deployment:
+Use the following steps to view metrics for an online endpoint or deployment:
1. Go to the [Azure portal](https://portal.azure.com).
1. Navigate to the online endpoint or deployment resource.
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
Title: Track, monitor, and analyze jobs in studio
+ Title: Monitor and analyze jobs in studio
description: Learn how to start, monitor, and track your machine learning experiment jobs with the Azure Machine Learning studio.
-# Start, monitor, and track job history in studio
+# Monitor and analyze jobs in studio
You can use [Azure Machine Learning studio](https://ml.azure.com) to monitor, organize, and track your jobs for training and experimentation. Your ML job history is an important part of an explainable and repeatable ML development process.
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Enter details of your AzureML workspace
subscription_id = '<SUBSCRIPTION_ID>'
resource_group = '<RESOURCE_GROUP>'
-workspace = '<AZUREML_WORKSPACE_NAME>'
+workspace_name = '<AZUREML_WORKSPACE_NAME>'
ml_client = MLClient(credential=DefaultAzureCredential(), subscription_id=subscription_id, resource_group_name=resource_group)
-azureml_mlflow_uri = ml_client.workspaces.get(workspace).mlflow_tracking_uri
+azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_mlflow_uri)
```
The Azure Machine Learning Tracking URI can be constructed using the subscriptio
```python
import mlflow
-aml_region = ""
+region = ""
subscription_id = ""
resource_group = ""
-workspace = ""
+workspace_name = ""
-azureml_mlflow_uri = f"azureml://{aml_region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace}"
+azureml_mlflow_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
mlflow.set_tracking_uri(azureml_mlflow_uri)
```
experiment_name = 'experiment_with_mlflow'
mlflow.set_experiment(experiment_name)
```
+> [!TIP]
+> When submitting jobs using Azure ML CLI v2, you can set the experiment name using the property `experiment_name` in the YAML definition of the job. You don't have to configure it on your training script. See [YAML: display name, experiment name, description, and tags](reference-yaml-job-command.md#yaml-display-name-experiment-name-description-and-tags) for details.
You can also set one of the MLflow environment variables [MLFLOW_EXPERIMENT_NAME or MLFLOW_EXPERIMENT_ID](https://mlflow.org/docs/latest/cli.html#cmdoption-mlflow-run-arg-uri) with the experiment name.

```bash
export MLFLOW_EXPERIMENT_NAME="experiment_with_mlflow"
```
Open your terminal and use the following to submit the job.
```bash
az ml job create -f job.yml --web
```
-## Automatic logging
-With Azure Machine Learning and MLFlow, users can log metrics, model parameters and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported.
-
-To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging) insert the following code before your training code:
-
-```Python
-mlflow.autolog()
-```
-
-[Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
- ## View metrics and artifacts in your workspace
To register and view a model from a run, use the following steps:
## Limitations
-The following MLflow methods are not fully supported with Azure Machine Learning.
-
-* `mlflow.tracking.MlflowClient.create_experiment() `
-* `mlflow.tracking.MlflowClient.rename_experiment()`
-* `mlflow.tracking.MlflowClient.search_runs()`
-* `mlflow.tracking.MlflowClient.download_artifacts()`
-* `mlflow.tracking.MlflowClient.rename_registered_model()`
-
+Some methods available in the MLflow API may not be available when connected to Azure Machine Learning. For details about supported and unsupported operations, please read [Support matrix for querying runs and experiments](how-to-track-experiments-mlflow.md#support-matrix-for-querying-runs-and-experiments).
## Next steps
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
Make sure you save this file before you submit the run.
### <a name="submit-again"></a> Submit the run to Azure Machine Learning
-Select the tab for the *run-pytorch.py* script, then select **Save and run script in terminal** to re-run the *run-pytorch.py* script. Make sure you've saved your changes to `pytorch-aml-env.yml` first.
+Select the tab for the *run-pytorch.py* script, then select **Save and run script in terminal** to re-run the *run-pytorch.py* script. Make sure you've saved your changes to `pytorch-env.yml` first.
This time when you visit the studio, go to the **Metrics** tab where you can now see live updates on the model training loss! It may take 1 to 2 minutes before the training begins.
marketplace Pc Saas Fulfillment Subscription Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api.md
description: Learn how to use the Subscription APIs, which are part of the the
Previously updated : 06/30/2022 Last updated : 07/06/2022
The customer will be billed if a subscription is canceled after the preceding gr
| Parameter | Value |
| --- | --- |
| `content-type` | `application/json` |
-| `x-ms-requestid` | A unique string value for tracking the request from the client, preferably a GUID. If this value isn't provided, one will be generated and provided in the response headers. |
+| `x-ms-requestid` | A unique string value for tracking the request from the client, preferably a GUID. If this value isn't provided, one will be generated and provided in the response headers. |
| `x-ms-correlationid` | A unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated and provided in the response headers. |
| `authorization` | A unique access token that identifies the publisher making this API call. The format is `"Bearer <access_token>"` when the token value is retrieved by the publisher as explained in [Get a token based on the Azure AD app](./pc-saas-registration.md#get-the-token-with-an-http-post). |
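
As an illustrative sketch of setting these headers on a client request (assuming the `requests` package is installed, and with `<access_token>` and the endpoint URL as placeholders obtained as the article describes):

```python
import uuid
import requests

headers = {
    "content-type": "application/json",
    "x-ms-requestid": str(uuid.uuid4()),      # unique per request, preferably a GUID
    "x-ms-correlationid": str(uuid.uuid4()),  # correlates client-side and server-side events
    "authorization": "Bearer <access_token>",
}

# "<endpoint>" is a placeholder for the Subscription API URL being called
response = requests.get("<endpoint>", headers=headers)
```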
mysql Azure Pipelines Deploy Database Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/azure-pipelines-deploy-database-task.md
Title: Azure Pipelines task for Azure Database for MySQL Flexible Server description: Enable Azure Database for MySQL Flexible Server CLI task for using with Azure Pipelines- -+ + Last updated 08/09/2021
mysql Concept Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concept-servers.md
Title: Server concepts - Azure Database for MySQL Flexible Server description: This topic provides considerations and guidelines for working with Azure Database for MySQL Flexible Server-- - +++ Last updated 05/24/2022
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-audit-logs.md
Title: Audit logs - Azure Database for MySQL - Flexible Server description: Describes the audit logs available in Azure Database for MySQL Flexible Server.-- ++ Last updated 9/21/2020
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
Title: Backup and restore in Azure Database for MySQL Flexible Server description: Learn about the concepts of backup and restore with Azure Database for MySQL Flexible Server-- - +++ Last updated 05/24/2022
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-business-continuity.md
Title: Overview of business continuity - Azure Database for MySQL Flexible Server description: Learn about the concepts of business continuity with Azure Database for MySQL Flexible Server-- - +++ Last updated 05/24/2022
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
Title: Data-in replication - Azure Database for MySQL Flexible description: Learn about using Data-in replication to synchronize from an external server into the Azure Database for MySQL Flexible service.-- ++ Last updated 06/08/2021
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Title: Zone-redundant HA with Azure Database for MySQL - Flexible Server description: Get a conceptual overview of zone-redundant high availability in Azure Database for MySQL - Flexible Server.-- ++ Last updated 08/26/2021
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
Title: Limitations - Azure Database for MySQL - Flexible Server description: This article describes Limitations in Azure Database for MySQL - Flexible Server, such as number of connection and storage engine options.-- ++ Last updated 10/1/2020
mysql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-maintenance.md
Title: Scheduled maintenance - Azure Database for MySQL - Flexible server description: This article describes the scheduled maintenance feature in Azure Database for MySQL - Flexible server.-- - +++ Last updated 05/24/2022
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-monitoring.md
Title: 'Monitoring - Azure Database for MySQL - Flexible Server' description: This article describes the metrics for monitoring and alerting for Azure Database for MySQL Flexible Server, including CPU, storage, and connection statistics.-- ++ Last updated 9/21/2020
mysql Concepts Networking Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-public.md
Title: Public Network Access overview - Azure Database for MySQL Flexible Server description: Learn about public access networking option in the Flexible Server deployment option for Azure Database for MySQL-- ++ Last updated 8/6/2021
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
Title: Private Network Access overview - Azure Database for MySQL Flexible Server description: Learn about private access networking option in the Flexible Server deployment option for Azure Database for MySQL-- ++ Last updated 8/6/2021
mysql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking.md
Title: Networking overview - Azure Database for MySQL Flexible Server description: Learn about connectivity and networking options in the Flexible Server deployment option for Azure Database for MySQL-- ++ Last updated 9/23/2020
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
Title: Read replicas - Azure Database for MySQL - Flexible Server description: 'Learn about read replicas in Azure Database for MySQL Flexible Server: creating replicas, connecting to replicas, monitoring replication, and stopping replication.'-- - +++ Last updated 05/24/2022
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
Title: Server parameters - Azure Database for MySQL - Flexible Server description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL - Flexible Server.-- - +++ Last updated 05/24/2022 # Server parameters in Azure Database for MySQL - Flexible Server
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
Title: Azure Database for MySQL - Flexible Server service tiers description: This article describes the compute and storage options in Azure Database for MySQL - Flexible Server.-- - +++ Last updated 05/24/2022
mysql Concepts Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-slow-query-logs.md
Title: Slow query logs - Azure Database for MySQL - Flexible Server description: Describes the slow query logs available in Azure Database for MySQL Flexible Server.-- ++ Last updated 9/21/2020 # Slow query logs in Azure Database for MySQL Flexible Server
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-supported-versions.md
Title: Supported versions - Azure Database for MySQL Flexible Server description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL Flexible Server-- - +++ Last updated 05/24/2022
mysql Concepts Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-workbooks.md
Title: Monitor Azure Database for MySQL Flexible Server by using Azure Monitor workbooks description: This article describes how you can monitor Azure Database for MySQL Flexible Server by using Azure Monitor workbooks.-- ++ Last updated 10/01/2021 # Monitor Azure Database for MySQL Flexible Server by using Azure Monitor workbooks
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
Title: 'Quickstart: Connect using Azure CLI - Azure Database for MySQL - Flexible Server' description: This quickstart provides several ways to connect with Azure CLI with Azure Database for MySQL - Flexible Server.-- - +++ Last updated 03/01/2021
-ms.tool: azure-cli
# Quickstart: Connect and query with Azure CLI with Azure Database for MySQL - Flexible Server
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-csharp.md
Title: 'Quickstart: Connect using C# - Azure Database for MySQL Flexible Server' description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL Flexible Server."-- - ++
+ms.devlang: csharp
+ Last updated 01/16/2021
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-nodejs.md
Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL - Flexible Server' description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL - Flexible Server.-- - ++
+ms.devlang: javascript
+ Last updated 01/27/2022 # Quickstart: Use Node.js to connect and query data in Azure Database for MySQL - Flexible Server
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-php.md
Title: 'Quickstart: Connect using PHP - Azure Database for MySQL - Flexible Server' description: This quickstart provides several PHP code samples you can use to connect and query data from Azure Database for MySQL - Flexible Server.-- - +++ Last updated 9/21/2020
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-python.md
Title: 'Quickstart: Connect using Python - Azure Database for MySQL - Flexible Server' description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL - Flexible Server.-- - ++
+ms.devlang: python
+ Last updated 9/21/2020
mysql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-alert-on-metric.md
Title: Configure Azure Database for MySQL metric alerts description: This article describes how to configure and access metric alerts for Azure Database for MySQL Flexible Server from the Azure portal.-- Previously updated : 05/06/2022++ Last updated : 05/06/2022 # Set up alerts on metrics for Azure Database for MySQL - Flexible Server
mysql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-high-availability-cli.md
Title: Manage zone redundant high availability - Azure CLI - Azure Database for MySQL Flexible Server description: This article describes how to configure zone redundant high availability in Azure Database for MySQL flexible Server with the Azure CLI.-- ++ Last updated 05/24/2022
mysql How To Configure High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-high-availability.md
Title: Manage zone redundant high availability - Azure portal - Azure Database for MySQL Flexible Server description: This article describes how to enable or disable zone redundant high availability in Azure Database for MySQL flexible Server through the Azure portal.-- ++ Last updated 05/24/2022
mysql How To Configure Server Parameters Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-cli.md
Title: Configure server parameters - Azure CLI - Azure Database for MySQL Flexible Server description: This article describes how to configure the service parameters in Azure Database for MySQL flexible server using the Azure CLI command line utility.-- ++
+ms.devlang: azurecli
Last updated 11/10/2020
mysql How To Configure Server Parameters Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-server-parameters-portal.md
Title: Configure server parameters - Azure portal - Azure Database for MySQL Flexible Server description: This article describes how to configure MySQL server parameters in Azure Database for MySQL flexible server using the Azure portal.-- ++ Last updated 11/10/2020
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
Title: Encrypted connectivity using TLS/SSL in Azure Database for MySQL - Flexible Server description: Instructions and information on how to connect using TLS/SSL in Azure Database for MySQL - Flexible Server.-- - Previously updated : 05/24/2022+++ ms.devlang: csharp, golang, java, javascript, php, python, ruby Last updated : 05/24/2022 # Connect to Azure Database for MySQL - Flexible Server with encrypted connections
Following are the different configurations of SSL and TLS settings you can have
> * Changes to the SSL cipher on flexible server are not supported. FIPS cipher suites are enforced by default when tls_version is set to TLS version 1.2. For TLS versions other than 1.2, the SSL cipher is set to the default settings that come with the MySQL community installation.
> * In MySQL open-source community editions, starting with the release of MySQL versions 8.0.26 and 5.7.35, the TLSv1 and TLSv1.1 protocols are deprecated. These protocols, released in 1996 and 2006 respectively to encrypt data in motion, are considered weak, outdated, and vulnerable to security threats. For more information, see [Removal of Support for the TLSv1 and TLSv1.1 Protocols](https://dev.mysql.com/doc/refman/8.0/en/encrypted-connection-protocols-ciphers.html#encrypted-connection-deprecated-protocols). Azure Database for MySQL – Flexible Server will also stop supporting TLS versions once the community stops support for the protocol, to align with modern security standards.
-In this article, you'll learn how to:
+In this article, you learn how to:
* Configure your flexible server * With SSL disabled
mysql How To Create Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-create-manage-databases.md
Title: How to create databases for Azure Database for MySQL Flexible Server description: This article describes how to create and manage databases on Azure Database for MySQL Flexible server.-- ++ Last updated 02/17/2022
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
Title: Configure Data-in replication - Azure Database for MySQL Flexible Server description: This article describes how to set up Data-in replication for Azure Database for MySQL Flexible Server.-- ++ Last updated 06/08/2021
mysql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-maintenance-portal.md
Title: Azure Database for MySQL - Flexible Server - Scheduled maintenance - Azure portal description: Learn how to configure scheduled maintenance settings for an Azure Database for MySQL - Flexible server from the Azure portal.-- ++ Last updated 9/21/2020
mysql How To Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-firewall-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for MySQL - Flexible Server description: Create and manage firewall rules for Azure Database for MySQL - Flexible Server using Azure CLI command line.-- ++
+ms.devlang: azurecli
Last updated 9/21/2020
mysql How To Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-firewall-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for MySQL - Flexible Server description: Create and manage firewall rules for Azure Database for MySQL - Flexible Server using the Azure portal-- ++ Last updated 9/21/2020
mysql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-server-portal.md
Title: Manage server - Azure portal - Azure Database for MySQL Flexible Server description: Learn how to manage an Azure Database for MySQL Flexible server from the Azure portal.-- ++ Last updated 9/21/2020
mysql How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-move-regions.md
Title: Move Azure regions - Azure portal - Azure Database for MySQL Flexible server description: Move an Azure Database for MySQL Flexible server from one Azure region to another using the Azure portal.-- ++ Last updated 04/08/2022 #Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-cli.md
Title: Manage read replicas in Azure Database for MySQL Flexible Server using Azure CLI. description: Learn how to set up and manage read replicas in Azure Database for MySQL flexible server using the Azure CLI.-- Previously updated : 10/23/2021++ Last updated : 10/23/2021 # How to create and manage read replicas in Azure Database for MySQL flexible server using the Azure CLI
In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL flexible server using the Azure CLI. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
-> [!Note]
+> [!NOTE]
>
> * Replicas are not supported on servers with high availability enabled.
>
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for MySQL - Flexible Server description: Learn how to set up and manage read replicas in Azure Database for MySQL flexible server using the Azure portal.-- ++ Last updated 06/17/2021
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-server-portal.md
Title: Restart server - Azure portal - Azure Database for MySQL - Flexible Server description: This article describes how you can restart an Azure Database for MySQL Flexible Server using the Azure portal.-- ++ Last updated 10/26/2020
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
Title: Restart/Stop/start - Azure portal - Azure Database for MySQL Flexible Server description: This article describes how to restart/stop/start operations in Azure Database for MySQL through the Azure CLI.-- ++ Last updated 03/30/2021
mysql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-dropped-server.md
Title: Restore a deleted Azure Database for MySQL Flexible server description: This article describes how to restore a deleted server in Azure Database for MySQL Flexible server using the Azure portal.-- ++ Last updated 11/10/2021
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-cli.md
Title: Restore Azure Database for MySQL - Flexible Server with Azure CLI description: This article describes how to perform restore operations in Azure Database for MySQL through the Azure CLI.-- ++ Last updated 04/01/2021
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-portal.md
Title: Restore an Azure Database for MySQL Flexible Server with Azure portal. description: This article describes how to perform restore operations in Azure Database for MySQL Flexible server through the Azure portal-- ++ Last updated 04/01/2021
mysql How To Stop Start Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-stop-start-server-portal.md
Title: Stop/start - Azure portal - Azure Database for MySQL Flexible Server description: This article describes how to stop/start operations in Azure Database for MySQL through the Azure portal.-- ++ Last updated 09/29/2020
mysql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-cli-errors.md
Title: Troubleshoot Azure Database for MySQL Flexible Server CLI errors description: This topic gives guidance on troubleshooting common issues with Azure CLI when using MySQL Flexible Server.-- ++ Last updated 08/24/2021
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-common-connection-issues.md
Title: Troubleshoot connection issues - Azure Database for MySQL - Flexible Server description: Learn how to troubleshoot connection issues to Azure Database for MySQL Flexible Server.
-keywords: mysql connection,connection string,connectivity issues,persistent error,connection error
-- ++ Last updated 9/21/2020
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
description: Learn about the Azure Database for MySQL Flexible server, a relatio
--++ Last updated 05/24/2022
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
Title: 'Quickstart: Create an Azure DB for MySQL - Flexible Server - ARM template' description: In this Quickstart, learn how to create an Azure Database for MySQL - Flexible Server using ARM template.- ++ - Last updated 10/23/2020
Last updated 10/23/2020
## Prerequisites
-An Azure account with an active subscription.
+- An Azure account with an active subscription.
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
-## Review the template
-
-An Azure Database for MySQL Flexible Server is the parent resource for one or more databases within a region. It provides the scope for management policies that apply to its databases: login, firewall, users, roles, configurations.
-
-Create a _mysql-flexible-server-template.json_ file and copy this JSON script into it.
+## Create server with public access
+Create a _mysql-flexible-server-template.json_ file and copy this JSON script into it to create a server using the public access connectivity method, along with a database on the server.
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "administratorLogin": {
- "type": "String"
- },
- "administratorLoginPassword": {
- "type": "SecureString"
- },
- "location": {
- "type": "String"
- },
- "serverName": {
- "type": "String"
- },
- "serverEdition": {
- "type": "String"
- },
- "storageSizeMB": {
- "type": "Int"
- },
- "haEnabled": {
- "type": "string",
- "defaultValue": "Disabled"
- },
- "availabilityZone": {
- "type": "String"
- },
- "version": {
- "type": "String"
- },
- "tags": {
- "defaultValue": {},
- "type": "Object"
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "administratorLogin": {
+ "type": "string"
+ },
+ "administratorLoginPassword": {
+ "type": "securestring"
+ },
+ "location": {
+ "type": "string"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "serverEdition": {
+ "type": "string",
+ "defaultValue": "Burstable",
+ "metadata": {
+ "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
+ }
+ },
+ "skuName": {
+ "type": "string",
+ "defaultValue": "Standard_B1ms",
+ "metadata": {
+ "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ }
+ },
+ "storageSizeGB": {
+ "type": "int"
+ },
+ "storageIops": {
+ "type": "int"
+ },
+ "storageAutogrow": {
+ "type": "string",
+ "defaultValue": "Enabled"
+ },
+ "availabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability Zone information of the server. (Leave blank for No Preference)."
+ }
+ },
+ "version": {
+ "type": "string"
+ },
+ "tags": {
+ "type": "object",
+ "defaultValue": {}
+ },
+ "haEnabled": {
+ "type": "string",
+ "defaultValue": "Disabled",
+ "metadata": {
+ "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant"
+ }
+ },
+ "standbyAvailabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability zone of the standby server."
+ }
+ },
+ "firewallRules": {
+ "type": "object",
+ "defaultValue": {}
+ },
+ "backupRetentionDays": {
+ "type": "int"
+ },
+ "geoRedundantBackup": {
+ "type": "string"
+ },
+ "databaseName": {
+ "type": "string"
+ }
},
- "firewallRules": {
- "defaultValue": {},
- "type": "Object"
+ "variables": {
+ "api": "2021-05-01",
+ "firewallRules": "[parameters('firewallRules').rules]"
},
- "vnetData": {
- "defaultValue": {},
- "type": "Object"
+ "resources": [
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers",
+ "apiVersion": "[variables('api')]",
+ "location": "[parameters('location')]",
+ "name": "[parameters('serverName')]",
+ "sku": {
+ "name": "[parameters('skuName')]",
+ "tier": "[parameters('serverEdition')]"
+ },
+ "properties": {
+ "version": "[parameters('version')]",
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+ "availabilityZone": "[parameters('availabilityZone')]",
+ "highAvailability": {
+ "mode": "[parameters('haEnabled')]",
+ "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
+ },
+ "Storage": {
+ "storageSizeGB": "[parameters('storageSizeGB')]",
+ "iops": "[parameters('storageIops')]",
+ "autogrow": "[parameters('storageAutogrow')]"
+ },
+ "Backup": {
+ "backupRetentionDays": "[parameters('backupRetentionDays')]",
+ "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
+ }
+ },
+ "tags": "[parameters('tags')]"
+ },
+ {
+ "condition": "[greater(length(variables('firewallRules')), 0)]",
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('firewallRules-', copyIndex())]",
+ "copy": {
+ "count": "[if(greater(length(variables('firewallRules')), 0), length(variables('firewallRules')), 1)]",
+ "mode": "Serial",
+ "name": "firewallRulesIterator"
+ },
+ "dependsOn": [
+ "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/firewallRules",
+ "name": "[concat(parameters('serverName'),'/',variables('firewallRules')[copyIndex()].name)]",
+ "apiVersion": "[variables('api')]",
+ "properties": {
+ "StartIpAddress": "[variables('firewallRules')[copyIndex()].startIPAddress]",
+ "EndIpAddress": "[variables('firewallRules')[copyIndex()].endIPAddress]"
+ }
+ }
+ ]
+ }
+ }
+ },
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/databases",
+ "apiVersion": "[variables('api')]",
+ "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
+ ],
+ "properties": {
+ "charset": "utf8",
+ "collation": "utf8_general_ci"
+ }
+ }
+ ]
+}
+```
+
+## Create a server with private access
+Create a _mysql-flexible-server-template.json_ file and copy this JSON script into it to create a server using the private access connectivity method inside a virtual network.
+
+```json
+{
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "administratorLogin": {
+ "type": "string"
+ },
+ "administratorLoginPassword": {
+ "type": "securestring"
+ },
+ "location": {
+ "type": "string"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "serverEdition": {
+ "type": "string",
+ "defaultValue": "Burstable",
+ "metadata": {
+ "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
+ }
+ },
+ "skuName": {
+ "type": "string",
+ "defaultValue": "Standard_B1ms",
+ "metadata": {
+ "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ }
+ },
+ "storageSizeGB": {
+ "type": "int"
+ },
+ "storageIops": {
+ "type": "int"
+ },
+ "storageAutogrow": {
+ "type": "string",
+ "defaultValue": "Enabled"
+ },
+ "availabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability Zone information of the server. (Leave blank for No Preference)."
+ }
+ },
+ "version": {
+ "type": "string"
+ },
+ "tags": {
+ "type": "object",
+ "defaultValue": {}
+ },
+ "haEnabled": {
+ "type": "string",
+ "defaultValue": "Disabled",
+ "metadata": {
+ "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant"
+ }
+ },
+ "standbyAvailabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability zone of the standby server."
+ }
+ },
+ "vnetName": {
+ "type": "string",
+ "defaultValue": "azure_mysql_vnet",
+ "metadata": { "description": "Virtual Network Name" }
+ },
+ "subnetName": {
+ "type": "string",
+ "defaultValue": "azure_mysql_subnet",
+ "metadata": { "description": "Subnet Name"}
+ },
+ "vnetAddressPrefix": {
+ "type": "string",
+ "defaultValue": "10.0.0.0/16",
+ "metadata": { "description": "Virtual Network Address Prefix" }
+ },
+ "subnetPrefix": {
+ "type": "string",
+ "defaultValue": "10.0.0.0/24",
+ "metadata": { "description": "Subnet Address Prefix" }
+ },
+ "backupRetentionDays": {
+ "type": "int"
+ },
+ "geoRedundantBackup": {
+ "type": "string"
+ },
+ "databaseName": {
+ "type": "string"
+ }
},
- "backupRetentionDays": {
- "type": "Int"
- }
- },
- "variables": {
- "api": "2021-05-01",
- "firewallRules": "[parameters('firewallRules').rules]",
- "publicNetworkAccess": "[if(empty(parameters('vnetData')), 'Enabled', 'Disabled')]",
- "vnetDataSet": "[if(empty(parameters('vnetData')), json('{ \"subnetArmResourceId\": \"\" }'), parameters('vnetData'))]",
- "finalVnetData": "[json(concat('{ \"subnetArmResourceId\": \"', variables('vnetDataSet').subnetArmResourceId, '\"}'))]"
- },
- "resources": [
- {
- "type": "Microsoft.DBforMySQL/flexibleServers",
- "apiVersion": "[variables('api')]",
- "name": "[parameters('serverName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_D4ds_v4",
- "tier": "[parameters('serverEdition')]"
- },
- "tags": "[parameters('tags')]",
- "properties": {
- "version": "[parameters('version')]",
- "administratorLogin": "[parameters('administratorLogin')]",
- "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
- "publicNetworkAccess": "[variables('publicNetworkAccess')]",
- "DelegatedSubnetArguments": "[if(empty(parameters('vnetData')), json('null'), variables('finalVnetData'))]",
- "haEnabled": "[parameters('haEnabled')]",
- "storageProfile": {
- "storageMB": "[parameters('storageSizeMB')]",
- "backupRetentionDays": "[parameters('backupRetentionDays')]"
- },
- "availabilityZone": "[parameters('availabilityZone')]"
- }
+ "variables": {
+ "api": "2021-05-01"
},
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-08-01",
- "name": "[concat('firewallRules-', copyIndex())]",
- "dependsOn": [
- "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
- ],
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.DBforMySQL/flexibleServers/firewallRules",
- "name": "[concat(parameters('serverName'),'/',variables('firewallRules')[copyIndex()].name)]",
- "apiVersion": "[variables('api')]",
- "properties": {
- "StartIpAddress": "[variables('firewallRules')[copyIndex()].startIPAddress]",
- "EndIpAddress": "[variables('firewallRules')[copyIndex()].endIPAddress]"
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2021-05-01",
+ "name": "[parameters('vnetName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "[parameters('vnetAddressPrefix')]"
+ ]
                }
            }
- ]
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks/subnets",
+ "apiVersion": "2021-05-01",
+ "name": "[concat(parameters('vnetName'),'/',parameters('subnetName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]"
+ ],
+ "properties": {
+ "addressPrefix": "[parameters('subnetPrefix')]",
+ "delegations": [
+ {
+ "name": "MySQLflexibleServers",
+ "properties": {
+ "serviceName": "Microsoft.DBforMySQL/flexibleServers"
+ }
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers",
+ "apiVersion": "[variables('api')]",
+ "location": "[parameters('location')]",
+ "name": "[parameters('serverName')]",
+ "dependsOn": [
+ "[resourceID('Microsoft.Network/virtualNetworks/subnets/', parameters('vnetName'), parameters('subnetName'))]"
+ ],
+ "sku": {
+ "name": "[parameters('skuName')]",
+ "tier": "[parameters('serverEdition')]"
+ },
+ "properties": {
+ "version": "[parameters('version')]",
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+ "availabilityZone": "[parameters('availabilityZone')]",
+ "highAvailability": {
+ "mode": "[parameters('haEnabled')]",
+ "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
+ },
+ "Storage": {
+ "storageSizeGB": "[parameters('storageSizeGB')]",
+ "iops": "[parameters('storageIops')]",
+ "autogrow": "[parameters('storageAutogrow')]"
+ },
+ "network": {
+ "delegatedSubnetResourceId": "[resourceID('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]"
+ },
+ "Backup": {
+ "backupRetentionDays": "[parameters('backupRetentionDays')]",
+ "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
+ }
+ },
+ "tags": "[parameters('tags')]"
+ },
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/databases",
+ "apiVersion": "[variables('api')]",
+ "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
+ ],
+ "properties": {
+ "charset": "utf8",
+ "collation": "utf8_general_ci"
+ }
}
- },
- "copy": {
- "name": "firewallRulesIterator",
- "count": "[if(greater(length(variables('firewallRules')), 0), length(variables('firewallRules')), 1)]",
- "mode": "Serial"
- },
- "condition": "[greater(length(variables('firewallRules')), 0)]"
- }
- ]
+
+ ]
}
```
-These resources are defined in the template:
-
-- Microsoft.DBforMySQL/flexibleServers
-
## Deploy the template

Select **Try it** from the following PowerShell code block to open [Azure Cloud Shell](../../cloud-shell/overview.md).
mysql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-connect-server-vnet.md
Title: 'Connect to Azure Database for MySQL flexible server with private access in the Azure portal' description: This article walks you through using the Azure portal to create and connect to an Azure Database for MySQL flexible server in private access.-- - +++ Last updated 04/18/2021
mysql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-cli.md
Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MySQL - Flexible Server' description: This quickstart describes how to use the Azure CLI to create an Azure Database for MySQL Flexible Server in an Azure resource group.-- Previously updated : 9/21/2020++
+ms.devlang: azurecli
Last updated : 9/21/2020 # Quickstart: Create an Azure Database for MySQL Flexible Server using Azure CLI
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-portal.md
Title: 'Quickstart: Create an Azure Database for MySQL flexible server - Azure portal' description: This article walks you through using the Azure portal to create an Azure Database for MySQL flexible server in minutes.-- - +++ Last updated 06/13/2022
mysql Quickstart Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-terraform.md
Title: 'Quickstart: Use Terraform to create an Azure Database for MySQL - Flexible Server' description: Learn how to deploy a database for Azure Database for MySQL Flexible Server using Terraform- ++ - Last updated 5/27/2022
mysql Tutorial Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-configure-audit.md
Title: 'Tutorial: Configure audit logs by using Azure Database for MySQL Flexible Server' description: 'This tutorial shows you how to configure audit logs by using Azure Database for MySQL Flexible Server.'-- ++ Last updated 10/01/2021
mysql Tutorial Query Performance Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-query-performance-insights.md
Title: 'Tutorial: Query Performance Insight for Azure Database for MySQL Flexible Server' description: 'This article shows you the tools to help visualize Query Performance Insight for Azure Database for MySQL Flexible Server.'-- ++ Last updated 10/01/2021
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Title: What's new in Azure Database for MySQL - Flexible Server description: Learn about recent updates to Azure Database for MySQL - Flexible Server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.- -- +++ Last updated 05/24/2022
open-datasets Dataset Taxi Yellow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-taxi-yellow.md
Sample not available for this platform/package combination.
blob_account_name = "azureopendatastorage" blob_container_name = "nyctlc" blob_relative_path = "yellow"
-blob_sas_token = r"
+blob_sas_token = r""
# Allow SPARK to read from Blob remotely wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
+
+ Title: 'Collect and process Aqua satellite payload'
+description: 'An end-to-end walk-through of using the Azure Orbital Ground Station as-a-Service (GSaaS) to capture and process satellite imagery'
++++ Last updated : 07/06/2022+++
+# Collect and process Aqua satellite payload using Azure Orbital Ground Station as-a-Service (GSaaS)
+
+This topic is a comprehensive walk-through showing how to use the [Azure Orbital Ground Station as-a-Service (GSaaS)](https://azure.microsoft.com/services/orbital/) to capture and process satellite imagery. It introduces the Azure Orbital GSaaS and its core concepts and shows how to schedule contacts. The topic also steps through an example in which we collect and process NASA Aqua satellite data in an Azure virtual machine (VM) using NASA-provided tools.
+
+Aqua is a polar-orbiting spacecraft launched by NASA in 2002. Data from all science instruments aboard Aqua is downlinked to the Earth using direct broadcast over the X-band in near real-time. More information about Aqua can be found on the [Aqua Project Science](https://aqua.nasa.gov/) website. With Azure Orbital Ground Station as-a-Service (GSaaS), we can capture the Aqua broadcast when the satellite is within line of sight of a ground station.
+
+A *contact* is time reserved at an orbital ground station to communicate with a satellite. During the contact, the ground station orients its antenna towards Aqua and captures the broadcast payload. The captured data is sent to an Azure VM as a data stream that is processed using the [RT-STPS](http://directreadout.sci.gsfc.nasa.gov/index.cfm?section=technology&page=NISGS&subpage=NISFES&sub2page=RT-STPS&sub3Page=overview) (Real-Time Software Telemetry Processing System) provided by the [Direct Readout Laboratory](http://directreadout.sci.gsfc.nasa.gov/) (DRL) at NASA to generate a level 0 product. Further processing of level 0 data is done using the IPOPP (International Planetary Observation Processing Package) tool, also provided by DRL.
+
+Processing the Aqua data stream involves the following steps in order:
+
+1. [Prerequisites](#step-1-prerequisites).
+2. [Process RAW data using RT-STPS](#step-2-process-raw-data-using-rt-stps).
+3. [Prepare a virtual machine (processor-vm) to process higher level products](#step-3-prepare-a-virtual-machine-processor-vm-to-create-higher-level-products).
+4. [Create higher level products using IPOPP](#step-4-create-higher-level-products-using-ipopp).
+
+Optional setup steps for capturing ground station telemetry are included in the [Appendix](#appendix).
+
+## Step 1: Prerequisites
+
+You must first follow the steps listed in [Tutorial: Downlink data from NASA's AQUA public satellite](howto-downlink-aqua.md).
+
+> [!NOTE]
+> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](howto-downlink-aqua.md#prepare-a-virtual-machine-vm-to-receive-the-downlinked-aqua-data), use the following values:
+>
+> - **Name:** receiver-vm
+> - **Operating System:** Linux (CentOS Linux 7 or higher)
+> - **Size:** Standard_D8_v5 or higher
+> - **IP Address:** Ensure that the VM has at least one standard public IP address
+
+## Step 2: Process RAW data using RT-STPS
+
+The [Real-time Software Telemetry Processing System (RT-STPS)](https://directreadout.sci.gsfc.nasa.gov/?id=dspContent&cid=69) is NASA-provided software for processing the raw Aqua payload. The steps below cover installation of RT-STPS on the receiver-vm, and production of level-0 PDS files for the Aqua payload captured in the previous step.
+
+Register with the [NASA DRL](https://directreadout.sci.gsfc.nasa.gov/) to download the RT-STPS installation package.
+
+Transfer the installation binaries to the receiver-vm:
+
+```console
+# On the receiver-vm: create a directory to hold the installation files
+mkdir ~/software/
+# From the machine where you downloaded RT-STPS: copy the archives to the receiver-vm
+scp RT-STPS_6.0*.tar.gz azureuser@receiver-vm:~/software/
+```
+
+Alternatively, you can upload your installation binaries to a container in Azure Storage and download them to the receiver-vm using [AzCopy](../storage/common/storage-use-azcopy-v10.md), as sketched below.
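+
+For example, a minimal AzCopy round trip might look like the following sketch. The storage account, container, and SAS token values are placeholders; substitute your own.
+
+```console
+# From the machine holding the DRL download: upload the archives to a storage container
+azcopy cp "RT-STPS_6.0*.tar.gz" "https://[account].blob.core.windows.net/[container]?[SAS]"
+
+# On the receiver-vm: download the archives into ~/software
+azcopy cp "https://[account].blob.core.windows.net/[container]/*?[SAS]" ~/software/
+```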
+
+### Install rt-stps
+
+```console
+# Install the Java runtime required by RT-STPS (check the RT-STPS documentation for the required Java version)
+sudo yum install java
+cd ~/software
+tar -xzvf RT-STPS_6.0.tar.gz
+cd ./rt-stps
+./install.sh
+```
+
+### Install rt-stps patches
+
+```console
+cd ~/software
+tar -xzvf RT-STPS_6.0_PATCH_1.tar.gz
+tar -xzvf RT-STPS_6.0_PATCH_2.tar.gz
+tar -xzvf RT-STPS_6.0_PATCH_3.tar.gz
+cd ./rt-stps
+./install.sh
+```
+
+### Validate install
+
+```console
+cd ~/software
+tar -xzvf RT-STPS_6.0_testdata.tar.gz
+cd ~/software/rt-stps
+rm ./data/*
+./bin/batch.sh config/npp.xml ./testdata/input/rt-stps_npp_testdata.dat
+#Verify that files exist
+ls -la ./data
+```
+
+### Process RAW Aqua data
+
+Run rt-stps in batch mode to process the previously captured Aqua data (.bin files).
+
+```console
+cd ~/software/rt-stps
+./bin/batch.sh ./config/aqua.xml ~/aquadata/raw-2022-05-29T0957-0700.bin
+```
+
+That command creates the level-0 product (.PDS files) in the ```~/software/rt-stps/data``` directory.
+Use [AzCopy](../storage/common/storage-use-azcopy-v10.md) to copy the level-0 files to a storage container:
+
+```console
+azcopy sync ~/software/rt-stps/data/ "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]"
+```
+
+Download the level-0 PDS files from this storage container for further processing in later steps.
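+
+For example, the following sketch pulls the PDS files down with AzCopy. The storage account, container, SAS token, and destination directory are placeholders:
+
+```console
+azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" ~/level0/ --recursive=true
+```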
+
+## Step 3: Prepare a virtual machine (processor-vm) to create higher level products
+
+[International Planetary Observation Processing Package (IPOPP)](https://directreadout.sci.gsfc.nasa.gov/?id=dspContent&cid=68) is another NASA-provided software package that processes Aqua level-0 data into higher level products.
+In the steps below, you'll process the Aqua PDS files downloaded from the Azure Storage container in the previous step.
+Because IPOPP has higher system requirements than RT-STPS, it should be run on a larger VM, referred to here as the processor-vm.
+
+[Create a virtual machine (VM)](../virtual-machines/linux/quick-create-portal.md) within the same virtual network used above. Ensure that this VM has the following specifications:
+
+- **Name:** processor-vm
+- **Size:** Standard D16ds v5
+- **Operating System:** Linux (CentOS Linux 7 or higher)
+- **Disk:** 2 TB Premium SSD data disk
+
+Create a file system on the data disk:
+
+```console
+sudo fdisk /dev/sdc            # interactive: create a single primary partition
+sudo mkfs -t ext4 /dev/sdc1
+sudo mkdir -p /datadrive       # create the mount point before mounting
+sudo mount /dev/sdc1 /datadrive
+```
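+
+The mount above doesn't persist across reboots. If you want the data disk remounted automatically, one common approach is an `/etc/fstab` entry keyed to the partition's UUID:
+
+```console
+# Find the UUID of the new partition
+sudo blkid /dev/sdc1
+
+# Append an fstab entry (replace <UUID> with the value reported by blkid)
+echo "UUID=<UUID> /datadrive ext4 defaults,nofail 1 2" | sudo tee -a /etc/fstab
+```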
+
+IPOPP installation requires using a browser to sign in to the DRL website to download the installation script. This script must be run from the same host that it was downloaded to. IPOPP configuration also requires a GUI. Therefore, we install a full desktop and a VNC server to enable running GUI applications.
+
+### Install Desktop and VNC Server
+
+```console
+sudo yum install tigervnc-server
+sudo yum groups install "GNOME Desktop"
+```
+
+Start VNC server:
+
+```console
+vncserver
+```
+Enter a password when prompted.
+
+Port forward the vncserver port (5901) over ssh:
+
+```console
+ssh -L 5901:localhost:5901 azureuser@processor-vm
+```
+
+Download the [TightVNC](https://www.tightvnc.com/download.php) viewer, connect to ```localhost:5901```, and enter the VNC server password you set in the previous step. You should see the GNOME desktop running on the VM.
+
+Start a new terminal, and launch the Firefox browser:
+
+```console
+firefox
+```
+
+[Log on to the DRL website](https://directreadout.sci.gsfc.nasa.gov/loginDRL.cfm?cid=320&type=software) and download the downloader script.
+
+Run the downloader script from the ```/datadrive/ipopp``` directory because
+the home directory isn't large enough to hold the downloaded content.
+
+```console
+INSTALL_DIR=/datadrive/ipopp
+sudo mkdir -p $INSTALL_DIR && sudo chown $(whoami) $INSTALL_DIR   # create the directory on the data disk
+cp ~/Downloads/downloader_DRL-IPOPP_4.1.sh $INSTALL_DIR
+cd $INSTALL_DIR
+./downloader_DRL-IPOPP_4.1.sh
+```
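+
+Before starting the download, you can confirm that the data disk has enough free space for the package and its extracted contents:
+
+```console
+df -h /datadrive
+```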
+
+This script will download roughly 35 GB and can take an hour or more.
+
+### Install IPOPP
+
+```console
+tar -C $INSTALL_DIR -xzf DRL-IPOPP_4.1.tar.gz
+chmod -R 755 $INSTALL_DIR/IPOPP
+$INSTALL_DIR/IPOPP/install_ipopp.sh -installdir $INSTALL_DIR/drl -datadir $INSTALL_DIR/data -ingestdir $INSTALL_DIR/data/ingest
+```
+
+### Install patches
+
+```console
+# $PATCH_FILE_NAME is a placeholder for a patch archive downloaded from the DRL website
+$INSTALL_DIR/drl/tools/install_patch.sh $PATCH_FILE_NAME
+```
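+
+If DRL supplies more than one patch archive, you can apply them in sequence. The file name pattern below is hypothetical; use the actual archive names downloaded from the DRL website:
+
+```console
+cd $INSTALL_DIR
+for PATCH_FILE_NAME in DRL-IPOPP_4.1_PATCH_*.tar.gz; do
+  $INSTALL_DIR/drl/tools/install_patch.sh "$PATCH_FILE_NAME"
+done
+```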
+### Start IPOPP services
+
+```console
+$INSTALL_DIR/drl/tools/services.sh start
+```
+### Verify service status
+
+```console
+$INSTALL_DIR/drl/tools/services.sh status
+$INSTALL_DIR/drl/tools/spa_services.sh status
+```
+
+## Step 4: Create higher level products using IPOPP
+
+Before we can create level-1 and level-2 products from the PDS files, we need to configure IPOPP.
+
+### Configure the IPOPP service using the dashboard
+
+IPOPP must be configured with the dashboard GUI. To start the dashboard, first port forward the vncserver port (5901) over ssh:
+
+```console
+ssh -L 5901:localhost:5901 azureuser@processor-vm
+```
+
+Using the TightVNC client, connect to localhost:5901 and enter the vncserver password. On the virtual machine desktop, open a new terminal and start the dashboard:
+
+```console
+cd /datadrive/ipopp
+./drl/tools/dashboard.sh & 
+```
+
+1. IPOPP Dashboard starts in process monitoring mode. Switch to **Configuration Mode** by using the menu option. 
+
+2. Aqua-related products can be configured from the EOS tab in configuration mode. Disable all other tabs. We're interested in the MODIS Aerosol L2 (MOD04) product, which is produced by the IMAPP SPA. Therefore, enable the following in the **EOS** tab: 
+
+ - gbad 
+
+ - MODISL1DB l0l1aqua 
+
+ - MODISL1DB l1atob 
+
+ - IMAPP 
+
+3. After updating the configuration, switch back to **Process Monitoring** mode using the menu. All tiles will be in OFF mode initially. 
+
+4. When prompted, save changes to the configuration.  
+
+5. Click **Start Services** in the action menu. Note that **Start Services** is only enabled in process monitoring mode.  
+
+6. Click **Check IPOPP Services** in action menu to validate.
+
+### Ingest data for processing
+
+Download the PDS files generated by the RT-STPS tool from your storage container to the IPOPP ingest directory configured during installation.
+
+```console
+azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "/datadrive/ipopp/drl/data/dsm/ingest" --recursive=true
+```
+
+Run the IPOPP ingest to create the products configured in the dashboard. 
+
+```console
+/datadrive/ipopp/drl/tools/ingest_ipopp.sh
+```
+
+You can watch the progress in the dashboard.
+
+```console
+/datadrive/ipopp/drl/tools/dashboard.sh
+```
+
+IPOPP will produce output products in the following directories:
+
+```console
+/datadrive/ipopp/drl/data/pub/gsfcdata/aqua/modis/level[0,1,2] 
+```
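+
+To spot-check that products are being written, you can list files created recently under the output tree, for example:
+
+```console
+find /datadrive/ipopp/drl/data/pub/gsfcdata/aqua/modis -type f -mmin -60 | sort
+```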
+
+## Appendix
+
+### Capture ground station telemetry
+
+An Azure Orbital ground station emits telemetry events that can be used to analyze the ground station operation for the duration of the contact. You can configure your contact profile to send such telemetry events to Azure Event Hubs. The steps below describe how to create an event hub and grant Azure Orbital access to send events to it.
+
+1. In your subscription, go to **Resource Provider** settings and register Microsoft.Orbital as a provider.  
+2. [Create an Azure Event Hub](../event-hubs/event-hubs-create.md) in your subscription.
+3. From the left menu, select **Access Control (IAM)**. Under **Grant Access to this Resource**, select **Add Role Assignment**.
+4. Select **Azure Event Hubs Data Sender**.  
+5. Assign access to '**User, group, or service principal**'.
+6. Click '**+ Select members**'. 
+7. Search for '**Azure Orbital Resource Provider**' and press **Select**. 
+8. Press **Review + Assign** to grant Azure Orbital the rights to send telemetry into your event hub.
+9. To confirm the newly added role assignment, go back to the Access Control (IAM) page and select **View access to this resource**.
+
+Congratulations! Azure Orbital can now send telemetry events to your event hub. 
+
+### Enable telemetry for a contact profile in the Azure portal 
+
+1. Go to the **Contact Profile** resource, and click **Create**. 
+2. Choose a namespace using the **Event Hub Namespace** dropdown. 
+3. Choose an instance using the **Event Hub Instance** dropdown that appears after namespace selection. 
+
+### Test telemetry on a contact 
+
+1. Schedule a contact using the Contact Profile that you previously configured for Telemetry. 
+2. Once the contact begins, data should appear in your event hub shortly afterward. 
+
+To verify that events are being received in your event hub, check the graphs on the Event Hubs namespace **Overview** page. The graphs show data across all event hub instances within a namespace. You can navigate to the **Overview** page of a specific instance to see the graphs for that instance. 
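+
+You can also query the namespace metrics from the command line with the Azure CLI. The following is a sketch; the subscription, resource group, and namespace names are placeholders:
+
+```console
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>" \
+  --metric IncomingMessages \
+  --interval PT1M
+```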
+
+You can enable an event hub's [Capture feature](../event-hubs/event-hubs-capture-enable-through-portal.md), which automatically delivers the telemetry data to an Azure Blob storage account of your choosing. 
+
+Once enabled, you can check your container and view or download the data. 
+ 
+The Event Hubs documentation provides a great deal of guidance on how to write simple consumer apps to receive events from Event Hubs: 
+
+- [Python](../event-hubs/event-hubs-python-get-started-send.md)
+
+- [.NET](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) 
+
+- [Java](../event-hubs/event-hubs-java-get-started-send.md) 
+
+- [JavaScript](../event-hubs/event-hubs-node-get-started-send.md)  
+
+Other helpful resources: 
+
+- [Event Hubs using Python Getting Started](../event-hubs/event-hubs-python-get-started-send.md) 
+
+- [Azure Event Hubs client library for Python code samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventhub/azure-eventhub/samples/async_samples) 
+
+## Next steps
+
+For an end-to-end implementation that involves extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics, see:
+
+- [Spaceborne data analysis with Azure Synapse Analytics](https://docs.microsoft.com/azure/architecture/industries/aerospace/geospatial-processing-analytics)
+
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 06/23/2022 Last updated : 07/06/2022 # Overview - Azure Database for PostgreSQL - Flexible Server [Azure Database for PostgreSQL](../overview.md) powered by the PostgreSQL community edition is available in three deployment modes:
One advantage of running your workload in Azure is global reach. The flexible se
| East US | :heavy_check_mark: | :heavy_check_mark: | :x: | | East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: | | France Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Germany West Central | :x: $$ | :x: $ | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: | :x: | :heavy_check_mark: | | Jio India West | :heavy_check_mark: (v3 only)| :x: | :x: |
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview.md
Title: What is Azure Database for PostgreSQL
-description: Provides an overview of Azure Database for PostgreSQL relational database service in the context of flexible server.
+description: Provides an overview of Azure Database for PostgreSQL relational database service in the context of single server.
purview Catalog Private Link End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-end-to-end.md
Previously updated : 01/12/2022 Last updated : 06/21/2022 # Customer intent: As a Microsoft Purview admin, I want to set up private endpoints for my Microsoft Purview account to access purview account and scan data sources from restricted network.
Using one of the deployment options explained further in this guide, you can dep
6. After completing this guide, adjust DNS configurations if needed. 7. Validate your network and name resolution between management machine, self-hosted IR VM and data sources to Microsoft Purview.
+ > [!NOTE]
+ > If you enable a managed event hub after deploying your ingestion private endpoint, you'll need to redeploy the ingestion private endpoint.
+ ## Option 1 - Deploy a new Microsoft Purview account with _account_, _portal_ and _ingestion_ private endpoints 1. Go to the [Azure portal](https://portal.azure.com), and then go to the **Microsoft Purview accounts** page. Select **+ Create** to create a new Microsoft Purview account.
purview Catalog Private Link Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-troubleshoot.md
Previously updated : 01/12/2022 Last updated : 06/21/2022 # Customer intent: As a Microsoft Purview admin, I want to set up private endpoints for my Microsoft Purview account, for secure access.
This guide summarizes known limitations related to using private endpoints for M
## Known limitations -- We currently do not support ingestion private endpoints that work with your AWS sources.-- Scanning Azure Multiple Sources using self-hosted integration runtime is not supported.-- Using Azure integration runtime to scan data sources behind private endpoint is not supported.-- Using Azure portal, the ingestion private endpoints can be created via the Microsoft Purview portal experience described in the preceding steps. They can't be created from the Private Link Center.-- Creating DNS A records for ingestion private endpoints inside existing Azure DNS Zones, while the Azure Private DNS Zones are located in a different subscription than the private endpoints is not supported via the Microsoft Purview portal experience. A records can be added manually in the destination DNS Zones in the other subscription.
+- We currently don't support ingestion private endpoints that work with your AWS sources.
+- Scanning Azure Multiple Sources using self-hosted integration runtime isn't supported.
+- Using Azure integration runtime to scan data sources behind private endpoint isn't supported.
+- The ingestion private endpoints can be created via the Microsoft Purview governance portal experience described in the preceding steps. They can't be created from the Private Link Center.
+- Creating a DNS record for ingestion private endpoints inside existing Azure DNS Zones, while the Azure Private DNS Zones are located in a different subscription than the private endpoints isn't supported via the Microsoft Purview governance portal experience. A record can be added manually in the destination DNS Zones in the other subscription.
+- If you enable a managed event hub after deploying an ingestion private endpoint, you'll need to redeploy the ingestion private endpoint.
- Self-hosted integration runtime machine must be deployed in the same VNet or a peered VNet where Microsoft Purview account and ingestion private endpoints are deployed.-- We currently do not support scanning a cross-tenant Power BI tenant, which has a private endpoint configured with public access blocked.
+- We currently don't support scanning a cross-tenant Power BI tenant, which has a private endpoint configured with public access blocked.
- For limitation related to Private Link service, see [Azure Private Link limits](../azure-resource-manager/management/azure-subscription-service-limits.md#private-link-limits). ## Recommended troubleshooting steps
This guide summarizes known limitations related to using private endpoints for M
|Portal |Microsoft Purview Account |mypurview-private-portal | |Ingestion |Managed Storage Account (Blob) |mypurview-ingestion-blob | |Ingestion |Managed Storage Account (Queue) |mypurview-ingestion-queue |
- |Ingestion |Managed Event Hubs Namespace |mypurview-ingestion-namespace |
+ |Ingestion |Managed Event Hubs Namespace* |mypurview-ingestion-namespace |
+
+ >[!NOTE]
+ > *Managed Event Hubs Namespace is only needed if it has been enabled on your Microsoft Purview account. You can check in **Managed Resources** under settings on your Microsoft Purview account page in the Azure Portal.
2. If portal private endpoint is deployed, make sure you also deploy account private endpoint.
This guide summarizes known limitations related to using private endpoints for M
- To verify the correct name resolution, you can use a **NSlookup.exe** command line tool to query `web.purview.azure.com`. The result must return a private IP address that belongs to portal private endpoint. - To verify network connectivity, you can use any network test tools to test outbound connectivity to `web.purview.azure.com` endpoint to port **443**. The connection must be successful.
-3. If Azure Private DNS Zones are used, make sure the required Azure DNS Zones are deployed and there is DNS (A) record for each private endpoint.
+3. If Azure Private DNS Zones are used, make sure the required Azure DNS Zones are deployed and there's a DNS (A) record for each private endpoint.
4. Test network connectivity and name resolution from management machine to Microsoft Purview endpoint and purview web url. If account and portal private endpoints are deployed, the endpoints must be resolved through private IP addresses.
This guide summarizes known limitations related to using private endpoints for M
TcpTestSucceeded : True ```
-5. If you have created your Microsoft Purview account after 18 August 2021, make sure you download and install the latest version of self-hosted integration runtime from [Microsoft download center](https://www.microsoft.com/download/details.aspx?id=39717).
+5. If you've created your Microsoft Purview account after 18 August 2021, make sure you download and install the latest version of self-hosted integration runtime from [Microsoft download center](https://www.microsoft.com/download/details.aspx?id=39717).
6. From self-hosted integration runtime VM, test network connectivity and name resolution to Microsoft Purview endpoint.
-7. From self-hosted integration runtime, test network connectivity and name resolution to Microsoft Purview managed resources such as blob queue and Event Hub through port 443 and private IP addresses. (Replace the managed storage account and Event Hubs namespace with corresponding managed resource name assigned to your Microsoft Purview account).
+7. From self-hosted integration runtime, test network connectivity and name resolution to Microsoft Purview managed resources such as blob queue and Event Hubs through port 443 and private IP addresses. (Replace the managed storage account and Event Hubs namespace with corresponding managed resource name assigned to your Microsoft Purview account).
```powershell Test-NetConnection -ComputerName `scansoutdeastasiaocvseab`.blob.core.windows.net -Port 443
This guide summarizes known limitations related to using private endpoints for M
```powershell Test-NetConnection -ComputerName `Atlas-1225cae9-d651-4039-86a0-b43231a17a4b`.servicebus.windows.net -Port 443 ```
- Example of successful outbound connection to Event Hub namespace through private IP address:
+ Example of successful outbound connection to Event Hubs namespace through private IP address:
``` ComputerName : Atlas-1225cae9-d651-4039-86a0-b43231a17a4b.servicebus.windows.net
This guide summarizes known limitations related to using private endpoints for M
8. From the network where data source is located, test network connectivity and name resolution to Microsoft Purview endpoint and managed resources endpoints.
-9. If data sources are located in on-premises network, review your DNS forwarder configuration. Test name resolution from within the same network where data sources are located to self-hosted integration runtime, Microsoft Purview endpoints and managed resources. It is expected to obtain a valid private IP address from DNS query for each endpoint.
+9. If data sources are located in on-premises network, review your DNS forwarder configuration. Test name resolution from within the same network where data sources are located to self-hosted integration runtime, Microsoft Purview endpoints and managed resources. It's expected to obtain a valid private IP address from DNS query for each endpoint.
For more information, see [Virtual network workloads without custom DNS server](../private-link/private-endpoint-dns.md#virtual-network-workloads-without-custom-dns-server) and [On-premises workloads using a DNS forwarder](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) scenarios in [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
You may receive the following error message when running a scan:
`Internal system error. Please contact support with correlationId:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx System Error, contact support.` ### Cause
-This can be an indication of issues related to connectivity or name resolution between the VM running self-hosted integration runtime and Microsoft Purview's managed resources storage account or Event Hub.
+This can be an indication of issues related to connectivity or name resolution between the VM running self-hosted integration runtime and Microsoft Purview's managed resources storage account or Event Hubs.
### Resolution Validate name resolution and network connectivity between the VM running the self-hosted integration runtime and Microsoft Purview's managed resources.
Review your existing Azure Policy Assignments and make sure deployment of the fo
### Issue
-Not authorized to access this Microsoft Purview account. This Microsoft Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Microsoft Purview account's private endpoint.
+Not authorized to access this Microsoft Purview account. This Microsoft Purview account is behind a private endpoint. Access the account from a client in the same virtual network (VNet) that has been configured for the Microsoft Purview account's private endpoint.
### Cause User is trying to connect to Microsoft Purview from a public endpoint or using Microsoft Purview public endpoints where **Public network access** is set to **Deny**. ### Resolution
-In this case, to open the Microsoft Purview governance portal, either use a machine that is deployed in the same virtual network as the Microsoft Purview portal private endpoint or use a VM that is connected to your CorpNet in which hybrid connectivity is allowed.
+In this case, to open the Microsoft Purview governance portal, either use a machine that is deployed in the same virtual network as the Microsoft Purview governance portal private endpoint or use a VM that is connected to your CorpNet in which hybrid connectivity is allowed.
### Issue You may receive the following error message when scanning a SQL server, using a self-hosted integration runtime:
You may receive the following error message when scanning a SQL server, using a
### Cause Self-hosted integration runtime machine has enabled the FIPS mode.
-Federal Information Processing Standards (FIPS) defines a certain set of cryptographic algorithms that are allowed to be used. When FIPS mode is enabled on the machine, some cryptographic classes that the invoked processes depends on are blocked in some scenarios.
+Federal Information Processing Standards (FIPS) defines a certain set of cryptographic algorithms that are allowed to be used. When FIPS mode is enabled on the machine, some cryptographic classes that the invoked processes depend on are blocked in some scenarios.
### Resolution Disable FIPS mode on self-hosted integration server.
purview Concept Best Practices Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-automation.md
Previously updated : 05/17/2022 Last updated : 06/20/2022 # Microsoft Purview automation best practices
When to use?
* Custom application development or process automation. ## Streaming (Atlas Kafka)
-Each Microsoft Purview account comes with a fully managed event hub, accessible via the Atlas Kafka endpoint found via the Azure portal > Microsoft Purview Account > Properties. Microsoft Purview events can be monitored by consuming messages from the event hub. External systems can also use the event hub to publish events to Microsoft Purview as they occur.
+Each Microsoft Purview account can enable a fully managed event hub that is accessible via the Atlas Kafka endpoint found via the Azure portal > Microsoft Purview Account > Properties.
+
+To enable this Event Hubs namespace, you can follow these steps:
+1. Search for and open your Microsoft Purview account in the [Azure portal](https://portal.azure.com).
+1. Select **Managed Resources** under settings on your Microsoft Purview account page in the Azure portal.
+ :::image type="content" source="media/concept-best-practices/enable-disable-event-hubs.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted on the Managed resources page of the Microsoft Purview account page in the Azure portal.":::
+1. Select the Enable/Disable toggle to enable your Event Hubs namespace. It can be disabled at any time.
+1. Select **Save** to save the choice and begin the enablement or disablement process. This can take several minutes to complete.
+ :::image type="content" source="media/concept-best-practices/select-save.png" alt-text="Screenshot showing the Managed resources page of the Microsoft Purview account page in the Azure portal with the save button highlighted.":::
+
+>[!NOTE]
+>Enabling this Event Hubs namespace does incur a cost for the namespace. For specific details, see [the pricing page](https://azure.microsoft.com/pricing/details/purview/).
++
+Once the namespace is enabled, Microsoft Purview events can be monitored by consuming messages from the event hub. External systems can also use the event hub to publish events to Microsoft Purview as they occur.
* **Consume Events** - Microsoft Purview will send notifications about metadata changes to Kafka topic **ATLAS_ENTITIES**. Applications interested in metadata changes can monitor for these notifications. Supported operations include: `ENTITY_CREATE`, `ENTITY_UPDATE`, `ENTITY_DELETE`, `CLASSIFICATION_ADD`, `CLASSIFICATION_UPDATE`, `CLASSIFICATION_DELETE`. * **Publish Events** - Microsoft Purview can be notified of metadata changes via notifications to Kafka topic **ATLAS_HOOK**. Supported operations include: `ENTITY_CREATE_V2`, `ENTITY_PARTIAL_UPDATE_V2`, `ENTITY_FULL_UPDATE_V2`, `ENTITY_DELETE_V2`.
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-security.md
Previously updated : 12/05/2021 Last updated : 06/28/2022 # Microsoft Purview security best practices
For more information, see [Best practices related to connectivity to Azure PaaS
### Deploy private endpoints for Microsoft Purview accounts
-If you need to use Microsoft Purview from inside your private network, it is recommended to use Azure Private Link Service with your Microsoft Purview accounts for partial or [end-to-end isolation](catalog-private-link-end-to-end.md) to connect to Microsoft Purview governance portal, access Microsoft Purview endpoints and to scan data sources.
+If you need to use Microsoft Purview from inside your private network, it's recommended to use Azure Private Link Service with your Microsoft Purview accounts for partial or [end-to-end isolation](catalog-private-link-end-to-end.md) to connect to Microsoft Purview governance portal, access Microsoft Purview endpoints and to scan data sources.
The Microsoft Purview _account_ private endpoint is used to add another layer of security, so only client calls that are originated from within the virtual network are allowed to access the Microsoft Purview account. This private endpoint is also a prerequisite for the portal private endpoint.
The Microsoft Purview _portal_ private endpoint is required to enable connectivi
Microsoft Purview can scan data sources in Azure or an on-premises environment by using ingestion private endpoints. - For scanning Azure _platform as a service_ data sources, review [Support matrix for scanning data sources through ingestion private endpoint](catalog-private-link.md#support-matrix-for-scanning-data-sources-through-ingestion-private-endpoint).-- If you are deploying Microsoft Purview with end-to-end network isolation, to scan Azure data sources, these data sources must be also configured with private endpoints.
+- If you're deploying Microsoft Purview with end-to-end network isolation, to scan Azure data sources, these data sources must be also configured with private endpoints.
- Review [known limitations](catalog-private-link-troubleshoot.md). For more information, see [Microsoft Purview network architecture and best practices](concept-best-practices-network.md).
To gain access to Microsoft Purview, users must be authenticated and authorized.
We use Azure Active Directory to provide authentication and authorization mechanisms for Microsoft Purview inside Collections. You can assign Microsoft Purview roles to the following security principals from your Azure Active Directory tenant that is associated with Azure subscription where your Microsoft Purview instance is hosted: -- Users and guest users (if they are already added into your Azure AD tenant)
+- Users and guest users (if they're already added into your Azure AD tenant)
- Security groups - Managed Identities - Service Principals
In Azure, you can apply [resource locks](../azure-resource-manager/management/lo
Enable Azure resource lock for your Microsoft Purview accounts to prevent accidental deletion of Microsoft Purview instances in your Azure subscriptions.
-Adding a `CanNotDelete` or `ReadOnly` lock to Microsoft Purview account does not prevent deletion or modification operations inside Microsoft Purview data plane, however, it prevents any operations in control plane, such as deleting the Microsoft Purview account, deploying a private endpoint or configuration of diagnostic settings.
+Adding a `CanNotDelete` or `ReadOnly` lock to Microsoft Purview account doesn't prevent deletion or modification operations inside Microsoft Purview data plane, however, it prevents any operations in control plane, such as deleting the Microsoft Purview account, deploying a private endpoint or configuration of diagnostic settings.
For more information, see [Understand scope of locks](../azure-resource-manager/management/lock-resources.md#understand-scope-of-locks).
-Resource locks can be assigned to Microsoft Purview resource groups or resources, however, you cannot assign an Azure resource lock to Microsoft Purview Managed resources or managed Resource Group.
+Resource locks can be assigned to Microsoft Purview resource groups or resources, however, you can't assign an Azure resource lock to Microsoft Purview Managed resources or managed Resource Group.
### Implement a break glass strategy Plan for a break glass strategy for your Azure Active Directory tenant, Azure subscription and Microsoft Purview accounts to prevent tenant-wide account lockout.
Microsoft Purview provides rich insights into the sensitivity of your data, whic
Often, one of the biggest challenges for security organization in a company is to identify and protect assets based on their criticality and sensitivity. Microsoft recently [announced integration between Microsoft Purview and Microsoft Defender for Cloud in Public Preview](https://techcommunity.microsoft.com/t5/azure-purview-blog/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954) to help overcome these challenges.
-If you have extended your Microsoft 365 sensitivity labels for assets and database columns in Microsoft Purview, you can keep track of highly valuable assets using Microsoft Defender for Cloud from inventory, alerts and recommendations based on assets detected sensitivity labels.
+If you've extended your Microsoft 365 sensitivity labels for assets and database columns in Microsoft Purview, you can keep track of highly valuable assets using Microsoft Defender for Cloud from inventory, alerts and recommendations based on assets detected sensitivity labels.
-- For recommendations, we've provided **security controls** to help you understand how important each recommendation is to your overall security posture. Defender for Cloud includes a **secure score** value for each control to help you prioritize your security work. Learn more in [Security controls and their recommendations](../defender-for-cloud/secure-score-security-controls.md#security-controls-and-their-recommendations).
+- For recommendations, we've provided **security controls** to help you understand how important each recommendation is to your overall security posture. Microsoft Defender for Cloud includes a **secure score** value for each control to help you prioritize your security work. Learn more in [Security controls and their recommendations](../defender-for-cloud/secure-score-security-controls.md#security-controls-and-their-recommendations).
- For alerts, we've assigned **severity labels** to each alert to help you prioritize the order in which you attend to each alert. Learn more in [How are alerts classified?](../defender-for-cloud/alerts-overview.md#how-are-alerts-classified).
For more information, see [Integrate Microsoft Purview with Azure security produ
### Secure metadata extraction and storage
-Microsoft Purview is a data governance solution in cloud. You can register and scan different data sources from various data systems from your on-premises, Azure, or multi-cloud environments into Microsoft Purview. While data source is registered and scanned in Microsoft Purview, the actual data and data sources stay in their original locations, only metadata is extracted from data sources and stored in Microsoft Purview Data Map, which means you do not need to move data out of the region or their original location to extract the metadata into Microsoft Purview.
+Microsoft Purview is a data governance solution in cloud. You can register and scan different data sources from various data systems from your on-premises, Azure, or multi-cloud environments into Microsoft Purview. While data source is registered and scanned in Microsoft Purview, the actual data and data sources stay in their original locations, only metadata is extracted from data sources and stored in Microsoft Purview Data Map, which means you don't need to move data out of the region or their original location to extract the metadata into Microsoft Purview.
-When a Microsoft Purview account is deployed, in addition, a managed resource group is also deployed in your Azure subscription. A managed Azure Storage Account and a Managed Event Hubs are deployed inside this resource group. The managed storage account is used to ingest metadata from data sources during the scan. Since these resources are consumed by the Microsoft Purview they cannot be accessed by any other users or principals, except the Microsoft Purview account. This is because an Azure role-based access control (RBAC) deny assignment is added automatically for all principals to this resource group at the time of Microsoft Purview account deployment, preventing any CRUD operations on these resources if they are not initiated from Microsoft Purview.
+When a Microsoft Purview account is deployed, in addition, a managed resource group is also deployed in your Azure subscription. A managed Azure Storage Account is deployed inside this resource group, and a managed Event Hubs Namespace can be deployed in this group if the setting is enabled under **Managed Resources** to allow for events ingestion. The managed storage account is used to ingest metadata from data sources during the scan. Since these resources are consumed by the Microsoft Purview they can't be accessed by any other users or principals, except the Microsoft Purview account. This is because an Azure role-based access control (RBAC) deny assignment is added automatically for all principals to this resource group at the time of Microsoft Purview account deployment, preventing any CRUD operations on these resources if they aren't initiated from Microsoft Purview.
### Where is metadata stored?
Microsoft Purview allows you to use any of the following options to extract meta
:::image type="content" source="media/concept-best-practices/security-azure-runtime.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, the Azure runtime, and data sources."lightbox="media/concept-best-practices/security-azure-runtime.png":::
- 1. A manual or automatic scan is initiated from the Microsoft Purview data map through the Azure integration runtime.
+ 1. A manual or automatic scan is initiated from the Microsoft Purview Data Map through the Azure integration runtime.
2. The Azure integration runtime connects to the data source to extract metadata. 3. Metadata is queued in Microsoft Purview managed storage and stored in Azure Blob Storage.
- 4. Metadata is sent to the Microsoft Purview data map.
+ 4. Metadata is sent to the Microsoft Purview Data Map.
-- **Self-hosted integration runtime**. Metadata is extracted and processed by self-hosted integration runtime inside self-hosted integration runtime VMs' memory before they are sent to Microsoft Purview Data Map. In this case, customers have to deploy and manage one or more self-hosted integration runtime Windows-based virtual machines inside their Azure subscriptions or on-premises environments. Scanning on-premises and VM-based data sources always requires using a self-hosted integration runtime. The Azure integration runtime is not supported for these data sources. The following steps show the communication flow at a high level when you're using a self-hosted integration runtime to scan a data source.
+- **Self-hosted integration runtime**. Metadata is extracted and processed by self-hosted integration runtime inside self-hosted integration runtime VMs' memory before they're sent to Microsoft Purview Data Map. In this case, customers have to deploy and manage one or more self-hosted integration runtime Windows-based virtual machines inside their Azure subscriptions or on-premises environments. Scanning on-premises and VM-based data sources always requires using a self-hosted integration runtime. The Azure integration runtime isn't supported for these data sources. The following steps show the communication flow at a high level when you're using a self-hosted integration runtime to scan a data source.
:::image type="content" source="media/concept-best-practices/security-self-hosted-runtime.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, a self-hosted runtime, and data sources."lightbox="media/concept-best-practices/security-self-hosted-runtime.png":::
- 1. A manual or automatic scan is triggered. Microsft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
+ 1. A manual or automatic scan is triggered. Microsoft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
- 2. The scan is initiated from the Microsoft Purview data map through a self-hosted integration runtime.
+ 2. The scan is initiated from the Microsoft Purview Data Map through a self-hosted integration runtime.
3. The self-hosted integration runtime service from the VM connects to the data source to extract metadata. 4. Metadata is processed in VM memory for the self-hosted integration runtime. Metadata is queued in Microsoft Purview managed storage and then stored in Azure Blob Storage.
- 5. Metadata is sent to the Microsoft Purview data map.
+ 5. Metadata is sent to the Microsoft Purview Data Map.
- If you need to extract metadata from data sources with sensitive data that cannot leave the boundary of your on-premises network, it is highly recommended to deploy the self-hosted integration runtime VM inside your corporate network, where data sources are located, to extract and process metadata in on-premises, and send only metadata to Microsoft Purview.
+ If you need to extract metadata from data sources with sensitive data that can't leave the boundary of your on-premises network, it's highly recommended to deploy the self-hosted integration runtime VM inside your corporate network, where data sources are located, to extract and process metadata in on-premises, and send only metadata to Microsoft Purview.
:::image type="content" source="media/concept-best-practices/security-self-hosted-runtime-on-premises.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, an on-premises self-hosted runtime, and data sources in on-premises network."lightbox="media/concept-best-practices/security-self-hosted-runtime-on-premises.png":::
- 1. A manual or automatic scan is triggered. Microsft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
+ 1. A manual or automatic scan is triggered. Microsoft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
2. The scan is initiated through the on-premises self-hosted integration runtime.
Microsoft Purview allows you to use any of the following options to extract meta
4. Metadata is processed in VM memory for the self-hosted integration runtime. Metadata is queued in Microsoft Purview managed storage and then stored in Azure Blob Storage. Actual data never leaves the boundary of your network.
- 5. Metadata is sent to the Microsoft Purview data map.
+ 5. Metadata is sent to the Microsoft Purview Data Map.
### Information protection and encryption
Data at rest includes information that resides in persistent storage on physical
To add another layer of security in addition to access controls, Microsoft Purview encrypts data at rest to protect against 'out of band' attacks (such as accessing underlying storage). It uses encryption with Microsoft-managed keys. This practice helps make sure attackers can't easily read or modify the data.
-For more information, see [Encrypt sensitive data at rest](/security/benchmark/azure/baselines/purview-security-baseline#dp-5-encrypt-sensitive-data-at-rest).
+For more information, see [Encrypt sensitive data at rest](/security/benchmark/azure/baselines/purview-security-baseline#dp-5-encrypt-sensitive-data-at-rest).
+
+### Optional Event Hubs namespace
+
+Each Microsoft Purview account can enable a fully managed event hub that is accessible via the Atlas Kafka endpoint found via the Azure portal > Microsoft Purview Account > Properties. This can be enabled at creation, or from the Azure portal. It's recommended to enable the optional managed event hub only if it's used to distribute events into or out of the Microsoft Purview account's Data Map. To remove this information distribution point, you can disable this Event Hubs namespace.
+
+To disable the Event Hubs namespace, you can follow these steps:
+1. Search for and open your Microsoft Purview account in the [Azure portal](https://portal.azure.com).
+1. Select **Managed Resources** under settings on your Microsoft Purview account page in the Azure portal.
+ :::image type="content" source="media/concept-best-practices/disable-event-hubs.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted on the Managed resources page of the Microsoft Purview account page in the Azure portal.":::
+1. Select the Enable/Disable toggle to enable your Event Hubs namespace.
+1. Select **Save** to save the choice and begin the disablement process. This can take several minutes to complete.
+ :::image type="content" source="media/concept-best-practices/disable-select-save.png" alt-text="Screenshot showing the Managed resources page of the Microsoft Purview account page in the Azure portal with the save button highlighted.":::
+
+> [!NOTE]
+> If you have an ingestion private endpoint when you disable this Event Hubs namespace, the ingestion private endpoint will show as disconnected after the namespace is disabled.
## Credential management
-To extract metadata from a data source system into Microsoft Purview Data Map, it is required to register and scan the data source systems in Microsoft Purview Data Map. To automate this process, we have made available [connectors](azure-purview-connector-overview.md) for different data source systems in Microsoft Purview to simplify the registration and scanning process.
+To extract metadata from a data source system into Microsoft Purview Data Map, it's required to register and scan the data source systems in Microsoft Purview Data Map. To automate this process, we have made available [connectors](azure-purview-connector-overview.md) for different data source systems in Microsoft Purview to simplify the registration and scanning process.
To connect to a data source Microsoft Purview requires a credential with read-only access to the data source system.
-It is recommended prioritizing the use of the following credential options for scanning, when possible:
+It's recommended to prioritize the following credential options for scanning, when possible:
1. Microsoft Purview Managed Identity 2. User Assigned Managed Identity
It is recommended prioritizing the use of the following credential options for s
If you use any options rather than managed identities, all credentials must be stored and protected inside an [Azure key vault](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account). Microsoft Purview requires get/list access to secret on the Azure Key Vault resource.
-As a general rule, you can use the following options to set up integration runtime and credentials to scan data source systems:
+As a general rule, you can use the following options to set up the integration runtime and credentials to scan data source systems:
|Scenario |Runtime option |Supported Credentials | ||||
For self-hosted integration runtime VMs deployed as virtual machines in Azure, f
- Lock down inbound traffic to your VMs using Network Security Groups and [Azure Defender access Just-in-Time](../defender-for-cloud/just-in-time-access-usage.md). - Install antivirus or antimalware. - Deploy Azure Defender to get insights around any potential anomaly on the VMs. -- Limit the amount of software in the self-hosted integration runtime VMs. Although it is not a mandatory requirement to have a dedicated VM for a self-hosted runtime for Microsoft Purview, we highly suggest using dedicated VMs especially for production environments.
+- Limit the amount of software in the self-hosted integration runtime VMs. Although it isn't a mandatory requirement to have a dedicated VM for a self-hosted runtime for Microsoft Purview, we highly suggest using dedicated VMs especially for production environments.
- Monitor the VMs using [Azure Monitor for VMs](../azure-monitor/vm/vminsights-overview.md). By using Log analytics agent, you can capture content such as performance metrics to adjust required capacity for your VMs. - By integrating virtual machines with Microsoft Defender for Cloud, you can you prevent, detect, and respond to threats. - Keep your machines current. You can enable Automatic Windows Update or use [Update Management in Azure Automation](../automation/update-management/overview.md) to manage operating system level updates for the OS. - Use multiple machines for greater resilience and availability. You can deploy and register multiple self-hosted integration runtimes to distribute the scans across multiple self-hosted integration runtime machines or deploy the self-hosted integration runtime on a Virtual Machine Scale Set for higher redundancy and scalability. -- Optionally, you can plan to enable Azure backup from your self-hosted integration runtime VMs to increase the recovery time of a self-hosted integration runtime VM if there is a VM level disaster.
+- Optionally, you can plan to enable Azure backup from your self-hosted integration runtime VMs to increase the recovery time of a self-hosted integration runtime VM if there's a VM level disaster.
## Next steps - [Microsoft Purview accounts architectures and best practices](concept-best-practices-accounts.md)
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Direct costs impacting Microsoft Purview pricing are based on these applications
- [The Microsoft Purview Data Map](concept-guidelines-pricing-data-map.md) - [Data Estate Insights](concept-guidelines-pricing-data-estate-insights.md)
-For guidelines about pricing for these applications, select the links above.
- ## Indirect costs Indirect costs impacting Microsoft Purview (formerly Azure Purview) pricing to be considered are: - [Managed resources](https://azure.microsoft.com/pricing/details/azure-purview/)
- - When an account is provisioned, a storage account and event hub queue are created within the subscription in order to cater to secured scanning, which may be charged separately
+ - When an account is provisioned, a storage account is created in the subscription in order to cater to secured scanning, which may be charged separately.
+ - An Event Hubs namespace can be [enabled at creation](create-catalog-portal.md#create-an-account) or enabled in the [Azure portal](https://portal.azure.com) on the managed resources page of the account to enable monitoring with [*Atlas Kafka* topics events](manage-kafka-dotnet.md). This will be charged separately if it's enabled.
- [Azure private endpoint](./catalog-private-link.md)
purview Create Azure Purview Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-portal-faq.md
Last updated 08/26/2021
# Create an exception to deploy Microsoft Purview
-Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Microsoft Purview accounts deploy two other Azure resources when they're created: an Azure Storage account, and an Event Hubs namespace. When you [create Microsoft Purview Account](create-catalog-portal.md), these resources will be deployed. They'll be managed by Azure, so you don't need to maintain them, but you'll need to deploy them. Existing policies may block this deployment, and you may receive an error when attempting to create a Microsoft Purview account.
+Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Microsoft Purview accounts deploy up to two other Azure resources when they're created: an Azure Storage account, and optionally an Event Hubs namespace. When you [create Microsoft Purview Account](create-catalog-portal.md), these resources will be deployed. They'll be managed by Azure, so you don't need to maintain them, but you'll need to deploy them. Existing policies may block this deployment, and you may receive an error when attempting to create a Microsoft Purview account.
To maintain your policies in your subscription, but still allow the creation of these managed resources, you can create an exception.
purview Create Catalog Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-catalog-portal.md
Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account'
description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account and configure permissions to begin using it. Previously updated : 05/23/2022 Last updated : 06/20/2022
For more information about the governance capabilities of Microsoft Purview, for
> [!Note] > The Microsoft Purview, formerly Azure Purview, does not support moving accounts across regions, so be sure to deploy to the correction region. You can find out more information about this in [move operation support for resources](../azure-resource-manager/management/move-support-resources.md).
+1. You can choose to enable the optional Event Hubs namespace by selecting the toggle. It's disabled by default. Enable this option if you want to be able to programmatically monitor your Microsoft Purview account using Event Hubs and Atlas Kafka:
+ - [Use Event Hubs and .NET to send and receive Atlas Kafka topics messages](manage-kafka-dotnet.md)
+ - [Publish and consume events for Microsoft Purview with Atlas Kafka](concept-best-practices-automation.md#streaming-atlas-kafka)
+
+ :::image type="content" source="media/create-catalog-portal/event-hubs-namespace.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted under the Managed resources section of the Create Microsoft Purview account page.":::
+
+ >[!NOTE]
+ > This option can be enabled or disabled after you have created your account in **Managed Resources** under settings on your Microsoft Purview account page in the Azure portal.
+ >
+ > :::image type="content" source="media/create-catalog-portal/enable-disable-event-hubs.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted on the Managed resources page of the Microsoft Purview account page in the Azure portal.":::
+ 1. Select **Review & Create**, and then select **Create**. It takes a few minutes to complete the creation. The newly created account will appear in the list on your **Microsoft Purview accounts** page. :::image type="content" source="media/create-catalog-portal/create-resource.png" alt-text="Screenshot showing the Create Microsoft Purview account screen with the Review + Create button highlighted":::
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Your self-hosted integration runtime machine needs to connect to several resourc
* The Microsoft Purview services used to manage the self-hosted integration runtime. * The data sources you want to scan using the self-hosted integration runtime.
-* The managed Storage account and Event Hubs resource created by Microsoft Purview. Microsoft Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime need to be able to connect with these resources.
+* The managed Storage account and optional Event Hubs resource created by Microsoft Purview. Microsoft Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime needs to be able to connect with these resources.
There are two firewalls to consider:
purview Manage Kafka Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-kafka-dotnet.md
# Use Event Hubs and .NET to send and receive Atlas Kafka topics messages
-This quickstart teaches you how to send and receive *Atlas Kafka* topics events. We will make use of *Azure Event Hubs* and the **Azure.Messaging.EventHubs** .NET library.
+This quickstart teaches you how to send and receive *Atlas Kafka* topics events. We'll make use of *Azure Event Hubs* and the **Azure.Messaging.EventHubs** .NET library.
> [!IMPORTANT] > A managed event hub is created automatically when your *Microsoft Purview* account is created. See [Purview account creation](create-catalog-portal.md). You can publish messages to the Event Hubs Kafka topic, ATLAS_HOOK. Purview will receive it, process it, and notify the Kafka topic ATLAS_ENTITIES of entity changes. This quickstart uses the new **Azure.Messaging.EventHubs** library.
If you're new to Event Hubs, see [Event Hubs overview](../event-hubs/event-hubs-
To follow this quickstart, you need certain prerequisites in place: - **A Microsoft Azure subscription**. To use Azure services, including Event Hubs, you need an Azure subscription. If you don't have an Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).-- **Microsoft Visual Studio 2022**. The Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# versions, but the new syntax won't be available. To make use of the full syntax, it is recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using a Visual Studio version prior to Visual Studio 2019 it doesn't have the tools needed to build C# 8.0 projects. Visual Studio 2022, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
+- **Microsoft Visual Studio 2022**. The Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# versions, but the new syntax won't be available. To make use of the full syntax, it's recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using a Visual Studio version prior to Visual Studio 2019, it doesn't have the tools needed to build C# 8.0 projects. Visual Studio 2022, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
+- An active [Microsoft Purview account](create-catalog-portal.md) with an Event Hubs namespace enabled. This option can be enabled during account creation or in **Managed Resources** under settings on your Microsoft Purview account page in the [Azure portal](https://portal.azure.com). Select the toggle to enable, then save. It can take a few minutes for the namespace to be available after it's enabled.
+ :::image type="content" source="media/manage-eventhub-kafka-dotnet/enable-disable-event-hubs.png" alt-text="Screenshot showing the Event Hubs namespace toggle highlighted on the Managed resources page of the Microsoft Purview account page in the Azure portal.":::
+
+ >[!NOTE]
+ >Enabling this Event Hubs namespace does incur a cost for the namespace. For specific details, see [the pricing page](https://azure.microsoft.com/pricing/details/purview/).
## Publish messages to Purview
-Let's create a .NET Core console application that sends events to Purview via Event Hub Kafka topic, **ATLAS_HOOK**.
+Let's create a .NET Core console application that sends events to Purview via Event Hubs Kafka topic, **ATLAS_HOOK**.
## Create a Visual Studio project
Next, create a C# .NET console application in Visual Studio:
```csharp
using Azure.Messaging.EventHubs.Producer;
```
-2. Add constants to the `Program` class for the Event Hubs connection string and Event Hub name.
+2. Add constants to the `Program` class for the Event Hubs connection string and Event Hubs name.
```csharp
private const string connectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
private const string eventHubName = "<EVENT HUB NAME>";
```
- You can get the Event Hub namespace associated with the Purview account by looking at the Atlas kafka endpoint primary/secondary connection strings. These can be found in **Properties** tab of your Purview account.
+ You can get the Event Hubs namespace associated with the Purview account by looking at the Atlas Kafka endpoint primary/secondary connection strings. These can be found on the **Properties** tab of your Purview account.
:::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="A screenshot that shows an Event Hubs Namespace.":::
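With the constants in place, a minimal sketch of the finished sender could look like the following. This is an illustration rather than the quickstart's exact program: the payload string is a placeholder for a real Atlas entity JSON message, and the batch pattern is the standard **Azure.Messaging.EventHubs** producer flow.

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class Sender
{
    // Placeholders: copy the Atlas Kafka connection string from the account's Properties tab.
    private const string connectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
    private const string eventHubName = "ATLAS_HOOK";

    static async Task Main()
    {
        // The producer client publishes to the ATLAS_HOOK topic that Purview consumes.
        await using var producer = new EventHubProducerClient(connectionString, eventHubName);

        using EventDataBatch batch = await producer.CreateBatchAsync();

        // A real message would be an Atlas entity JSON payload; this is a stand-in.
        string message = "<ATLAS ENTITY JSON PAYLOAD>";
        if (!batch.TryAdd(new EventData(Encoding.UTF8.GetBytes(message))))
        {
            throw new InvalidOperationException("The event is too large for the batch.");
        }

        await producer.SendAsync(batch);
        Console.WriteLine("Message published to ATLAS_HOOK.");
    }
}
```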
We'll use Azure Storage as the checkpoint store. Use the following steps to crea
You can get the Event Hubs namespace associated with your Purview account by looking at your Atlas Kafka endpoint primary/secondary connection strings. These can be found on the **Properties** tab of your Purview account.
- :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="A screenshot that show an Event Hubs Namespace.":::
+ :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="A screenshot that shows an Event Hubs Namespace.":::
Use **ATLAS_ENTITIES** as the event hub name when receiving messages from Purview.
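As a condensed, hedged sketch of the receive side, the following shows the checkpoint-store pattern with `EventProcessorClient` from the **Azure.Messaging.EventHubs.Processor** package. The storage connection string and container name are placeholders, and the quickstart's full program may differ:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Messaging.EventHubs.Processor;
using Azure.Storage.Blobs;

class Receiver
{
    // Placeholders: the storage account and container back the checkpoint store.
    private const string eventHubsConnectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
    private const string eventHubName = "ATLAS_ENTITIES";
    private const string storageConnectionString = "<AZURE STORAGE CONNECTION STRING>";
    private const string blobContainerName = "<BLOB CONTAINER NAME>";

    static async Task Main()
    {
        // Checkpoints persist in blob storage so processing resumes where it left off.
        var checkpointStore = new BlobContainerClient(storageConnectionString, blobContainerName);
        var processor = new EventProcessorClient(
            checkpointStore,
            EventHubConsumerClient.DefaultConsumerGroupName,
            eventHubsConnectionString,
            eventHubName);

        processor.ProcessEventAsync += async args =>
        {
            // Each event body is an Atlas notification describing an entity change.
            Console.WriteLine(Encoding.UTF8.GetString(args.Data.Body.ToArray()));
            await args.UpdateCheckpointAsync(args.CancellationToken);
        };

        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine($"Error in partition {args.PartitionId}: {args.Exception.Message}");
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        await Task.Delay(TimeSpan.FromSeconds(30)); // listen briefly for demo purposes
        await processor.StopProcessingAsync();
    }
}
```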
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
This article lists prerequisites that help you get started quickly on planning a
|:|:|:|:| |1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required if you plan to [extend Microsoft 365 Sensitivity Labels to the Microsoft Purview Data Map for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> | |2 |An active Azure Subscription |*Subscription Owner* |An Azure subscription is needed to deploy Microsoft Purview and its managed resources. If you don't have an Azure subscription, create a [free subscription](https://azure.microsoft.com/free/) before you begin. |
-|3 |Define whether you plan to deploy a Microsoft Purview with a managed event hub | N/A |A managed event hub is created as part of Microsoft Purview account creation, see Microsoft Purview account creation. You can publish messages to the event hub kafka topic ATLAS_HOOK and Microsoft Purview will consume and process it. Microsoft Purview will notify entity changes to the event hub kafka topic ATLAS_ENTITIES and user can consume and process it. |
+|3 |Define whether you plan to deploy Microsoft Purview with a managed event hub | N/A | You can choose to deploy a managed Event Hubs namespace as part of Microsoft Purview account creation; see [Microsoft Purview account creation](create-catalog-portal.md). With this managed namespace, you can publish messages to the Event Hubs Kafka topic ATLAS_HOOK, and Microsoft Purview will consume and process them. Microsoft Purview will notify the Event Hubs Kafka topic ATLAS_ENTITIES of entity changes, which users can consume and process. You can enable or disable this feature any time after account creation. |
|4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md) in the Azure Subscription that is designated for Microsoft Purview Account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). | |5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Microsoft Purview</li><li>Azure Storage</li><li>Azure Event Hubs (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, follow our [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Microsoft Purview accounts. | |6 | Define your network security requirements. | Network and Security architects. |<ul><li> Review [Microsoft Purview network architecture and best practices](concept-best-practices-network.md) to define what scenario is more relevant to your network requirements. </li><li>If private network is needed, use [Microsoft Purview Managed IR](catalog-managed-vnet.md) to scan Azure data sources when possible to reduce complexity and administrative overhead. </li></ul> |
This article lists prerequisites that help you get started quickly on planning a
|35 |Grant access to data roles in the organization |*Collection admin* |Provide access to other teams to use Microsoft Purview: <ul><li> Data curator</li><li>Data reader</li><li>Collection admin</li><li>Data source admin</li><li>Policy Author</li><li>Workflow admin</li></ul> <br> For more information, see [Access control in Microsoft Purview](catalog-permissions.md). | ## Next steps-- [Review Microsoft Purview deployment best practices](./deployment-best-practices.md)
+- [Review Microsoft Purview deployment best practices](./deployment-best-practices.md)
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-image-analysis.md
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include all generally available languages documented under the [Cognitive Services Computer Vision language support documentation](../cognitive-services/computer-vision/language-support.md#image-analysis).|
+| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include all of the [generally available languages](../cognitive-services/computer-vision/language-support.md#image-analysis) of Cognitive Services Computer Vision. |
| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood), or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. </li><li> *categories* - categorizes image content according to a [taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) defined by Cognitive Services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender, and age. </li><li>*objects* - detects various objects within an image, including the approximate location. </li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md). Refer to the [Computer Vision Image Analysis documentation](../cognitive-services/computer-vision/language-support.md#image-analysis) for details about which visual features are supported with each `defaultLanguageCode`.| | `details` | An array of strings indicating which domain-specific details to return. Valid detail types include: <ul><li>*celebrities* - identifies celebrities if detected in the image.</li><li>*landmarks* - identifies landmarks if detected in the image. </li></ul> |
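In the skillset itself this skill is defined in JSON. Purely to illustrate how the parameters above fit together, here's a rough equivalent using the **Azure.Search.Documents** .NET SDK; type and member names come from that SDK and may vary by version, and the input source and output field names are hypothetical:

```csharp
using System.Collections.Generic;
using Azure.Search.Documents.Indexes.Models;

// Image Analysis skill mirroring the parameters in the table above.
// Input/output names ("image", "myTags", "myDescription") are hypothetical.
var imageSkill = new ImageAnalysisSkill(
    new List<InputFieldMappingEntry>
    {
        new InputFieldMappingEntry("image") { Source = "/document/normalized_images/*" }
    },
    new List<OutputFieldMappingEntry>
    {
        new OutputFieldMappingEntry("tags") { TargetName = "myTags" },
        new OutputFieldMappingEntry("description") { TargetName = "myDescription" }
    })
{
    Context = "/document/normalized_images/*",
    DefaultLanguageCode = ImageAnalysisSkillLanguage.En // defaults to "en" when unset
};

// Visual features and details correspond to the case-sensitive REST values.
imageSkill.VisualFeatures.Add(VisualFeature.Tags);
imageSkill.VisualFeatures.Add(VisualFeature.Description);
imageSkill.Details.Add(ImageDetail.Celebrities);
```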
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
The **Optical character recognition (OCR)** skill recognizes printed and handwri
An OCR skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) API [v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) in Cognitive Services. The **OCR** skill maps to the following functionality:
-+ For the languages listed under the [Cognitive Services Computer Vision language support documentation](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr), the ["Read"](../cognitive-services/computer-vision/overview-ocr.md#read-api) API is used.
++ For the languages listed under [Cognitive Services Computer Vision language support](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr), the [Read API](../cognitive-services/computer-vision/overview-ocr.md#read-api) is used. + For Greek and Serbian Cyrillic, the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. The **OCR** skill extracts text from image files. Supported file formats include:
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `detectOrientation` | Detects image orientation. Valid values are `true` or `false`. <br/><br/> This parameter only applies if the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. |
-| `defaultLanguageCode` | Language code of the input text. Supported languages include all generally available languages documented under the [Cognitive Services Computer Vision language support documentation](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr) and `unk` (Unknown). <br/><br/> If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned. </p> |
+| `detectOrientation` | Detects image orientation. Valid values are `true` or `false`. <br/><br/>This parameter only applies if the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. |
+| `defaultLanguageCode` | Language code of the input text. Supported languages include all of the [generally available languages](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr) of Cognitive Services Computer Vision. You can also specify `unk` (Unknown). <br/><br/>If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned.|
| `lineEnding` | The value to use as a line separator. Possible values: "Space", "CarriageReturn", "LineFeed". The default is "Space". | In previous versions, there was a parameter called "textExtractionAlgorithm" to specify extraction of "printed" or "handwritten" text. This parameter is deprecated because the current Read API algorithm extracts both types of text at once. If your skill includes this parameter, you don't need to remove it, but it won't be used during skill execution.
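As with the Image Analysis skill, the OCR skill is normally defined in JSON in the skillset. The following is a hedged sketch of the same parameters expressed with the **Azure.Search.Documents** .NET SDK; the input source and output field names are hypothetical:

```csharp
using System.Collections.Generic;
using Azure.Search.Documents.Indexes.Models;

// OCR skill mirroring the parameters in the table above.
// Input/output names ("image", "myText") are hypothetical.
var ocrSkill = new OcrSkill(
    new List<InputFieldMappingEntry>
    {
        new InputFieldMappingEntry("image") { Source = "/document/normalized_images/*" }
    },
    new List<OutputFieldMappingEntry>
    {
        new OutputFieldMappingEntry("text") { TargetName = "myText" }
    })
{
    Context = "/document/normalized_images/*",
    DefaultLanguageCode = OcrSkillLanguage.En, // English is assumed when unspecified
    ShouldDetectOrientation = true             // honored only by the legacy OCR API
};
```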
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search.md
Previously updated : 04/29/2022 Last updated : 07/06/2022 # Monitoring Azure Cognitive Search
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Cognitive Search and how to analyze and alert on this data with Azure Monitor.
+This article describes the metrics and usage data collected by Azure Cognitive Search. While these metrics are helpful, you can also expand the scope and durability of monitoring and data collection through Azure Monitor.
-## Monitor overview
+## Monitoring in Azure portal
-In Azure portal, the **Monitoring** tab in the **Overview** page for each Azure Cognitive Search service includes a brief summary of key metrics, including query volume, latency, and throttled queries. This data is collected automatically, and is available for analysis as soon as you create the resource.
+On the search service pages in the Azure portal, you can find the current status of service operations and resource availability.
-While these metrics are helpful, they are a subset of the monitoring and diagnostic data available for Azure Cognitive Search. You can enable additional types of data collection with some configuration. Additional data collection is supported through Azure Monitor.
-
-## What is Azure Monitor?
+ ![Azure Monitor integration in a search service](./media/search-monitor-usage/azure-monitor-search.png "Azure Monitor integration in a search service")
Azure Cognitive Search creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full-stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources.
+On the **Overview** page, the **Monitoring** tab summarizes key [query metrics](search-monitor-queries.md), including query volume, latency, and throttled queries. This data is collected automatically and stored internally for up to 30 days. Data becomes available for analysis as soon as you create the resource.
-If you're not already familiar with monitoring Azure services, start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) which describes the following concepts:
-
-* What Azure Monitor is and how it's integrated into the portal for other Azure services
-* The types of data collected by Azure Monitor for Azure resources
-* Azure Monitor tools used to collect and analyze data
+On the **Overview** page, the **Usage** tab reports on available capacity and the quantity of indexes, indexers, data sources, and skillsets relative to the maximum allowed for your [service tier](search-sku-tier.md).
-The following sections build on this article by describing the specific data gathered from Azure Cognitive Search and providing examples for configuring data collection and analyzing this data with Azure tools.
+From the menu on the left, open the standard **Activity log** page to view search activity at the subscription level. Service administration and control plane operations through Azure Resource Manager are reflected in the activity log.
-## Monitoring data
+> [!NOTE]
+> Cognitive Search does not monitor individual user access to search data (sometimes referred to as document-level or row-level access). Indexing and query requests originate from a client application that presents either an admin or query API key on the request.
Azure Cognitive Search collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md). See the [data reference](monitor-azure-cognitive-search-data-reference.md) for detailed information on the metrics and logs created by Azure Cognitive Search.
+## Get system data from REST APIs
-In addition to the resource logs collected by Azure Monitor, you can also obtain system data from the search service itself, including statistics, counts, and status:
+Although query metrics aren't available through REST, the **Usage** data that's visible in the portal can be obtained programmatically:
* [Service Statistics (REST)](/rest/api/searchservice/get-service-statistics) * [Index Statistics (REST)](/rest/api/searchservice/get-index-statistics) * [Document Counts (REST)](/rest/api/searchservice/count-documents) * [Indexer Status (REST)](/rest/api/searchservice/get-indexer-status)
-The system information above information can also be read from the Azure portal. For REST calls, use an [admin API key](search-security-api-keys.md) and [Postman](search-get-started-rest.md) or another REST client.
+For REST calls, use an [admin API key](search-security-api-keys.md) and [Postman](search-get-started-rest.md) or another REST client to query your search service.
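For example, a minimal sketch of calling the Service Statistics endpoint with `HttpClient` might look like this, assuming the `2020-06-30` API version and placeholder values for the service name and admin key:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class GetServiceStatistics
{
    static async Task Main()
    {
        // Placeholders: substitute your search service name and an admin API key.
        const string serviceName = "<YOUR-SEARCH-SERVICE>";
        const string adminApiKey = "<YOUR-ADMIN-API-KEY>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("api-key", adminApiKey);

        // Service Statistics reports object counts and storage relative to tier limits.
        string uri = $"https://{serviceName}.search.windows.net/servicestats?api-version=2020-06-30";
        Console.WriteLine(await client.GetStringAsync(uri));
    }
}
```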
-On Azure portal pages, check the Usage and Monitoring tabs for counts and metrics. Commands on the left-navigation provide access to configuration and data exploration pages.
+## Expand monitoring with Azure Monitor
- ![Azure Monitor integration in a search service](./media/search-monitor-usage/azure-monitor-search.png "Azure Monitor integration in a search service")
+Azure Cognitive Search creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full-stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources.
-> [!NOTE]
-> Cognitive Search does not monitor individual user access to search data (sometimes referred to as document-level or row-level access). Indexing and query requests originate from a client application that presents either an admin or query API key on the request.
+If you're not already familiar with monitoring Azure services, start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
+
+* What Azure Monitor is and how it's integrated into the portal for other Azure services
+* The types of data collected by Azure Monitor for Azure resources
+* Azure Monitor tools used to collect and analyze data
+
+The following sections build on this article by describing the specific data gathered from Azure Cognitive Search and providing examples for configuring data collection and analyzing this data with Azure tools.
+
+## Monitoring data
+
+Azure Cognitive Search collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md). See the [data reference](monitor-azure-cognitive-search-data-reference.md) for detailed information on the metrics and logs created by Azure Cognitive Search.
## Collection and routing
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
For more information, see Azure Attestation [public documentation](../../attesta
| TLS 1.2 enforcement | GA | GA | | BCDR support | GA | - | | [Service tag integration](../../virtual-network/service-tags-overview.md) | GA | GA |
-| [Immutable log storage](../../attestation/audit-logs.md) | GA | GA |
+| [Immutable log storage](../../attestation/view-logs.md) | GA | GA |
| Network isolation using private link | Public Preview | - | | [FedRAMP High certification](../../azure-government/compliance/azure-services-in-fedramp-auditscope.md) | GA | - | | Customer lockbox | GA | - |
security Pen Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/pen-testing.md
As of June 15, 2017, Microsoft no longer requires pre-approval to conduct a pene
Standard tests you can perform include:
-* Tests on your endpoints to uncover the [Open Web Application Security Project (OWASP) top 10 vulnerabilities](https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project)
+* Tests on your endpoints to uncover the [Open Web Application Security Project (OWASP) top 10 vulnerabilities](https://owasp.org/www-project-top-ten/)
* [Fuzz testing](https://cloudblogs.microsoft.com/microsoftsecure/2007/09/20/fuzz-testing-at-microsoft-and-the-triage-process/) of your endpoints * [Port scanning](https://en.wikipedia.org/wiki/Port_scanner) of your endpoints
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
The following list mentions fields that have specific guidelines for DNS events:
| **EventType** | Mandatory | Enumerated | Indicates the operation reported by the record. <br><br> For DNS records, this value would be the [DNS op code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `lookup`| | **EventSubType** | Optional | Enumerated | Either `request` or `response`. <br><br>For most sources, [only the responses are logged](#guidelines-for-collecting-dns-events), and therefore the value is often **response**. | | <a name=eventresultdetails></a>**EventResultDetails** | Mandatory | Enumerated | For DNS events, this field provides the [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Notes**:<br>- IANA doesn't define the case for the values, so analytics must normalize the case.<br> - If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br>- If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.3**. |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.4**. |
| **EventSchema** | Mandatory | String | The name of the schema documented here is **Dns**. | | **Dvc** fields| - | - | For DNS events, device fields refer to the system that reports the DNS event. |
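ASIM parsers themselves are written in Kusto Query Language, but purely to illustrate the numeric-to-name lookup that the `EventResultDetails` guidance above calls for, here's a small sketch; the class and method names are hypothetical, and the mappings follow the IANA DNS RCODE registry:

```csharp
using System.Collections.Generic;

// Maps numeric DNS response codes (IANA RCODE registry) to the names used in
// EventResultDetails, normalized to upper case. A request record gets "NA".
static class DnsResponseCode
{
    private static readonly Dictionary<int, string> Names = new()
    {
        [0] = "NOERROR",
        [1] = "FORMERR",
        [2] = "SERVFAIL",
        [3] = "NXDOMAIN",
        [4] = "NOTIMP",
        [5] = "REFUSED",
        [9] = "NOTAUTH",
    };

    public static string ToEventResultDetails(int? rcode) =>
        rcode is null ? "NA"
        : Names.TryGetValue(rcode.Value, out var name) ? name
        : rcode.Value.ToString();
}
```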
The changes in version 0.1.3 of the schema are:
- Added optional Geo Location and Risk Level fields. The changes in version 0.1.4 of the schema are:
+- Added the optional fields `ThreatIpAddr`, `ThreatName`, `ThreatConfidence`, `ThreatOriginalConfidence`, `ThreatOriginalRiskLevel`, `ThreatIsActive`, `ThreatFirstReportedTime`, and `ThreatLastReportedTime`.
## Source-specific discrepancies
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
Title: Integrate Microsoft Sentinel and Microsoft Defender for IoT | Microsoft
description: This tutorial describes how to use the Microsoft Sentinel data connector and solution for Microsoft Defender for IoT to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries. Previously updated : 12/20/2021 Last updated : 06/20/2022
For more information, see [About Microsoft Sentinel content and solutions](senti
## Detect threats out-of-the-box with Defender for IoT data
-Incidents are not created for alerts generated by Defender for IoT data by default.
+Incidents aren't created for alerts generated by Defender for IoT data by default.
You can ensure that Microsoft Sentinel creates incidents for relevant alerts generated by Defender for IoT, either by using the out-of-the-box analytics rules provided in the **IoT OT Threat Monitoring with Defender for IoT** solution, by configuring analytics rules manually, or by configuring your data connector to automatically create incidents for *all* alerts generated by Defender for IoT.
The following table describes the out-of-the-box analytics rules provided in the
| **PLC insecure key state** | The new mode may indicate that the PLC is not secure. Leaving the PLC in an insecure operating mode may allow adversaries to perform malicious activities on it, such as a program download. <br><br>If the PLC is compromised, devices and processes that interact with it may be impacted, which may affect overall system security and safety. | | **PLC stop** | The PLC stop command may indicate an improper configuration of an application that has caused the PLC to stop functioning, or malicious activity on the network. For example, a cyber threat that attempts to manipulate PLC programming to affect the functionality of the network. | | **Suspicious malware found in the network** | Suspicious malware found on the network indicates that suspicious malware is trying to compromise production. |
-| **Multiple scans in the network** | Multiple scans on the network can be an indication of one of the following: <br><br>- A new device on the network <br>- New functionality of an existing device <br>- Misconfiguration of an application, such as due to a firmware update or re-installation <br>- Malicious activity on the network for reconnaissance |
+| **Multiple scans in the network** | Multiple scans on the network can be an indication of one of the following: <br><br>- A new device on the network <br>- New functionality of an existing device <br>- Misconfiguration of an application, such as due to a firmware update or reinstallation <br>- Malicious activity on the network for reconnaissance |
| **Internet connectivity** | An OT device communicating with internet addresses may indicate an improper application configuration, such as anti-virus software attempting to download updates from an external server, or malicious activity on the network. | | **Unauthorized device in the SCADA network** | An unauthorized device on the network may be a legitimate, new device recently installed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. | | **Unauthorized DHCP configuration in the SCADA network** | An unauthorized DHCP configuration on the network may indicate a new, unauthorized device operating on the network. <br><br>This may be a legitimate, new device recently deployed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
-| **Excessive login attempts** | Excessive login attempts may indicate improper service configuration, human error, or malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+| **Excessive login attempts** | Excessive sign-in attempts may indicate improper service configuration, human error, or malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
| **High bandwidth in the network** | An unusually high bandwidth may be an indication of a new service/process on the network, such as backup, or an indication of malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. | | **Denial of Service** | This alert detects attacks that would prevent the use or proper operation of the DCS system. | | **Unauthorized remote access to the network** | Unauthorized remote access to the network can compromise the target device. <br><br> This means that if another device on the network is compromised, the target devices can be accessed remotely, increasing the attack surface. |
In some cases, maintenance activities generate alerts in Microsoft Sentinel that
To use this playbook: - Enter the relevant time period when the maintenance is expected to occur, and the IP addresses of any relevant assets, such as those listed in an Excel file.-
+- Create a watchlist that includes all the asset IP addresses on which alerts should be handled automatically.
### Email notifications by production line
To use this playbook, create a watchlist that maps between the sensor names and
### Create a new ServiceNow ticket
-**Playbook name** AD4IoT-NewAssetServiceNowTicket
+**Playbook name**: AD4IoT-NewAssetServiceNowTicket
Typically, the entity authorized to program a PLC is the Engineering Workstation. Therefore, attackers might create new Engineering Workstations in order to create malicious PLC programming. This playbook opens a ticket in ServiceNow each time a new Engineering Workstation is detected, explicitly parsing the IoT device entity fields.
+### Update alert statuses in Defender for IoT
+
+**Playbook name**: AD4IoT-AutoAlertStatusSync
+
+This playbook updates alert statuses in Defender for IoT whenever a related alert in Microsoft Sentinel has a **Status** update.
+
+This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
+
+To use this playbook, make sure that you have the required role applied, valid connections where required, and an automation rule to connect incident triggers with the **AD4IoT-AutoAlertStatusSync** playbook:
+
+**To add the *Security Admin* role to the Azure subscription where the playbook is installed**:
+
+1. Open the **AD4IoT-AutoAlertStatusSync** playbook from the Microsoft Sentinel **Automation** page.
+
+1. With the playbook opened as a Logic app, select **Identity > System assigned**, and then in the **Permissions** area, select the **Azure role assignments** button.
+
+1. In the **Azure role assignments** page, select **Add role assignment**.
+
+1. In the **Add role assignment** pane:
+
+ - Define the **Scope** as **Subscription**.
+ - From the **Subscription** dropdown, select the subscription where your playbook is installed.
+ - From the **Role** dropdown, select the **Security Admin** role, and then select **Save**.
+
+**To ensure that you have valid connections for each of your connection steps in the playbook**:
+
+1. Open the **AD4IoT-AutoAlertStatusSync** playbook from the Microsoft Sentinel **Automation** page.
+
+1. With the playbook opened as a Logic app, select **Logic app designer**. If you have invalid connection details, you may have warning signs in both of the **Connections** steps. For example:
+
+ :::image type="content" source="media/iot-solution/connection-steps.png" alt-text="Screenshot of the default AD4IOT AutoAlertStatusSync playbook." lightbox="media/iot-solution/connection-steps.png":::
+
+1. Select a **Connections** step to expand it and add a valid connection as needed.
+
+**To connect your incidents, relevant analytics rules, and the AD4IoT-AutoAlertStatusSync playbook**:
+
+Add a new Microsoft Sentinel automation rule, defined as follows:
+
+- In the **Trigger** field, select **When an incident is updated**.
+
+- In the **Conditions** area, select **If > Analytic rule name > Contains**, and then select the specific analytics rules relevant for Defender for IoT in your organization.
+
+ You may be using out-of-the-box analytics rules, or you may have modified the out-of-the-box content, or created your own. For more information, see [Detect threats out-of-the-box with Defender for IoT data](#detect-threats-out-of-the-box-with-defender-for-iot-data).
+
+- In the **Actions** area, select **Run playbook > AD4IoT-AutoAlertStatusSync**.
+
## Next steps For more information, see:
site-recovery Asr Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/asr-arm-templates.md
Title: Azure Resource Manager Templates description: Azure Resource Manager templates for using Azure Site Recovery features. -+ Last updated 02/04/2021-+ # Azure Resource Manager templates for Azure Site Recovery
site-recovery Avs Tutorial Dr Drill Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-dr-drill-azure.md
Title: Run a disaster recovery drill from Azure VMware Solution to Azure with Azure Site Recovery description: Learn how to run a disaster recovery drill from Azure VMware Solution private cloud to Azure, with Azure Site Recovery.-+ Last updated 09/30/2020-+
site-recovery Avs Tutorial Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-failback.md
Title: Fail back Azure VMware Solution VMs from Azure with Azure Site Recovery description: Learn how to fail back to the Azure VMware Solution private cloud after failover to Azure, during disaster recovery.-+ Last updated 09/30/2020-+
site-recovery Avs Tutorial Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-failover.md
Title: Fail over Azure VMware Solution VMs to Azure with Site Recovery description: Learn how to fail over Azure VMware Solution VMs to Azure in Azure Site Recovery-+ Last updated 09/30/2020-+ # Fail over Azure VMware Solution VMs
site-recovery Avs Tutorial Prepare Avs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-avs.md
Title: Prepare Azure VMware Solution for disaster recovery to Azure Site Recovery description: Learn how to prepare Azure VMware Solution servers for disaster recovery to Azure using the Azure Site Recovery service.-+ Last updated 09/29/2020-+
site-recovery Avs Tutorial Prepare Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-azure.md
Title: Prepare Azure Site Recovery resources for disaster recovery of Azure VMware Solution VMs description: Learn how to prepare Azure resources for disaster recovery of Azure VMware Solution machines using Azure Site Recovery. -+ Last updated 09/29/2020-+
site-recovery Avs Tutorial Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-replication.md
Title: Set up Azure Site Recovery for Azure VMware Solution VMs description: Learn how to set up disaster recovery to Azure for Azure VMware Solution VMs with Azure Site Recovery.-+ Last updated 09/29/2020-+
site-recovery Avs Tutorial Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-reprotect.md
Title: Reprotect Azure VMs to an Azure VMware Solution private cloud with Azure Site Recovery description: Learn how to reprotect Azure VMware Solution VMs after failover to Azure with Azure Site Recovery.-+ Last updated 09/30/2020-+
site-recovery Azure To Azure About Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-about-networking.md
Title: About networking in Azure VM disaster recovery with Azure Site Recovery description: Provides an overview of networking for replication of Azure VMs using Azure Site Recovery. -+ Last updated 3/13/2020-+ # About networking in Azure VM disaster recovery
site-recovery Azure To Azure Autoupdate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-autoupdate.md
Title: Automatic update of the Mobility service in Azure Site Recovery description: Overview of automatic update of the Mobility service when replicating Azure VMs by using Azure Site Recovery. -+ Last updated 04/02/2020-+ # Automatic update of the Mobility service in Azure-to-Azure replication
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Title: Common questions about Azure VM disaster recovery with Azure Site Recovery description: This article answers common questions about Azure VM disaster recovery when you use Azure Site Recovery.-+ Last updated 04/28/2022
site-recovery Azure To Azure Customize Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-customize-networking.md
Title: Customize networking configurations for a failover VM | Microsoft Docs description: Provides an overview of customize networking configurations for a failover VM in the replication of Azure VMs using Azure Site Recovery. -+ Last updated 10/01/2021-+ # Customize networking configurations of the target Azure VM
site-recovery Azure To Azure Enable Replication Added Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-replication-added-disk.md
Title: Enable replication for an added Azure VM disk in Azure Site Recovery description: This article describes how to enable replication for a disk added to an Azure VM that's enabled for disaster recovery with Azure Site Recovery-+ Last updated 04/29/2019
site-recovery Azure To Azure Exclude Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-exclude-disks.md
Title: Exclude Azure VM disks from replication with Azure Site Recovery and Azure PowerShell description: Learn how to exclude disks of Azure virtual machines during Azure Site Recovery by using Azure PowerShell.-+ Last updated 02/18/2019
site-recovery Azure To Azure How To Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-policy.md
Title: Enable Azure Site Recovery for your VMs by using Azure Policy description: Learn how to enable policy support to help protect your VMs by using Azure Site Recovery.--++ Last updated 07/25/2021
site-recovery Azure To Azure How To Enable Replication Ade Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms.md
Title: Enable replication for encrypted Azure VMs in Azure Site Recovery description: This article describes how to configure replication for Azure Disk Encryption-enabled VMs from one Azure region to another by using Site Recovery.-+ Last updated 08/08/2019-+
site-recovery Azure To Azure How To Enable Replication Cmk Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md
Title: Enable replication of encrypted Azure VMs in Azure Site Recovery description: This article describes how to configure replication for VMs with customer-managed key (CMK) enabled disks from one Azure region to another by using Site Recovery.-+ Last updated 07/25/2021-+
site-recovery Azure To Azure How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md
Title: Enable replication for private endpoints in Azure Site Recovery description: This article describes how to configure replication for VMs with private endpoints from one Azure region to another by using Site Recovery.--++ Last updated 07/14/2020
site-recovery Azure To Azure How To Enable Replication S2d Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-s2d-vms.md
Title: Replicate Azure VMs running Storage Spaces Direct with Azure Site Recovery description: Learn how to replicate Azure VMs running Storage Spaces Direct using Azure Site Recovery.-+ Last updated 01/29/2019
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Title: Configure replication for Azure VMs in Azure Site Recovery description: Learn how to configure replication to another region for Azure VMs, using Site Recovery.-+ Last updated 04/29/2018
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
Title: Enable Zone to Zone Disaster Recovery for Azure Virtual Machines description: This article describes when and how to use Zone to Zone Disaster Recovery for Azure virtual machines.-+ Last updated 03/23/2022-+
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
Title: Reprotect Azure VMs to the primary region with Azure Site Recovery | Microsoft Docs description: Describes how to reprotect Azure VMs after failover, the secondary to primary region, using Azure Site Recovery. -+ Last updated 11/27/2018-+ # Reprotect failed over Azure VMs to the primary region
site-recovery Azure To Azure Move Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-move-overview.md
Title: Moving Azure VMs to another region with Azure Site Recovery description: Using Azure Site Recovery to move Azure VMs from one Azure region to another.-+ Last updated 01/28/2019-+
site-recovery Azure To Azure Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-network-mapping.md
Title: Map virtual networks between two regions in Azure Site Recovery description: Learn about mapping virtual networks between two Azure regions for Azure VM disaster recovery with Azure Site Recovery.-+ Last updated 10/15/2019-+ # Set up network mapping and IP addressing for VNets
site-recovery Azure To Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md
Title: Disaster recovery for Azure VMs using Azure PowerShell and Azure Site Recovery description: Learn how to set up disaster recovery for Azure virtual machines with Azure Site Recovery using Azure PowerShell. -+ Last updated 3/29/2019-+
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery
description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Last updated 05/05/2022--++ # Support matrix for Azure VM disaster recovery between Azure regions
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
Title: Troubleshoot Azure VM replication in Azure Site Recovery description: Troubleshoot errors when replicating Azure virtual machines for disaster recovery.-+ Last updated 04/29/2022-+ # Troubleshoot Azure-to-Azure VM replication errors
site-recovery Azure To Azure Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-network-connectivity.md
Title: Troubleshoot connectivity for Azure to Azure disaster recovery with Azure Site Recovery description: Troubleshoot connectivity issues in Azure VM disaster recovery-+ Last updated 04/06/2020
site-recovery Azure To Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-replication.md
Title: Troubleshoot replication of Azure VMs with Azure Site Recovery description: Troubleshoot replication in Azure VM disaster recovery with Azure Site Recovery-+ Last updated 04/03/2020
site-recovery Azure To Azure Tutorial Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-migrate.md
Title: Move Azure VMs to a different Azure region with Azure Site Recovery description: Use Azure Site Recovery to move Azure VMs from one Azure region to another. -+ Last updated 01/28/2019-+
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
Title: Enable accelerated networking for Azure VM disaster recovery with Azure S
description: Describes how to enable Accelerated Networking with Azure Site Recovery for Azure virtual machine disaster recovery documentationcenter: ''-+ Last updated 04/08/2019-+ # Accelerated Networking with Azure virtual machine disaster recovery
site-recovery Azure Vm Disaster Recovery With Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-expressroute.md
Title: Integrate Azure ExpressRoute Azure VM disaster recovery with Azure Site Recovery description: Describes how to set up disaster recovery for Azure VMs using Azure Site Recovery and Azure ExpressRoute -+ Last updated 07/25/2021-+ # Integrate ExpressRoute with disaster recovery for Azure VMs
site-recovery Concepts Expressroute With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-expressroute-with-site-recovery.md
Title: About using ExpressRoute with Azure Site Recovery description: Describes how to use Azure ExpressRoute with the Azure Site Recovery service for disaster recovery and migration. -+ Last updated 10/13/2019-+ # Azure ExpressRoute with Azure Site Recovery
site-recovery Concepts Multiple Ip Address Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-multiple-ip-address-failover.md
Title: Configure failover of multiple IP addresses with Azure Site Recovery description: Describes how to configure the failover of secondary IP configs for Azure VMs -+ Last updated 11/01/2021-+ # Configure failover of multiple IP addresses with Azure Site Recovery
site-recovery Concepts Network Security Group With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-network-security-group-with-site-recovery.md
Title: Network Security Groups with Azure Site Recovery | Microsoft Docs description: Describes how to use Network Security Groups with Azure Site Recovery for disaster recovery and migration-+ Last updated 04/08/2019-+ # Network Security Groups with Azure Site Recovery
site-recovery Concepts On Premises To Azure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-on-premises-to-azure-networking.md
Title: Connect to Azure VMs on-premises failover with Azure Site Recovery description: Describes how to connect to Azure VMs after failover from on-premises to Azure using Azure Site Recovery-+ Last updated 10/13/2019-+ # Connect to Azure VMs after failover from on-premises
site-recovery Concepts Public Ip Address With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-public-ip-address-with-site-recovery.md
Title: Assign public IP addresses after failover with Azure Site Recovery description: Describes how to set up public IP addresses with Azure Site Recovery and Azure Traffic Manager for disaster recovery and migration -+ Last updated 04/08/2019-+ # Set up public IP addresses after failover
site-recovery Concepts Traffic Manager With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-traffic-manager-with-site-recovery.md
Title: Azure Traffic Manager with Azure Site Recovery | Microsoft Docs description: Describes how to use Azure Traffic Manager with Azure Site Recovery for disaster recovery and migration -+ Last updated 04/08/2019-+ # Azure Traffic Manager with Azure Site Recovery
site-recovery Configure Mobility Service Proxy Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/configure-mobility-service-proxy-settings.md
Title: Configure Mobility Service Proxy Settings for Azure to Azure Disaster Recovery | Microsoft Docs description: Provides details on how to configure mobility service when customers use a proxy in their source environment. -+ Last updated 03/18/2020-+ # Configure Mobility Service Proxy Settings for Azure to Azure Disaster Recovery
site-recovery Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/delete-vault.md
Title: Delete an Azure Site Recovery vault description: Learn how to delete a Recovery Services vault configured for Azure Site Recovery-+ Last updated 11/05/2019-+
site-recovery Encryption Feature Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/encryption-feature-deprecation.md
Title: Deprecation of Azure Site Recovery data encryption feature | Microsoft Docs description: Details regarding the Azure Site Recovery data encryption feature -+ Last updated 11/15/2019-+ # Deprecation of Site Recovery data encryption feature
site-recovery File Server Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/file-server-disaster-recovery.md
Title: Protect a file server by using Azure Site Recovery description: This article describes how to protect a file server by using Azure Site Recovery -+ Last updated 07/31/2019-+ # Protect a file server by using Azure Site Recovery
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
Title: Replicate Azure VMs running in a proximity placement group description: Learn how to replicate Azure VMs running in proximity placement groups by using Azure Site Recovery.-+ Last updated 02/11/2021
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
Title: Enable replication for on-premises machines with private endpoints description: This article describes how to configure replication for on-premises machines by using private endpoints in Site Recovery. --++ Last updated 07/14/2020
site-recovery Hyper V Azure Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-failback.md
Title: Fail back Hyper-V VMs from Azure with Azure Site Recovery description: How to fail back Hyper-V VMs to an on-premises site from Azure with Azure Site Recovery. -+ Last updated 09/12/2019-+
site-recovery Hyper V Azure Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-powershell-resource-manager.md
Title: Hyper-V VM disaster recovery using Azure Site Recovery and PowerShell description: Automate disaster recovery of Hyper-V VMs to Azure with the Azure Site Recovery service using PowerShell and Azure Resource Manager.-+ Last updated 01/10/2020-+ ms.tool: azure-powershell
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-support-matrix.md
description: Summarizes the supported components and requirements for Hyper-V VM
Last updated 7/14/2020--++
site-recovery Hyper V Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-troubleshoot.md
Title: Troubleshoot Hyper-V disaster recovery with Azure Site Recovery description: Describes how to troubleshoot disaster recovery issues with Hyper-V to Azure replication using Azure Site Recovery -+ Last updated 04/14/2019-+ # Troubleshoot Hyper-V to Azure replication and failover
site-recovery Hyper V Deployment Planner Analyze Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-analyze-report.md
Title: Analyze the Hyper-V Deployment Planner report in Azure Site Recovery description: This article describes how to analyze a report generated by the Azure Site Recovery Deployment Planner for disaster recovery of Hyper-V VMs to Azure. -+ Last updated 10/21/2019-+ # Analyze the Azure Site Recovery Deployment Planner report
site-recovery Hyper V Deployment Planner Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-cost-estimation.md
Title: Review the Azure Site Recovery Deployment Planner cost estimation report for disaster recovery of Hyper-V VMs to Azure | Microsoft Docs description: This article describes how to review the cost estimation report generated by the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure. -+ Last updated 4/9/2019-+ # Cost estimation report by Azure Site Recovery Deployment Planner
site-recovery Hyper V Deployment Planner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-overview.md
Title: Deployment Planner for Hyper-V disaster recovery with Azure Site Recovery description: Learn about the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure.-+ Last updated 3/13/2020-+ # About the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure
site-recovery Hyper V Deployment Planner Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-run.md
Title: Run the Hyper-V Deployment Planner in Azure Site Recovery description: This article describes how to run the Azure Site Recovery Deployment Planner for Hyper-V disaster recovery to Azure.-+ Last updated 04/09/2019-+
site-recovery Hyper V Exclude Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-exclude-disk.md
Title: Exclude Hyper-V VM disks from disaster recovery to Azure with Azure Site Recovery description: How to exclude Hyper-V VM disks from replication to Azure with Azure Site Recovery.-+ -+ Last updated 11/12/2019
site-recovery Hyper V Vmm Performance Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-performance-results.md
Title: Test Hyper-V VM replication to a secondary site with VMM using Azure Site Recovery description: This article provides information about performance testing for replication of Hyper-V VMs in VMM clouds to a secondary site using Azure Site Recovery.-+ Last updated 12/27/2018-+ # Test results for Hyper-V replication to a secondary site
site-recovery Hyper V Vmm Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-powershell-resource-manager.md
Title: Set up Hyper-V (with VMM) disaster recovery to a secondary site with Azure Site Recovery/PowerShell description: Describes how to set up disaster recovery of Hyper-V VMs in VMM clouds to a secondary VMM site using Azure Site Recovery and PowerShell. -+ Last updated 1/10/2020-+
site-recovery Hyper V Vmm Recovery Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-recovery-script.md
Title: Add a script to a recovery plan in Azure Site Recovery description: Learn how to add a VMM script to a recovery plan for disaster recovery of Hyper-V VMs in VMM clouds. -+ Last updated 11/27/2018-+ # Add a VMM script to a recovery plan
site-recovery Hyper V Vmm Test Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-test-failover.md
Title: Run a Hyper-V disaster recovery drill to a secondary site with Azure Site Recovery description: Learn how to run a DR drill for Hyper-V VMs in VMM clouds to a secondary on-premises datacenter using Azure Site Recovery.-+ Last updated 11/27/2018-+ # Run a DR drill for Hyper-V VMs to a secondary site
site-recovery Monitoring High Churn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitoring-high-churn.md
Title: Monitoring churn patterns on virtual machines description: Learn how to monitor churn patterns on Virtual Machines protected using Azure Site Recovery-+ Last updated 09/09/2020-+ # Monitoring churn patterns on virtual machines
site-recovery Move Azure Vms Avset Azone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-azure-VMs-AVset-Azone.md
Title: Move VMs to an Azure region with availability zones using Azure Site Recovery description: Learn how to move VMs to an availability zone in a different region with Site Recovery -+ Last updated 01/28/2019-+
site-recovery Move Azure Vms Cross Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-azure-VMs-cross-region.md
Title: Move Azure VMs to another region with Azure Site Recovery description: Use Azure Site Recovery to move Azure IaaS VMs from one Azure region to another. -+ Last updated 01/28/2019-+
site-recovery Move Vaults Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-vaults-across-regions.md
Title: Move an Azure Site Recovery vault to another region description: Describes how to move a Recovery Services vault (Azure Site Recovery) to another Azure region -+ Last updated 07/31/2019-+
site-recovery Physical Azure Set Up Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-set-up-source.md
Title: Set up the configuration server for disaster recovery of physical servers to Azure using Azure Site Recovery | Microsoft Docs description: This article describes how to set up the on-premises configuration server for disaster recovery of on-premises physical servers to Azure. -+ Last updated 07/03/2019-+ # Set up the configuration server for disaster recovery of physical servers to Azure
site-recovery Physical Azure Set Up Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-set-up-target.md
Title: Set up the target environment for physical servers in Azure Site Recovery description: This article describes how to set up the target Azure environment for disaster recovery of physical servers using Azure Site Recovery.-+ Last updated 11/27/2018-+ # Prepare target (VMware to Azure)
site-recovery Physical Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-manage-configuration-server.md
Title: Manage the configuration server for physical servers in Azure Site Recovery description: This article describes how to manage the Azure Site Recovery configuration server for physical server disaster recovery to Azure. -+ Last updated 02/28/2019-+ # Manage the configuration server for physical server disaster recovery
site-recovery Quickstart Create Vault Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-bicep.md
Title: Quickstart to create an Azure Recovery Services vault using Bicep. description: In this quickstart, you learn how to create an Azure Recovery Services vault using Bicep.--++ Last updated 06/27/2022
site-recovery Region Move Cross Geos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/region-move-cross-geos.md
Title: Move Azure VMs between government and public regions with Azure Site Recovery description: Use Azure Site Recovery to move Azure VMs between Azure government and public regions.-+ Last updated 04/16/2019-+ # Move Azure VMs between Azure Government and Public regions
site-recovery Service Updates How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/service-updates-how-to.md
Title: Updates and component upgrades in Azure Site Recovery description: Provides an overview of Azure Site Recovery service updates, and component upgrades.-+ -+ Last updated 08/11/2021 # Service updates in Site Recovery
site-recovery Site Recovery Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-active-directory.md
Title: Set up Active Directory/DNS disaster recovery with Azure Site Recovery description: This article describes how to implement a disaster recovery solution for Active Directory and DNS with Azure Site Recovery.-+ Last updated 04/01/2020-+ # Set up disaster recovery for Active Directory and DNS
site-recovery Site Recovery Backup Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-backup-interoperability.md
Title: Support for using Azure Site Recovery with Azure Backup description: Provides an overview of how Azure Site Recovery and Azure Backup can be used together.-+ Last updated 10/15/2019-+ # Support for using Site Recovery with Azure Backup
site-recovery Site Recovery Citrix Xenapp And Xendesktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-citrix-xenapp-and-xendesktop.md
Title: Set up Citrix XenDesktop/XenApp disaster recovery with Azure Site Recovery description: This article describes how to set up disaster recovery for Citrix XenDesktop and XenApp deployments using Azure Site Recovery.-+ Last updated 11/27/2018-+ # End of support for disaster recovery of Citrix workloads
site-recovery Site Recovery Deployment Planner History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner-history.md
Title: Azure Site Recovery Deployment Planner Version History description: Describes the fixes and known limitations in the different Site Recovery Deployment Planner versions, along with their release dates. -+ Last updated 6/4/2020-+ # Azure Site Recovery Deployment Planner Version History
site-recovery Site Recovery Deployment Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner.md
Title: Azure Site Recovery Deployment Planner for VMware disaster recovery description: Learn about the Azure Site Recovery Deployment Planner for disaster recovery of VMware VMs to Azure.-+ -+ Last updated 04/06/2022
site-recovery Site Recovery Dynamicsax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-dynamicsax.md
Title: Disaster recovery of Dynamics AX with Azure Site Recovery description: Learn how to set up disaster recovery for Dynamics AX with Azure Site Recovery-+ Last updated 11/27/2018
site-recovery Site Recovery Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-extension-troubleshoot.md
Title: Troubleshoot the Azure VM extension for disaster recovery with Azure Site Recovery description: Troubleshoot issues with the Azure VM extension for disaster recovery with Azure Site Recovery.-+ Last updated 11/27/2018
site-recovery Site Recovery Failover To Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover-to-azure-troubleshoot.md
Title: 'Troubleshoot failover to Azure failures | Microsoft Docs' description: This article describes ways to troubleshoot common errors in failing over to Azure-+ Last updated 01/08/2020-+ # Troubleshoot errors when failing over VMware VM or physical machine to Azure
site-recovery Site Recovery Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-iis.md
Title: Set up disaster recovery for an IIS web app using Azure Site Recovery description: Learn how to replicate IIS web farm virtual machines using Azure Site Recovery.-+ Last updated 11/27/2018-+ # Set up disaster recovery for a multi-tier IIS-based web application
site-recovery Site Recovery Ipconfig Cmdlet Parameter Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-ipconfig-cmdlet-parameter-deprecation.md
Title: Deprecation of IPConfig parameters for the cmdlet New-AzRecoveryServicesAsrVMNicConfig | Microsoft Docs description: Details about deprecation of IPConfig parameters of the cmdlet New-AzRecoveryServicesAsrVMNicConfig and information about the use of new cmdlet New-AzRecoveryServicesAsrVMNicIPConfig -+ Last updated 04/30/2021-+ # Deprecation of IP Config parameters for the cmdlet New-AzRecoveryServicesAsrVMNicConfig
site-recovery Site Recovery Manage Network Interfaces On Premises To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-manage-network-interfaces-on-premises-to-azure.md
Title: Manage network adapters for on-premises disaster recovery with Azure Site Recovery description: Describes how to manage network interfaces for on-premises disaster recovery to Azure with Azure Site Recovery-+ Last updated 4/9/2019-+ # Manage VM network interfaces for on-premises disaster recovery to Azure
site-recovery Site Recovery Manage Registration And Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-manage-registration-and-protection.md
Title: Remove servers and disable protection | Microsoft Docs description: This article describes how to unregister servers from a Site Recovery vault, and to disable protection for virtual machines and physical servers.-+ Last updated 06/18/2019-+
site-recovery Site Recovery Plan Capacity Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-plan-capacity-vmware.md
Title: Plan capacity for VMware disaster recovery with Azure Site Recovery description: This article can help you plan capacity and scaling when you set up disaster recovery of VMware VMs to Azure by using Azure Site Recovery.-+ -+ Last updated 08/19/2021
site-recovery Site Recovery Retain Ip Azure Vm Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-retain-ip-azure-vm-failover.md
Title: Keep IP addresses after Azure VM failover with Azure Site Recovery
description: Describes how to retain IP addresses when failing over Azure VMs for disaster recovery to a secondary region with Azure Site Recovery Last updated 07/25/2021-+ -+ # Retain IP addresses during failover
site-recovery Site Recovery Role Based Linked Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-role-based-linked-access-control.md
Title: Manage Azure role-based access control in Azure Site Recovery
description: This article describes how to apply Azure role-based access control (Azure RBAC) to manage Azure Site Recovery access. Last updated 04/08/2019-+ -+ # Manage Site Recovery access with Azure role-based access control (Azure RBAC)
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
Title: Add Azure Automation runbooks to Site Recovery recovery plans description: Learn how to extend recovery plans with Azure Automation for disaster recovery using Azure Site Recovery.-+ -+ Last updated 07/15/2021
site-recovery Site Recovery Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sap.md
Title: Set up SAP NetWeaver disaster recovery with Azure Site Recovery description: Learn how to set up disaster recovery for SAP NetWeaver with Azure Site Recovery.-+ Last updated 11/27/2018
site-recovery Site Recovery Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sharepoint.md
Title: Disaster recovery for a multi-tier SharePoint app using Azure Site Recovery description: This article describes how to set up disaster recovery for a multi-tier SharePoint application using Azure Site Recovery capabilities.-+ Last updated 6/27/2019-+ # Set up disaster recovery for a multi-tier SharePoint application using Azure Site Recovery
site-recovery Site Recovery Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-sql.md
Title: Set up disaster recovery for SQL Server with Azure Site Recovery description: This article describes how to set up disaster recovery for SQL Server by using SQL Server and Azure Site Recovery. -+ Last updated 08/02/2019-+ # Set up disaster recovery for SQL Server
site-recovery Site Recovery Vmware Deployment Planner Analyze Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-analyze-report.md
Title: Analyze the Deployment Planner report for VMware disaster recovery with Azure Site Recovery description: This article describes how to analyze the report generated by the Site Recovery Deployment Planner for VMware disaster recovery to Azure, using Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Site Recovery Vmware Deployment Planner Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-cost-estimation.md
Title: Review cost estimations in the Azure Site Recovery Deployment Planner description: This article describes how to review the cost estimations in the Azure Site Recovery Deployment Planner for VMware disaster recovery.-+ -+ Last updated 05/27/2021
site-recovery Site Recovery Vmware Deployment Planner Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-vmware-deployment-planner-run.md
Title: Run the Deployment Planner for VMware disaster recovery with Azure Site Recovery description: This article describes how to run Azure Site Recovery Deployment Planner for VMware disaster recovery to Azure.-+ -+ Last updated 05/27/2021
site-recovery Site To Site Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-to-site-deprecation.md
Title: Deprecation of disaster recovery between customer-managed sites (with VMM) using Azure Site Recovery | Microsoft Docs description: Details about the upcoming deprecation of disaster recovery between customer-owned sites using Hyper-V and between sites managed by SCVMM, and about alternate options -+ Last updated 02/25/2020-+ # Deprecation of disaster recovery between customer-managed sites (with VMM) using Azure Site Recovery
site-recovery Upgrade 2012R2 To 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-2012R2-to-2016.md
Title: Upgrade Windows Server/System Center VMM 2012 R2 to Windows Server 2016-Azure Site Recovery description: Learn how to upgrade Windows Server 2012 R2 hosts & SCVMM 2012 R2 that are configured with Azure Site Recovery, to Windows Server 2016 & SCVMM 2016. -+ Last updated 12/03/2018-+ # Upgrade Windows Server/System Center 2012 R2 VMM to Windows Server/VMM 2016
site-recovery Vmware Azure Deploy Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-deploy-configuration-server.md
Title: Deploy the configuration server in Azure Site Recovery description: This article describes how to deploy a configuration server for VMware disaster recovery with Azure Site Recovery -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
Title: Set up VMware disaster recovery using PowerShell in Azure Site Recovery description: Learn how to set up replication and failover to Azure for disaster recovery of VMware VMs using PowerShell in Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-enable-replication.md
Title: Enable VMware VMs for disaster recovery using Azure Site Recovery description: This article describes how to enable VMware VM replication for disaster recovery using the Azure Site Recovery service-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Exclude Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-exclude-disk.md
Title: Exclude VMware VM disks from disaster recovery to Azure with Azure Site Recovery description: How to exclude VMware VM disks from replication to Azure with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-failback.md
Title: Fail back VMware VMs/physical servers from Azure with Azure Site Recovery description: Learn how to fail back to the on-premises site after failover to Azure, during disaster recovery of VMware VMs and physical servers to Azure.-+ -+ Last updated 05/27/2021 # Fail back VMware VMs to on-premises site
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Title: Install a master target server for Linux VM failback with Azure Site Recovery description: Learn how to set up a Linux master target server for failback to an on-premises site during disaster recovery of VMware VMs to Azure using Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
Title: Prepare source machines to install the Mobility Service through push installation for disaster recovery of VMware VMs and physical servers to Azure | Microsoft Docs description: Learn how to prepare your server to install the Mobility agent through push installation for disaster recovery of VMware VMs and physical servers to Azure using the Azure Site Recovery service. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-configuration-server.md
Title: Manage the configuration server for disaster recovery with Azure Site Recovery description: Learn about the common tasks to manage an on-premises configuration server for disaster recovery of VMware VMs and physical servers to Azure with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Manage Process Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-process-server.md
Title: Manage a process server for VMware VMs/physical server disaster recovery in Azure Site Recovery description: This article describes how to manage a process server for disaster recovery of VMware VMs/physical servers using Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Manage Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-vcenter.md
Title: Manage VMware vCenter servers in Azure Site Recovery description: This article describes how to add and manage VMware vCenter for disaster recovery of VMware VMs to Azure with Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Mobility Install Configuration Mgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md
Title: Automate Mobility Service installation for disaster recovery in Azure Site Recovery description: How to automatically install the Mobility Service for VMware/physical server disaster recovery with Azure Site Recovery. -+ -+ Last updated 05/02/2022
site-recovery Vmware Azure Multi Tenant Csp Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-multi-tenant-csp-disaster-recovery.md
Title: Set up VMware disaster recovery to Azure in a multi-tenancy environment using Site Recovery and the Cloud Solution Provider (CSP) program | Microsoft Docs description: Describes how to set up VMware disaster recovery in a multi-tenant environment with Azure Site Recovery.-+ Last updated 11/27/2018-+
site-recovery Vmware Azure Multi Tenant Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-multi-tenant-overview.md
Title: VMware VM multi-tenant disaster recovery with Azure Site Recovery description: Provides an overview of Azure Site Recovery support for VMware disaster recovery to Azure in a multi-tenant environment through the Cloud Solution Provider (CSP) program.-+ Last updated 11/27/2018-+ # Overview of multi-tenant support for VMware disaster recovery to Azure with CSP
site-recovery Vmware Azure Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-reprotect.md
Title: Reprotect VMware VMs to an on-premises site with Azure Site Recovery description: Learn how to reprotect VMware VMs after failover to Azure with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Process Server Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-process-server-azure.md
Title: Set up a process server for VMware/physical failback in Azure Site Recovery description: This article describes how to set up a process server in Azure to fail back Azure VMs to VMware. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Process Server Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-process-server-scale.md
Title: Set up a scale-out process server during disaster recovery of VMware VMs and physical servers with Azure Site Recovery | Microsoft Docs description: This article describes how to set up a scale-out process server during disaster recovery of VMware VMs and physical servers.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication.md
Title: Set up replication policies for VMware disaster recovery with Azure Site Recovery | Microsoft Docs description: Describes how to configure replication settings for VMware disaster recovery to Azure with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-source.md
Title: Set up source settings for VMware disaster recovery to Azure with Azure Site Recovery description: This article describes how to set up your on-premises environment to replicate VMware VMs to Azure with Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Set Up Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-target.md
Title: Prepare the VMware VM replication target in Azure Site Recovery description: This article describes how to prepare your target Azure environment for VMware VM replication to Azure. -+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Troubleshoot Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-configuration-server.md
Title: Troubleshoot issues with the configuration server during disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery | Microsoft Docs description: This article provides troubleshooting information for deploying the configuration server for disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Troubleshoot Failback Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-failback-reprotect.md
Title: Troubleshoot failback in VMware VM disaster recovery with Azure Site Recovery description: This article describes ways to troubleshoot failback and reprotection issues during VMware VM disaster recovery to Azure with Azure Site Recovery.-+ Last updated 11/27/2018-+
site-recovery Vmware Azure Troubleshoot Push Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-push-install.md
Title: Troubleshoot Mobility Service push installation with Azure Site Recovery description: Troubleshoot Mobility Services installation errors when enabling replication for disaster recovery with Azure Site Recovery.-+ -+ Last updated 05/27/2021
site-recovery Vmware Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-replication.md
Title: Troubleshoot replication issues for disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery | Microsoft Docs description: This article provides troubleshooting information for common replication issues during disaster recovery of VMware VMs and physical servers to Azure by using Azure Site Recovery.-+ Last updated 05/02/2022-+ # Troubleshoot replication issues for VMware VMs and physical servers
site-recovery Vmware Azure Troubleshoot Vcenter Discovery Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-vcenter-discovery-failures.md
Title: Troubleshoot VMware vCenter discovery failures in Azure Site Recovery description: This article describes how to troubleshoot VMware vCenter discovery failures in Azure Site Recovery. -+ -+ Last updated 05/27/2021
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Title: Manage the Mobility agent for VMware/physical servers with Azure Site Recovery description: Manage Mobility Service agent for disaster recovery of VMware VMs and physical servers to Azure using the Azure Site Recovery service.-+ -+ Last updated 05/27/2021
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Title: About the Mobility service for disaster recovery of VMware VMs and physical servers with Azure Site Recovery | Microsoft Docs description: Learn about the Mobility service agent for disaster recovery of VMware VMs and physical servers to Azure using the Azure Site Recovery service.-+ -+ Last updated 04/28/2022
spatial-anchors Get Started Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-ios.md
cd ./iOS/Objective-C/
Run `pod install --repo-update` to install the CocoaPods for the project.
+> [!NOTE]
+> Use the following command if you have macOS Monterey (12.2.1).
+
+Run `pod update` to install the CocoaPods for the project.
+ Now open the `.xcworkspace` in Xcode. > [!NOTE]
spring-cloud How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-build-service.md
The following list shows the Tanzu Buildpacks available in Azure Spring Apps Ent
For details about Tanzu Buildpacks, see [Using the Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html).
-## Create a Customized Builder to build apps
+## Build apps using a custom builder
Besides the `default` builder, you can also create custom builders with the provided buildpacks.
All the builders configured in a Spring Cloud Service instance are listed in the
:::image type="content" source="media/enterprise/how-to-enterprise-build-service/builder-list.png" alt-text="Screenshot of Azure portal showing Build Service page with list of configured builders." lightbox="media/enterprise/how-to-enterprise-build-service/builder-list.png":::
-Select **Add** to create a new builder. The image below shows the resources you should use to create the customized builder.
+Select **Add** to create a new builder. The image below shows the resources you should use to create the custom builder.
:::image type="content" source="media/enterprise/how-to-enterprise-build-service/builder-create.png" alt-text="Screenshot of 'Add Builder' pane." lightbox="media/enterprise/how-to-enterprise-build-service/builder-create.png":::
az spring-cloud app deploy \
If you're using the `tanzu-buildpacks/java-azure` buildpack, we recommend that you set the `BP_JVM_VERSION` environment variable in the `build-env` argument.
+When you use a custom builder in an app deployment, the builder can't be edited or deleted. If you want to change the configuration, create a new builder and use it to deploy the app. After you deploy the app with the new builder, the deployment is linked to the new builder. You can then migrate the deployments under the previous builder to the new builder, and then edit or delete the previous builder.
+ ## Real-time build logs A build task will be triggered when an app is deployed from an Azure CLI command. Build logs are streamed in real time as part of the CLI command output. For information on using build logs to diagnose problems, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
Currently, buildpack binding only supports binding the buildpacks listed below.
You can manage buildpack bindings with the Azure portal or the Azure CLI.
+> [!NOTE]
+> You can only manage buildpack bindings when the parent builder isn't used by any app deployments. To create, update, or delete buildpack bindings of an existing builder, create a new builder and configure new buildpack bindings there.
+ ### [Portal](#tab/azure-portal) ### View buildpack bindings using the Azure portal
az spring build-service builder buildpack-binding delete \
--builder-name <your-builder-name> ```
-> [!NOTE]
-> The bindings aren't allowed to create, edit, or delete when the parent builder is used in a deployment.
- ## Next steps
static-web-apps Assign Roles Microsoft Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/assign-roles-microsoft-graph.md
The sample application contains a serverless function (*api/GetRoles/index.js*)
1. When you are logged in, the sample app displays the list of roles that you are assigned based on your identity's Active Directory group membership. Depending on these roles, you are permitted or denied access to some of the routes in the app.
+> [!NOTE]
+> Some queries against Microsoft Graph return multiple pages of data. When more than one query request is required, Microsoft Graph returns an `@odata.nextLink` property in the response, which contains a URL to the next page of results. For more information, see [Paging Microsoft Graph data in your app](/graph/paging).
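+
+A minimal sketch of following that paging (assuming Node.js 18+ with the built-in `fetch` and an `accessToken` you've already acquired; the function name and example URL are illustrative):
+
+```javascript
+// Follow @odata.nextLink until Microsoft Graph returns the last page.
+async function getAllPages(accessToken, url) {
+  const results = [];
+  let nextUrl = url; // for example, "https://graph.microsoft.com/v1.0/me/memberOf"
+  while (nextUrl) {
+    const response = await fetch(nextUrl, {
+      headers: { Authorization: `Bearer ${accessToken}` },
+    });
+    const page = await response.json();
+    results.push(...(page.value ?? []));
+    nextUrl = page["@odata.nextLink"]; // undefined on the last page
+  }
+  return results;
+}
+```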
+ ## Clean up resources Clean up the resources you deployed by deleting the resource group.
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/build-configuration.md
For Node.js applications, you can take fine-grained control over what commands r
> [!NOTE] > Currently, you can only define `app_build_command` and `api_build_command` for Node.js builds.
+> To specify the Node.js version, use the [`engines`](https://docs.npmjs.com/cli/v8/configuring-npm/package-json#engines) field in the `package.json` file.
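+> A minimal sketch of that field (the package name and version range below are illustrative):
+>
+> ```json
+> {
+>   "name": "example-app",
+>   "engines": {
+>     "node": ">=16.0.0"
+>   }
+> }
+> ```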
# [GitHub Actions](#tab/github-actions)
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
Title: Set a blob's access tier description: Learn how to specify a blob's access tier when you upload it, or how to change the access tier for an existing blob.-+ -+ Last updated 03/02/2022
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Title: Hot, Cool, and Archive access tiers for blob data description: Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Learn about the Hot, Cool, and Archive access tiers for Blob Storage.-+ -+ Last updated 06/16/2022
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
Title: Archive a blob
description: Learn how to create a blob in the Archive tier, or move an existing blob to the Archive tier. -+ Last updated 03/01/2022-+
storage Archive Rehydrate Handle Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-handle-event.md
Title: Run an Azure Function in response to a blob rehydration event
description: Learn how to develop an Azure Function with .NET, then configure Azure Event Grid to run the function in response to an event fired when a blob is rehydrated from the Archive tier. -+ Last updated 02/28/2022-+ ms.devlang: csharp
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
Title: Blob rehydration from the Archive tier description: While a blob is in the Archive access tier, it's considered to be offline and can't be read or modified. In order to read or modify data in an archived blob, you must first rehydrate the blob to an online tier, either the Hot or Cool tier. -+ -+ Last updated 05/13/2022
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Title: Rehydrate an archived blob to an online tier
description: Before you can read a blob that is in the Archive tier, you must rehydrate it to either the Hot or Cool tier. You can rehydrate a blob either by copying it from the Archive tier to an online tier, or by changing its tier from Archive to Hot or Cool. -+ Last updated 04/15/2022-+
storage Data Lake Storage Access Control Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control-model.md
Title: Access control model for Azure Data Lake Storage Gen2 description: Learn how to configure container, directory, and file-level access in accounts that have a hierarchical namespace.-+ Last updated 02/17/2021-+ # Access control model in Azure Data Lake Storage Gen2
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
Title: Access control lists in Azure Data Lake Storage Gen2 description: Understand how POSIX-like ACLs access control lists work in Azure Data Lake Storage Gen2.-+ Last updated 02/17/2021-+ ms.devlang: python
storage Data Lake Storage Acl Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-azure-portal.md
Title: Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2 description: Use the Azure portal to manage access control lists (ACLs) in storage accounts that have hierarchical namespace (HNS) enabled.-+ Last updated 04/15/2021-+
storage Data Lake Storage Explorer Acl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer-acl.md
Title: 'Storage Explorer: Set ACLs in Azure Data Lake Storage Gen2' description: Use the Azure Storage Explorer to manage access control lists (ACLs) in storage accounts that have hierarchical namespace (HNS) enabled.-+ Last updated 10/28/2021-+
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
As you create the account, make sure to configure settings with the following va
> The migration tool in the Azure portal doesn't move account settings. Therefore, after you've created the account, you'll have to manually configure settings such as encryption, network firewalls, data protection. > [!IMPORTANT]
-> Ensure that you use a newly created Gen2 account that's empty. It's important that you don't migrate to a previously used Gen2 account.
+> Ensure that you use a fresh, newly created Gen2 account that has no history of any usage. **Don't** migrate to a previously used Gen2 account or use a Gen2 account in which containers have been deleted to make the Gen2 account empty.
## Verify RBAC role assignments
storage Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-protection-overview.md
Title: Data protection overview
description: The data protection options available for your Blob Storage and Azure Data Lake Storage Gen2 data enable you to protect your data from being deleted or overwritten. If you need to recover data that has been deleted or overwritten, this guide can help you to choose the recovery option that's best for your scenario. -+ Last updated 10/26/2021 -+
storage Immutable Legal Hold Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-legal-hold-overview.md
Title: Legal holds for immutable blob data
description: A legal hold stores blob data in a Write-Once, Read-Many (WORM) format until it is explicitly cleared. Use a legal hold when the period of time that the data must be kept in a WORM state is unknown. -+ Last updated 12/01/2021-+
storage Immutable Policy Configure Container Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-container-scope.md
Title: Configure immutability policies for containers
description: Learn how to configure an immutability policy that is scoped to a container. Immutability policies provide WORM (Write Once, Read Many) support for Blob Storage by storing data in a non-erasable, non-modifiable state. -+ Last updated 12/01/2021-+ ms.devlang: azurecli
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md
Title: Configure immutability policies for blob versions
description: Learn how to configure an immutability policy that is scoped to a blob version. Immutability policies provide WORM (Write Once, Read Many) support for Blob Storage by storing data in a non-erasable, non-modifiable state. -+ Last updated 05/17/2022-+
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
Title: Overview of immutable storage for blob data
description: Azure Storage offers WORM (Write Once, Read Many) support for Blob Storage that enables users to store data in a non-erasable, non-modifiable state. Time-based retention policies store blob data in a WORM state for a specified interval, while legal holds remain in effect until explicitly cleared. -+ Last updated 12/01/2021-+
storage Immutable Time Based Retention Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-time-based-retention-policy-overview.md
Title: Time-based retention policies for immutable blob data
description: Time-based retention policies store blob data in a Write-Once, Read-Many (WORM) state for a specified interval. You can configure a time-based retention policy that is scoped to a blob version or to a container. -+ Last updated 05/02/2022-+
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Title: Optimize costs by automatically managing the data lifecycle description: Use Azure Storage lifecycle management policies to create automated rules for moving data between hot, cool, and archive tiers.-+ -+ Last updated 05/09/2022
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
Title: Configure a lifecycle management policy description: Configure a lifecycle management policy to automatically move data between hot, cool, and archive tiers during the data lifecycle.-+ -+ Last updated 08/18/2021
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
Title: Configure object replication
description: Learn how to configure object replication to asynchronously copy block blobs from a container in one storage account to another. -+ Last updated 05/05/2022-+
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Title: Object replication overview
description: Object replication asynchronously copies block blobs between a source storage account and a destination account. Use object replication to minimize latency on read requests, to increase efficiency for compute workloads, to optimize data distribution, and to minimize costs. -+ Last updated 05/24/2022-+
storage Object Replication Prevent Cross Tenant Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-prevent-cross-tenant-policies.md
Title: Prevent object replication across Azure Active Directory tenants (preview
description: Prevent cross-tenant object replication -+ Last updated 09/02/2021-+
storage Point In Time Restore Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-manage.md
Title: Perform a point-in-time restore on block blob data
description: Learn how to use point-in-time restore to restore a set of block blobs to their previous state at a given point in time. -+ Last updated 01/29/2021-+
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Title: Point-in-time restore for block blobs
description: Point-in-time restore for block blobs provides protection against accidental deletion or corruption by enabling you to restore a storage account to its previous state at a given point in time. -+ Last updated 06/22/2022-+
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
Title: Create and manage a blob snapshot in .NET
description: Learn how to use the .NET client library to create a read-only snapshot of a blob to back up blob data at a given moment in time. -+ Last updated 08/27/2020-+ ms.devlang: csharp
storage Snapshots Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-overview.md
Title: Blob snapshots
description: Understand how blob snapshots work and how they are billed. -+ Last updated 12/29/2021-+
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-enable.md
Title: Enable soft delete for blobs
description: Enable soft delete for blobs to protect blob data from accidental deletes or overwrites. -+ Last updated 05/05/2022-+
storage Soft Delete Blob Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-manage.md
Title: Manage and restore soft-deleted blobs
description: Manage and restore soft-deleted blobs and snapshots with the Azure portal or with the Azure Storage client libraries. -+ Last updated 06/29/2022-+ ms.devlang: csharp
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
Title: Soft delete for blobs
description: Soft delete for blobs protects your data so that you can more easily recover your data when it's erroneously modified or deleted by an application or by another storage account user. -+ Last updated 06/22/2022-+
storage Soft Delete Container Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-container-enable.md
Title: Enable and manage soft delete for containers
description: Enable container soft delete to more easily recover your data when it is erroneously modified or deleted. -+ Last updated 07/06/2021-+
storage Soft Delete Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-container-overview.md
Title: Soft delete for containers
description: Soft delete for containers protects your data so that you can more easily recover your data when it's erroneously modified or deleted by an application or by another storage account user. -+ Last updated 06/22/2022-+
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
+
+ Title: Create account SAS tokens - JavaScript
+
+description: Create and use account SAS tokens in a JavaScript application that works with Azure Blob Storage. This article helps you set up a project and authorize access to an Azure Blob Storage endpoint.
+++++ Last updated : 07/05/2022+++++
+# Create and use account SAS tokens with Azure Blob Storage and JavaScript
+
+This article shows you how to create and use account SAS tokens with the Azure Blob Storage client library v12 for JavaScript. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
+
+The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available on GitHub as runnable Node.js files.
+
+[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+
+## Account SAS tokens
+
+An **account SAS token** is one [type of SAS token](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#types-of-shared-access-signatures) for access delegation provided by Azure Storage. An account SAS token provides access to Azure Storage. The token is only as restrictive as you define it when creating it. Because anyone with the token can use it to access your Storage account, you should define the token with the most restrictive permissions that still allow the token to complete the required tasks.
+
+[Best practices for token creation](../common/storage-sas-overview.md#best-practices-when-using-sas) include limiting the token's scope:
+
+* Services: blob, table, queue, or file
+* Resource types: service, container, or object
+* Permissions such as create, read, write, update, and delete
+
+## Add required dependencies to your application
+
+Include the required dependencies to create an account SAS token.
++
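+A minimal sketch of those dependencies (the `dotenv` line is an assumption, for loading credentials from a local `.env` file):
+
+```javascript
+const {
+  StorageSharedKeyCredential,
+  BlobServiceClient,
+  AccountSASServices,
+  AccountSASResourceTypes,
+  AccountSASPermissions,
+  generateAccountSASQueryParameters,
+  SASProtocol,
+} = require("@azure/storage-blob");
+
+// Assumption: credentials are read from environment variables or a .env file.
+require("dotenv").config();
+```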
+## Get environment variables to create shared key credential
+
+Use the Blob Storage account name and key to create a [StorageSharedKeyCredential](/javascript/api/@azure/storage-blob/storagesharedkeycredential). This key is required to create the SAS token and to use the SAS token.
+
+Create a [StorageSharedKeyCredential](/javascript/api/@azure/storage-blob/storagesharedkeycredential) by using the storage account name and account key. Then use the StorageSharedKeyCredential to initialize a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient).
++
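+A minimal sketch of this step (the environment variable names are assumptions):
+
+```javascript
+const { StorageSharedKeyCredential, BlobServiceClient } = require("@azure/storage-blob");
+
+// Assumed variable names; adjust to match your configuration.
+const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
+const accountKey = process.env.AZURE_STORAGE_ACCOUNT_KEY;
+
+const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);
+
+// The same credential also initializes the service client.
+const blobServiceClient = new BlobServiceClient(
+  `https://${accountName}.blob.core.windows.net`,
+  sharedKeyCredential
+);
+```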
+## Async operation boilerplate
+
+The remaining sample code snippets assume the following async boilerplate code for Node.js.
++
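+A minimal equivalent of that boilerplate:
+
+```javascript
+async function main() {
+  // Calls to the functions shown in this article go here.
+}
+
+main()
+  .then(() => console.log("done"))
+  .catch((err) => console.error(err.message));
+```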
+## Create SAS token
+
+Because this token can be used with blobs, queues, tables, and files, some of the settings are broader than blob-only options.
+
+1. Create the options object.
+
+ The scope of the abilities of a SAS token is defined by the [AccountSASSignatureValues](/javascript/api/@azure/storage-blob/accountsassignaturevalues).
+
+ Use the following helper functions provided by the SDK to create the correct value types:
+
+ * [AccountSASServices.parse("btqf").toString()](/javascript/api/@azure/storage-blob/accountsasservices):
+ * b: blob
+ * t: table
+ * q: queue
+ * f: file
+ * [resourceTypes: AccountSASResourceTypes.parse("sco").toString()](/javascript/api/@azure/storage-blob/accountsasresourcetypes)
+ * s: service
+ * c: container - such as blob container, table or queue
+ * o: object - blob, row, message
+ * [permissions: AccountSASPermissions.parse("rwdlacupi")](/javascript/api/@azure/storage-blob/accountsaspermissions)
+ * r: read
+ * w: write
+ * d: delete
+ * l: list
+ * f: filter
+ * a: add
+ * c: create
+ * u: update
+ * t: tag access
+ * p: process - such as process messages in a queue
+ * i: set immutability policy
+
+1. Pass the object to the [generateAccountSASQueryParameters](/javascript/api/@azure/storage-blob/#@azure-storage-blob-generateaccountsasqueryparameters) function, along with the [StorageSharedKeyCredential](/javascript/api/@azure/storage-blob/storagesharedkeycredential), to create the SAS token, as shown in the sketch after these steps.
+
+ Before returning the SAS token, prepend the query string delimiter, `?`.
+
+ :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/create-account-sas.js" id="Snippet_GetSas":::
+
+1. Secure the SAS token until it is used.
+
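+A condensed sketch of the steps above (assuming the `sharedKeyCredential` created earlier; the function name and the 10-minute expiry are illustrative):
+
+```javascript
+const {
+  AccountSASServices,
+  AccountSASResourceTypes,
+  AccountSASPermissions,
+  generateAccountSASQueryParameters,
+  SASProtocol,
+} = require("@azure/storage-blob");
+
+function createAccountSas(sharedKeyCredential) {
+  const sasOptions = {
+    services: AccountSASServices.parse("btqf").toString(),          // blob, table, queue, file
+    resourceTypes: AccountSASResourceTypes.parse("sco").toString(), // service, container, object
+    permissions: AccountSASPermissions.parse("rwdlacupi"),
+    protocol: SASProtocol.Https,
+    startsOn: new Date(),
+    expiresOn: new Date(Date.now() + 10 * 60 * 1000), // 10 minutes (illustrative)
+  };
+
+  const sasToken = generateAccountSASQueryParameters(
+    sasOptions,
+    sharedKeyCredential
+  ).toString();
+
+  // Prepend the query string delimiter if the SDK didn't include it.
+  return sasToken.startsWith("?") ? sasToken : `?${sasToken}`;
+}
+```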
+## Use Blob service with account SAS token
+
+To use the account SAS token, combine it with the account name to create the service URI. Then pass the URI to the [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) constructor. Once you have the BlobServiceClient, you can use that client to access your Blob service.
+
++
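+A minimal sketch (assuming the `sasToken` returned by the sketch above, which already starts with `?`, and an assumed environment variable for the account name):
+
+```javascript
+const { BlobServiceClient } = require("@azure/storage-blob");
+
+const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME; // assumed variable name
+const sasUrl = `https://${accountName}.blob.core.windows.net${sasToken}`;
+
+// The client can now perform any operation the token permits.
+const blobServiceClient = new BlobServiceClient(sasUrl);
+```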
+## See also
+
+- [Types of SAS tokens](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json)
+- [How a shared access signature works](../common/storage-sas-overview.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#how-a-shared-access-signature-works)
+- [API reference](/javascript/api/@azure/storage-blob/)
+- [Library source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
Title: Premium block blob storage accounts description: Achieve lower and consistent latencies for Azure Storage workloads that require fast and consistent response times.-+ -+ Last updated 10/14/2021
storage Storage Blob Change Feed How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed-how-to.md
Title: Process change feed in Azure Blob Storage description: Learn how to process change feed logs in a .NET client application-+ -+ Last updated 03/03/2022
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
Title: Change feed in Blob Storage description: Learn about change feed logs in Azure Blob Storage and how to use them.-+ -+ Last updated 06/15/2022
storage Storage Samples Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-cli.md
Title: Azure CLI samples for Blob storage | Microsoft Docs description: See links to Azure CLI samples for working with Azure Blob Storage, such as creating a storage account, deleting containers with a specific prefix, and more.-+ -+ Last updated 06/13/2017
storage Storage Samples Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-powershell.md
Title: Azure PowerShell samples for Azure Blob storage | Microsoft Docs description: See links to Azure PowerShell script samples for working with Azure Blob storage, such as creating a storage account, migrating blobs across accounts, and more.-+ -+ Last updated 11/07/2017
storage Versioning Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-enable.md
Title: Enable and manage blob versioning
description: Learn how to enable blob versioning in the Azure portal or by using an Azure Resource Manager template. -+ Last updated 06/07/2021-+
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
Title: Blob versioning
description: Blob storage versioning automatically maintains previous versions of an object and identifies them with timestamps. You can restore a previous version of a blob to recover your data if it is erroneously modified or deleted. -+ Last updated 06/22/2022-+
storage Configure Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/configure-network-routing-preference.md
Title: Configure network routing preference
description: Configure network routing preference for your Azure storage account to specify how network traffic is routed to your account from clients over the Internet. -+ Last updated 03/17/2021-+
storage Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/network-routing-preference.md
Title: Network routing preference
description: Network routing preference enables you to specify how network traffic is routed to your account from clients over the internet. -+ Last updated 02/11/2021-+
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage
description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 06/16/2022 --++
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
Title: Move an Azure Storage account to another region | Microsoft Docs description: Shows you how to move an Azure Storage account to another region. -+ Last updated 06/15/2022-+
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks | Microsoft Docs description: Configure layered network security for your storage account using Azure Storage firewalls and Azure Virtual Network. -+ Last updated 03/31/2022-+
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Title: Use private endpoints
description: Overview of private endpoints for secure access to storage accounts from virtual networks. -+ Last updated 03/16/2021-+
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
To connect to the table service only, the connection string is:
The full HTTPS connection string is:
-`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;TableEndpoint=https://127.0.0.1:10001/devstoreaccount1;`
+`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;`
To use the blob service only, the HTTPS connection string is:
To use the queue service only, the HTTPS connection string is:
To use the table service only, the HTTPS connection string is:
-`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=https://127.0.0.1:10001/devstoreaccount1;`
+`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;`
If you used `dotnet dev-certs` to generate your self-signed certificate, use the following connection string.
-`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;TableEndpoint=https://localhost:10001/devstoreaccount1;`
+`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;TableEndpoint=https://localhost:10002/devstoreaccount1;`
Update the connection string when using [custom storage accounts and keys](#custom-storage-accounts-and-keys).
You can also instantiate a `TableClient` or `TableServiceClient`.

```csharp
// With table URL and DefaultAzureCredential
var client = new TableClient(
-    new Uri("https://127.0.0.1:10001/devstoreaccount1/table-name"), new DefaultAzureCredential()
+    new Uri("https://127.0.0.1:10002/devstoreaccount1/table-name"), new DefaultAzureCredential()
);

// With connection string
var client = new TableClient(
-    "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "table-name"
+    "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;", "table-name"
);

// With account name and key
var client = new TableClient(
-    new Uri("https://127.0.0.1:10001/devstoreaccount1/table-name"),
+    new Uri("https://127.0.0.1:10002/devstoreaccount1/table-name"),
    new StorageSharedKeyCredential("devstoreaccount1", "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==")
);
```
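If you're working in JavaScript instead of .NET, a comparable client can be constructed with the `@azure/data-tables` package (a sketch under that assumption; note the table endpoint uses Azurite's default table port, 10002):

```javascript
const { TableClient } = require("@azure/data-tables");

// Azurite's default table service endpoint listens on port 10002.
const connectionString =
  "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;" +
  "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;" +
  "TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;";

const client = TableClient.fromConnectionString(connectionString, "table-name");
```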
storage Azure Files Case Study https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/azure-files-case-study.md
Last updated 6/14/2022 + # Customer case studies
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
uname -r
--resource-group $resourceGroupName \ --name $storageAccountName \ --query "primaryEndpoints.file" --output tsv | tr -d '"')
- smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))
+ smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})
fileHost=$(echo $smbPath | tr -d "/") nc -zvw3 $fileHost 445
httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \ --name $storageAccountName \ --query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName
+smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
storageAccountKey=$(az storage account keys list \ --resource-group $resourceGroupName \
httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \ --name $storageAccountName \ --query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName
+smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
storageAccountKey=$(az storage account keys list \ --resource-group $resourceGroupName \
httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \ --name $storageAccountName \ --query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName
+smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
storageAccountKey=$(az storage account keys list \ --resource-group $resourceGroupName \
httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \ --name $storageAccountName \ --query "primaryEndpoints.file" --output tsv | tr -d '"')
-smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName
+smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})$fileShareName
if [ -z "$(grep $smbPath\ $mntPath /etc/fstab)" ]; then echo "$smbPath $mntPath cifs nofail,credentials=$smbCredentialFile,serverino,nosharesock,actimeo=30" | sudo tee -a /etc/fstab >
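The replacement uses the shell's built-in `${#var}` length expansion instead of spawning an extra `expr` process. A minimal sketch of what the substitution does (the endpoint value is an assumed example):

```bash
httpEndpoint="https://mystorageaccount.file.core.windows.net/"

# ${#httpEndpoint} expands to the length of the string, so cut keeps
# characters 7 through the end, stripping the leading "https:".
smbPath=$(echo $httpEndpoint | cut -c7-${#httpEndpoint})

echo $smbPath   # //mystorageaccount.file.core.windows.net/
```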
storage Storage Powershell How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-powershell-how-to-use-queues.md
Title: How to use Azure Queue Storage from PowerShell - Azure Storage description: Perform operations on Azure Queue Storage via PowerShell. With Azure Queue Storage, you can store large numbers of messages that are accessible by HTTP/HTTPS.--++ Last updated 05/15/2019
virtual-desktop Getting Started Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/getting-started-feature.md
Title: Deploy Azure Virtual Desktop getting started feature - Azure
+ Title: Deploy Azure Virtual Desktop with the getting started feature
description: A quickstart guide for how to quickly set up Azure Virtual Desktop with the Azure portal's getting started feature.-+ Previously updated : 07/14/2021- Last updated : 06/14/2022+ # Deploy Azure Virtual Desktop with the getting started feature
-The Azure portal's new getting started feature is a quick, easy way to install and configure Azure Virtual Desktop on your deployment.
+You can quickly deploy Azure Virtual Desktop with the *getting started* feature in the Azure portal. This can be used in smaller scenarios with a few users and apps, or you can use it to evaluate Azure Virtual Desktop in larger enterprise scenarios. It works with existing Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS) deployments, or it can deploy Azure AD DS for you. Once you've finished, a user will be able to sign in to a full virtual desktop session, consisting of one host pool (with one or more session hosts), one app group, and one user. To learn about the terminology used in Azure Virtual Desktop, see [Azure Virtual Desktop terminology](environment-setup.md).
-## Requirements
+> [!TIP]
+> Enterprises should plan an Azure Virtual Desktop deployment using information from [Enterprise-scale support for Microsoft Azure Virtual Desktop](/azure/cloud-adoption-framework/scenarios/wvd/enterprise-scale-landing-zone). You can also find a more granular deployment process in a [series of tutorials](create-host-pools-azure-marketplace.md), which also covers programmatic methods and requires fewer permissions.
-You'll need to meet the following requirements to be able to use getting started:
+You can see the list of [resources that will be deployed](#resources-that-will-be-deployed) further down in this article.
-- An Azure Active Directory (AD) tenant.-- An account with global admin permissions on Azure AD.
+## Prerequisites
- >[!NOTE]
- >The getting started feature doesn't currently support MSA, B2B, or guest accounts at this time.
+Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required. However, there are some differences you'll need to meet when using the getting started feature. Select a tab below to show the instructions most relevant to your scenario.
-- An active Azure subscription.
+> [!TIP]
+> If you don't already have other Azure resources, we recommend you select the **New Azure AD DS** tab. This scenario will deploy everything you need to be ready to connect to a full virtual desktop session. If you already have AD DS or Azure AD DS, select the relevant tab for your scenario instead.
- >[!NOTE]
- >The getting started feature doesn't currently support accounts with multi-factor authentication.
+# [New Azure AD DS](#tab/new-aadds)
-- An account with **Owner permissions** on the subscription.
+At a high level, you'll need:
-If you're using the getting started feature in an environment with Active Directory Domain Services (AD DS), you'll also need to meet these requirements:
+- An Azure account with an active subscription.
+- An account with the [global administrator Azure AD role](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md) assigned on the Azure tenant and the [owner role](../role-based-access-control/role-assignments-portal.md) assigned on the subscription you're going to use.
+- No existing Azure AD DS domain deployed in your Azure tenant.
+- User names you choose, including the one for the AD domain join UPN, must not include any keywords [that the username guideline list doesn't allow](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-), and they must be unique names that don't already exist in Azure AD. The getting started feature doesn't support using existing Azure AD user names when it also deploys Azure AD DS.
-- AD DS domain admin credentials.-- You must configure Azure AD connect on your subscription and make sure the "USERS" container is syncing with Azure AD.-- The domain controller in your virtual machine (VM) must not have DSC extensions of type **Microsoft.Powershell.DSC**.
+# [Existing AD DS](#tab/existing-adds)
-If you're using the getting started feature in an environment without an identity provider, these are the extra requirements you should follow:
+At a high level, you'll need:
-- Your AD domain join UPN must not include any keywords [that the username guideline list doesn't allow](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-), and you must use a unique user name that's not already in your Azure AD subscription.-- You must create a new host pool to add session hosts you create with the getting started feature. If you try to make a session host in an existing host pool, it won't work.
+- An Azure account with an active subscription.
+- An account with the [global administrator Azure AD role](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md) assigned on the Azure tenant and the [owner role](../role-based-access-control/role-assignments-portal.md) assigned on the subscription you're going to use.
+- An AD DS domain controller deployed in Azure in the same subscription as the one you choose to use with the getting started feature. Using multiple subscriptions isn't supported. Make sure you know the fully qualified domain name (FQDN).
+- Domain admin credentials for your existing AD DS domain.
+- You must configure [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) on your subscription and make sure the **Users** container is syncing with Azure AD. A security group called **AVDValidationUsers** will be created in the *Users* container by default during deployment. You can also pre-create the **AVDValidationUsers** security group in a different organizational unit in your existing AD DS domain. You must make sure this group is then synchronized to Azure AD.
+- A virtual network in the same Azure region you want to deploy Azure Virtual Desktop to. We recommend that you [create a new virtual network](../virtual-network/quick-create-portal.md) for Azure Virtual Desktop and use [virtual network peering](../virtual-network/virtual-network-peering-overview.md) to peer it with the virtual network for AD DS or Azure AD DS. You also need to make sure you can resolve your AD DS or Azure AD DS domain name from this new virtual network.
+- Internet access is required from your domain controller VM to download PowerShell DSC configuration from `https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts/`.
-## For subscriptions with Azure AD DS or AD DS
+> [!NOTE]
+> The PowerShell Desired State Configuration (DSC) extension will be added to your domain controller VM. A configuration called **AddADDSUser** will be added that contains PowerShell scripts to create the security group and test user, and to populate the security group with any users you choose to add during deployment.
-Here's how to use the getting started feature in a subscription that already has Azure AD DS or AD DS:
+# [Existing Azure AD DS](#tab/existing-aadds)
-1. Open [the Azure portal](https://portal.azure.com).
+At a high level, you'll need:
-2. Sign in to Azure and open **Azure Virtual Desktop management**, then select the **Getting started** tab. This will open the landing page for the getting started feature.
+- An Azure account with an active subscription.
+- An account with the [global administrator Azure AD role](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md) assigned on the Azure tenant and the [owner role](../role-based-access-control/role-assignments-portal.md) assigned on the subscription you're going to use.
+- Azure AD DS deployed in the same tenant and subscription. Peered subscriptions aren't supported. Make sure you know the fully qualified domain name (FQDN).
+- Your domain admin user needs to have the same UPN suffix in Azure AD and Azure AD DS. This means your Azure AD DS name is the same as your `.onmicrosoft.com` tenant name or you've added the domain name used for Azure AD DS as a verified custom domain name to Azure AD.
+- An Azure AD account that is a member of the **AAD DC Administrators** group in Azure AD.
+- The *forest type* for Azure AD DS must be **User**.
+- A virtual network in the same Azure region you want to deploy Azure Virtual Desktop to. We recommend that you [create a new virtual network](../virtual-network/quick-create-portal.md) for Azure Virtual Desktop and use [virtual network peering](../virtual-network/virtual-network-peering-overview.md) to peer it with the virtual network for Azure AD DS. You also need to make sure you [configure DNS servers](../active-directory-domain-services/tutorial-configure-networking.md#configure-dns-servers-in-the-peered-virtual-network) to resolve your Azure AD DS domain name from this virtual network for Azure Virtual Desktop.
-3. Select **Create**.
-
-4. In the **Basic** tab, select the following values:
-
- - For **Subscription**, go to **How is your subscription configured**, then select **Existing setup**.
-
- - In the **Location**, select the location where you'll deploy your resources.
-
- - For **Azure admin UPN**, enter the full user principal name (UPN) of the account with admin permissions in Azure AD and owner permissions in the subscription that you plan to use.
-
- - For **AD Domain join UPN** enter the full UPN of the account with permissions that you plan to use to join the VMs to your domain.
-
- - For **Identity**, select either **Azure AD DS** or **AD DS** depending on your environment. What you choose here will affect the input your VMs will need.
+
-5. In the **Virtual machines** tab, select the following values:
+> [!IMPORTANT]
+> The getting started feature doesn't currently support accounts that use multi-factor authentication. It also does not support personal Microsoft accounts (MSA) or [Azure AD B2B collaboration](../active-directory/external-identities/user-properties.md) users (either member or guest accounts).
- - For **Do you want the users to share this machine?**, select one of the following options depending on your needs:
- - If you want to create a single-session or personal host pool, select **No**.
- - If you want to create a multi-session or pooled host pool, select **Yes (multi-session)**. This will also create an Azure Files storage account joined to either Azure AD DS or AD DS.
+## Deployment steps
- - For **Image type**, select an image from the Azure image gallery, a custom image, or a VHD from a storage blob.
+# [New Azure AD DS](#tab/new-aadds)
+
+Here's how to deploy Azure Virtual Desktop and a new Azure AD DS domain using the getting started feature:
- - For **VM size**, select the size and SKU you want for the VMs you'll deploy.
+1. Sign in to [the Azure portal](https://portal.azure.com).
- - For **Number of VMs**, select how many VMs you want to provision in the host pool.
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
- - If you're using an existing setup with AD DS, these options will appear:
+1. Select **Getting started** to open the landing page for the getting started feature, then select **Start**.
- - For **Subnet**, select a subnet in the VNET. The subnet you choose must either be in the same location as the identity (AD DS or Azure AD DS) or peered to it.
+1. On the **Basics** tab, complete the following information, then select **Next: Virtual Machines >**:
- - For **Domain controller resource group**, select the resource group where the AD DS VM is either located or peered to. The resource group with the domain controller must be in the same subscription. The get started feature doesn't currently support peered subscriptions at this time.
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | The subscription you want to use from the drop-down list. |
+ | Identity provider | No identity provider. |
+ | Identity service type | Azure AD Domain Services. |
+ | Resource group | Enter a name. This will be used as the prefix for the resource groups that are deployed. |
+ | Location | The Azure region where your Azure Virtual Desktop resources will be deployed. |
+ | Azure admin user name | The user principal name (UPN) of the account with the global administrator Azure AD role assigned on the Azure tenant and the owner role on the subscription that you selected.<br /><br />Make sure this account meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Azure admin password | The password for the Azure admin account. |
+ | Domain admin user name | The user principal name (UPN) for a new Azure AD account that will be added to a new *AAD DC Administrators* group and used to manage your Azure AD DS domain. The UPN suffix will be used as the Azure AD DS domain name.<br /><br />Make sure this user name meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Domain admin password | The password for the domain admin account. |
- - For **Domain controller virtual machine**, enter the name of the VM running your deployment's AD DS.
+1. On the **Virtual machines** tab, complete the following information, then select **Next: Assignments >**:
- - If you want to open the Select Azure AD users or Users group, select the **Assign existing users** check box.
+ | Parameter | Value/Description |
+ |--|--|
+ | Users per virtual machine | Select **Multiple users** or **One user at a time** depending on whether you want users to share a session host or assign a session host to an individual user. Learn more about [host pool types](environment-setup.md#host-pools). Selecting **Multiple users** will also create an Azure Files storage account joined to the same Azure AD DS domain. |
+ | Image type | Select **Gallery** to choose from a predefined list, or **storage blob** to enter a URI to the image. |
+ | Image | If you chose **Gallery** for image type, select the operating system image you want to use from the drop-down list. You can also select **See all images** to choose an image from the [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md).<br /><br />If you chose **Storage blob** for image type, enter the URI of the image. |
+ | Virtual machine size | The [Azure virtual machine size](../virtual-machines/sizes.md) used for your session host(s). |
+ | Name prefix | The name prefix for your session host(s). Each session host will have a hyphen and then a number added to the end, for example **avd-sh-1**. This name prefix can be a maximum of 11 characters and will also be used as the device name in the operating system. |
+ | Number of virtual machines | The number of session hosts you want to deploy at this time. You can add more later. |
+ | Link Azure template | Tick the box if you want to [link a separate ARM template](../azure-resource-manager/templates/linked-templates.md) for custom configuration on your session host(s) during deployment. You can specify inline deployment script, desired state configuration, and custom script extension. Provisioning other Azure resources in the template isn't supported.<br /><br />Untick the box if you don't want to link a separate ARM template during deployment. |
+ | ARM template file URL | The URL of the ARM template file you want to use. This could be stored in a storage account. |
+ | ARM template parameter file URL | The URL of the ARM template parameter file you want to use. This could be stored in a storage account. |
- - If you want to create a validation user account to test your deployment, select the **Create validation user** check box, then enter a username and password in the prompt that appears.
+1. On the **Assignments** tab, complete the following information, then select **Next: Review + create >**:
- >[!NOTE]
- >Getting started will create the validation user group in the "USERS" container. You must make sure your validation group is synced to Azure AD. If the sync doesn't work, then pre-create the AVDValidationUsers group in an organization unit that is being synced to Azure AD.
+ | Parameter | Value/Description |
+ |--|--|
+ | Create test user account | Tick the box if you want a new user account created during deployment for testing purposes. |
+ | Test user name | The user principal name (UPN) of the test account you want to be created, for example `testuser@contoso.com`. This user will be created in your new Azure AD tenant, synchronized to Azure AD DS, and made a member of the **AVDValidationUsers** security group that is also created during deployment. It must contain a valid UPN suffix for your domain that is also [added as a verified custom domain name in Azure AD](../active-directory/fundamentals/add-custom-domain.md).<br /><br />Make sure this user name meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Test password | The password to be used for the test account. |
+ | Confirm password | Confirmation of the password to be used for the test account. |
+
+1. On the **Review + create** tab, ensure validation passes and review the information that will be used during deployment.
-## For subscriptions without Azure AD DS or AD DS
+1. Select **Create**.
-This section will show you how to use the getting started feature for a subscription without Azure AD DS or AD DS. For reference, these subscriptions are sometimes called "empty" subscriptions.
+# [Existing AD DS](#tab/existing-adds)
-To deploy Azure Virtual Desktop on a subscription without Azure AD DS or AD DS:
+Here's how to deploy Azure Virtual Desktop using the getting started feature where you already have AD DS available:
+
+1. Sign in to [the Azure portal](https://portal.azure.com).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Getting started** to open the landing page for the getting started feature, then select **Start**.
-1. Open [the Azure portal](https://portal.azure.com).
+1. On the **Basics** tab, complete the following information, then select **Next: Virtual Machines >**:
-2. Sign in to Azure and open **Azure Virtual Desktop management**, then select the **Getting started** tab. This will open the landing page for the getting started feature.
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | The subscription you want to use from the drop-down list. |
+ | Identity provider | Existing Active Directory. |
+ | Identity service type | Active Directory. |
+ | Resource group | Enter a name. This will be used as the prefix for the resource groups that are deployed. |
+ | Location | The Azure region where your Azure Virtual Desktop resources will be deployed. |
+ | Virtual network | The virtual network in the same Azure region you want to connect your Azure Virtual Desktop resources to. This must have connectivity to your AD DS domain controller in Azure and be able to resolve its FQDN. |
+ | Subnet | The subnet of the virtual network you want to connect your Azure Virtual Desktop resources to. |
+ | Azure admin user name | The user principal name (UPN) of the account with the global administrator Azure AD role assigned on the Azure tenant and the owner role on the subscription that you selected.<br /><br />Make sure this account meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Azure admin password | The password for the Azure admin account. |
+ | Domain admin user name | The user principal name (UPN) of the domain admin account in your AD DS domain. The UPN suffix doesn't need to be added as a custom domain in Azure AD.<br /><br />Make sure this account meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Domain admin password | The password for the domain admin account. |
-3. In the **Basic** tab, select the following values:
+1. On the **Virtual machines** tab, complete the following information, then select **Next: Assignments >**:
- - For **Subscription**, select the subscription you want to deploy Azure Virtual Desktop in.
+ | Parameter | Value/Description |
+ |--|--|
+ | Users per virtual machine | Select **Multiple users** or **One user at a time** depending on whether you want users to share a session host or assign a session host to an individual user. Learn more about [host pool types](environment-setup.md#host-pools). Selecting **Multiple users** will also create an Azure Files storage account joined to the same AD DS domain. |
+ | Image type | Select **Gallery** to choose from a predefined list, or **storage blob** to enter a URI to the image. |
+ | Image | If you chose **Gallery** for image type, select the operating system image you want to use from the drop-down list. You can also select **See all images** to choose an image from the [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md).<br /><br />If you chose **Storage blob** for image type, enter the URI of the image. |
+ | Virtual machine size | The [Azure virtual machine size](../virtual-machines/sizes.md) used for your session host(s). |
+ | Name prefix | The name prefix for your session host(s). Each session host will have a hyphen and then a number added to the end, for example **avd-sh-1**. This name prefix can be a maximum of 11 characters and will also be used as the device name in the operating system. |
+ | Number of virtual machines | The number of session hosts you want to deploy at this time. You can add more later. |
+ | Specify domain or unit | Select **Yes** if:<br /><ul><li>The FQDN of your domain is different to the UPN suffix of the domain admin user in the previous step.</li><li>You want to create the computer account in a specific Organizational Unit (OU).</li></ul><br />If you select **Yes** and you only want to specify an OU, you must enter a value for **Domain to join**, even if that is the same as the UPN suffix of the domain admin user in the previous step. Organizational Unit path is optional and if it's left empty, the computer account will be placed in the *Users* container.<br /><br />Select **No** to use the suffix of the Active Directory domain join UPN as the FQDN. For example, the user `vmjoiner@contoso.com` has a UPN suffix of `contoso.com`. The computer account will be placed in the *Users* container. |
+ | Domain controller resource group | The resource group that contains your domain controller virtual machine from the drop-down list. The resource group must be in the same subscription you selected earlier. |
+ | Domain controller virtual machine | Your domain controller virtual machine from the drop-down list. This is required for creating or assigning the initial user and group. |
+ | Link Azure template | Tick the box if you want to [link a separate ARM template](../azure-resource-manager/templates/linked-templates.md) for custom configuration on your session host(s) during deployment. You can specify inline deployment script, desired state configuration, and custom script extension. Provisioning other Azure resources in the template isn't supported.<br /><br />Untick the box if you don't want to link a separate ARM template during deployment. |
+ | ARM template file URL | The URL of the ARM template file you want to use. This could be stored in a storage account. |
+ | ARM template parameter file URL | The URL of the ARM template parameter file you want to use. This could be stored in a storage account. |
- - For **How is your subscription configured**, select **Empty subscription**. An "empty" subscription is a subscription that doesn't already have Azure AD DS or AD DS deployed.
+1. On the **Assignments** tab, complete the following information, then select **Next: Review + create >**:
- - For **Resource group prefix**, enter the prefixes for the resource group you're going to create: *-prerequisite*, *-deployment*, and *-avd*.
+ | Parameter | Value/Description |
+ |--|--|
+ | Create test user account | Tick the box if you want a new user account created during deployment for testing purposes. |
+ | Test user name | The user principal name (UPN) of the test account you want to be created, for example `testuser@contoso.com`. This user will be created in your AD DS domain, synchronized to Azure AD, and made a member of the **AVDValidationUsers** security group that is also created during deployment. It must contain a valid UPN suffix for your domain that is also [added as a verified custom domain name in Azure AD](../active-directory/fundamentals/add-custom-domain.md).<br /><br />Make sure this user name meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Test password | The password to be used for the test account. |
+ | Confirm password | Confirmation of the password to be used for the test account. |
+ | Assign existing users or groups | You can select existing users or groups by ticking the box and selecting **Add Azure AD users or user groups**. Select Azure AD users or user groups, then select **Select**. These users and groups must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means the user account is synchronized between your AD DS domain and Azure AD. Admin accounts aren't able to sign in to the virtual desktop. |
+
+1. On the **Review + create** tab, ensure validation passes and review the information that will be used during deployment.
+
+1. Select **Create**.
+
+# [Existing Azure AD DS](#tab/existing-aadds)
+
+Here's how to deploy Azure Virtual Desktop using the getting started feature where you already have Azure AD DS available:
+
+1. Sign in to [the Azure portal](https://portal.azure.com).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Getting started** to open the landing page for the getting started feature, then select **Start**.
+
+1. On the **Basics** tab, complete the following information, then select **Next: Virtual Machines >**:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | The subscription you want to use from the drop-down list. |
+ | Identity provider | Existing Active Directory. |
+ | Identity service type | Azure AD Domain Services. |
+ | Resource group | Enter a name. This will be used as the prefix for the resource groups that are deployed. |
+ | Location | The Azure region where your Azure Virtual Desktop resources will be deployed. |
+ | Virtual network | The virtual network in the same Azure region you want to connect your Azure Virtual Desktop resources to. This must have connectivity to your Azure AD DS domain and be able to resolve its FQDN. |
+ | Subnet | The subnet of the virtual network you want to connect your Azure Virtual Desktop resources to. |
+ | Azure admin user name | The user principal name (UPN) of the account with the global administrator Azure AD role assigned on the Azure tenant and the owner role on the subscription that you selected.<br /><br />Make sure this account meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Azure admin password | The password for the Azure admin account. |
+ | Domain admin user name | The user principal name (UPN) of the admin account to manage your Azure AD DS domain. The UPN suffix of the user in Azure AD must match the Azure AD DS domain name.<br /><br />Make sure this account meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Domain admin password | The password for the domain admin account. |
+
+1. On the **Virtual machines** tab, complete the following information, then select **Next: Assignments >**:
- - In **Location**, enter the resource location you want to use for your deployment.
+ | Parameter | Value/Description |
+ |--|--|
+ | Users per virtual machine | Select **Multiple users** or **One user at a time** depending on whether you want users to share a session host or assign a session host to an individual user. Learn more about [host pool types](environment-setup.md#host-pools). Selecting **Multiple users** will also create an Azure Files storage account joined to the same Azure AD DS domain. |
+ | Image type | Select **Gallery** to choose from a predefined list, or **storage blob** to enter a URI to the image. |
+ | Image | If you chose **Gallery** for image type, select the operating system image you want to use from the drop-down list. You can also select **See all images** to choose an image from the [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md).<br /><br />If you chose **Storage blob** for image type, enter the URI of the image. |
+ | Virtual machine size | The [Azure virtual machine size](../virtual-machines/sizes.md) used for your session host(s). |
+ | Name prefix | The name prefix for your session host(s). Each session host will have a hyphen and then a number added to the end, for example **avd-sh-1**. This name prefix can be a maximum of 11 characters and will also be used as the device name in the operating system. |
+ | Number of virtual machines | The number of session hosts you want to deploy at this time. You can add more later. |
+ | Link Azure template | Tick the box if you want to [link a separate ARM template](../azure-resource-manager/templates/linked-templates.md) for custom configuration on your session host(s) during deployment. You can specify inline deployment script, desired state configuration, and custom script extension. Provisioning other Azure resources in the template isn't supported.<br /><br />Untick the box if you don't want to link a separate ARM template during deployment. |
+ | ARM template file URL | The URL of the ARM template file you want to use. This could be stored in a storage account. |
+ | ARM template parameter file URL | The URL of the ARM template parameter file you want to use. This could be stored in a storage account. |
- - For **Azure admin UPN**, enter the full UPN of an account with admin permissions on Azure AD and owner permissions on the subscription.
+1. On the **Assignments** tab, complete the following information, then select **Next: Review + create >**:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Create test user account | Tick the box if you want a new user account created during deployment for testing purposes. |
+ | Test user name | The user principal name (UPN) of the test account you want to be created, for example `testuser@contoso.com`. This user will be created in your Azure AD tenant, synchronized to Azure AD DS, and made a member of the **AVDValidationUsers** security group that is also created during deployment. It must contain a valid UPN suffix for your domain that is also [added as a verified custom domain name in Azure AD](../active-directory/fundamentals/add-custom-domain.md).<br /><br />Make sure this user name meets the requirements noted in the [prerequisites](#prerequisites). |
+ | Test password | The password to be used for the test account. |
+ | Confirm password | Confirmation of the password to be used for the test account. |
+ | Assign existing users or groups | You can select existing users or groups by ticking the box and selecting **Add Azure AD users or user groups**. Select Azure AD users or user groups, then select **Select**. These users and groups must be in the synchronization scope configured for Azure AD DS. Admin accounts aren't able to sign in to the virtual desktop. |
+
+1. On the **Review + create** tab, ensure validation passes and review the information that will be used during deployment.
+
+1. Select **Create**.
- - For **AD Domain join UPN**, enter the full UPN for an account that will be added to **AAD DC Administrators** group.
+
- >[!NOTE]
- >The user name for AD Domain join UPN should be a unique one that doesn't already exist in Azure AD. The getting started feature doesn't currently support using existing Azure AD user names for accounts without Azure AD or AD DS.
+## Connect to the desktop
+
+Once the deployment completes successfully, if you created a test account or assigned an existing user during deployment, that user can connect to the desktop by following the steps for one of the supported Remote Desktop clients. For example, you can follow the steps to [Connect with the Windows Desktop client](user-documentation/connect-windows-7-10.md).
+
+If you didn't create a test account or assign an existing user during deployment, you'll need to add users to the **AVDValidationUsers** security group before you can connect.
+
+## Resources that will be deployed
+
+# [New Azure AD DS](#tab/new-aadds)
+
+| Resource type | Name | Resource group name | Notes |
+|--|--|--|--|
+| Resource group | *your prefix*-avd | N/A | This is a predefined name. |
+| Resource group | *your prefix*-deployment | N/A | This is a predefined name. |
+| Resource group | *your prefix*-prerequisite | N/A | This is a predefined name. |
+| Azure AD DS | *your domain name* | *your prefix*-prerequisite | Deployed with the [Enterprise SKU](https://azure.microsoft.com/pricing/details/active-directory-ds/#pricing). You can [change the SKU](../active-directory-domain-services/change-sku.md) after deployment. |
+| Automation Account | ebautomation*random string* | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | inputValidationRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | prerequisiteSetupCompletionRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | resourceSetupRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | roleAssignmentRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Managed Identity | easy-button-fslogix-identity | *your prefix*-avd | Only created if **Multiple users** is selected for **Users per virtual machine**. This is a predefined name. |
+| Host pool | EB-AVD-HP | *your prefix*-avd | This is a predefined name. |
+| Application group | EB-AVD-HP-DAG | *your prefix*-avd | This is a predefined name. |
+| Workspace | EB-AVD-WS | *your prefix*-avd | This is a predefined name. |
+| Storage account | eb*random string* | *your prefix*-avd | This is a predefined name. |
+| Virtual machine | *your prefix*-*number* | *your prefix*-avd | This is a predefined name. |
+| Virtual network | avdVnet | *your prefix*-prerequisite | The address space used is **10.0.0.0/16**. The address space and name are predefined. |
+| Network interface | *virtual machine name*-nic | *your prefix*-avd | This is a predefined name. |
+| Network interface | aadds-*random string*-nic | *your prefix*-prerequisite | This is a predefined name. |
+| Network interface | aadds-*random string*-nic | *your prefix*-prerequisite | This is a predefined name. |
+| Disk | *virtual machine name*\_OsDisk_1_*random string* | *your prefix*-avd | This is a predefined name. |
+| Load balancer | aadds-*random string*-lb | *your prefix*-prerequisite | This is a predefined name. |
+| Public IP address | aadds-*random string*-pip | *your prefix*-prerequisite | This is a predefined name. |
+| Network security group | avdVnet-nsg | *your prefix*-prerequisite | This is a predefined name. |
+| Group | AVDValidationUsers | N/A | Created in your new Azure AD tenant and synchronized to Azure AD DS. It contains a new test user (if created) and users you selected. This is a predefined name. |
+| User | *your test user* | N/A | If you select to create a test user, it will be created in your new Azure AD tenant, synchronized to Azure AD DS, and made a member of the *AVDValidationUsers* security group. |
+
+# [Existing AD DS](#tab/existing-adds)
+
+| Resource type | Name | Resource group name | Notes |
+|--|--|--|--|
+| Resource group | *your prefix*-avd | N/A | This is a predefined name. |
+| Resource group | *your prefix*-deployment | N/A | This is a predefined name. |
+| Automation Account | ebautomation*random string* | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | inputValidationRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | prerequisiteSetupCompletionRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | resourceSetupRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | roleAssignmentRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Managed Identity | easy-button-fslogix-identity | *your prefix*-avd | Only created if **Multiple users** is selected for **Users per virtual machine**. This is a predefined name. |
+| Host pool | EB-AVD-HP | *your prefix*-avd | This is a predefined name. |
+| Application group | EB-AVD-HP-DAG | *your prefix*-avd | This is a predefined name. |
+| Workspace | EB-AVD-WS | *your prefix*-avd | This is a predefined name. |
+| Storage account | eb*random string* | *your prefix*-avd | This is a predefined name. |
+| Virtual machine | *your prefix*-*number* | *your prefix*-avd | This is a predefined name. |
+| Network interface | *virtual machine name*-nic | *your prefix*-avd | This is a predefined name. |
+| Disk | *virtual machine name*\_OsDisk_1_*random string* | *your prefix*-avd | This is a predefined name. |
+| Group | AVDValidationUsers | N/A | Created in your AD DS domain and synchronized to Azure AD. It contains a new test user (if created) and users you selected. This is a predefined name. |
+| User | *your test user* | N/A | If you select to create a test user, it will be created in your AD DS domain, synchronized to Azure AD, and made a member of the *AVDValidationUsers* security group. |
+
+# [Existing Azure AD DS](#tab/existing-aadds)
+
+| Resource type | Name | Resource group name | Notes |
+|--|--|--|--|
+| Resource group | *your prefix*-avd | N/A | This is a predefined name. |
+| Resource group | *your prefix*-deployment | N/A | This is a predefined name. |
+| Automation Account | ebautomation*random string* | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | inputValidationRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | prerequisiteSetupCompletionRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | resourceSetupRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Automation Account runbook | roleAssignmentRunbook(*Automation Account name*) | *your prefix*-deployment | This is a predefined name. |
+| Managed Identity | easy-button-fslogix-identity | *your prefix*-avd | Only created if **Multiple users** is selected for **Users per virtual machine**. This is a predefined name. |
+| Host pool | EB-AVD-HP | *your prefix*-avd | This is a predefined name. |
+| Application group | EB-AVD-HP-DAG | *your prefix*-avd | This is a predefined name. |
+| Workspace | EB-AVD-WS | *your prefix*-avd | This is a predefined name. |
+| Storage account | eb*random string* | *your prefix*-avd | This is a predefined name. |
+| Virtual machine | *your prefix*-*number* | *your prefix*-avd | This is a predefined name. |
+| Network interface | *virtual machine name*-nic | *your prefix*-avd | This is a predefined name. |
+| Disk | *virtual machine name*\_OsDisk_1_*random string* | *your prefix*-avd | This is a predefined name. |
+| Group | AVDValidationUsers | N/A | Created in your Azure AD tenant and synchronized to Azure AD DS. It contains a new test user (if created) and users you selected. This is a predefined name. |
+| User | *your test user* | N/A | If you select to create a test user, it will be created in your Azure AD tenant, synchronized to Azure AD DS, and made a member of the *AVDValidationUsers* security group. |
-4. In the **Virtual machines** tab, select the following values:
+
- - For **Do you want the users to share this machine?**, select one of the following options depending on your needs:
- - If you want to create a single-session or personal host pool, select **No**.
- - If you want to create a multi-session or pooled host pool, select **Yes (multi-session)**. This will also create an Azure Files storage account joined to either Azure AD DS or AD DS.
+## Clean up resources
- - For **Image type**, select an image from the Azure image gallery, a custom image, or a VHD from a storage blob.
+If you want to remove Azure Virtual Desktop resources from your environment, you can safely remove them by deleting the resource groups that were deployed. These are:
- - For **VM size**, select the size and SKU you want for the VMs you'll deploy.
+- *your-prefix*-deployment
+- *your-prefix*-avd
+- *your-prefix*-prerequisite (only if you deployed the getting started feature with a new Azure AD DS domain)
- - For **Number of VMs**, select how many VMs you want to provision in the host pool.
+To delete the resource groups:
-5. In the **Assignments** tab, select the **Create validation user**, then enter a username and password into the **Validation user username** and **Validation user password** fields. The validation user is a user who'll test your deployment once it's ready.
+1. Sign in to [the Azure portal](https://portal.azure.com).
-## Clean up resources
+1. In the search bar, type *Resource groups* and select the matching service entry.
-If after deployment you change your mind and want to remove Azure Virtual Desktop resources from your environment without incurring extra billing costs, you can safely remove them by following the instructions in this section.
+1. Select the name of one of the resource groups, then select **Delete resource group**.
-If you created your resources on a subscription with Azure AD DS or AD DS, the feature will have made two resource groups with the prefixes "*-deployment*" and "*-avd*." In the Azure portal, go to **Resource groups** and delete any resource groups with those prefixes to remove the deployment.
+1. Review the affected resources, then type the resource group name in the box, and select **Delete**.
-If you created your resources on a subscription without Azure AD DS or AD DS, the feature will have made three resource groups with the prefixes *-prerequisite*, *-deployment*, and *-avd*. In the Azure portal, go to **Resource groups** and delete any resource groups with those prefixes to remove the deployment.
+1. Repeat these steps for the remaining resource groups.
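If you prefer the command line, the same cleanup can be done with Azure CLI (a sketch; substitute your actual resource group prefix):

```azurecli
az group delete --name <your-prefix>-deployment --yes --no-wait
az group delete --name <your-prefix>-avd --yes --no-wait
# Only present if you deployed a new Azure AD DS domain:
az group delete --name <your-prefix>-prerequisite --yes --no-wait
```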
## Next steps
-If you'd like to learn how to deploy Azure Virtual Desktop in a more in-depth way, check out our tutorials for setting up your deployment manually, starting with [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md).
+If you want to publish apps as well as the full virtual desktop, see the tutorial to [Manage app groups with the Azure portal](manage-app-groups.md).
+
+If you'd like to learn how to deploy Azure Virtual Desktop in a more in-depth way, with fewer permissions required, or programmatically, check out our series of tutorials, starting with [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md).
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
At a high level, you'll need:
## Azure account with an active subscription
-You'll need an Azure account with an active subscription to deploy Azure Virtual Desktop. If you don't have one already, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+You'll need an Azure account with an active subscription to deploy Azure Virtual Desktop. If you don't have one already, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Your account must be assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
-You also need to make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription.
+You also need to make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription. To check the status of the resource provider and register if needed:
> [!IMPORTANT]
-> You must have permission to register a resource provider, which requires the `*/register/action` operation. This is included if you are assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
-
-To check the status of the resource provider and register if needed:
+> You must have permission to register a resource provider, which requires the `*/register/action` operation. This is included if your account is assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Subscriptions**.
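Alternatively, you can check and register the resource provider with Azure CLI (a sketch, assuming you're already signed in with `az login`):

```azurecli
az provider show --namespace Microsoft.DesktopVirtualization --query registrationState
az provider register --namespace Microsoft.DesktopVirtualization
```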
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/overview.md
You can set up your deployment manually by following these tutorials:
If you'd prefer an automatic process, you can use the getting started feature to set up your deployment for you. For more information, check out these articles: -- [Deploy Azure Virtual Desktop with the getting started feature](../getting-started-feature.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (When following these instructions, make sure to follow the instructions in [For subscriptions with Azure AD DS or AD DS](../getting-started-feature.md#for-subscriptions-with-azure-ad-ds-or-ad-ds). This method gives you better identity management and app compatibility while also giving you the power to fine-tune identity-related infrastructure costs. The method for subscriptions that don't already have Azure AD DS or AD DS doesn't give you these benefits.)
+- [Deploy Azure Virtual Desktop with the getting started feature](../getting-started-feature.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (When following these instructions, make sure to follow the instructions for existing Azure AD DS or AD DS. This method gives you better identity management and app compatibility while also giving you the power to fine-tune identity-related infrastructure costs. The method for subscriptions that don't already have Azure AD DS or AD DS doesn't give you these benefits.)
- [Troubleshoot the getting started feature](../troubleshoot-getting-started.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) ## Customize and manage Azure Virtual Desktop
virtual-desktop Shortpath Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/shortpath-public.md
Many of the NAT gateways are configured to allow the incoming traffic to the soc
After the initial packet exchange, the client and session host may establish one or more data flows. After that, the Remote Desktop Protocol chooses the fastest network path. The client then establishes a secure TLS connection with the session host and initiates the RDP Shortpath transport. After RDP establishes the Shortpath, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection, move to the new transport.
-## Requirements
-
-To support RDP Shortpath, the Azure Virtual Desktop client needs a direct line of sight to the session host. You can get a direct line of sight by using one of these methods:
-- Make sure the remote client machines are running Windows 11, Windows 10, or Windows 7 and have the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop) installed. Currently, non-Windows clients aren't supported.
-- Use [ExpressRoute private peering](../expressroute/expressroute-circuit-peerings.md)
-- Use a [Site-to-Site virtual private network (VPN) (IPsec-based)](../vpn-gateway/tutorial-site-to-site-portal.md)
-- Use a [Point-to-Site VPN (IPsec-based)](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)
-- Use a [public IP address assignment](../virtual-network/ip-services/virtual-network-public-ip-address.md)
-
-If you're using other VPN types to connect to the Azure portal, we recommend using a User Datagram Protocol (UDP)-based VPN. While most Transmission Control Protocol (TCP)-based VPN solutions support nested UDP, they add inherited overhead of TCP congestion control, which slows down RDP performance.
-
-Having a direct line of sight means that the client can connect directly to the session host without being blocked by firewalls.
-
## Enabling the preview of RDP Shortpath for public networks

To participate in the preview of RDP Shortpath, you need to enable the Shortpath functionality. You can configure RDP Shortpath on any number of session hosts used in your environment. There's no requirement to enable RDP Shortpath on all hosts in the pool.
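As a hedged sketch only, assuming the `ICEControl` registry value described in the preview guidance and placeholder resource names, you could set the value on a session host remotely with `az vm run-command`:

```azurecli
# Hedged sketch: set the RDP Shortpath preview registry value on a session host.
# ICEControl=2 is taken from the preview guidance; resource names are placeholders.
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name mySessionHost \
  --command-id RunPowerShellScript \
  --scripts "New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-tcp' -Name ICEControl -PropertyType DWORD -Value 2 -Force"
```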
virtual-machine-scale-sets Quick Create Bicep Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-bicep-windows.md
+
+ Title: Quickstart - Create a Windows virtual machine scale set with Bicep
+description: Learn how to quickly create a Windows virtual machine scale set with Bicep to deploy a sample app and configure autoscale rules
+++++ Last updated : 06/28/2022+++
+# Quickstart: Create a Windows virtual machine scale set with Bicep
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+
+A virtual machine scale set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the virtual machine scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the virtual machine scale set. In this quickstart, you create a virtual machine scale set and deploy a sample application with Bicep.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/vmss-windows-webapp-dsc-autoscale/).
++
+The following resources are defined in the Bicep file:
+
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
+- [**Microsoft.Network/loadBalancers**](/azure/templates/microsoft.network/loadbalancers)
+- [**Microsoft.Compute/virtualMachineScaleSets**](/azure/templates/microsoft.compute/virtualmachinescalesets)
+- [**Microsoft.Insights/autoscaleSettings**](/azure/templates/microsoft.insights/autoscalesettings)
+
+### Define a scale set
+
+To create a virtual machine scale set with a Bicep file, you define the appropriate resources. The core parts of the virtual machine scale set resource type are:
+
+| Property | Description of property | Example template value |
+||-|-|
+| type | Azure resource type to create | Microsoft.Compute/virtualMachineScaleSets |
+| name | The scale set name | myScaleSet |
+| location | The location to create the scale set | East US |
+| sku.name | The VM size for each scale set instance | Standard_A1 |
+| sku.capacity | The number of VM instances to initially create | 2 |
+| upgradePolicy.mode | VM instance upgrade mode when changes occur | Automatic |
+| imageReference | The platform or custom image to use for the VM instances | Microsoft Windows Server 2016 Datacenter |
+| osProfile.computerNamePrefix | The name prefix for each VM instance | myvmss |
+| osProfile.adminUsername | The username for each VM instance | azureuser |
+| osProfile.adminPassword | The password for each VM instance | P@ssw0rd! |
+
+To customize a virtual machine scale set Bicep file, you can change the VM size or initial capacity. Another option is to use a different platform or a custom image.
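+
+Although this quickstart uses Bicep, the core properties in the preceding table map directly onto `az vmss create` flags. The following is a minimal sketch using the table's example values; the resource group name is a placeholder:
+
+```azurecli
+# Minimal sketch: the table's example values expressed as az vmss create flags.
+# exampleRG is a placeholder resource group; the password is the table's sample value.
+az vmss create \
+  --resource-group exampleRG \
+  --name myScaleSet \
+  --image Win2016Datacenter \
+  --vm-sku Standard_A1 \
+  --instance-count 2 \
+  --upgrade-policy-mode Automatic \
+  --computer-name-prefix myvmss \
+  --admin-username azureuser \
+  --admin-password 'P@ssw0rd!'
+```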
+
+### Add a sample application
+
+To test your virtual machine scale set, install a basic web application. When you deploy a virtual machine scale set, VM extensions can provide post-deployment configuration and automation tasks, such as installing an app. Scripts can be downloaded from [GitHub](https://azure.microsoft.com/resources/templates/vmss-windows-webapp-dsc-autoscale/) or provided to the Azure portal at extension run-time. To apply an extension to your virtual machine scale set, add the `extensionProfile` section to the resource example above. The extension profile typically defines the following properties:
+
+- Extension type
+- Extension publisher
+- Extension version
+- Location of configuration or install scripts
+- Commands to execute on the VM instances
+
+The Bicep file uses the PowerShell DSC extension to install an ASP.NET MVC app that runs in IIS.
+
+An install script is downloaded from GitHub, as defined in `url`. The extension then runs `InstallIIS` from the `IISInstall.ps1` script, as defined in `function` and `Script`. The ASP.NET app itself is provided as a Web Deploy package, which is also downloaded from GitHub, as defined in `WebDeployPackagePath`. A hedged CLI sketch of these settings follows.
+
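+This sketch isn't part of the Bicep file itself; it shows how the same DSC settings could be applied to an existing scale set with the Azure CLI. The zip and package URLs are placeholders, and the `configuration` and `configurationArguments` keys follow the DSC extension's settings schema:
+
+```azurecli
+# Hedged sketch: apply the PowerShell DSC extension to an existing scale set.
+# The two URLs are placeholders for the DSC zip and the Web Deploy package.
+az vmss extension set \
+  --resource-group exampleRG \
+  --vmss-name <vmss-name> \
+  --name DSC \
+  --publisher Microsoft.Powershell \
+  --settings '{
+    "configuration": {
+      "url": "https://<placeholder>/IISInstall.ps1.zip",
+      "script": "IISInstall.ps1",
+      "function": "InstallIIS"
+    },
+    "configurationArguments": {
+      "nodeName": "localhost",
+      "WebDeployPackagePath": "https://<placeholder>/MyApp.zip"
+    }
+  }'
+```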
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters vmssName=<vmss-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -vmssName "<vmss-name>"
+ ```
+
+
+
+ Replace *\<vmss-name\>* with the name of the virtual machine scale set. It must be 3-61 characters in length and globally unique across Azure. You'll be prompted to enter `adminPassword`.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded. It can take 10-15 minutes for the virtual machine scale set to be created and apply the extension to configure the app.
+
+## Validate the deployment
+
+To see your virtual machine scale set in action, access the sample web application in a web browser. Obtain the public IP address of your load balancer using Azure CLI or Azure PowerShell.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az network public-ip list --resource-group exampleRG --query "[].ipAddress" --output tsv
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzPublicIpAddress -ResourceGroupName exampleRG | Select IpAddress
+```
+++
+Enter the public IP address of the load balancer into a web browser in the format *http:\//publicIpAddress/MyApp*. The load balancer distributes traffic to one of your VM instances, as shown in the following example:
+
+![Running IIS site](./media/virtual-machine-scale-sets-create-powershell/running-iis-site.png)
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to remove the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a Windows virtual machine scale set with a Bicep file and used the PowerShell DSC extension to install a basic ASP.NET app on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure virtual machine scale sets.
+
+> [!div class="nextstepaction"]
+> [Create and manage Azure virtual machine scale sets](tutorial-create-and-manage-powershell.md)
virtual-machines Find Unattached Nics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/find-unattached-nics.md
- Title: Find and delete unattached Azure NICs
-description: How to find and delete Azure NICs that are not attached to VMs with the Azure CLI
----- Previously updated : 04/10/2018---
-# How to find and delete unattached network interface cards (NICs) for Azure VMs
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-When you delete a virtual machine (VM) in Azure, the network interface cards (NICs) are not deleted by default. If you create and delete multiple VMs, the unused NICs continue to use the internal IP address leases. As you create other VM NICs, they may be unable to obtain an IP lease in the address space of the subnet. This article shows you how to find and delete unattached NICs.
-
-## Find and delete unattached NICs
-
-The *virtualMachine* property for a NIC stores the ID and resource group of the VM the NIC is attached to. The following script loops through all the NICs in a subscription and checks if the *virtualMachine* property is null. If this property is null, the NIC is not attached to a VM.
-
-To view all the unattached NICs, it's highly recommended to first run the script with the *deleteUnattachedNics* variable set to *0*. To delete all the unattached NICs after you review the list output, run the script with *deleteUnattachedNics* set to *1*.
-
-```azurecli
-# Set deleteUnattachedNics=1 if you want to delete unattached NICs
-# Set deleteUnattachedNics=0 if you want to see the Id(s) of the unattached NICs
-deleteUnattachedNics=0
-
-unattachedNicsIds=$(az network nic list --query '[?virtualMachine==`null`].[id]' -o tsv)
-for id in ${unattachedNicsIds[@]}
-do
- if (( $deleteUnattachedNics == 1 ))
- then
-
- echo "Deleting unattached NIC with Id: "$id
- az network nic delete --ids $id
- echo "Deleted unattached NIC with Id: "$id
- else
- echo $id
- fi
-done
-```
-
-## Next steps
-
-For more information on how to create and manage virtual networks in Azure, see [create and manage VM networks](tutorial-virtual-network.md).
virtual-machines Nda100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nda100-v4-series.md
Last updated 05/26/2021
The ND A100 v4 series virtual machine is a new flagship addition to the Azure GPU family, designed for high-end Deep Learning training and tightly-coupled scale-up and scale-out HPC workloads.
-The ND A100 v4 series starts with a single virtual machine (VM) and eight NVIDIA Ampere A100 Tensor Core GPUs. ND A100 v4-based deployments can scale up to thousands of GPUs with an 1.6 Tb/s of interconnect bandwidth per VM. Each GPU within the VM is provided with its own dedicated, topology-agnostic 200 Gb/s NVIDIA Mellanox HDR InfiniBand connection. These connections are automatically configured between VMs occupying the same virtual machine scale set, and support GPUDirect RDMA.
+The ND A100 v4 series starts with a single virtual machine (VM) and eight NVIDIA Ampere A100 40GB Tensor Core GPUs. ND A100 v4-based deployments can scale up to thousands of GPUs with 1.6 Tb/s of interconnect bandwidth per VM. Each GPU within the VM is provided with its own dedicated, topology-agnostic 200 Gb/s NVIDIA Mellanox HDR InfiniBand connection. These connections are automatically configured between VMs occupying the same virtual machine scale set, and support GPUDirect RDMA.
-Each GPU features NVLINK 3.0 connectivity for communication within the VM, and the instance is also backed by 96 physical 2nd-generation AMD Epyc™ CPU cores.
+Each GPU features NVLINK 3.0 connectivity for communication within the VM, and the instance is also backed by 96 physical 2nd-generation AMD Epyc™ 7V12 (Rome) CPU cores.
These instances provide excellent performance for many AI, ML, and analytics tools that support GPU acceleration 'out-of-the-box,' such as TensorFlow, Pytorch, Caffe, RAPIDS, and other frameworks. Additionally, the scale-out InfiniBand interconnect is supported by a large set of existing AI and HPC tools built on NVIDIA's NCCL2 communication libraries for seamless clustering of GPUs.
virtual-machines Ndm A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndm-a100-v4-series.md
The NDm A100 v4 series virtual machine is a new flagship addition to the Azure G
The NDm A100 v4 series starts with a single virtual machine (VM) and eight NVIDIA Ampere A100 80GB Tensor Core GPUs. NDm A100 v4-based deployments can scale up to thousands of GPUs with 1.6 Tb/s of interconnect bandwidth per VM. Each GPU within the VM is provided with its own dedicated, topology-agnostic 200 Gb/s NVIDIA Mellanox HDR InfiniBand connection. These connections are automatically configured between VMs occupying the same virtual machine scale set, and support GPUDirect RDMA.
-Each GPU features NVLINK 3.0 connectivity for communication within the VM, and the instance is also backed by 96 physical 2nd-generation AMD Epyc™ CPU cores.
+Each GPU features NVLINK 3.0 connectivity for communication within the VM, and the instance is also backed by 96 physical 2nd-generation AMD Epyc™ 7V12 (Rome) CPU cores.
These instances provide excellent performance for many AI, ML, and analytics tools that support GPU acceleration 'out-of-the-box,' such as TensorFlow, Pytorch, Caffe, RAPIDS, and other frameworks. Additionally, the scale-out InfiniBand interconnect is supported by a large set of existing AI and HPC tools built on NVIDIA's NCCL2 communication libraries for seamless clustering of GPUs.
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
You can deploy trusted launch VMs using a quickstart template:
1. Sign in to the Azure [portal](https://portal.azure.com).
2. To create an Azure Compute Gallery Image from a VM, open an existing Trusted launch VM and select **Capture**.
-3. In the Create an Image page that follows, allow the image to be shared to the gallery as a VM image version as Managed Images are not supported for Trusted Launch.
+3. In the Create an Image page that follows, allow the image to be shared to the gallery as a VM image version. Creation of Managed Images is not supported for Trusted Launch VMs.
4. Create a new target Azure Compute Gallery or select an existing gallery.
-5. Select the **Operating system state** as either **Generalized** or **Specialized**.
-6. Create a new image definition by providing a name, publisher, offer and SKU details. The Security Type of the image definition is already set to 'Trusted launch'.
+5. Select the **Operating system state** as either **Generalized** or **Specialized**. If you want to create a generalized image, ensure that you [generalize the VM to remove machine specific information](generalize.md) before selecting this option. If BitLocker-based encryption is enabled on your Trusted launch Windows VM, you may not be able to generalize it.
+6. Create a new image definition by providing a name, publisher, offer and SKU details. The **Security Type** of the image definition should already be set to **Trusted launch**.
7. Provide a version number for the image version.
8. Modify replication options if required.
9. At the bottom of the **Create an Image** page, select **Review + Create** and when validation shows as passed, select **Create**.
You can deploy trusted launch VMs using a quickstart template:
16. At the bottom of the page, select **Review + Create**.
17. On the **Create a virtual machine** page, you can see the details about the VM you are about to deploy. Once validation shows as passed, select **Create**.
+If you want to use either a managed disk or a managed disk snapshot as the source of the image version (instead of a trusted launch VM), use the following steps:
+
+1. Sign in to the [portal](https://portal.azure.com).
+2. Search for **VM Image Versions** and select **Create**.
+3. Provide the subscription, resource group, region, and image version number.
+4. Select the source as **Disks and/or Snapshots**.
+5. Select the OS disk as a managed disk or a managed disk snapshot from the dropdown list.
+6. Select a **Target Azure Compute Gallery** to create and share the image. If no gallery exists, create a new gallery.
+7. Select the **Operating system state** as either **Generalized** or **Specialized**. If you want to create a generalized image, ensure that you generalize the disk or snapshot to remove machine specific information.
+8. For the **Target VM Image Definition**, select **Create new**. In the window that opens, select an image definition name and ensure that the **Security type** is set to **Trusted launch**. Provide the publisher, offer, and SKU information and select **OK**.
+9. The **Replication** tab can be used to set the replica count and target regions for image replication, if required.
+10. The **Encryption** tab can also be used to provide SSE encryption related information, if required.
+11. Select **Create** in the **Review + create** tab to create the image.
+12. Once the image version is successfully created, select **+ Create VM** to land on the Create a virtual machine page.
+13. Follow steps 12 to 17, as mentioned earlier, to create a trusted launch VM using this image version.
++
### [CLI](#tab/cli2)

Make sure you are running the latest version of Azure CLI
az sig image-definition create --resource-group MyResourceGroup --location eastu
--features SecurityType=TrustedLaunch
```
-Generalize the VM using waagagent command and create an image version with an existing Trusted Launch VM as image source
+To create an image version, you can capture an existing Linux-based Trusted launch VM. [Generalize the Trusted launch VM](generalize.md) before creating the image version.
```azurecli-interactive
az sig image-version create --resource-group MyResourceGroup \
az sig image-version create --resource-group MyResourceGroup \
--gallery-image-version 1.0.0 \
   --managed-image /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
```
+
+If you need to use a managed disk or a managed disk snapshot as the image source for the image version, replace `--managed-image` in the preceding command with `--os-snapshot` and provide the disk or snapshot resource ID, as in the sketch that follows.
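+The sketch reuses the example's resource group and version number; the gallery, image definition, and snapshot names are placeholders:
+
+```azurecli-interactive
+# Hedged sketch: create the image version from an OS disk snapshot instead of a VM.
+# MyGallery, MyImageDefinition, and mySnapshot are placeholder names.
+az sig image-version create --resource-group MyResourceGroup \
+   --gallery-name MyGallery --gallery-image-definition MyImageDefinition \
+   --gallery-image-version 1.0.0 \
+   --os-snapshot /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/snapshots/mySnapshot
+```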
+
Create a Trusted Launch VM from the above image version

```azurecli-interactive
$features = @($SecurityType)
New-AzGalleryImageDefinition -ResourceGroupName $rgName -GalleryName $galleryName -Name $galleryImageDefinitionName -Location $location -Publisher $publisherName -Offer $offerName -Sku $skuName -HyperVGeneration "V2" -OsState "Generalized" -OsType "Windows" -Description $description -Feature $features
```
-Generalize the VM using sysprep tool and create an image version with an existing Trusted Launch VM as image source
+To create an image version, you can capture an existing Windows-based Trusted launch VM. [Generalize the Trusted launch VM](generalize.md) before creating the image version.
```azurepowershell-interactive
$rgName = "MyResourceGroup"
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
HVCI is a powerful system mitigation that protects Windows kernel-mode processes
With trusted launch and VBS you can enable Windows Defender Credential Guard. This feature isolates and protects secrets so that only privileged system software can access them. It helps prevent unauthorized access to secrets and credential theft attacks, like Pass-the-Hash (PtH) attacks. For more information, see [Credential Guard](/windows/security/identity-protection/credential-guard/credential-guard).
-## Azure Defender for Cloud integration
+## Microsoft Defender for Cloud integration
Trusted launch is integrated with Azure Defender for Cloud to ensure your VMs are properly configured. Azure Defender for Cloud will continually assess compatible VMs and issue relevant recommendations.

- **Recommendation to enable Secure Boot** - This recommendation only applies for VMs that support trusted launch. Azure Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it.
- **Recommendation to enable vTPM** - If your VM has vTPM enabled, Azure Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Azure Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it will issue a low severity recommendation to enable it.
- **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Azure Defender for Cloud will issue a low severity recommendation to install the guest attestation extension on it. This extension allows Azure Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.
-- **Attestation health assessment or Boot Integrity Monitoring** - If your VM has Secure Boot and vTPM enabled and attestation extension installed, Azure Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as boot integrity monitoring. Azure Defender for Cloud issues an assessment, indicating the status of remote attestation. Currently boot integrity monitoring is supported for both Windows and Linux singe virtual machines and uniform scale sets.
--
-## Microsoft Defender for Cloud integration
+- **Attestation health assessment or Boot Integrity Monitoring** - If your VM has Secure Boot and vTPM enabled and attestation extension installed, Azure Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as boot integrity monitoring. Azure Defender for Cloud issues an assessment, indicating the status of remote attestation. Currently boot integrity monitoring is supported for both Windows and Linux single virtual machines and uniform scale sets.
If your VMs are properly set up with trusted launch, Microsoft Defender for Cloud can detect and alert you to VM health problems.
virtual-machines Oracle Database Quick Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-quick-create.md
In this task you must configure some external endpoints for the database listene
az network public-ip show ^
    --resource-group rg-oracle ^
    --name vmoracle19cPublicIP ^
- --query [ipAddress] ^
+ --query "ipAddress" ^
--output tsv
```
virtual-network-manager Concept Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-deployments.md
Previously updated : 06/09/2022 Last updated : 07/06/2022
In this article, you'll learn about how configurations are applied to your netwo
## Deployment
-*Deployment* is the method Azure Virtual Network Manager uses to apply configurations to your virtual networks in network groups. Configurations won't take effect until they are deployed. Changes to network groups, including events such as removal and addition of a virtual network into a network group, will take effect without the need of re-deployment. When committing a deployment, you select the region(s) to which the configuration will be applied. When a deployment request is sent to Azure Virtual Network Manager, it will calculate the [goal state](#goalstate) of network resources and request the necessary changes to your infrastructure. The changes can take about 15-20 minutes depending on how large the configuration is.
-
-## <a name="deployment"></a>Deployment against network group membership types
-
-Changing the definition of a network group won't have an impact unless the configuration using this network group is deployed. As such, deployment updates are different for static and dynamic group members in a network group. When you have dynamic group membership defined, such as all virtual networks whose name contains "production", Azure Virtual Network Manager will automatically determine if the dynamic members meet the requirements of the configuration and adjust without you needing to deploy the configuration again. This is because you already defined the condition of the membership, and the definition didn't change. However, if you have virtual networks that are added as static members, you'll need to deploy the configuration again for the changes to apply. For example, if you add a new virtual network as a static member, you'll need to deploy the configuration again to take effect.
+*Deployment* is the method Azure Virtual Network Manager uses to apply configurations to your virtual networks in network groups. Configurations won't take effect until they are deployed. Changes to network groups, such as the addition or removal of a virtual network, will take effect without the need for re-deployment. For example, if you have a configuration deployed and a virtual network is added to a network group, it takes effect immediately. When committing a deployment, you select the region(s) to which the configuration will be applied. When a deployment request is sent to Azure Virtual Network Manager, it will calculate the [goal state](#goalstate) of network resources and request the necessary changes to your infrastructure. The changes can take a few minutes depending on how large the configuration is.
## Deployment status
-When you commit a configuration deployment, the API does a POST operation and you won't see the completion of the deployment afterward. Once the deployment request has been made, Azure Virtual Network Manager will calculate the goal state of your networks and request the underlying infrastructure to make the changes. You can see the deployment status on the *Deployment* page of the Virtual Network Manager.
+When you commit a configuration deployment, the API does a POST operation. Once the deployment request has been made, Azure Virtual Network Manager will calculate the goal state of your networks and request the underlying infrastructure to make the changes. You can see the deployment status on the *Deployment* page of the Virtual Network Manager.
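+You can also check deployment status from the command line; the following is a hedged sketch, assuming the `virtual-network-manager` Azure CLI extension is installed (resource names are placeholders):
+
+```azurecli
+# Hedged sketch: list deployment status for connectivity configurations in a region.
+# myRG and myAVNM are placeholder names.
+az network manager list-deploy-status \
+  --resource-group myRG \
+  --network-manager-name myAVNM \
+  --deployment-types Connectivity \
+  --regions eastus
+```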
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deployment-in-progress.png" alt-text="Screenshot of deployment in progress in deployment list.":::

## <a name = "goalstate"></a> Goal state model
-When you commit a deployment of configuration(s), you're describing the goal state of the configuration you want as an end result. For example, when you commit configurations named *Config1* and *Config2* into a region, these two configurations gets applied. If you decided to commit configuration named *Config1* and *Config3* into the same region, *Config2* would then be removed and *Config3* would be added. To remove all configurations, you would deploy a **None** configuration against the region(s) you no longer wish to have any configurations applied.
+When you commit a deployment of configuration(s), you're describing the goal state of the configurations you want applied in the regions you selected. For example, when you commit configurations named *Config1* and *Config2* into a region, these two configurations get applied and become that region's goal state. If you then commit configurations named *Config1* and *Config3* into the same region, *Config2* would be removed and *Config3* would be added. To remove all configurations, you would deploy a **None** configuration against the region(s) you no longer wish to have any configurations applied to. A hedged CLI sketch of such a commit follows.
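+The sketch assumes the `virtual-network-manager` Azure CLI extension; the manager name and configuration IDs are placeholders:
+
+```azurecli
+# Hedged sketch: commit Config1 and Config3 to a region; any previously committed
+# configuration not listed (such as Config2) is removed from that region's goal state.
+az network manager post-commit \
+  --resource-group myRG \
+  --network-manager-name myAVNM \
+  --commit-type Connectivity \
+  --target-locations eastus \
+  --configuration-ids \
+    "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/networkManagers/myAVNM/connectivityConfigurations/Config1" \
+    "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/networkManagers/myAVNM/connectivityConfigurations/Config3"
+```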
## Next steps
virtual-network-manager Concept Network Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-groups.md
Previously updated : 06/09/2022 Last updated : 07/06/2022
Dynamic membership gives you the flexibility of selecting multiple virtual netwo
When you create a network group, an Azure policy is created so that Azure Virtual Network Manager gets notified about changes made to virtual network membership. The policies defined are visible to you, but they aren't currently editable by users. Creating, changing, and deleting Azure Policy definitions and assignments for network groups is currently only possible through Azure Virtual Network Manager.
-To create an Azure policy initiative definition and assignment for Azure Network Manager resources, create and deploy a network group with the necessary configurations. To update an existing Azure policy initiative definition or corresponding assignment, you'll need to change and deploy changes to the network group within the Azure Virtual Network Manager resource. To delete an Azure policy initiative definition and assignment, you'll need to undeploy and delete the Azure Virtual Network Manager resources associated with your policy. This may include undeploying a configuration, deleting a configuration, and deleting a network group. For more information on deletion, review the Azure Virtual Network Manager [checklist for removing components](concept-remove-components-checklist.md).
+To create an Azure policy initiative definition and assignment for Azure Network Manager resources, create and deploy a network group with the necessary configurations. To update an existing Azure policy initiative definition or corresponding assignment, you'll need to change and deploy changes to the network group within the Azure Virtual Network Manager resource. To delete an Azure policy initiative definition and assignment, you'll need to undeploy and delete the Azure Virtual Network Manager resources associated with your policy. This may include removing a configuration, deleting a configuration, and deleting a network group. For more information on deletion, review the Azure Virtual Network Manager [checklist for removing components](concept-remove-components-checklist.md).
## Next steps
virtual-network-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/overview.md
Previously updated : 11/02/2021 Last updated : 07/06/2022 #Customer intent: As an IT administrator, I want to learn about Azure Virtual Network Manager and what I can use it for.
# What is Azure Virtual Network Manager (Preview)?
-Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define network groups to identify and logically segment your virtual networks. Then you can determine the connectivity and security configurations you want and apply them across all the selected virtual networks in network groups at once.
+Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define network groups to identify and logically segment your virtual networks. Then you can determine the connectivity and security configurations you want and apply them across all the selected virtual networks in network groups at once.
> [!IMPORTANT] > Azure Virtual Network Manager is currently in public preview.
A connectivity configuration enables you to create a mesh or a hub-and-spoke net
* Highly scalable and highly available service with redundancy and replication across the globe.
-* Ability to create global network security rules that override network security group rules.
+* Ability to create network security rules that override network security group rules.
* Low latency and high bandwidth between resources in different virtual networks using virtual network peering.
virtual-network What Is Ip Address 168 63 129 16 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/what-is-ip-address-168-63-129-16.md
The public IP address 168.63.129.16 is used in all regions and all national clou
- 168.63.129.16 can provide DNS services to the VM. If this is not desired, outbound traffic to 168.63.129.16 ports 53/udp and 53/tcp can be blocked in the local firewall on the VM.
- By default DNS communication is not subject to the configured network security groups unless specifically targeted leveraging the [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags) service tag. To block DNS traffic to Azure DNS through NSG, create an outbound rule to deny traffic to [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags), and specify "*" as "Destination port ranges" and "Any" as protocol.
+ By default DNS communication is not subject to the configured network security groups unless specifically targeted using the [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags) service tag. To block DNS traffic to Azure DNS through an NSG, create an outbound rule to deny traffic to [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags), and specify "Any" as "Source", "*" as "Destination port ranges", "Any" as protocol, and "Deny" as action, as sketched after this list.
- When the VM is part of a load balancer backend pool, [health probe](../load-balancer/load-balancer-custom-probe-overview.md) communication should be allowed to originate from 168.63.129.16. The default network security group configuration has a rule that allows this communication. This rule leverages the [AzureLoadBalancer](../virtual-network/service-tags-overview.md#available-service-tags) service tag. If desired this traffic can be blocked by configuring the network security group however this will result in probes that fail.
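A hedged Azure CLI equivalent of the deny rule described in the DNS bullet above; the NSG name, resource group, and priority are placeholder values:

```azurecli
# Deny all outbound traffic to the AzurePlatformDNS service tag.
# myResourceGroup, myNsg, and the priority value are placeholders.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name DenyAzurePlatformDNS \
  --priority 100 \
  --direction Outbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes '*' \
  --destination-address-prefixes AzurePlatformDNS \
  --destination-port-ranges '*'
```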
virtual-wan Azure Vpn Client Optional Configurations Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/azure-vpn-client-optional-configurations-windows.md
+
+ Title: 'Azure VPN Client optional configuration steps: OpenVPN protocol - Windows'
+
+description: Learn how to configure the Azure VPN Client optional configuration parameters for P2S OpenVPN connections.
+++ Last updated : 07/06/2022+++
+# Configure Azure VPN Client optional settings - OpenVPN protocol - Windows
+
+This article helps you configure optional settings for an Azure VPN Client installed on a Windows computer.
+
+* For information about installing the Azure VPN Client, see [Configure the Azure VPN client - Windows](openvpn-azure-ad-client.md).
+
+* For information about how to download the VPN client profile configuration file (xml file), see [Download a global or hub-based profile](global-hub-profile.md).
+
+> [!NOTE]
+> The Azure VPN Client is only supported for OpenVPN® protocol connections.
+>
+
+## <a name="xml"></a>Edit and import VPN client profile configuration files
+
+The steps in this article require you to modify and import the Azure VPN Client profile configuration file. To work with VPN client profile configuration files (xml files), do the following:
+
+1. Locate the profile configuration file and open it using the editor of your choice.
+1. Modify the file as necessary, then save your changes.
+1. Import the file to configure the Azure VPN client.
+
+You can import the file using these methods:
+
+* Import using the Azure VPN Client interface. Open the Azure VPN Client and click **+** and then **Import**. Locate the modified xml file, configure any additional settings in the Azure VPN Client interface (if necessary), then click **Save**.
+
+* Import the profile from a command-line prompt. Add the downloaded **azurevpnconfig.xml** file to the **%userprofile%\AppData\Local\Packages\Microsoft.AzureVpn_8wekyb3d8bbwe\LocalState** folder, then run the following command. To force the import, use the **-f** switch.
+
+ ```cmd
+ azurevpn -i azurevpnconfig.xml
+ ```
+
+## DNS
+
+### <a name="add-suffix"></a>Add DNS suffixes
+
+Modify the downloaded profile xml file and add the **\<dnssuffixes>\<dnssuffix> \</dnssuffix>\</dnssuffixes>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <dnssuffixes>
+ <dnssuffix>.mycorp.com</dnssuffix>
+ <dnssuffix>.xyz.com</dnssuffix>
+ <dnssuffix>.etc.net</dnssuffix>
+ </dnssuffixes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+### <a name="custom-dns"></a>Add custom DNS servers
+
+Modify the downloaded profile xml file and add the **\<dnsservers>\<dnsserver> \</dnsserver>\</dnsservers>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <dnsservers>
+ <dnsserver>x.x.x.x</dnsserver>
+ <dnsserver>y.y.y.y</dnsserver>
+ </dnsservers>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+> [!NOTE]
+> The OpenVPN Azure AD client utilizes DNS Name Resolution Policy Table (NRPT) entries, which means DNS servers will not be listed under the output of `ipconfig /all`. To confirm your in-use DNS settings, please consult [Get-DnsClientNrptPolicy](/powershell/module/dnsclient/get-dnsclientnrptpolicy) in PowerShell.
+>
+
+## Routing
+
+### <a name="custom-routes"></a>Add custom routes
+
+Modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<destination> \</destination>\<mask> \</mask>\</route>\</includeroutes>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <includeroutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ </includeroutes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+### <a name="forced-tunneling"></a>Direct all traffic to the VPN tunnel (force tunnel)
+
+You can include 0/0 if you're using the Azure VPN Client version 2.1900.39.0 or higher.
+
+Modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<destination> \</destination>\<mask> \</mask>\</route>\</includeroutes>** tags. Make sure to update the version number to **2**.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+ <includeroutes>
+ <route>
+ <destination>0.0.0.0</destination><mask>0</mask>
+ </route>
+ </includeroutes>
+ </clientconfig>
+
+<version>2</version>
+</azvpnprofile>
+```
+
+### <a name="exclude-routes"></a>Block (exclude) routes
+
+Modify the downloaded profile xml file and add the **\<excluderoutes>\<route>\<destination> \</destination>\<mask> \</mask>\</route>\</excluderoutes>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <excluderoutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ </excluderoutes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+## Certificates
+
+This section applies to certificate authentication clients.
+
+### <a name="multi-cert"></a>Specify multiple certificates
+
+If your virtual WAN has two hubs that are each configured for P2S User VPN and use the same VPN configuration, and that configuration uses multiple certificates, you can now configure the VPN clients for multiple certificates. This means that if one certificate can't be used for any reason, the other certificate can still be used for authentication. Previously, you couldn't configure the client with the settings for both certificates.
+
+You can configure multiple certificate support on the client side by either using the Azure VPN Client interface (version 2.1963.44.0 or higher), or by modifying the xml profile to include multiple certificate tags.
+
+You need to first download the User VPN profile in order to obtain the necessary settings. Go to the **Virtual WAN -> User VPN configurations** page. Select the User VPN configuration used by both hubs, then select **Download virtual WAN user VPN profile** to download the global user VPN profile (rather than the hub profile). The files you download contain the root end certificates.
+
+* To configure multiple certificates directly in the Azure VPN Client, specify multiple certificates when you add a VPN connection.
+
+* To configure the Azure VPN client using the profile xml file, modify the xml file to include multiple certificates, then import the file either directly in the Azure VPN client, or from the command line.
+
+ ```xml
+ </protocolconfig>
+ <serverlist>
+ <ServerEntry>
+ <displayname
+ i:nil="true" />
+ <fqdn>wan.kycyz81dpw483xnf3fg62v24f.vpn.azure.com</fqdn>
+ </ServerEntry>
+ </serverlist>
+ <servervalidation>
+ <cert>
+ <hash>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436</hash>
+ <issuer
+ i:nil="true" />
+ </cert>
+ <cert>
+ <hash>59470697201baejC4B2D7D66D40C6DD2FB19C5436</hash>
+ <issuer
+ i:nil="true" />
+ </cert>
+ <cert>
+ <hash>cab20a7f63f00f2bae76202gdfe36db3a03a9cb9</hash>
+ <issuer
+ i:nil="true" />
+ </cert>
+ ```
+
+## Next steps
+
+For more information, see [Create an Azure Active Directory tenant for P2S Open VPN connections that use Azure AD authentication](openvpn-azure-ad-tenant.md).
virtual-wan Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-client.md
These steps help you configure your connection to connect automatically with Alw
![Screenshot shows the results of the diagnosis.](./media/openvpn-azure-ad-client/diagnose/diagnose4.jpg)
-## FAQ
-
-### <a name="add-suffix"></a>How do I add DNS suffixes to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<dnssuffixes>\<dnssufix> \</dnssufix>\</dnssuffixes>** tags.
-
-```xml
-<azvpnprofile>
-<clientconfig>
-
- <dnssuffixes>
- <dnssuffix>.mycorp.com</dnssuffix>
- <dnssuffix>.xyz.com</dnssuffix>
- <dnssuffix>.etc.net</dnssuffix>
- </dnssuffixes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### <a name="custom-dns"></a>How do I add custom DNS servers to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<dnsservers>\<dnsserver> \</dnsserver>\</dnsservers>** tags.
-
-```xml
-<azvpnprofile>
-<clientconfig>
-
- <dnsservers>
- <dnsserver>x.x.x.x</dnsserver>
- <dnsserver>y.y.y.y</dnsserver>
- </dnsservers>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-> [!NOTE]
-> The OpenVPN Azure AD client utilizes DNS Name Resolution Policy Table (NRPT) entries, which means DNS servers will not be listed under the output of `ipconfig /all`. To confirm your in-use DNS settings, please consult [Get-DnsClientNrptPolicy](/powershell/module/dnsclient/get-dnsclientnrptpolicy) in PowerShell.
->
-
-### <a name="multi-cert"></a>Can I specify multiple certificates for the VPN client?
-
-If you have 2 hubs for your virtual WAN that are each configured for P2S User VPN and use the same VPN configuration, and that configuration is configured to use multiple certificates, you can now configure the VPN clients for multiple certificates. This means that if one certificate can't be used for any reason, the other certificate can still be used for authentication. Previously, you couldn't configure the client with the settings for both certificates.
-
-To configure, go to the Virtual WAN -> User VPN configurations page. Select the User VPN configuration used by both hubs, then select **Download virtual WAN user VPN profile** to download the global user VPN profile (rather than the hub profile). The files you download contain the root end certificates. You can configure multiple certificate support on the client side by either using the Azure VPN Client interface (version 2.1963.44.0 or higher), or by modifying the xml profile to include multiple certificate tags.
-
-Example:
-
-```xml
- </protocolconfig>
- <serverlist>
- <ServerEntry>
- <displayname
- i:nil="true" />
- <fqdn>wan.kycyz81dpw483xnf3fg62v24f.vpn.azure.com</fqdn>
- </ServerEntry>
- </serverlist>
- <servervalidation>
- <cert>
- <hash>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436</hash>
- <issuer
- i:nil="true" />
- </cert>
- <cert>
- <hash>59470697201baejC4B2D7D66D40C6DD2FB19C5436</hash>
- <issuer
- i:nil="true" />
- </cert>
- <cert>
- <hash>cab20a7f63f00f2bae76202gdfe36db3a03a9cb9</hash>
- <issuer
- i:nil="true" />
- </cert>
-```
-
-### <a name="custom-routes"></a>How do I add custom routes to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
-
-```xml
-<azvpnprofile>
-<clientconfig>
-
- <includeroutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- </includeroutes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### <a name="force-tunneling"></a>How do I direct all traffic to the VPN tunnel (force tunnel)?
-
-You can modify the downloaded profile XML file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
-
-```xml
-<azvpnprofile>
-<clientconfig>
-
- <includeroutes>
- <route>
- <destination>0.0.0.0</destination><mask>1</mask>
- </route>
- <route>
- <destination>128.0.0.0</destination><mask>1</mask>
- </route>
- </includeroutes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### <a name="exclude-routes"></a>How do I block (exclude) routes from the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<excluderoutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</excluderoutes>** tags.
-
-```xml
-<azvpnprofile>
-<clientconfig>
-
- <excluderoutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- </excluderoutes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### <a name="import-profile"></a>Can I import the profile from a command-line prompt?
-
-You can import the profile from a command-line prompt by placing the downloaded **azurevpnconfig.xml** file in the **%userprofile%\AppData\Local\Packages\Microsoft.AzureVpn_8wekyb3d8bbwe\LocalState** folder and running the following command:
-
-```xml
-azurevpn -i azurevpnconfig.xml
-```
-To force the import, use the **-f** switch.
+## Optional client settings
+You can configure optional settings for the Azure VPN Client, such as forced tunneling, excluded routes, DNS, and certificate authentication settings. For steps, see [Configure Azure VPN Client optional settings](azure-vpn-client-optional-configurations-windows.md).
## Next steps
vpn-gateway Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/packet-capture.md
For more information on parameter options, see [Stop-AzVirtualNetworkGatewayConn
- Packet capture data files are generated in PCAP format. Use Wireshark or other commonly available applications to open PCAP files.
- Packet captures aren't supported on policy-based gateways.
- If the `SASurl` parameter isn't configured correctly, the trace might fail with Storage errors. For examples of how to correctly generate an `SASurl` parameter, see [Stop-AzVirtualNetworkGatewayPacketCapture](/powershell/module/az.network/stop-azvirtualnetworkgatewaypacketcapture).
+- If you are configuring a User Delegated SAS, make sure the user account is granted the proper RBAC permissions on the storage account, such as Storage Blob Data Owner. A hedged CLI sketch follows.
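+
+The sketch assumes placeholder account, subscription, resource group, and storage account names:
+
+```azurecli
+# Hedged sketch: grant Storage Blob Data Owner on the storage account used
+# for packet capture output. All names and IDs are placeholders.
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Storage Blob Data Owner" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```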