Updates from: 11/07/2022 02:05:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
If a user and their manager are both in scope for provisioning, the service prov
The global reader role is unable to read the provisioning configuration. Create a custom role with the `microsoft.directory/applications/synchronization/standard/read` permission to read the provisioning configuration from the Azure portal.
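As a sketch of one way to create such a role, the Microsoft Graph role management API can be called from the Azure CLI. The display name and description below are illustrative placeholders, not values prescribed by the article:

```azurecli
# Hypothetical example: a custom directory role limited to reading provisioning configuration.
az rest --method POST \
  --uri https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions \
  --headers "Content-Type=application/json" \
  --body '{
    "displayName": "Provisioning Configuration Reader",
    "description": "Can read application provisioning configuration",
    "isEnabled": true,
    "rolePermissions": [
      { "allowedResourceActions": [ "microsoft.directory/applications/synchronization/standard/read" ] }
    ]
  }'
```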
+#### Microsoft Azure Government Cloud
+Credentials, including the secret token, notification email, and SSO certificate notification emails, together have a 1-KB limit in the Microsoft Azure Government Cloud.
+ ## On-premises application provisioning The following information is a current list of known limitations with the Azure AD ECMA Connector Host and on-premises application provisioning.
The following attributes and objects aren't supported:
The ECMA host doesn't support updating the password in the connectivity page of the wizard. Create a new connector when changing the password. ## Next steps
-[How provisioning works](how-provisioning-works.md)
+[How provisioning works](how-provisioning-works.md)
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Previously updated : 08/26/2022 Last updated : 11/04/2022
You can define one or more matching attribute(s) and prioritize them based on th
- The agent must communicate with both Azure and your application, so the placement of the agent affects the latency of those two connections. You can minimize the latency of the end-to-end traffic by optimizing each network connection. Each connection can be optimized by: - Reducing the distance between the two ends of the hop. - Choosing the right network to traverse. For example, traversing a private network rather than the public internet might be faster because of dedicated links.-
+- The agent and ECMA Host rely on a certificate for communication. The self-signed certificate generated by the ECMA Host should only be used for testing purposes. The self-signed certificate expires in two years by default and can't be revoked. Microsoft recommends using a certificate from a trusted CA for production use cases.
## Provisioning agent questions
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 10/17/2022 Last updated : 11/04/2022
Applications that support the SCIM profile described in this article can be conn
The following screenshot shows the Azure AD application gallery:
- ![Screenshot shows the Azure AD application gallery.](media/use-scim-to-provision-users-and-groups/scim-figure-2b-1.png)
-
-
- > [!NOTE]
- > If you are using the old app gallery experience, follow the screen guide below.
-
- The following screenshot shows the Azure AD old app gallery experience:
-
- ![Screenshot shows the Azure AD old app gallery experience](media/use-scim-to-provision-users-and-groups/scim-figure-2a.png)
-
+ ![Screenshot shows the Azure AD application gallery.](media/use-scim-to-provision-users-and-groups/scim-figure-2b-1.png)
1. In the app management screen, select **Provisioning** in the left panel. 1. In the **Provisioning Mode** menu, select **Automatic**.
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
# Single-page application: Acquire a token to call an API
-The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a non-expired access token exists and returns it. If no access token is found for the given parameters, it will throw an `InteractionRequiredAuthError`, which should be handled with an interactive token request method (`acquireTokenPopup` or `acquireTokenRedirect`). If an access token is found but it's expired, it attempts to use its refresh token to get a fresh access token. If the refresh token's 24-hour lifetime has also expired, MSAL.js will open a hidden iframe to silently request a new authorization code by leveraging the existing active session with Azure AD (if any), which will then be exchanged for a fresh set of tokens (access _and_ refresh tokens). For more information about single sign-on (SSO) session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md). For more information on MSAL.js cache lookup policy, see: [Acquiring an Access Token](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/acquire-token.md#acquiring-an-access-token).
+The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a non-expired access token exists and returns it. If no access token is found or the access token found has expired, it attempts to use its refresh token to get a fresh access token. If the refresh token's 24-hour lifetime has also expired, MSAL.js will open a hidden iframe to silently request a new authorization code by leveraging the existing active session with Azure AD (if any), which will then be exchanged for a fresh set of tokens (access _and_ refresh tokens). For more information about single sign-on (SSO) session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md). For more information on MSAL.js cache lookup policy, see: [Acquiring an Access Token](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/acquire-token.md#acquiring-an-access-token).
The silent token requests to Azure AD might fail for reasons like a password change or updated conditional access policies. More often, failures are due to the refresh token's 24-hour lifetime expiring and [the browser blocking third party cookies](reference-third-party-cookies-spas.md), which prevents the use of hidden iframes to continue authenticating the user. In these cases, you should invoke one of the interactive methods (which may prompt the user) to acquire tokens:
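A minimal sketch of this silent-then-interactive pattern with `@azure/msal-browser` (the client ID, scopes, and account lookup are placeholders, not values taken from the article):

```typescript
import {
  PublicClientApplication,
  InteractionRequiredAuthError,
} from "@azure/msal-browser";

const msalInstance = new PublicClientApplication({
  auth: { clientId: "<client-id>" }, // placeholder
});

async function getAccessToken(): Promise<string> {
  const account = msalInstance.getAllAccounts()[0]; // assumes the user already signed in
  const request = { scopes: ["User.Read"], account };
  try {
    // Checks the cache first, then falls back to the refresh token or a hidden iframe.
    const result = await msalInstance.acquireTokenSilent(request);
    return result.accessToken;
  } catch (error) {
    if (error instanceof InteractionRequiredAuthError) {
      // Silent acquisition failed; one of the interactive methods is required.
      const result = await msalInstance.acquireTokenPopup(request);
      return result.accessToken;
    }
    throw error;
  }
}
```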
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
description: The What's new release notes in the Overview section of this conten
- Last updated 1/31/2022-+
With Azure Active Directory (Azure AD) Access Reviews, you can create a download
**Product capability:** Identity Security & Protection
-Azure AD Identity Protection is extending its core capabilities of detecting, investigating, and remediating identity-based risk to workload identities. This allows organizations to better protect their applications, service principals, and managed identities. We are also extending Conditional Access so you can block at-risk workload identities. [Learn more](../identity-protection/concept-workload-identity-risk.md)
+Azure AD Identity Protection is extending its core capabilities of detecting, investigating, and remediating identity-based risk to workload identities. This allows organizations to better protect their applications, service principals, and managed identities. We're also extending Conditional Access so you can block at-risk workload identities. [Learn more](../identity-protection/concept-workload-identity-risk.md)
We previously announced in April 2020, a new combined registration experience en
**Service category:** Microsoft Authenticator App **Product capability:** User Authentication
-To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign in screen when approving a multi-factor authentication notification in the Authenticator app. This feature adds an extra security measure to the Microsoft Authenticator app. [Learn more](../authentication/how-to-mfa-number-match.md).
+To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving a multi-factor authentication notification in the Authenticator app. This feature adds an extra security measure to the Microsoft Authenticator app. [Learn more](../authentication/how-to-mfa-number-match.md).
We previously announced in April 2020, a new combined registration experience en
**Service category:** Authentications (Logins) **Product capability:** User Authentication
-A problematic interaction between Windows and a local Active Directory Federation Services (ADFS) instance can result in users attempting to sign into another account, but be silently signed into their existing account instead, with no warning. For federated IdPs such as ADFS, that support the [prompt=login](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) pattern, Azure AD will now trigger a fresh sign-in at ADFS when a user is directed to ADFS with a sign in hint. This ensures that the user is signed into the account they requested, rather than being silently signed into the account they're already signed in with.
+A problematic interaction between Windows and a local Active Directory Federation Services (ADFS) instance can result in users attempting to sign into another account, but be silently signed into their existing account instead, with no warning. For federated IdPs such as ADFS, that support the [prompt=login](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) pattern, Azure AD will now trigger a fresh sign-in at ADFS when a user is directed to ADFS with a sign-in hint. This ensures that the user is signed into the account they requested, rather than being silently signed into the account they're already signed in with.
For more information, see the [change notice](../develop/reference-breaking-changes.md).
We've released a new major version of Azure Active Directory Connect. This versi
-### Public Preview - Azure AD single Sign-on and device-based Conditional Access support in Firefox on Windows 10
+### Public Preview - Azure AD single sign-on and device-based Conditional Access support in Firefox on Windows 10
**Type:** New feature **Service category:** Authentications (Logins)
You can now view your users' last sign-in date and time stamp on the Azure porta
Microsoft Silverlight will reach its end of support on October 12, 2021. This change only impacts customers using the Microsoft BHOLD Suite, and doesn't impact other Microsoft Identity Manager scenarios. For more information, see [Silverlight End of Support](https://support.microsoft.com/windows/silverlight-end-of-support-0a3be3c7-bead-e203-2dfd-74f0a64f1788).
-Users who haven't installed Microsoft Silverlight in their browser can't use the BHOLD Suite modules which require Silverlight. This includes the BHOLD Model Generator, BHOLD FIM Self-service integration, and BHOLD Analytics. Customers with an existing BHOLD deployment of one or more of those modules should plan to uninstall those modules from their BHOLD server computers by October 2021. Also, they should plan to uninstall Silverlight from any user computers that were previously interacting with that BHOLD deployment.
+Users who haven't installed Microsoft Silverlight in their browser can't use the BHOLD Suite modules, which require Silverlight. This includes the BHOLD Model Generator, BHOLD FIM Self-service integration, and BHOLD Analytics. Customers with an existing BHOLD deployment of one or more of those modules should plan to uninstall those modules from their BHOLD server computers by October 2021. Also, they should plan to uninstall Silverlight from any user computers that were previously interacting with that BHOLD deployment.
Azure AD customers can now easily design and issue verifiable credentials. Verif
**Service category:** User Authentication **Product capability:** Authentications (Logins)
-As a security improvement, the [device code flow](../develop/v2-oauth2-device-code.md) has been updated to include an another prompt, which validates that the user is signing into the app they expect. The rollout is planned to start in June and expected to be complete by June 30.
+As a security improvement, the [device code flow](../develop/v2-oauth2-device-code.md) has been updated to include another prompt, which validates that the user is signing into the app they expect. The rollout is planned to start in June and expected to be complete by June 30.
To help prevent phishing attacks where an attacker tricks the user into signing into a malicious application, the following prompt is being added: "Are you trying to sign in to [application display name]?". All users will see this prompt while signing in using the device code flow. As a security measure, it can't be removed or bypassed. [Learn more](../develop/reference-breaking-changes.md#the-device-code-flow-ux-will-now-include-an-app-confirmation-prompt).
You can create customized experiences for these external users, including collec
**Service category:** B2C - Consumer Identity Management **Product capability:** B2B/B2C
-B2C Phone Sign-up and Sign-in using a built-in policy enable IT administrators and developers of organizations to allow their end-users to sign in and sign-up using a phone number in user flows. With this feature, disclaimer links such as privacy policy and terms of use can be customized and shown on the page before the end-user proceeds to receive the one-time passcode via text message. [Learn more](../../active-directory-b2c/phone-authentication-user-flows.md).
+B2C Phone Sign-up and Sign-in using a built-in policy enable IT administrators and developers of organizations to allow their end-users to sign in and sign up using a phone number in user flows. With this feature, disclaimer links such as privacy policy and terms of use can be customized and shown on the page before the end-user proceeds to receive the one-time passcode via text message. [Learn more](../../active-directory-b2c/phone-authentication-user-flows.md).
For more information, see [What is sign-in diagnostic in Azure AD?](../reports-m
Azure AD Connect cloud sync now has an updated agent (version 1.1.359). For more details on agent updates, including bug fixes, check out the [version history](../cloud-sync/reference-version-history.md). With the updated agent, cloud sync customers can use gMSA cmdlets to set and reset their gMSA permissions at a granular level. In addition, we've changed the limit of syncing members using group scope filtering from 1,499 to 50,000 (50K) members.
-Check out the newly available [expression builder](../cloud-sync/how-to-expression-builder.md#deploy-the-expression) for cloud sync, which, helps you build complex expressions as well as simple expressions when you do transformations of attribute values from AD to Azure AD using attribute mapping.
+Check out the newly available [expression builder](../cloud-sync/how-to-expression-builder.md#deploy-the-expression) for cloud sync, which helps you build both simple and complex expressions when you transform attribute values from AD to Azure AD using attribute mapping.
For more information about how to better secure your organization by using autom
**Service category:** MS Graph **Product capability:** B2B/B2C
-[MS Graph API for the Company Branding](/graph/api/resources/organizationalbrandingproperties) is available for the Azure AD or Microsoft 365 login experience to allow the management of the branding parameters programmatically.
+[MS Graph API for the Company Branding](/graph/api/resources/organizationalbrandingproperties) is available for the Azure AD or Microsoft 365 sign-in experience to allow the management of the branding parameters programmatically.
Azure AD Application Proxy native support for header-based authentication is now
**Product capability:** Identity Security & Protection
-Two-way SMS for MFA Server was originally deprecated in 2018, and will not be supported after February 24, 2021. Administrators should enable another method for users who still use two-way SMS.
+Two-way SMS for MFA Server was originally deprecated in 2018, and won't be supported after February 24, 2021. Administrators should enable another method for users who still use two-way SMS.
Email notifications and Azure portal Service Health notifications were sent to affected admins on December 8, 2020 and January 28, 2021. The alerts went to the Owner, Co-Owner, Admin, and Service Admin RBAC roles tied to the subscriptions. [Learn more](../authentication/how-to-authentication-two-way-sms-unsupported.md).
Users can now create their own groupings of apps on the My Apps app launcher. Th
Microsoft Authenticator provides multifactor authentication and account management capabilities, and will now also autofill passwords on sites and apps users visit on their mobile devices (iOS and Android).
-To use autofill on Authenticator, users need to add their personal Microsoft account to Authenticator and use it to sync their passwords. Work or school accounts cannot be used to sync passwords at this time. [Learn more](../user-help/user-help-auth-app-faq.md#autofill-for-it-admins).
+To use autofill on Authenticator, users need to add their personal Microsoft account to Authenticator and use it to sync their passwords. Work or school accounts can't be used to sync passwords at this time. [Learn more](../user-help/user-help-auth-app-faq.md#autofill-for-it-admins).
Some common delegation scenarios:
- the creation of Azure AD Gallery applications - update and read of basic SAML Configurations for SAML based single sign-on applications - management of signing certificates for SAML based single sign-on applications-- update of expiring sign in certificates notification email addresses for SAML based single sign-on applications
+- update of expiring sign-in certificates notification email addresses for SAML based single sign-on applications
- update of the SAML token signature and sign-in algorithm for SAML based single sign-on applications - create, delete, and update of user attributes and claims for SAML-based single sign-on applications - ability to turn on, off, and restart provisioning jobs
For more information, see [Automate user provisioning to SaaS applications with
-### Public Preview - Email Sign-In with ProxyAddresses now deployable via Staged Rollout
+### Public Preview - Email Sign in with ProxyAddresses now deployable via Staged Rollout
**Type:** New feature **Service category:** Authentications (Logins)
The new service also aims to complete member addition and removal because of att
## October 2020
-### Azure AD On-Premises Hybrid Agents Impacted by Azure TLS Certificate Changes
+### Azure AD on-premises Hybrid Agents Impacted by Azure TLS Certificate Changes
**Type:** Plan for change **Service category:** N/A
Previously, onboarding to Privileged Identity Management (PIM) required user con
Onboarding to PIM does not have any direct adverse effect on your tenant. You can expect the following changes: - Additional assignment options such as active vs. eligible with start and end time when you make an assignment in either PIM or Azure AD roles and administrators blade. - Additional scoping mechanisms, like Administrative Units and custom roles, introduced directly into the assignment experience. -- If you are a global administrator or privileged role administrator, you may start getting a few additional emails like the PIM weekly digest.
+- If you're a global administrator or privileged role administrator, you may start getting a few additional emails like the PIM weekly digest.
- You might also see ms-pim service principal in the audit log related to role assignment. This expected change shouldn't affect your regular workflow. For more information, see [Start using Privileged Identity Management](../privileged-identity-management/pim-getting-started.md).
A [hotfix rollup package (build 4.6.263.0)](https://support.microsoft.com/help/4
With the GA release of the client apps condition in Conditional Access, new policies will now apply by default to all client applications. This includes legacy authentication clients. Existing policies will remain unchanged, but the *Configure Yes/No* toggle will be removed from existing policies to easily see which client apps are applied to by the policy.
-When creating a new policy, make sure to exclude users and service accounts that are still using legacy authentication; if you don't, they will be blocked. [Learn more](../conditional-access/concept-conditional-access-conditions.md).
+When creating a new policy, make sure to exclude users and service accounts that are still using legacy authentication; if you don't, they'll be blocked. [Learn more](../conditional-access/concept-conditional-access-conditions.md).
When creating a new policy, make sure to exclude users and service accounts that
**Service category:** App Provisioning **Product capability:** Identity Lifecycle Management
-The Azure AD provisioning service leverages the SCIM standard for integrating with applications. Our implementation of the SCIM standard is evolving, and we expect to make changes to our behavior around how we perform PATCH operations as well as set the property "active" on a resource. [Learn more](../app-provisioning/application-provisioning-config-problem-scim-compatibility.md).
+The Azure AD provisioning service uses the SCIM standard for integrating with applications. Our implementation of the SCIM standard is evolving, and we expect to make changes to our behavior around how we perform PATCH operations and set the property "active" on a resource. [Learn more](../app-provisioning/application-provisioning-config-problem-scim-compatibility.md).
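For context, a SCIM 2.0 PATCH request that deactivates a user has the following general shape (a generic RFC 7644 example; the exact payload the provisioning service emits depends on the compatibility behavior described in the linked article):

```json
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  "Operations": [
    {
      "op": "replace",
      "path": "active",
      "value": false
    }
  ]
}
```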
The Azure AD provisioning service leverages the SCIM standard for integrating wi
**Service category:** Group Management **Product capability:** Collaboration
-Owner settings on Groups general setting page can be configured to restrict owner assignment privileges to a limited group of users in the Azure Admin portal and Access Panel. We will soon have the ability to assign group owner privilege not only on these two UX portals but also enforce the policy on the backend to provide consistent behavior across endpoints, such as PowerShell and Microsoft Graph.
+Owner settings on Groups general setting page can be configured to restrict owner assignment privileges to a limited group of users in the Azure Admin portal and Access Panel. We'll soon have the ability to assign group owner privilege not only on these two UX portals but also enforce the policy on the backend to provide consistent behavior across endpoints, such as PowerShell and Microsoft Graph.
-We will start to disable the current setting for the customers who aren't using it and will offer an option to scope users for group owner privilege in the next few months. For guidance on updating group settings, see Edit your group information using [Azure Active Directory](./active-directory-groups-settings-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context).
+We'll start to disable the current setting for the customers who aren't using it and will offer an option to scope users for group owner privilege in the next few months. For guidance on updating group settings, see Edit your group information using [Azure Active Directory](./active-directory-groups-settings-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context).
Admins can now see whether a Windows authentication used Windows Hello for Busin
**Service category:** App Provisioning **Product capability:** Identity Lifecycle Management
-Previously, when a group changed from "in-scope" to "out-of-scope" and an admin clicked restart before the change was completed, the group object was not being deleted. Now the group object will be deleted from the target application when it goes out of scope (disabled, deleted, unassigned, or did not pass scoping filter). [Learn more](../app-provisioning/how-provisioning-works.md#incremental-cycles).
+Previously, when a group changed from "in-scope" to "out-of-scope" and an admin clicked restart before the change was completed, the group object wasn't being deleted. Now the group object will be deleted from the target application when it goes out of scope (disabled, deleted, unassigned, or didn't pass scoping filter). [Learn more](../app-provisioning/how-provisioning-works.md#incremental-cycles).
For more information about this issue, see [Azure Active Directory Authenticatio
In August 2019, we've added these 26 new apps with Federation support to the app gallery:
-[Civic Platform](../saas-apps/civic-platform-tutorial.md), [Amazon Business](../saas-apps/amazon-business-tutorial.md), [ProNovos Ops Manager](../saas-apps/pronovos-ops-manager-tutorial.md), [Cognidox](../saas-apps/cognidox-tutorial.md), [Viareport's Inativ Portal (Europe)](../saas-apps/viareports-inativ-portal-europe-tutorial.md), [Azure Databricks](https://azure.microsoft.com/services/databricks), [Robin](../saas-apps/robin-tutorial.md), [Academy Attendance](../saas-apps/academy-attendance-tutorial.md), [Priority Matrix](https://sync.appfluence.com/pmwebng/), [Cousto MySpace](https://cousto.platformers.be/account/login), [Uploadcare](https://uploadcare.com/accounts/signup/), [Carbonite Endpoint Backup](../saas-apps/carbonite-endpoint-backup-tutorial.md), [CPQSync by Cincom](../saas-apps/cpqsync-by-cincom-tutorial.md), [Chargebee](../saas-apps/chargebee-tutorial.md), [deliver.media™ Portal](https://portal.deliver.media), [Frontline Education](../saas-apps/frontline-education-tutorial.md), [F5](https://www.f5.com/products/security/access-policy-manager), [stashcat AD connect](https://www.stashcat.com), [Blink](../saas-apps/blink-tutorial.md), [Vocoli](../saas-apps/vocoli-tutorial.md), [ProNovos Analytics](../saas-apps/pronovos-analytics-tutorial.md), [Sigstr](../saas-apps/sigstr-tutorial.md), [Darwinbox](../saas-apps/darwinbox-tutorial.md), [Watch by Colors](../saas-apps/watch-by-colors-tutorial.md), [Harness](../saas-apps/harness-tutorial.md), [EAB Navigate Strategic Care](../saas-apps/eab-navigate-strategic-care-tutorial.md)
+[Civic Platform](../saas-apps/civic-platform-tutorial.md), [Amazon Business](../saas-apps/amazon-business-tutorial.md), [ProNovos Ops Manager](../saas-apps/pronovos-ops-manager-tutorial.md), [Cognidox](../saas-apps/cognidox-tutorial.md), [Viareport's Inativ Portal (Europe)](../saas-apps/viareports-inativ-portal-europe-tutorial.md), [Azure Databricks](https://azure.microsoft.com/services/databricks), [Robin](../saas-apps/robin-tutorial.md), [Academy Attendance](../saas-apps/academy-attendance-tutorial.md), [Cousto MySpace](https://cousto.platformers.be/account/login), [Uploadcare](https://uploadcare.com/accounts/signup/), [Carbonite Endpoint Backup](../saas-apps/carbonite-endpoint-backup-tutorial.md), [CPQSync by Cincom](../saas-apps/cpqsync-by-cincom-tutorial.md), [Chargebee](../saas-apps/chargebee-tutorial.md), [deliver.media™ Portal](https://portal.deliver.media), [Frontline Education](../saas-apps/frontline-education-tutorial.md), [F5](https://www.f5.com/products/security/access-policy-manager), [stashcat AD connect](https://www.stashcat.com), [Blink](../saas-apps/blink-tutorial.md), [Vocoli](../saas-apps/vocoli-tutorial.md), [ProNovos Analytics](../saas-apps/pronovos-analytics-tutorial.md), [Sigstr](../saas-apps/sigstr-tutorial.md), [Darwinbox](../saas-apps/darwinbox-tutorial.md), [Watch by Colors](../saas-apps/watch-by-colors-tutorial.md), [Harness](../saas-apps/harness-tutorial.md), [EAB Navigate Strategic Care](../saas-apps/eab-navigate-strategic-care-tutorial.md)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
For more information about the new cookies, see [Cookie settings for accessing o
In January 2019, we've added these 35 new apps with Federation support to the app gallery:
-[Firstbird](../saas-apps/firstbird-tutorial.md), [Folloze](../saas-apps/folloze-tutorial.md), [Talent Palette](../saas-apps/talent-palette-tutorial.md), [Infor CloudSuite](../saas-apps/infor-cloud-suite-tutorial.md), [Cisco Umbrella](../saas-apps/cisco-umbrella-tutorial.md), [Zscaler Internet Access Administrator](../saas-apps/zscaler-internet-access-administrator-tutorial.md), [Expiration Reminder](../saas-apps/expiration-reminder-tutorial.md), [InstaVR Viewer](../saas-apps/instavr-viewer-tutorial.md), [CorpTax](../saas-apps/corptax-tutorial.md), [Verb](https://app.verb.net/login), [OpenLattice](https://openlattice.com/#/), [TheOrgWiki](https://www.theorgwiki.com/signup), [Pavaso Digital Close](../saas-apps/pavaso-digital-close-tutorial.md), [GoodPractice Toolkit](../saas-apps/goodpractice-toolkit-tutorial.md), [Cloud Service PICCO](../saas-apps/cloud-service-picco-tutorial.md), [AuditBoard](../saas-apps/auditboard-tutorial.md), [iProva](../saas-apps/iprova-tutorial.md), [Workable](../saas-apps/workable-tutorial.md), [CallPlease](https://webapp.callplease.com/create-account/create-account.html), [GTNexus SSO System](../saas-apps/gtnexus-sso-module-tutorial.md), [CBRE ServiceInsight](../saas-apps/cbre-serviceinsight-tutorial.md), [Deskradar](../saas-apps/deskradar-tutorial.md), [Coralogixv](../saas-apps/coralogix-tutorial.md), [Signagelive](../saas-apps/signagelive-tutorial.md), [ARES for Enterprise](../saas-apps/ares-for-enterprise-tutorial.md), [K2 for Office 365](https://www.k2.com/O365), [Xledger](https://www.xledger.net/), [iDiD Manager](../saas-apps/idid-manager-tutorial.md), [HighGear](../saas-apps/highgear-tutorial.md), [Visitly](../saas-apps/visitly-tutorial.md), [Korn Ferry ALP](../saas-apps/korn-ferry-alp-tutorial.md), [Acadia](../saas-apps/acadia-tutorial.md), [Adoddle cSaas Platform](../saas-apps/adoddle-csaas-platform-tutorial.md)
+[Firstbird](../saas-apps/firstbird-tutorial.md), [Folloze](../saas-apps/folloze-tutorial.md), [Talent Palette](../saas-apps/talent-palette-tutorial.md), [Infor CloudSuite](../saas-apps/infor-cloud-suite-tutorial.md), [Cisco Umbrella](../saas-apps/cisco-umbrella-tutorial.md), [Zscaler Internet Access Administrator](../saas-apps/zscaler-internet-access-administrator-tutorial.md), [Expiration Reminder](../saas-apps/expiration-reminder-tutorial.md), [InstaVR Viewer](../saas-apps/instavr-viewer-tutorial.md), [CorpTax](../saas-apps/corptax-tutorial.md), [Verb](https://app.verb.net/login), [TheOrgWiki](https://www.theorgwiki.com/signup), [Pavaso Digital Close](../saas-apps/pavaso-digital-close-tutorial.md), [GoodPractice Toolkit](../saas-apps/goodpractice-toolkit-tutorial.md), [Cloud Service PICCO](../saas-apps/cloud-service-picco-tutorial.md), [AuditBoard](../saas-apps/auditboard-tutorial.md), [iProva](../saas-apps/iprova-tutorial.md), [Workable](../saas-apps/workable-tutorial.md), [CallPlease](https://webapp.callplease.com/create-account/create-account.html), [GTNexus SSO System](../saas-apps/gtnexus-sso-module-tutorial.md), [CBRE ServiceInsight](../saas-apps/cbre-serviceinsight-tutorial.md), [Deskradar](../saas-apps/deskradar-tutorial.md), [Coralogixv](../saas-apps/coralogix-tutorial.md), [Signagelive](../saas-apps/signagelive-tutorial.md), [ARES for Enterprise](../saas-apps/ares-for-enterprise-tutorial.md), [K2 for Office 365](https://www.k2.com/O365), [Xledger](https://www.xledger.net/), [iDiD Manager](../saas-apps/idid-manager-tutorial.md), [HighGear](../saas-apps/highgear-tutorial.md), [Visitly](../saas-apps/visitly-tutorial.md), [Korn Ferry ALP](../saas-apps/korn-ferry-alp-tutorial.md), [Acadia](../saas-apps/acadia-tutorial.md), [Adoddle cSaas Platform](../saas-apps/adoddle-csaas-platform-tutorial.md)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
## Set up Azure AD applications
-### Create a server application
+### [AzureCLI >= v2.37](#tab/AzureCLI)
+#### Create a server application
1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `serverApplicationId`. ```azurecli CLUSTER_NAME="<clusterName>" TENANT_ID="<tenant>"
- SERVER_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Server" --identifier-uris "api://${TENANT_ID}/ClientAnyUniqueSuffix" --query appId -o tsv)
+ SERVER_UNIQUE_SUFFIX="<identifier_suffix>"
+ SERVER_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Server" --identifier-uris "api://${TENANT_ID}/${SERVER_UNIQUE_SUFFIX}" --query appId -o tsv)
echo $SERVER_APP_ID ```
-1. Update the application's group membership claims:
+1. Grant "Sign in and read user profile" API permissions to the server application. Copy this JSON and save it in a file called `oauth2-permissions.json`:
- ```azurecli
- az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
+ ```json
+ {
+ "oauth2PermissionScopes": [
+ {
+ "adminConsentDescription": "Sign in and read user profile",
+ "adminConsentDisplayName": "Sign in and read user profile",
+ "id": "<unique_guid>",
+ "isEnabled": true,
+ "type": "User",
+ "userConsentDescription": "Sign in and read user profile",
+ "userConsentDisplayName": "Sign in and read user profile",
+ "value": "User.Read"
+ }
+ ]
+ }
+ ```
+
+1. Update the application's group membership claims. Run the commands in the same directory as the `oauth2-permissions.json` file. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](/azure/active-directory/develop/supported-accounts-validation):
+
+ ```azurecli
+ az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
+ az ad app update --id ${SERVER_APP_ID} --set api=@oauth2-permissions.json
+ az ad app update --id ${SERVER_APP_ID} --set signInAudience=AzureADMyOrg
+ SERVER_OBJECT_ID=$(az ad app show --id "${SERVER_APP_ID}" --query "id" -o tsv)
+ az rest --method PATCH --headers "Content-Type=application/json" --uri https://graph.microsoft.com/v1.0/applications/${SERVER_OBJECT_ID}/ --body '{"api":{"requestedAccessTokenVersion": 1}}'
``` + 1. Create a service principal and get its `password` field value. This value is required later as `serverApplicationSecret` when you're enabling this feature on the cluster. This secret is valid for one year by default and will need to be [rotated after that](./azure-rbac.md#refresh-the-secret-of-the-server-application). You can also [set a custom expiration duration](/cli/azure/ad/sp/credential?view=azure-cli-latest&preserve-view=true#az-ad-sp-credential-reset). ```azurecli
- az ad sp create --id "${SERVER_APP_ID}"
- SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --credential-description "ArcSecret" --query password -o tsv)
+ az ad sp create --id "${SERVER_APP_ID}"
+ SERVER_APP_SECRET=$(az ad sp credential reset --id "${SERVER_APP_ID}" --query password -o tsv)
```
-1. Grant "Sign in and read user profile" API permissions to the application:
+1. Grant "Sign in and read user profile" API permissions to the application. [Additional information](/cli/azure/ad/app/permission?view=azure-cli-latest#az-ad-app-permission-add-examples):
```azurecli
- az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
- az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000
+ az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
+ az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --scope User.Read
+ ```
+
+ > [!NOTE]
+ > An Azure tenant administrator has to run this step.
+ >
+ > For usage of this feature in production, we recommend that you create a different server application for every cluster.
+
+#### Create a client application
+
+1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `clientApplicationId`.
+
+ ```azurecli
+ CLIENT_UNIQUE_SUFFIX="<identifier_suffix>"
+ CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Client" --is-fallback-public-client --public-client-redirect-uris "api://${TENANT_ID}/${CLIENT_UNIQUE_SUFFIX}" --query appId -o tsv)
+ echo $CLIENT_APP_ID
+ ```
++
+2. Create a service principal for this client application:
+
+ ```azurecli
+ az ad sp create --id "${CLIENT_APP_ID}"
+ ```
+
+3. Get the `oAuthPermissionId` value for the server application:
+
+ ```azurecli
+ az ad app show --id "${SERVER_APP_ID}" --query "api.oauth2PermissionScopes[0].id" -o tsv
+ ```
+
+4. Grant the required permissions for the client application. RBAC for Azure Arc-enabled Kubernetes requires [`signInAudience` to be set to **AzureADMyOrg**](/azure/active-directory/develop/supported-accounts-validation):
+
+ ```azurecli
+ az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
+ RESOURCE_APP_ID=$(az ad app show --id "${CLIENT_APP_ID}" --query "requiredResourceAccess[0].resourceAppId" -o tsv)
+ az ad app permission grant --id "${CLIENT_APP_ID}" --api "${RESOURCE_APP_ID}" --scope User.Read
+ az ad app update --id ${CLIENT_APP_ID} --set signInAudience=AzureADMyOrg
+ CLIENT_OBJECT_ID=$(az ad app show --id "${CLIENT_APP_ID}" --query "id" -o tsv)
+ az rest --method PATCH --headers "Content-Type=application/json" --uri https://graph.microsoft.com/v1.0/applications/${CLIENT_OBJECT_ID}/ --body '{"api":{"requestedAccessTokenVersion": 1}}'
+ ```
++
+### [AzureCLI < v2.37](#tab/AzureCLI236)
+#### Create a server application
+1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `serverApplicationId`.
+
+ ```azurecli
+ CLUSTER_NAME="<clusterName>"
+ TENANT_ID="<tenant>"
+ SERVER_UNIQUE_SUFFIX="<identifier_suffix>"
+ SERVER_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Server" --identifier-uris "api://${TENANT_ID}/${SERVER_UNIQUE_SUFFIX}" --query appId -o tsv)
+ echo $SERVER_APP_ID
+ ```
+
+1. Update the application's group membership claims:
+ ```azurecli
+ az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
+ ```
+
+1. Create a service principal and get its `password` field value. This value is required later as `serverApplicationSecret` when you're enabling this feature on the cluster. This secret is valid for one year by default and will need to be [rotated after that](./azure-rbac.md#refresh-the-secret-of-the-server-application). You can also [set a custom expiration duration](/cli/azure/ad/sp/credential?view=azure-cli-latest&preserve-view=true#az-ad-sp-credential-reset).
+
+ ```azurecli
+ az ad sp create --id "${SERVER_APP_ID}"
+ SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --credential-description "ArcSecret" --query password -o tsv)
+ ```
+
+1. Grant "Sign in and read user profile" API permissions to the application. [Additional information](/cli/azure/ad/app/permission?view=azure-cli-latest#az-ad-app-permission-add-examples):
+
+ ```azurecli
+ az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
+ az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000
``` > [!NOTE]
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
> > For usage of this feature in production, we recommend that you create a different server application for every cluster.
-### Create a client application
+#### Create a client application
1. Create a new Azure AD application and get its `appId` value. This value is used in later steps as `clientApplicationId`. ```azurecli
- CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Client" --native-app --reply-urls "api://${TENANT_ID}/ServerAnyUniqueSuffix" --query appId -o tsv)
- echo $CLIENT_APP_ID
+ CLIENT_UNIQUE_SUFFIX="<identifier_suffix>"
+ CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTER_NAME}Client" --native-app --reply-urls "api://${TENANT_ID}/${CLIENT_UNIQUE_SUFFIX}" --query appId -o tsv)
+ echo $CLIENT_APP_ID
``` 2. Create a service principal for this client application:
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
3. Get the `oAuthPermissionId` value for the server application: ```azurecli
- az ad app show --id "${SERVER_APP_ID}" --query "oauth2Permissions[0].id" -o tsv
+ az ad app show --id "${SERVER_APP_ID}" --query "oauth2Permissions[0].id" -o tsv
``` 4. Grant the required permissions for the client application: ```azurecli
- az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
- az ad app permission grant --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}"
+ az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
+ az ad app permission grant --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}"
```+ ## Create a role assignment for the server application
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
1. The `azure-arc-guard-manifests` secret in the `kube-system` namespace contains two files `guard-authn-webhook.yaml` and `guard-authz-webhook.yaml`. Copy these files to the `/etc/guard` directory of the node.
+ ```console
+ sudo mkdir -p /etc/guard
+ kubectl get secrets azure-arc-guard-manifests -n kube-system -o json | jq -r '.data."guard-authn-webhook.yaml"' | base64 -d | sudo tee /etc/guard/guard-authn-webhook.yaml > /dev/null
+ kubectl get secrets azure-arc-guard-manifests -n kube-system -o json | jq -r '.data."guard-authz-webhook.yaml"' | base64 -d | sudo tee /etc/guard/guard-authz-webhook.yaml > /dev/null
+ ```
+ 1. Open the `apiserver` manifest in edit mode: ```console
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
+
+ Title: "Diagnose connection issues for Azure Arc-enabled Kubernetes clusters"
Last updated : 11/04/2022+
+description: "Learn how to resolve common issues when connecting Kubernetes clusters to Azure Arc."
+++
+# Diagnose connection issues for Azure Arc-enabled Kubernetes clusters
+
+If you're experiencing issues connecting a cluster to Azure Arc, it's probably due to one of the issues listed here. We provide two flowcharts with guided help: one if you're [not using a proxy server](#connections-without-a-proxy), and one that applies if your network connection [uses a proxy server](#connections-with-a-proxy-server).
+
+> [!TIP]
+> The steps in this flowchart apply whether you're using Azure CLI or Azure PowerShell to [connect your cluster](quickstart-connect-cluster.md). However, some of the steps require the use of Azure CLI. If you haven't already [installed Azure CLI](/cli/azure/install-azure-cli), be sure to do so before you begin.
+
+## Connections without a proxy
+
+Review this flowchart to diagnose issues when attempting to connect a cluster to Azure Arc without a proxy server. More details about each step are provided below.
++
+### Does the Azure identity have sufficient permissions?
+
+Review the [prerequisites for connecting a cluster](quickstart-connect-cluster.md?tabs=azure-cli#prerequisites) and make sure that the identity you're using to connect the cluster has the necessary permissions.
+
+### Is Azure CLI version above 2.30.0?
+
+Make sure you [have the latest version installed](/cli/azure/install-azure-cli).
+
+If you connected your cluster by using Azure PowerShell, make sure you're running [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-az-ps).
+
+### Is the `connectedk8s` extension the latest version?
+
+Update the Azure CLI `connectedk8s` extension to the latest version by running this command:
+
+```azurecli
+az extension update --name connectedk8s
+```
+
+If you haven't installed the extension yet, you can do so by running the following command:
+
+```azurecli
+az extension add --name connectedk8s
+```
+
+### Is kubeconfig pointing to the right cluster?
+
+Run `kubectl config get-contexts` to confirm the target context name. Then set the default context to the right cluster by running `kubectl config use-context <target-cluster-name>`.
++
+### Are all required resource providers registered?
+
+Be sure that the Microsoft.Kubernetes, Microsoft.KubernetesConfiguration, and Microsoft.ExtendedLocation resource providers are [registered](quickstart-connect-cluster.md#register-providers-for-azure-arc-enabled-kubernetes).
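If any of them are missing, they can be registered from the Azure CLI, as in this sketch (registration can take several minutes):

```azurecli
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation

# Check the registration state of a provider.
az provider show --namespace Microsoft.Kubernetes --query registrationState -o tsv
```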
+
+### Are all network requirements met?
+
+Review the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) and ensure that no required endpoints are blocked.
+
+### Are all pods in the `azure-arc` namespace running?
+
+If everything is working correctly, your pods should all be in the `Running` state. Run `kubectl get pods -n azure-arc` to check whether any pods aren't in the `Running` state.
+
+### Still having problems?
+
+The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can investigate the problem further.
+
+To generate the troubleshooting log file, run the following command:
+
+```azurecli
+az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster>
+```
+
+When you [create your support request](/azure/azure-portal/supportability/how-to-create-azure-support-request), in the **Additional details** section, use the **File upload** option to upload the generated log file.
+
+## Connections with a proxy server
+
+If you're using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting. Then, if you're still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below.
++
+### Is the machine executing commands behind a proxy server?
+
+Be sure you have set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
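As a sketch, the standard proxy variables on the deployment machine typically look like the following (addresses and the exclusion list are illustrative):

```console
export HTTP_PROXY=http://<proxy-server-ip-address>:<port>
export HTTPS_PROXY=https://<proxy-server-ip-address>:<port>
export NO_PROXY=localhost,127.0.0.1,kubernetes.default.svc
```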
+
+### Does the proxy server only accept trusted certificates?
+
+Be sure to include the certificate file path by including `--proxy-cert <path-to-cert-file>` when running the `az connectedk8s connect` command.
+
+```azurecli
+az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file>
+```
+
+### Is the proxy server able to reach required network endpoints?
+
+Review the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) and ensure that no required endpoints are blocked.
+
+### Is the proxy server only using HTTP?
+
+If your proxy server only uses HTTP, you can use the `--proxy-http` value for both parameters.
+
+If your proxy server is set up with both HTTP and HTTPS, run the `az connectedk8s connect` command with the `--proxy-https` and `--proxy-http` parameters specified. Be sure you're using `--proxy-http` for the HTTP proxy and `--proxy-https` for the HTTPS proxy.
+
+```azurecli
+az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port>
+```
+
+### Does the proxy server require skip ranges for service-to-service communication?
+
+If you require skip ranges, use `--proxy-skip-range <excludedIP>,<excludedCIDR>` in your `az connectedk8s connect` command.
+
+```azurecli
+az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR>
+```
+
+### Are all pods in the `azure-arc` namespace running?
+
+If everything is working correctly, your pods should all be in the `Running` state. Run `kubectl get pods -n azure-arc` to check whether any pods aren't in the `Running` state.
+
+### Still having problems?
+
+The steps above will resolve many common connection issues, but if you're still unable to connect successfully, generate a troubleshooting log file and then [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) so we can investigate the problem further.
+
+To generate the troubleshooting log file, run the following command:
+
+```azurecli
+az connectedk8s troubleshoot -g <myResourceGroup> -n <myK8sCluster>
+```
+
+When you [create your support request](/azure/azure-portal/supportability/how-to-create-azure-support-request), in the **Additional details** section, use the **File upload** option to upload the generated log file.
++
+## Next steps
+
+- View more [troubleshooting tips for using Azure Arc-enabled Kubernetes](troubleshooting.md).
+- Review the process to [connect an existing Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md).
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 10/12/2022 Last updated : 11/04/2022 ms.devlang: azurecli
eastus AzureArcTest1 microsoft.kubernetes/connectedclusters
> [!NOTE] > After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc-enabled Kubernetes resource in Azure portal.
+> [!TIP]
+> For help troubleshooting problems while connecting your cluster, see [Diagnose connection issues for Azure Arc-enabled Kubernetes clusters](diagnose-connection-issues.md).
+ ## View Azure Arc agents for Kubernetes Azure Arc-enabled Kubernetes deploys a few agents into the `azure-arc` namespace.
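For example, the deployed agents can be inspected with kubectl (pod names and counts vary per cluster):

```console
kubectl get deployments,pods -n azure-arc
```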
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 10/24/2022 Last updated : 11/04/2022 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
All pods should show `STATUS` as `Running` with either `3/3` or `2/2` under the
Connecting clusters to Azure Arc requires access to an Azure subscription and `cluster-admin` access to a target cluster. If you can't reach the cluster, or if you have insufficient permissions, connecting the cluster to Azure Arc will fail. Make sure you've met all of the [prerequisites to connect a cluster](quickstart-connect-cluster.md#prerequisites).
+> [!TIP]
+> For a visual guide to troubleshooting these issues, see [Diagnose connection issues for Arc-enabled Kubernetes clusters](diagnose-connection-issues.md).
+
+### DNS resolution issues
+
+If you see an error message about an issue with DNS resolution on your cluster, there are a few things you can try to diagnose and resolve the problem.
+
+For more information, see [Debugging DNS Resolution](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/).
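As a starting point, the linked Kubernetes guide tests in-cluster name resolution from a utility pod; a minimal sketch along those lines (the image and service names follow the Kubernetes documentation, not this article):

```console
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never -- sleep 3600
kubectl exec -it dnsutils -- nslookup kubernetes.default
```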
+
+### Outbound network connectivity issues
+
+Issues with outbound network connectivity from the cluster may arise for different reasons. First, make sure all of the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) have been met.
+
+If you encounter this issue, and your cluster is behind an outbound proxy server, make sure you have passed proxy parameters during the onboarding of your cluster and that the proxy is configured correctly. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
+
+### Unable to retrieve MSI certificate
+
+Problems retrieving the MSI certificate are usually due to network issues. Check to make sure all of the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) have been met, then try again.
+ ### Azure CLI is unable to download Helm chart for Azure Arc agents With Helm version >= 3.7.0, you may run into the following error when using `az connectedk8s connect` to connect the cluster to Azure Arc:
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
The Start/Stop VMs v2 feature starts or stops Azure Virtual Machines instances a
This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it's designed to take advantage of newer technology in Azure.
-> [!NOTE]
-> We've added a plan (**AZ - Availability Zone**) to our Start/Stop MVs v2 solution to enable a high-availability offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
-
-> [!NOTE]
-> Automatic updating functionality was introduced on April 28th, 2022. This new auto update feature helps you stay on the latest version of the solution. This feature is enabled by default when you perform a new installation.
-> If you deployed your solution before this date, you can reinstall to the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments)
+## Important Start/Stop VMs v2 updates
+
+> + We've updated our Start/Stop VMs v2 function app resource to use [Azure Functions version 4.x](../functions-versions.md), and you'll get this version by default when you install Start/Stop VMs v2 from the marketplace. Existing customers should migrate from Functions version 3.x to version 4.x using our auto-update functionality. This functionality gets the latest version either by running the TriggerAutoUpdate timer function once manually or waiting for the schedule to run, if you've enabled it.
+>
+> + We've added a plan (**AZ - Availability Zone**) to our Start/Stop VMs v2 solution to enable a high-availability offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
+>
+> + Automatic updating functionality was introduced on April 28th, 2022. This new auto update feature helps you stay on the latest version of the solution. This feature is enabled by default when you perform a new installation.
+> If you deployed your solution before this date, you can reinstall to the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments).
## Overview
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
those will also be collected for all '/login' requests.
} ```
-## Common span attributes
+## Span attributes available for sampling
-This section lists some common span attributes that sampling overrides can use.
+Span attribute names are based on the OpenTelemetry semantic conventions:
-### HTTP spans
+* [HTTP](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/http.md)
+* [Messaging](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/messaging.md)
+* [Database](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md)
+* [RPC](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/rpc.md)
-| Attribute | Type | Description |
-||||
-| `http.method` | string | HTTP request method.|
-| `http.url` | string | Full HTTP request URL in the form `scheme://host[:port]/path?query[#fragment]`. The fragment isn't usually transmitted over HTTP. But if the fragment is known, it should be included.|
-| `http.flavor` | string | Type of HTTP protocol. |
-| `http.user_agent` | string | Value of the [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3) header sent by the client. |
+To see the exact set of attributes captured by Application Insights Java for your application, set the
+[self-diagnostics level to debug](./java-standalone-config.md#self-diagnostics), and look for debug messages starting
+with the text "exporting span".
-Please note that `http.status_code` cannot be used for sampling decisions because it is not available
-at the start of the span.
-
-### JDBC spans
-
-| Attribute | Type | Description |
-||||
-| `db.system` | string | Identifier for the database management system (DBMS) product being used. See [list of identifiers](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md#connection-level-attributes). |
-| `db.connection_string` | string | Connection string used to connect to the database. It's recommended to remove embedded credentials.|
-| `db.user` | string | Username for accessing the database. |
-| `db.name` | string | String used to report the name of the database being accessed. For commands that switch the database, this string should be set to the target database, even if the command fails.|
-| `db.statement` | string | Database statement that's being run.|
+Note that only attributes set at the start of the span are available for sampling,
+so attributes such as `http.status_code`, which are captured later on, cannot be used for sampling.
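For example, a sampling override keyed on one of these attributes is configured in the `applicationinsights.json` file. The following minimal sketch samples spans whose `db.system` attribute is `mysql` at 10 percent; the attribute value and percentage are illustrative, and the sampling overrides reference is the authoritative source for the schema:

```json
{
  "preview": {
    "sampling": {
      "overrides": [
        {
          "attributes": [
            {
              "key": "db.system",
              "value": "mysql",
              "matchType": "strict"
            }
          ],
          "percentage": 10
        }
      ]
    }
  }
}
```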
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cl
## How Customer-managed key works in Azure Monitor
-Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-managed key on multiple workspaces, a new Log Analytics cluster resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster storage uses the managed identity that's associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
+Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-managed key on multiple workspaces, a Log Analytics *Cluster* resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster's storage uses the managed identity that's associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
-After the Customer-managed key configuration, new ingested data to workspaces linked to your dedicated cluster gets encrypted with your key. You can unlink workspaces from the cluster at any time. New data then gets ingested to cluster storage and encrypted with Microsoft key, while you can query your new and old data seamlessly.
+You can apply Customer-managed key configuration to a new cluster, or to an existing cluster that has linked workspaces with ingested data. New data ingested to linked workspaces gets encrypted with your key, while older data ingested before the configuration remains encrypted with the Microsoft key. Queries aren't affected by the Customer-managed key configuration and run seamlessly across old and new data. You can unlink workspaces from your cluster at any time; new data ingested after the unlink gets encrypted with the Microsoft key, and queries continue to run seamlessly across old and new data.
> [!IMPORTANT] > Customer-managed key capability is regional. Your Azure Key Vault, cluster and linked workspaces must be in the same region, but they can be in different subscriptions.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 10/13/2022 Last updated : 11/06/2022
This article lists significant changes to Azure Monitor documentation.
+## October 2022
+
+|Sub-service| Article | Description |
+|---|---|---|
+|General|Table of contents|We have updated the Azure Monitor Table of Contents. The new TOC structure better reflects the customer experience and makes it easier for users to navigate and discover our content.|
+Alerts|[Connect Azure to ITSM tools by using IT Service Management](https://docs.microsoft.com/azure/azure-monitor/alerts/itsmc-definition)|Deprecating support for sending ITSM actions and events to ServiceNow. Instead, use ITSM actions in action groups based on Azure alerts to create work items in your ITSM tool.|
+Alerts|[Create a new alert rule](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule)|New PowerShell commands to create and manage log alerts.|
+Alerts|[Types of Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-types)|Updated to include Prometheus alerts.|
+Alerts|[Customize alert notifications using Logic Apps](https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-logic-apps)|New: How to use alerts to send emails or Teams posts using Logic Apps.|
+Application-insights|[Sampling in Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/sampling)|The "When to use sampling" and "How sampling works" sections have been prioritized as prerequisite information for the rest of the article.|
+Application-insights|[What is auto-instrumentation for Azure Monitor Application Insights?](https://docs.microsoft.com/azure/azure-monitor/app/codeless-overview)|The auto-instrumentation overview has been visually overhauled with links and footnotes.|
+Application-insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-enable)|OpenTelemetry metrics are now available for .NET, Node.js, and Python applications.|
+Application-insights|[Find and diagnose performance issues with Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-performance)|The URL Ping (Classic) Test has been replaced with the Standard Test step-by-step instructions.|
+Application-insights|[Application Insights API for custom events and metrics](https://docs.microsoft.com/azure/azure-monitor/app/api-custom-events-metrics)|Flushing information was added to the FAQ.|
+Application-insights|[Azure AD authentication for Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/azure-ad-authentication)|We updated the `TelemetryConfiguration` code sample using .NET.|
+Application-insights|[Using Azure Monitor Application Insights with Spring Boot](https://docs.microsoft.com/azure/azure-monitor/app/java-spring-boot)|Spring Boot information was updated to 3.4.2.|
+Application-insights|[Configuration options: Azure Monitor Application Insights for Java](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-config)|New features include capturing Log4j and Logback markers as custom properties on the corresponding trace (log message) telemetry.|
+Application-insights|[Create custom KPI dashboards using Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/tutorial-app-dashboards)|This article has been refreshed with new screenshots and instructions.|
+Application-insights|[Share Azure dashboards by using Azure role-based access control](https://docs.microsoft.com/azure/azure-portal/azure-portal-dashboard-share-access)|This article has been refreshed with new screenshots and instructions.|
+Application-insights|[Application Monitoring for Azure App Service and ASP.NET](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps-net)|Important notes added regarding System.IO.FileNotFoundException after 2.8.44 auto-instrumentation upgrade.|
+Application-insights|[Geolocation and IP address handling](https://docs.microsoft.com/azure/azure-monitor/app/ip-collection)| Geolocation lookup information has been updated.|
+Containers|[Metric alert rules in Container insights (preview)](https://docs.microsoft.com/azure/azure-monitor/containers/container-insights-metric-alerts)|Container insights metric Alerts|
+Containers|[Custom metrics collected by Container insights](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-custom-metrics?tabs=portal)|New article.|
+Containers|[Overview of Container insights in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-overview)|Rewritten to simplify onboarding options.|
+Containers|[Enable Container insights for Azure Kubernetes Service (AKS) cluster](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli)|Updated to combine new and existing clusters.|
+Containers Prometheus|[Query logs from Container insights](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-log-query)|Now includes log queries for Prometheus data.|
+Containers Prometheus|[Collect Prometheus metrics with Container insights](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-prometheus?tabs=cluster-wide)|Updated to include Azure Monitor managed service for Prometheus.|
+Essentials Prometheus|[Metrics in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/essentials/data-platform-metrics)|Updated to include Azure Monitor managed service for Prometheus|
+Essentials Prometheus|<ul> <li> [Azure Monitor workspace overview (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/azure-monitor-workspace-overview?tabs=azure-portal) </li><li> [Overview of Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-overview) </li><li>[Rule groups in Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-rule-groups)</li><li>[Remote-write in Azure Monitor Managed Service for Prometheus (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-remote-write-managed-identity) </li><li>[Use Azure Monitor managed service for Prometheus (preview) as data source for Grafana](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-grafana)</li><li>[Troubleshoot collection of Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-troubleshoot)</li><li>[Default Prometheus metrics configuration in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-default)</li><li>[Scrape Prometheus metrics at scale in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-scale)</li><li>[Customize scraping of Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration)</li><li>[Create, validate and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-validate)</li><li>[Minimal Prometheus ingestion profile in Azure Monitor (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration-minimal)</li><li>[Collect Prometheus metrics from AKS cluster (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-enable)</li><li>[Send Prometheus metrics to multiple Azure Monitor workspaces (preview)](https://learn.microsoft.com/azure/azure-monitor/essentials/prometheus-metrics-multiple-workspaces) </li></ul> |New articles. Public preview of Azure Monitor managed service for Prometheus|
+Essentials Prometheus|[Azure Monitor managed service for Prometheus remote write - managed identity (preview)](https://docs.microsoft.com/azure/azure-monitor/essentials/prometheus-remote-write-managed-identity)|Addition: how to verify that Prometheus remote write is working correctly.|
+Essentials|[Azure resource logs](https://docs.microsoft.com/azure/azure-monitor/essentials/resource-logs)|Clarification: which blobs logs are written to, and when.|
+Essentials|[Resource Manager template samples for Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/resource-manager-samples?tabs=portal)|Added template deployment methods.|
+Essentials|[Azure Monitor service limits](https://learn.microsoft.com/azure/azure-monitor/service-limits)|Added Azure Monitor managed service for Prometheus|
+Logs|[Manage access to Log Analytics workspaces](https://docs.microsoft.com/azure/azure-monitor/logs/manage-access)|Table-level role-based access control (RBAC) lets you give specific users or groups read access to particular tables.|
+Logs|[Configure Basic Logs in Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/logs/basic-logs-configure)|General availability of the Basic Logs data plan, retention and archiving, search job, and the table management user experience in the Azure portal.|
+Logs|[Guided project - Analyze logs in Azure Monitor with KQL - Training](https://learn.microsoft.com/training/modules/analyze-logs-with-kql/)|New Learn module. Learn to write KQL queries to retrieve and transform log data to answer common business and operational questions.|
+Logs|[Detect and analyze anomalies with KQL in Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/logs/kql-machine-learning-azure-monitor)|New tutorial. Walkthrough of how to use KQL for time series analysis and anomaly detection in Azure Monitor Log Analytics. |
+Virtual-machines|[Enable VM insights for a hybrid virtual machine](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-enable-hybrid)|Updated versions of standalone installers.|
+Visualizations|[Retrieve legacy Application Insights workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-retrieve-legacy-workbooks)|New article about how to access legacy workbooks in the Azure portal.|
+Visualizations|[Azure Workbooks](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-overview)|New video to see how you can use Azure Workbooks to get insights and visualize your data. |
+ ## September 2022
This article lists significant changes to Azure Monitor documentation.
| Article | Description | ||| |[Autoscale in Microsoft Azure](autoscale/autoscale-overview.md)|Updated conceptual diagrams|
-|[Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)](autoscale/autoscale-predictive.md)|Predictive autoscale (preview) is now available in all regions|
+|[Use predictive autoscale to scale out before load demands in Virtual Machine Scale Sets (preview)](autoscale/autoscale-predictive.md)|Predictive autoscale (preview) is now available in all regions|
### Change analysis
This article lists significant changes to Azure Monitor documentation.
|[Telemetry sampling in Azure Application Insights](app/sampling.md)|Sampling documentation has been updated to warn of the potential impact on alerting accuracy. |[Azure Monitor Application Insights Java (redirect to OpenTelemetry)](app/java-in-process-agent-redirect.md)|Java Auto-Instrumentation now redirects to OpenTelemetry documentation. |[Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md)|Updated .NET Core FAQ
-|[Create a new Azure Monitor Application Insights workspace-based resource](app/create-workspace-resource.md)|We've linked out to Microsoft.Insights components for more information on Properties.
+|[Create a new Azure Monitor Application Insights workspace-based resource](app/create-workspace-resource.md)|We've linked out to Microsoft Insights components for more information on Properties.
|[Application Insights SDK support guidance](app/sdk-support-guidance.md)|SDK support guidance has been updated and clarified. |[Azure Monitor Application Insights Java](app/java-in-process-agent.md)|Example code has been updated. |[IP addresses used by Azure Monitor](app/ip-addresses.md)|The IP/FQDN table has been updated.
This article lists significant changes to Azure Monitor documentation.
| [Azure Application Insights for JavaScript web apps](app/javascript.md) | Our Java on-premises page has been retired and redirected to [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/java-in-process-agent.md).| | [Azure Application Insights Telemetry Data Model - Telemetry Context](app/data-model-context.md) | Clarified that Anonymous User ID is simply User.Id for easy selection in Intellisense.| | [Continuous export of telemetry from Application Insights](app/export-telemetry.md) | On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.|
-| [Dependency Tracking in Azure Application Insights](app/asp-net-dependencies.md) | The EventHub Client SDK and ServiceBus Client SDK information has been updated.|
+| [Dependency Tracking in Azure Application Insights](app/asp-net-dependencies.md) | The Event Hub Client SDK and ServiceBus Client SDK information has been updated.|
| [Monitor Azure app services performance .NET Core](app/azure-web-apps-net-core.md) | Updated Linux troubleshooting guidance. | | [Performance counters in Application Insights](app/performance-counters.md) | A prerequisite section has been added to ensure performance counter data is accessible.|
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
+ Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts.
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
For full details, refer to the article: [Disaster Recovery with Azure NetApp Fil
- (Optional) Azure NetApp Files volume(s) are created and attached to the Azure VMware Solution private cloud for recovery or failover of protected VMs to Azure NetApp Files backed datastores.
- - [Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
+ - [Attach Azure NetApp Files datastores to Azure VMware Solution hosts](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
- [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/) ### Scenario 2: Azure VMware Solution to Azure VMware Solution DR
For full details, refer to the article: [Disaster Recovery with Azure NetApp Fil
- DNS configured on both the primary and DR sites to resolve the IP addresses of Azure VMware Solution vCenter Server, Azure VMware Solution ESXi hosts, Azure Storage account, the JetStream DR Management Server Appliance (MSA) and the JetStream Marketplace service for the JetStream virtual appliances. - (Optional) Azure NetApp Files volume(s) are created and attached to the Azure VMware Solution private cloud for recovery or failover of protected VMs to Azure NetApp Files backed datastores.
- - [Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
+ - [Attach Azure NetApp Files datastores to Azure VMware Solution hosts](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
- [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/) For more on-premises JetStream DR prerequisites, see the [JetStream Pre-Installation Guide](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/pre-installation-guidelines/).
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
:::image type="content" source="media/adjacency-overview-drawing-final.png" alt-text="Diagram of Azure VMware Solution private cloud adjacency to Azure and on-premises." border="false":::
+## AV36P and AV52 node sizes generally available in Azure VMware Solution
+
+ The new node sizes will increase memory and storage options to optimize your workloads. These gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of these new nodes allows large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+
+**AV36P key highlights for memory and storage optimized workloads:**
+
+- Runs on the Intel® Xeon® Gold 6240 processor with 36 cores, a base frequency of 2.6 GHz, and a turbo frequency of 3.9 GHz.
+- 768 GB of DRAM memory
+- 19.2 TB storage capacity with all NVMe-based SSDs (636,500 IOPS random read and 223,300 IOPS random write)
+- 1.5 TB of NVMe cache
+
+**AV52 key highlights for memory and storage optimized workloads:**
+
+- Runs on the Intel® Xeon® Platinum 8270 processor with 52 cores, a base frequency of 2.7 GHz, and a turbo frequency of 4.0 GHz.
+- 1.5 TB of DRAM memory
+- 38.4 TB storage capacity with all NVMe-based SSDs (636,500 IOPS random read and 223,300 IOPS random write)
+- 1.5 TB of NVMe cache
+
+For pricing and region availability, see the [Azure VMware Solution pricing page](https://azure.microsoft.com/pricing/details/azure-vmware/) and the [Products available by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+ ## Hosts, clusters, and private clouds [!INCLUDE [host-sku-sizes](includes/disk-capabilities-of-the-host.md)]
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
Create a container app revision with a private image and the system-assigned man
1. Select **Save**. 1. Select **Create** from the **Create and deploy new revision** page.
-A new revision will be created and deployed. The portal will automatically attempt to add the `acrpull` role to the user-assigned managed identity. If the role isn't added, you can add it manually.
+A new revision will be created and deployed. The portal will automatically attempt to add the `acrpull` role to the user-assigned managed identity. If the role isn't added, you can add it manually.
+
+You can verify that the role was added by checking the identity from the **Identity** pane of the container app page.
+
+1. Select **Identity** from the left menu.
+1. Select the **User assigned** tab.
+1. Select the user-assigned managed identity.
+1. Select **Azure role assignments** from the menu on the managed identity resource page.
+1. Verify that the `acrpull` role is assigned to the user-assigned managed identity.
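If you'd rather verify from the command line, a check along these lines should work with the Azure CLI; the principal ID and registry name below are placeholders for your own values.

```azurecli
# Illustrative check: list the identity's role assignments on the registry
# and filter for AcrPull. <PRINCIPAL_ID> and <REGISTRY_NAME> are placeholders.
az role assignment list \
  --assignee <PRINCIPAL_ID> \
  --scope $(az acr show --name <REGISTRY_NAME> --query id --output tsv) \
  --query "[?roleDefinitionName=='AcrPull']" \
  --output table
```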
### Clean up resources
Edit the container to use the image from your private Azure Container Registry,
1. Select **Create** at the bottom of the **Create and deploy new revision** page 1. After a few minutes, select **Refresh** on the **Revision management** page to see the new revision.
+A new revision will be created and deployed. The portal will automatically attempt to add the `acrpull` role to the system-assigned managed identity. If the role isn't added, you can add it manually.
+
+You can verify that the role was added by checking the identity in the **Identity** pane of the container app page.
+
+1. Select **Identity** from the left menu.
+1. Select the **System assigned** tab.
+1. Select **Azure role assignments**.
+1. Verify that the `acrpull` role is assigned to the system-assigned managed identity.
+ ### Clean up resources If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
Create your container app with your image from the private registry authenticate
# [Azure CLI](#tab/azure-cli)
-Copy the identity's resource ID to paste into the *\<IDENTITY_ID\>* placeholders in the command below.
+Copy the identity's resource ID to paste into the *\<IDENTITY_ID\>* placeholders in the command below. If your image tag isn't `latest`, replace `latest` with your tag.
```azurecli echo $IDENTITY_ID
New-AzContainerApp @AppArgs
Update the container app with the image from your private container registry and add a system-assigned identity to authenticate the Azure Container Registry pull. You can also include other settings necessary for your container app, such as ingress, scale and Dapr settings.
-If you are using an image tag other than `latest`, replace the `latest` value with your value.
-- # [Azure CLI](#tab/azure-cli) Set the registry server and turn on system-assigned managed identity in the container app.
az containerapp registry set \
--server "$REGISTRY_NAME.azurecr.io" ``` - ```azurecli az containerapp update \ --name $CONTAINERAPP_NAME \
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
description: This article helps you better understand data that's included in Cost Management and how frequently it's processed, collected, shown, and closed. Previously updated : 10/13/2021 Last updated : 11/04/2022
The following information shows the currently supported [Microsoft Azure offers]
| **Category** | **Offer name** | **Quota ID** | **Offer number** | **Data available from** | | | | | | |
-| **Azure Government** | Azure Government Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-USGOV-0017P | May 2014<sup>1</sup> |
+| **Azure Government** | Azure Government Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-USGOV-0017P | May 2014¹ |
| **Azure Government** | Azure Government Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-USGOV-0003P | October 2, 2018 |
-| **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014<sup>1</sup> |
-| **Enterprise Agreement (EA)** | Microsoft Azure Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-0017P | May 2014<sup>1</sup> |
-| **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | N/A | March 2019<sup>2</sup> |
-| **Microsoft Customer Agreement** | Microsoft Azure Plan for Dev/Test | MSDNDevTest_2014-09-01 | N/A | March 2019<sup>2</sup> |
-| **Microsoft Customer Agreement supported by partners** | Microsoft Azure Plan | CSP_2015-05-01, CSP_MG_2017-12-01, and CSPDEVTEST_2018-05-01<sup>4</sup> | N/A | October 2019 |
-| **Microsoft Developer Network (MSDN)** | MSDN Platforms<sup>3</sup> | MSDN_2014-09-01 | MS-AZR-0062P | October 2, 2018 |
+| **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014¹ |
+| **Enterprise Agreement (EA)** | Microsoft Azure Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-0017P | May 2014¹ |
+| **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | N/A | March 2019² |
+| **Microsoft Customer Agreement** | Microsoft Azure Plan for Dev/Test | MSDNDevTest_2014-09-01 | N/A | March 2019² |
+| **Microsoft Customer Agreement supported by partners** | Microsoft Azure Plan | CSP_2015-05-01, CSP_MG_2017-12-01, and CSPDEVTEST_2018-05-01⁴ | N/A | October 2019 |
+| **Microsoft Developer Network (MSDN)** | MSDN Platforms³ | MSDN_2014-09-01 | MS-AZR-0062P | October 2, 2018 |
| **Pay-As-You-Go** | Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-0003P | October 2, 2018 | | **Pay-As-You-Go** | Pay-As-You-Go Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0023P | October 2, 2018 | | **Pay-As-You-Go** | Microsoft Cloud Partner Program | MPN_2014-09-01 | MS-AZR-0025P | October 2, 2018 |
-| **Pay-As-You-Go** | Free Trial<sup>3</sup> | FreeTrial_2014-09-01 | MS-AZR-0044P | October 2, 2018 |
-| **Pay-As-You-Go** | Azure in Open<sup>3</sup> | AzureInOpen_2014-09-01 | MS-AZR-0111P | October 2, 2018 |
-| **Pay-As-You-Go** | Azure Pass<sup>3</sup> | AzurePass_2014-09-01 | MS-AZR-0120P, MS-AZR-0122P - MS-AZR-0125P, MS-AZR-0128P - MS-AZR-0130P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Enterprise – MPN<sup>3</sup> | MPN_2014-09-01 | MS-AZR-0029P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Professional<sup>3</sup> | MSDN_2014-09-01 | MS-AZR-0059P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Test Professional<sup>3</sup> | MSDNDevTest_2014-09-01 | MS-AZR-0060P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Enterprise<sup>3</sup> | MSDN_2014-09-01 | MS-AZR-0063P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Enterprise: BizSpark<sup>3</sup> | MSDN_2014-09-01 | MS-AZR-0064P | October 2, 2018 |
+| **Pay-As-You-Go** | Free Trial³ | FreeTrial_2014-09-01 | MS-AZR-0044P | October 2, 2018 |
+| **Pay-As-You-Go** | Azure in Open³ | AzureInOpen_2014-09-01 | MS-AZR-0111P | October 2, 2018 |
+| **Pay-As-You-Go** | Azure Pass³ | AzurePass_2014-09-01 | MS-AZR-0120P, MS-AZR-0122P - MS-AZR-0125P, MS-AZR-0128P - MS-AZR-0130P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Enterprise – MPN³ | MPN_2014-09-01 | MS-AZR-0029P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Professional³ | MSDN_2014-09-01 | MS-AZR-0059P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Test Professional³ | MSDNDevTest_2014-09-01 | MS-AZR-0060P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Enterprise³ | MSDN_2014-09-01 | MS-AZR-0063P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Enterprise: BizSpark³ | MSDN_2014-09-01 | MS-AZR-0064P | October 2, 2018 |
-_<sup>**1**</sup> For data before May 2014, visit the [Azure Enterprise portal](https://ea.azure.com)._
+_¹ For data before May 2014, visit the [Azure Enterprise portal](https://ea.azure.com)._
-_<sup>**2**</sup> Microsoft Customer Agreements started in March 2019 and don't have any historical data before this point._
+_² Microsoft Customer Agreements started in March 2019 and don't have any historical data before this point._
-_<sup>**3**</sup> Historical data for credit-based and pay-in-advance subscriptions might not match your invoice. See [Historical data may not match invoice](#historical-data-might-not-match-invoice) below._
+_³ Historical data for credit-based and pay-in-advance subscriptions might not match your invoice. See [Historical data may not match invoice](#historical-data-might-not-match-invoice) below._
-_<sup>**4**</sup> Quota IDs are the same across Microsoft Customer Agreement and classic subscription offers. Classic CSP subscriptions are not supported._
+_⁴ Quota IDs are the same across Microsoft Customer Agreement and classic subscription offers. Classic CSP subscriptions are not supported._
The following offers aren't supported yet:
The following offers aren't supported yet:
| **Cloud Solution Provider (CSP)** | Azure Government CSP | CSP_2015-05-01 | MS-AZR-USGOV-0145P | | **Cloud Solution Provider (CSP)** | Azure Germany in CSP for Microsoft Cloud Germany | CSP_2015-05-01 | MS-AZR-DE-0145P | | **Pay-As-You-Go** | Azure for Students Starter | DreamSpark_2015-02-01 | MS-AZR-0144P |
-| **Pay-As-You-Go** | Azure for Students<sup>3</sup> | AzureForStudents_2018-01-01 | MS-AZR-0170P |
+| **Pay-As-You-Go** | Azure for Students³ | AzureForStudents_2018-01-01 | MS-AZR-0170P |
| **Pay-As-You-Go** | Microsoft Azure Sponsorship | Sponsored_2016-01-01 | MS-AZR-0036P | | **Support Plans** | Standard support | Default_2014-09-01 | MS-AZR-0041P | | **Support Plans** | Professional Direct support | Default_2014-09-01 | MS-AZR-0042P |
The following tables show data that's included or isn't in Cost Management. All
| **Included** | **Not included** | | | |
-| Azure service usage<sup>5</sup> | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace offering usage<sup>6</sup> | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace purchases<sup>6</sup> | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Reservation purchases<sup>7</sup> | |
-| Amortization of reservation purchases<sup>7</sup> | |
-| New Commerce non-Azure products (Microsoft 365 and Dynamics 365) <sup>8</sup> | |
+| Azure service usage⁵ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Marketplace offering usage⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Marketplace purchases⁶ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Reservation purchases⁷ | |
+| Amortization of reservation purchases⁷ | |
+| New Commerce non-Azure products (Microsoft 365 and Dynamics 365)⁸ | |
-_<sup>**5**</sup> Azure service usage is based on reservation and negotiated prices._
+_⁵ Azure service usage is based on reservation and negotiated prices._
-_<sup>**6**</sup> Marketplace purchases aren't available for MSDN and Visual Studio offers at this time._
+_⁶ Marketplace purchases aren't available for MSDN and Visual Studio offers at this time._
-_<sup>**7**</sup> Reservation purchases are only available for Enterprise Agreement (EA) and Microsoft Customer Agreement accounts at this time._
+_⁷ Reservation purchases are only available for Enterprise Agreement (EA) and Microsoft Customer Agreement accounts at this time._
-_<sup>**8**</sup> Only available for specific offers._
+_⁸ Only available for specific offers._
## How tags are used in cost and usage data
The following examples illustrate how billing periods could end:
* Enterprise Agreement (EA) subscriptions – If the billing month ends on March 31, estimated charges are updated up to 72 hours later. In this example, by midnight (UTC) April 4. * Pay-as-you-go subscriptions – If the billing month ends on May 15, then the estimated charges might get updated up to 72 hours later. In this example, by midnight (UTC) May 19.
-Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months is available from the portal. For historical data before 13 months, please use [Exports](tutorial-export-acm-data.md) or the [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
+After your billing period ends and your invoice is created, it can take up to 48 hours for the usage data to be finalized. If the usage file isn't ready, you'll see a message on the Invoices page in the Azure portal stating `Your usage and charges file is not ready`. After the usage file is available, you can download it.
+
+Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months are available from the portal. For historical data before 13 months, please use [Exports](tutorial-export-acm-data.md) or the [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
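For instance, a hedged sketch of requesting older cost data through the Cost Details API with `az rest` follows; the subscription ID, dates, and API version are assumptions to adapt from the API reference.

```azurecli
# Illustrative only: request a cost details report for a subscription scope.
# <SUBSCRIPTION_ID>, the dates, and the api-version are placeholders; check
# the Cost Details API reference for the current version and request schema.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.CostManagement/generateCostDetailsReport?api-version=2022-05-01" \
  --body '{"metric": "ActualCost", "timePeriod": {"start": "2022-09-01", "end": "2022-09-30"}}'
```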
### Rerated data
cost-management-billing Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md
Previously updated : 10/12/2022 Last updated : 11/04/2022 # Azure savings plan recommendations
-Azure savings plan purchase recommendations are provided through [Azure Advisor](../../advisor/advisor-reference-cost-recommendations.md#reserved-instances), and through the savings plan purchase experience in the Azure portal. The recommended commitment is calculated for the highest possible usage, and it's based on your historical usage. Your recommendation might not be for 100% utilization if you have inconsistent usage. To maximize savings with savings plans, try to purchase reservations as close to the recommendation as possible.
+Azure savings plan purchase recommendations are provided through [Azure Advisor](../../advisor/advisor-reference-cost-recommendations.md#reserved-instances), and through the savings plan purchase experience in the Azure portal. The recommended commitment is calculated for the highest possible usage, and it's based on your historical usage. Your recommendation might not be for 100% utilization if you have inconsistent usage. To maximize savings with savings plans, try to make a savings plan commitment that's as close to the recommendation as possible.
The following steps define how recommendations are calculated:
-1. The recommendation engine evaluates the hourly usage for your resources in the given scope over the past 7, 30, and 60 days.
+1. The recommendation engine evaluates the hourly on-demand usage for your resources in the given scope over the past 7, 30, and 60 days. Usage covered by existing reservations or savings plans is excluded.
2. Based on the usage data, the engine simulates your costs with and without a savings plan. 3. The costs are simulated for different commitment amounts, and the commitment amount that maximizes the savings is recommended. 4. The recommendation calculations include any discounts that you might have on your on-demand usage rates. ## Purchase recommendations in the Azure portal
-The savings plan purchase experience shows up to 10 commitment amounts. All recommendations are based on the last 30 days of usage. For each amount, we include the percentage (off of your current pay-as-you-go costs) that the amount could save you. The percentage of your total compute usage that would be covered with the commitment amount is also included.
+The savings plan purchase experience shows up to 10 commitment amounts. All recommendations are based on the last 30 days of usage. For each amount, we include the percentage (off your current pay-as-you-go costs) that the amount could save you. The percentage of your total compute usage that would be covered with the commitment amount is also included.
By default, the recommendations are for the entire billing scope (billing account or billing profile for MCA and enrollment for EA). You can view subscription and resource group-level recommendations by restricting benefit application to one of those levels. We don't currently support management group-level recommendations.
The minimum value doesn't necessarily represent the hourly commitment necessary
When appropriate, a savings plan purchase recommendation can also be found in Azure Advisor. Keep in mind the following points: - The savings plan recommendations are for a single-subscription scope. If you want to see recommendations for the entire billing scope (billing account or billing profile), then:
- - In the Azure portal, navigate to Savings plans > **Add** and then select the type that you want to see the recommendations for.
+ - In the Azure portal, navigate to **Savings plans** > **Add** and then select the type that you want to see the recommendations for.
- Recommendations available in Advisor consider your past 30-day usage trend. - The recommendation is for a three-year savings plan. - The recommendation calculations include any special discounts that you might have on your on-demand usage rates.
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
Title: What are Azure savings plans for compute?
+ Title: What is Azure savings plans for compute?
description: Learn how Azure savings plans help you save money by committing to an hourly spend on a one-year or three-year plan for Azure compute resources.
Previously updated : 10/20/2022 Last updated : 11/04/2022
-# What are Azure savings plans for compute?
+# What is Azure savings plans for compute?
Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan helps you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
Each hour with savings plan, your compute usage is discounted until you reach yo
You can acquire a savings plan by making a new commitment, or you can trade in one or more active reservations for a savings plan. When you acquire a savings plan with a reservation trade-in, the reservation is canceled. The prorated residual value of the unused reservation benefit is converted to the equivalent hourly commitment for the savings plan. The resulting commitment may not be sufficient for your needs; you can't reduce it, but you can increase it to cover your needs.
-After you purchase a savings plan, the discount automatically applies to matching resources. Savings plans provide a billing discount and don't affect the runtime state of your resources.
-
-You can pay for a savings plan up front or monthly. The total cost of up-front and monthly savings plan is the same and you don't pay any extra fees when you choose to pay monthly.
-
-You can buy a savings plan in the Azure portal.
+Currently, you can only acquire savings plans in the Azure portal. You can pay for a savings plan up front or monthly. The total cost of up-front and monthly savings plan is the same. You don't pay any extra fees when you choose to pay monthly. After you purchase a savings plan, the discount automatically applies to matching resources. Savings plans provide a billing discount and don't affect the runtime state of your resources.
## Why buy a savings plan? If you have consistent compute spend, buying a savings plan gives you the option to reduce your costs. For example, when you continuously run instances of a service without a savings plan, you're charged at pay-as-you-go rates. When you buy a savings plan, your compute usage is immediately eligible for the savings plan discount. Your discounted rates add up to the commitment amount. Usage covered by a savings plan receives discounted rates, not the pay-as-you-go rates.
-## How savings plan discount is applied
+## Decide between a savings plan and a reservation
+
+Azure provides you with two ways to save on your usage by committing for one or three years. You have the freedom to choose the savings options that best align with your workload patterns.
+
+With reservations, you commit to a specific virtual machine type in a particular Azure region. For example, a D2v4 VM in Japan East for one year. With Azure savings plan, you commit to spend a fixed hourly amount collectively on compute services. For example, $5.00/hour on compute services for one year. Reservations only apply to the identified compute service and region combination. Savings plan benefits are applicable to all usage from participating compute services across the globe, up to the hourly commitment.
+
+For highly stable workloads that run continuously and where you have no expected changes to the machine series or region, consider a reservation. Reservations provide the greatest savings.
+
+For dynamic workloads where you need to run different sized virtual machines or that frequently change datacenter regions, consider a compute savings plan. Savings plans provide flexible benefit application and automatic optimization.
+
+## How savings plan discounts are applied
Almost immediately after purchase, the savings plan benefit begins to apply, with no further action required from you. Every hour, we apply the benefit to savings plan-eligible meters that are within the savings plan's scope. The benefits are applied to the meter with the greatest discount percentage first. The savings plan scope determines where the savings plan benefit applies.
For more information about how discount is applied, see [Savings plan discount a
For more information about how savings plan scope works, see [Scope savings plans](buy-savings-plan.md#scope-savings-plans).
-## Determine what to purchase
+## Determine your savings plan commitment
+
+On-demand usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services is eligible for savings plan benefits. It's important to consider your usage when you determine your hourly commitment. Azure provides [commitment recommendations](purchase-recommendations.md) based on usage from your last 30 days. The recommendations are found in:
-Usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions and Azure app services are eligible for savings plan benefits. Consider savings plan purchases based on your consistent compute usage. You can determine your optimal commitment by analyzing your usage data or by using the savings plan recommendation. Recommendations are available in:
+- [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/score)
+- Savings plan purchase experience in the [Azure portal](https://portal.azure.com/)
+- [Benefit Recommendation APIs](/rest/api/cost-management/benefit-recommendations/list)
-- Azure Advisor (VMs only)-- Savings plan purchase experience in the Azure portal-- Cost Management Power BI app-- APIs
+You can also analyze your usage data to determine a different hourly commitment.
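As an illustration, the Benefit Recommendation APIs can be called with `az rest`; the sketch below uses a subscription scope, and both the scope and the api-version are assumptions to verify against the API reference.

```azurecli
# Illustrative only: list savings plan benefit recommendations for a
# subscription scope. <SUBSCRIPTION_ID> and the api-version are placeholders.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.CostManagement/benefitRecommendations?api-version=2022-10-01"
```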
For more information, see [Choose an Azure saving plan commitment amount](choose-commitment-amount.md).
-## Buying a savings plan
+## Buy a savings plan
You can purchase savings plans from the Azure portal. For more information, see [Buy a savings plan](buy-savings-plan.md).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 09/20/2022+ Last updated : 10/23/2022 zone_pivot_groups: connect-aws-accounts
The native cloud connector requires:
Defender for Cloud will immediately start scanning your AWS resources and you'll see security recommendations within a few hours. For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
+## CloudFormation deployment source
+
+As part of connecting an AWS account to Microsoft Defender for Cloud, a CloudFormation template should be deployed to the AWS account. This CloudFormation template creates all the required resources so Microsoft Defender for Cloud can connect to the AWS account.
+
+The CloudFormation template should be deployed using Stack (or StackSet if you have a management account).
+
+When deploying the CloudFormation template, the Stack creation wizard offers the following options:
++
+1. **Amazon S3 URL** – Upload the downloaded CloudFormation template to your own S3 bucket with your own security configurations. Enter the URL to the S3 bucket in the AWS deployment wizard.
+
+1. **Upload a template file** – AWS will automatically create an S3 bucket in which the CloudFormation template will be saved. With this automation, the S3 bucket is created with a security misconfiguration that results in the security recommendation "S3 buckets should require requests to use Secure Socket Layer". Apply the following policy to fix this recommendation:
+
+```json
+{
+ "Id": "ExamplePolicy",
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "AllowSSLRequestsOnly",
+ "Action": "s3:*",
+ "Effect": "Deny",
+ "Resource": [
+ "<S3_Bucket ARN>",
+ "<S3_Bucket ARN>/*"
+ ],
+ "Condition": {
+ "Bool": {
+ "aws:SecureTransport": "false"
+ }
+ },
+ "Principal": "*"
+ }
+ ]
+}
+```
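One way to apply the policy is with the AWS CLI, as sketched below. Save the policy, with your bucket's ARN substituted, to a local file first; the bucket name and file name are placeholders.

```bash
# Illustrative only: apply the SSL-only policy to the bucket that holds the
# CloudFormation template. <BUCKET_NAME> and policy.json are placeholders.
aws s3api put-bucket-policy \
  --bucket <BUCKET_NAME> \
  --policy file://policy.json
```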
+ ### Remove 'classic' connectors If you have any existing connectors created with the classic cloud connectors experience, remove them first:
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Enter the following parameters:
| Syslog CEF output format | Description | |--|--| | Priority | User.Alert |
-| Date and time | Date and time that the syslog server machine received the information. (Added by Syslog server) |
-| Hostname | Sensor hostname (Added by Syslog server) |
+| Date and time | Date and time that the sensor sent the information |
+| Hostname | Sensor hostname |
| Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device.<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
This procedure describes how to install OT sensor software on a physical or virt
1. In the `Select monitor interfaces` screen, select the interfaces you want to monitor.
- By default, eno1 is reserved for the management interface. and we recommend that you leave this option unselected.
+ > [!IMPORTANT]
+ > Make sure that you select only interfaces that are connected.
+ > If you select interfaces that are enabled but not connected, the sensor will show a *No traffic monitored* health notification in the Azure portal. If you connect more traffic sources after installation and want to monitor them with Defender for IoT, you can add them via the CLI.
+
+ By default, eno1 is reserved for the management interface and we recommend that you leave this option unselected.
For example:
iot-hub Iot Hub Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-customer-managed-keys.md
IoT Hub supports encryption of data at rest using customer-managed keys (CMK), also known as Bring your own key (BYOK). Azure IoT Hub provides encryption of data at rest and in-transit as it's written in our datacenters; the data is encrypted when read and decrypted when written. >[!NOTE]
->The customer-managed keys feature is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>The customer-managed keys feature is in private preview, and is not currently accepting new customers.
By default, IoT Hub uses Microsoft-managed keys to encrypt the data. With CMK, you can get another layer of encryption on top of default encryption and can choose to encrypt data at rest with a key encryption key, managed through your [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). This gives you the flexibility to create, rotate, disable, and revoke access controls. If BYOK is configured for your IoT Hub, we also provide double encryption, which offers a second layer of protection, while still allowing you to control the encryption key through your Azure Key Vault.
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-batch-scoring-script.md
+
+ Title: 'Author scoring scripts for batch deployments'
+
+description: In this article, learn how to author scoring scripts to perform batch inference in batch deployments.
+++++++ Last updated : 11/03/2022+++
+# Author scoring scripts for batch deployments
++
+Batch endpoints allow you to deploy models to perform inference at scale. Because how inference is executed varies with the model's format, the model's type, and the use case, batch endpoints require a scoring script (also known as a batch driver script) that tells the deployment how to use the model over the provided data. In this article, you'll learn how to use scoring scripts in different scenarios, along with their best practices.
+
+> [!TIP]
+> MLflow models don't require a scoring script, as one is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). Notice that this feature doesn't prevent you from writing a specific scoring script for MLflow models, as explained in [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script).
+
+> [!WARNING]
+> If you are deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and isn't designed for batch execution. Follow this guide to learn how to create one, depending on what your model does.
+
+## Understanding the scoring script
+
+The scoring script is a Python file (`.py`) that contains the logic for how to run the model and read the input data submitted by the batch deployment executor. Each model deployment has to provide a scoring script; however, an endpoint may host multiple deployments that use different scoring script versions.
+
+The scoring script must contain two methods:
+
+#### The `init` method
+
+Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process. Your model's files will be available in an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model.
+
+```python
+import os
+
+def init():
+    global model
+
+    # AZUREML_MODEL_DIR is an environment variable created during deployment
+    # The path "model" is the name of the registered model's folder
+    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+    # load the model with your framework's loading function (placeholder)
+    model = load_model(model_path)
+```
+
+Notice that in this example, the model is placed in a global variable `model`. Use global variables to make any assets needed to perform inference available to your scoring function.
+
+#### The `run` method
+
+Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once for each `mini_batch` generated from your input data. Batch deployments read data in batches according to how the deployment is configured.
+
+```python
+import pandas as pd
+
+def run(mini_batch):
+    results = []
+
+    for file in mini_batch:
+        (...)
+
+    return pd.DataFrame(results)
+```
+
+The method receives a list of file paths as a parameter (`mini_batch`). You can use this list either to iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option depends on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
+
+> [!NOTE]
+> __How is work distributed?__:
+>
+> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file-size distribution.
+
+The `run()` method should return a pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
+
+> [!IMPORTANT]
+> __How to write predictions?__:
+>
+> Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record; use a pandas DataFrame for this case. For file datasets, __we still recommend outputting a pandas DataFrame__, as it provides a more robust way to read the results.
+>
+> Although a pandas DataFrame may contain column names, they are not included in the output file. If needed, see [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+
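+As an illustration, here's a minimal sketch of a `run()` method for tabular data that appends predictions to the original records. The CSV format, the `prediction` column name, and the `model` global loaded in `init()` are assumptions for this example:
+
+```python
+import pandas as pd
+
+def run(mini_batch):
+    scored = []
+
+    for file_path in mini_batch:
+        # read each tabular file in the mini-batch (CSV is an assumed format)
+        data = pd.read_csv(file_path)
+
+        # append the predictions to the original records
+        data["prediction"] = model.predict(data)
+        scored.append(data)
+
+    # a single DataFrame is returned; the deployment appends it to the output file
+    return pd.concat(scored)
+```
+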
+> [!WARNING]
+> Do not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to strings and they will be hard to read.
+
+The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (one file can generate one or many rows/elements in the output). All elements in the result DataFrame or array are written to the output file as-is (provided the `output_action` isn't `summary_only`).
+
+## Writing predictions in a different way
+
+By default, the batch deployment writes the model's predictions in a single file as indicated in the deployment. However, in some cases you need to write the predictions in multiple files. For instance, if the input data is partitioned, you'd typically want the output partitioned too. In those cases, you can [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) to indicate:
+
+> [!div class="checklist"]
+> * The file format used (CSV, Parquet, JSON, and so on).
+> * The way data is partitioned in the output.
+
+Read the article [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) for an example of how to achieve this.
+
+## Source control of scoring scripts
+
+It is highly advisable to put scoring scripts under source control.
+
+## Best practices for writing scoring scripts
+
+When writing scoring scripts that work with large amounts of data, you need to take into account several factors, including:
+
+* The size of each file.
+* The amount of data in each file.
+* The amount of memory required to read each file.
+* The amount of memory required to read an entire batch of files.
+* The memory footprint of the model.
+* The memory footprint of the model when running over the input data.
+* The available memory in your compute.
+
+Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file-size distribution.
+
+### Running inference at the mini-batch, file, or row level
+
+Batch endpoints call the `run()` function in your scoring script once per mini-batch. However, you decide whether to run inference over the entire batch, over one file at a time, or over one row at a time (if your data happens to be tabular).
+
+#### Mini-batch level
+
+You typically want to run inference over the batch all at once when you need high throughput in your batch scoring process. This is the case, for instance, when you run inference on a GPU and want to saturate the inference device. You may also rely on a data loader that can handle batching itself when data doesn't fit in memory, such as the `TensorFlow` or `PyTorch` data loaders. In those cases, consider running inference on the entire batch (see the sketch later in this section).
+
+> [!WARNING]
+> Running inference at the batch level may require close control over the input data size, so you can correctly account for the memory requirements and avoid out-of-memory exceptions. Whether you can load the entire mini-batch in memory depends on the size of the mini-batch, the size of the instances in the cluster, and the number of workers on each node.
+
+For an example of how to achieve this, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
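+
+As an illustration only, here's a minimal sketch of batch-level scoring with a PyTorch `DataLoader`. The `FileListDataset` class, the batch size, and the `model` global loaded in `init()` are assumptions, not part of the batch deployment contract:
+
+```python
+import torch
+from torch.utils.data import DataLoader
+
+def run(mini_batch):
+    # hypothetical dataset class that reads and preprocesses every file in the mini-batch
+    dataset = FileListDataset(mini_batch)
+    loader = DataLoader(dataset, batch_size=64)
+
+    results = []
+    with torch.no_grad():
+        for batch in loader:
+            # score whole tensor batches at once to keep the inference device saturated
+            results.extend(model(batch).tolist())
+
+    return results
+```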
+
+#### File level
+
+One of the easiest ways to perform inference is to iterate over all the files in the mini-batch and run your model over each one. In some cases, like image processing, this may be a good idea. If your data is tabular, you may need to make a good estimate of the number of rows in each file to judge whether your model can handle the memory requirements, not just to load the entire data into memory but also to perform inference over it. Remember that some models (especially those based on recurrent neural networks) unfold and present a memory footprint that may not be linear with the number of rows. If your model is expensive in terms of memory, consider running inference at the row level.
+
+> [!TIP]
+> If file sizes are too big to be read even one at a time, consider breaking the files down into multiple smaller files to allow for better parallelization.
+
+For an example of how to achieve this, see [Image processing with batch deployments](how-to-image-processing-batch.md).
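+
+As a simplified illustration, here's one way file-level scoring for images could look. The preprocessing (resizing to 224×224 and scaling to [0, 1]) and the Keras-style `model.predict` call are assumptions for this example:
+
+```python
+import numpy as np
+import pandas as pd
+from PIL import Image
+
+def run(mini_batch):
+    results = []
+
+    for file_path in mini_batch:
+        # load and preprocess one image at a time (input size is an assumption)
+        image = np.asarray(Image.open(file_path).resize((224, 224))) / 255.0
+
+        # score a single image; model.predict is a Keras-style call (an assumption)
+        prediction = model.predict(image[np.newaxis, ...])
+        results.append({"file": file_path, "class": int(prediction.argmax())})
+
+    return pd.DataFrame(results)
+```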
+
+#### Row level (tabular)
+
+For models that present challenges with the size of their inputs, you may want to consider running inference at the row level. Your batch deployment still provides your scoring script with a mini-batch of files; however, you read one file, one row at a time. This may look inefficient, but for some deep learning models it may be the only way to perform inference without scaling up your hardware requirements.
+
+For an example of how to achieve this, see [Text processing with batch deployments](how-to-nlp-processing-batch.md).
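+
+A minimal sketch of this pattern streams each CSV file in single-row chunks so only a little data is in memory at a time. The file format and the scikit-learn-style `model.predict` call are assumptions:
+
+```python
+import pandas as pd
+
+def run(mini_batch):
+    results = []
+
+    for file_path in mini_batch:
+        # stream the file in one-row chunks instead of loading it whole
+        for chunk in pd.read_csv(file_path, chunksize=1):
+            # score a single row at a time to bound the memory footprint
+            results.append(model.predict(chunk)[0])
+
+    return results
+```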
+
+### Relationship between the degree of parallelism and the scoring script
+
+Your deployment configuration controls the size of each mini-batch and the number of workers on each node. Take them into account when deciding whether to read the entire mini-batch to perform inference. When running multiple workers on the same instance, remember that memory is shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if the data size remains the same), as the following back-of-the-envelope example shows.
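+
+All the numbers here are assumptions, chosen only to illustrate the trade-off:
+
+```python
+# hypothetical sizing: a 16 GB node running 4 workers, scoring
+# mini-batches of 10 files of roughly 300 MB each
+node_memory_gb = 16
+workers_per_node = 4
+files_per_mini_batch = 10
+file_size_gb = 0.3
+
+per_worker_gb = node_memory_gb / workers_per_node    # ~4.0 GB per worker
+mini_batch_gb = files_per_mini_batch * file_size_gb  # ~3.0 GB of raw input
+
+# if the raw input alone nearly fills a worker's share of memory, there is
+# little headroom left for the model; shrink the mini-batch or reduce workers
+print(f"{per_worker_gb:.1f} GB per worker vs {mini_batch_gb:.1f} GB per mini-batch")
+```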
+
+## Next steps
+
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
+* [Use MLflow models in batch deployments](how-to-mlflow-batch.md).
+* [Image processing with batch deployments](how-to-image-processing-batch.md).
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
You will typically select this workflow when:
> [!IMPORTANT] > If you choose to indicate an scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+> [!WARNING]
+> Customizing the scoring script for MLflow deployments is only available from the Azure CLI or SDK for Python. If you are creating a deployment using the [Azure ML studio UI](https://ml.azure.com), please switch to the CLI or the SDK.
++ ### Steps Use the following steps to deploy an MLflow model with a custom scoring script.
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-secure-batch-endpoint.md
When deploying a machine learning model to a batch endpoint, you can secure thei
All the batch endpoints created inside of secure workspace are deployed as private batch endpoints by default. No further configuration is required. > [!IMPORTANT]
-> When working on a private link-enabled workspaces, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Please use the Azure ML CLI v2 instead for job creation. For more details about how to use it see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-scoring-job).
+> When working on a private link-enabled workspaces, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Please use the Azure ML CLI v2 instead for job creation. For more details about how to use it see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-job).
The following diagram shows how the networking looks like for batch endpoints when deployed in a private workspace:
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-batch-endpoint.md
In this article, you will learn how to use batch endpoints to do batch scoring.
[!INCLUDE [basic cli prereqs](../../../includes/machine-learning-cli-prereqs.md)] + ### About this example On this example, we are going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem to perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we are going to create a batch deployment with a model created using Torch. Such deployment will become our default one in the endpoint. On the second half, [we are going to see how we can create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
-### Clone the example repository
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch` if you are using the Azure CLI, or `sdk/endpoints/batch` if you are using our SDK for Python.
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
### Create compute
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
## Create a scoring script
-Batch deployments require a scoring script that indicates how the given model should be executed and how input data must be processed. For MLflow models this scoring script is not required as it is automatically generated by Azure Machine Learning. If your model is an MLflow model, you can skip this step.
-
-> [!TIP]
-> For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-
-In this case, we are deploying a model that read image files representing digits and outputs the corresponding digit. The scoring script looks as follows:
--
-### Understanding the scoring script
-
-The scoring script is a Python file (`.py`) that contains the logic about how to run the model and read the input data submitted by the batch deployment executor driver. It must contain two methods:
-
-#### The `init` method
-
-Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process. You model's files will be available in an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model.
-
-#### The `run` method
-
-Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. Such method will be called once per each `mini_batch` generated for your input data. Batch deployments read data in batches accordingly to how the deployment is configured.
+Batch deployments require a scoring script that indicates how the given model should be executed and how input data must be processed. In this case, we are deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script looks as follows:
> [!NOTE]
-> __How is work distributed?__:
->
-> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this will happen regardless of the size of the files involved. If your files are too big to be processed in large mini-batches we suggest to either split the files in smaller files to achieve a higher level of parallelism or to decrease the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file's size distribution.
-
-The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option will depend on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
-
-The `run()` method should return a pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
-
-> [!IMPORTANT]
-> __How to write predictions?__:
->
-> Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record. Use a pandas DataFrame for this case. For file datasets, __we still recommend to output a pandas DataFrame__ as they provide a more robust approach to read the results.
->
-> Although pandas DataFrame may contain column names, they are not included in the output file. If needed, please see [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+> For MLflow models this scoring script is not required as it is automatically generated by Azure Machine Learning. If your model is an MLflow model, you can skip this step. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-> [!WARNING]
-> Do not not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to string and they will be hard to read.
+> [!TIP]
+> For more information about how to write scoring scripts and best practices, see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md).
-The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (1 file can generate 1 or many rows/elements in the output). All elements in the result DataFrame or array will be written to the output file as-is (considering the `output_action` isn't `summary_only`).
## Create a batch deployment
A deployment is a set of resources required for hosting the model that does the
| `endpoint_name` | The name of the endpoint to create the deployment under. | | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](../reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. | | `code_configuration.code.path` | The local directory that contains all the Python source code to score the model. |
- | `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](#understanding-the-scoring-script). |
+ | `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](how-to-batch-scoring-script.md#understanding-the-scoring-script). |
| `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](../reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. | | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using `azureml:<compute-name>` syntax. | | `resources.instance_count` | The number of instances to be used for each batch scoring job. |
A deployment is a set of resources required for hosting the model that does the
:::image type="content" source="../media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+
+
+ > [!NOTE]
+ > __How is work distributed?__:
+ >
+ > Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file-size distribution.
1. Check batch endpoint and deployment details.
A deployment is a set of resources required for hosting the model that does the
:::image type="content" source="../media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
-## Invoke the batch endpoint to start a batch scoring job
+## Invoke the batch endpoint to start a batch job
Invoking a batch endpoint triggers a batch scoring job. A job `name` is returned from the invoke response and can be used to track the batch scoring progress. The batch scoring job runs for a period of time. It splits the entire input into multiple `mini_batch` chunks and processes them in parallel on the compute cluster. The batch scoring job outputs are stored in cloud storage, either in the workspace's default blob storage, or the storage you specified.
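
For reference, here's a hedged sketch of triggering a batch scoring job with the Azure ML SDK v2 for Python. The workspace details, endpoint name, and input path are placeholders, and the exact `invoke` parameter names may vary by SDK version:

```python
from azure.ai.ml import MLClient, Input
from azure.identity import DefaultAzureCredential

# connect to the workspace (all identifiers below are placeholders)
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# invoke the endpoint over a folder of input files to start a batch scoring job
job = ml_client.batch_endpoints.invoke(
    endpoint_name="<ENDPOINT_NAME>",
    input=Input(type="uri_folder", path="azureml://datastores/<DATASTORE>/paths/<PATH>"),
)

# the returned job name can be used to track the scoring progress
print(job.name)
```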
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
You can use the tool [jq](https://stedolan.github.io/jq/) to parse the JSON resu
### Upload & register code
-Now that you have the datastore, you can upload the scoring script. For more information about how to author the scoring script, see [Understanding the scoring script](batch-inference/how-to-use-batch-endpoint.md#understanding-the-scoring-script). Use the Azure Storage CLI to upload a blob into your default container:
+Now that you have the datastore, you can upload the scoring script. For more information about how to author the scoring script, see [Understanding the scoring script](batch-inference/how-to-batch-scoring-script.md#understanding-the-scoring-script). Use the Azure Storage CLI to upload a blob into your default container:
:::code language="rest-api" source="~/azureml-examples-main/cli/batch-score-rest.sh" id="upload_code":::
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `model` | string or object | **Required.** The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. | | | | `code_configuration` | object | Configuration for the scoring code logic. <br><br> This property is not required if your model is in MLflow format. | | | | `code_configuration.code` | string | The local directory that contains all the Python source code to score the model. | | |
-| `code_configuration.scoring_script` | string | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](batch-inference/how-to-use-batch-endpoint.md#understanding-the-scoring-script).| | |
+| `code_configuration.scoring_script` | string | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](batch-inference/how-to-batch-scoring-script.md#understanding-the-scoring-script).| | |
| `environment` | string or object | The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> This property is not required if your model is in MLflow format. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | | | `compute` | string | **Required.** Name of the compute target to execute the batch scoring jobs on. This value should be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | | | | `resources.instance_count` | integer | The number of nodes to use for each batch scoring job. | | `1` |
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, crea
- Restoring a deleted server isn't supported. - Cross region restore isn't supported.
-### Other features
-
-* Azure AD authentication isn't yet supported. We recommend using the [Single Server](../overview-single-server.md) option if you require Azure AD authentication.
-* Read replicas aren't yet supported. We recommend using the [Single Server](../overview-single-server.md) option if you require read replicas.
-* Moving resources to another subscription isn't supported.
- ## Next steps
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 08/25/2022 Last updated : 11/05/2022 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 08/25/2022
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL.
-## Release: September 2022
+## Release: October 2022
-* Support for [Fast Restore](./concepts-backup-restore.md)
-* General availability of [Geo-Redundant Backups](./concepts-backup-restore.md)
-
-Please see the [regions](overview.md#azure-regions) where Geo-redundant backup is currently available.
+* Support for the [Read Replica](./concepts-read-replicas.md) feature in public preview.
+* Support for [Azure Active Directory](concepts-azure-ad-authentication.md) authentication in public preview.
+* Support for [Customer managed keys](concepts-data-encryption.md) in public preview.
+* Published [Security and compliance certifications](./concepts-compliance.md) for Flexible Server.
+* Postgres 14 is now the default PostgreSQL version.
+
+## Release: September 2022
+* Support for the [Fast Restore](./concepts-backup-restore.md) feature.
+* General availability of [Geo-Redundant Backups](./concepts-backup-restore.md). See the [regions](overview.md#azure-regions) where Geo-redundant backup is currently available.
## Release: August 2022
purview Concept Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md
This article discusses concepts related to managing access to data sources in yo
> This capability is different from access control for Microsoft Purview itself, which is described in [Access control in Microsoft Purview](./catalog-permissions.md). ## Overview
-Access to system metadata is crucial for database administrators and other DevOps users to perform their job. That access can be granted and revoked efficiently and at-scale through Microsoft Purview DevOps policies.
+Access to system metadata is crucial for IT operations and other DevOps personnel to perform their jobs. That access can be granted and revoked efficiently and at scale through Microsoft Purview DevOps policies.
### Microsoft Purview access policies vs. DevOps policies Microsoft Purview access policies enable customers to manage access to different data systems across their entire data estate, all from a central location in the cloud. These policies are access grants that can be created through Microsoft Purview Studio, avoiding the need for code. They dictate whether a set of Azure AD principals (users, groups, etc.) should be allowed or denied a specific type of access to a data source or asset within it. These policies get communicated to the data sources where they get natively enforced.
purview How To Policies Devops Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md
Previously updated : 10/11/2022 Last updated : 11/04/2022 # Provision access to Arc-enabled SQL Server for DevOps actions (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved, they're automatically published and then enforced by the data source.
This how-to guide covers how to provision access from Microsoft Purview to Arc-enabled SQL Server system metadata (DMVs and DMFs) via *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD accounts only.
Check the blog and related docs
* Video: [Microsoft Purview DevOps policies on data sources and resource groups](https://youtu.be/YCDJagrgEAI) * Video: [Reduce the effort with Microsoft Purview DevOps policies on resource groups](https://youtu.be/yMMXCeIFCZ8) * Doc: [Microsoft Purview DevOps policies on Azure SQL DB](./how-to-policies-devops-azure-sql-db.md)
-* Blog: [Deep dive on SQL Performance Monitor and SQL Security Auditor permissions](https://techcommunity.microsoft.com/t5/sql-server-blog/new-granular-permissions-for-sql-server-2022-and-azure-sql-to/ba-p/3607507)
purview How To Policies Devops Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-authoring-generic.md
Previously updated : 10/11/2022 Last updated : 11/04/2022 # Create, list, update and delete Microsoft Purview DevOps policies (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved, they're automatically published and then enforced by the data source.
This how-to guide covers how to provision access from Microsoft Purview to SQL-type data sources via *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD Accounts only.
purview How To Policies Devops Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-azure-sql-db.md
Previously updated : 10/11/2022 Last updated : 11/04/2022 # Provision access to Azure SQL Database for DevOps actions (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after publishing, they get enforced by the data source.
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved, they're automatically published and then enforced by the data source.
This how-to guide covers how to provision access from Microsoft Purview to Azure SQL Database system metadata (DMVs and DMFs) via *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD Accounts only.
Check the blog and related docs
* Video: [Microsoft Purview DevOps policies on data sources and resource groups](https://youtu.be/YCDJagrgEAI) * Video: [Reduce the effort with Microsoft Purview DevOps policies on resource groups](https://youtu.be/yMMXCeIFCZ8) * Doc: [Microsoft Purview DevOps policies on Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md)
-* Blog: [Deep dive on SQL Performance Monitor and SQL Security Auditor permissions](https://techcommunity.microsoft.com/t5/sql-server-blog/new-granular-permissions-for-sql-server-2022-and-azure-sql-to/ba-p/3607507)
+
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Previously updated : 06/17/2022 Last updated : 11/04/2022 # What's available in the Microsoft Purview governance portal? Microsoft Purview's solutions in the governance portal provide a unified data governance service that helps you manage your on-premises, multicloud, and software-as-a-service (SaaS) data. The Microsoft Purview governance portal allows you to: - Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. -- Enable data curators to manage and secure your data estate.
+- Enable data curators and security administrators to manage and keep your data estate secure.
- Empower data consumers to find valuable, trustworthy data. :::image type="complex" source="./media/overview/high-level-overview.png" alt-text="Graphic showing Microsoft Purview's high-level architecture." lightbox="./media/overview/high-level-overview-large.png":::
- Chart showing the high-level architecture of Microsoft Purview. Multicloud and on-premises sources flow into Microsoft Purview, and Microsoft Purview's apps (Data Catalog, Map, Data Estate Insights, Policy, and Data Sharing) allow data consumers and data curators to view and manage metadata, share data, and protect assets. This metadata is also being ported to external analytics services from Microsoft Purview for more processing.
+ Chart shows the high-level architecture of Microsoft Purview. Multicloud and on-premises sources flow into Microsoft Purview's Data Map. On top of it, Microsoft Purview's apps (Data Catalog, Data Estate Insights, Data Policy, and Data Sharing) allow data consumers, data curators and security administrators to view and manage metadata, share data, and protect assets. This metadata is also ported to external analytics services from Microsoft Purview for more processing.
:::image-end::: >[!TIP] > Looking to govern your data in Microsoft 365 by keeping what you need and deleting what you don't? Use [Microsoft Purview Data Lifecycle Management](/microsoft-365/compliance/data-lifecycle-management).
-Microsoft Purview automates data discovery by providing data scanning and classification as a service for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
+The [Data Map](#data-map): Microsoft Purview automates data discovery by providing data scanning and classification as a service for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
|App |Description | |-|--|
-|[Data Map](#data-map) | Makes your data meaningful by graphing your data assets, and their relationships, across your data estate. The data map used to discover data and manage access to that data. |
|[Data Catalog](#data-catalog) | Finds trusted data sources by browsing and searching your data assets. The data catalog aligns your assets with friendly business terms and data classification to identify data sources. | |[Data Estate Insights](#data-estate-insights) | Gives you an overview of your data estate to help you discover what kinds of data you have and where. | |[Data Sharing](#data-sharing) | Allows you to securely share data internally or cross organizations with business partners and customers. |
+|[Data Policy](#data-policy) | A set of central, cloud-based experiences that help you provision access to data securely and at scale. |
## Data Map
Microsoft Purview Data Sharing enables organizations to securely share data both
For more information, see our [introduction to Data Sharing](concept-data-share.md).
-## Discovery challenges for data consumers
+## Data Policy
+Microsoft Purview Data Policy is a set of central, cloud-based experiences that help you manage access to data sources and datasets securely and at scale.
+Benefits:
+* Structure and simplify the process of granting and revoking access.
+* Reduce the effort of access provisioning.
+* Access decisions in Microsoft data systems carry a negligible latency penalty.
+* Enhanced security:
+ - Easier to review and revoke access in a central rather than a distributed access provisioning model.
+ - Reduced need for privileged accounts to configure access.
+ - Supports the principle of least privilege (give people the appropriate level of access, limited to the minimum permissions and the fewest data objects).
+
+For more information, see our introductory guides:
+* [Data owner access policies](concept-policies-data-owner.md) (preview): Provision fine-grained to broad access for users and groups via an intuitive authoring experience.
+* [Self-service access policies](concept-self-service-data-access-policy.md) (preview): Workflow approval and automatic provisioning of access requests initiated by business analysts who discover data assets in Microsoft Purview's catalog.
+* [DevOps policies](concept-policies-devops.md) (preview): Provision access to system metadata for IT operations and other DevOps personnel, supporting typical functions like SQL Performance Monitor and SQL Security Auditor.
+
+## Traditional challenges that Microsoft Purview seeks to address
+
+### Challenges for data consumers
Traditionally, discovering enterprise data sources has been an organic process based on communal knowledge. For companies that want the most value from their information assets, this approach presents many challenges:
Traditionally, discovering enterprise data sources has been an organic process b
* If users have questions about an information asset, they must locate the expert, or team responsible for that data and engage them offline. There's no explicit connection between the data and the experts that understand the data's context. * Unless users understand the process for requesting access to the data source, discovering the data source and its documentation won't help them access the data.
-## Discovery challenges for data producers
+### Challenges for data producers
Although data consumers face the previously mentioned challenges, users who are responsible for producing and maintaining information assets face challenges of their own:
Although data consumers face the previously mentioned challenges, users who are
When such challenges are combined, they present a significant barrier for companies that want to encourage and promote the use and understanding of enterprise data.
-## Discovery challenges for security administrators
+### Challenges for security administrators
Users who are responsible for ensuring the security of their organization's data may have any of the challenges listed above as data consumers and producers, and the following extra challenges:
Discovering and understanding data sources and their use is the primary purpose
At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users.
+Lastly, the Microsoft Purview Data Policy app leverages the metadata in the Data Map, providing a solution to help keep your data secure.
## Next steps
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
Although semantic search is not beneficial in every scenario, certain content ca
Semantic search and spell check are available on services that meet the criteria in the table below. To use semantic search, you first need to [enable the capabilities](#enable-semantic-search) on your search service. | Feature | Tier | Region | Sign up | Pricing |
-|||--||-|
+|||--|||
| Semantic search | Standard tier (S1, S2, S3, S3 HD), Storage Optimized tier (L1, L2) | [Region availability](https://azure.microsoft.com/global-infrastructure/services/?products=search)| Required | [Pricing](https://azure.microsoft.com/pricing/details/search/) <sup>1</sup>| | Spell check | Basic <sup>2</sup> and above | All | None | None (free) |
-<sup>1</sup> At lower query volumes (under 1000 monthly), semantic search is free. To go above that limit, you can opt in to the semantic search standard pricing plan. The pricing page shows you the semantic query billing rate for different currencies and intervals.
+<sup>1</sup> On the pricing page, scroll down to view additional features that are billed separately. At lower query volumes (under 1000 monthly), semantic search is free. To go above that limit, you can opt in to the semantic search standard pricing plan. The pricing page shows you the semantic query billing rate for different currencies and intervals.
<sup>2</sup> Due to the provisioning mechanisms and lifespan of shared (free) search services, a small number of services happen to have spell check on the free tier. However, spell check availability on free tier services is not guaranteed and should not be expected.
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
Microsoft Sentinel provides the following out-of-the-box, product-specific DNS p
| **Source** | **Notes** | **Parser** | | | - |
-| **Normalized DNS Logs** | Any event normalized at ingestion to the `ASimDnsActivityLogs` table. | `_Im_Dns_Native` |
+| **Normalized DNS Logs** | Any event normalized at ingestion to the `ASimDnsActivityLogs` table. The DNS connector for the Azure Monitor Agent uses the `ASimDnsActivityLogs` table and is supported by the `_Im_Dns_Native` parser. | `_Im_Dns_Native` |
| **Azure Firewall** | | `_Im_Dns_AzureFirewallVxx` | | **Cisco Umbrella** | | `_Im_Dns_CiscoUmbrellaVxx` | | **Corelight Zeek** | | `_Im_Dns_CorelightZeekVxx` | | **GCP DNS** | | `_Im_Dns_GcpVxx` | | - **Infoblox NIOS**<br> - **BIND**<br> - **BlucCat** | The same parsers support multiple sources. | `_Im_Dns_InfobloxNIOSVxx` |
-| **Microsoft DNS Server** | Collected by the DNS connector and the Log Analytics Agent. | `_Im_Dns_MicrosoftOMSVxx` |
-| **Microsoft DNS Server** | Collected by NXlog. | `_Im_Dns_MicrosoftNXlogVxx` |
-| **Sysmon for Windows** (event 22) | Collected by the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables. | `_Im_Dns_MicrosoftSysmonVxx` |
+| **Microsoft DNS Server** | Collected using:<br>- DNS connector for the Log Analytics Agent<br>- DNS connector for the Azure Monitor Agent<br>- NXlog | `_Im_Dns_MicrosoftOMSVxx`<br>`_Im_Dns_Native` (see Normalized DNS Logs)<br>`_Im_Dns_MicrosoftNXlogVxx` |
+| **Sysmon for Windows** (event 22) | Collected using:<br>- the Log Analytics Agent<br>- the Azure Monitor Agent<br><br>For both agents, collection to both the `Event` and `WindowsEvent` tables is supported. | `_Im_Dns_MicrosoftSysmonVxx` |
| **Vectra AI** | |`_Im_Dns_VectraIAVxx` | | **Zscaler ZIA** | | `_Im_Dns_ZscalerZIAVxx` | ||||
Deploy the workspace deployed parsers from the [Microsoft Sentinel GitHub reposi
Microsoft Sentinel provides the following out-of-the-box, product-specific File Activity parsers: -- **Sysmon file activity events** (Events 11, 23, and 26), collected using the Log Analytics Agent or Azure Monitor Agent.
+- **Windows file activity**
+ - Reported by **Windows (event 4663)**:
+ - Collected using the Log Analytics Agent-based Security Events connector to the SecurityEvent table.
+ - Collected using the Azure Monitor Agent-based Security Events connector to the SecurityEvent table.
+ - Collected using the Azure Monitor Agent-based WEF (Windows Event Forwarding) connector to the WindowsEvent table.
+ - Reported using **Sysmon file activity events** (Events 11, 23, and 26):
+ - Collected using the Log Analytics Agent to the Event table.
+ - Collected using the Azure Monitor Agent-based WEF (Windows Event Forwarding) connector to the WindowsEvent table.
+ - Reported by **Microsoft 365 Defender for Endpoint**, collected using the Microsoft 365 Defender connector.
- **Microsoft Office 365 SharePoint and OneDrive events**, collected using the Office Activity connector.-- **Microsoft 365 Defender for Endpoint file events** - **Azure Storage**, including Blob, File, Queue, and Table Storage. Deploy the parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/ASimFileEvent).
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| **Source** | **Notes** | **Parser** | | | | |
+| **Normalized Network Session Logs** | Any event normalized at ingestion to the `ASimNetworkSessionLogs` table. The Firewall connector for the Azure Monitor Agent uses the `ASimNetworkSessionLogs` table and is supported by the `_Im_NetworkSession_Native` parser. | `_Im_NetworkSession_Native` |
| **AppGate SDP** | IP connection logs collected using Syslog. | `_Im_NetworkSession_AppGateSDPVxx` | | **AWS VPC logs** | Collected using the AWS S3 connector. | `_Im_NetworkSession_AWSVPCVxx` | | **Azure Firewall logs** | |`_Im_NetworkSession_AzureFirewallVxx`| | **Azure Monitor VMConnection** | Collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md). | `_Im_NetworkSession_VMConnectionVxx` | | **Azure Network Security Groups (NSG) logs** | Collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md). | `_Im_NetworkSession_AzureNSGVxx` |
-| **Checkpoint Firewall-1** | Collected using CEF. | `_Im_NetworkSession_CheckPointFirewallVxx`* |
-| **Cisco ASA** | Collected using the CEF connector. | `_Im_NetworkSession_CiscoASAVxx`* |
+| **Checkpoint Firewall-1** | Collected using CEF. | `_Im_NetworkSession_CheckPointFirewallVxx` |
+| **Cisco ASA** | Collected using the CEF connector. | `_Im_NetworkSession_CiscoASAVxx` |
| **Cisco Meraki** | Collected using the Cisco Meraki API connector. | `_Im_NetworkSession_CiscoMerakiVxx` |
-| **Corelight Zeek** | Collected using the Corelight Zeek connector. | `_im_NetworkSession_CorelightZeekVxx`* |
+| **Corelight Zeek** | Collected using the Corelight Zeek connector. | `_im_NetworkSession_CorelightZeekVxx` |
| **Fortigate FortiOS** | IP connection logs collected using Syslog. | `_Im_NetworkSession_FortinetFortiGateVxx` | | **Microsoft 365 Defender for Endpoint** | | `_Im_NetworkSession_Microsoft365DefenderVxx`|
-| **Microsoft Defender for IoT - Endpoint** | | `_Im_NetworkSession_MD4IoTVxx` |
+| **Microsoft Defender for IoT micro agent** | | `_Im_NetworkSession_MD4IoTAgentVxx` |
+| **Microsoft Defender for IoT sensor** | | `_Im_NetworkSession_MD4IoTSensorVxx` * |
| **Palo Alto PanOS traffic logs** | Collected using CEF. | `_Im_NetworkSession_PaloAltoCEFVxx` | | **Sysmon for Linux** (event 3) | Collected using the Log Analytics Agent<br> or the Azure Monitor Agent. |`_Im_NetworkSession_LinuxSysmonVxx` | | **Vectra AI** | | `_Im_NetworkSession_VectraIAVxx` | | **Windows Firewall logs** | Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. | `_Im_NetworkSession_MicrosoftWindowsEventFirewallVxx`|
-| **Watchguard FirewareOW** | Collected using Syslog. | `_Im_NetworkSession_WatchGuardFirewareOSVxx`* |
+| **Watchguard FirewareOS** | Collected using Syslog. | `_Im_NetworkSession_WatchGuardFirewareOSVxx` |
| **Zscaler ZIA firewall logs** | Collected using CEF. | `_Im_NetworkSessionZscalerZIAVxx` | Note that the parsers marked with (*) are available for deployment from GitHub and are not yet built into workspaces.
Microsoft Sentinel provides the following out-of-the-box, product-specific Web S
| **Source** | **Notes** | **Parser** | | | | |
-|**Squid Proxy** | | `_Im_WebSession_SquidProxyVxx` |
+| **Squid Proxy** | | `_Im_WebSession_SquidProxyVxx` |
| **Vectra AI Streams** | | `_Im_WebSession_VectraAIVxx` | | **Zscaler ZIA** | Collected using CEF | `_Im_WebSessionZscalerZIAVxx` |
spring-apps How To Enable Redundancy And Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-redundancy-and-disaster-recovery.md
Title: Enable redundancy and disaster recovery for Azure Spring Apps
-description: Learn how to protect your Spring Apps application from zonal and regional outages
+description: Learn how to protect your Spring Apps application from zonal and regional outages.
Azure Spring Apps currently supports availability zones in the following regions
The following limitations apply when you create an Azure Spring Apps Service instance with zone redundancy enabled: -- Zone redundancy is not available in basic tier.
+- Zone redundancy isn't available in basic tier.
- You can enable zone redundancy only when you create a new Azure Spring Apps Service instance. - If you enable your own resource in Azure Spring Apps, such as your own persistent storage, make sure to enable zone redundancy for the resource. For more information, see [How to enable your own persistent storage in Azure Spring Apps](how-to-custom-persistent-storage.md).-- Zone redundancy ensures that underlying VM nodes are distributed evenly across all availability zones but does not guarantee even distribution of app instances. If an app instance fails because its located zone goes down, Azure Spring Apps creates a new app instance for this app on a node in another availability zone.-- Geo-disaster recovery is not the purpose of zone redundancy. To protect your service from regional outages, see the [Customer-managed geo-disaster recovery](#customer-managed-geo-disaster-recovery) section later in this article.
+- Zone redundancy ensures that underlying VM nodes are distributed evenly across all availability zones but doesn't guarantee even distribution of app instances. If an app instance fails because its located zone goes down, Azure Spring Apps creates a new app instance for this app on a node in another availability zone.
+- Geo-disaster recovery isn't the purpose of zone redundancy. To protect your service from regional outages, see the [Customer-managed geo-disaster recovery](#customer-managed-geo-disaster-recovery) section later in this article.
## Create an Azure Spring Apps instance with zone redundancy enabled
To verify the zone redundancy property of an Azure Spring Apps instance using th
## Pricing
-There's no additional cost associated with enabling zone redundancy. You only need to pay for Standard or Enterprise tier, which is required to enable zone redundancy.
+There's no extra cost associated with enabling zone redundancy. You only need to pay for Standard or Enterprise tier, which is required to enable zone redundancy.
## Customer-managed geo-disaster recovery
To plan your application, it's helpful to understand the following information a
- An Azure geography is a defined area of the world that contains at least one Azure region. - An Azure region is an area within a geography containing one or more data centers.
-Most Azure regions are paired with another region within the same geography, together making a regional pair. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair is updated at a time. In the event of an outage affecting multiple regions, at least one region in each pair is prioritized for recovery.
+Most Azure regions are paired with another region within the same geography, together making a regional pair. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair is updated at a time. If an outage affects multiple regions, at least one region in each pair is prioritized for recovery.
To ensure high availability and protection from disasters, deploy your applications hosted in Azure Spring Apps to multiple regions. Azure provides a list of paired regions so that you can plan your app deployments accordingly. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](../availability-zones/cross-region-replication-azure.md).
- Consider the following three key factors when you design your architecture:
+ Consider the following key factors when you design your architecture:
- Region availability. To minimize network lag and transmission time, choose a region that supports Azure Spring Apps zone redundancy, or a geographic area close to your users. - Azure paired regions. To ensure coordinated platform updates and prioritized recovery efforts if needed, choose paired regions within your chosen geographic area.
To ensure high availability and protection from disasters, deploy your applicati
### Use Azure Traffic Manager to route traffic
-Azure Traffic Manager provides DNS-based traffic load-balancing and can distribute network traffic across multiple regions. Use Azure Traffic Manager to direct customers to the closest Azure Spring Apps service instance. For best performance and redundancy, direct all application traffic through Azure Traffic Manager before sending it to your Azure Spring Apps service instance. For more information, see [What is Traffic Manager?](../traffic-manager/traffic-manager-overview.md)
+Azure Traffic Manager provides DNS-based traffic load balancing and can distribute network traffic across multiple regions. Use Azure Traffic Manager to direct customers to the closest Azure Spring Apps service instance. For best performance and redundancy, direct all application traffic through Azure Traffic Manager before sending it to your Azure Spring Apps service instance. For more information, see [What is Traffic Manager?](../traffic-manager/traffic-manager-overview.md)
If you have applications in Azure Spring Apps running in multiple regions, Azure Traffic Manager can control the flow of traffic to your applications in each region. Define an Azure Traffic Manager endpoint for each service instance using the instance IP. You should connect to an Azure Traffic Manager DNS name pointing to the Azure Spring Apps service instance. Azure Traffic Manager load balances traffic across the defined endpoints. If a disaster strikes a data center, Azure Traffic Manager directs traffic from that region to its pair, ensuring service continuity.
Use the following steps to create an Azure Traffic Manager instance for Azure Sp
1. Set up a custom domain for the service instances. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md). After successful setup, both service instances will bind to the same custom domain, such as `bcdr-test.contoso.com`.
-1. Create a traffic manager and two endpoints. For instructions, see [Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md), which produces the following Traffic Manager profile:
+1. Create a traffic manager and two endpoints. For instructions, see [Quickstart: Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md), which produces the following Traffic Manager profile:
- Traffic Manager DNS Name: `http://asa-bcdr.trafficmanager.net`
- Endpoint Profiles:
Use the following steps to create an Azure Traffic Manager instance for Azure Sp
The environment is now set up. If you used the example values in the linked articles, you should be able to access the app using `https://bcdr-test.contoso.com`.
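For reference, the same profile and endpoints can also be created with the Az PowerShell module. The following is a hedged sketch rather than the article's own steps; the resource group name, TTL, probe settings, and endpoint targets are assumed placeholder values:

```azurepowershell
# Sketch only (assumed placeholder values): create the Traffic Manager profile
# described above, then register one external endpoint per service instance IP.
New-AzTrafficManagerProfile -Name "asa-bcdr" -ResourceGroupName "my-resource-group" `
    -TrafficRoutingMethod Performance -RelativeDnsName "asa-bcdr" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"

New-AzTrafficManagerEndpoint -Name "endpoint-region-1" -ProfileName "asa-bcdr" `
    -ResourceGroupName "my-resource-group" -Type ExternalEndpoints `
    -Target "<service-instance-1-ip>" -EndpointStatus Enabled

New-AzTrafficManagerEndpoint -Name "endpoint-region-2" -ProfileName "asa-bcdr" `
    -ResourceGroupName "my-resource-group" -Type ExternalEndpoints `
    -Target "<service-instance-2-ip>" -EndpointStatus Enabled
```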
+### Use Azure Front Door and Azure Application Gateway to route traffic
+
+Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. Azure Front Door provides the same multi-geo redundancy and routing to the closest region as Azure Traffic Manager. Azure Front Door also provides advanced features such as TLS protocol termination, application layer processing, and Web Application Firewall (WAF). For more information, see [What is Azure Front Door?](../frontdoor/front-door-overview.md)
+
+The following diagram shows the architecture of a multi-region redundancy, virtual-network-integrated Azure Spring Apps service instance. The diagram shows the correct reverse proxy configuration for Application Gateway and Front Door with a custom domain. This architecture is based on the scenario described in [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md). This approach combines two Application-Gateway-integrated Azure Spring Apps virtual-network-injection instances into a geo-redundant instance.
+
+
## Next steps
-* [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
+- [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md)
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
description: Troubleshoot problems with SMB Azure file shares in Windows. See co
Previously updated : 10/12/2022 Last updated : 11/04/2022
System error 53 or system error 67 can occur if port 445 outbound communication
To check if your firewall or ISP is blocking port 445, use the [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) tool or `Test-NetConnection` cmdlet.
-To use the `Test-NetConnection` cmdlet, the Azure PowerShell module must be installed, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps) for more information. Remember to replace `<your-storage-account-name>` and `<your-resource-group-name>` with the relevant names for your storage account.
+To use the `Test-NetConnection` cmdlet, the Azure PowerShell module must be installed. See [Install Azure PowerShell module](/powershell/azure/install-Az-ps) for more information. Remember to replace `<your-storage-account-name>` and `<your-resource-group-name>` with the relevant names for your storage account.
```azurepowershell
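# NOTE: The original snippet is truncated in this excerpt. The following is a
# reconstruction sketch with placeholder names, not the article's verbatim code.
$resourceGroupName = "<your-resource-group-name>"
$storageAccountName = "<your-storage-account-name>"

# Requires an authenticated Azure PowerShell session (Connect-AzAccount).
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName

# Use the account's file endpoint host so the check also works for non-public clouds.
Test-NetConnection -ComputerName ([System.Uri]::new($storageAccount.Context.FileEndPoint).Host) -Port 445
```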
By setting up a VPN or ExpressRoute from on-premises to your Azure storage accou
Work with your IT department or ISP to open port 445 outbound to [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=41653).
#### Solution 4: Use REST API-based tools like Storage Explorer/PowerShell
-Azure Files also supports REST in addition to SMB. REST access works over port 443 (standard tcp). There are various tools that are written using REST API that enable rich UI experience. [Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows) is one of them. [Download and Install Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) and connect to your file share backed by Azure Files. You can also use [PowerShell](./storage-how-to-use-files-portal.md) which also user REST API.
+Azure Files also supports REST in addition to SMB. REST access works over port 443 (standard TCP). Various tools written using the REST API enable a rich UI experience. [Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows) is one of them. [Download and Install Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) and connect to your file share backed by Azure Files. You can also use [PowerShell](./storage-how-to-use-files-portal.md), which also uses the REST API.
### Cause 2: NTLMv1 is enabled
Browse to the storage account where the Azure file share is located, click **Acc
## Unable to modify or delete an Azure file share (or share snapshots) because of locks or leases
Azure Files provides two ways to prevent accidental modification or deletion of Azure file shares and share snapshots:
-- **Storage account resource locks**: All Azure resources, including the storage account, support [resource locks](../../azure-resource-manager/management/lock-resources.md). Locks might put on the storage account by an administrator, or by value-added services such as Azure Backup. Two variations of resource locks exist: modify, which prevents all modifications to the storage account and its resources, and delete, which only prevent deletes of the storage account and its resources. When modifying or deleting shares through the `Microsoft.Storage` resource provider, resource locks are enforced on Azure file shares and share snapshots. Most portal operations, Azure PowerShell cmdlets for Azure Files with `Rm` in the name (i.e. `Get-AzRmStorageShare`), and Azure CLI commands in the `share-rm` command group (i.e. `az storage share-rm list`) use the `Microsoft.Storage` resource provider. Some tools and utilities such as Storage Explorer, legacy Azure Files PowerShell management cmdlets without `Rm` in the name (i.e. `Get-AzStorageShare`), and legacy Azure Files CLI commands under the `share` command group (i.e. `az storage share list`) use legacy APIs in the FileREST API that bypass the `Microsoft.Storage` resource provider and resource locks. For more information on legacy management APIs exposed in the FileREST API, see [control plane in Azure Files](/rest/api/storageservices/file-service-rest-api#control-plane).
+- **Storage account resource locks**: All Azure resources, including the storage account, support [resource locks](../../azure-resource-manager/management/lock-resources.md). Locks might be put on the storage account by an administrator, or by value-added services such as Azure Backup. Two variations of resource locks exist: **modify**, which prevents all modifications to the storage account and its resources, and **delete**, which only prevents deletes of the storage account and its resources. When modifying or deleting shares through the `Microsoft.Storage` resource provider, resource locks are enforced on Azure file shares and share snapshots. Most portal operations, Azure PowerShell cmdlets for Azure Files with `Rm` in the name (for example, `Get-AzRmStorageShare`), and Azure CLI commands in the `share-rm` command group (for example, `az storage share-rm list`) use the `Microsoft.Storage` resource provider. Some tools and utilities such as Storage Explorer, legacy Azure Files PowerShell management cmdlets without `Rm` in the name (for example, `Get-AzStorageShare`), and legacy Azure Files CLI commands under the `share` command group (for example, `az storage share list`) use legacy APIs in the FileREST API that bypass the `Microsoft.Storage` resource provider and resource locks. For more information on legacy management APIs exposed in the FileREST API, see [control plane in Azure Files](/rest/api/storageservices/file-service-rest-api#control-plane).
- **Share/share snapshot leases**: Share leases are a kind of proprietary lock for Azure file shares and file share snapshots. Leases might be put on individual Azure file shares or file share snapshots by administrators by calling the API through a script, or by value-added services such as Azure Backup. When a lease is put on an Azure file share or file share snapshot, modifying or deleting the file share/share snapshot can be done with the *lease ID*. Admins can also release the lease before modification operations, which requires the lease ID, or break the lease, which does not require the lease ID. For more information on share leases, see [lease share](/rest/api/storageservices/lease-share).
-Since resource locks and leases might interfere with intended administrator operations on your storage account/Azure file shares, you might wish to remove any resource locks/leases that have been put on your resources manually or automatically by value-added services such as Azure Backup. The following script removes all resource locks and leases. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
+Because resource locks and leases might interfere with intended administrator operations on your storage account/Azure file shares, you might wish to remove any resource locks/leases that have been put on your resources manually or automatically by value-added services such as Azure Backup. The following script removes all resource locks and leases. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
To run the following script, you must [install the 3.10.1-preview version](https://www.powershellgallery.com/packages/Az.Storage/3.10.1-preview) of the Azure Storage PowerShell module.
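The script body isn't shown in this excerpt. As a hedged sketch of the resource-lock portion only (placeholder names, assuming the Az.Resources cmdlets), it might look like this:

```azurepowershell
# Sketch only: enumerate and remove every resource lock on the storage account.
# <resource-group> and <storage-account> are placeholders, as in the text above.
$resourceGroup = "<resource-group>"
$storageAccount = "<storage-account>"

Get-AzResourceLock -ResourceGroupName $resourceGroup -ResourceName $storageAccount `
    -ResourceType "Microsoft.Storage/storageAccounts" |
    Remove-AzResourceLock -Force
```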
LeaseStatus : Locked
```
### Solution 2
-To remove a lease from a file, you can release the lease or break the lease. To release the lease, you need the LeaseId of the lease, which you set when you create the lease. You do not need the LeaseId to break the lease.
+To remove a lease from a file, you can release the lease or break the lease. To release the lease, you need the LeaseId of the lease, which you set when you create the lease. You don't need the LeaseId to break the lease.
The following example shows how to break the lease for the file indicated in cause 2 (this example continues with the PowerShell variables from cause 2):
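The example itself isn't visible in this excerpt. As a hedged sketch, breaking a file lease can be done through the `ShareLeaseClient` type from the Azure.Storage.Files.Shares SDK; `$context`, `$fileShareName`, and `$filePath` below stand in for the variables defined in cause 2:

```azurepowershell
# Sketch only: break the lease on one file. Breaking, unlike releasing,
# doesn't require the LeaseId.
$file = Get-AzStorageFile -Context $context -ShareName $fileShareName -Path $filePath
$leaseClient = [Azure.Storage.Files.Shares.Specialized.ShareLeaseClient]::new($file.ShareFileClient)
$leaseClient.Break() | Out-Null
```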
If the hotfix is installed, the following output is displayed:
<a id="shareismissing"></a>
## No folder with a drive letter in "My Computer" or "This PC"
-If you map an Azure file share as an administrator by using net use, the share appears to be missing.
+If you map an Azure file share as an administrator by using the `net use` command, the share appears to be missing.
### Cause
-By default, Windows File Explorer doesn't run as an administrator. If you run net use from an administrative command prompt, you map the network drive as an administrator. Because mapped drives are user-centric, the user account that is logged in doesn't display the drives if they're mounted under a different user account.
+By default, Windows File Explorer doesn't run as an administrator. If you run `net use` from an administrative command prompt, you map the network drive as an administrator. Because mapped drives are user-centric, the user account that is logged in doesn't display the drives if they're mounted under a different user account.
### Solution
Mount the share from a non-administrator command line. Alternatively, you can follow [this TechNet topic](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee844140(v=ws.10)) to configure the **EnableLinkedConnections** registry value.
Mount the share from a non-administrator command line. Alternatively, you can fo
### Cause
-The net use command interprets a forward slash (/) as a command-line option. If your user account name starts with a forward slash, the drive mapping fails.
+The `net use` command interprets a forward slash (/) as a command-line option. If your user account name starts with a forward slash, the drive mapping fails.
### Solution
Drives are mounted per user. If your application or service is running under a d
Use one of the following solutions:
- Mount the drive from the same user account that contains the application. You can use a tool such as PsExec.
-- Pass the storage account name and key in the user name and password parameters of the net use command.
-- Use the cmdkey command to add the credentials into Credential Manager. Perform this from a command line under the service account context, either through an interactive login or by using `runas`.
+- Pass the storage account name and key in the user name and password parameters of the `net use` command, as shown in the sketch after this list.
+- Use the `cmdkey` command to add the credentials into Credential Manager. Perform this from a command line under the service account context, either through an interactive login or by using `runas`.
`cmdkey /add:<storage-account-name>.file.core.windows.net /user:AZURE\<storage-account-name> /pass:<storage-account-key>`
- Map the share directly without using a mapped drive letter. Some applications may not reconnect to the drive letter properly, so using the full UNC path might be more reliable.
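As a sketch of the `net use` option above (all values are placeholders; `net use` is a built-in Windows command and runs from PowerShell as well):

```powershell
# Sketch only: map drive Z: to the share, passing the storage account name
# as the user and the storage account key as the password.
net use Z: "\\<storage-account-name>.file.core.windows.net\<share-name>" /user:"AZURE\<storage-account-name>" "<storage-account-key>"
```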
After you follow these instructions, you might receive the following error messa
When a file is copied over the network, the file is decrypted on the source computer, transmitted in plaintext, and re-encrypted at the destination. However, you might see the following error when you're trying to copy an encrypted file: "You are copying the file to a destination that does not support encryption."
### Cause
-This problem can occur if you are using Encrypting File System (EFS). BitLocker-encrypted files can be copied to Azure Files. However, Azure Files does not support NTFS EFS.
+This problem can occur if you are using Encrypting File System (EFS). BitLocker-encrypted files can be copied to Azure Files. However, Azure Files doesn't support NTFS EFS.
### Workaround
To copy a file over the network, you must first decrypt it. Use one of the following methods:
Enable Azure AD DS on the Azure AD tenant of the subscription that your storage
## Unable to mount Azure Files with AD credentials
### Self diagnostics steps
-First, make sure that you've followed through all four steps to [enable Azure Files AD Authentication](./storage-files-identity-auth-active-directory-enable.md).
+First, make sure that you've followed through all four steps to [enable Azure Files AD DS Authentication](./storage-files-identity-auth-active-directory-enable.md).
Second, try [mounting Azure file share with storage account key](./storage-how-to-use-files-windows.md). If the share fails to mount, download [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) to help you validate the client running environment, detect incompatible client configurations that would cause access failure for Azure Files, give prescriptive guidance on self-fix, and collect diagnostics traces.
The cmdlet performs these checks below in sequence and provides guidance for fai
### Symptom
You may experience either of the symptoms described below when trying to configure Windows ACLs with File Explorer on a mounted file share:
-- After you click on Edit permission under the Security tab, the Permission wizard doesn't load.
+- After you click on **Edit permission** under the Security tab, the Permission wizard doesn't load.
- When you try to select a new user or group, the domain location doesn't display the right AD DS domain.
### Solution
-We recommend you to use [icacls tool](/windows-server/administration/windows-commands/icacls) to configure the directory/file level permissions as a workaround.
+We recommend that you [configure directory/file level permissions using icacls](storage-files-identity-ad-ds-configure-permissions.md#configure-windows-acls-with-icacls) as a workaround.
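For example, a hedged icacls sketch (the drive letter and UPN are placeholders) that grants a user modify rights that inherit to child files and folders:

```powershell
# Sketch only: (OI) object inherit, (CI) container inherit, (M) modify access.
icacls Z:\ /grant "user@contoso.com:(OI)(CI)(M)"
```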
## Errors when running Join-AzStorageAccountForAuth cmdlet
### Error: "The directory service was unable to allocate a relative identifier"
-This error may occur if a domain controller that holds the RID Master FSMO role is unavailable or was removed from the domain and restored from backup. Confirm that all Domain Controllers are running and available.
+This error might occur if a domain controller that holds the RID Master FSMO role is unavailable or was removed from the domain and restored from backup. Confirm that all Domain Controllers are running and available.
### Error: "Cannot bind positional parameters because no names were given"
-This error is most likely triggered by a syntax error in the `Join-AzStorageAccountforAuth` command. Check the command for misspellings or syntax errors and verify that the latest version of the AzFilesHybrid module (https://github.com/Azure-Samples/azure-files-samples/releases) is installed.
+This error is most likely triggered by a syntax error in the `Join-AzStorageAccountforAuth` command. Check the command for misspellings or syntax errors and verify that the latest version of the **AzFilesHybrid** module (https://github.com/Azure-Samples/azure-files-samples/releases) is installed.
## Azure Files on-premises AD DS Authentication support for AES-256 Kerberos encryption
You can remedy this issue easily by rotating the storage account keys. We recomm
To rotate the Kerberos keys of a storage account, see [Update the password of your storage account identity in AD DS](./storage-files-identity-ad-ds-update-password.md).
# [Portal](#tab/azure-portal)
-Navigate to the desired storage account in the Azure portal. In the table of contents for the desired storage account, select **Access keys** under the **Security + networking** heading. In the *Access key** pane, select **Rotate key** above the desired key.
+Navigate to the desired storage account in the Azure portal. In the table of contents for the desired storage account, select **Access keys** under the **Security + networking** heading. In the **Access key** pane, select **Rotate key** above the desired key.
![A screenshot of the access key pane](./media/storage-troubleshoot-windows-file-connection-problems/access-keys-1.png)
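If you prefer to script the rotation, a hedged Az PowerShell sketch (assuming you're rotating the `kerb1` Kerberos key; the resource group and account names are placeholders) is:

```azurepowershell
# Sketch only: regenerate the kerb1 Kerberos key for the storage account.
New-AzStorageAccountKey -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" -KeyName kerb1
```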
There is currently no workaround for this error.
#### Cause 2: an application already exists for the storage account
-You might also encounter this error if you have previously enabled Azure AD Kerberos authentication through manual limited preview steps. To delete the existing application, the customer or their IT admin can run the following script. Running this script will remove the old manually created application and allow the new experience to auto-create and manage the newly created application.
+You might also encounter this error if you previously enabled Azure AD Kerberos authentication through manual limited preview steps. To delete the existing application, the customer or their IT admin can run the following script. Running this script will remove the old manually created application and allow the new experience to auto-create and manage the newly created application.
> [!IMPORTANT]
> This script must be run in PowerShell 5 because the AzureAD module doesn't work in PowerShell 7. This PowerShell snippet uses Azure AD Graph.
if ($null -ne $application) {
If you've previously enabled Azure AD Kerberos authentication through manual limited preview steps, the password for the storage account's service principal is set to expire every six months. Once the password expires, users won't be able to get Kerberos tickets to the file share.
-To mitigate this, you have two options: either rotate the service principal password in Azure AD every six months, or disable Azure AD Kerberos, delete the existing application, and reconfigure Azure AD Kerberos using the Azure portal.
+To mitigate this, you have two options: either rotate the service principal password in Azure AD every six months, or disable Azure AD Kerberos, delete the existing application, and reconfigure Azure AD Kerberos.
#### Option 1: Update the service principal password using PowerShell
try {
#### Option 2: Disable Azure AD Kerberos, delete the existing application, and reconfigure
-If you don't want to rotate the service principal password every six months, you can follow these steps. Be sure to save domain properties (domainName and domainGUID) before disabling Azure AD Kerberos, as you'll need them during reconfiguration if you want to configure directory and file-level permissions through Windows File Explorer.
+If you don't want to rotate the service principal password every six months, you can follow these steps. Be sure to save domain properties (domainName and domainGUID) before disabling Azure AD Kerberos, as you'll need them during reconfiguration if you want to configure directory and file-level permissions using Windows File Explorer. If you didn't save domain properties, you can still [configure directory/file-level permissions using icacls](storage-files-identity-ad-ds-configure-permissions.md#configure-windows-acls-with-icacls) as a workaround.
1. [Disable Azure AD Kerberos](storage-files-identity-auth-azure-active-directory-enable.md#disable-azure-ad-authentication-on-your-storage-account)
1. [Delete the existing application](#cause-2-an-application-already-exists-for-the-storage-account)
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 10/04/2022 Last updated : 11/05/2022
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## October 2022
+
+Here's what changed in October 2022:
+
+### Background effects for macOS Teams on Azure Virtual Desktop now generally available
+
+Background effects are now generally available for the macOS version of Teams on Azure Virtual Desktop. This feature lets meeting participants select an available image in Teams to change their background or choose to blur their background. Background effects are only compatible with version 10.7.10 or later of the Azure Virtual Desktop macOS client. For more information, see [What's new in the macOS client](/windows-server/remote/remote-desktop-services/clients/mac-whatsnew?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json#updates-for-version-10710).
+
+### Host pool deployment support for Azure availability zones now generally available
+
+We've improved the host pool deployment process. You can now deploy host pools into up to three availability zones in supported Azure regions. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-support-for-azure/ba-p/3636262).
+
+### FSLogix version 2210 now in public preview
+
+FSLogix version 2210 is now in public preview. This new version includes new features, bug fixes, and other improvements. One of the new features is Disk Compaction, which lets you remove white space in a disk to shrink the disk size. Disk Compaction can save you significant amounts of storage capacity in the storage spaces where you keep your FSLogix disks. For more information, see [What's new in FSLogix](/fslogix/whats-new#fslogix-2210-29830844092public-preview) or [the FSLogix Disk Compaction blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-fslogix-disk-compaction/ba-p/3644807).
+
+### Universal Print for Azure Virtual Desktop now generally available
+
+The release of Windows 11 22H2 includes an improved printing experience that combines the benefits of Azure Virtual Desktop and Universal Print for Windows 11 multi-session users. Learn more at [Printing on Azure Virtual Desktop using Universal Print](/universal-print/fundamentals/universal-print-avd).
+
## September 2022
Here's what changed in September 2022: