Updates from: 09/29/2023 01:09:56
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 09/15/2023 Last updated : 09/28/2023 #Customer intent: As an identity administrator, I want to create a Microsoft Entra Domain Services managed domain so that I can synchronize identity information with my Microsoft Entra tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
With Domain Services successfully deployed, now configure the virtual network to
1. To update the DNS server settings for the virtual network, select the **Configure** button. The DNS settings are automatically configured for your virtual network. > [!TIP]
-> If you selected an existing virtual network in the previous steps, any VMs connected to the network only get the new DNS settings after a restart. You can restart VMs using the Microsoft Entra admin center, Azure PowerShell, or the Azure CLI.
+> If you selected an existing virtual network in the previous steps, any VMs connected to the network only get the new DNS settings after a restart. You can restart VMs using the Microsoft Entra admin center, Microsoft Graph PowerShell, or the Azure CLI.
<a name='enable-user-accounts-for-azure-ad-ds'></a>
To authenticate users on the managed domain, Domain Services needs password hash
The steps to generate and store these password hashes are different for cloud-only user accounts created in Microsoft Entra ID versus user accounts that are synchronized from your on-premises directory using Microsoft Entra Connect.
-A cloud-only user account is an account that was created in your Microsoft Entra directory using either the Microsoft Entra admin center or Azure AD PowerShell cmdlets. These user accounts aren't synchronized from an on-premises directory.
+A cloud-only user account is an account that was created in your Microsoft Entra directory by using either the Microsoft Entra admin center or PowerShell. These user accounts aren't synchronized from an on-premises directory.
> In this tutorial, let's work with a basic cloud-only user account. For more information on the additional steps required to use Microsoft Entra Connect, see [Synchronize password hashes for user accounts synced from your on-premises AD to your managed domain][on-prem-sync].
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
Previously updated : 09/21/2023 Last updated : 09/28/2023 zone_pivot_groups: app-provisioning-cross-tenant-synchronization
When two users in the source tenant have the same mail, and they both need to be
### Usage of Microsoft Entra B2B collaboration for cross-tenant access - B2B users are unable to manage certain Microsoft 365 services in remote tenants (such as Exchange Online), as there's no directory picker.-- Azure Virtual Desktop currently doesn't support B2B users.
+- To learn about Azure Virtual Desktop support for B2B users, see [Prerequisites for Azure Virtual Desktop](../../virtual-desktop/prerequisites.md?tabs=portal).
- B2B users with UserType Member aren't currently supported in Power BI. For more information, see [Distribute Power BI content to external guest users using Microsoft Entra B2B](/power-bi/guidance/whitepaper-azure-b2b-power-bi) - Converting a guest account into a Microsoft Entra member account or converting a Microsoft Entra member account into a guest isn't supported by Teams. For more information, see [Guest access in Microsoft Teams](/microsoftteams/guest-access). ::: zone-end
Attribute-mapping expressions can have a maximum of 10,000 characters.
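As an illustrative guard (not part of the provisioning service itself), you can validate expression length against this documented limit before saving a mapping:

```python
# Documented limit for attribute-mapping expressions (see above).
MAX_EXPRESSION_LENGTH = 10_000

def expression_within_limit(expression: str) -> bool:
    """Return True when the mapping expression fits the 10,000-character limit."""
    return len(expression) <= MAX_EXPRESSION_LENGTH

# Example: a short expression passes the check.
print(expression_within_limit('Join(" ", [givenName], [surname])'))  # True
```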
#### Unsupported scoping filters
-Directory extensions and the **appRoleAssignments**, **userType**, and **accountExpires** attributes aren't supported as scoping filters.
+The **appRoleAssignments**, **userType**, and **accountExpires** attributes aren't supported as scoping filters.
#### Multivalue directory extensions
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
This article explains how the integration works and how you can customize the provisioning behavior for different HR scenarios. ## Establishing connectivity
-Microsoft Entra provisioning service uses basic authentication to connect to Employee Central OData API endpoints. When setting up the SuccessFactors provisioning app, use the *Tenant URL* parameter in the *Admin Credentials* section to configure the [API data center URL](https://apps.support.sap.com/sap/support/knowledge/en/2215682).
+Microsoft Entra provisioning service uses basic authentication to connect to Employee Central OData API endpoints. When setting up the SuccessFactors provisioning app, use the *Tenant URL* parameter in the *Admin Credentials* section to configure the [API data center URL](https://help.sap.com/docs/SAP_SUCCESSFACTORS_PLATFORM/d599f15995d348a1b45ba5603e2aba9b/af2b8d5437494b12be88fe374eba75b6.html).
To further secure the connectivity between Microsoft Entra provisioning service and SuccessFactors, add the Microsoft Entra IP ranges in the SuccessFactors IP allowlist:
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Copy the Response into a text file. It looks like the JSON text shown, with valu
Here's the JSON block to add to the mapping. ```json
- {
- "key": "SkipOutOfScopeDeletions",
- "value": "True"
- }
+{
+ "key": "SkipOutOfScopeDeletions",
+ "value": "True"
+}
``` ## Step 4: Update the secrets endpoint with the SkipOutOfScopeDeletions flag
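As a sketch of the merge step (with a hypothetical secrets payload; the real values come from the Response you copied earlier), you can append the flag to the existing key list before running the `PATCH`:

```python
import json

# Hypothetical secrets payload copied from the GET response in the earlier step.
secrets = {"value": [{"key": "SyncAll", "value": "false"}]}

# Append the SkipOutOfScopeDeletions flag shown above, if it isn't already present.
if not any(pair["key"] == "SkipOutOfScopeDeletions" for pair in secrets["value"]):
    secrets["value"].append({"key": "SkipOutOfScopeDeletions", "value": "True"})

body = json.dumps(secrets)  # use this as the PATCH request body
print(body)
```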
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 09/27/2023 Last updated : 09/28/2023
Here are a few sample JSONs you can use to get started!
- Include all users
- If you want to include ALL users in your tenant, [download this JSON](https://download.microsoft.com/download/1/4/E/14E6151E-C40A-42FB-9F66-D8D374D13B40/All%20Users%20Enabled.json) and paste it in Graph Explorer and run `PATCH` on the endpoint.
+ If you want to include ALL users in your tenant, update the following JSON example with the relevant GUIDs of your users and groups. Then paste it in Graph Explorer and run `PATCH` on the endpoint.
```json {
Here are a few sample JSONs you can use to get started!
- Include specific users or groups of users
- If you want to include certain users or groups in your tenant, [download this JSON](https://download.microsoft.com/download/1/4/E/14E6151E-C40A-42FB-9F66-D8D374D13B40/Multiple%20Includes.json) and update it with the relevant GUIDs of your users and groups. Then paste the JSON in Graph Explorer and run `PATCH` on the endpoint.
+ If you want to include certain users or groups in your tenant, update the following JSON example with the relevant GUIDs of your users and groups. Then paste the JSON in Graph Explorer and run `PATCH` on the endpoint.
```json {
Here are a few sample JSONs you can use to get started!
] } }
+ }
``` -- Include and exclude specific users/groups of users
+- Include and exclude specific users or groups
- If you want to include AND exclude certain users/groups of users in your tenant, [download this JSON](https://download.microsoft.com/download/1/4/E/14E6151E-C40A-42FB-9F66-D8D374D13B40/Multiple%20Includes%20and%20Excludes.json) and paste it in Graph Explorer and run `PATCH` on the endpoint. Enter the correct GUIDs for your users and groups.
+ If you want to include AND exclude certain users or groups in your tenant, update the following JSON example with the relevant GUIDs of your users and groups. Then paste it in Graph Explorer and run `PATCH` on the endpoint.
```json {
No. The snooze duration for the prompt is a tenant-wide setting and applies to a
The feature aims to empower admins to get users set up with MFA using the Authenticator app and not passwordless phone sign-in.
-**Will a user who has a 3rd party authenticator app setup see the nudge?**
+**Will a user who signs in with a third-party authenticator app see the nudge?**
-If this user doesn't have the Authenticator app set up for push notifications and is enabled for it by policy, yes, the user will see the nudge.
+Yes. If a user is enabled for the registration campaign and doesn't have Microsoft Authenticator set up for push notifications, the user is nudged to set up Authenticator.
-**Will a user who has the Authenticator app setup only for TOTP codes see the nudge?** 
+**Will a user who has Authenticator set up only for TOTP codes see the nudge?**
-Yes. If the Authenticator app is not set up for push notifications and the user is enabled for it by policy, yes, the user will see the nudge.
+Yes. If a user is enabled for the registration campaign and the Authenticator app isn't set up for push notifications, the user is nudged to set up push notifications in Authenticator.
**If a user just went through MFA registration, are they nudged in the same sign-in session?**
Yes, if they're scoped for the nudge by the policy.
**What if the user closes the browser?**
-It's the same as snoozing. If setup is required for a user after they snoozed three times, the user will get prompted the next time they sign in.
+It's the same as snoozing. If setup is required for a user after they snoozed three times, the user is prompted the next time they sign in.
-**Why donΓÇÖt some users see a nudge when there is a Conditional Access policy for "Register security information"?**
+**Why don't some users see a nudge when there is a Conditional Access policy for "Register security information"?**
A nudge won't appear if a user is in scope for a Conditional Access policy that blocks access to the **Register security information** page.
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
Previously updated : 01/29/2023 Last updated : 09/28/2023
If you can't move your user authentication, see the step-by-step guidance for [M
- Upgrade to AD FS for Windows Server 2019, Farm behavior level (FBL) 4. This upgrade enables you to select authentication provider based on group membership for a more seamless user transition. While it's possible to migrate while on AD FS for Windows Server 2016 FBL 3, it isn't as seamless for users. During the migration, users are prompted to select an authentication provider (MFA Server or Microsoft Entra multifactor authentication) until the migration is complete. - Permissions - Enterprise administrator role in Active Directory to configure AD FS farm for Microsoft Entra multifactor authentication
- - Global administrator role in Microsoft Entra ID to perform configuration of Microsoft Entra ID using Azure AD PowerShell
+ - Global administrator role in Microsoft Entra ID to configure Microsoft Entra ID by using PowerShell
## Considerations for all migration paths
active-directory How To Customize Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md
The following image displays the neutral default branding of the customer tenant
Before you customize any settings, the neutral default branding will appear in your sign-in and sign-up pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a [custom CSS](/azure/active-directory/fundamentals/reference-company-branding-css-template).
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. Browse to **Company Branding** > **Default sign-in** > **Edit**.
Your customer tenant name replaces the Microsoft banner logo in the neutral defa
:::image type="content" source="media/how-to-customize-branding-customers/tenant-name.png" alt-text="Screenshot of the tenant name." lightbox="media/how-to-customize-branding-customers/tenant-name.png":::
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as at least a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. In the search bar, type and select **Properties**. 1. Edit the **Name** field.
Your customer tenant name replaces the Microsoft banner logo in the neutral defa
When no longer needed, you can remove the sign-in customization from your customer tenant via the Azure portal.
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. Browse to **Company branding** > **Default sign-in experience** > **Edit**. 1. Remove the elements you no longer need.
active-directory How To Customize Languages Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md
You can create a personalized sign-in experience for users who sign in using a s
## Add browser language under Company branding
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. Browse to **Company branding** > **Browser language customizations** > **Add browser language**.
The following languages are supported in the customer tenant:
Language customization in the customer tenant allows your user flow to accommodate different languages to suit your customer's needs. You can use languages to modify the strings displayed to your customers as part of the attribute collection process during sign-up.
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
2. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 3. Browse to **Identity** > **External Identities** > **User flows**. 5. Select the user flow that you want to enable for translations.
active-directory How To Register Ciam App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-register-ciam-app.md
External ID for customers supports authentication for Single-page apps (SPAs).
The following steps show you how to register your SPA in the Microsoft Entra admin center:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](/azure/active-directory/roles/permissions-reference#application-developer).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
External ID for customers supports authentication for web apps.
The following steps show you how to register your web app in the Microsoft Entra admin center:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](/azure/active-directory/roles/permissions-reference#application-developer).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
If your web app needs to call an API, you must grant your web app API permission
The following steps show you how to register your app in the Microsoft Entra admin center:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](/azure/active-directory/roles/permissions-reference#application-developer).
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant.
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 05/18/2023 Last updated : 09/27/2023
active-directory How To Rename Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-rename-azure-ad.md
Previously updated : 09/15/2023 Last updated : 09/27/2023
Audit your experiences to find references to Azure AD and its icons.
**Scan your content** to identify references to Azure AD and its synonyms. Compile a detailed list of all instances. -- Search for the following terms: "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD"
+- Search for the following terms: `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`
- Search for graphics with the Azure AD icon (![Azure AD product icon](./media/new-name/azure-ad-icon-1.png) ![Alternative Azure AD product icon](./media/new-name/azure-ad-icon-2.png)) to replace with the Microsoft Entra ID icon (![Microsoft Entra ID product icon](./media/new-name/microsoft-entra-id-icon.png)) You can download the Microsoft Entra ID icon here: [Microsoft Entra architecture icons](../architecture/architecture-icons.md)
You can download the Microsoft Entra ID icon here: [Microsoft Entra architecture
- Don't make breaking changes. - Review the [What names aren't changing?](new-name.md#what-names-arent-changing) section in the naming guidance and note which Azure AD terminology isn't changing.-- Don't change instances of 'Active Directory.' Only 'Azure Active Directory' is being renamed, not 'Active Directory,' which is the shortened name of a different product, Windows Server Active Directory.
+- Don't change instances of `Active Directory`. Only `Azure Active Directory` is being renamed, not `Active Directory`, which is the shortened name of a different product, Windows Server Active Directory.
**Evaluate and prioritize based on future usage**. Consider which content needs to be updated based on whether it's user-facing or has broad visibility within your organization, audience, or customer base. You may decide that some code or content doesn't need to be updated if it has limited exposure to your end-users.
Update your organization's content and experiences using the relevant tools.
### How to use "find and replace" for text-based content 1. Almost all editing tools offer "search and replace" or "find and replace" functionality, either natively or using plug-ins. Use your preferred app.
-1. Use "find and replace" to find the strings "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD."
+1. Use "find and replace" to find the strings `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`.
1. Don't replace all instances with Microsoft Entra ID. 1. Review whether each instance refers to the product or a feature of the product.
- - Azure AD as the product name alone should be replaced by Microsoft Entra ID.
- - Azure AD features or functionality become Microsoft Entra features or functionality. For example, Azure AD Conditional Access becomes Microsoft Entra Conditional Access.
+ - Azure AD as the product name alone should be replaced with Microsoft Entra ID.
+ - Azure AD features or functionality become Microsoft Entra features or functionality. For example, "Azure AD Conditional Access" becomes "Microsoft Entra Conditional Access."
### Automate bulk editing using custom code
-Use the following criteria to determine what change(s) you need to make to instances of "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD."
+Use the following criteria to determine what change(s) you need to make to instances of `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`.
1. If the text string is found in the naming dictionary of previous terms, change it to the new term. 1. If a punctuation mark follows `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`, replace with `Microsoft Entra ID` because that's the product name.
-1. If "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD" is followed by "for, Premium, Plan, P1, or P2", replace with 'Microsoft Entra ID' because it refers to a SKU name or Service Plan.
-1. If an article (a, an, the) or possessive (your, your organization's) precedes ("Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD"), then replace with 'Microsoft Entra' because it's a feature name. For example:
- 1. 'an Azure AD tenant' becomes 'a Microsoft Entra tenant'
- 1. 'your organization's Azure AD tenant' becomes 'your Microsoft Entra tenant'
+1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` is followed by `for`, `Premium`, `Plan`, `P1`, or `P2`, replace with `Microsoft Entra ID` because it refers to a SKU name or Service Plan.
+1. If an article (`a`, `an`, `the`) or possessive (`your`, `your organization's`) precedes (`Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD`), then replace with `Microsoft Entra` because it's a feature name. For example:
+ 1. "an Azure AD tenant" becomes "a Microsoft Entra tenant"
+ 1. "your organization's Azure AD tenant" becomes "your Microsoft Entra tenant"
-1. If "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD" is followed by an adjective or noun not listed above, then replace with 'Microsoft Entra' because it's a feature name. For example,'Azure AD Conditional Access' becomes 'Microsoft Entra Conditional Access,' while 'Azure AD tenant' becomes 'Microsoft Entra tenant.'
-1. Otherwise, replace "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD" with 'Microsoft Entra ID'
+1. If `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` is followed by an adjective or noun not in the previous steps, then replace with `Microsoft Entra` because it's a feature name. For example, "Azure AD Conditional Access" becomes "Microsoft Entra Conditional Access," while "Azure AD tenant" becomes "Microsoft Entra tenant."
+1. Otherwise, replace `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with `Microsoft Entra ID`.
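The article rule can be sketched with a regular expression (illustrative Python, separate from the sample script in this article):

```python
import re

# Old names, longest alternative first so the alternation prefers the fullest match.
OLD_NAMES = r"Azure Active Directory \(Azure AD\)|Azure Active Directory|Azure AD|AAD"

# Rule: when the article "an" precedes the old name, the replacement is the
# feature name "Microsoft Entra", and "an" becomes "a" for the new initial sound.
article_pattern = re.compile(rf"\ban ({OLD_NAMES})\b")

text = "Sign in to an Azure AD tenant."
result = article_pattern.sub("a Microsoft Entra", text)
print(result)  # Sign in to a Microsoft Entra tenant.
```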
See the section [Glossary of updated terminology](new-name.md#glossary-of-updated-terminology) to further refine your custom logic. ### Update graphics and icons 1. Replace the Azure AD icon with the Microsoft Entra ID icon.
-1. Replace titles or text containing "Azure Active Directory (Azure AD), Azure Active Directory, Azure AD, AAD" with 'Microsoft Entra ID.'
+1. Replace titles or text containing `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with `Microsoft Entra ID`.
+
+## Sample PowerShell script
+
+You can use the following PowerShell script as a baseline to rename Azure AD references in your documentation or content. This code sample:
+
+- Scans .resx files within a specified folder and all nested folders.
+- Edits files by replacing any references to `Azure Active Directory (Azure AD)`, `Azure Active Directory`, `Azure AD`, `AAD` with the correct terminology according to [New name for Azure AD](new-name.md).
+
+Edit the baseline script according to your needs and the scope of files you need to update. You may need to account for edge cases and modify the script according to how you've defined the messages in your source files. The script is not fully automated. If you use the script as-is, you must review the outputs and may need to make additional adjustments to follow the guidance in [New name for Azure AD](new-name.md).
+
+```powershell
+# Define the old and new terminology
+$terminology = @(
+ @{ Key = 'Azure AD External Identities'; Value = 'Microsoft Entra External ID' },
+ @{ Key = 'Azure AD Identity Governance'; Value = 'Microsoft Entra ID Governance' },
+ @{ Key = 'Azure AD Verifiable Credentials'; Value = 'Microsoft Entra Verified ID' },
+ @{ Key = 'Azure AD Workload Identities'; Value = 'Microsoft Entra Workload ID' },
+ @{ Key = 'Azure AD Domain Services'; Value = 'Microsoft Entra Domain Services' },
+ @{ Key = 'Azure AD access token authentication'; Value = 'Microsoft Entra access token authentication' },
+ @{ Key = 'Azure AD admin center'; Value = 'Microsoft Entra admin center' },
+ @{ Key = 'Azure AD portal'; Value = 'Microsoft Entra portal' },
+ @{ Key = 'Azure AD application proxy'; Value = 'Microsoft Entra application proxy' },
+ @{ Key = 'Azure AD authentication'; Value = 'Microsoft Entra authentication' },
+ @{ Key = 'Azure AD Conditional Access'; Value = 'Microsoft Entra Conditional Access' },
+ @{ Key = 'Azure AD cloud-only identities'; Value = 'Microsoft Entra cloud-only identities' },
+ @{ Key = 'Azure AD Connect'; Value = 'Microsoft Entra Connect' },
+ @{ Key = 'AD Connect'; Value = 'Microsoft Entra Connect' },
+ @{ Key = 'AD Connect Sync'; Value = 'Microsoft Entra Connect Sync' },
+ @{ Key = 'Azure AD Connect Sync'; Value = 'Microsoft Entra Connect Sync' },
+ @{ Key = 'Azure AD domain'; Value = 'Microsoft Entra domain' },
+ @{ Key = 'Azure AD Enterprise Applications'; Value = 'Microsoft Entra enterprise applications' },
+ @{ Key = 'Azure AD federation services'; Value = 'Active Directory Federation Services' },
+ @{ Key = 'Azure AD hybrid identities'; Value = 'Microsoft Entra hybrid identities' },
+ @{ Key = 'Azure AD identities'; Value = 'Microsoft Entra identities' },
+ @{ Key = 'Azure AD role'; Value = 'Microsoft Entra role' },
+ @{ Key = 'Azure AD'; Value = 'Microsoft Entra ID' },
+ @{ Key = 'AAD'; Value = 'ME-ID' },
+ @{ Key = 'Azure AD auth'; Value = 'Microsoft Entra auth' },
+ @{ Key = 'Azure AD-only auth'; Value = 'Microsoft Entra-only auth' },
+ @{ Key = 'Azure AD object'; Value = 'Microsoft Entra object' },
+ @{ Key = 'Azure AD identity'; Value = 'Microsoft Entra identity' },
+ @{ Key = 'Azure AD schema'; Value = 'Microsoft Entra schema' },
+ @{ Key = 'Azure AD seamless single sign-on'; Value = 'Microsoft Entra seamless single sign-on' },
+ @{ Key = 'Azure AD self-service password reset'; Value = 'Microsoft Entra self-service password reset' },
+ @{ Key = 'Azure AD SSPR'; Value = 'Microsoft Entra SSPR' },
+ @{ Key = 'Azure AD group'; Value = 'Microsoft Entra group' },
+ @{ Key = 'Azure AD login'; Value = 'Microsoft Entra login' },
+ @{ Key = 'Azure AD managed'; Value = 'Microsoft Entra managed' },
+ @{ Key = 'Azure AD entitlement'; Value = 'Microsoft Entra entitlement' },
+ @{ Key = 'Azure AD access review'; Value = 'Microsoft Entra access review' },
+ @{ Key = 'Azure AD Identity Protection'; Value = 'Microsoft Entra ID Protection' },
+ @{ Key = 'Azure AD pass-through'; Value = 'Microsoft Entra pass-through' },
+ @{ Key = 'Azure AD password'; Value = 'Microsoft Entra password' },
+ @{ Key = 'Azure AD Privileged Identity Management'; Value = 'Microsoft Entra Privileged Identity Management' },
+ @{ Key = 'Azure AD registered'; Value = 'Microsoft Entra registered' },
+ @{ Key = 'Azure AD reporting and monitoring'; Value = 'Microsoft Entra reporting and monitoring' },
+ @{ Key = 'Azure AD enterprise app'; Value = 'Microsoft Entra enterprise app' },
+ @{ Key = 'Cloud Knox'; Value = 'Microsoft Entra Permissions Management' },
+ @{ Key = 'Azure AD Premium P1'; Value = 'Microsoft Entra ID P1' },
+ @{ Key = 'AD Premium P1'; Value = 'Microsoft Entra ID P1' },
+ @{ Key = 'Azure AD Premium P2'; Value = 'Microsoft Entra ID P2' },
+ @{ Key = 'AD Premium P2'; Value = 'Microsoft Entra ID P2' },
+ @{ Key = 'Azure AD F2'; Value = 'Microsoft Entra ID F2' },
+ @{ Key = 'Azure AD Free'; Value = 'Microsoft Entra ID Free' },
+ @{ Key = 'Azure AD for education'; Value = 'Microsoft Entra ID for education' },
+ @{ Key = 'Azure AD work or school account'; Value = 'Microsoft Entra work or school account' },
+ @{ Key = 'federated with Azure AD'; Value = 'federated with Microsoft Entra' },
+ @{ Key = 'Hybrid Azure AD Join'; Value = 'Microsoft Entra hybrid join' },
+ @{ Key = 'Azure Active Directory External Identities'; Value = 'Microsoft Entra External ID' },
+ @{ Key = 'Azure Active Directory Identity Governance'; Value = 'Microsoft Entra ID Governance' },
+ @{ Key = 'Azure Active Directory Verifiable Credentials'; Value = 'Microsoft Entra Verified ID' },
+ @{ Key = 'Azure Active Directory Workload Identities'; Value = 'Microsoft Entra Workload ID' },
+ @{ Key = 'Azure Active Directory Domain Services'; Value = 'Microsoft Entra Domain Services' },
+ @{ Key = 'Azure Active Directory access token authentication'; Value = 'Microsoft Entra access token authentication' },
+ @{ Key = 'Azure Active Directory admin center'; Value = 'Microsoft Entra admin center' },
+ @{ Key = 'Azure Active Directory portal'; Value = 'Microsoft Entra portal' },
+ @{ Key = 'Azure Active Directory application proxy'; Value = 'Microsoft Entra application proxy' },
+ @{ Key = 'Azure Active Directory authentication'; Value = 'Microsoft Entra authentication' },
+ @{ Key = 'Azure Active Directory Conditional Access'; Value = 'Microsoft Entra Conditional Access' },
+ @{ Key = 'Azure Active Directory cloud-only identities'; Value = 'Microsoft Entra cloud-only identities' },
+ @{ Key = 'Azure Active Directory Connect'; Value = 'Microsoft Entra Connect' },
+ @{ Key = 'Azure Active Directory Connect Sync'; Value = 'Microsoft Entra Connect Sync' },
+ @{ Key = 'Azure Active Directory domain'; Value = 'Microsoft Entra domain' },
+ @{ Key = 'Azure Active Directory domain'; Value = 'Microsoft Entra domain' },
+ @{ Key = 'Azure Active Directory Domain Services'; Value = 'Microsoft Entra Domain Services' },
+ @{ Key = 'Azure Active Directory Enterprise Applications'; Value = 'Microsoft Entra enterprise applications' },
+ @{ Key = 'Azure Active Directory federation services'; Value = 'Active Directory Federation Services' },
+ @{ Key = 'Azure Active Directory hybrid identities'; Value = 'Microsoft Entra hybrid identities' },
+ @{ Key = 'Azure Active Directory identities'; Value = 'Microsoft Entra identities' },
+ @{ Key = 'Azure Active Directory role'; Value = 'Microsoft Entra role' },
+ @{ Key = 'Azure Active Directory'; Value = 'Microsoft Entra ID' },
+ @{ Key = 'Azure Active Directory auth'; Value = 'Microsoft Entra auth' },
+ @{ Key = 'Azure Active Directory-only auth'; Value = 'Microsoft Entra-only auth' },
+ @{ Key = 'Azure Active Directory object'; Value = 'Microsoft Entra object' },
+ @{ Key = 'Azure Active Directory identity'; Value = 'Microsoft Entra identity' },
+ @{ Key = 'Azure Active Directory schema'; Value = 'Microsoft Entra schema' },
+ @{ Key = 'Azure Active Directory seamless single sign-on'; Value = 'Microsoft Entra seamless single sign-on' },
+ @{ Key = 'Azure Active Directory self-service password reset'; Value = 'Microsoft Entra self-service password reset' },
+ @{ Key = 'Azure Active Directory SSPR'; Value = 'Microsoft Entra SSPR' },
+ @{ Key = 'Azure Active Directory SSPR'; Value = 'Microsoft Entra SSPR' },
+ @{ Key = 'Azure Active Directory domain'; Value = 'Microsoft Entra domain' },
+ @{ Key = 'Azure Active Directory group'; Value = 'Microsoft Entra group' },
+ @{ Key = 'Azure Active Directory login'; Value = 'Microsoft Entra login' },
+ @{ Key = 'Azure Active Directory managed'; Value = 'Microsoft Entra managed' },
+ @{ Key = 'Azure Active Directory entitlement'; Value = 'Microsoft Entra entitlement' },
+ @{ Key = 'Azure Active Directory access review'; Value = 'Microsoft Entra access review' },
+ @{ Key = 'Azure Active Directory Identity Protection'; Value = 'Microsoft Entra ID Protection' },
+ @{ Key = 'Azure Active Directory pass-through'; Value = 'Microsoft Entra pass-through' },
+ @{ Key = 'Azure Active Directory password'; Value = 'Microsoft Entra password' },
+ @{ Key = 'Azure Active Directory Privileged Identity Management'; Value = 'Microsoft Entra Privileged Identity Management' },
+ @{ Key = 'Azure Active Directory registered'; Value = 'Microsoft Entra registered' },
+ @{ Key = 'Azure Active Directory reporting and monitoring'; Value = 'Microsoft Entra reporting and monitoring' },
+ @{ Key = 'Azure Active Directory enterprise app'; Value = 'Microsoft Entra enterprise app' },
+ @{ Key = 'Azure Active Directory cloud-only identities'; Value = 'Microsoft Entra cloud-only identities' },
+ @{ Key = 'Azure Active Directory Premium P1'; Value = 'Microsoft Entra ID P1' },
+ @{ Key = 'Azure Active Directory Premium P2'; Value = 'Microsoft Entra ID P2' },
+ @{ Key = 'Azure Active Directory F2'; Value = 'Microsoft Entra ID F2' },
+ @{ Key = 'Azure Active Directory Free'; Value = 'Microsoft Entra ID Free' },
+ @{ Key = 'Azure Active Directory for education'; Value = 'Microsoft Entra ID for education' },
+ @{ Key = 'Azure Active Directory work or school account'; Value = 'Microsoft Entra work or school account' },
+ @{ Key = 'federated with Azure Active Directory'; Value = 'federated with Microsoft Entra' },
+ @{ Key = 'Hybrid Azure Active Directory Join'; Value = 'Microsoft Entra hybrid join' },
+ @{ Key = 'AAD External Identities'; Value = 'Microsoft Entra External ID' },
+ @{ Key = 'AAD Identity Governance'; Value = 'Microsoft Entra ID Governance' },
+ @{ Key = 'AAD Verifiable Credentials'; Value = 'Microsoft Entra Verified ID' },
+ @{ Key = 'AAD Workload Identities'; Value = 'Microsoft Entra Workload ID' },
+ @{ Key = 'AAD Domain Services'; Value = 'Microsoft Entra Domain Services' },
+ @{ Key = 'AAD access token authentication'; Value = 'Microsoft Entra access token authentication' },
+ @{ Key = 'AAD admin center'; Value = 'Microsoft Entra admin center' },
+ @{ Key = 'AAD portal'; Value = 'Microsoft Entra portal' },
+ @{ Key = 'AAD application proxy'; Value = 'Microsoft Entra application proxy' },
+ @{ Key = 'AAD authentication'; Value = 'Microsoft Entra authentication' },
+ @{ Key = 'AAD Conditional Access'; Value = 'Microsoft Entra Conditional Access' },
+ @{ Key = 'AAD cloud-only identities'; Value = 'Microsoft Entra cloud-only identities' },
+ @{ Key = 'AAD Connect'; Value = 'Microsoft Entra Connect' },
+ @{ Key = 'AAD Connect Sync'; Value = 'Microsoft Entra Connect Sync' },
+ @{ Key = 'AAD domain'; Value = 'Microsoft Entra domain' },
+ @{ Key = 'AAD domain'; Value = 'Microsoft Entra domain' },
+ @{ Key = 'AAD Domain Services'; Value = 'Microsoft Entra Domain Services' },
+ @{ Key = 'AAD Enterprise Applications'; Value = 'Microsoft Entra enterprise applications' },
+ @{ Key = 'AAD federation services'; Value = 'Active Directory Federation Services' },
+ @{ Key = 'AAD hybrid identities'; Value = 'Microsoft Entra hybrid identities' },
+ @{ Key = 'AAD identities'; Value = 'Microsoft Entra identities' },
+ @{ Key = 'AAD role'; Value = 'Microsoft Entra role' },
+ @{ Key = 'AAD'; Value = 'Microsoft Entra ID' },
+ @{ Key = 'AAD auth'; Value = 'Microsoft Entra auth' },
+ @{ Key = 'AAD-only auth'; Value = 'Microsoft Entra-only auth' },
+ @{ Key = 'AAD object'; Value = 'Microsoft Entra object' },
+ @{ Key = 'AAD identity'; Value = 'Microsoft Entra identity' },
+ @{ Key = 'AAD schema'; Value = 'Microsoft Entra schema' },
+ @{ Key = 'AAD seamless single sign-on'; Value = 'Microsoft Entra seamless single sign-on' },
+ @{ Key = 'AAD self-service password reset'; Value = 'Microsoft Entra self-service password reset' },
+ @{ Key = 'AAD SSPR'; Value = 'Microsoft Entra SSPR' },
+ @{ Key = 'AAD SSPR'; Value = 'Microsoft Entra SSPR' },
+ @{ Key = 'AAD domain'; Value = 'Microsoft Entra domain' },
+ @{ Key = 'AAD group'; Value = 'Microsoft Entra group' },
+ @{ Key = 'AAD login'; Value = 'Microsoft Entra login' },
+ @{ Key = 'AAD managed'; Value = 'Microsoft Entra managed' },
+ @{ Key = 'AAD entitlement'; Value = 'Microsoft Entra entitlement' },
+ @{ Key = 'AAD access review'; Value = 'Microsoft Entra access review' },
+ @{ Key = 'AAD Identity Protection'; Value = 'Microsoft Entra ID Protection' },
+ @{ Key = 'AAD pass-through'; Value = 'Microsoft Entra pass-through' },
+ @{ Key = 'AAD password'; Value = 'Microsoft Entra password' },
+ @{ Key = 'AAD Privileged Identity Management'; Value = 'Microsoft Entra Privileged Identity Management' },
+ @{ Key = 'AAD registered'; Value = 'Microsoft Entra registered' },
+ @{ Key = 'AAD reporting and monitoring'; Value = 'Microsoft Entra reporting and monitoring' },
+ @{ Key = 'AAD enterprise app'; Value = 'Microsoft Entra enterprise app' },
+ @{ Key = 'AAD cloud-only identities'; Value = 'Microsoft Entra cloud-only identities' },
+ @{ Key = 'AAD Premium P1'; Value = 'Microsoft Entra ID P1' },
+ @{ Key = 'AAD Premium P2'; Value = 'Microsoft Entra ID P2' },
+ @{ Key = 'AAD F2'; Value = 'Microsoft Entra ID F2' },
+ @{ Key = 'AAD Free'; Value = 'Microsoft Entra ID Free' },
+ @{ Key = 'AAD for education'; Value = 'Microsoft Entra ID for education' },
+ @{ Key = 'AAD work or school account'; Value = 'Microsoft Entra work or school account' },
+ @{ Key = 'federated with AAD'; Value = 'federated with Microsoft Entra' },
+ @{ Key = 'Hybrid AAD Join'; Value = 'Microsoft Entra hybrid join' }
+)
+
+$postTransforms = @(
+ @{ Key = 'Microsoft Entra ID B2C'; Value = 'Azure AD B2C' },
+ @{ Key = 'Microsoft Entra ID B2B'; Value = 'Microsoft Entra B2B' },
+ @{ Key = 'ME-ID B2C'; Value = 'AAD B2C' },
+ @{ Key = 'ME-ID B2B'; Value = 'Microsoft Entra B2B' },
+ @{ Key = 'ME-IDSTS'; Value = 'AADSTS' },
+ @{ Key = 'ME-ID Connect'; Value = 'Microsoft Entra Connect' }
+ @{ Key = 'Microsoft Entra ID tenant'; Value = 'Microsoft Entra tenant' }
+ @{ Key = 'Microsoft Entra ID organization'; Value = 'Microsoft Entra tenant' }
+ @{ Key = 'Microsoft Entra ID account'; Value = 'Microsoft Entra account' }
+ @{ Key = 'Microsoft Entra ID resources'; Value = 'Microsoft Entra resources' }
+ @{ Key = 'Microsoft Entra ID admin'; Value = 'Microsoft Entra admin' }
+ @{ Key = ' an Microsoft Entra'; Value = ' a Microsoft Entra' }
+ @{ Key = '>An Microsoft Entra'; Value = '>A Microsoft Entra' }
+ @{ Key = ' an ME-ID'; Value = ' a ME-ID' }
+ @{ Key = '>An ME-ID'; Value = '>A ME-ID' }
+ @{ Key = 'Microsoft Entra ID administration portal'; Value = 'Microsoft Entra administration portal' }
+ @{ Key = 'Microsoft Entra IDvanced Threat'; Value = 'Azure Advanced Threat' }
+ @{ Key = 'Entra ID hybrid join'; Value = 'Entra hybrid join' }
+ @{ Key = 'Microsoft Entra ID join'; Value = 'Microsoft Entra join' }
+ @{ Key = 'ME-ID join'; Value = 'Microsoft Entra join' }
+ @{ Key = 'Microsoft Entra ID service principal'; Value = 'Microsoft Entra service principal' }
+ @{ Key = 'DownloMicrosoft Entra Connector'; Value = 'Download connector' }
+ @{ Key = 'Microsoft Microsoft'; Value = 'Microsoft' }
+)
+
+# Sort the replacements by the length of the keys in descending order
+$terminology = $terminology.GetEnumerator() | Sort-Object -Property { $_.Key.Length } -Descending
+$postTransforms = $postTransforms.GetEnumerator() | Sort-Object -Property { $_.Key.Length } -Descending
+
+# Get all resx and resjson files in the current directory and its subdirectories, ignoring .gitignored files.
+Write-Host "Getting all resx and resjson files in the current directory and its subdirectories, ignoring .gitignored files."
+$gitIgnoreFiles = Get-ChildItem -Path . -Filter .gitignore -Recurse
+$targetFiles = Get-ChildItem -Path . -Include *.resx, *.resjson -Recurse
+
+$filteredFiles = @()
+foreach ($file in $targetFiles) {
+ $ignoreFile = $gitIgnoreFiles | Where-Object { $_.DirectoryName -eq $file.DirectoryName }
+ if ($ignoreFile) {
+ $excludedPatterns = Get-Content $ignoreFile.FullName | Select-String -Pattern '^(?!#).*' | ForEach-Object { $_.Line }
+ if ($excludedPatterns -notcontains $file.Name) {
+ $filteredFiles += $file
+ }
+ }
+ else {
+ $filteredFiles += $file
+ }
+}
+
+$scriptPath = $MyInvocation.MyCommand.Path
+$filteredFiles = $filteredFiles | Where-Object { $_.FullName -ne $scriptPath }
+
+# Filter the .resx and .resjson files against any .gitignore found in the same directory. Note that this comparison matches on file name only; full .gitignore glob semantics aren't applied.
+Write-Host "Found $($filteredFiles.Count) files."
+
+function Update-Terminology {
+ param (
+ [Parameter(Mandatory = $true)]
+ [ref]$Content,
+ [Parameter(Mandatory = $true)]
+ [object[]]$Terminology
+ )
+
+ foreach ($item in $Terminology.GetEnumerator()) {
+ $old = [regex]::Escape($item.Key)
+ $new = $item.Value
+ $toReplace = '(?<!(name=\"[^$]*|https?:\/\/aka\.ms/[a-z0-9]*))' + $($old)
+
+ # Replace the old terminology with the new one
+ $Content.Value = $Content.Value -replace $toReplace, $new
+ }
+}
+
+# Loop through each file
+foreach ($file in $filteredFiles) {
+ # Read the content of the file
+ $content = Get-Content $file.FullName
+
+ Write-Host "Processing $file"
+
+ Update-Terminology -Content ([ref]$content) -Terminology $terminology
+ Update-Terminology -Content ([ref]$content) -Terminology $postTransforms
+
+ $newContent = $content -join "`n"
+ if ($newContent -ne (Get-Content $file.FullName -Raw)) {
+ Write-Host "Updating $file"
+ # Write the updated content back to the file
+ Set-Content -Path $file.FullName -Value $newContent
+ }
+}
+
+```
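The core trick in the script above is sorting the replacement keys by length, descending, so the most specific phrase is rewritten before any shorter phrase it contains. A minimal sketch of that ordering logic (illustrative Python only, not the shipped PowerShell):

```python
# Illustrative sketch: why replacements run longest-key-first. If the
# shorter "Azure AD" ran before "Azure AD Premium P1", the longer phrase
# would never match intact and would be mangled mid-replacement.
terminology = [
    ("Azure AD Premium P1", "Microsoft Entra ID P1"),
    ("Azure AD", "Microsoft Entra ID"),
]

# Sort by key length, descending, so the most specific phrase wins.
terminology.sort(key=lambda pair: len(pair[0]), reverse=True)

def update_terminology(text: str) -> str:
    for old, new in terminology:
        text = text.replace(old, new)
    return text

print(update_terminology("Buy Azure AD Premium P1 for your Azure AD tenant."))
# → Buy Microsoft Entra ID P1 for your Microsoft Entra ID tenant.
```

The PowerShell version additionally guards each replacement with a negative lookbehind so that XML `name="…"` attributes and `aka.ms` short links are left untouched; that part is omitted here for brevity.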
## Communicate the change to your customers
To help your customers with the transition, it's helpful to add a note: "Azure A
- [Stay up-to-date with what's new in Azure AD/Microsoft Entra ID](whats-new.md) - [Get started using Microsoft Entra ID at the Microsoft Entra admin center](https://entra.microsoft.com/)-- [Learn more about Microsoft Entra with content from Microsoft Learn](/entra)
+- [Learn more about Microsoft Entra with content from Microsoft Learn](/entra)
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
Previously updated : 08/29/2023 Last updated : 09/27/2023
Service plan display names will change on October 1, 2023. Microsoft Entra ID Fr
:::image type="content" source="./media/new-name/azure-ad-new-name.png" alt-text="Diagram showing the new name for Azure AD and Azure AD External Identities." border="false" lightbox="./media/new-name/azure-ad-new-name-high-res.png":::
-During 2023, you may see both the current Azure AD name and the new Microsoft Entra ID name in support area paths. For self-service support, look for the topic path of "Microsoft Entra" or "Azure Active Directory/Microsoft Entra ID."
+During 2023, you may see both the current Azure AD name and the new Microsoft Entra ID name in support area paths. For self-service support, look for the topic path of `Microsoft Entra` or `Azure Active Directory/Microsoft Entra ID`.
The product name and icons are changing, and features are now branded as Microsoft Entra instead of Azure AD. If you're updating the name to Microsoft Entra ID in your own content or experiences, see [How to: Rename Azure AD](how-to-rename-azure-ad.md).
Naming is also not changing for:
No. Today, we offer two PowerShell modules for administering identity tasks: the Azure AD PowerShell module, which is planned for deprecation in March 2024, and the Microsoft Graph PowerShell module.
-In the Azure AD PowerShell for Graph module, "AzureAD" is in the name of almost all the cmdlets. These won't change, and you can continue to use these same cmdlets now that the official product name is Microsoft Entra ID.
+In the Azure AD PowerShell for Graph module, `AzureAD` is in the name of almost all the cmdlets. These won't change, and you can continue to use these same cmdlets now that the official product name is Microsoft Entra ID.
Microsoft Graph PowerShell cmdlets aren't branded with Azure AD. We encourage you to plan your migration from Azure AD PowerShell to Microsoft Graph PowerShell, which is the recommended module for interacting with Microsoft Entra ID in the future.
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following example walks you through setting up a custom synchronization rule
- Connected System: contoso.com - Connected System Object Type: user - Metaverse Object Type: person
- - Precedence: 200
+ - Precedence: 20
![Screenshot of creating an inbound synchronization rule basics.](media/how-to-lifecycle-workflow-sync-attributes/create-inbound-rule.png) 1. On the **Scoping filter** screen, select **Next.** 1. On the **Join rules** screen, select **Next**.
The following example walks you through setting up a custom synchronization rule
- Connected System: &lt;your tenant&gt; - Connected System Object Type: user - Metaverse Object Type: person
- - Precedence: 201
+ - Precedence: 21
1. On the **Scoping filter** screen, select **Next.** 1. On the **Join rules** screen, select **Next**. 1. On the **Transformations** screen, Under **Add transformations,** enter the following information.
Get-MgUser -UserId "44198096-38ea-440d-9497-bb6b06bcaf9b" | Select-Object Displa
- [Create a custom workflow using the Microsoft Entra admin center](tutorial-onboard-custom-workflow-portal.md) - [Configure API-driven inbound provisioning app (Public preview)](../app-provisioning/inbound-provisioning-api-configure-app.md) - [Create a Lifecycle workflow](create-lifecycle-workflow.md)+
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
Replace ID with the object ID of the service principal that you want to restore.
:::zone pivot="ms-graph"
-1. To restore the enterprise application, run the following query:
+To restore the enterprise application, run the following query:
- # [HTTP](#tab/http)
```http POST https://graph.microsoft.com/v1.0/directory/deletedItems/{id}/restore ```
- # [C#](#tab/csharp)
- [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/restore-directory-deleteditem-csharp-snippets.md)]
-
- # [JavaScript](#tab/javascript)
- [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/restore-directory-deleteditem-javascript-snippets.md)]
-
- # [Java](#tab/java)
- [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
-
- # [Go](#tab/go)
- [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/restore-directory-deleteditem-go-snippets.md)]
-
- # [PowerShell](#tab/powershell)
- [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/restore-directory-deleteditem-powershell-snippets.md)]
-
- # [PHP](#tab/php)
- [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/restore-directory-deleteditem-php-snippets.md)]
-
-
- Replace ID with the object ID of the service principal that you want to restore. :::zone-end
active-directory Cybsafe Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cybsafe-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [A Microsoft Entra tenant](../develop/quickstart-create-new-tenant.md) * A user account in Microsoft Entra ID with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A [CybSafe](https://app.cybsafe.com/login) Administrator account with an enterprise subscription.
+* A [CybSafe](https://www.cybsafe.com/) Administrator account with an enterprise subscription.
## Step 1: Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
active-directory Github Enterprise Managed User Oidc Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both GitHub Enterprise
> * Provision groups and group memberships in GitHub Enterprise Managed User (OIDC) > * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to GitHub Enterprise Managed User (OIDC) (recommended).
-> [!NOTE]
-> This provisioning connector is enabled only for Enterprise Managed Users beta participants.
-- ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites:
ai-services Luis Tutorial Node Import Utterances Csv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-tutorial-node-import-utterances-csv.md
The column entries that contain the utterances in the CSV have to be parsed into
For example, the entry for "Turn on the lights" maps to this JSON: ```json
+{
+ "text": "Turn on the lights",
+ "intentName": "TurnOn",
+ "entityLabels": [
{
- "text": "Turn on the lights",
- "intentName": "TurnOn",
- "entityLabels": [
- {
- "entityName": "Operation",
- "startCharIndex": 5,
- "endCharIndex": 6
- },
- {
- "entityName": "Device",
- "startCharIndex": 12,
- "endCharIndex": 17
- }
- ]
+ "entityName": "Operation",
+ "startCharIndex": 5,
+ "endCharIndex": 6
+ },
+ {
+ "entityName": "Device",
+ "startCharIndex": 12,
+ "endCharIndex": 17
}
+ ]
+}
``` In this example, the `intentName` comes from the user request under the **Request** column heading in the CSV file, and the `entityName` comes from the other columns with key information. For example, if there's an entry for **Operation** or **Device**, and that string also occurs in the actual request, then it can be labeled as an entity. The following code demonstrates this parsing process. You can copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_parse.js) it and save it to `_parse.js`.
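The parsing process described above can be sketched as follows. This is an illustrative Python version, not the sample's `_parse.js`, and the column names (`Intent`, `Request`, `Operation`, `Device`) are assumptions based on the description:

```python
# Hypothetical sketch of the CSV-row-to-utterance mapping described above.
# Column names are assumed; the sample's _parse.js is the authoritative version.
def row_to_utterance(row: dict) -> dict:
    text = row["Request"]
    entity_labels = []
    for entity_name in ("Operation", "Device"):
        value = row.get(entity_name, "")
        start = text.find(value) if value else -1
        # Only label the entity if its value actually occurs in the request text.
        if start >= 0:
            entity_labels.append({
                "entityName": entity_name,
                "startCharIndex": start,
                # End index is inclusive, matching the JSON shown above.
                "endCharIndex": start + len(value) - 1,
            })
    return {"text": text, "intentName": row["Intent"], "entityLabels": entity_labels}

row = {"Intent": "TurnOn", "Request": "Turn on the lights",
       "Operation": "on", "Device": "lights"}
print(row_to_utterance(row))
```

Running this on the example row reproduces the JSON shown earlier: "on" is labeled at characters 5–6 and "lights" at 12–17.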
ai-services Relation Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/concepts/relation-extraction.md
Text Analytics for health features relation extraction, which is used to identi
Relation extraction output contains URI references and assigned roles of the entities of the relation type. For example, in the following JSON: ```json
- "relations": [
- {
- "relationType": "DosageOfMedication",
- "entities": [
- {
- "ref": "#/results/documents/0/entities/0",
- "role": "Dosage"
- },
- {
- "ref": "#/results/documents/0/entities/1",
- "role": "Medication"
- }
- ]
- },
- {
- "relationType": "RouteOfMedication",
- "entities": [
- {
- "ref": "#/results/documents/0/entities/1",
- "role": "Medication"
- },
- {
- "ref": "#/results/documents/0/entities/2",
- "role": "Route"
- }
- ]
-...
+"relations": [
+ {
+ "relationType": "DosageOfMedication",
+ "entities": [
+ {
+ "ref": "#/results/documents/0/entities/0",
+ "role": "Dosage"
+ },
+ {
+ "ref": "#/results/documents/0/entities/1",
+ "role": "Medication"
+ }
+ ]
+ },
+ {
+ "relationType": "RouteOfMedication",
+ "entities": [
+ {
+ "ref": "#/results/documents/0/entities/1",
+ "role": "Medication"
+ },
+ {
+ "ref": "#/results/documents/0/entities/2",
+ "role": "Route"
+ }
+ ]
+ }
] ```
ai-services Understand Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/understand-embeddings.md
Title: Azure OpenAI Service embeddings
-description: Learn more about Azure OpenAI embeddings API for document search and cosine similarity
+description: Learn more about how the Azure OpenAI embeddings API uses cosine similarity for document search and to measure similarity between texts.
recommendations: false
-# Understanding embeddings in Azure OpenAI Service
+# Understand embeddings in Azure OpenAI Service
-An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. Embeddings power vector similarity search in Azure Databases such as [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md).
+An embedding is a special format of data representation that machine learning models and algorithms can easily use. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. Embeddings power vector similarity search in Azure Databases such as [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md).
## Embedding models
-Different Azure OpenAI embedding models are specifically created to be good at a particular task. **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text. **Text search embeddings** help measure whether long documents are relevant to a short query. **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.
+Different Azure OpenAI embedding models are created to be good at a particular task:
+
+- **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text.
+- **Text search embeddings** help measure whether long documents are relevant to a short query.
+- **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.
-Embeddings make it easier to do machine learning on large inputs representing words by capturing the semantic similarities in a vector space. Therefore, we can use embeddings to determine if two text chunks are semantically related or similar, and provide a score to assess similarity.
+Embeddings make it easier to do machine learning on large inputs representing words by capturing the semantic similarities in a vector space. Therefore, you can use embeddings to determine if two text chunks are semantically related or similar, and provide a score to assess similarity.
## Cosine similarity Azure OpenAI embeddings rely on cosine similarity to compute similarity between documents and a query.
-From a mathematic perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. This is beneficial because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information about cosine similarity equations, see [this article on Wikipedia](https://en.wikipedia.org/wiki/Cosine_similarity).
+From a mathematical perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multidimensional space. This measurement is beneficial, because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information about cosine similarity equations, see [Cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
-An alternative method of identifying similar documents is to count the number of common words between documents. Unfortunately, this approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among completely disparate topics. For this reason, cosine similarity can offer a more effective alternative.
+An alternative method of identifying similar documents is to count the number of common words between documents. This approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among disparate topics. For this reason, cosine similarity can offer a more effective alternative.
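The computation itself is short. A minimal sketch (illustrative only; production code would use a vector library or a vector database, and real Azure OpenAI embeddings have on the order of 1,536 dimensions rather than the three used here):

```python
import math

# Cosine similarity between two embedding vectors: the dot product
# normalized by the vectors' magnitudes.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same direction score ~1.0 even when their
# magnitudes (think: document lengths) differ widely.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # close to 1.0

# Orthogonal vectors score ~0.0.
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # close to 0.0
```

This illustrates the point above: the second vector is twice the magnitude of the first, so their Euclidean distance is large, yet their cosine similarity is maximal because the angle between them is zero.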
## Next steps * Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md). * Store your embeddings and perform vector (similarity) search using [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) or [Azure Cosmos DB for NoSQL](../../../cosmos-db/rag-data-openai.md)-
ai-services Speech Synthesis Markup Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-pronunciation.md
The following content types are supported for the `interpret-as` and `format` at
| `ordinal` | None | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option."| | `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." | | `fraction` | None | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
-| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
+| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date">10-12-2016</say-as>`<br /><br />As "Today is October twelfth two thousand sixteen."<br />Pronounces:<br /><br />`Today is <say-as interpret-as="date" format="dmy">10-12-2016</say-as>`<br /><br />As "Today is December tenth two thousand sixteen." |
| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." | | `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is only supported on English and Spanish. | | `telephone` | None | The text is spoken as a telephone number. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone">(888) 555-1212</say-as>`<br /><br />As "The number is area code eight eight eight five five five one two one two." |
aks Access Private Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/access-private-cluster.md
The pod created by the `run` command provides `helm` and the latest compatible v
`command invoke` runs the commands from your cluster, so any commands run in this manner are subject to your configured networking restrictions and any other configured restrictions. Make sure there are enough nodes and resources in your cluster to schedule this command pod.
+> [!NOTE]
+> The output for `command invoke` is limited to 512 KB in size.
+ ## Run commands on your AKS cluster ### [Azure CLI - `command invoke`](#tab/azure-cli)
aks Aks Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md
Title: Azure Kubernetes Service support and help options
-description: How to obtain help and support for questions or problems when you create solutions using Azure Kubernetes Service.
+ Title: Support and troubleshooting for Azure Kubernetes Service (AKS)
+description: This article provides support and troubleshooting options for Azure Kubernetes Service (AKS).
Previously updated : 10/18/2022 Last updated : 09/27/2023 # Support and troubleshooting for Azure Kubernetes Service (AKS)
-Here are suggestions for where you can get help when developing your Azure Kubernetes Service (AKS) solutions.
- ## Self help troubleshooting :::image type="icon" source="./media/logos/doc-logo.png" alt-text="":::
-Various articles explain how to determine, diagnose, and fix issues that you might encounter when using Azure Kubernetes Service. Use these articles to troubleshoot deployment failures, security-related problems, connection issues and more.
-
-For a full list of self help troubleshooting content, see [Azure Kubernetes Service troubleshooting documentation](/troubleshoot/azure/azure-kubernetes/welcome-azure-kubernetes)
+The [AKS troubleshooting documentation](/troubleshoot/azure/azure-kubernetes/welcome-azure-kubernetes) provides guidance for how to diagnose and resolve issues that you might encounter when using AKS. These articles cover how to troubleshoot deployment failures, security-related problems, connection issues, and more.
## Post a question on Microsoft Q&A :::image type="icon" source="./media/logos/microsoft-logo.png" alt-text="":::
-For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), Azure's preferred destination for community support.
+Azure's preferred destination for community support, [Microsoft Q&A](/answers/products/azure), allows you to ask technical questions and engage with Azure engineers, Most Valuable Professionals (MVPs), partners, and customers. When you ask a question, make sure you use the `azure-kubernetes-service` tag. You can also submit your own answers and help other community members with their questions.
+
+If you can't find an answer to your problem using search, you can submit a new question to Microsoft Q&A and tag it with the appropriate Azure service and area.
-If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of the following tags when asking your question:
+The following table lists the tags for AKS and related services:
| Area | Tag | |-|-|
If you can't find an answer to your problem using search, submit a new question
:::image type="icon" source="./media/logos/azure-logo.png" alt-text="":::
-Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+Explore the range of [Azure support options](https://azure.microsoft.com/support/plans) and choose a plan that best fits your needs. Azure customers can create and manage support requests in the Azure portal.
-- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).--- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you.
+If you already have an Azure Support Plan, you can [open a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
## Create a GitHub issue :::image type="icon" source="./media/logos/github-logo.png" alt-text="":::
-If you need help with the language and tools used to develop and manage Azure Kubernetes Service, open an issue in its repository on GitHub.
+If you need help with the languages and tools for developing and managing AKS, you can open an issue in its GitHub repository.
+
+The following table lists the GitHub repositories for AKS and related services:
| Library | GitHub issues URL| | | | | Azure PowerShell | https://github.com/Azure/azure-powershell/issues |
-| Azure CLI | https://github.com/Azure/azure-cli/issues |
-| Azure REST API | https://github.com/Azure/azure-rest-api-specs/issues |
-| Azure SDK for Java | https://github.com/Azure/azure-sdk-for-java/issues |
-| Azure SDK for Python | https://github.com/Azure/azure-sdk-for-python/issues |
-| Azure SDK for .NET | https://github.com/Azure/azure-sdk-for-net/issues |
-| Azure SDK for JavaScript | https://github.com/Azure/azure-sdk-for-js/issues |
-| Terraform | https://github.com/Azure/terraform/issues |
+| Azure CLI | https://github.com/Azure/azure-cli/issues |
+| Azure REST API | https://github.com/Azure/azure-rest-api-specs/issues |
+| Azure SDK for Java | https://github.com/Azure/azure-sdk-for-java/issues |
+| Azure SDK for Python | https://github.com/Azure/azure-sdk-for-python/issues |
+| Azure SDK for .NET | https://github.com/Azure/azure-sdk-for-net/issues |
+| Azure SDK for JavaScript | https://github.com/Azure/azure-sdk-for-js/issues |
+| Terraform | https://github.com/Azure/terraform/issues |
## Stay informed of updates and new releases :::image type="icon" source="./media/logos/updates-logo.png" alt-text="":::
-Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute).
-
-News and information about Azure Virtual Machines is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
+Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute). For information about Azure Virtual Machines, see the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
## Next steps
-Learn more about [Azure Kubernetes Service](./index.yml)
+Visit the [Azure Kubernetes Service (AKS) documentation](./index.yml).
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
AKS now supports an exclusive channel dedicated to controlling node-level OS security updates.
## How does node OS auto-upgrade work with cluster auto-upgrade?
-Node-level OS security updates come in at a faster cadence than Kubernetes patch or minor version updates. This is the main reason for introducing a separate, dedicated Node OS auto-upgrade channel. With this feature, you can have a flexible and customized strategy for node-level OS security updates and a separate plan for cluster-level Kubernetes version auto-upgrades [auto-upgrade][Autoupgrade].
+Node-level OS security updates come in at a faster cadence than Kubernetes patch or minor version updates. This is the main reason for introducing a separate, dedicated Node OS auto-upgrade channel. With this feature, you can have a flexible and customized strategy for node-level OS security updates and a separate plan for cluster-level Kubernetes version [auto-upgrades][Autoupgrade].
It's highly recommended to use both cluster-level [auto-upgrades][Autoupgrade] and the node OS auto-upgrade channel together. Scheduling can be fine-tuned by applying two separate sets of [maintenance windows][planned-maintenance] - `aksManagedAutoUpgradeSchedule` for the cluster [auto-upgrade][Autoupgrade] channel and `aksManagedNodeOSUpgradeSchedule` for the node OS auto-upgrade channel. ## Using node OS auto-upgrade
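As an illustration of the setup above, the node OS channel can be set with the Azure CLI (resource names are placeholders; `NodeImage` is one of the supported channel values):

```azurecli
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-os-upgrade-channel NodeImage
```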
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Modifying any **Azure-created tags** on resources under the node resource group
To reduce the chance of changes in the node resource group affecting your clusters, you can enable node resource group lockdown to apply a deny assignment to your AKS resources. More information can be found in [Cluster configuration in AKS][configure-nrg]. > [!WARNING]
-> If you have don't have node resource group lockdown enabled, you can directly modify any resource in the node resource group. Directly modifying resources in the node resource group can cause your cluster to become unstable or unresponsive.
+> If you don't have node resource group lockdown enabled, you can directly modify any resource in the node resource group. Directly modifying resources in the node resource group can cause your cluster to become unstable or unresponsive.
## Pods
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Image Cleaner generates an `ImageList` containing nonrunning and vulnerable imag
name: imagelist spec: images:
- - docker.io/library/alpine:3.7.3 # You can also use "*" to specify all non-running images
+ - docker.io/library/alpine:3.7.3
+ # You can also use "*" to specify all non-running images
``` 2. Apply the `ImageList` to your cluster using the `kubectl apply` command.
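Assembled as a single manifest, the `ImageList` above might look like this sketch (the `apiVersion` follows the upstream Eraser project that Image Cleaner is based on, so verify it against your installed version):

```yaml
apiVersion: eraser.sh/v1
kind: ImageList
metadata:
  name: imagelist
spec:
  images:
    - docker.io/library/alpine:3.7.3   # or "*" for all non-running images
```

Save it as `imagelist.yaml` and apply it with `kubectl apply -f imagelist.yaml`.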
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
Title: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kub
description: Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview). Previously updated : 05/24/2022 Last updated : 09/27/2023 # Integrations with Kubernetes Event-driven Autoscaling (KEDA) on Azure Kubernetes Service (AKS) (Preview)
-The Kubernetes Event-driven Autoscaling (KEDA) add-on integrates with features provided by Azure and open source projects.
+The Kubernetes Event-driven Autoscaling (KEDA) add-on for AKS integrates with features provided by Azure and open-source projects.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] > [!IMPORTANT]
-> Integrations with open source projects are not covered by the [AKS support policy][aks-support-policy].
+> The [AKS support policy][aks-support-policy] doesn't cover integrations with open-source projects.
## Observe your autoscaling with Kubernetes events
To learn about the available metrics, we recommend reading the [KEDA documentati
## Scalers for Azure services
-KEDA can integrate with various tools and services through [a rich catalog of 50+ KEDA scalers][keda-scalers]. It supports leading cloud platforms (such as Azure) and open-source technologies such as Redis and Kafka.
+KEDA integrates with various tools and services through [a rich catalog of 50+ KEDA scalers][keda-scalers] and supports leading cloud platforms and open-source technologies.
-It leverages the following scalers for Azure
+KEDA leverages the following scalers for Azure services:
- [Azure Application Insights](https://keda.sh/docs/latest/scalers/azure-app-insights/) - [Azure Blob Storage](https://keda.sh/docs/latest/scalers/azure-storage-blob/)
It leverages the following scalers for Azure
- [Azure Service Bus](https://keda.sh/docs/latest/scalers/azure-service-bus/) - [Azure Storage Queue](https://keda.sh/docs/latest/scalers/azure-storage-queue/)
-Next to the built-in scalers, you can install external scalers yourself to autoscale on other Azure
+You can also install external scalers to autoscale on other Azure services:
- [Azure Cosmos DB (Change feed)](https://github.com/kedacore/external-scaler-azure-cosmos-db)
-However, these external scalers aren't supported as part of the add-on and rely on community support.
+These external scalers *aren't supported as part of the add-on* and rely on community support.
## Next steps
-* [Enable the KEDA add-on with an ARM template][keda-arm]
-* [Enable the KEDA add-on with the Azure CLI][keda-cli]
-* [Troubleshoot KEDA add-on problems][keda-troubleshoot]
-* [Autoscale a .NET Core worker processing Azure Service Bus Queue message][keda-sample]
+- [Enable the KEDA add-on with an ARM template][keda-arm]
+- [Enable the KEDA add-on with the Azure CLI][keda-cli]
+- [Troubleshoot KEDA add-on problems][keda-troubleshoot]
+- [Autoscale a .NET Core worker processing Azure Service Bus Queue message][keda-sample]
<!-- LINKS - internal --> [aks-support-policy]: support-policies.md
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
* I configured a maintenance window, but upgrade didn't happen - why?
- AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 6 hours between the creation/update of the maintenance configuration, and when it's scheduled to start.
+ AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 24 hours between the creation/update of the maintenance configuration, and when it's scheduled to start.
* AKS auto-upgrade didn't upgrade all my agent pools - or one of the pools was upgraded outside of the maintenance window?
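Returning to the scheduling note above: a maintenance window for the auto-upgrade channel could be created well ahead of its first occurrence with a command like this sketch (names and schedule values are placeholders):

```azurecli
az aks maintenanceconfiguration add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name aksManagedAutoUpgradeSchedule \
  --schedule-type Weekly \
  --day-of-week Friday \
  --interval-weeks 1 \
  --start-time 20:00 \
  --duration 8
```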
aks Use Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-linux.md
To get started using the Azure Linux container host for AKS, see:
* [Creating a cluster with Azure Linux][azurelinux-cluster-config] * [Add an Azure Linux node pool to your existing cluster][azurelinux-node-pool] * [Ubuntu to Azure Linux migration][ubuntu-to-azurelinux]
+* [Azure Linux supported GPU SKUs](../azure-linux/intro-azure-linux.md#azure-linux-container-host-supported-gpu-skus)
## How to upgrade Azure Linux nodes
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
A custom user-assigned managed identity for the control plane enables access to
--name myManagedCluster \ --network-plugin azure \ --vnet-subnet-id <subnet-id> \
- --docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.2.0.10 \ --service-cidr 10.2.0.0/24 \ --enable-managed-identity \
Now you can create your AKS cluster with your existing identities. Make sure to
--name myManagedCluster \ --network-plugin azure \ --vnet-subnet-id <subnet-id> \
- --docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.2.0.10 \ --service-cidr 10.2.0.0/24 \ --enable-managed-identity \
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
ms.devlang: java Previously updated : 04/11/2023 Last updated : 09/27/2023 #Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
In this tutorial, you learn how to:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>5.4.0</version>
</dependency> <!-- Adds the Ability to Push Refresh -->
In this tutorial, you learn how to:
<groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency>+
+ <dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>5.5.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+ </dependencyManagement>
``` ### [Spring Boot 2](#tab/spring-boot-2)
In this tutorial, you learn how to:
<groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency>+
+ <dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.11.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+ </dependencyManagement>
```
azure-app-configuration Howto Convert To The New Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-convert-to-the-new-spring-boot.md
ms.devlang: java
Previously updated : 04/11/2023 Last updated : 09/27/2023 # Convert to the new App Configuration library for Spring Boot
All of the group and artifact IDs in the Azure libraries for Spring Boot have be
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
- <version>5.4.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>5.4.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management</artifactId>
- <version>5.4.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>5.4.0</version>
</dependency>+
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>5.5.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
``` ### [Spring Boot 2](#tab/spring-boot-2)
All of the group and artifact IDs in the Azure libraries for Spring Boot have be
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
- <version>4.10.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>4.10.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management</artifactId>
- <version>4.10.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>4.10.0</version>
</dependency>+
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.11.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
```
The 4.7.0 version is the first 4.x version of the library. It matches the versio
As of the 4.7.0 version, the App Configuration and feature management libraries are part of the `spring-cloud-azure-dependencies` bill of materials (BOM). The BOM file ensures that you no longer need to specify the version of the libraries in your project. The BOM automatically manages the version of the libraries.
-```xml
-
-```
-
-### [Spring Boot 3](#tab/spring-boot-3)
-
-```xml
-<dependency>
- <groupId>com.azure.spring</groupId>
- <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>5.4.0</version>
- <type>pom</type>
-</dependency>
-```
-
-### [Spring Boot 2](#tab/spring-boot-2)
-
-```xml
-<dependency>
- <groupId>com.azure.spring</groupId>
- <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>4.10.0</version>
- <type>pom</type>
-</dependency>
-```
- ## Package paths renamed
azure-app-configuration Howto Create Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-create-snapshots.md
Title: How to create snapshots (preview) in Azure App Configuration
-description: How to create and manage snapshots Azure App Configuration store.
+ Title: How to manage and use snapshots (preview) in Azure App Configuration
+description: How to manage and use snapshots in an Azure App Configuration store.
Previously updated : 05/16/2023 Last updated : 09/28/2023
-# Use snapshots (preview)
+# Manage and use snapshots (preview)
-In this article, learn how to create and manage snapshots in Azure App Configuration. Snapshot is a set of App Configuration settings stored in an immutable state.
+In this article, learn how to create, use, and manage snapshots in Azure App Configuration. A snapshot is a set of App Configuration settings stored in an immutable state.
## Prerequisites
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
The following steps describe how to assign the App Configuration Data Reader rol
using Azure.Identity; ```
-1. To access values stored in App Configuration, update the `Builder` configuration to use the the `AddAzureAppConfiguration()` method.
+1. To access values stored in App Configuration, update the `Builder` configuration to use the `AddAzureAppConfiguration()` method.
### [.NET 6.0+](#tab/core6x)
The following steps describe how to assign the App Configuration Data Reader rol
1. Find the endpoint to your App Configuration store. This URL is listed on the **Overview** tab for the store in the Azure portal.
-1. Open `bootstrap.properties`, remove the connection-string property and replace it with endpoint:
+1. Open `bootstrap.properties`, remove the connection-string property, and replace it with the endpoint for a system-assigned identity:
```properties spring.cloud.azure.appconfiguration.stores[0].endpoint=<service_endpoint> ```
+For a user-assigned identity:
+
+```properties
+spring.cloud.azure.appconfiguration.stores[0].endpoint=<service_endpoint>
+spring.cloud.azure.credential.managed-identity-enabled=true
+spring.cloud.azure.credential.client-id=<client_id>
+```
+ > [!NOTE]
-> If you want to use **user-assigned managed identity** the property `spring.cloud.azure.appconfiguration.stores[0].managed-identity.client-id`, ensure that you specify the `clientId` when creating the [ManagedIdentityCredential](/java/api/com.azure.identity.managedidentitycredential).
+> For more information see [Spring Cloud Azure authentication](/azure/developer/java/spring-framework/authentication).
:::zone-end
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
ms.devlang: java Previously updated : 04/11/2023 Last updated : 09/27/2023 #Customer intent: As an Spring Boot developer, I want to use feature flags to control feature availability quickly and confidently.
To create a new Spring Boot project:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>5.4.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>5.4.0</version>
</dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-thymeleaf</artifactId> </dependency>+
+ <dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>5.5.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+ </dependencyManagement>
``` ### [Spring Boot 2](#tab/spring-boot-2)
To create a new Spring Boot project:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>4.10.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>4.10.0</version>
</dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-thymeleaf</artifactId> </dependency>
+
+ <dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.11.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+ </dependencyManagement>
```
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
ms.devlang: java Previously updated : 04/11/2023 Last updated : 09/27/2023 #Customer intent: As a Java Spring developer, I want to manage all my app settings in one place.
To install the Spring Cloud Azure Config starter module, add the following depen
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>5.4.0</version>
</dependency>+
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>5.5.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
``` ### [Spring Boot 2](#tab/spring-boot-2)
To install the Spring Cloud Azure Config starter module, add the following depen
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>4.10.0</version>
</dependency>+
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.11.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
```
azure-app-configuration Use Feature Flags Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-spring-boot.md
ms.devlang: java Previously updated : 04/11/2023 Last updated : 09/27/2023
The easiest way to connect your Spring Boot application to App Configuration is
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>5.4.0</version>
</dependency>+
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>5.5.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
``` ### [Spring Boot 2](#tab/spring-boot-2)
The easiest way to connect your Spring Boot application to App Configuration is
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>4.10.0</version>
</dependency>+
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.11.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
```
azure-arc Limitations Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-postgresql.md
This article describes limitations of Azure Arc-enabled PostgreSQL.
Configuring high availability to recover from infrastructure failures isn't yet available.
+## Monitoring
+
+Currently, local monitoring with Grafana is only available for the default `postgres` database. Metrics dashboards for user-created databases will be empty.
+
+## Configuration
+
+System configurations that are stored in `postgresql.auto.conf` are backed up when a base backup is created. This means that changes made after the last base backup won't be present in a restored server until a new base backup is taken to capture those changes.
## Roles and responsibilities
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the follow
- **Keyless delivery:** The enrollment of ESUs on Azure Arc-enabled Windows Server 2012/2012 R2 machines won't require the acquisition or activation of keys.
-Other Azure services through Azure Arc-enabled servers are available, with offerings such as:
+## Access to Azure services
+
+For Azure Arc-enabled servers enrolled in WS2012 ESUs enabled by Azure Arc, free access is provided to these Azure services from October 10, 2023:
+
+* [Azure Update Manager](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
+* [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2) - Track changes in virtual machines hosted in Azure, on-premises, and other cloud environments.
+* [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) - Audit the configuration settings in a virtual machine. Guest configuration supports Azure VMs natively and non-Azure physical and virtual servers through Azure Arc-enabled servers.
+
+Other Azure services are also available through Azure Arc-enabled servers, with offerings such as:
* [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) - As part of the cloud security posture management (CSPM) pillar, it provides server protections through [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers.md) to help protect you from various cyber threats and vulnerabilities.
-* [Azure Update Manager (preview)](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
-* [Azure Policy](../../governance/policy/overview.md) helps to enforce organizational standards and to assess compliance at-scale. Beyond providing an aggregated view to evaluate the overall state of the environment, Azure Policy helps to bring your resources to compliance through bulk and automatic remediation.
+* [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) - Collect security-related events and correlate them with other data sources.
>[!NOTE] >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager (preview) and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter.
azure-cache-for-redis Cache Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-ml.md
When `show_output=True`, the output of the Docker build process is shown. Once t
"passwords": [ { "name": "password",
- "value": "Iv0lRZQ9762LUJrFiffo3P4sWgk4q+nW"
+ "value": "abcdefghijklmmopqrstuv1234567890"
}, { "name": "password2",
- "value": "=pKCxHatX96jeoYBWZLsPR6opszr==mg"
+ "value": "1234567890abcdefghijklmmopqrstuv"
} ],
- "username": "myml08024f78fd10"
+ "username": "charlie.roy"
} ```
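The credentials returned above can be used to authenticate a local Docker client against the container registry; a sketch (the registry login server is a placeholder, and the username and password come from the JSON output):

```bash
docker login <registry-name>.azurecr.io \
  --username charlie.roy \
  --password <password>
```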
After a few moments, the resource group and all of its resources are deleted.
* Learn to configure your function app in the [Functions](../azure-functions/functions-create-function-linux-custom-image.md) documentation. * [API Reference](/python/api/azureml-contrib-functions/azureml.contrib.functions) * Create a [Python app that uses Azure Cache for Redis](./cache-python-get-started.md)+
azure-linux Intro Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/intro-azure-linux.md
The Azure Linux Container Host offers the following key benefits:
- All existing and future AKS extensions, add-ons, and open-source projects on AKS support both Ubuntu and Azure Linux. This includes support for runtime components like Dapr, IaC tools like Terraform, and monitoring solutions like Dynatrace. - Azure Linux ships with containerd as its container runtime and the upstream Linux kernel, which enables existing containers based on Linux images (like Alpine) to work seamlessly on Azure Linux.
+## Azure Linux Container Host supported GPU SKUs
+
+The Azure Linux Container Host supports the following GPU SKUs:
+
+- [NVIDIA V100][nvidia-v100]
+- [NVIDIA T4][nvidia-t4]
+ > [!NOTE]
->
-> For GPU workloads, Azure Linux doesn't support NC A100 v4 series. All other VM SKUs that are available on AKS are available with Azure Linux.
+> Azure Linux doesn't support the NC A100 v4 series. All other VM SKUs that are available on AKS are available with Azure Linux.
> > If there are any areas you would like to have priority, please file an issue in the [AKS GitHub repository](https://github.com/Azure/AKS/issues).
The Azure Linux Container Host offers the following key benefits:
- Follow our tutorial to [Deploy, manage, and update applications](./tutorial-azure-linux-create-cluster.md). - Get started by [Creating an Azure Linux Container Host for AKS cluster using Azure CLI](./quickstart-azure-cli.md).
+<!-- LINKS - internal -->
+[nvidia-v100]: ../virtual-machines/ncv3-series.md
+[nvidia-t4]: ../virtual-machines/nct4-v3-series.md
+[cis-benchmarks]: ../aks/cis-azure-linux.md
+ <!-- LINKS - external --> [cbl-mariner]: https://github.com/microsoft/CBL-Mariner [azure-linux-packages]: https://packages.microsoft.com/cbl-mariner/2.0/prod/-
-<!-- LINKS - internal -->
-[cis-benchmarks]: ../aks/cis-azure-linux.md
azure-maps How To Manage Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md
To change your pricing tier from Gen1 to Gen2 in the ARM template, update `prici
:::image type="content" source="./media/how-to-manage-pricing-tier/arm-template.png" border="true" alt-text="Screenshot of an ARM template that demonstrates updating pricingTier to G2 and kind to Gen2."::: ```json
- "pricingTier": {
- "type": "string",
- "allowedValues":[
- "G2"
- ],
- "defaultValue": "G2",
- "metadata": {
- "description": "The pricing tier SKU for the account."
- }
- },
- "kind": {
- "type": "string",
- "allowedValues":[
- "Gen2"
- ],
- "defaultValue": "Gen2",
- "metadata": {
- "description": "The pricing tier for the account."
- }
- }
+ "pricingTier": {
+ "type": "string",
+ "allowedValues":[
+ "G2"
+ ],
+ "defaultValue": "G2",
+ "metadata": {
+ "description": "The pricing tier SKU for the account."
+ }
+ },
+ "kind": {
+ "type": "string",
+ "allowedValues":[
+ "Gen2"
+ ],
+ "defaultValue": "Gen2",
+ "metadata": {
+ "description": "The pricing tier for the account."
+ }
+ }
``` :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.maps/maps-create/azuredeploy.json" range="27-46"::: >
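The parameter schema above constrains `pricingTier` and `kind` through `allowedValues` and falls back to `defaultValue` when no value is passed. As a rough local check that mirrors (but does not replace) ARM's own validation, a hypothetical Python sketch:

```python
# Hypothetical local mirror of the ARM parameter schema shown above
params = {
    "pricingTier": {"allowedValues": ["G2"], "defaultValue": "G2"},
    "kind": {"allowedValues": ["Gen2"], "defaultValue": "Gen2"},
}

def validate(name, value=None):
    """Return the value for a parameter, applying defaultValue and allowedValues."""
    spec = params[name]
    value = value if value is not None else spec["defaultValue"]
    if value not in spec["allowedValues"]:
        raise ValueError(f"{name}={value!r} not in {spec['allowedValues']}")
    return value

print(validate("pricingTier"))  # G2
```

Because `allowedValues` contains only the Gen2 values, any attempt to deploy with a Gen1 value fails validation before deployment starts.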
azure-monitor Availability Test Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md
The following steps walk you through the process of creating [standard tests](av
### Steps 1. Connect to your subscription with Azure PowerShell (Connect-AzAccount + Set-AzContext).
-2. List all URL ping tests in a resource group:
+
+2. List all URL ping tests in the current subscription:
```azurepowershell
- $resourceGroup = "myResourceGroup";
- Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup | `
- Where-Object { $_.WebTestKind -eq "ping" };
+ Get-AzApplicationInsightsWebTest | `
+ Where-Object { $_.WebTestKind -eq "ping" } | `
+ Format-Table -Property ResourceGroupName,Name,WebTestKind,Enabled;
```
-3. Find the URL ping Test you want to migrate and record its name.
+
+3. Find the URL ping test you want to migrate and record its resource group and name.
+ 4. The following commands create a standard test with the same logic as the URL ping test: ```azurepowershell
+ $resourceGroup = "pingTestResourceGroup";
$appInsightsComponent = "componentName"; $pingTestName = "pingTestName"; $newStandardTestName = "newStandardTestName";
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
- Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported. >[!NOTE]
->If you are migrating from Container Insights on Azure Red Hat OpenShift v4.x, please also ensure that you have [disabled monitoring](./container-insights-optout-openshift-v4.md) before proceeding with configuring Container Insights on Azure Arc enabled Kubernetes to prevent any installation issues.
+>If you are migrating from Container Insights on Azure Red Hat OpenShift v4.x, please also ensure that you have [disabled monitoring](./container-insights-optout-hybrid.md) before proceeding with configuring Container Insights on Azure Arc enabled Kubernetes to prevent any installation issues.
>
For issues with enabling monitoring, we have provided a [troubleshooting script]
- By default, the containerized agent collects the stdout/ stderr container logs of all the containers running in all the namespaces except kube-system. To configure container log collection specific to particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure desired data collection settings to your ConfigMap configurations file. -- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md)
+- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md)
azure-monitor Collect Custom Metrics Guestos Resource Manager Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md
Previously updated : 05/04/2020 Last updated : 09/28/2023 # Send guest OS metrics to the Azure Monitor metrics store by using an ARM template for a Windows VM
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
Open the **azuredeploy.json** file.
Add a variable to hold the storage account information in the Resource Manager template. Any logs or performance counters specified in the diagnostics config file are written to both the Azure Monitor metric store and the storage account you specify here: ```json
-"variables": {ΓÇ»
-//add this line
-"storageAccountName": "[concat('storage', uniqueString(resourceGroup().id))]",
+"variables": {
+ //add this line
+ "storageAccountName": "[concat('storage', uniqueString(resourceGroup().id))]",
+ ...
+}
``` Find the virtual machine scale set definition in the resources section and add the **identity** section to the configuration. This addition ensures that Azure assigns it a system identity. This step also ensures that the VMs in the scale set can emit guest metrics about themselves to Azure Monitor: ```json
- {
- "type": "Microsoft.Compute/virtualMachineScaleSets",
- "name": "[variables('namingInfix')]",
- "location": "[resourceGroup().location]",
- "apiVersion": "2017-03-30",
- //add these lines below
- "identity": {
- "type": "systemAssigned"
- },
- //end of lines to add
+{
+ "type": "Microsoft.Compute/virtualMachineScaleSets",
+ "name": "[variables('namingInfix')]",
+ "location": "[resourceGroup().location]",
+ "apiVersion": "2017-03-30",
+ //add these lines below
+ "identity": {
+ "type": "systemAssigned"
+ },
+ //end of lines to add
+ ...
+}
``` In the virtual machine scale set resource, find the **virtualMachineProfile** section. Add a new profile called **extensionsProfile** to manage extensions.
In the **extensionProfile**, add a new extension to the template as shown in the
The following code from the MSI extension also adds the diagnostics extension and configuration as an extension resource to the virtual machine scale set resource. Feel free to add or remove performance counters as needed: ```json
- "extensionProfile": {
- "extensions": [
- // BEGINNING of added code
- // Managed identities for Azure resources
- {
- "name": "VMSS-WAD-extension",
- "properties": {
- "publisher": "Microsoft.ManagedIdentity",
- "type": "ManagedIdentityExtensionForWindows",
- "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "port": 50342
- },
- "protectedSettings": {}
- }
-
- },
- // add diagnostic extension. (Remove this comment after pasting.)
- {
- "name": "[concat('VMDiagnosticsVmExt','_vmNodeType0Name')]",
- "properties": {
- "type": "IaaSDiagnostics",
- "autoUpgradeMinorVersion": true,
- "protectedSettings": {
- "storageAccountName": "[variables('storageAccountName')]",
- "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),'2015-05-01-preview').key1]",
- "storageAccountEndPoint": "https://core.windows.net/"
- },
- "publisher": "Microsoft.Azure.Diagnostics",
- "settings": {
- "WadCfg": {
- "DiagnosticMonitorConfiguration": {
- "overallQuotaInMB": "50000",
- "PerformanceCounters": {
- "scheduledTransferPeriod": "PT1M",
- "sinks": "AzMonSink",
- "PerformanceCounterConfiguration": [
- {
- "counterSpecifier": "\\Memory\\% Committed Bytes In Use",
- "sampleRate": "PT15S"
- },
- {
- "counterSpecifier": "\\Memory\\Available Bytes",
- "sampleRate": "PT15S"
- },
- {
- "counterSpecifier": "\\Memory\\Committed Bytes",
- "sampleRate": "PT15S"
- }
- ]
- },
- "EtwProviders": {
- "EtwEventSourceProviderConfiguration": [
- {
- "provider": "Microsoft-ServiceFabric-Actors",
- "scheduledTransferKeywordFilter": "1",
- "scheduledTransferPeriod": "PT5M",
- "DefaultEvents": {
- "eventDestination": "ServiceFabricReliableActorEventTable"
- }
- },
- {
- "provider": "Microsoft-ServiceFabric-Services",
- "scheduledTransferPeriod": "PT5M",
- "DefaultEvents": {
- "eventDestination": "ServiceFabricReliableServiceEventTable"
- }
- }
- ],
- "EtwManifestProviderConfiguration": [
- {
- "provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
- "scheduledTransferLogLevelFilter": "Information",
- "scheduledTransferKeywordFilter": "4611686018427387904",
- "scheduledTransferPeriod": "PT5M",
- "DefaultEvents": {
- "eventDestination": "ServiceFabricSystemEventTable"
- }
- }
- ]
- }
- },
- "SinksConfig": {
- "Sink": [
- {
- "name": "AzMonSink",
- "AzureMonitor": {}
- }
- ]
- }
- },
- "StorageAccount": "[variables('storageAccountName')]"
- },
- "typeHandlerVersion": "1.11"
- }
- }
- ]
- }
- }
+ "extensionProfile": {
+ "extensions": [
+ // BEGINNING of added code
+ // Managed identities for Azure resources
+ {
+ "name": "VMSS-WAD-extension",
+ "properties": {
+ "publisher": "Microsoft.ManagedIdentity",
+ "type": "ManagedIdentityExtensionForWindows",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "port": 50342
+ },
+ "protectedSettings": {}
+ }
+ },
+ // add diagnostic extension. (Remove this comment after pasting.)
+ {
+ "name": "[concat('VMDiagnosticsVmExt','_vmNodeType0Name')]",
+ "properties": {
+ "type": "IaaSDiagnostics",
+ "autoUpgradeMinorVersion": true,
+ "protectedSettings": {
+ "storageAccountName": "[variables('storageAccountName')]",
+ "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),'2015-05-01-preview').key1]",
+ "storageAccountEndPoint": "https://core.windows.net/"
+ },
+ "publisher": "Microsoft.Azure.Diagnostics",
+ "settings": {
+ "WadCfg": {
+ "DiagnosticMonitorConfiguration": {
+ "overallQuotaInMB": "50000",
+ "PerformanceCounters": {
+ "scheduledTransferPeriod": "PT1M",
+ "sinks": "AzMonSink",
+ "PerformanceCounterConfiguration": [
+ {
+ "counterSpecifier": "\\Memory\\% Committed Bytes In Use",
+ "sampleRate": "PT15S"
+ },
+ {
+ "counterSpecifier": "\\Memory\\Available Bytes",
+ "sampleRate": "PT15S"
+ },
+ {
+ "counterSpecifier": "\\Memory\\Committed Bytes",
+ "sampleRate": "PT15S"
+ }
+ ]
+ },
+ "EtwProviders": {
+ "EtwEventSourceProviderConfiguration": [
+ {
+ "provider": "Microsoft-ServiceFabric-Actors",
+ "scheduledTransferKeywordFilter": "1",
+ "scheduledTransferPeriod": "PT5M",
+ "DefaultEvents": {
+ "eventDestination": "ServiceFabricReliableActorEventTable"
+ }
+ },
+ {
+ "provider": "Microsoft-ServiceFabric-Services",
+ "scheduledTransferPeriod": "PT5M",
+ "DefaultEvents": {
+ "eventDestination": "ServiceFabricReliableServiceEventTable"
+ }
+ }
+ ],
+ "EtwManifestProviderConfiguration": [
+ {
+ "provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
+ "scheduledTransferLogLevelFilter": "Information",
+ "scheduledTransferKeywordFilter": "4611686018427387904",
+ "scheduledTransferPeriod": "PT5M",
+ "DefaultEvents": {
+ "eventDestination": "ServiceFabricSystemEventTable"
+ }
+ }
+ ]
+ }
+ },
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "AzMonSink",
+ "AzureMonitor": {}
+ }
+ ]
+ }
+ },
+ "StorageAccount": "[variables('storageAccountName')]"
+ },
+ "typeHandlerVersion": "1.11"
+ }
}
- },
- //end of added code plus a few brackets. Be sure that the number and type of brackets match properly when done.
- {
- "type": "Microsoft.Insights/autoscaleSettings",
-...
+ ]
+ },
+ // end of added code. Be sure that the number and type of brackets match properly when done.
+ {
+ "type": "Microsoft.Insights/autoscaleSettings",
+ ...
+ }
``` -
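Every entry in the `PerformanceCounterConfiguration` array above has the same shape, so if you track many counters it can be less error-prone to generate the array than to hand-edit it. A hedged Python sketch (the counter list matches the WadCfg above; extend it as needed):

```python
import json

# Counters from the WadCfg example above; add or remove specifiers as needed
counters = [
    r"\Memory\% Committed Bytes In Use",
    r"\Memory\Available Bytes",
    r"\Memory\Committed Bytes",
]

# Each counter becomes an object with the shared 15-second sample rate
perf_config = [
    {"counterSpecifier": c, "sampleRate": "PT15S"} for c in counters
]
print(json.dumps(perf_config, indent=2))
```

The generated JSON can be pasted into the `PerformanceCounterConfiguration` array of the template.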
-Add a **dependsOn** for the storage account to ensure it's created in the correct order:
+Add a **dependsOn** for the storage account to ensure it's created in the correct order:
```json
-"dependsOn": [
-"[concat('Microsoft.Network/loadBalancers/', variables('loadBalancerName'))]",
-"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
-//add this line below
-"[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]"
+"dependsOn": [
+ "[concat('Microsoft.Network/loadBalancers/', variables('loadBalancerName'))]",
+ "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]",
+ //add this line below
+ "[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]"
+]
``` Create a storage account if one isn't already created in the template: ```json "resources": [
-// add this code
-{
+ // add this code
+ {
"type": "Microsoft.Storage/storageAccounts", "name": "[variables('storageAccountName')]", "apiVersion": "2015-05-01-preview", "location": "[resourceGroup().location]", "properties": {
- "accountType": "Standard_LRS"
+ "accountType": "Standard_LRS"
}
-},
-// end added code
-{
+ },
+ // end added code
+ {
"type": "Microsoft.Network/virtualNetworks", "name": "[variables('virtualNetworkName')]",
+ ...
+ }
+]
```
-Save and close both files.
+Save and close both files.
-## Deploy the Resource Manager template
+## Deploy the Resource Manager template
-> [!NOTE]
-> You must be running the Azure Diagnostics extension version 1.5 or higher **and** have the **autoUpgradeMinorVersion:** property set to **true** in your Resource Manager template. Azure then loads the proper extension when it starts the VM. If you don't have these settings in your template, change them and redeploy the template.
+> [!NOTE]
+> You must be running the Azure Diagnostics extension version 1.5 or higher **and** have the **autoUpgradeMinorVersion:** property set to **true** in your Resource Manager template. Azure then loads the proper extension when it starts the VM. If you don't have these settings in your template, change them and redeploy the template.
+To deploy the Resource Manager template, use Azure PowerShell:
-To deploy the Resource Manager template, use Azure PowerShell:
+1. Launch PowerShell.
-1. Launch PowerShell.
1. Sign in to Azure using `Login-AzAccount`.+ 1. Get your list of subscriptions by using `Get-AzSubscription`.+ 1. Set the subscription you'll create, or update the virtual machine: ```powershell
- Select-AzSubscription -SubscriptionName "<Name of the subscription>"
+ Select-AzSubscription -SubscriptionName "<Name of the subscription>"
```
-1. Create a new resource group for the VM being deployed. Run the following command:
+
+1. Create a new resource group for the VM being deployed. Run the following command:
```powershell
- New-AzResourceGroup -Name "VMSSWADtestGrp" -Location "<Azure Region>"
+ New-AzResourceGroup -Name "VMSSWADtestGrp" -Location "<Azure Region>"
``` > [!NOTE] > Remember to use an [Azure region that's enabled for custom metrics](./metrics-custom-overview.md#supported-regions).
-
-1. Run the following commands to deploy the VM:
- > [!NOTE]
- > If you want to update an existing scale set, add **-Mode Incremental** to the end of the command.
-
+1. Run the following commands to deploy the VM:
+
+ > [!NOTE]
+ > If you want to update an existing scale set, add **-Mode Incremental** to the end of the command.
+ ```powershell
- New-AzResourceGroupDeployment -Name "VMSSWADTest" -ResourceGroupName "VMSSWADtestGrp" -TemplateFile "<File path of your azuredeploy.JSON file>" -TemplateParameterFile "<File path of your azuredeploy.parameters.JSON file>"
+ New-AzResourceGroupDeployment -Name "VMSSWADTest" -ResourceGroupName "VMSSWADtestGrp" -TemplateFile "<File path of your azuredeploy.JSON file>" -TemplateParameterFile "<File path of your azuredeploy.parameters.JSON file>"
```
-1. After your deployment succeeds, you should find the virtual machine scale set in the Azure portal. It should emit metrics to Azure Monitor.
+1. After your deployment succeeds, you should find the virtual machine scale set in the Azure portal. It should emit metrics to Azure Monitor.
- > [!NOTE]
- > You might run into errors around the selected **vmSkuSize**. In that case, go back to your **azuredeploy.json** file and update the default value of the **vmSkuSize** parameter. We recommend that you try **Standard_DS1_v2**.
+ > [!NOTE]
+ > You might run into errors around the selected **vmSkuSize**. In that case, go back to your **azuredeploy.json** file and update the default value of the **vmSkuSize** parameter. We recommend that you try **Standard_DS1_v2**.
+## Chart your metrics
-## Chart your metrics
+1. Sign in to the Azure portal.
-1. Sign in to the Azure portal.
+1. In the left-hand menu, select **Monitor**.
-1. In the left-hand menu, select **Monitor**.
-
-1. On the **Monitor** page, select **Metrics**.
+1. On the **Monitor** page, select **Metrics**.
:::image source="media/collect-custom-metrics-guestos-resource-manager-vmss/metrics.png" alt-text="A screenshot showing the metrics menu item on the Azure Monitor menu page." lightbox="media/collect-custom-metrics-guestos-resource-manager-vmss/metrics.png":::
-1. Change the aggregation period to **Last 30 minutes**.
+1. Change the aggregation period to **Last 30 minutes**.
+
+1. In the resource drop-down menu, select the virtual machine scale set you created.
-1. In the resource drop-down menu, select the virtual machine scale set you created.
+1. In the namespaces drop-down menu, select **Virtual Machine Guest**.
-1. In the namespaces drop-down menu, select **Virtual Machine Guest**.
+1. In the metrics drop-down menu, select **Memory\%Committed Bytes in Use**.
-1. In the metrics drop-down menu, select **Memory\%Committed Bytes in Use**.
:::image source="media/collect-custom-metrics-guestos-resource-manager-vmss/create-metrics-chart.png" alt-text="A screenshot showing the selection of namespace metric and aggregation for a metrics chart." lightbox="media/collect-custom-metrics-guestos-resource-manager-vmss/create-metrics-chart.png"::: You can then also choose to use the dimensions on this metric to chart it for a particular VM or to plot each VM in the scale set. - ## Next steps - Learn more about [custom metrics](./metrics-custom-overview.md).
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Last updated 08/01/2023
This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor. > [!NOTE]
-> InfluxData Telegraf is an open source agent and not officially supported by Azure Monitor. For issues wuth the Telegraf connector, please refer to the Telegraf GitHub page here: [InfluxData](https://github.com/influxdata/telegraf)
+> InfluxData Telegraf is an open source agent and not officially supported by Azure Monitor. For issues with the Telegraf connector, please refer to the Telegraf GitHub page here: [InfluxData](https://github.com/influxdata/telegraf)
## InfluxData Telegraf agent
azure-monitor Prometheus Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-workbooks.md
Create dashboards powered by Azure Monitor managed service for Prometheus using [Azure Workbooks](../visualize/workbooks-overview.md). This article introduces workbooks for Azure Monitor workspaces and shows you how to query Prometheus metrics using Azure workbooks and the Prometheus query language (PromQL).
-## Pre-requisites
-To query Prometheus metrics from an Azure Monitor workspace you need the following:
-- An Azure Monitor workspace. To create an Azure Monitor workspace see [Create an Azure Monitor Workspace](./azure-monitor-workspace-overview.md?tabs=azure-portal.md).
+## Prerequisites
+To query Prometheus metrics from an Azure Monitor workspace, you need the following:
+- An Azure Monitor workspace. To create an Azure Monitor workspace, see [Create an Azure Monitor Workspace](./azure-monitor-workspace-overview.md?tabs=azure-portal).
- Your Azure Monitor workspace must be [collecting Prometheus metrics](./prometheus-metrics-enable.md) from an AKS cluster. - The user must be assigned role that can perform the **microsoft.monitor/accounts/read** operation on the Azure Monitor workspace.
A workbook has the following input options:
## Create a Prometheus workbook
-Workbooks supports many visualizations and Azure integrations. For more information about Azure Workbooks, see [Creating an Azure Workbook](../visualize/workbooks-create-workbook.md).
+Workbooks support many visualizations and Azure integrations. For more information about Azure Workbooks, see [Creating an Azure Workbook](../visualize/workbooks-create-workbook.md).
-1. From your Azure Monitor workspace select **Workbooks**.
+1. From your Azure Monitor workspace, select **Workbooks**.
1. Select **New**. 1. In the new workbook, select **Add**, and select **Add query** from the dropdown.
Workbooks supports many visualizations and Azure integrations. For more informat
## Troubleshooting
-If your workbook query does not return data:
+If you receive a message indicating that "You currently do not have any Prometheus data ingested to this Azure Monitor workspace":
-- Check that you have sufficient permissions to perform **microsoft.monitor/accounts/read** assigned through Access Control (IAM) in your Azure Monitor workspace - Verify that you have turned on metrics collection in the Monitored clusters blade of your Azure Monitor workspace.
+If your workbook query does not return data and shows the message "You do not have query access":
+
+- Check that you have sufficient permissions to perform **microsoft.monitor/accounts/read** assigned through Access Control (IAM) in your Azure Monitor workspace.
+- Confirm if your Networking settings support query access. You may need to enable private access through your private endpoint or change settings to allow public access.
+- If you have an ad blocker enabled in your browser, you may need to pause or disable it and refresh the workbook in order to view data.
+ ## Next steps * [Collect Prometheus metrics from AKS cluster](./prometheus-metrics-enable.md)
azure-monitor Ingest Logs Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingest-logs-event-hub.md
# customer-intent: As a DevOps engineer, I want to ingest data from an event hub into a Log Analytics workspace so that I can monitor logs that I send to Azure Event Hubs. -
-# Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs (Preview)
+# Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs (Public Preview)
[Azure Event Hubs](../../event-hubs/event-hubs-about.md) is a big data streaming platform that collects events from multiple sources to be ingested by Azure and external services. This article explains how to ingest data directly from an event hub into a Log Analytics workspace.
To ingest data into a [supported Azure table](../logs/logs-ingestion-api-overvie
To: `"outputStream": "[concat('Microsoft-', parameters('tableName'))]"` 1. In `transformKql`, [define a transformation](../essentials/data-collection-transformations-structure.md#transformation-structure) that sends the ingested data into the target columns in the destination Azure table.+ ## Grant the event hub permission to the data collection rule With [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), you can give any event hub, or Event Hubs namespace, permission to send events to the data collection rule and data collection endpoint you created. When you grant the permissions to the Event Hubs namespace, all event hubs within the namespace inherit the permissions.
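Conceptually, the `transformKql` transformation projects each incoming event's fields into the destination table's columns. A rough Python analogue of that projection, using hypothetical field and column names rather than real KQL, looks like:

```python
import json

# Hypothetical incoming event from the event hub (field names are illustrative)
event = {
    "TimeGenerated": "2023-09-28T12:00:00Z",
    "RawData": '{"level": "info", "msg": "started"}',
}

def transform(event):
    # Roughly mirrors a transformKql such as:
    #   source | extend d = parse_json(RawData)
    #          | project TimeGenerated, Level = tostring(d.level), Message = tostring(d.msg)
    d = json.loads(event["RawData"])
    return {
        "TimeGenerated": event["TimeGenerated"],
        "Level": d["level"],
        "Message": d["msg"],
    }

row = transform(event)
print(row)
```

In the real data collection rule, this projection is expressed in KQL and runs in the ingestion pipeline; the column names must match the destination table's schema.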
With [managed identity](../../active-directory/managed-identities-azure-resource
:::image type="content" source="media/ingest-logs-event-hub/event-hub-data-receiver-role-assignment.png" lightbox="media/ingest-logs-event-hub/event-hub-data-receiver-role-assignment.png" alt-text="Screenshot that shows the Add Role Assignment screen for the event hub with the Azure Event Hubs Data Receiver role highlighted.":::
-3. Select **User, group, or service principal** for **Assign access to** and click **Select members**. Select your DCR and click **Select**.
-
- :::image type="content" source="media/ingest-logs-event-hub/event-hub-add-role-assignment-select-member.png" lightbox="media/ingest-logs-event-hub/event-hub-add-role-assignment-select-member.png" alt-text="Screenshot that shows the Members tab of the Add Role Assignment screen.":::
+1. Select **Managed identity** for **Assign access to** and click **Select members**. Select **Data collection rule**, search for your DCR by name, and click **Select**.
+[ ![Screenshot of how to assign access to managed identity.](media/ingest-logs-event-hub/assign-access-to-managed-identity.png) ](media/ingest-logs-event-hub/assign-access-to-managed-identity.png#lightbox)
4. Select **Review + assign** and verify the details before saving your role assignment. :::image type="content" source="media/ingest-logs-event-hub/event-hub-add-role-assignment-save.png" lightbox="media/ingest-logs-event-hub/event-hub-add-role-assignment-save.png" alt-text="Screenshot that shows the Review and Assign tab of the Add Role Assignment screen."::: - ## Associate the data collection rule with the event hub The final step is to associate the data collection rule to the event hub from which you want to collect events.
To create a data collection rule association in the Azure portal:
1. Select **Review + create** and then **Create** when you review the details. - ## Check your destination table for ingested events Now that you've associated the data collection rule with your event hub, Azure Monitor Logs will ingest all existing events whose [retention period](/azure/event-hubs/event-hubs-features#event-retention) hasn't expired and all new events.
To check your destination table for ingested events:
``` You should see events from your event hub.
-`
- :::image type="content" source="media/ingest-logs-event-hub/log-analytics-query-results-with-events.png" lightbox="media/ingest-logs-event-hub/log-analytics-query-results-with-events.png" alt-text="Screenshot showing the results of a simple query on a custom table. The results consist of events ingested from an event hub.":::
## Clean up resources In this tutorial, you created the following resources:
Learn more about to:
- [Create a custom table](../logs/create-custom-table.md#create-a-custom-table). - [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint). - [Update an existing data collection rule](../essentials/data-collection-rule-edit.md).++
azure-monitor Resource Manager Vminsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/resource-manager-vminsights.md
Previously updated : 06/13/2022 Last updated : 09/28/2023 # Resource Manager template samples for VM insights
azure-monitor Scom Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/scom-managed-instance-overview.md
Previously updated : 12/20/2022 Last updated : 09/28/2023
azure-monitor Tutorial Monitor Vm Alert Recommended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert-recommended.md
description: Enable set of recommended metric alert rules for an Azure virtual m
Previously updated : 12/03/2022 Last updated : 09/28/2023
azure-monitor Tutorial Monitor Vm Enable Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-enable-insights.md
description: Enable monitoring with VM insights in Azure Monitor to monitor an A
Previously updated : 12/03/2022 Last updated : 09/28/2023
azure-monitor Tutorial Monitor Vm Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md
description: Create a data collection rule to collect guest logs and metrics fro
Previously updated : 12/03/2022 Last updated : 09/28/2023
azure-monitor Vminsights Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-change-analysis.md
description: VM insights integration with Application Change Analysis integratio
Previously updated : 06/08/2022 Last updated : 09/28/2023
azure-monitor Vminsights Configure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-configure-workspace.md
Previously updated : 06/22/2022 Last updated : 09/28/2023 # Configure a Log Analytics workspace for VM insights
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
description: This article describes how to upgrade the VM insights Dependency ag
Previously updated : 04/16/2020 Last updated : 09/28/2023
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
description: This article describes how you enable VM insights for a hybrid clou
Previously updated : 06/08/2022 Last updated : 09/28/2023
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
Previously updated : 06/24/2022 Last updated : 09/28/2023
azure-monitor Vminsights Enable Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-portal.md
Previously updated : 12/15/2022 Last updated : 09/28/2023
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
Previously updated : 06/08/2022 Last updated : 09/28/2023 # Enable VM insights by using PowerShell
azure-monitor Vminsights Enable Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-resource-manager.md
Previously updated : 06/08/2022 Last updated : 09/28/2023 # Enable VM insights using Resource Manager templates
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
description: VM insights solution collects metrics and log data to and this arti
Previously updated : 06/08/2022 Last updated : 09/28/2023 # How to query logs from VM insights
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
description: This article shows how to use the VM insights Map feature. It disco
Previously updated : 06/08/2022 Last updated : 09/28/2023
azure-monitor Vminsights Migrate From Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-from-service-map.md
description: Migrate from Service Map to Azure Monitor VM insights to monitor th
Previously updated : 09/13/2022 Last updated : 09/28/2023
azure-monitor Vminsights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-optout.md
description: This article describes how to stop monitoring your virtual machines
Previously updated : 06/08/2022 Last updated : 09/28/2023
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
description: Overview of VM insights, which monitors the health and performance
Previously updated : 06/08/2023 Last updated : 09/28/2023 # Overview of VM insights
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
description: This article discusses the VM insights Performance feature that dis
Previously updated : 06/08/2022 Last updated : 09/28/2023 # Chart performance with VM insights
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
description: Troubleshooting information for VM insights installation.
Previously updated : 06/08/2022 Last updated : 09/28/2023
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-workbooks.md
description: Simplify complex reporting with predefined and custom parameterized
Previously updated : 05/27/2022 Last updated : 09/28/2023
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
na Previously updated : 09/07/2023 Last updated : 09/28/2023 # Delegate a subnet to Azure NetApp Files
You must delegate a subnet to Azure NetApp Files. When you create a volume, you
## Considerations
-* The wizard for creating a new subnet defaults to a /24 network mask, which provides for 251 available IP addresses. You should consider a larger subnet (for example, /23 network mask) in scenarios such as SAP HANA where many volumes and storage endpoints are anticipated. You can also stay with the default network mask /24 as proposed by the wizard if you don't need to reserve many client or VM IP addresses in your Azure Virtual Network (VNet). Note that the network mask of the delegated network cannot be changed after the initial creation.
+* When creating the delegated subnet for Azure NetApp Files, the size of the subnet matters. A subnet with a /28 network mask provides only 11 usable IP addresses, which might be insufficient for certain use cases; in that case, plan for a larger delegated subnet. For instance, a /26 network mask provides 59 usable IP addresses and a /24 network mask provides 251. Consider even larger subnets (for example, a /23 network mask) in scenarios where application volume group for SAP HANA is used and many volumes and storage endpoints are anticipated. The network mask of the delegated subnet can't be changed after the initial creation, so plan your VNet and delegated subnet sizes carefully.
* In each VNet, only one subnet can be delegated to Azure NetApp Files.
- Azure enables you to create multiple delegated subnets in a VNet. However, any attempts to create a new volume will fail if you use more than one delegated subnet.
+ Azure enables you to create multiple delegated subnets in a VNet. However, any attempt to create a new volume fails if you use more than one delegated subnet.
You can have only a single delegated subnet in a VNet. A NetApp account can deploy volumes into multiple VNets, each having its own delegated subnet.
-* You cannot designate a network security group or service endpoint in the delegated subnet. Doing so causes the subnet delegation to fail.
-* Access to a volume from a globally peered virtual network is not currently supported using Basic networks features. Global VNet peering is supported with Standard network features. See [Supported network topologies](azure-netapp-files-network-topologies.md#supported-network-topologies) for more information.
+* You can't designate a network security group or service endpoint in the delegated subnet. Doing so causes the subnet delegation to fail.
+* Access to a volume from a globally peered virtual network isn't currently supported using Basic network features. Global VNet peering is supported with Standard network features. For more information, see [Supported network topologies](azure-netapp-files-network-topologies.md#supported-network-topologies).
* For Azure NetApp Files support of [User-defined routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes) (UDRs) and Network security groups (NSGs), see [Constraints in Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#constraints). To establish routing or access control ***to*** the Azure NetApp Files delegated subnet, you can apply UDRs and NSGs to other subnets, even within the same VNet as the subnet delegated to Azure NetApp Files.
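The subnet-sizing figures above follow from Azure reserving five IP addresses in every subnet (the network address, the broadcast address, and three addresses for internal Azure services). A quick sketch of the arithmetic (the helper name is illustrative, not part of any Azure SDK):

```python
def usable_ips(prefix_length: int) -> int:
    """Usable IP addresses in an Azure subnet of the given prefix length.

    Azure reserves 5 addresses per subnet: the network address, the
    broadcast address, and 3 addresses for internal Azure services.
    """
    return 2 ** (32 - prefix_length) - 5

# /28 -> 11, /26 -> 59, /24 -> 251, /23 -> 507 usable addresses
for prefix in (28, 26, 24, 23):
    print(f"/{prefix}: {usable_ips(prefix)} usable addresses")
```

This makes it easy to check whether a planned delegated subnet leaves enough headroom for the volumes and storage endpoints you anticipate.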
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* UAE North * UK South * UK West
+* US Gov Texas (public preview)
* US Gov Virginia (public preview) * West Europe * West US
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Norway West * Qatar Central * South Africa North
+* South Central US
* South India * Southeast Asia * Sweden Central
certification How To Test Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md
az iot product test task create --type QueueTestRun --test-id [YourTestId] --wai
Example test run output ```json
- "validationTasks": [
- {
- "componentName": "Default component",
- "endTime": "2020-08-25T05:18:49.5224772+00:00",
- "interfaceId": "dtmi:com:example:TemperatureController;1",
- "logs": [
- {
- "message": "Waiting for telemetry from the device",
- "time": "2020-08-25T05:18:37.3862586+00:00"
- },
- {
- "message": "Validating PnP properties",
- "time": "2020-08-25T05:18:37.3875168+00:00"
- },
- {
- "message": "Validating PnP commands",
- "time": "2020-08-25T05:18:37.3894343+00:00"
- },
- {
- "message": "{\"propertyName\":\"serialNumber\",\"expectedSchemaType\":null,\"actualSchemaType\":null,\"message\":\"Property is successfully validated\",\"passed\":true,\"time\":\"2020-08-25T05:18:37.4205985+00:00\"}",
- "time": "2020-08-25T05:18:37.4205985+00:00"
- },
- {
- "message": "PnP interface properties validation passed",
- "time": "2020-08-25T05:18:37.4206964+00:00"
- },
+"validationTasks": [
+ {
+ "componentName": "Default component",
+ "endTime": "2020-08-25T05:18:49.5224772+00:00",
+ "interfaceId": "dtmi:com:example:TemperatureController;1",
+ "logs": [
+ {
+ "message": "Waiting for telemetry from the device",
+ "time": "2020-08-25T05:18:37.3862586+00:00"
+ },
+ {
+ "message": "Validating PnP properties",
+ "time": "2020-08-25T05:18:37.3875168+00:00"
+ },
+ {
+ "message": "Validating PnP commands",
+ "time": "2020-08-25T05:18:37.3894343+00:00"
+ },
+ {
+ "message": "{\"propertyName\":\"serialNumber\",\"expectedSchemaType\":null,\"actualSchemaType\":null,\"message\":\"Property is successfully validated\",\"passed\":true,\"time\":\"2020-08-25T05:18:37.4205985+00:00\"}",
+ "time": "2020-08-25T05:18:37.4205985+00:00"
+ },
+ {
+ "message": "PnP interface properties validation passed",
+ "time": "2020-08-25T05:18:37.4206964+00:00"
+ },
+ ...
+ ]
+ }
+]
``` ## Test using the Azure Certified Device portal
cognitive-services Default Insights Tag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/default-insights-tag.md
The default insights tag is the one with the `displayName` field set to an empty
The PagesIncluding insight provides a list of webpages that include this image. It's actually a list of `Image` objects, and the `hostPageUrl` field contains the URL to the webpage that includes the image. For example usage, see [PagesIncluding insight example](./bing-insights-usage.md#pagesincluding-insight-example). ```json
+{
+ "_type" : "ImageModuleAction",
+ "actionType" : "PagesIncluding",
+ "data" : {
+ "value" : [
{
- "_type" : "ImageModuleAction",
- "actionType" : "PagesIncluding",
- "data" : {
- "value" : [
- {
- "webSearchUrl" : "https://www.bing.com\/images\/search?",
- "name" : "Today's smoking hot country",
- "thumbnailUrl" : "https:\/\/tse2.mm.bing.net\/th?id=OIP...",
- "datePublished" : "2017-09-20T12:00:00.0000000Z",
- "contentUrl" : "http:\/\/contoso.com\/wordstuff",
- "hostPageUrl" : "http:\/\/contoso.com\/2017\/09\/20\/car",
- "contentSize" : "122540 B",
- "encodingFormat" : "jpeg",
- "hostPageDisplayUrl" : "contoso.com\/2017\/09\/20\/car",
- "width" : 894,
- "height" : 1200,
- "thumbnail" : {
- "width" : 474,
- "height" : 636
- },
- "imageInsightsToken" : "ccid_CO5GEthj*mid_5323B1",
- "insightsMetadata" : {
- "pagesIncludingCount" : 12,
- "availableSizesCount" : 7
- },
- "imageId" : "5323B1900FB9087B6B45D176D234E1F2F23CD3A5",
- "accentColor" : "55585B"
- }
- ]
- }
+ "webSearchUrl" : "https://www.bing.com\/images\/search?",
+ "name" : "Today's smoking hot country",
+ "thumbnailUrl" : "https:\/\/tse2.mm.bing.net\/th?id=OIP...",
+ "datePublished" : "2017-09-20T12:00:00.0000000Z",
+ "contentUrl" : "http:\/\/contoso.com\/wordstuff",
+ "hostPageUrl" : "http:\/\/contoso.com\/2017\/09\/20\/car",
+ "contentSize" : "122540 B",
+ "encodingFormat" : "jpeg",
+ "hostPageDisplayUrl" : "contoso.com\/2017\/09\/20\/car",
+ "width" : 894,
+ "height" : 1200,
+ "thumbnail" : {
+ "width" : 474,
+ "height" : 636
+ },
+ "imageInsightsToken" : "ccid_CO5GEthj*mid_5323B1",
+ "insightsMetadata" : {
+ "pagesIncludingCount" : 12,
+ "availableSizesCount" : 7
+ },
+ "imageId" : "5323B1900FB9087B6B45D176D234E1F2F23CD3A5",
+ "accentColor" : "55585B"
}
+ ]
+ }
+}
``` ## ShoppingSources insight
The PagesIncluding insight provides a list of webpages that include this image.
The ShoppingSources insight provides a list of websites where the user can buy the item shown in the image. The list of offers includes the URL of the webpage where the user can buy the item, the price of the item, and rating or review details. For example usage, see [ShoppingSources example](./bing-insights-usage.md#shoppingsources-insight-example). ```json
+{
+ "_type" : "ImageShoppingSourcesAction",
+ "actionType" : "ShoppingSources",
+ "data" : {
+ "offers" : [
{
- "_type" : "ImageShoppingSourcesAction",
- "actionType" : "ShoppingSources",
- "data" : {
- "offers" : [
- {
- "name" : "Apple Pie",
- "url" : "https:\/\/contoso.com\/product_p\/l10.htm",
- "description" : "A taste of the crust, apple, and pie filling...",
- "seller" : {
- "name" : "Contoso"
- },
- "price" : 3.99,
- "priceCurrency" : "USD",
- "aggregateRating" : {
- "ratingValue" : 5
- },
- "lastUpdated" : "2018-04-16T00:00:00.0000000"
- }
- ]
- }
+ "name" : "Apple Pie",
+ "url" : "https:\/\/contoso.com\/product_p\/l10.htm",
+ "description" : "A taste of the crust, apple, and pie filling...",
+ "seller" : {
+ "name" : "Contoso"
+ },
+ "price" : 3.99,
+ "priceCurrency" : "USD",
+ "aggregateRating" : {
+ "ratingValue" : 5
+ },
+ "lastUpdated" : "2018-04-16T00:00:00.0000000"
}
+ ]
+ }
+}
``` ## MoreSizes insight
The ShoppingSources insight provides a list of websites where the user can buy t
The MoreSizes insight identifies the number of sizes (larger or smaller) of the image that Bing found on the Internet (see the `availableSizesCount` field): ```json
- {
- "image" : {
- "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=detai...",
- "name" : "Making Apple Pie",
- "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=OIP....",
- "datePublished" : "2013-06-21T12:00:00.0000000Z",
- "contentUrl" : "http:\/\/contoso.com\/content\/uploads\/2013\/06\/apple-pie.jpg",
- "hostPageUrl" : "http:\/\/contoso.com\/2013\/06\/21\/making-apple-pie\/",
- "contentSize" : "134847 B",
- "encodingFormat" : "jpeg",
- "hostPageDisplayUrl" : "contoso.com\/2013\/06\/21\/making-apple-pie",
- "width" : 1050,
- "height" : 765,
- "thumbnail" : {
- "width" : 474,
- "height" : 345
- },
- "imageInsightsToken" : "ccid_tmaGQ2eU*mid_D12339146CF...",
- "insightsMetadata" : {
- "recipeSourcesCount" : 6,
- "pagesIncludingCount" : 103,
- "availableSizesCount" : 28
- },
- "imageId" : "D12339146CFEDF3D409CC7A66D2C98D0D71904D4",
- "accentColor" : "3A0B01"
- },
- "actionType" : "MoreSizes"
- },
+{
+ "image" : {
+ "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=detai...",
+ "name" : "Making Apple Pie",
+ "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=OIP....",
+ "datePublished" : "2013-06-21T12:00:00.0000000Z",
+ "contentUrl" : "http:\/\/contoso.com\/content\/uploads\/2013\/06\/apple-pie.jpg",
+ "hostPageUrl" : "http:\/\/contoso.com\/2013\/06\/21\/making-apple-pie\/",
+ "contentSize" : "134847 B",
+ "encodingFormat" : "jpeg",
+ "hostPageDisplayUrl" : "contoso.com\/2013\/06\/21\/making-apple-pie",
+ "width" : 1050,
+ "height" : 765,
+ "thumbnail" : {
+ "width" : 474,
+ "height" : 345
+ },
+ "imageInsightsToken" : "ccid_tmaGQ2eU*mid_D12339146CF...",
+ "insightsMetadata" : {
+ "recipeSourcesCount" : 6,
+ "pagesIncludingCount" : 103,
+ "availableSizesCount" : 28
+ },
+ "imageId" : "D12339146CFEDF3D409CC7A66D2C98D0D71904D4",
+ "accentColor" : "3A0B01"
+ },
+ "actionType" : "MoreSizes"
+},
``` ## VisualSearch insight
The MoreSizes insight identifies the number of sizes (larger or smaller) of the
The VisualSearch insight provides a list of images that are visually similar to the original image (contains content that's similar to the content shown in the original image). For example usage, see [VisualSearch insight example](./bing-insights-usage.md#visualsearch-insight-example). ```json
+{
+ "_type" : "ImageModuleAction",
+ "actionType" : "VisualSearch",
+ "data" : {
+ "value" : [
{
- "_type" : "ImageModuleAction",
- "actionType" : "VisualSearch",
- "data" : {
- "value" : [
- {
- "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=...",
- "name" : "An apple pie...",
- "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=OIP.z...",
- "datePublished" : "2017-03-18T00:17:00.0000000Z",
- "contentUrl" : "http:\/\/contoso.net\/images\/8\/8a\/an_apple_pie.png",
- "hostPageUrl" : "http:\/\/contoso.com\/wiki\/an_apple_pie.png",
- "contentSize" : "87930 B",
- "encodingFormat" : "png",
- "hostPageDisplayUrl" : "contoso.com\/wiki\/an_apple_pie.png",
- "width" : 263,
- "height" : 192,
- "thumbnail" : {
- "width" : 474,
- "height" : 346
- },
- "imageInsightsToken" : "ccid_zhRxfGkI*mid_1DCBA7AA6D231...",
- "insightsMetadata" : {
- "recipeSourcesCount" : 6,
- "pagesIncludingCount" : 103,
- "availableSizesCount" : 28
- },
- "imageId" : "1DCBA7AA6D23147F9DD06D47DB3A38EB25389",
- "accentColor" : "3E0D01"
- }
- ]
- }
+ "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=...",
+ "name" : "An apple pie...",
+ "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=OIP.z...",
+ "datePublished" : "2017-03-18T00:17:00.0000000Z",
+ "contentUrl" : "http:\/\/contoso.net\/images\/8\/8a\/an_apple_pie.png",
+ "hostPageUrl" : "http:\/\/contoso.com\/wiki\/an_apple_pie.png",
+ "contentSize" : "87930 B",
+ "encodingFormat" : "png",
+ "hostPageDisplayUrl" : "contoso.com\/wiki\/an_apple_pie.png",
+ "width" : 263,
+ "height" : 192,
+ "thumbnail" : {
+ "width" : 474,
+ "height" : 346
+ },
+ "imageInsightsToken" : "ccid_zhRxfGkI*mid_1DCBA7AA6D231...",
+ "insightsMetadata" : {
+ "recipeSourcesCount" : 6,
+ "pagesIncludingCount" : 103,
+ "availableSizesCount" : 28
+ },
+ "imageId" : "1DCBA7AA6D23147F9DD06D47DB3A38EB25389",
+ "accentColor" : "3E0D01"
}
+ ]
+ }
+}
``` ## Recipes insight
The VisualSearch insight provides a list of images that are visually similar to
The Recipes insight provides a list of webpages that include a recipe for making the food shown in the image. For example usage, see [Recipes insight example](./bing-insights-usage.md#recipes-insight-example). ```json
+{
+ "_type" : "ImageRecipesAction",
+ "actionType" : "Recipes",
+ "data" : {
+ "value" : [
{
- "_type" : "ImageRecipesAction",
- "actionType" : "Recipes",
- "data" : {
- "value" : [
- {
- "name" : "Granny's Apple Pie",
- "url" : "http:\/\/contoso.com\/recipes\/appetizer\/apple-pie.html",
- "description" : "I love Granny's apple pie. Sooooo delicious...",
- "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=A63002cd9",
- "creator" : {
- "_type" : "Person",
- "name" : "Charlene Whitney"
- },
- "aggregateRating" : {
- "text" : "5",
- "ratingValue" : 5,
- "bestRating" : 5,
- "reviewCount" : 1
- },
- "cookTime" : "PT45M",
- "prepTime" : "PT1H",
- "totalTime" : "PT1H45M"
- }
- ]
- }
+ "name" : "Granny's Apple Pie",
+ "url" : "http:\/\/contoso.com\/recipes\/appetizer\/apple-pie.html",
+ "description" : "I love Granny's apple pie. Sooooo delicious...",
+ "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=A63002cd9",
+ "creator" : {
+ "_type" : "Person",
+ "name" : "Charlene Whitney"
+ },
+ "aggregateRating" : {
+ "text" : "5",
+ "ratingValue" : 5,
+ "bestRating" : 5,
+ "reviewCount" : 1
+ },
+ "cookTime" : "PT45M",
+ "prepTime" : "PT1H",
+ "totalTime" : "PT1H45M"
}
+ ]
+ }
+}
```
The Recipes insight provides a list of webpages that include a recipe for making
The ImageById insight provides an `Image` object of the image that you requested insights for: ```json
- {
- "image" : {
- "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=deta...",
- "name" : "Making Apple Pie",
- "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=OIP...",
- "datePublished" : "2013-06-21T12:00:00.0000000Z",
- "contentUrl" : "http:\/\/contoso.com\/content\/uploads\/2013\/06\/apple-pie.jpg",
- "hostPageUrl" : "http:\/\/contoso.com\/2013\/06\/21\/making-apple-pie\/",
- "contentSize" : "134847 B",
- "encodingFormat" : "jpeg",
- "hostPageDisplayUrl" : "contoso.com\/2013\/06\/21\/making-apple-pie",
- "width" : 1050,
- "height" : 765,
- "thumbnail" : {
- "width" : 474,
- "height" : 345
- },
- "imageInsightsToken" : "ccid_tmaGQ2eU*mid_D12339146CFE...",
- "insightsMetadata" : {
- "recipeSourcesCount" : 6,
- "pagesIncludingCount" : 103,
- "availableSizesCount" : 28
- },
- "imageId" : "D12339146CFEDF3D409A66D2C98D0D71904D4",
- "accentColor" : "3A0B01"
- },
- "actionType" : "ImageById"
- },
+{
+ "image" : {
+ "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=deta...",
+ "name" : "Making Apple Pie",
+ "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=OIP...",
+ "datePublished" : "2013-06-21T12:00:00.0000000Z",
+ "contentUrl" : "http:\/\/contoso.com\/content\/uploads\/2013\/06\/apple-pie.jpg",
+ "hostPageUrl" : "http:\/\/contoso.com\/2013\/06\/21\/making-apple-pie\/",
+ "contentSize" : "134847 B",
+ "encodingFormat" : "jpeg",
+ "hostPageDisplayUrl" : "contoso.com\/2013\/06\/21\/making-apple-pie",
+ "width" : 1050,
+ "height" : 765,
+ "thumbnail" : {
+ "width" : 474,
+ "height" : 345
+ },
+ "imageInsightsToken" : "ccid_tmaGQ2eU*mid_D12339146CFE...",
+ "insightsMetadata" : {
+ "recipeSourcesCount" : 6,
+ "pagesIncludingCount" : 103,
+ "availableSizesCount" : 28
+ },
+ "imageId" : "D12339146CFEDF3D409A66D2C98D0D71904D4",
+ "accentColor" : "3A0B01"
+ },
+ "actionType" : "ImageById"
+},
``` ## ProductVisualSearch insight
The ImageById insight provides an `Image` object of the image that you requested
The ProductVisualSearch insight provides a list of images of products that are visually similar to products shown in the original image. The `insightsMetadata` field may contain information about offers where you can buy the product and the price of the product. ```json
+{
+ "_type" : "ImageModuleAction",
+ "actionType" : "ProductVisualSearch",
+ "data" : {
+ "value" : [
{
- "_type" : "ImageModuleAction",
- "actionType" : "ProductVisualSearch",
- "data" : {
- "value" : [
- {
- "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=detail...",
- "name" : "Contoso 4-Piece Kitchen Package...",
- "thumbnailUrl" : "https:\/\/tse3.mm.bing.net\/th?id=OIP.l9hzaabu-RJd...",
- "datePublished" : "2017-07-16T04:28:00.0000000Z",
- "contentUrl" : "https:\/\/www.contoso.com\/assets\/media\/images\/prod...",
- "hostPageUrl" : "https:\/\/www.contoso.com\/4-piece-kitchen-package...",
- "contentSize" : "13594 B",
- "encodingFormat" : "jpeg",
- "hostPageDisplayUrl" : "https:\/\/www.contoso.com\/4-piece-kitchen-package...",
- "width" : 450,
- "height" : 332,
- "thumbnail" : {
- "width" : 474,
- "height" : 349
- },
- "imageInsightsToken" : "ccid_l9hzaabu*mid_70A8B616355D681DB9A5A...",
- "insightsMetadata" : {
- "shoppingSourcesCount" : 1,
- "recipeSourcesCount" : 0,
- "aggregateOffer" : {
- "name":"4-Piece Kitchen Package with...",
- "priceCurrency":"USD",
- "lowPrice":2756,
- "offers" : [
- {
- "name" : "4-Piece Kitchen Package with...",
- "url" : "https:\/\/www.fabrikam.com\/1234.html?ref=bing",
- "description" : "This 36 Frenchdoor refrigerator by...",
- "seller" : {
- "name" : "Fabrikam",
- "image" : {
- "url" : "https:\/\/tse1.mm.bing.net\/th?id=A818f811..."
- }
- },
- "price" : 2756,
- "priceCurrency" : "USD",
- "availability" : "InStock",
- "lastUpdated" : "2018-02-20T00:00:00.0000000"
- }
- ],
- "offerCount":1
+ "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?view=detail...",
+ "name" : "Contoso 4-Piece Kitchen Package...",
+ "thumbnailUrl" : "https:\/\/tse3.mm.bing.net\/th?id=OIP.l9hzaabu-RJd...",
+ "datePublished" : "2017-07-16T04:28:00.0000000Z",
+ "contentUrl" : "https:\/\/www.contoso.com\/assets\/media\/images\/prod...",
+ "hostPageUrl" : "https:\/\/www.contoso.com\/4-piece-kitchen-package...",
+ "contentSize" : "13594 B",
+ "encodingFormat" : "jpeg",
+ "hostPageDisplayUrl" : "https:\/\/www.contoso.com\/4-piece-kitchen-package...",
+ "width" : 450,
+ "height" : 332,
+ "thumbnail" : {
+ "width" : 474,
+ "height" : 349
+ },
+ "imageInsightsToken" : "ccid_l9hzaabu*mid_70A8B616355D681DB9A5A...",
+ "insightsMetadata" : {
+ "shoppingSourcesCount" : 1,
+ "recipeSourcesCount" : 0,
+ "aggregateOffer" : {
+ "name":"4-Piece Kitchen Package with...",
+ "priceCurrency":"USD",
+ "lowPrice":2756,
+ "offers" : [
+ {
+ "name" : "4-Piece Kitchen Package with...",
+ "url" : "https:\/\/www.fabrikam.com\/1234.html?ref=bing",
+ "description" : "This 36 Frenchdoor refrigerator by...",
+ "seller" : {
+ "name" : "Fabrikam",
+ "image" : {
+ "url" : "https:\/\/tse1.mm.bing.net\/th?id=A818f811..."
+ }
},
- "pagesIncludingCount" : 4,
- "availableSizesCount" : 2
- },
- "imageId" : "70A8B616355D681DA5980A8D0514BCC995A3",
- "accentColor" : "60646B"
- }
- ]
- }
+ "price" : 2756,
+ "priceCurrency" : "USD",
+ "availability" : "InStock",
+ "lastUpdated" : "2018-02-20T00:00:00.0000000"
+ }
+ ],
+ "offerCount":1
+ },
+ "pagesIncludingCount" : 4,
+ "availableSizesCount" : 2
+ },
+ "imageId" : "70A8B616355D681DA5980A8D0514BCC995A3",
+ "accentColor" : "60646B"
}
+ ]
+ }
+}
``` ## RelatedSearches insight
The ProductVisualSearch insight provides a list of images of products that are v
The RelatedSearches insight provides a list of related searches made by others (based on other users' search terms). For example usage, see [RelatedSearches insight example](./bing-insights-usage.md#relatedsearches-insight-example). ```json
+{
+ "_type" : "ImageRelatedSearchesAction",
+ "actionType" : "RelatedSearches",
+ "data" : {
+ "value" : [
{
- "_type" : "ImageRelatedSearchesAction",
- "actionType" : "RelatedSearches",
- "data" : {
- "value" : [
- {
- "text" : "Homemade Apple Pies Recipes",
- "displayText" : "Homemade Apple Pies Recipes",
- "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?q=Homemade...",
- "thumbnail" : {
- "url" : "https:\/\/tse1.mm.bing.net\/th?q=Homemade+Apple+Pies"
- }
- }
- ]
+ "text" : "Homemade Apple Pies Recipes",
+ "displayText" : "Homemade Apple Pies Recipes",
+ "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?q=Homemade...",
+ "thumbnail" : {
+ "url" : "https:\/\/tse1.mm.bing.net\/th?q=Homemade+Apple+Pies"
} }
+ ]
+ }
+}
``` ## DocumentLevelSuggestions insight
The RelatedSearches insight provides a list of related searches made by others (
The DocumentLevelSuggestions insight provides a list of suggested search terms based on the contents of the image: ```json
+{
+ "_type" : "ImageRelatedSearchesAction",
+ "actionType" : "DocumentLevelSuggestions",
+ "data" : {
+ "value" : [
{
- "_type" : "ImageRelatedSearchesAction",
- "actionType" : "DocumentLevelSuggestions",
- "data" : {
- "value" : [
- {
- "text" : "American Apple Pie",
- "displayText" : "American Apple Pie",
- "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?q=American",
- "thumbnail" : {
- "url" : "https:\/\/tse3.mm.bing.net\/th?q=American+Apple+Pie."
- }
- }
- ]
+ "text" : "American Apple Pie",
+ "displayText" : "American Apple Pie",
+ "webSearchUrl" : "https:\/\/www.bing.com\/images\/search?q=American",
+ "thumbnail" : {
+ "url" : "https:\/\/tse3.mm.bing.net\/th?q=American+Apple+Pie."
} }
+ ]
+ }
+}
``` ## Next steps
communication-services Enable Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/enable-logging.md
The following are instructions for configuring your Azure Monitor resource to st
These instructions apply to the following Communications Services logs: -- [Call Summary and Call Diagnostic logs](logs/voice-and-video-logs.md)
+- [Call Summary and Call Diagnostic logs](logs/voice-and-video-logs.md)
+- [SMS Diagnostic logs](logs/sms-logs.md)
+ ## Access Diagnostic Settings To access Diagnostic Settings for your Communications Services, start by navigating to your Communications Services home page within Azure portal:
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following list presents the set of features that are currently available in
| | Invite another VoIP participant to join an ongoing group call | ✔️ | ✔️ | ✔️ | ✔️ | | Mid call control | Turn your video on/off | ✔️ | ✔️ | ✔️ | ✔️ | | | Mute/Unmute mic | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Mute other participants |✔️<sup>1</sup> | ❌ | ❌ | ❌ |
| | Switch between cameras | ✔️ | ✔️ | ✔️ | ✔️ | | | Local hold/un-hold | ✔️ | ✔️ | ✔️ | ✔️ | | | Active speaker | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | ✔️ | ✔️ | ✔️ | | | Show if a participant is muted | ✔️ | ✔️ | ✔️ | ✔️ | | | Show the reason why a participant left a call | ✔️ | ✔️ | ✔️ | ✔️ |
-| Screen sharing | Share the entire screen from within the application | ✔️ | ✔️<sup>1</sup>| ✔️<sup>1</sup> | ✔️<sup>1</sup> |
-| | Share a specific application (from the list of running applications) | ✔️ | ✔️<sup>1</sup>| ❌ | ❌ |
-| | Share a web browser tab from the list of open tabs | ✔️ | | | |
+| Screen sharing | Share the entire screen from within the application | ✔️ | ✔️<sup>2</sup> | ✔️<sup>2</sup> | ✔️<sup>2</sup> |
+| | Share a specific application (from the list of running applications) | ✔️ | ✔️<sup>2</sup> | ❌ | ❌ |
+| | Share a web browser tab from the list of open tabs | ✔️ | | | |
| | Share system audio during screen sharing | ❌ | ❌ | ❌ | ❌ | | | Participant can view remote screen share | ✔️ | ✔️ | ✔️ | ✔️ | | Roster | List participants | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ | | | Set camera | ✔️ | ✔️ | ✔️ | ✔️ | | | Get selected camera | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get microphone list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
-| | Set microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
-| | Get selected microphone | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
-| | Get speakers list | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
-| | Set speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
-| | Get selected speaker | ✔️ | ✔️ | ❌ <sup>2</sup> | ❌ <sup>2</sup> |
+| | Get microphone list | ✔️ | ✔️ | ❌ <sup>3</sup> | ❌ <sup>3</sup> |
+| | Set microphone | ✔️ | ✔️ | ❌ <sup>3</sup> | ❌ <sup>3</sup> |
+| | Get selected microphone | ✔️ | ✔️ | ❌ <sup>3</sup> | ❌ <sup>3</sup> |
+| | Get speakers list | ✔️ | ✔️ | ❌ <sup>3</sup> | ❌ <sup>3</sup> |
+| | Set speaker | ✔️ | ✔️ | ❌ <sup>3</sup> | ❌ <sup>3</sup> |
+| | Get selected speaker | ✔️ | ✔️ | ❌ <sup>3</sup> | ❌ <sup>3</sup> |
| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ | ✔️ | | | Set / update scaling mode | ✔️ | ✔️ | ✔️ | ✔️ | | | Render remote video stream | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | Custom background image | ✔️ | ❌ | ❌ | ❌ |
-1. The Share screen capability can be achieved using Raw Media, if you want to learn, **how to add Raw Media**, visit [the quickstart guide](../../quickstarts/voice-video-calling/get-started-raw-media-access.md).
-2. The Calling SDK doesn't have an explicit API, you need to use the OS (android & iOS) API to achieve it.
+1. The capability to mute other participants is currently in public preview.
+2. The screen sharing capability can be achieved using Raw Media. To learn how to add Raw Media, visit [the quickstart guide](../../quickstarts/voice-video-calling/get-started-raw-media-access.md).
+3. The Calling SDK doesn't provide an explicit API for this; use the OS (Android and iOS) APIs to achieve it.
## UI Library
For more information, see the following articles:
- Familiarize yourself with general [call flows](../call-flows.md) - Learn about [call types](../voice-video-calling/about-call-types.md)-- Learn about [call automation API](../call-automation/call-automation.md) that enables you to build server-based calling workflows that can route and control calls with client applications.
+- Learn about [call automation API](../call-automation/call-automation.md) that enables you to build server-based calling workflows that can route and control calls with client applications.
- [Plan your PSTN solution](../telephony/plan-solution.md)
confidential-computing Virtual Machine Solutions Sgx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-sgx.md
Specify one of the following sizes in your ARM template in the VM resource. This
Under **properties**, you also have to specify an image under **storageProfile**. Use *only one* of the following images for your **imageReference**. ```json
- "2019-datacenter-gensecond": {
- "offer": "WindowsServer",
- "publisher": "MicrosoftWindowsServer",
- "sku": "2019-datacenter-gensecond",
- "version": "latest"
- },
- "20_04-lts-gen2": {
- "offer": "0001-com-ubuntu-server-focal",
- "publisher": "Canonical",
- "sku": "20_04-lts-gen2",
- "version": "latest"
- }
- "22_04-lts-gen2": {
- "offer": "0001-com-ubuntu-server-jammy",
- "publisher": "Canonical",
- "sku": "22_04-lts-gen2",
- "version": "latest"
- },
+ "2019-datacenter-gensecond": {
+ "offer": "WindowsServer",
+ "publisher": "MicrosoftWindowsServer",
+ "sku": "2019-datacenter-gensecond",
+ "version": "latest"
+ },
+ "20_04-lts-gen2": {
+ "offer": "0001-com-ubuntu-server-focal",
+ "publisher": "Canonical",
+ "sku": "20_04-lts-gen2",
+ "version": "latest"
+  },
+ "22_04-lts-gen2": {
+ "offer": "0001-com-ubuntu-server-jammy",
+ "publisher": "Canonical",
+ "sku": "22_04-lts-gen2",
+ "version": "latest"
+ },
```

## Next step
container-instances Container Instances Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-log-analytics.md
The following sections describe how to create a logging-enabled container group
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../includes/azure-monitor-log-analytics-rebrand.md)]
-> [!NOTE]
-> Currently, you can only send event data from Linux container instances to Log Analytics.
-
## Prerequisites

To enable logging in your container instances, you need the following:
cost-management-billing Manage Tax Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-tax-information.md
tags: billing
Previously updated : 04/05/2023 Last updated : 09/27/2023
Customers in the following countries or regions can add their Tax IDs.
|Georgia | Germany |
|Ghana | Greece |
|Guatemala | Hungary |
-|Iceland | Italy |
-| India ┬╣ | Indonesia |
+|Iceland | Italy ┬╣ |
+|India ┬▓ | Indonesia |
|Ireland | Isle of Man |
|Kenya | Korea |
| Latvia | Liechtenstein |
Customers in the following countries or regions can add their Tax IDs.
|Uzbekistan | Vietnam |
|Zimbabwe | |
+┬╣ For Italy, you must enter your organization's Codice Fiscale using the following steps with the **Manage Tax IDs** option.
+
+┬▓ Follow the instructions in the next section to add your Goods and Services Taxpayer Identification Number (GSTIN).
+
1. Sign in to the Azure portal using the email address that has an owner or a contributor role on the billing account for an MCA or an account administrator role for a MOSP billing account.
1. Search for **Cost Management + Billing**.

   ![Screenshot that shows where to search for Cost Management + Billing.](./media/manage-tax-information/search-cmb.png)
Customers in the following countries or regions can add their Tax IDs.
   :::image type="content" source="./media/manage-tax-information/update-tax-id.png" alt-text="Screenshot showing where to update the Tax ID." lightbox="./media/manage-tax-information/update-tax-id.png" :::
1. Enter new tax IDs and then select **Save**.
   > [!NOTE]
- > If you don't see the Tax IDs section, Tax IDs are not yet collected for your region. Or, updating Tax IDs in the Azure portal isn't supported for your account.
-
-┬╣ Follow the instructions in the next section to add your Goods and Services Taxpayer Identification Number (GSTIN).
+ > If you don't see the Tax IDs section, Tax IDs are not yet collected for your region. Or, updating Tax IDs in the Azure portal isn't supported for your account.
## Add your GSTIN for billing accounts in India
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-explorer.md
To use system-assigned managed identity authentication, follow these steps to gr
2. Grant the managed identity the correct permissions in Azure Data Explorer. See [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions) for detailed information about roles and permissions and about managing permissions. In general, you must:
- - **As source**, grant at least the **Database viewer** role to your database
- - **As sink**, grant at least the **Database ingestor** role to your database
+ - **As source**, grant the **Database viewer** role to your database.
+ - **As sink**, grant the **Database ingestor** and **Database viewer** roles to your database.
> [!NOTE]
> When you use the UI to author, your login user account is used to list Azure Data Explorer clusters, databases, and tables. Manually enter the name if you don't have permission for these operations.
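As a hedged illustration of the grants above (a sketch only; the database name, application ID, and tenant ID are placeholders, and the exact principal format should be confirmed in the linked permissions article), a Kusto management command can assign a role to the service's managed identity:

```kusto
// Sketch: grant the Database viewer role to a managed identity (placeholders throughout)
.add database MyDatabase viewers ('aadapp=<managed-identity-application-id>;<tenant-id>') 'Data factory managed identity'
```

For the sink scenario, the same command with `ingestors` in place of `viewers` would grant the Database ingestor role.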
For more information about the properties, see [Lookup activity](control-flow-lo
* For a list of data stores that the copy activity supports as sources and sinks, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
-* Learn more about how to [copy data from Azure Data Factory and Synapse Analytics to Azure Data Explorer](/azure/data-explorer/data-factory-load-data).
+* Learn more about how to [copy data from Azure Data Factory and Synapse Analytics to Azure Data Explorer](/azure/data-explorer/data-factory-load-data).
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-guide.md
You can refer to the troubleshooting pages for each connector to see problems sp
The errors below are general to the copy activity and could occur with any connector.
-### Error code: JreNotFound
+#### Error code: JreNotFound
- **Message**: `Java Runtime Environment cannot be found on the Self-hosted Integration Runtime machine. It is required for parsing or writing to Parquet/ORC files. Make sure Java Runtime Environment has been installed on the Self-hosted Integration Runtime machine.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Check your integration runtime environment, see [Use Self-hosted Integration Runtime](./format-parquet.md#using-self-hosted-integration-runtime).
-### Error code: WildcardPathSinkNotSupported
+#### Error code: WildcardPathSinkNotSupported
- **Message**: `Wildcard in path is not supported in sink dataset. Fix the path: '%setting;'.`
The errors below are general to the copy activity and could occur with any conne
3. Save the file, and then restart the Self-hosted IR machine.
-### Error code: JniException
+#### Error code: JniException
- **Message**: `An error occurred when invoking Java Native Interface.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Log in to the machine that hosts *each node* of your self-hosted integration runtime. Check to ensure that the system variable is set correctly, as follows: `_JAVA_OPTIONS "-Xms256m -Xmx16g" with memory bigger than 8G`. Restart all the integration runtime nodes, and then rerun the pipeline.
-### Error code: GetOAuth2AccessTokenErrorResponse
+#### Error code: GetOAuth2AccessTokenErrorResponse
- **Message**: `Failed to get access token from your token endpoint. Error returned from your authorization server: %errorResponse;.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Correct all OAuth2 client credential flow settings of your authorization server.
-### Error code: FailedToGetOAuth2AccessToken
+#### Error code: FailedToGetOAuth2AccessToken
- **Message**: `Failed to get access token from your token endpoint. Error message: %errorMessage;.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Correct all OAuth2 client credential flow settings of your authorization server.
-### Error code: OAuth2AccessTokenTypeNotSupported
+#### Error code: OAuth2AccessTokenTypeNotSupported
- **Message**: `The toke type '%tokenType;' from your authorization server is not supported, supported types: '%tokenTypes;'.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Use an authorization server that can return tokens with supported token types.
-### Error code: OAuth2ClientIdColonNotAllowed
+#### Error code: OAuth2ClientIdColonNotAllowed
- **Message**: `The character colon(:) is not allowed in clientId for OAuth2ClientCredential authentication.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Use a valid client ID.
-### Error code: ManagedIdentityCredentialObjectNotSupported
+#### Error code: ManagedIdentityCredentialObjectNotSupported
- **Message**: `Managed identity credential is not supported in this version ('%version;') of Self Hosted Integration Runtime.`

- **Recommendation**: Check the supported version and upgrade the integration runtime to a higher version.
-### Error code: QueryMissingFormatSettingsInDataset
+#### Error code: QueryMissingFormatSettingsInDataset
- **Message**: `The format settings are missing in dataset %dataSetName;.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Deselect the "Binary copy" in the dataset, and set correct format settings.
-### Error code: QueryUnsupportedCommandBehavior
+#### Error code: QueryUnsupportedCommandBehavior
- **Message**: `The command behavior "%behavior;" is not supported.`

- **Recommendation**: Don't add the command behavior as a parameter for preview or GetSchema API request URL.
-### Error code: DataConsistencyFailedToGetSourceFileMetadata
+#### Error code: DataConsistencyFailedToGetSourceFileMetadata
- **Message**: `Failed to retrieve source file ('%name;') metadata to validate data consistency.`

- **Cause**: There is a transient issue on the source data store, or retrieving metadata from the source data store is not allowed.
-### Error code: DataConsistencyFailedToGetSinkFileMetadata
+#### Error code: DataConsistencyFailedToGetSinkFileMetadata
- **Message**: `Failed to retrieve sink file ('%name;') metadata to validate data consistency.`

- **Cause**: There is a transient issue on the sink data store, or retrieving metadata from the sink data store is not allowed.
-### Error code: DataConsistencyValidationNotSupportedForNonDirectBinaryCopy
+#### Error code: DataConsistencyValidationNotSupportedForNonDirectBinaryCopy
- **Message**: `Data consistency validation is not supported in current copy activity settings.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Remove the 'validateDataConsistency' property in the copy activity payload.
-### Error code: DataConsistencyValidationNotSupportedForLowVersionSelfHostedIntegrationRuntime
+#### Error code: DataConsistencyValidationNotSupportedForLowVersionSelfHostedIntegrationRuntime
- **Message**: `'validateDataConsistency' is not supported in this version ('%version;') of Self Hosted Integration Runtime.`

- **Recommendation**: Check the supported integration runtime version and upgrade it to a higher version, or remove the 'validateDataConsistency' property from copy activities.
-### Error code: SkipMissingFileNotSupportedForNonDirectBinaryCopy
+#### Error code: SkipMissingFileNotSupportedForNonDirectBinaryCopy
- **Message**: `Skip missing file is not supported in current copy activity settings, it's only supported with direct binary copy with folder.`

- **Recommendation**: Remove 'fileMissing' of the skipErrorFile setting in the copy activity payload.
-### Error code: SkipInconsistencyDataNotSupportedForNonDirectBinaryCopy
+#### Error code: SkipInconsistencyDataNotSupportedForNonDirectBinaryCopy
- **Message**: `Skip inconsistency is not supported in current copy activity settings, it's only supported with direct binary copy when validateDataConsistency is true.`

- **Recommendation**: Remove 'dataInconsistency' of the skipErrorFile setting in the copy activity payload.
-### Error code: SkipForbiddenFileNotSupportedForNonDirectBinaryCopy
+#### Error code: SkipForbiddenFileNotSupportedForNonDirectBinaryCopy
- **Message**: `Skip forbidden file is not supported in current copy activity settings, it's only supported with direct binary copy with folder.`

- **Recommendation**: Remove 'fileForbidden' of the skipErrorFile setting in the copy activity payload.
-### Error code: SkipForbiddenFileNotSupportedForThisConnector
+#### Error code: SkipForbiddenFileNotSupportedForThisConnector
- **Message**: `Skip forbidden file is not supported for this connector: ('%connectorName;').`

- **Recommendation**: Remove 'fileForbidden' of the skipErrorFile setting in the copy activity payload.
-### Error code: SkipInvalidFileNameNotSupportedForNonDirectBinaryCopy
+#### Error code: SkipInvalidFileNameNotSupportedForNonDirectBinaryCopy
- **Message**: `Skip invalid file name is not supported in current copy activity settings, it's only supported with direct binary copy with folder.`

- **Recommendation**: Remove 'invalidFileName' of the skipErrorFile setting in the copy activity payload.
-### Error code: SkipInvalidFileNameNotSupportedForSource
+#### Error code: SkipInvalidFileNameNotSupportedForSource
- **Message**: `Skip invalid file name is not supported for '%connectorName;' source.`

- **Recommendation**: Remove 'invalidFileName' of the skipErrorFile setting in the copy activity payload.
-### Error code: SkipInvalidFileNameNotSupportedForSink
+#### Error code: SkipInvalidFileNameNotSupportedForSink
- **Message**: `Skip invalid file name is not supported for '%connectorName;' sink.`

- **Recommendation**: Remove 'invalidFileName' of the skipErrorFile setting in the copy activity payload.
-### Error code: SkipAllErrorFileNotSupportedForNonBinaryCopy
+#### Error code: SkipAllErrorFileNotSupportedForNonBinaryCopy
- **Message**: `Skip all error file is not supported in current copy activity settings, it's only supported with binary copy with folder.`

- **Recommendation**: Remove 'allErrorFile' in the skipErrorFile setting in the copy activity payload.
-### Error code: DeleteFilesAfterCompletionNotSupportedForNonDirectBinaryCopy
+#### Error code: DeleteFilesAfterCompletionNotSupportedForNonDirectBinaryCopy
- **Message**: `'deleteFilesAfterCompletion' is not support in current copy activity settings, it's only supported with direct binary copy.`

- **Recommendation**: Remove the 'deleteFilesAfterCompletion' setting or use direct binary copy.
-### Error code: DeleteFilesAfterCompletionNotSupportedForThisConnector
+#### Error code: DeleteFilesAfterCompletionNotSupportedForThisConnector
- **Message**: `'deleteFilesAfterCompletion' is not supported for this connector: ('%connectorName;').`

- **Recommendation**: Remove the 'deleteFilesAfterCompletion' setting in the copy activity payload.
-### Error code: FailedToDownloadCustomPlugins
+#### Error code: FailedToDownloadCustomPlugins
- **Message**: `Failed to download custom plugins.`
The errors below are general to the copy activity and could occur with any conne
## General connector errors
-### Error code: UserErrorOdbcInvalidQueryString
+#### Error code: UserErrorOdbcInvalidQueryString
- **Message**: `The following ODBC Query is not valid: '%'.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Verify that your query is valid and can return data. If you want to execute non-query scripts and your data store is supported, consider using a stored procedure that returns a dummy result to execute them.
-### Error code: FailToResolveParametersInExploratoryController
+#### Error code: FailToResolveParametersInExploratoryController
- **Message**: `The parameters and expression cannot be resolved for schema operations. …The template function 'linkedService' is not defined or not valid.`
defender-for-cloud Powershell Sample Vulnerability Assessment Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-azure-sql.md
if ($haveVaSetting) {
    # Set new SQL Vulnerability Assessment Setting
    LogMessage -LogMessage "Add express configuration Vulnerability Assessment feature setting for '$($ServerName)' server."
    $Response = SetSqlVulnerabilityAssessmentServerSetting -SubscriptionId $SubscriptionId -ResourceGroupName $ResourceGroupName -ServerName $ServerName
-
-if ($Response.Content.Contains("Enabled")) {
+$successStatusCodes = @(200, 201, 202)
+if ($Response.StatusCode -in $successStatusCodes) {
    LogMessage -LogMessage "Congratulations, your '$($ServerName)' server is set up with the express configuration Vulnerability Assessment feature"
}
else {
- LogMessage -LogMessage "There was a problem to enable express configuration Vulnerability Assessment feature on the '$($ServerName)' server, please try again"
+    LogMessage -LogMessage "There was a problem enabling the express configuration Vulnerability Assessment feature on the '$($ServerName)' server. Error '$($Response.StatusCode)': '$($Response.Content)'"
    return
}
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vm-powershell.md
You can also call the DevTest Labs REST API to get the properties of existing la
In training, demo, and trial scenarios, you can avoid unnecessary costs by deleting VMs automatically on a certain date. You can set the VM `expirationDate` property when you create a VM. The PowerShell VM creation script earlier in this article sets an expiration date under `properties`:

```json
- "expirationDate" = "2022-12-01"
+ "expirationDate": "2022-12-01"
```

You can also set expiration dates on existing VMs by using PowerShell. The following PowerShell script sets an expiration date for an existing lab VM if it doesn't already have an expiration date:
devtest-labs Resource Group Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/resource-group-control.md
Invoke the script by using the following command. ResourceGroup.ps1 is the file
If you're using an Azure Resource Manager template to create a lab, use the **vmCreationResourceGroupId** property in the lab properties section of your template, as shown in the following example:

```json
- {
- "type": "microsoft.devtestlab/labs",
- "name": "[parameters('lab_name')]",
- "apiVersion": "2018-10-15-preview",
- "location": "eastus",
- "tags": {},
- "scale": null,
- "properties": {
- "vmCreationResourceGroupId": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>",
- "labStorageType": "Premium",
- "premiumDataDisks": "Disabled",
- "provisioningState": "Succeeded",
- "uniqueIdentifier": "000000000f-0000-0000-0000-00000000000000"
- },
- "dependsOn": []
- },
+{
+ "type": "microsoft.devtestlab/labs",
+ "name": "[parameters('lab_name')]",
+ "apiVersion": "2018-10-15-preview",
+ "location": "eastus",
+ "tags": {},
+ "scale": null,
+ "properties": {
+ "vmCreationResourceGroupId": "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>",
+ "labStorageType": "Premium",
+ "premiumDataDisks": "Disabled",
+ "provisioningState": "Succeeded",
+ "uniqueIdentifier": "000000000f-0000-0000-0000-00000000000000"
+ },
+ "dependsOn": []
+},
```
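A hedged sketch of deploying a template that contains this resource (the resource group, file name, and lab name here are hypothetical):

```powershell
# Sketch: deploy an ARM template containing the lab resource above.
# 'myLabRg', 'lab-template.json', and 'myLab' are placeholder names.
New-AzResourceGroupDeployment -ResourceGroupName "myLabRg" `
    -TemplateFile ".\lab-template.json" `
    -lab_name "myLab"
```

`New-AzResourceGroupDeployment` exposes template parameters such as `lab_name` as dynamic cmdlet parameters.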
firewall-manager Deploy Trusted Security Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/deploy-trusted-security-partner.md
Previously updated : 11/10/2021 Last updated : 09/28/2023
Integrated third-party Security as a service (SECaaS) partners are now available
## Deploy a third-party security provider in a new hub
-Skip this section if you are deploying a third-party provider into an existing hub.
+Skip this section if you're deploying a third-party provider into an existing hub.
1. Sign in to the [Azure portal](https://portal.azure.com).
2. In **Search**, type **Firewall Manager** and select it under **Services**.
-3. Navigate to **Getting Started**. Select **View secured virtual hubs**.
+3. Navigate to **Overview**. Select **View secured virtual hubs**.
4. Select **Create new secured virtual hub**.
-5. Enter you subscription and resource group, select a supported region, and add your hub and virtual WAN information.
+5. Enter your subscription and resource group, select a supported region, and add your hub and virtual WAN information.
6. Select **Include VPN gateway to enable Security Partner Providers**.
7. Select the **Gateway scale units** appropriate for your requirements.
8. Select **Next: Azure Firewall**.
Skip this section if you are deploying a third-party provider into an existing h
The VPN gateway deployment can take more than 30 minutes.
-To verify that the hub has been created, navigate to Azure Firewall Manager->Secured Hubs. Select the hub->Overview page to show the partner name and the status as **Security Connection Pending**.
+To verify that the hub has been created, navigate to Azure Firewall Manager->Overview->View secured virtual hubs. You see the security partner provider name and the security partner status as **Security Connection Pending**.
Once the hub is created and the security partner is set up, continue on to connect the security provider to the hub.
Once the hub is created and the security partner is set up, continue on to conne
You can also select an existing hub in a Virtual WAN and convert that to a *secured virtual hub*.
-1. In **Getting Started**, select **View secured virtual hubs**.
+1. In **Overview**, select **View secured virtual hubs**.
2. Select **Convert existing hubs**.
3. Select a subscription and an existing hub. Follow the rest of the steps to deploy a third-party provider in a new hub.
To set up tunnels to your virtual hubΓÇÖs VPN Gateway, third-party providers nee
   Ensure the third-party provider can connect to the hub. The tunnels on the VPN gateway should be in a **Connected** state. This state is more reflective of the connection health between the hub and the third-party partner, compared to the previous status.
3. Select the hub, and navigate to **Security Configurations**.
- When you deploy a third-party provider into the hub, it converts the hub into a *secured virtual hub*. This ensures that the third-party provider is advertising a 0.0.0.0/0 (default) route to the hub. However, VNet connections and sites connected to the hub donΓÇÖt get this route unless you opt-in on which connections should get this default route.
+ When you deploy a third-party provider into the hub, it converts the hub into a *secured virtual hub*. This ensures that the third-party provider is advertising a 0.0.0.0/0 (default) route to the hub. However, virtual network connections and sites connected to the hub donΓÇÖt get this route unless you opt-in on which connections should get this default route.
   > [!NOTE]
   > Do not manually create a 0.0.0.0/0 (default) route over BGP for branch advertisements. This is automatically done for secure virtual hub deployments with 3rd party security providers. Doing so may break the deployment process.
To set up tunnels to your virtual hubΓÇÖs VPN Gateway, third-party providers nee
If you use non-RFC1918 addresses for your private traffic prefixes, you may need to configure SNAT policies for your firewall to disable SNAT for non-RFC1918 private traffic. By default, Azure Firewall SNATs all non-RFC1918 traffic.
-## Branch or VNet Internet traffic via third-party service
+## Branch or virtual network Internet traffic via third-party service
-Next, you can check if VNet virtual machines or the branch site can access the Internet and validate that the traffic is flowing to the third-party service.
+Next, you can check if virtual network virtual machines or the branch site can access the Internet and validate that the traffic is flowing to the third-party service.
-After finishing the route setting steps, the VNet virtual machines as well as the branch sites are sent a 0/0 to the third-party service route. You can't RDP or SSH into these virtual machines. To sign in, you can deploy the [Azure Bastion](../bastion/bastion-overview.md) service in a peered VNet.
+After you finish the route setting steps, the virtual network virtual machines and the branch sites are sent a 0/0 to the third-party service route. You can't RDP or SSH into these virtual machines. To sign in, you can deploy the [Azure Bastion](../bastion/bastion-overview.md) service in a peered virtual network.
## Rule configuration

Use the partner portal to configure firewall rules. Azure Firewall passes the traffic through.
-For example, you may observe allowed traffic through the Azure Firewall, even though there is no explicit rule to allow the traffic. This is because Azure Firewall passes the traffic to the next hop security partner provider (ZScalar, CheckPoint, or iBoss). Azure Firewall still has rules to allow outbound traffic, but the rule name is not logged.
+For example, you may observe allowed traffic through the Azure Firewall, even though there's no explicit rule to allow the traffic. This is because Azure Firewall passes the traffic to the next hop security partner provider (Zscaler, Check Point, or iboss). Azure Firewall still has rules to allow outbound traffic, but the rule name isn't logged.
For more information, see the partner documentation.
firewall-manager Quick Firewall Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-bicep.md
description: In this quickstart, you deploy an Azure Firewall and a firewall pol
Previously updated : 07/05/2022 Last updated : 09/28/2023
Get-AzResource -ResourceGroupName exampleRG
## Clean up resources
-When you no longer need the resources that you created with the firewall, delete the resource group. This removes the firewall and all the related resources.
+When you no longer need the resources that you created with the firewall, delete the resource group. The firewall and all the related resources are deleted.
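For the PowerShell path, a minimal sketch (assuming the `exampleRG` resource group used earlier in this quickstart):

```powershell
# Delete the resource group and everything deployed into it
Remove-AzResourceGroup -Name exampleRG -Force
```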
# [CLI](#tab/CLI)
firewall-manager Quick Firewall Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy.md
description: In this quickstart, you deploy an Azure Firewall and a firewall pol
Previously updated : 02/17/2021 Last updated : 09/28/2023
For information about Azure Firewall, see [What is Azure Firewall?](../firewall/
For information about IP Groups, see [IP Groups in Azure Firewall](../firewall/ip-groups.md).
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fazurefirewall-create-with-firewallpolicy-apprule-netrule-ipgroups%2Fazuredeploy.json)
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/snat-private-range.md
You can use the following JSON to configure auto-learn. Azure Firewall must be a
Use the following JSON to associate an Azure Route Server:

```json
- "type": "Microsoft.Network/azureFirewalls",
- "apiVersion": "2022-11-01",
- "name": "[parameters('azureFirewalls_testFW_name')]",
- "location": "eastus",
- "properties": {
- "sku": {
- "name": "AZFW_VNet",
- "tier": "Standard"
- },
- "threatIntelMode": "Alert",
- "additionalProperties": {
- "Network.RouteServerInfo.RouteServerID": "[parameters'virtualHubs_TestRouteServer_externalid')]"
- },
-
+ "type": "Microsoft.Network/azureFirewalls",
+ "apiVersion": "2022-11-01",
+ "name": "[parameters('azureFirewalls_testFW_name')]",
+ "location": "eastus",
+ "properties": {
+ "sku": {
+ "name": "AZFW_VNet",
+ "tier": "Standard"
+ },
+ "threatIntelMode": "Alert",
+ "additionalProperties": {
+ "Network.RouteServerInfo.RouteServerID": "[parameters('virtualHubs_TestRouteServer_externalid')]"
+ },
+ ...
+ }
```

### Configure using Azure PowerShell
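One hedged sketch of the PowerShell path (the firewall and resource group names are placeholders; this assumes the `PrivateRange` property on the firewall object):

```powershell
# Sketch: set SNAT private ranges on an existing firewall ('fw' and 'rg' are placeholders)
$azfw = Get-AzFirewall -Name "fw" -ResourceGroupName "rg"
$azfw.PrivateRange = @("IANAPrivateRanges", "192.168.1.0/24")
Set-AzFirewall -AzureFirewall $azfw
```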
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
[Azure Private Link](../private-link/private-link-overview.md) enables you to access Azure PaaS services and services hosted in Azure over a private endpoint in your virtual network. Traffic between your virtual network and the service goes over the Microsoft backbone network, eliminating exposure to the public Internet.
-Azure Front Door Premium can connect to your origin using Private Link. Your origin can be hosted in a virtual network or hosted as a PaaS service such as Azure App Service or Azure Storage. Private Link removes the need for your origin to be accessed publicly.
+Azure Front Door Premium can connect to your origin using Private Link. Your origin can be hosted in a virtual network or hosted as a PaaS service such as Azure Web App or Azure Storage. Private Link removes the need for your origin to be accessed publicly.
:::image type="content" source="./media/private-link/front-door-private-endpoint-architecture.png" alt-text="Diagram of Azure Front Door with Private Link enabled.":::
Azure Front Door private link is available in the following regions:
| East US 2 | UK South | | East Asia |
| South Central US | West Europe | | |
| West US 3 | Sweden Central | | |
+| US Gov Arizona |||
+| US Gov Texas |||
+
## Limitations

Origin support for direct private endpoint connectivity is currently limited to:
-* Storage (Azure Blobs)
-* App Services
+* Blob Storage
+* Web App
* Internal load balancers, or any services that expose internal load balancers such as Azure Kubernetes Service, Azure Container Apps, or Azure Red Hat OpenShift
* Storage Static Website
The Azure Front Door Private Link feature is region agnostic but for the best la
## Next steps
-* Learn how to [connect Azure Front Door Premium to a App Service origin with Private Link](standard-premium/how-to-enable-private-link-web-app.md).
+* Learn how to [connect Azure Front Door Premium to a Web App origin with Private Link](standard-premium/how-to-enable-private-link-web-app.md).
* Learn how to [connect Azure Front Door Premium to a storage account origin with Private Link](standard-premium/how-to-enable-private-link-storage-account.md).
* Learn how to [connect Azure Front Door Premium to an internal load balancer origin with Private Link](standard-premium/how-to-enable-private-link-internal-load-balancer.md).
* Learn how to [connect Azure Front Door Premium to a storage static website origin with Private Link](how-to-enable-private-link-storage-static-website.md).
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
The following are general recommendations for using the Azure Policy Add-on:
- For more than 500 pods in a single cluster with a max of 40 constraints: three vCPUs and 600 MB of memory per component.
+- Open ports for the Azure Policy Add-On. The Azure Policy Add-On uses these domains and ports to fetch policy
+ definitions and assignments and report compliance of the cluster back to Azure Policy.
+
+ |Domain |Port |
+ |---|---|
+ |`data.policy.core.windows.net` |`443` |
+ |`store.policy.core.windows.net` |`443` |
+ |`login.windows.net` |`443` |
+ |`dc.services.visualstudio.com` |`443` |
+
- Windows pods [don't support security contexts](https://kubernetes.io/docs/concepts/security/pod-security-standards/#what-profiles-should-i-apply-to-my-windows-pods). Thus, some of the Azure Policy definitions, such as disallowing root privileges, can't be
hdinsight Troubleshoot Invalidnetworkconfigurationerrorcode Cluster Creation Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails.md
Title: InvalidNetworkConfigurationErrorCode error - Azure HDInsight
description: Various reasons for failed cluster creations with InvalidNetworkConfigurationErrorCode in Azure HDInsight
Previously updated : 06/29/2022 Last updated : 09/27/2023

# Cluster creation fails with InvalidNetworkConfigurationErrorCode in Azure HDInsight

This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters.
-If you see error code `InvalidNetworkConfigurationErrorCode` with the description "Virtual Network configuration isn't compatible with HDInsight Requirement", it usually indicates a problem with the [virtual network configuration](../hdinsight-plan-virtual-network-deployment.md) for your cluster. Based on the rest of the error description, follow the below sections to resolve your problem.
+If you see error code `InvalidNetworkConfigurationErrorCode` with the description "Virtual Network configuration isn't compatible with HDInsight Requirement," it usually indicates a problem with the [virtual network configuration](../hdinsight-plan-virtual-network-deployment.md) for your cluster. Based on the rest of the error description, follow the below sections to resolve your problem.
## "HostName Resolution failed" ### Issue
-Error description contains "HostName Resolution failed".
+Error description contains "HostName Resolution failed."
### Cause
-This error points to a problem with custom DNS configuration. DNS servers within a virtual network can forward DNS queries to Azure's recursive resolvers to resolve hostnames within that virtual network (see [Name Resolution in Virtual Networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) for details). Access to Azure's recursive resolvers is provided via the virtual IP 168.63.129.16. This IP is only accessible from the Azure VMs. So it won't work if you're using an OnPrem DNS server, or your DNS server is an Azure VM, which isn't part of the cluster's virtual network.
+This error points to a problem with custom DNS configuration. DNS servers within a virtual network can forward DNS queries to Azure's recursive resolvers to resolve hostnames within that virtual network (see [Name Resolution in Virtual Networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) for details). Access to Azure's recursive resolvers is provided via the virtual IP 168.63.129.16. This IP is only accessible from Azure VMs, so it doesn't work if you're using an on-premises DNS server, or if your DNS server is an Azure VM that isn't part of the cluster's virtual network.
### Resolution
-1. Ssh into the VM that is part of the cluster, and run the command `hostname -f`. This will return the host's fully qualified domain name (referred to as `<host_fqdn>` in the below instructions).
+1. SSH into a VM that is part of the cluster, and run the command `hostname -f`. This command returns the host's fully qualified domain name (referred to as `<host_fqdn>` in the following instructions).
-1. Then, run the command `nslookup <host_fqdn>` (for example, `nslookup hn*.5h6lujo4xvoe1kprq3azvzmwsd.hx.internal.cloudapp.net`). If this command resolves the name to an IP address, it means your DNS server is working correctly. In this case, raise a support case with HDInsight, and we'll investigate your issue. In your support case, include the troubleshooting steps you executed. This will help us resolve the issue faster.
+1. Then, run the command `nslookup <host_fqdn>` (for example, `nslookup hn*.5h6lujo4xvoe1kprq3azvzmwsd.hx.internal.cloudapp.net`). If this command resolves the name to an IP address, your DNS server is working correctly. In this case, raise a support case with HDInsight so we can investigate your issue. In your support case, include the troubleshooting steps you executed, which helps us resolve the issue faster.
1. If the above command doesn't return an IP address, then run `nslookup <host_fqdn> 168.63.129.16` (for example, `nslookup hn*.5h6lujo4xvoe1kprq3azvzmwsd.hx.internal.cloudapp.net 168.63.129.16`). If this command is able to resolve the IP, it means that either your DNS server isn't forwarding the query to Azure's DNS, or it isn't a VM that is part of the same virtual network as the cluster.
-1. If you don't have an Azure VM that can act as a custom DNS server in the cluster's virtual network, then you need to add this first. Create a VM in the virtual network, which will be configured as DNS forwarder.
+1. If you don't have an Azure VM that can act as a custom DNS server in the cluster's virtual network, you need to add one first. Create a VM in the virtual network to be configured as a DNS forwarder.
1. Once you have a VM deployed in your virtual network, configure the DNS forwarding rules on this VM. Forward all iDNS name resolution requests to 168.63.129.16, and the rest to your DNS server. [Here](../hdinsight-plan-virtual-network-deployment.md) is an example of this setup for a custom DNS server.
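The resolution steps above can be sketched as a short script to run on a VM inside the cluster's virtual network. This is an illustrative outline, not an official tool; the `nslookup` probes are commented out because they only produce meaningful results from inside the cluster's network.

```shell
# Illustrative outline of the troubleshooting steps; run on a VM inside the cluster.
host_fqdn=$(hostname -f 2>/dev/null || hostname)   # step 1: get the node's FQDN
echo "checking resolution of $host_fqdn"

# Step 2: does the configured DNS server resolve it?
# nslookup "$host_fqdn"
# Step 3: if not, can Azure's recursive resolver?
# nslookup "$host_fqdn" 168.63.129.16
```

If step 2 fails but step 3 succeeds, the custom DNS server isn't forwarding queries to Azure's resolver, which points you to the DNS forwarder fix described above.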
This error points to a problem with custom DNS configuration. DNS servers within
### Issue
-Error description contains "Failed to connect to Azure Storage Account" or "Failed to connect to Azure SQL".
+Error description contains "Failed to connect to Azure Storage Account" or "Failed to connect to Azure SQL."
### Cause
Azure Storage and SQL don't have fixed IP Addresses, so we need to allow outboun
If there are routes defined, make sure that there are routes for IP addresses for the region where the cluster was deployed, and the **NextHopType** for each route is **Internet**. There should be a route defined for each required IP Address documented in the aforementioned article.
-## "Failed to establish an outbound connection from the cluster for the communication with the HDInsight resource provider. Please ensure that outbound connectivity is allowed."
+## "Failed to establish an outbound connection from the cluster for the communication with the HDInsight resource provider. Ensure that outbound connectivity is allowed."
### Issue
-Error description contains "Failed to establish an outbound connection from the cluster for the communication with the HDInsight resource provider. Please ensure that outbound connectivity is allowed."
+Error description contains "Failed to establish an outbound connection from the cluster for the communication with the HDInsight resource provider. Ensure that outbound connectivity is allowed."
### Cause
Likely an issue with the custom DNS setup.
Validate that 168.63.129.16 is in the custom DNS chain. DNS servers within a virtual network can forward DNS queries to Azure's recursive resolvers to resolve hostnames within that virtual network. For more information, see [Name Resolution in Virtual Networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). Access to Azure's recursive resolvers is provided via the virtual IP 168.63.129.16.
-1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
+1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the following command by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
```cmd
ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```
Validate that 168.63.129.16 is in the custom DNS chain. DNS servers within a vir
Add 168.63.129.16 as the first custom DNS for the virtual network using the steps described in [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md). These steps are applicable only if your custom DNS server runs on Linux.

**Option 2**
-Deploy a DNS server VM for the virtual network. This involves the following steps:
+Deploy a DNS server VM for the virtual network. This deployment involves the following steps:
-* Create a VM in the virtual network, which will be configured as DNS forwarder (it can be a Linux or windows VM).
+* Create a VM in the virtual network to be configured as a DNS forwarder (it can be a Linux or Windows VM).
* Configure DNS forwarding rules on this VM (forward all iDNS name resolution requests to 168.63.129.16, and the rest to your DNS server).
* Add the IP address of this VM as the first DNS entry for the virtual network DNS configuration.

#### 168.63.129.16 is in the list
-In this case, please create a support case with HDInsight, and we'll investigate your issue. Include the result of the below commands in your support case. This will help us investigate and resolve the issue quicker.
+In this case, create a support case with HDInsight so we can investigate your issue. Include the results of the following commands in your support case, which helps us investigate and resolve the issue more quickly.
-From an ssh session on the head node, edit and then run the following:
+From an ssh session on the head node, edit and then run the following command:
```bash
hostname -f
```
hdinsight Hdinsight High Availability Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-high-availability-components.md
Title: High availability components in Azure HDInsight
description: Overview of the various high availability components used by HDInsight clusters. Previously updated : 04/28/2022 Last updated : 09/28/2023 # High availability services supported by Azure HDInsight
-In order to provide you with optimal levels of availability for your analytics components, HDInsight was developed with a unique architecture for ensuring high availability (HA) of critical services. Some components of this architecture were developed by Microsoft to provide automatic failover. Other components are standard Apache components that are deployed to support specific services. This article explains the architecture of the HA service model in HDInsight, how HDInsight supports failover for HA services, and best practices to recover from other service interruptions.
+In order to provide you with optimal levels of availability for your analytics components, HDInsight was developed with a unique architecture for ensuring high availability (HA) of critical services. Microsoft developed some components of this architecture to provide automatic failover. Other components are standard Apache components that are deployed to support specific services. This article explains the architecture of the HA service model in HDInsight, how HDInsight supports failover for HA services, and best practices to recover from other service interruptions.
> [!NOTE]
> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
HDInsight provides customized infrastructure to ensure that four primary service
- Job History Server for Hadoop MapReduce
- Apache Livy
-This infrastructure consists of a number of services and software components, some of which are designed by Microsoft. The following components are unique to the HDInsight platform:
+This infrastructure consists of many services and software components, some of which are designed by Microsoft. The following components are unique to the HDInsight platform:
- Slave failover controller
- Master failover controller
There are also other high availability services, which are supported by open-sou
- YARN ResourceManager
- HBase Master
-The following sections will provide more detail about how these services work together.
+The following sections provide more detail about how these services work together.
## HDInsight high availability services
Microsoft provides support for the four Apache services in the following table i
Each HDInsight cluster has two headnodes in active and standby modes, respectively. The HDInsight HA services run on headnodes only. These services should always be running on the active headnode, and stopped and put in maintenance mode on the standby headnode.
-To maintain the correct states of HA services and provide a fast failover, HDInsight utilizes Apache ZooKeeper, which is a coordination service for distributed applications, to conduct active headnode election. HDInsight also provisions a few background Java processes, which coordinate the failover procedure for HDInsight HA services. These services are the following: the master failover controller, the slave failover controller, the *master-ha-service*, and the *slave-ha-service*.
+To maintain the correct states of HA services and provide a fast failover, HDInsight utilizes Apache ZooKeeper, which is a coordination service for distributed applications, to conduct active headnode election. HDInsight also provisions a few background Java processes, which coordinate the failover procedure for HDInsight HA services. These services are the master failover controller, the slave failover controller, the *master-ha-service*, and the *slave-ha-service*.
### Apache ZooKeeper
-Apache ZooKeeper is a high-performance coordination service for distributed applications. In production, ZooKeeper usually runs in replicated mode where a replicated group of ZooKeeper servers form a quorum. Each HDInsight cluster has three ZooKeeper nodes that allow three ZooKeeper servers to form a quorum. HDInsight has two ZooKeeper quorums running in parallel with each other. One quorum decides the active headnode in a cluster on which HDInsight HA services should run. Another quorum is used to coordinate HA services provided by Apache, as detailed in later sections.
+Apache ZooKeeper is a high-performance coordination service for distributed applications. In production, ZooKeeper usually runs in replicated mode where a replicated group of ZooKeeper servers forms a quorum. Each HDInsight cluster has three ZooKeeper nodes that allow three ZooKeeper servers to form a quorum. HDInsight has two ZooKeeper quorums running in parallel with each other. One quorum decides the active headnode in a cluster on which HDInsight HA services should run. Another quorum is used to coordinate HA services provided by Apache, as detailed in later sections.
### Slave failover controller
-The slave failover controller runs on every node in an HDInsight cluster. This controller is responsible for starting the Ambari agent and *slave-ha-service* on each node. It periodically queries the first ZooKeeper quorum about the active headnode. When the active and standby headnodes change, the slave failover controller performs the following:
+The slave failover controller runs on every node in an HDInsight cluster. This controller is responsible for starting the Ambari agent and *slave-ha-service* on each node. It periodically queries the first ZooKeeper quorum about the active headnode. When the active and standby headnodes change, the slave failover controller performs the following steps:
1. Updates the host configuration file.
1. Restarts Ambari agent.
The master-ha-service only runs on the active headnode, it stops the HDInsight H
:::image type="content" source="./media/hdinsight-high-availability-components/failover-steps.png" alt-text="failover process" border="false":::
-A health monitor runs on each headnode along with the master failover controller to send heartbeat notifications to the Zookeeper quorum. The headnode is regarded as an HA service in this scenario. The health monitor checks to see if each high availability service is healthy and if it's ready to join in the leadership election. If yes, this headnode will compete in the election. If not, it will quit the election until it becomes ready again.
+A health monitor runs on each headnode along with the master failover controller to send heartbeat notifications to the Zookeeper quorum. The headnode is regarded as an HA service in this scenario. The health monitor checks to see if each high availability service is healthy and if it's ready to join in the leadership election. If yes, this headnode competes in the election. If not, it quits the election until it becomes ready again.
-If the standby headnode ever achieves leadership and becomes active (such as in the case of a failure with the previous active node), its master failover controller will start all HDInsight HA services on it. The master failover controller will also stop these services on the other headnode.
+If the standby headnode ever achieves leadership and becomes active (such as in the case of a failure with the previous active node), its master failover controller starts all HDInsight HA services on it. The master failover controller also stops these services on the other headnode.
For HDInsight HA service failures, such as a service being down or unhealthy, the master failover controller should automatically restart or stop the services according to the headnode status. Users shouldn't manually start HDInsight HA services on both head nodes. Instead, allow automatic or manual failover to help the service recover.

### Inadvertent manual intervention
-HDInsight HA services should only run on the active headnode, and will be automatically restarted when necessary. Since individual HA services don't have their own health monitor, failover can't be triggered at the level of the individual service. Failover is ensured at the node level and not at the service level.
+HDInsight HA services should only run on the active headnode, and they're automatically restarted when necessary. Since individual HA services don't have their own health monitor, failover can't be triggered at the level of the individual service. Failover is ensured at the node level and not at the service level.
### Some known issues
Apache provides high availability for HDFS NameNode, YARN ResourceManager, and H
### Hadoop Distributed File System (HDFS) NameNode
-HDInsight clusters based on Apache Hadoop 2.0 or higher provide NameNode high availability. There are two NameNodes running on the headnodes, which are configured for automatic failover. The NameNodes use the *ZKFailoverController* to communicate with Zookeeper to elect for active/standby status. The *ZKFailoverController* runs on both headnodes, and works in the same way as the master failover controller above.
+HDInsight clusters based on Apache Hadoop 2.0 or higher provide NameNode high availability. There are two NameNodes running on the headnodes, which are configured for automatic failover. The NameNodes use the *ZKFailoverController* to communicate with Zookeeper to elect for active/standby status. The *ZKFailoverController* runs on both headnodes, and works in the same way as the master failover controller.
The second Zookeeper quorum is independent of the first quorum, so the active NameNode may not run on the active headnode. When the active NameNode is dead or unhealthy, the standby NameNode wins the election and becomes active.
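The automatic NameNode failover described here corresponds to standard Apache Hadoop HA settings. The following `hdfs-site.xml`-style fragment is purely illustrative: the property names are standard Hadoop configuration keys, but the values (`mycluster`, `nn1`, `nn2`, and the ZooKeeper hosts) are placeholders, not HDInsight's actual settings.

```xml
<!-- Illustrative only: standard Hadoop HA keys with placeholder values -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk0:2181,zk1:2181,zk2:2181</value>
</property>
```

With `dfs.ha.automatic-failover.enabled` set, the *ZKFailoverController* on each headnode uses the ZooKeeper quorum named in `ha.zookeeper.quorum` to elect the active NameNode, which is why the active NameNode may land on either headnode independently of the HDInsight HA services.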
healthcare-apis Migration Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-faq.md
Azure API for FHIR will be retired on September 30, 2026.
## Are new deployments of Azure API for FHIR allowed?
-Due to the transition from Azure API for FHIR to Azure Health Data Services, after April 1, 2025 customers won't be able to create new deployments of Azure API of FHIR. Until April 1, 2025 new deployments are allowed.
+Due to the retirement of Azure API for FHIR, after April 1, 2025, customers won't be able to create new deployments of Azure API for FHIR. Until April 1, 2025, new deployments are allowed.
## Why is Microsoft retiring Azure API for FHIR?
AHDS FHIR service offers a rich set of capabilities such as:
- Consumption-based pricing model where customers pay only for used storage and throughput
- Support for transaction bundles
- Chained search improvements
-- Improved ingress and egress of data with \$import, \$export including new features such as incremental import (preview)
+- Improved ingress and egress of data with \$import, \$export including new features such as incremental import
- Events to trigger new workflows when FHIR resources are created, updated or deleted
- Connectors to Azure Synapse Analytics, Power BI and Azure Machine Learning for enhanced analytics
After September 30, 2026 customers won't be able to:
- Access customer support (phone, email, web)

## Where can customers go to learn more about migrating to Azure Health Data Services FHIR service?
-Start with [migration strategies](migration-strategies.md) to learn more about Azure API for FHIR to Azure Health Data Services FHIR service migration. The migration from Azure API for FHIR to Azure Health Data Services FHIR service involves data migration and updating the applications to use Azure Health Data Services FHIR service. Find more documentation on the step-by-step approach to migrating your data and applications in the [migration tool](https://go.microsoft.com/fwlink/?linkid=2247964).
+Start with [migration strategies](migration-strategies.md) to learn more about Azure API for FHIR to Azure Health Data Services FHIR service migration. The migration from Azure API for FHIR to Azure Health Data Services FHIR service involves data migration and updating the applications to use Azure Health Data Services FHIR service. Find more documentation on the step-by-step approach to migrating your data and applications in the [migration tool](https://github.com/Azure/apiforfhir-migration-tool/blob/main/lift-and-shift-resources/Liftandshiftresources_README.md).
## Where can customers go to get answers to their questions?

Check out these resources if you need further assistance:

-- Get answers from community experts in [Microsoft Q&A](https://go.microsoft.com/fwlink/?linkid=2248420).
+- Get answers from community experts in [Microsoft Q&A](/answers/questions/1377356/retirement-announcement-azure-api-for-fhir).
- If you have a support plan and require technical support, [contact us](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
healthcare-apis Migration Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-strategies.md
To migrate your data, follow these steps:
Compare the differences between Azure API for FHIR and Azure Health Data Services. Also review your architecture and assess if any changes need to be made.
-|**Capabilities** |**Azure API for FHIR** |**Azure Health Data Services** |
-||||
-| **Settings** | Supported: <br>• Local RBAC <br>• SMART on FHIR Proxy | Planned deprecation <br>• Local RBAC (9/6/23) <br>• SMART on FHIR Proxy (9/21/26) | | |
-| **Data storage Volume** | More than 4 TB | Current support is 4 TB. Reach out to CSS team if you need more than 4 TB. | | |
-| **Data ingress** | Tools available in OSS | $import operation | | |
-| **Autoscaling** | Supported on request and incurs charge | Enabled by default at no extra charge | | |
-| **Search parameters** | • Bundle type supported: Batch <br> • Include and revinclude, iterate modifier not supported <br> • Sorting supported by first name, last name, birthdate and clinical date | • Bundle Type supported: Batch and transaction <br> • Selectable search parameters <br> • Include, revinclude, and iterate modifier is supported <br>• Sorting supported by string and dateTime fields | | |
-| **Events** | Not Supported | Supported | | |
-| **Infrastructure** | Supported: - <br> • Customer managed keys <br> • AZ support and PITR <br> • Cross region DR | Supported - Data recovery <br> Upcoming: AZ support for customer managed keys | | |
+|Capabilities|Azure API for FHIR|Azure Health Data Services|
+|||--|
+|**Settings**|Supported: <br> • Local RBAC <br> • SMART on FHIR Proxy|Planned deprecation: <br> • Local RBAC (9/6/23) <br> • SMART on FHIR Proxy (9/21/26)|
+|**Data storage Volume**|More than 4 TB|Current support is 4 TB (Open an [Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you need more than 4 TB)|
+|**Data ingress**|Tools available in OSS|$import operation|
+|**Autoscaling**|Supported on request and incurs charge|Enabled by default at no extra charge|
+|**Search parameters**|Bundle type supported: Batch <br> • Include and revinclude, iterate modifier not supported <br> • Sorting supported by first name, last name, birthdate and clinical date|Bundle type supported: Batch and transaction <br> • Selectable search parameters <br> • Include, revinclude, and iterate modifier is supported <br>• Sorting supported by string and dateTime fields|
+|**Events**|Not Supported|Supported|
+|**Infrastructure**|Supported: <br> • Customer managed keys <br> • AZ support and PITR <br> • Cross region DR|Supported: <br> • Data recovery <br> Upcoming: <br> • AZ support for customer managed keys|
### Things to consider that may affect your architecture
Compare the differences between Azure API for FHIR and Azure Health Data Service
- **SMART on FHIR proxy is being deprecated**. You need to use the new SMART on FHIR capability. More information: [SMART on FHIR](smart-on-fhir.md)

-- **Azure Health Data Services FHIR Service does not support local RBAC and custom authority**. The token issuer authority needs to be the authentication endpoint for the tenant that the FHIR Service is running in.
+- **Azure Health Data Services FHIR service does not support local RBAC and custom authority**. The token issuer authority needs to be the authentication endpoint for the tenant that the FHIR Service is running in.
- **The IoT connector is only supported using an Azure API for FHIR service**. The IoT connector is succeeded by the MedTech service. You need to deploy a MedTech service and corresponding FHIR service within an existing or new Azure Health Data Services workspace and point your devices to the new Azure Events Hubs device event hub. Use the existing IoT connector device and destination mapping files with the MedTech service deployment.
If you want to migrate existing IoT connector device FHIR data from your Azure A
First, create a migration plan. We recommend the migration patterns described in the table. Depending on your organization's tolerance for downtime, you may decide to use certain patterns and tools to help facilitate your migration.
-| Migration Pattern | Details | How? |
-||||
-| Lift and shift | The simplest pattern. Ideal if your data pipelines can afford longer downtime. | Choose the option that works best for your organization: <br> • Configure a workflow to [\$export](../azure-api-for-fhir/export-data.md) your data on Azure API for FHIR, and then [\$import](configure-import-data.md) into Azure Health Data Services FHIR service. <br> • The [GitHub repo](https://go.microsoft.com/fwlink/?linkid=2247964) provides tips on running these commands, and a script to help automate creating the \$import payload. <br> • Or create your own tool to migrate the data using \$export and \$import. |
-| Incremental copy | Continuous version of lift and shift, with less downtime. Ideal for large amounts of data that take longer to copy, or if you want to continue running Azure API for FHIR during the migration. | Choose the option that works best for your organization. <br> • We created an [OSS migration tool](https://go.microsoft.com/fwlink/?linkid=2248131) to help with this migration pattern. <br> • Or create your own tool to migrate the data incrementally.|
+|Migration pattern|Details|How?|
+|--|-|-|
+|**Lift and shift**|The simplest pattern. Ideal if your data pipeline can afford longer downtime.|Choose the option that works best for your organization: <br> • Configure a workflow to [\$export](../azure-api-for-fhir/export-data.md) your data on Azure API for FHIR, and then [\$import](configure-import-data.md) into Azure Health Data Services FHIR service. <br> • The [GitHub repo](https://github.com/Azure/apiforfhir-migration-tool/blob/main/lift-and-shift-resources/Liftandshiftresources_README.md) provides tips on running these commands, and a script to help automate creating the \$import payload. <br> • Or create your own tool to migrate the data using \$export and \$import.|
+|**Incremental copy**|Continuous version of lift and shift, with less downtime. Ideal for large amounts of data that take longer to copy, or if you want to continue running Azure API for FHIR during the migration.|Choose the option that works best for your organization. <br> • We created an [OSS migration tool](https://github.com/Azure/apiforfhir-migration-tool/tree/main/incremental-copy-docs) to help with this migration pattern. <br> • Or create your own tool to migrate the data incrementally.|
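A lift-and-shift run can be sketched with the FHIR bulk `$export`/`$import` operations. The server URLs and token below are placeholders, and the script only prints the export request rather than issuing it; treat it as an outline of the pattern, not a migration tool.

```shell
# Illustrative sketch: server URLs and token are placeholders; the request is printed, not sent.
SOURCE="https://my-api-for-fhir.azurehealthcareapis.com"          # hypothetical Azure API for FHIR server
DEST="https://myworkspace-myfhir.fhir.azurehealthcareapis.com"    # hypothetical AHDS FHIR service
TOKEN="<bearer-token>"

# Step 1: kick off a system-wide export on the source (async, per the FHIR bulk data spec):
export_call="curl -X GET '$SOURCE/\$export' -H 'Authorization: Bearer $TOKEN' -H 'Accept: application/fhir+json' -H 'Prefer: respond-async'"
echo "$export_call"

# Step 2: after the exported NDJSON files are staged in storage, build a \$import payload
# and POST it to the destination's \$import endpoint (see the linked GitHub repo for payload details).
```

The OSS migration tool automates this export/stage/import loop for the incremental-copy pattern.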
### OSS migration tool considerations
+If you decide to use the OSS migration tool, review and understand the migration tool's [capabilities and limitations](https://github.com/Azure/apiforfhir-migration-tool/blob/main/incremental-copy-docs/Appendix.md).
+If you decide to use the OSS migration tool, review and understand the migration toolΓÇÖs [capabilities and limitations](https://github.com/Azure/apiforfhir-migration-tool/blob/main/incremental-copy-docs/Appendix.md).
#### Prepare Azure API for FHIR server
Deploy a new Azure Health Data Services FHIR Service server.
- Then deploy an Azure Health Data Services FHIR Service server. More information: [Deploy a FHIR service within Azure Health Data Services](fhir-portal-quickstart.md) -- Configure your new Azure Health Data Services FHIR Service server. If you need to use the same configurations as you have in Azure API for FHIR for your new server, see the recommended list of what to check for in the [migration tool documentation](https://go.microsoft.com/fwlink/?linkid=2248324). Configure the settings before you migrate.
+- Configure your new Azure Health Data Services FHIR Service server. If you need to use the same configurations as you have in Azure API for FHIR for your new server, see the recommended list of what to check for in the [migration tool documentation](https://github.com/Azure/apiforfhir-migration-tool/blob/main/incremental-copy-docs/Appendix.md). Configure the settings before you migrate.
## Step 3: Migrate data
-Choose the migration pattern that works best for your organization. If you're using OSS migration tools, follow the instructions on [GitHub](https://go.microsoft.com/fwlink/?linkid=2248130).
+Choose the migration pattern that works best for your organization. If you're using OSS migration tools, follow the instructions on [GitHub](https://github.com/Azure/apiforfhir-migration-tool).
## Step 4: Migrate applications and reconfigure settings
Migrate applications that were pointing to the old FHIR server.
## Step 5: Cut over to Azure Health Data Services FHIR services
-After you're confident that your Azure Health Data Services FHIR Service server is stable, you can begin using Azure Health Data Services FHIR Service to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure API for FHIR, delete data from the intermediate storage account that was used in the migration tool if necessary, delete data from your Azure API for FHIR server, and decommission your Azure API for FHIR account.
+After you're confident that your Azure Health Data Services FHIR Service server is stable, you can begin using Azure Health Data Services FHIR service to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure API for FHIR, delete data from the intermediate storage account that was used in the migration tool if necessary, delete data from your Azure API for FHIR server, and decommission your Azure API for FHIR account.
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
The IoT Central REST API lets you develop client applications that integrate wit
Every IoT Central REST API call requires an authorization header that IoT Central uses to determine the identity of the caller and the permissions that caller is granted within the application.
-This article describes the types of token you can use in the authorization header, and how to get them.
+This article describes the types of token you can use in the authorization header, and how to get them. Service principals are the recommended method for access management for IoT Central REST APIs.
## Token types

To access an IoT Central application using the REST API, you can use an:

-- _Azure Active Directory bearer token_. A bearer token is associated with an Azure Active Directory user account or service principal. The token grants the caller the same permissions the user or service principal has in the IoT Central application.
+- _Azure Active Directory bearer token_. A bearer token is associated with an Azure Active Directory user account or service principal. The token grants the caller the same permissions the user or service principal has in the IoT Central application.
- IoT Central API token. An API token is associated with a role in your IoT Central application. Use a bearer token associated with your user account while you're developing and testing automation and scripts that use the REST API. Use a bearer token that's associated with a service principal for production automation and scripts. Use a bearer token in preference to an API token to reduce the risk of leaks and problems when tokens expire.
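The two token types above produce differently shaped `Authorization` headers. The following sketch is illustrative only: the app subdomain `myapp`, the API version string, and both token values are placeholders, not values taken from this article.

```shell
# Placeholder values: replace APP and the token strings with your own.
APP="myapp"
URL="https://${APP}.azureiotcentral.com/api/devices?api-version=2022-07-31"

# Azure Active Directory bearer tokens use the "Bearer" scheme.
curl -s "$URL" -H "Authorization: Bearer <aad-access-token>" || echo "request failed (placeholder app)"

# IoT Central API tokens are sent as-is; they begin with "SharedAccessSignature".
curl -s "$URL" -H "Authorization: SharedAccessSignature sr=...&sig=...&skn=mytoken" || echo "request failed (placeholder app)"
```

A successful call returns a JSON collection of devices; a `401` response usually means the token has expired or the wrong header scheme was used for the token type.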
key-vault Create Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate.md
The following descriptions correspond to the green lettered steps in the precedi
4. Your chosen CA responds with an X509 Certificate. 5. Your application completes the new certificate creation with a merger of the X509 Certificate from your CA. -- **Create a certificate with a known issuer provider:** This method requires you to do a one-time task of creating an issuer object. Once an issuer object is created in you key vault, its name can be referenced in the policy of the KV certificate. A request to create such a KV certificate will create a key pair in the vault and communicate with the issuer provider service using the information in the referenced issuer object to get an x509 certificate. The x509 certificate is retrieved from the issuer service and is merged with the key pair to complete the KV certificate creation.
+- **Create a certificate with a known issuer provider:** This method requires you to do a one-time task of creating an issuer object. Once an issuer object is created in your key vault, its name can be referenced in the policy of the KV certificate. A request to create such a KV certificate will create a key pair in the vault and communicate with the issuer provider service using the information in the referenced issuer object to get an x509 certificate. The x509 certificate is retrieved from the issuer service and is merged with the key pair to complete the KV certificate creation.
![Create a certificate with a Key Vault partnered certificate authority](../media/certificate-authority-2.png)
When an order is placed with the issuer provider, it may honor or override the x
## See Also - How-to guide to create certificates in Key Vault using [Portal](./quick-create-portal.md), [Azure CLI](./quick-create-cli.md), [Azure PowerShell](./quick-create-powershell.md)
+ - [Monitor and manage certificate creation](create-certificate-scenarios.md)
load-balancer Ipv6 Add To Existing Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-cli.md
Title: Add IPv6 to an IPv4 application in Azure virtual network - Azure CLI
description: This article shows how to deploy IPv6 addresses to an existing application in Azure virtual network using Azure CLI. -+ - Previously updated : 03/31/2020- Last updated : 09/27/2023+ ms.devlang: azurecli+ # Add IPv6 to an IPv4 application in Azure virtual network using Azure CLI
This article shows you how to add IPv6 addresses to an application that is using
## Create IPv6 addresses
-Create public IPv6 address with with [az network public-ip create](/cli/azure/network/public-ip) for your Standard Load Balancer. The following example creates an IPv6 public IP address named *PublicIP_v6* in the *myResourceGroupSLB* resource group:
+Create public IPv6 address with [az network public-ip create](/cli/azure/network/public-ip) for your Standard Load Balancer. The following example creates an IPv6 public IP address named *PublicIP_v6* in the *myResourceGroupSLB* resource group:
```azurecli-interactive az network public-ip create \
az network nic ip-config create \
## View IPv6 dual stack virtual network in Azure portal You can view the IPv6 dual stack virtual network in Azure portal as follows:
-1. In the portal's search bar, enter *myVnet*.
-2. When **myVnet** appears in the search results, select it. This launches the **Overview** page of the dual stack virtual network named *myVNet*. The dual stack virtual network shows the three NICs with both IPv4 and IPv6 configurations located in the dual stack subnet named *mySubnet*.
+1. In the portal's search bar, enter **virtual networks** and select **Virtual networks** in the search results.
+1. In the **Virtual Networks** window, select **myVNet**.
+1. Select **Connected devices** under **Settings** to view the attached network interfaces. The dual stack virtual network shows the three NICs with both IPv4 and IPv6 configurations.
- ![IPv6 dual stack virtual network in Azure](./media/ipv6-add-to-existing-vnet-powershell/ipv6-dual-stack-vnet.png)
+ :::image type="content" source="media/ipv6-add-to-existing-vnet-powershell/ipv6-dual-stack-addresses.png" alt-text="Screenshot of connected devices settings displaying IPv4 and IPv6 addresses on network interfaces.":::
## Clean up resources
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, VM, and all related resources.
+When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, VM, and all related resources.
```azurecli-interactive
-az group delete --name MyAzureResourceGroupSLB
+ az group delete --name MyAzureResourceGroupSLB
``` ## Next steps
load-balancer Ipv6 Add To Existing Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md
Title: Add an IPv4 application to IPv6 in Azure Virtual Network - PowerShell
description: This article shows how to deploy IPv6 addresses to an existing application in Azure virtual network using Azure PowerShell. --++ - Previously updated : 03/31/2020- Last updated : 09/27/2023+ + # Add an IPv4 application to IPv6 in Azure virtual network using PowerShell
This article shows you how to add IPv6 connectivity to an existing IPv
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Prerequisites
$NIC_3 | Set-AzNetworkInterface
## View IPv6 dual stack virtual network in Azure portal You can view the IPv6 dual stack virtual network in Azure portal as follows:
-1. In the portal's search bar, enter *myVnet*.
-2. When **myVnet** appears in the search results, select it. This launches the **Overview** page of the dual stack virtual network named *myVNet*. The dual stack virtual network shows the three NICs with both IPv4 and IPv6 configurations located in the dual stack subnet named *mySubnet*.
+1. In the portal's search bar, enter **virtual networks** and select **Virtual networks** in the search results.
+1. In the **Virtual Networks** window, select **myVNet**.
+1. Select **Connected devices** under **Settings** to view the attached network interfaces. The dual stack virtual network shows the three NICs with both IPv4 and IPv6 configurations.
- ![IPv6 dual stack virtual network in Azure](./media/ipv6-add-to-existing-vnet-powershell/ipv6-dual-stack-vnet.png)
+ :::image type="content" source="media/ipv6-add-to-existing-vnet-powershell/ipv6-dual-stack-addresses.png" alt-text="Screenshot of connected devices settings displaying IPv4 and IPv6 addresses on network interfaces.":::
+
## Clean up resources
load-balancer Ipv6 Configure Standard Load Balancer Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-configure-standard-load-balancer-template-json.md
Title: Deploy an IPv6 dual stack application in Azure virtual network - Resource
description: This article shows how to deploy an IPv6 dual stack application with Standard Load Balancer in Azure virtual network using Azure Resource Manager VM templates. --++ Last updated 03/31/2020-+ + # Deploy an IPv6 dual stack application in Azure virtual network - Template
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
Previously updated : 09/19/2022 Last updated : 09/27/2023 #customer-intent: As a cloud engineer with basic Load Balancer services, I need guidance and direction on migrating my workloads off basic to standard SKUs
>[!Important] >On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
-In this article, we'll discuss guidance for upgrading your Basic Load Balancer instances to Standard Load Balancer. Standard Load Balancer is recommended for all production instances and provides many [key differences](#basic-load-balancer-sku-vs-standard-load-balancer-sku) to your infrastructure.
+In this article, we discuss guidance for upgrading your Basic Load Balancer instances to Standard Load Balancer. Standard Load Balancer is recommended for all production instances and has many [key differences](#basic-load-balancer-sku-vs-standard-load-balancer-sku) from Basic Load Balancer.
## Steps to complete the upgrade
This section lists out some key differences between these two Load Balancer SKUs
| **[Multiple front ends](load-balancer-multivip-overview.md)** | Inbound and [outbound](load-balancer-outbound-connections.md) | Inbound only | | **Management Operations** | Most operations < 30 seconds | Most operations 60-90+ seconds | | **SLA** | [99.99%](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) | Not available |
-| **Global VNet Peering Support** | Standard ILB is supported via Global VNet Peering | Not supported |
+| **Global Virtual Network Peering Support** | Standard ILB is supported via Global Virtual Network Peering | Not supported |
| **[NAT Gateway Support](../virtual-network/nat-gateway/nat-overview.md)** | Both Standard ILB and Standard Public Load Balancer are supported via Nat Gateway | Not supported | | **[Private Link Support](../private-link/private-link-overview.md)** | Standard ILB is supported via Private Link | Not supported | | **[Global tier (Preview)](cross-region-overview.md)** | Standard Load Balancer supports the Global tier for Public LBs enabling cross-region load balancing | Not supported |
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multivip-overview.md
Azure Load Balancer allows you to load balance services on multiple ports, multi
This article describes the fundamentals of load balancing across multiple IP addresses using the same port and protocol. If you only intend to expose services on one IP address, you can find simplified instructions for [public](./quickstart-load-balancer-standard-public-portal.md) or [internal](./quickstart-load-balancer-standard-internal-portal.md) load balancer configurations. Adding multiple frontends is incremental to a single frontend configuration. Using the concepts in this article, you can expand a simplified configuration at any time.
-When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with a load balancing rule. The health probe referenced by the load balancing rule is used to determine the health of a VM on a certain port and protocol. Based on the health probe results, new flows are sent to VMs in the backend pool. The frontend is defined by a three-tuple comprised of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations (part of the NIC resource) which reference the Load Balancer backend pool.
+When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with a load balancing rule. The health probe referenced by the load balancing rule is used to determine the health of a VM on a certain port and protocol. Based on the health probe results, new flows are sent to VMs in the backend pool. The frontend is defined using a three-tuple consisting of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations (part of the NIC resource) that reference the Load Balancer backend pool.
The following table contains some example frontend configurations:
Azure Load Balancer provides flexibility in defining the load balancing rules. A
1. The default rule with no backend port reuse. 2. The Floating IP rule where backend ports are reused.
-Azure Load Balancer allows you to mix both rule types on the same load balancer configuration. The load balancer can use them simultaneously for a given VM, or any combination, if you abide by the constraints of the rule. The rule type you choose depends on the requirements of your application and the complexity of supporting that configuration. You should evaluate which rule types are best for your scenario. We'll explore these scenarios further by starting with the default behavior.
+Azure Load Balancer allows you to mix both rule types on the same load balancer configuration. The load balancer can use them simultaneously for a given VM, or any combination, if you abide by the constraints of the rule. The rule type you choose depends on the requirements of your application and the complexity of supporting that configuration. You should evaluate which rule types are best for your scenario. We explore these scenarios further by starting with the default behavior.
## Rule type #1: No backend port reuse In this scenario, the frontends are configured as follows:
If you want to reuse the backend port across multiple rules, you must enable Flo
*Floating IP* is Azure's terminology for a portion of what is known as Direct Server Return (DSR). DSR consists of two parts: a flow topology and an IP address mapping scheme. At a platform level, Azure Load Balancer always operates in a DSR flow topology regardless of whether Floating IP is enabled or not. This means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin.
-With the default rule type, Azure exposes a traditional load balancing IP address mapping scheme for ease of use. Enabling Floating IP changes the IP address mapping scheme to allow for more flexibility as explained below.
+With the default rule type, Azure exposes a traditional load balancing IP address mapping scheme for ease of use. Enabling Floating IP changes the IP address mapping scheme to allow for more flexibility.
:::image type="content" source="media/load-balancer-multivip-overview/load-balancer-multivip-dsr.png" alt-text="Diagram of load balancer traffic for multiple frontend IPs with floating IP.":::
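Floating IP is enabled per load balancing rule. As a hedged sketch using the Azure CLI (the resource names here are illustrative, not from this article), the `--floating-ip` parameter of `az network lb rule create` turns on the behavior for a rule that reuses a backend port:

```azurecli
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myFloatingIPRule \
  --protocol tcp \
  --frontend-ip-name myFrontEndIP2 \
  --frontend-port 443 \
  --backend-pool-name myBackEndPool \
  --backend-port 443 \
  --floating-ip true
```

Remember that with Floating IP enabled, the guest OS in each backend VM must also be configured with a loopback interface that carries the frontend IP address, or the VM won't answer traffic for that frontend.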
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
An Azure resource group is a logical container into which you deploy and manage
Create a resource group with [az group create](/cli/azure/group#az-group-create).
-```azurecli
- az group create \
- --name CreateIntLBQS-rg \
- --location westus3
-
+```azurecli-interactive
+ az group create \
+ --name CreateIntLBQS-rg \
+ --location westus3
``` When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
Before you deploy VMs and test your load balancer, create the supporting virtual
Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create).
-```azurecli
+```azurecli-interactive
az network vnet create \ --resource-group CreateIntLBQS-rg \ --location westus3 \
In this example, you create an Azure Bastion host. The Azure Bastion host is use
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the Azure Bastion host.
-```azurecli
+```azurecli-interactive
az network public-ip create \ --resource-group CreateIntLBQS-rg \ --name myBastionIP \
az network public-ip create \
Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a subnet.
-```azurecli
+```azurecli-interactive
az network vnet subnet create \ --resource-group CreateIntLBQS-rg \ --name AzureBastionSubnet \
az network vnet subnet create \
Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a host.
-```azurecli
+```azurecli-interactive
az network bastion create \ --resource-group CreateIntLBQS-rg \ --name myBastionHost \
This section details how you can create and configure the following components o
Create an internal load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create).
-```azurecli
+```azurecli-interactive
az network lb create \ --resource-group CreateIntLBQS-rg \ --name myLoadBalancer \
A virtual machine with a failed probe check is removed from the load balancer. T
Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create).
-```azurecli
+```azurecli-interactive
az network lb probe create \ --resource-group CreateIntLBQS-rg \ --lb-name myLoadBalancer \
A load balancer rule defines:
Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create).
-```azurecli
+```azurecli-interactive
az network lb rule create \ --resource-group CreateIntLBQS-rg \ --lb-name myLoadBalancer \
For a standard load balancer, the VMs in the backend pool are required to have n
To create a network security group, use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
-```azurecli
+```azurecli-interactive
az network nsg create \ --resource-group CreateIntLBQS-rg \ --name myNSG
To create a network security group, use [az network nsg create](/cli/azure/netwo
To create a network security group rule, use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create).
-```azurecli
+```azurecli-interactive
az network nsg rule create \ --resource-group CreateIntLBQS-rg \ --nsg-name myNSG \
In this section, you create:
Create two network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create).
-```azurecli
+```azurecli-interactive
array=(myNicVM1 myNicVM2) for vmnic in "${array[@]}" do
Create two network interfaces with [az network nic create](/cli/azure/network/ni
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create).
-```azurecli
+```azurecli-interactive
array=(1 2) for n in "${array[@]}" do
It can take a few minutes for the VMs to deploy.
Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add).
-```azurecli
+```azurecli-interactive
array=(VM1 VM2) for vm in "${array[@]}" do
To provide outbound internet access for resources in the backend pool, create a
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a single IP for the outbound connectivity.
-```azurecli
+```azurecli-interactive
az network public-ip create \ --resource-group CreateIntLBQS-rg \ --name myNATgatewayIP \
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public
Use [az network nat gateway create](/cli/azure/network/nat#az-network-nat-gateway-create) to create the NAT gateway resource. The public IP created in the previous step is associated with the NAT gateway.
-```azurecli
+```azurecli-interactive
az network nat gateway create \ --resource-group CreateIntLBQS-rg \ --name myNATgateway \
Use [az network nat gateway create](/cli/azure/network/nat#az-network-nat-gatewa
Configure the source subnet in virtual network to use a specific NAT gateway resource with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update).
-```azurecli
+```azurecli-interactive
az network vnet subnet update \ --resource-group CreateIntLBQS-rg \ --vnet-name myVNet \
Configure the source subnet in virtual network to use a specific NAT gateway res
Create the network interface with [az network nic create](/cli/azure/network/nic#az-network-nic-create).
-```azurecli
+```azurecli-interactive
az network nic create \ --resource-group CreateIntLBQS-rg \ --name myNicTestVM \
Create the network interface with [az network nic create](/cli/azure/network/nic
``` Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create).
-```azurecli
+```azurecli-interactive
az vm create \ --resource-group CreateIntLBQS-rg \ --name myTestVM \
You might need to wait a few minutes for the virtual machine to deploy.
Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) to install IIS on the backend virtual machines and set the default website to the computer name.
-```azurecli
+```azurecli-interactive
array=(myVM1 myVM2) for vm in "${array[@]}" do
Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) to instal
When your resources are no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, load balancer, and all related resources.
-```azurecli
+```azurecli-interactive
az group delete \ --name CreateIntLBQS-rg ```
load-balancer Quickstart Load Balancer Standard Public Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-bicep.md
Title: "Quickstart: Create a public load balancer - Bicep"
description: This quickstart shows how to create a load balancer using a Bicep file. -+ Previously updated : 08/17/2022- Last updated : 09/27/2023+ #Customer intent: I want to create a load balancer by using a Bicep file so that I can load balance internet traffic to VMs.
To find more Bicep files or ARM templates that are related to Azure Load Balance
# [CLI](#tab/CLI) ```azurecli
- az group create --name exampleRG --location centralus
+ az group create --name exampleRG --location EastUS
az deployment group create --resource-group exampleRG --template-file main.bicep ``` # [PowerShell](#tab/PowerShell) ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location centralus
+ New-AzResourceGroup -Name exampleRG -Location EastUS
New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep ``` > [!NOTE]
- > The Bicep file deployment creates three availability zones. Availability zones are supported only in [certain regions](../availability-zones/az-overview.md). Use one of the supported regions. If you aren't sure, enter **centralus**.
+ > The Bicep file deployment creates three availability zones. Availability zones are supported only in [certain regions](../availability-zones/az-overview.md). Use one of the supported regions. If you aren't sure, enter **EastUS**.
- You will be prompted to enter the following values:
+ You're prompted to enter the following values:
- **projectName**: used for generating resource names. - **adminUsername**: virtual machine administrator username.
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
description: This quickstart shows how to create a public load balancer using th
Previously updated : 03/16/2022 Last updated : 09/25/2023 #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
# Quickstart: Create a public load balancer to load balance VMs using the Azure CLI
-Get started with Azure Load Balancer by using the Azure CLI to create a public load balancer and two virtual machines. Additional resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
+Get started with Azure Load Balancer by using the Azure CLI to create a public load balancer and two virtual machines. Along with these resources, you deploy Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
:::image type="content" source="media/quickstart-load-balancer-standard-public-portal/public-load-balancer-resources.png" alt-text="Diagram of resources deployed for a standard public load balancer.":::
Create a resource group with [az group create](/cli/azure/group#az-group-create)
Before you deploy VMs and test your load balancer, create the supporting virtual network and subnet.
-Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). The virtual network and subnet will contain the resources deployed later in this article.
+Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). The virtual network and subnet contain the resources deployed later in this article.
```azurecli az network vnet create \
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public
--zone 1 2 3 ```
-To create a zonal public IP address in Zone 1, use the following command:
+To create a zonal public IP address in Zone 1 instead, use the following command:
```azurecli az network public-ip create \
Create a network security group rule using [az network nsg rule create](/cli/azu
## Create a bastion host
-In this section, you'll create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer.
+In this section, you create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer.
> [!IMPORTANT] > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public
--zone 1 2 3 ```
-To create a zonal redundant public IP address in Zone 1:
+To create a zonal public IP address in Zone 1 instead, use the following command:
```azurecli az network public-ip create \
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
Title: "Tutorial: Create a multiple virtual machines inbound NAT rule - Azure portal"
+ Title: "Tutorial: Create a multiple virtual machine inbound NAT rule - Azure portal"
description: In this tutorial, learn how to configure port forwarding using Azure Load Balancer to create a connection to multiple virtual machines in an Azure virtual network. Previously updated : 03/10/2022 Last updated : 09/27/2023
-# Tutorial: Create a multiple virtual machines inbound NAT rule using the Azure portal
+# Tutorial: Create a multiple virtual machine inbound NAT rule using the Azure portal
Inbound NAT rules allow you to connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.
In this tutorial, you learn how to:
## Create virtual network and virtual machines
-A virtual network and subnet is required for the resources in the tutorial. In this section, you'll create a virtual network and virtual machines for the later steps.
+A virtual network and subnet is required for the resources in the tutorial. In this section, you create a virtual network and virtual machines for the later steps.
1. Sign in to the [Azure portal](https://portal.azure.com).
A virtual network and subnet is required for the resources in the tutorial. In t
8. Select **Create**.
-9. At the **Generate new key pair** prompt, select **Download private key and create resource**. Your key file will be downloaded as myKey.pem. Ensure you know where the .pem file was downloaded, you'll need the path to the key file in later steps.
+9. At the **Generate new key pair** prompt, select **Download private key and create resource**. Your key file is downloaded as myKey.pem. Ensure you know where the .pem file was downloaded; you need the path to the key file in later steps.
8. Follow the steps 1 through 8 to create another VM with the following values and all the other settings the same as **myVM1**:
A virtual network and subnet is required for the resources in the tutorial. In t
## Create a load balancer
-You'll create a load balancer in this section. The frontend IP, backend pool, load-balancing, and inbound NAT rules are configured as part of the creation.
+You create a load balancer in this section. The frontend IP, backend pool, load-balancing, and inbound NAT rules are configured as part of the creation.
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
You'll create a load balancer in this section. The frontend IP, backend pool, lo
## Create a multiple VMs inbound NAT rule
-In this section, you'll create a multiple instance inbound NAT rule to the backend pool of the load balancer.
+In this section, you create a multiple instance inbound NAT rule to the backend pool of the load balancer.
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
In this section, you'll create a multiple instance inbound NAT rule to the backe
## Create a NAT gateway
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
+In this section, you create a NAT gateway for outbound internet access for resources in the virtual network.
For more information about outbound connections and Azure Virtual Network NAT, see [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md) and [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
In this section, you'll SSH to the virtual machines through the inbound NAT rule
## Test the web server
-You'll open your web browser in this section and enter the IP address for the load balancer you retrieved in the previous step.
+You open your web browser in this section and enter the IP address for the load balancer you retrieved in the previous step.
1. Open your web browser.
machine-learning Concept Endpoints Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md
reviewer: msakande-+ Last updated 09/13/2023 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
description: Learn how network traffic flows between components when your Azure
-+
The `public_network_access` flag of the Azure Machine Learning workspace also go
#### Outbound communication
-__Outbound__ communication from a deployment can be secured at the workspace level by enabling managed virtual network isolation for your Azure Machine Learning workspace (preview). Enabling this setting causes Azure Machine Learning to create a managed virtual network for the workspace. Any deployments in the workspace's managed virtual network can use the virtual network's private endpoints for outbound communication.
+__Outbound__ communication from a deployment can be secured at the workspace level by enabling managed virtual network isolation for your Azure Machine Learning workspace. Enabling this setting causes Azure Machine Learning to create a managed virtual network for the workspace. Any deployments in the workspace's managed virtual network can use the virtual network's private endpoints for outbound communication.
-The [legacy network isolation method for securing outbound communication](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) worked by disabling a deployment's `egress_public_network_access` flag. We strongly recommend that you secure outbound communication for deployments by using a [workspace managed virtual network](concept-secure-online-endpoint.md) instead. Unlike the legacy approach, the `egress_public_network_access` flag for the deployment no longer applies when you use a workspace managed virtual network with your deployment (preview). Instead, outbound communication will be controlled by the rules set for the workspace's managed virtual network.
+The [legacy network isolation method for securing outbound communication](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) worked by disabling a deployment's `egress_public_network_access` flag. We strongly recommend that you secure outbound communication for deployments by using a [workspace managed virtual network](concept-secure-online-endpoint.md) instead. Unlike the legacy approach, the `egress_public_network_access` flag for the deployment no longer applies when you use a workspace managed virtual network with your deployment. Instead, outbound communication will be controlled by the rules set for the workspace's managed virtual network.
:::moniker-end
machine-learning Concept Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-online-endpoint.md
reviewer: msakande Previously updated : 08/15/2023 Last updated : 09/27/2023 # Network isolation with managed online endpoints
Last updated 08/15/2023
When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md). In this article, you'll learn how a private endpoint can be used to secure inbound communication to a managed online endpoint. You'll also learn how a workspace managed virtual network can be used to provide secure communication between deployments and resources.

You can secure inbound scoring requests from clients to an _online endpoint_ and secure outbound communications between a _deployment_, the Azure resources it uses, and private resources. Security for inbound and outbound communication is configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints-online.md). The following architecture diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from a client's virtual network flow through the workspace's private endpoint to the managed online endpoint. Outbound communications from deployments to services are handled through private endpoints from the workspace's managed virtual network to those service instances.
The following table lists the supported configurations when configuring inbound
## Next steps

- [Workspace managed network isolation](how-to-managed-network.md)
-- [How to secure managed online endpoints with network isolation](how-to-secure-online-endpoint.md)
+- [How to secure managed online endpoints with network isolation](how-to-secure-online-endpoint.md)
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
Last updated 07/05/2023
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-Learn how to create a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
+Learn how to create a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#training-compute-targets). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
Choose the tab for the environment you're using for other prerequisites.
# [Studio](#tab/azure-studio)
-* No extra prerequisites.
+* No extra prerequisites.
# [Studio (preview)](#tab/azure-studio-preview)
Choose the tab for the environment you're using for other prerequisites.
**Time estimate**: Approximately 5 minutes.
-Creating a compute instance is a one time process for your workspace. You can reuse the compute as a development workstation or as a compute target for training. You can have multiple compute instances attached to your workspace.
+Creating a compute instance is a one time process for your workspace. You can reuse the compute as a development workstation or as a compute target for training. You can have multiple compute instances attached to your workspace.
The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota, to ensure you're able to restart the compute instance. It isn't possible to change the virtual machine size of a compute instance once it's created.
-The fastest way to create a compute instance is to follow the [Create resources you need to get started](quickstart-create-resources.md).
+The fastest way to create a compute instance is to follow the [Create resources you need to get started](quickstart-create-resources.md).
Or use the following examples to create a compute instance with more options:
Where the file *create-instance.yml* is:
1. Under __Manage__, select __Compute__.
1. Select **Compute instance** at the top.
1. If you have no compute instances, select **Create** in the middle of the page.
-
+ :::image type="content" source="media/how-to-create-attach-studio/create-compute-target.png" alt-text="Create compute target"::: 1. If you see a list of compute resources, select **+New** above the list.
Where the file *create-instance.yml* is:
* If you're using an __Azure Virtual Network__, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network. You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
- * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).
+ * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).
    * Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of).
    * Provision with a setup script - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md).
-
+ # [Studio (preview)](#tab/azure-studio-preview) 1. Navigate to [Azure Machine Learning studio](https://ml.azure.com). 1. Under __Manage__, select __Compute__. 1. Select **Compute instance** at the top. 1. If you have no compute instances, select **Create** in the middle of the page.
-
+ :::image type="content" source="media/how-to-create-attach-studio/create-compute-target.png" alt-text="Screenshot shows create in the middle of the page."::: 1. If you see a list of compute resources, select **+New** above the list.
Where the file *create-instance.yml* is:
* If you're using an __Azure Virtual Network__, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network. You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
- * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).
+ * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).
    * Allow root access. (preview)
1. Select **Applications** if you want to add custom applications to use on your compute instance, such as RStudio or Posit Workbench. See [Add custom applications such as RStudio or Posit Workbench](#add-custom-applications-such-as-rstudio-or-posit-workbench).
A compute instance is considered inactive if the following conditions are met:
* No active Jupyter terminal sessions
* No active Azure Machine Learning runs or experiments
* No SSH connections
-* No VS Code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are autoterminated if VS Code detects no activity for 3 hours.
+* No VS Code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are autoterminated if VS Code detects no activity for 3 hours.
* No custom applications are running on the compute
-A compute instance won't be considered idle if any custom application is running. There are also some basic bounds around inactivity time periods; compute instance must be inactive for a minimum of 15 mins and a maximum of three days.
+A compute instance won't be considered idle if any custom application is running. There are also some basic bounds around inactivity time periods; the compute instance must be inactive for a minimum of 15 minutes and a maximum of three days.
Also, if a compute instance has already been idle for a certain amount of time, if idle shutdown settings are updated to an amount of time shorter than the current idle duration, the idle time clock is reset to 0. For example, if the compute instance has already been idle for 20 minutes, and the shutdown settings are updated to 15 minutes, the idle time clock is reset to 0.
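As an illustrative model only (this isn't the service's actual code, and `idle_clock_after_update` is a name invented here), the reset rule can be sketched as:

```python
def idle_clock_after_update(idle_minutes_so_far, new_shutdown_minutes):
    """Model of the documented rule: if the idle-shutdown window is updated
    to a value shorter than the time already spent idle, the idle clock
    resets to 0; otherwise the accrued idle time is kept."""
    if new_shutdown_minutes < idle_minutes_so_far:
        return 0
    return idle_minutes_so_far

# The documented example: already idle for 20 minutes, setting updated to 15 minutes.
print(idle_clock_after_update(20, 15))  # prints 0: the idle clock restarts
```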
You can't change the idle time of an existing compute instance with the CLI.
* When creating a new compute instance:
- 1. Select **Advanced** after completing required settings.
+ 1. Select **Advanced** after completing required settings.
1. Select **Enable idle shutdown** to enable or disable. :::image type="content" source="media/how-to-create-compute-instance/enable-idle-shutdown.png" alt-text="Screenshot: Enable compute instance idle shutdown." lightbox="media/how-to-create-compute-instance/enable-idle-shutdown.png":::
-
- 1. Specify the shutdown period when enabled.
+
+ 1. Specify the shutdown period when enabled.
* For an existing compute instance:
You can't change the idle time of an existing compute instance with the CLI.
* When creating a new compute instance:
- 1. Select **Next** to advance to **Scheduling** after completing required settings.
+ 1. Select **Next** to advance to **Scheduling** after completing required settings.
1. Select **Enable idle shutdown** to enable or disable. :::image type="content" source="media/how-to-create-compute-instance/enable-idle-shutdown-preview.png" alt-text="Screenshot: Enable compute instance idle shutdown." lightbox="media/how-to-create-compute-instance/enable-idle-shutdown-preview.png":::
Once the compute instance is created, you can view, edit, or add new schedules f
# [Studio (preview)](#tab/azure-studio-preview)

1. [Fill out the form](?tabs=azure-studio-preview#create).
-1. Select **Next** to advance to **Scheduling** after completing required settings.
+1. Select **Next** to advance to **Scheduling** after completing required settings.
1. Select **Add schedule** to add a new schedule. :::image type="content" source="media/how-to-create-compute-instance/add-schedule-preview.png" alt-text="Screenshot: Add schedule in advanced settings.":::
In the Resource Manager template, add:
```

Then use either cron or LogicApps expressions to define the schedule that starts or stops the instance in your parameter file:
-
+ ```json
- "schedules": {
- "value": {
- "computeStartStop": [
- {
- "triggerType": "Cron",
- "cron": {
- "timeZone": "UTC",
- "expression": "0 18 * * *"
- },
- "action": "Stop",
- "status": "Enabled"
+ "schedules": {
+ "value": {
+ "computeStartStop": [
+ {
+ "triggerType": "Cron",
+ "cron": {
+ "timeZone": "UTC",
+ "expression": "0 18 * * *"
},
- {
- "triggerType": "Cron",
- "cron": {
- "timeZone": "UTC",
- "expression": "0 8 * * *"
- },
- "action": "Start",
- "status": "Enabled"
+ "action": "Stop",
+ "status": "Enabled"
+ },
+ {
+ "triggerType": "Cron",
+ "cron": {
+ "timeZone": "UTC",
+ "expression": "0 8 * * *"
+ },
+ "action": "Start",
+ "status": "Enabled"
+ },
+ {
+ "triggerType": "Recurrence",
+ "recurrence": {
+ "frequency": "Day",
+ "interval": 1,
+ "timeZone": "UTC",
+ "schedule": {
+ "hours": [17],
+ "minutes": [0]
+ }
},
- {
- "triggerType":ΓÇ»"Recurrence",
- "recurrence":ΓÇ»{
- "frequency":ΓÇ»"Day",
- "interval": 1,
- "timeZone":ΓÇ»"UTC",
-   "schedule": {
- "hours":ΓÇ»[17],
-     "minutes": [0]
- }
- },
- "action":ΓÇ»"Stop",
- "status":ΓÇ»"Enabled"
- }
- ]
- }
+ "action": "Stop",
+ "status": "Enabled"
+ }
+ ]
}
+ }
```

* Action can have value of `Start` or `Stop`.
* For trigger type of `Recurrence`, use the same syntax as logic app, with this [recurrence schema](../logic-apps/logic-apps-workflow-actions-triggers.md#recurrence-trigger).
-* For trigger type of `cron`, use standard cron syntax:
+* For trigger type of `cron`, use standard cron syntax:
```cron
// Crontab expression format:
Following is a sample policy to default a shutdown schedule at 10 PM PST.
] } }
-}
+}
```

## Create on behalf of
As an administrator, you can create a compute instance on behalf of a data scien
## Assign managed identity
-You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
+You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
# [Python SDK](#tab/python)
name: myinstance
type: computeinstance identity: type: user_assigned
- user_assigned_identities:
+ user_assigned_identities:
- resource_id: identity_resource_id ```
You can create compute instance with managed identity from Azure Machine Learnin
Once the managed identity is created, grant the managed identity at least Storage Blob Data Reader role on the storage account of the datastore, see [Accessing storage services](how-to-identity-based-service-authentication.md?tabs=cli#accessing-storage-services). Then, when you work on the compute instance, the managed identity is used automatically to authenticate against datastores. > [!NOTE]
-> The name of the created system managed identity will be in the format /workspace-name/computes/compute-instance-name in your Azure Active Directory.
+> The name of the created system managed identity will be in the format /workspace-name/computes/compute-instance-name in your Azure Active Directory.
You can also use the managed identity manually to authenticate against other Azure resources. The following example shows how to use it to get an Azure Resource Manager access token:
def get_access_token_msi(resource):
arm_access_token = get_access_token_msi("https://management.azure.com") ```
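The body of `get_access_token_msi` is elided in the view above. As a hedged sketch (not the article's exact code), such a helper can be written against the Azure Instance Metadata Service (IMDS) token endpoint; `build_msi_token_request` is a helper name introduced here for illustration:

```python
import json
import urllib.parse
import urllib.request

# Azure Instance Metadata Service (IMDS) token endpoint: a fixed,
# link-local address reachable only from inside Azure compute.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_msi_token_request(resource, api_version="2018-02-01"):
    # IMDS expects the resource and api-version as query parameters,
    # plus the "Metadata: true" header on every request.
    query = urllib.parse.urlencode({"api-version": api_version, "resource": resource})
    request = urllib.request.Request(f"{IMDS_TOKEN_URL}?{query}")
    request.add_header("Metadata", "true")
    return request

def get_access_token_msi(resource):
    # Works only when run on Azure compute with a managed identity attached.
    with urllib.request.urlopen(build_msi_token_request(resource)) as response:
        return json.loads(response.read())["access_token"]

# arm_access_token = get_access_token_msi("https://management.azure.com")
```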
-To use Azure CLI with the managed identity for authentication, specify the identity client ID as the username when logging in:
+To use Azure CLI with the managed identity for authentication, specify the identity client ID as the username when logging in:
```azurecli
az login --identity --username $DEFAULT_IDENTITY_CLIENT_ID
```
## Enable SSH access
-SSH access is disabled by default. SSH access can't be enabled or disabled after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md).
+SSH access is disabled by default. SSH access can't be enabled or disabled after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md).
[!INCLUDE [amlinclude-info](includes/machine-learning-enable-ssh.md)]
Use either Studio or Studio (preview) to see how to set up applications.
1. Fill out the form to [create a new compute instance](?tabs=azure-studio-preview#create) 1. Select **Applications**
- 1. Select **Add application**
+ 1. Select **Add application**
:::image type="content" source="media/how-to-create-compute-instance/custom-service-setup-preview.png" alt-text="Screenshot showing Custom Service Setup.":::
Use either Studio or Studio (preview) to see how to set up applications.
RStudio is one of the most popular IDEs among R developers for ML and data science projects. You can easily set up Posit Workbench, which provides access to RStudio along with other development tools, to run on your compute instance using your own Posit license, and access the rich feature set that Posit Workbench offers.

1. Follow the steps listed above to **Add application** when creating your compute instance.
-1. Select **Posit Workbench (bring your own license)** in the **Application** dropdown and enter your Posit Workbench license key in the **License key** field. You can get your Posit Workbench license or trial license [from posit](https://posit.co).
+1. Select **Posit Workbench (bring your own license)** in the **Application** dropdown and enter your Posit Workbench license key in the **License key** field. You can get your Posit Workbench license or trial license [from posit](https://posit.co).
1. Select **Create** to add Posit Workbench application to your compute instance.
-
+ :::image type="content" source="media/how-to-create-compute-instance/rstudio-workbench.png" alt-text="Screenshot shows Posit Workbench settings." lightbox="media/how-to-create-compute-instance/rstudio-workbench.png"::: [!INCLUDE [private link ports](includes/machine-learning-private-link-ports.md)]
RStudio is one of the most popular IDEs among R developers for ML and data scien
> [!NOTE] > * Support for accessing your workspace file store from Posit Workbench is not yet available. > * When accessing multiple instances of Posit Workbench, if you see a "400 Bad Request. Request Header Or Cookie Too Large" error, use a new browser or access from a browser in incognito mode.
-
+ ### Setup RStudio (open source)

To use RStudio, set up a custom application as follows:

1. Follow the previous steps to **Add application** when creating your compute instance.
-1. Select **Custom Application** on the **Application** dropdown
+1. Select **Custom Application** in the **Application** dropdown list.
1. Configure the **Application name** you would like to use.
-1. Set up the application to run on **Target port** `8787` - the docker image for RStudio open source listed below needs to run on this Target port.
+1. Set up the application to run on **Target port** `8787` - the docker image for RStudio open source listed below needs to run on this Target port.
1. Set up the application to be accessed on **Published port** `8787` - you can configure the application to be accessed on a different Published port if you wish.
-1. Point the **Docker image** to `ghcr.io/azure/rocker-rstudio-ml-verse:latest`.
+1. Point the **Docker image** to `ghcr.io/azure/rocker-rstudio-ml-verse:latest`.
1. Select **Create** to set up RStudio as a custom application on your compute instance. :::image type="content" source="media/how-to-create-compute-instance/rstudio-open-source.png" alt-text="Screenshot shows form to set up RStudio as a custom application" lightbox="media/how-to-create-compute-instance/rstudio-open-source.png"::: [!INCLUDE [private link ports](includes/machine-learning-private-link-ports.md)]
-
+ ### Setup other custom applications

Set up other custom applications on your compute instance by providing the application in a Docker image.

1. Follow the previous steps to **Add application** when creating your compute instance.
-1. Select **Custom Application** on the **Application** dropdown.
+1. Select **Custom Application** on the **Application** dropdown.
1. Configure the **Application name**, the **Target port** you wish to run the application on, the **Published port** you wish to access the application on and the **Docker image** that contains your application. 1. Optionally, add **Environment variables** you wish to use for your application.
-1. Use **Bind mounts** to add access to the files in your default storage account:
- * Specify **/home/azureuser/cloudfiles** for **Host path**.
+1. Use **Bind mounts** to add access to the files in your default storage account:
+ * Specify **/home/azureuser/cloudfiles** for **Host path**.
    * Specify **/home/azureuser/cloudfiles** for the **Container path**.
    * Select **Add** to add this mounting. Because the files are mounted, changes you make to them are available in other compute instances and applications.
1. Select **Create** to set up the custom application on your compute instance.
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
You can configure your cloud deployment using YAML. Take a look at the sample YA
__tfserving-endpoint.yml__
-```yml
-$schema: https://azuremlsdk2.blob.core.windows.net/latest/managedOnlineEndpoint.schema.json
-name: tfserving-endpoint
-auth_mode: aml_token
-```
__tfserving-deployment.yml__
-```yml
-$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
-name: tfserving-deployment
-endpoint_name: tfserving-endpoint
-model:
- name: tfserving-mounted
- version: {{MODEL_VERSION}}
- path: ./half_plus_two
-environment_variables:
- MODEL_BASE_PATH: /var/azureml-app/azureml-models/tfserving-mounted/{{MODEL_VERSION}}
- MODEL_NAME: half_plus_two
-environment:
- #name: tfserving
- #version: 1
- image: docker.io/tensorflow/serving:latest
- inference_config:
- liveness_route:
- port: 8501
- path: /v1/models/half_plus_two
- readiness_route:
- port: 8501
- path: /v1/models/half_plus_two
- scoring_route:
- port: 8501
- path: /v1/models/half_plus_two:predict
-instance_type: Standard_DS3_v2
-instance_count: 1
-```
- # [Python SDK](#tab/python)
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Last updated 09/21/2022 -+ # Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)
As your needs change or requirements for automation increase you can also manage
[!INCLUDE [register-namespace](includes/machine-learning-register-namespace.md)]
-* When you use network isolation that is based on a workspace's managed virtual network (preview) with a deployment, you can use resources (Azure Container Registry (ACR), Storage account, Key Vault, and Application Insights) from a different resource group or subscription than that of your workspace. However, these resources must belong to the same tenant as your workspace. For limitations that apply to securing managed online endpoints using a workspace's managed virtual network, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md#limitations).
+* When you use network isolation that is based on a workspace's managed virtual network with a deployment, you can use resources (Azure Container Registry (ACR), Storage account, Key Vault, and Application Insights) from a different resource group or subscription than that of your workspace. However, these resources must belong to the same tenant as your workspace. For limitations that apply to securing managed online endpoints using a workspace's managed virtual network, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md#limitations).
* By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR doesn't currently support unicode characters in resource group names, use a resource group that doesn't contain these characters.
machine-learning How To Network Isolation Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-isolation-planning.md
Last updated 02/14/2023 -+ # Plan for network isolation
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
Previously updated : 08/18/2023 Last updated : 09/27/2023
[!INCLUDE [machine-learning-dev-v2](includes/machine-learning-dev-v2.md)]
-In this article, you'll use network isolation to secure a managed online endpoint. You'll create a managed online endpoint that uses an Azure Machine Learning workspace's private endpoint for secure inbound communication. You'll also configure the workspace with a **managed virtual network** that **allows only approved outbound** communication for deployments (preview). Finally, you'll create a deployment that uses the private endpoints of the workspace's managed virtual network for outbound communication.
-
+In this article, you'll use network isolation to secure a managed online endpoint. You'll create a managed online endpoint that uses an Azure Machine Learning workspace's private endpoint for secure inbound communication. You'll also configure the workspace with a **managed virtual network** that **allows only approved outbound** communication for deployments. Finally, you'll create a deployment that uses the private endpoints of the workspace's managed virtual network for outbound communication.
For examples that use the legacy method for network isolation, see the deployment files [deploy-moe-vnet-legacy.sh](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet-legacy.sh) (for deployment using a generic model) and [deploy-moe-vnet-mlflow-legacy.sh](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet-mlflow-legacy.sh) (for deployment using an MLflow model) in the azureml-examples GitHub repo.
To begin, you need an Azure subscription, CLI or SDK to interact with Azure Mach
For more information on how to create a new workspace or to upgrade your existing workspace to use a managed virtual network, see [Configure a managed virtual network to allow internet outbound](how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-internet-outbound).
- When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md).
+ When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier to allow access via the private endpoint. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). Also, the workspace should be set with the `image_build_compute` property, as deployment creation involves building of images. For more information on setting the `image_build_compute` property for your workspace, see [Create a workspace that uses a private endpoint](how-to-configure-private-link.md#create-a-workspace-that-uses-a-private-endpoint).
1. Configure the defaults for the CLI so that you can avoid passing in the values for your workspace and resource group multiple times.
The commands in this tutorial are in the file `deploy-managed-online-endpoint-wo
To create a secured managed online endpoint, create the endpoint in your workspace and set the endpoint's `public_network_access` to `disabled` to control inbound communication. The endpoint will then have to use the workspace's private endpoint for inbound communication.
-Because the workspace is configured to have a managed virtual network, any deployments of the endpoint will use the private endpoints of the managed virtual network for outbound communication (preview).
+Because the workspace is configured to have a managed virtual network, any deployments of the endpoint will use the private endpoints of the managed virtual network for outbound communication.
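As a sketch only (the endpoint name is a placeholder, and a real endpoint definition typically carries additional settings), the inbound side of such an endpoint definition might look like the following:

```yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-secure-endpoint
auth_mode: key
public_network_access: disabled
```

Setting `public_network_access: disabled` forces inbound scoring traffic to go through the workspace's private endpoint.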
1. Set the endpoint's name.
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
These prerequisites cover the submission of a Spark job from Azure Machine Learn
> [!NOTE]
-> To learn more about resource access while using Azure Machine Learning serverless Spark compute, and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs).
+>- To learn more about resource access while using Azure Machine Learning serverless Spark compute, and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs).
+>- Azure Machine Learning provides a [shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota) pool from which all users can access compute quota to perform testing for a limited time. When you use the serverless Spark compute, Azure Machine Learning allows you to access this shared quota for a short time.
### Attach user assigned managed identity using CLI v2
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
Prompt flow's runtime provides the computing resources required for the applicat
> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Permissions/roles need to use runtime
+## Permissions/roles for runtime management
-You need to assign enough permission to use runtime in Prompt flow. To assign a role, you need to have `owner` or have `Microsoft.Authorization/roleAssignments/write` permission on resource.
+To create and use a runtime for Prompt flow authoring, you need to have the `AzureML Data Scientist` role in the workspace. To learn more, see [Prerequisites](#prerequisites).
-To create and use runtime to author prompt flow, you need to have `AzureML Data Scientist` role in the workspace. To learn more, see [Prerequisites](#prerequisites)
+## Permissions/roles for deployments
+
+After deploying a Prompt flow, the endpoint must be granted the `AzureML Data Scientist` role on the workspace for successful inferencing. This can be done at any point after the endpoint has been created.
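A hedged sketch of that role assignment with the Azure CLI (the endpoint, workspace, resource group, and subscription names are placeholders, and the endpoint is assumed to use a system-assigned identity):

```shell
# Look up the endpoint's system-assigned identity, then grant it the
# AzureML Data Scientist role scoped to the workspace (placeholder names).
PRINCIPAL_ID=$(az ml online-endpoint show \
  --name my-endpoint --resource-group my-rg --workspace-name my-ws \
  --query identity.principal_id -o tsv)

az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "AzureML Data Scientist" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.MachineLearningServices/workspaces/my-ws"
```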
## Create runtime in UI
To create and use runtime to author prompt flow, you need to have `AzureML Data
> [!IMPORTANT] > Prompt flow is **not supported** in a workspace that has data isolation enabled. The enableDataIsolation flag can only be set at the workspace creation phase and can't be updated later. >
->Prompt flow is **not supported** in the project workspace which was created with a workspace hub. The workspace hub is a private preview feature.
+> Prompt flow is **not supported** in the project workspace which was created with a workspace hub. The workspace hub is a private preview feature.
> ### Create compute instance runtime in UI
-If you didn't have compute instance, create a new one: [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
+If you do not have a compute instance, create a new one: [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
1. Select **add compute instance runtime** on the runtime list page. :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add.png" alt-text="Screenshot of Prompt flow on the runtime add with compute instance runtime selected. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add.png":::
To use the runtime, assigning the `AzureML Data Scientist` role of workspace to
> [!NOTE] > This operation may take several minutes to take effect.
-## Using runtime in prompt flow authoring
+## Using runtime in Prompt flow authoring
When you're authoring your Prompt flow, you can select and change the runtime from the top-left corner of the flow page.
When performing a bulk test, you can use the original runtime in the flow or cha
We regularly update our base image (`mcr.microsoft.com/azureml/promptflow/promptflow-runtime`) to include the latest features and bug fixes. We recommend that you update your runtime to the [latest version](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime/tags/list) if possible.
-Every time you open the runtime detail page, we'll check whether there are new versions of the runtime. If there are new versions available, you'll see a notification at the top of the page. You can also manually check the latest version by clicking the **check version** button.
+Every time you open the runtime details page, we'll check whether there are new versions of the runtime. If there are new versions available, you'll see a notification at the top of the page. You can also manually check the latest version by clicking the **check version** button.
:::image type="content" source="./media/how-to-create-manage-runtime/runtime-update-env-notification.png" alt-text="Screenshot of the runtime detail page with checkout version highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-update-env-notification.png"::: Try to keep your runtime up to date to get the best experience and performance.
-Go to runtime detail page and select update button at the top. You can change new environment to update. If you select **use default environment** to update, system will attempt to update your runtime to the latest version.
+Go to the runtime details page and select the **Update** button at the top. Here you can select a new environment for your runtime. If you select **use default environment**, the system will attempt to update your runtime to the latest version.
:::image type="content" source="./media/how-to-create-manage-runtime/runtime-update-env.png" alt-text="Screenshot of the runtime detail page with updated selected. " lightbox = "./media/how-to-create-manage-runtime/runtime-update-env.png"::: > [!NOTE]
-> If you used a custom environment, you need to rebuild it using latest prompt flow image first, and then update your runtime with the new custom environment.
+> If you used a custom environment, you need to rebuild it using the latest Prompt flow image first, and then update your runtime with the new custom environment.
## Next steps - [Develop a standard flow](how-to-develop-a-standard-flow.md)-- [Develop a chat flow](how-to-develop-a-chat-flow.md)
+- [Develop a chat flow](how-to-develop-a-chat-flow.md)
machine-learning How To Deploy To Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md
For managed online endpoints, Azure Machine Learning reserves 20% of your comput
Each flow has a folder that contains the code/prompts, definition, and other artifacts of the flow. If you developed your flow with the UI, you can download the flow folder from the flow details page. If you developed your flow with the CLI or SDK, you should have the flow folder already.
-This article will use the [sample flow "basic-chat"](https://github.com/Azure/azureml-examples/examples/flows/chat/basic-chat) as an example to deploy to Azure Machine Learning managed online endpoint.
+This article will use the [sample flow "basic-chat"](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/basic-chat) as an example to deploy to Azure Machine Learning managed online endpoint.
> [!IMPORTANT] >
machine-learning How To Secure Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-secure-prompt-flow.md
Workspace managed virtual network is the recommended way to support network isol
## Known limitations

- Workspace hub / lean workspace and AI studio don't support bring your own virtual network.
-- Org registry didn't support managed virtual network.
- Managed online endpoint only supports workspaces with a managed virtual network. If you want to use your own virtual network, you may need one workspace for Prompt flow authoring with your virtual network and another workspace for Prompt flow deployment using a managed online endpoint with a workspace managed virtual network.

## Next steps
Workspace managed virtual network is the recommended way to support network isol
- [Workspace managed network isolation](../how-to-managed-network.md)
- [Secure Azure Kubernetes Service inferencing environment](../how-to-secure-kubernetes-inferencing-environment.md)
- [Secure your managed online endpoints with network isolation](../how-to-secure-online-endpoint.md)
-- [Secure your RAG workflows with network isolation](../how-to-secure-rag-workflows.md)
+- [Secure your RAG workflows with network isolation](../how-to-secure-rag-workflows.md)
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
-+ Last updated 01/24/2023
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `request_settings` | object | Scoring request settings for the deployment. See [RequestSettings](#requestsettings) for the set of configurable properties. | | |
| `liveness_probe` | object | Liveness probe settings for monitoring the health of the container regularly. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
| `readiness_probe` | object | Readiness probe settings for validating if the container is ready to serve traffic. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
-| `egress_public_network_access` | string |**Note:** This key is applicable when you use the [legacy network isolation method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) to secure outbound communication for a deployment. We strongly recommend that you secure outbound communication for deployments using [a workspace managed VNet](concept-secure-online-endpoint.md) (preview) instead. <br><br>This flag secures the deployment by restricting communication between the deployment and the Azure resources used by it. Set to `disabled` to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. This flag is applicable only for managed online endpoints. | `enabled`, `disabled` | `enabled` |
+| `egress_public_network_access` | string |**Note:** This key is applicable when you use the [legacy network isolation method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) to secure outbound communication for a deployment. We strongly recommend that you secure outbound communication for deployments using [a workspace managed VNet](concept-secure-online-endpoint.md) instead. <br><br>This flag secures the deployment by restricting communication between the deployment and the Azure resources used by it. Set to `disabled` to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. This flag is applicable only for managed online endpoints. | `enabled`, `disabled` | `enabled` |
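As a hedged illustration of the `egress_public_network_access` key (resource names are placeholders and other required deployment keys are omitted), a managed online deployment YAML using the legacy isolation method might set the flag like this:

```yaml
# Illustrative fragment only -- placeholder names, most required keys omitted.
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
instance_type: Standard_DS3_v2
instance_count: 1
egress_public_network_access: disabled  # model/code/image downloads go through a private endpoint
```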
### RequestSettings
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-**In-place automigration** from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with **Basic or General Purpose SKU**, data storage used **< 10 GiB** and **no complex features (CMK, AAD, Read Replica, Private Link) enabled**. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details.
+**In-place automigration** from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with **Basic or General Purpose SKU**, data storage used **<= 20 GiB** and **no complex features (CMK, AAD, Read Replica, Private Link) enabled**. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details.
The in-place migration provides a highly resilient and self-healing offline migration experience during a planned maintenance window, with less than **5 mins** of downtime. It uses backup and restore technology for faster migration time. This migration removes the overhead of manually migrating your server and ensures you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. The key phases of the migration are:
The in-place migration provides a highly resilient and self-healing offline migr
> In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. ## What's new?
-* If you own a Single Server workload with Basic or GP SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u). (Sept 2023)
+* If you own a Single Server workload with Basic or GP SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u). (Sept 2023)
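The eligibility criteria above can be sketched as a local pre-check (a hedged illustration only; the function and feature names below are hypothetical, not an Azure API):

```python
# Hypothetical pre-check mirroring the stated automigration eligibility:
# Basic or General Purpose SKU, <= 20 GiB of storage used, and none of
# CMK, AAD auth, read replicas, or Private Link enabled.
ELIGIBLE_SKUS = {"Basic", "GeneralPurpose"}
BLOCKING_FEATURES = {"CMK", "AAD", "ReadReplica", "PrivateLink"}

def is_automigration_eligible(sku, storage_used_gib, enabled_features):
    """enabled_features: set of enabled complex features, e.g. {"PrivateLink"}."""
    return (
        sku in ELIGIBLE_SKUS
        and storage_used_gib <= 20
        and not (enabled_features & BLOCKING_FEATURES)
    )

print(is_automigration_eligible("Basic", 8, set()))            # eligible
print(is_automigration_eligible("GeneralPurpose", 25, set()))  # too much storage
print(is_automigration_eligible("Basic", 5, {"PrivateLink"}))  # blocking feature
```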
## Configure migration alerts and review migration schedule
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
Learn how to migrate from Azure Database for MySQL - Single Server to Azure Data
For more information on migrating from Single Server to Flexible Server using other migration tools, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md). > [!NOTE]
-> In-place auto-migration from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
+> In-place auto-migration from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
## Migration Eligibility
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
Previously updated : 09/26/2023 Last updated : 09/28/2023 # CustomerIntent: As an Azure administrator, I want to diagnose a virtual machine (VM) network routing problem that prevents it from communicating with the internet. # Tutorial: Diagnose a virtual machine network routing problem using the Azure portal
-In this tutorial, You use Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. Next hop shows you that the routing problem is caused by a [custom route](../virtual-network/virtual-networks-udr-overview.md#custom-routes).
+In this tutorial, you use the Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. Next hop shows you that the routing problem is caused by a [custom route](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json#custom-routes).
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a virtual network and a Bastion host
+> * Create a virtual network
> * Create two virtual machines > * Test communication to different IPs using the next hop capability of Azure Network Watcher > * View the effective routes
In this section, you create a virtual network.
| Subscription | Select your Azure subscription. |
| Resource Group | Select **Create new**. </br> Enter ***myResourceGroup*** in **Name**. </br> Select **OK**. |
| **Instance details** | |
- | Name | Enter ***myVNet***. |
+ | Virtual network name | Enter ***myVNet***. |
| Region | Select **East US**. |
-1. Select the **IP Addresses** tab, or select **Next: IP Addresses** button at the bottom of the page.
+1. Select the **IP Addresses** tab, or select the **Next** button at the bottom of the page twice.
1. Enter the following values in the **IP Addresses** tab:

| Setting | Value |
| | |
- | IPv4 address space | Enter ***10.0.0.0/16***. |
- | Subnet name | Enter ***mySubnet***. |
- | Subnet address range | Enter ***10.0.0.0/24***. |
+ | IPv4 address space | **10.0.0.0/16** |
+ | Subnet name | **mySubnet** |
+ | Subnet IP address range | **10.0.0.0 - 10.0.0.255** (size: **/24**) |
-1. Select the **Security** tab, or select the **Next: Security** button at the bottom of the page.
-
-1. Under **BastionHost**, select **Enable** and enter the following values:
-
- | Setting | Value |
- | | |
- | Bastion name | Enter ***myBastionHost***. |
- | AzureBastionSubnet address space | Enter ***10.0.3.0/24***. |
- | Public IP Address | Select **Create new**. </br> Enter ***myBastionIP*** for **Name**. </br> Select **OK**. |
-
-1. Select the **Review + create** tab or select the **Review + create** button.
+1. Select the **Review + create** tab or select the **Review + create** button at the bottom of the page.
1. Review the settings, and then select **Create**.
In this section, you create two virtual machines: **myVM** and **myNVA**. You us
1. In the search box at the top of the portal, enter ***virtual machines***. Select **Virtual machines** in the search results.
-2. Select **+ Create** and then select **Azure virtual machine**.
+1. Select **+ Create** and then select **Azure virtual machine**.
-3. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
+1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
| Setting | Value |
| | |
In this section, you create two virtual machines: **myVM** and **myNVA**. You us
| Password | Enter a password. |
| Confirm password | Reenter password. |
-4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-5. In the Networking tab, enter or select the following values:
+1. In the Networking tab, enter or select the following values:
| Setting | Value |
| | |
| **Network interface** | |
| Virtual network | Select **myVNet**. |
| Subnet | Select **mySubnet**. |
- | Public IP | Select **None**. |
+ | Public IP | Select **(new) myVM-ip**. |
| NIC network security group | Select **Basic**. |
- | Public inbound ports | Select **None**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **RDP (3389)**. |
+
+ [!INCLUDE [RDP Caution](../../includes/network-watcher-rdp.md)]
-6. Select **Review + create**.
+1. Select **Review + create**.
-7. Review the settings, and then select **Create**.
+1. Review the settings, and then select **Create**.
-8. Once the deployment is complete, select **Go to resource** to go to the **Overview** page of **myVM**.
+1. Once the deployment is complete, select **Go to resource** to go to the **Overview** page of **myVM**.
-9. Select **Connect**, then select **Bastion**.
+1. Select **Connect**, then select **select** under **Native RDP**.
-10. Enter the username and password that you created in the previous steps.
+1. Select **Download RDP file** and open the downloaded file.
-11. Select **Connect** button.
+1. Select **Connect** and then enter the username and password that you created in the previous steps. Accept the certificate if prompted.
-12. Once logged in, open a web browser and go to `www.bing.com` to verify it's reachable.
+1. Once logged in, open a web browser and go to `www.bing.com` to verify it's reachable.
:::image type="content" source="./media/diagnose-vm-network-routing-problem/bing-allowed.png" alt-text="Screenshot showing Bing page in a web browser."::: ### Create second virtual machine
-Follow the previous steps that you used to create **myVM** virtual machine and enter ***myNVA*** for the virtual machine name.
+Follow the previous steps (1-6) and use ***myNVA*** for the virtual machine name to create the second virtual machine.
## Test network communication using Network Watcher next hop
network-watcher Ip Flow Verify Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/ip-flow-verify-overview.md
+
+ Title: IP flow verify overview
+
+description: Learn about Azure Network Watcher IP flow verify to check if traffic is allowed or denied to and from your Azure virtual machines (VMs).
++++ Last updated : 09/28/2023+
+#CustomerIntent: As an Azure administrator, I want to learn about IP flow verify so I can use it to check the security rules applied to VMs and confirm whether traffic is allowed or denied.
++
+# IP flow verify overview
+
+IP flow verify is a feature in Azure Network Watcher that you can use to check if a packet is allowed or denied to or from an Azure virtual machine based on the configured security and admin rules. It helps you troubleshoot virtual machine connectivity issues by checking network security group (NSG) rules and Azure Virtual Network Manager admin rules. It's a quick and simple tool to diagnose connectivity issues to or from other Azure resources, the internet, and on-premises environments.
+
+IP flow verify looks at the rules of all network security groups applied to a virtual machine's network interface, whether the network security group is associated with the virtual machine's subnet or network interface. Additionally, it looks at the Azure Virtual Network Manager rules applied to the virtual network of the virtual machine.
+
+IP flow verify uses traffic direction, protocol, local IP, remote IP, local port, and remote port to test security and admin rules that apply to the virtual machine's network interface.
++
+IP flow verify returns **Access denied** or **Access allowed**, the name of the security rule that denies or allows the traffic, and the network security group with a link to it so you can edit it if you need to. It doesn't provide a link if a default security rule is denying or allowing the traffic. For more information, see [Default security rules](../virtual-network/network-security-groups-overview.md?toc=/azure/network-watcher/toc.json#default-security-rules).
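The evaluation behavior described here can be sketched in a few lines of Python (a simplified illustration, not the service's actual implementation; it ignores address prefixes, traffic direction, and admin rules, and models only the lowest-priority-number-wins matching plus a default deny rule):

```python
# Simplified sketch of NSG-style rule evaluation: rules are checked in
# priority order (lowest number first) and the first match decides the
# Access allowed / Access denied outcome, along with the rule name.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int     # lower number = evaluated first
    protocol: str     # "TCP", "UDP", or "*"
    remote_port: str  # port number as a string, or "*"
    access: str       # "Allow" or "Deny"

def check_flow(rules, protocol, remote_port):
    """Return (access, rule name) for a flow, mimicking IP flow verify output."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.protocol not in ("*", protocol):
            continue
        if rule.remote_port not in ("*", str(remote_port)):
            continue
        return rule.access, rule.name
    # NSGs always end with default rules; model DenyAllOutBound here.
    return "Deny", "DenyAllOutBound"

rules = [
    Rule("AllowHTTPS", 100, "TCP", "443", "Allow"),
    Rule("DenyInternet", 200, "*", "*", "Deny"),
]
print(check_flow(rules, "TCP", 443))  # ('Allow', 'AllowHTTPS')
print(check_flow(rules, "UDP", 53))   # ('Deny', 'DenyInternet')
```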
++
+## Considerations
+
+- You must have a Network Watcher instance in the Azure subscription and region of the virtual machine. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md).
+- You must have the necessary permissions to access the feature. For more information, see [RBAC permissions required to use Network Watcher capabilities](required-rbac-permissions.md).
+- IP flow verify only tests TCP and UDP rules. To test ICMP traffic rules, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md).
+- IP flow verify only tests security and admin rules applied to a virtual machine's network interface. To test rules applied to virtual machine scale sets, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md).
+
+## Next step
+
+To learn how to use IP flow verify, continue to:
+
+> [!div class="nextstepaction"]
+> [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md)
network-watcher Network Watcher Ip Flow Verify Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-ip-flow-verify-overview.md
- Title: Introduction to IP flow verify-
-description: This page provides an overview of Azure Network Watcher IP flow verify capability.
----- Previously updated : 10/04/2022---
-# Introduction to Azure Network Watcher IP flow verify
-
-IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and a remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen, IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.
-
-IP flow verify looks at the rules for all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming if a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine. Now along with the NSG rules evaluation, the Azure Virtual Network Manager rules will also be evaluated.
-
-[Azure Virtual Network Manager (AVNM)](../virtual-network-manager/overview.md) is a management service that enables users to group, configure, deploy, and manage Virtual Networks globally across subscriptions. AVNM security configuration allows users to define a collection of rules that can be applied to one or more network groups at the global level. These security rules have a higher priority than network security group (NSG) rules. An important difference to note here is that admin rules are a resource delivered by ANM in a central location controlled by governance and security teams, which bubble down to each vnet. NSGs are a resource controlled by the vnet owners, which apply at each subnet or NIC level.
-
-An instance of Network Watcher needs to be created in all regions where you plan to run IP flow verify. Network Watcher is a regional service and can only be run against resources in the same region. The instance used does not affect the results of IP flow verify, as any route associated with the NIC or subnet is still returned.
-
-![1][1]
-
-## Next steps
-
-Visit the following article to learn if a packet is allowed or denied for a specific virtual machine through the portal. [Check if traffic is allowed on a VM with IP Flow Verify using the portal](diagnose-vm-network-traffic-filtering-problem.md)
-
-[1]: ./media/network-watcher-ip-flow-verify-overview/figure1.png
-
network-watcher Network Watcher Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-overview.md
Last updated 09/15/2023
-# Customer intent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
+
+# CustomerIntent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
# What is Azure Network Watcher?
Network Watcher offers two monitoring tools that help you view and monitor resou
### Connection monitor
-**Connection monitor** provides end-to-end connection monitoring for Azure and hybrid endpoints. It helps you understand network performance between various endpoints in your network infrastructure. For more information, see [Connection monitor overview](connection-monitor-overview.md) and [Monitor network communication between two virtual machines](connection-monitor.md).
+**Connection monitor** provides end-to-end connection monitoring for Azure and hybrid endpoints. It helps you understand network performance between various endpoints in your network infrastructure. For more information, see [Connection monitor overview](connection-monitor-overview.md) and [Monitor network communication between two virtual machines](monitor-vm-communication.md).
## Network diagnostic tools
Network Watcher offers seven network diagnostic tools that help troubleshoot and
### IP flow verify
-**IP flow verify** allows you to detect traffic filtering issues at a virtual machine level. It checks if a packet is allowed or denied to or from an IP address (IPv4 or IPv6 address). It also tells you which security rule allowed or denied the traffic. For more information, see [IP flow verify overview](network-watcher-ip-flow-verify-overview.md) and [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
+**IP flow verify** allows you to detect traffic filtering issues at a virtual machine level. It checks if a packet is allowed or denied to or from an IP address (IPv4 or IPv6 address). It also tells you which security rule allowed or denied the traffic. For more information, see [IP flow verify overview](ip-flow-verify-overview.md) and [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
### NSG diagnostics
networking Nva Accelerated Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/nva-accelerated-connections.md
-# Accelerated connections and NVAs (Preview)
+# Accelerated connections and NVAs (Limited GA)
-This article helps you understand the **Accelerated Connections** feature. When Accelerated Connections is enabled on the virtual network interface (vNIC) with Accelerated Networking, networking performance is improved. This high-performance feature is available on Network Virtual Appliances (NVAs) deployed from Azure Marketplace and offers competitive performance in Connections Per Second (CPS) optimization, along with improvements to handling large amounts of simultaneous connections. To access this feature during preview, use the [Preview sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
+This article helps you understand the **Accelerated Connections** feature. When Accelerated Connections is enabled on a virtual network interface (vNIC) with Accelerated Networking, networking performance is improved. This high-performance feature is available on Network Virtual Appliances (NVAs) deployed from Azure Marketplace and offers competitive Connections Per Second (CPS) performance, along with improvements in handling large numbers of simultaneous connections. To access this feature during limited GA, use the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
> [!IMPORTANT]
-> This feature is currently in public preview. This public preview is provided without a service-level agreement and shouldn't be used for production workloads. Certain features might not be supported, might have constrained capabilities, or might not be available in all Azure locations. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This feature is currently in limited general availability (GA), and customer sign-up is needed to use it.
> Accelerated Connections supports workloads that open large numbers of active connections simultaneously. It handles these connection bursts with negligible degradation to VM throughput, latency, or connections-per-second performance. The data path for the network traffic is highly optimized to offload Software-Defined Networking (SDN) policy evaluation. The goal is to eliminate any bottlenecks in the cloud implementation and networking performance.
Accelerated Connections is implemented at the network interface level to allow m
Network Virtual Appliances (NVAs) backing most large-scale solutions that require a virtual firewall, virtual switch, load balancers, and other critical network features would experience higher CPS performance with Accelerated Connections.
> [!NOTE]
-> During preview, this feature is only supported for NVAs available on the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=network%20virtual%20appliance&page=1&filters=virtual-machine-images%3Bpartners).
+> During limited GA, this feature is only supported for NVAs available on the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=network%20virtual%20appliance&page=1&filters=virtual-machine-images%3Bpartners).
> **Diagram 1**
Network Virtual Appliances (NVAs) with most large scale solutions requiring v-fi
### Considerations and limitations
-* This feature is available only for NVAs deployed from Azure Marketplace during preview.
-* To enable this feature, you must sign up for the preview using the [Preview sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
-* During preview, this feature is only available on Resource Groups created after sign-up.
+* This feature is available only for NVAs deployed from Azure Marketplace during limited GA.
+* To enable this feature, you must sign up using the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
+* During limited GA, this feature is only available on Resource Groups created after sign-up.
* This feature can be enabled and is supported on new deployments using an Azure Resource Manager (ARM) template and the preview instructions.
* Feature support may vary across the NVAs available on Azure Marketplace.
* Detaching and attaching a network interface on a running VM isn't supported, unlike with other Azure features.
-* Marketplace portal isn't supported for the public preview.
-* This feature is free during the preview, and chargeable after GA.
+* Marketplace portal isn't supported for the limited GA.
+* This feature is free during the limited GA, and chargeable after GA.
## Prerequisites
The following section lists the required prerequisites:
* [Accelerated networking](../virtual-network/accelerated-networking-overview.md) must be enabled on the traffic network interfaces of the NVA.
* Custom tags must be added to the resources during deployment (instructions will be provided).
* The data path tags should be added to the vNIC properties.
-* You've signed up for the preview using the [Preview sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
+* You've signed up for the limited GA using the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
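The Accelerated Networking prerequisite can be applied to an existing NIC with the Azure CLI. The sketch below is hypothetical: the helper name and resource names are placeholders, and the Accelerated Connections data path tags themselves are provided after sign-up, so they aren't shown here.

```shell
# Hypothetical helper: turn on Accelerated Networking for an existing NIC,
# a prerequisite for Accelerated Connections. Resource group and NIC
# names are placeholders.
enable_accelnet() {
  local rg="$1" nic="$2"
  az network nic update \
    --resource-group "$rg" \
    --name "$nic" \
    --accelerated-networking true
}

# Example: enable_accelnet "example-rg" "example-nva-nic"
```

Note that the VM attached to the NIC may need to be deallocated before Accelerated Networking can be changed, depending on the VM size.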
## Supported regions
This list will be updated as more regions become available. The following region
* West Central US
* East US
* West US
+* East US 2
+* Central US
## Supported SKUs
-This feature is supported on all SKUs supported by Accelerated Networking except the Dv5 VM family, which isn't yet supported during preview.
+This feature is supported on all SKUs supported by Accelerated Networking except the Dv5 VM family, which isn't yet supported during limited GA.
+
+## Supported enablement methods
+
+Enablement and deployment of this feature are supported using the following methods:
+
+* PowerShell
+* ARM templates via Azure portal
+* Azure CLI
+* Terraform
+* SDK Package Deployment
+ ## Next steps
-Sign up for the [Preview](https://go.microsoft.com/fwlink/?linkid=2223706).
+Sign up for the [Limited GA](https://go.microsoft.com/fwlink/?linkid=2223706).
open-datasets Dataset Panancestry Uk Bio Bank https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-panancestry-uk-bio-bank.md
This dataset is a mirror of the data store at https://pan.ukbb.broadinstitute.or
## Data volumes and update frequency
-This dataset includes approximately 756 TB of data, and is updated monthly during the first week of every month.
+This dataset includes approximately 144 TB of data, and is updated monthly during the first week of every month.
## Storage location
operator-nexus Troubleshoot Aks Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-aks-hybrid-cluster.md
- Title: Troubleshoot AKS hybrid cluster provisioning failures for Azure Operator Nexus
-description: Troubleshoot Azure Kubernetes Service (AKS) hybrid cluster provisioning failures, and learn how to debug failure codes.
--- Previously updated : 05/14/2023----
-# Troubleshoot AKS hybrid cluster provisioning failures
-
-To gather the data needed to diagnose Azure Kubernetes Service (AKS) hybrid cluster creation or management problems for Azure Operator Nexus, you first need to [check the status of your installation](/azure/AkS/Hybrid/create-aks-hybrid-preview-cli#connect-to-the-aks-hybrid-cluster).
--
-If **Status** isn't **Connected** and **Provisioning state** isn't **Succeeded**, the installation failed. This article can help you troubleshoot the failure.
-
-## Prerequisites
-
-* Install the latest version of the
- [appropriate Azure CLI extensions](./howto-install-cli-extensions.md).
-* Gather this information:
- * Tenant ID
- * Subscription ID
- * Cluster name and resource group
- * Network fabric controller and resource group
- * Network fabric instances and resource group
- * AKS hybrid cluster name and resource group
-* Prepare Azure CLI commands, Bicep templates, and Azure Resource Manager templates (ARM templates) that you use for resource creation.
-
-## What does an unhealthy AKS hybrid cluster look like?
-
-Several types of failures look similar to a user.
-
-In the Azure portal, an unhealthy cluster might show:
-
-* An alert that says "This cluster isn't connected to Azure."
-* A status of **Offline**.
-* A message that refers to certificate expiration time for a managed identity: "Couldn't display date/time, invalid format."
-
-In the Azure CLI, check the output of the following command:
-
-~~~azurecli
-az hybridaks show -g <> --name <>
-~~~
-
-An unhealthy cluster might show:
-
-* `provisioningState`: `Failed`.
-* `provisioningState`: `Succeeded`, but null values for fields such as `lastConnectivityTime` and `managedIdentityCertificateExpirationTime`, or an `errorMessage` field that isn't null.
-
-## Troubleshoot basic network requirements
-
-At a minimum, every AKS hybrid cluster needs a default Container Network Interface (CNI) network and a cloud services network. Starting from the bottom up, consider managed network fabric resources, network cloud resources, and AKS hybrid resources.
-
-### Network fabric resources
-
-* Each network cloud cluster can support up to 200 cloud services networks.
-* The fabric must be configured with a Layer 3 (L3) isolation domain and an L3 internal network for use with the default CNI network.
-* The VLAN range can be greater than 1,000 for the default CNI network.
-* The L3 isolation domain must be successfully enabled.
-
-### Network cloud resources
-
-* The cloud services network must be created.
-* Use the correct hybrid AKS extended location. You can get it from the respective site cluster while you're creating the AKS hybrid resources.
-* The default CNI network must be created with an IPv4 prefix and a VLAN that matches an existing L3 isolation domain.
-* The IPv4 prefix must be unique across all default CNI networks and Layer 3 networks.
-* The networks must have a `provisioningState` value of `Succeeded`.
-
-[Learn how to connect a network cloud by using the Azure CLI](./howto-install-cli-extensions.md?tabs=linux#install-networkcloud-cli-extension).
-
-### AKS hybrid resources
-
-To be used by an AKS hybrid cluster, each network cloud network must be "wrapped" in an AKS hybrid virtual network. [Learn how to configure an AKS hybrid virtual network by using the Azure CLI](/cli/azure/hybridaks/vnet).
-
-## Troubleshoot common problems
-
-Any of the following problems can cause the AKS hybrid cluster to fail to be fully provisioned.
-
-### AKS hybrid clusters might fail or time out when they're created concurrently
-
-The Azure Arc appliance can handle creating only one AKS hybrid cluster at a time within an instance. After you create a single AKS hybrid cluster, you must wait for its provisioning status to be `Succeeded` and for the cluster status to appear as **Connected** or **Online** in the Azure portal.
-
-If you tried to create several at once and have them in a `Failed` state, delete all failed clusters and any partially succeeded clusters. Anything that isn't a fully successful cluster should be deleted.
-
-After all clusters and artifacts are deleted, wait a few minutes for the Azure Arc appliance and cluster operators to reconcile. Then try to create a single new AKS hybrid cluster. Wait for that to come up successfully and report as **Connected** or **Online**. You should now be able to continue creating AKS hybrid clusters, one at a time.
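The wait for a `Succeeded` provisioning state can be scripted around the `az hybridaks show` command shown earlier. This is a minimal sketch; the helper name, retry count, and polling interval are arbitrary choices, not documented values.

```shell
# Hypothetical polling helper: wait until the AKS hybrid cluster reports
# provisioningState == Succeeded before creating the next cluster.
wait_for_provisioning() {
  local rg="$1" name="$2" state=""
  for _ in $(seq 1 60); do
    state=$(az hybridaks show -g "$rg" --name "$name" \
      -o tsv --query provisioningState)
    if [ "$state" = "Succeeded" ]; then
      return 0
    fi
    sleep 30
  done
  echo "timed out waiting for $name" >&2
  return 1
}

# Example: wait_for_provisioning "example-rg" "example-aks-hybrid"
```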
-
-### Case mismatch between an AKS hybrid virtual network and a network cloud network
-
-For you to configure an AKS hybrid virtual network, the resource IDs for the network cloud network must precisely match the Azure Resource Manager resource IDs, including identical uppercase and lowercase letters. Use the correct casing when you're setting up the network.
-
-If you're using the Azure CLI, use the `--aods-vnet-id*` parameter. If you're using Azure Resource Manager, Bicep, or a manual Azure REST API call, use the value of `.properties.infraVnetProfile.networkCloud.networkId`.
-
-The most reliable way to obtain the correct value for creating the virtual network is to query the object for its ID. For example:
-
-~~~bash
-az networkcloud cloudservices show -g "example-rg" -n "csn-name" -o tsv --query id
-az networkcloud defaultcninetwork show -g "example-rg" -n "dcn-name" -o tsv --query id
-az networkcloud l3network show -g "example-rg" -n "l3n-name" -o tsv --query id
-~~~
-
-### L3 isolation domain or L2 isolation domain isn't enabled
-
-At a high level, the steps to create isolation domains are:
-
-1. Create the L3 isolation domain.
-1. Add one or more internal networks.
-1. Add one external network (optional, if northbound connectivity is required).
-1. Enable the L3 isolation domain by using the following command:
-
- ~~~bash
- az networkfabric l3domain update-admin-state --resource-group "RESOURCE_GROUP_NAME" --resource-name "L3ISOLATIONDOMAIN_NAME" --state "Enable"
- ~~~
-
-It's important to check that the fabric resources achieve an `administrativeState` value of `Enabled`, and that the `provisioningState` value is `Succeeded`. If the `update-admin-state` step is skipped or unsuccessful, the networks can't operate. You can use `show` commands to check the values. For example:
-
-~~~bash
-az networkfabric l3domain show -g "example-rg" --resource-name "l3domainname" -o table
-az networkfabric l2domain show -g "example-rg" --resource-name "l2domainname" -o table
-~~~
-
-### Network cloud network status is Failed
-
-When you create networks, ensure that they come up successfully. In particular, pay attention to the following constraints when you're creating default CNI networks:
-
-* The IPv4 prefix and VLAN need to match the internal network in the referenced L3 isolation domain.
-* The IPv4 prefix must be unique across default CNI networks (and Layer 3 networks) in the network cloud cluster.
-
-If you're using the Azure CLI to create these resources, the `--debug` option is helpful. The output includes an operation status URL, which you can query by using `az rest`.
-
-Depending on the mechanism used for creation (Azure portal, Azure CLI, Azure Resource Manager), it's sometimes hard to see why resources are `Failed`.
-
-One useful tool to help surface errors is the [az monitor activity-log](/cli/azure/monitor/activity-log) command. You can use it to show activities for a specific resource ID, resource group, or correlation ID. (You can also get this information in the **Activity** area of the Azure portal.)
-
-For example, to see why a default CNI network failed, use the following code:
-
-~~~bash
-RESOURCE_ID="/subscriptions/$subscriptionsid/resourceGroups/example-rg/providers/Microsoft.NetworkCloud/defaultcninetworks/example-duplicate-prefix-dcn"
-
-az monitor activity-log list --resource-id "${RESOURCE_ID}" -o tsv --query '[].properties.statusMessage' | jq
-~~~
-
-Here's the result:
-
-~~~output
-{
- "status": "Failed",
- "error": {
- "code": "ResourceOperationFailure",
- "message": "The resource operation completed with terminal provisioning state 'Failed'.",
- "details": [
- {
- "code": "Specified IPv4Connected Prefix 10.0.88.0/24 overlaps with existing prefix 10.0.88.0/24 from example-dcn",
- "message": "admission webhook \"vdefaultcninetwork.kb.io\" denied the request: Specified IPv4Connected Prefix 10.0.88.0/24 overlaps with existing prefix 10.0.88.0/24 from example-dcn"
- }
- ]
- }
-}
-
-~~~
-
-### Memory saturation on an AKS hybrid node
-
-There have been incidents where workloads for cloud-native network functions (CNFs) can't start because of resource constraints on the AKS hybrid node where the CNF workload is scheduled. This has happened on nodes where Azure Arc pods consume large amounts of compute resources. To reduce memory saturation, use effective monitoring tools and apply best practices.
-
-For more information, see [Troubleshoot memory saturation in AKS clusters](/troubleshoot/azure/azure-kubernetes/identify-memory-saturation-aks).
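As a quick check of node memory pressure, a hypothetical helper along these lines can flag saturated nodes. It assumes kubeconfig access to the cluster and a running metrics-server for `kubectl top`; the helper name and threshold are illustrative, not part of any documented tooling.

```shell
# Hypothetical check: list nodes whose reported memory usage meets or
# exceeds a percentage threshold, parsed from `kubectl top nodes` output
# (columns: NAME, CPU(cores), CPU%, MEMORY(bytes), MEMORY%).
high_memory_nodes() {
  local threshold="${1:-80}"
  kubectl top nodes --no-headers | awk -v t="$threshold" '
    { pct = $5; sub(/%/, "", pct); if (pct + 0 >= t) print $1, $5 }'
}

# Example: high_memory_nodes 80
```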
-
-To access further details in the logs, see [Log Analytics workspace](../../articles/operator-nexus/concepts-observability.md#log-analytic-workspace).
-
-If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
payment-hsm Create Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet.md
To see the newly created network interfaces, use the [az network nic show](/cli/
The output contains this line: ```json
- "privateIPAllocationMethod": "Dynamic",
+ "privateIPAllocationMethod": "Dynamic",
``` # [Azure PowerShell](#tab/azure-powershell)
To view the properties of a network interface, use the [az network nic show](/cl
The output contains this line: ```json
- "privateIPAllocationMethod": "Static",
+ "privateIPAllocationMethod": "Static",
``` # [Azure PowerShell](#tab/azure-powershell)
payment-hsm Create Payment Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-payment-hsm.md
To see the newly created network interfaces, use the [az network nic show](/cli/
The output contains this line: ```json
- "privateIPAllocationMethod": "Dynamic",
+ "privateIPAllocationMethod": "Dynamic",
``` # [Azure PowerShell](#tab/azure-powershell)
To view the properties of a network interface, use the [az network nic show](/cl
The output contains this line: ```json
- "privateIPAllocationMethod": "Static",
+ "privateIPAllocationMethod": "Static",
``` # [Azure PowerShell](#tab/azure-powershell)
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
This extension is part of contrib and is easy to install.
CREATE EXTENSION pg_buffercache; ```
+## Extensions and Major Version Upgrade
+Azure Database for PostgreSQL Flexible Server has introduced the [in-place major version upgrade](./concepts-major-version-upgrade.md#overview) feature, which performs an in-place upgrade of the Postgres server with just a click. In-place major version upgrade simplifies the Postgres upgrade process, minimizing disruption to users and applications accessing the server. In-place major version upgrade doesn't support certain extensions, and there are some limitations to upgrading certain extensions. The extensions **timescaledb**, **pgaudit**, **dblink**, **orafce**, and **postgres_fdw** are unsupported for all PostgreSQL versions when using the [in-place major version upgrade feature](./concepts-major-version-upgrade.md#overview).
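Before running the upgrade, it helps to know which extensions are installed on the server. This is a minimal sketch using `psql` and the standard `pg_extension` catalog; the helper name and connection string are placeholders.

```shell
# Hypothetical check: list installed extensions so unsupported ones
# (timescaledb, pgaudit, dblink, orafce, postgres_fdw) can be found
# before attempting an in-place major version upgrade.
list_extensions() {
  local conn="$1"
  psql "$conn" -t -c \
    "SELECT extname, extversion FROM pg_extension ORDER BY extname;"
}

# Example: list_extensions "host=<server> dbname=<db> user=<user>"
```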
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| UAE North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| US Gov Arizona | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
+| US Gov Texas | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
| US Gov Virginia | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| UK West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| West Europe | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
-| West US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| West US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| West US 2 | :heavy_check_mark: | :x: $ | :x: $ | :heavy_check_mark: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: |
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/partners-migration-postgresql.md
To broadly support your Azure Database for PostgreSQL solution, you can choose f
| Partner | Description | Links | Videos | | | | | |
+| ![Quadrant Resource][12] |**Quadrant Resource**<br>Quadrant Resource is a cloud and data company with expertise in App & Data Migrations, DevSecOps, SAP Migrations, and Microsoft Fabric implementations. In the data migration space, we have our in-house web-based tool, Q-Migrator, for migrating on-premises or cloud databases such as Oracle, SQL Server, and PostgreSQL to open-source databases like PostgreSQL, MySQL, and MariaDB in Azure. Q-Migrator is a highly secure tool that can be deployed in the client environment and can handle all aspects of database migration, from code conversion and data migration to deployment and functional and performance testing. Automated code conversion for heterogeneous database migrations and migration testing is the main differentiator that no other tool in the marketplace offers. The entire migration process is streamlined, expediting migration from months to weeks.|[Website][quadrant_website]<br>[Marketplace Implementation][quadrant_marketplace_implementation]<br>[Marketplace Assessment][quadrant_marketplace_assessment]<br>[Marketplace POC][quadrant_marketplace_poc]<br>[LinkedIn][quadrant_linkedin]<br>[Contact][quadrant_contact] | |
| ![Improving][11] |**Improving**<br>Improving is a highly esteemed Microsoft Partner specializing in Application Modernization and Data & AI. With an extensive track record, Improving excels in handling intricate database migrations of diverse scales and complexities. For organizations considering migrations, we offer our [PostgreSQL Migration Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/prosourcesolutionsllc1594761633057.azure_database_for_postgresql_migration?tab=Overview&filters=country-unitedstates), presenting you with a comprehensive roadmap of industry-leading practices and expert recommendations to guide your migration strategy effectively. When you are ready to begin the migration process, Improving can provide the necessary resources to partner with you to ensure a successful database migration. Whatever your path may be, On-premise to Flex or Single-Server to Flex, we have the expertise to provide the migration.|[Website][improving_website]<br>[Marketplace][improving_marketplace]<br>[LinkedIn][improving_linkedin]<br>[Twitter][improving_twitter]<br>[Contact][improving_contact] | |
| ![Solliance][10] |**Solliance**<br>Solliance is a consulting and technology solutions company comprised of industry thought leaders and experts specializing in PostgreSQL solutions on Azure. Their services, including cloud architecture, data engineering, and security, are tailored to businesses of all sizes. With a seasoned team and comprehensive training content, Solliance provides impactful PostgreSQL-based solutions that deliver tangible results for your business.|[Website][solliance_website]<br>[LinkedIn][solliance_linkedin]<br>[Twitter][solliance_twitter]<br>[Contact][solliance_contact] | |
| ![Data Bene][9] |**Data Bene**<br>Databases done right! Data Bene is an open source software service company, expert in PostgreSQL and its ecosystem. Their customer portfolio includes several Fortune 100 companies as well as several famous «Unicorn». Over the years they have built a strong reputation in PostgreSQL and Citus Data solutions, and they provide support and technical assistance to ensure the smooth operation of your data infrastructure, including demanding projects in health-care and banking industries.|[Website][databene_website]<br>[LinkedIn][databene_linkedin]<br>[Contact][databene_contact] | |
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
[9]:./media/partner-migration-postgresql/data-bene-logo.png [10]:./media/partner-migration-postgresql/solliance-logo.png [11]:./media/partner-migration-postgresql/improving-logo.png
+[12]:./media/partner-migration-postgresql/quadrant-resource-logo.jpg
<!--Website links --> [snp_website]:https://www.snp.com//
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
[databene_website]:https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdata-bene.io%2F&data=05%7C01%7Carianap%40microsoft.com%7C9619e9fb8f20426c479d08db4bcedd2c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638187124891347095%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=fEg07O8aMx4zXUFwgzMjuXM8ZvgYq6BuvD3soDpkEoQ%3D&reserved=0 [solliance_website]:https://solliance.net/practices/ai-data/your-azure-postgresql-experts [improving_website]:https://improving.com/
+[quadrant_website]:https://qmigrator.ai/
<!--Get Started Links--> <!--Datasheet Links-->
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
[credativ_marketplace]:https://azuremarketplace.microsoft.com/de-de/marketplace/apps?search=credativ&page=1 [newt_marketplace]:https://azuremarketplace.microsoft.com/en-in/marketplace/apps/newtglobalconsultingllc1581492268566.dmap_db_container_offer?tab=Overview [improving_marketplace]:https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/prosourcesolutionsllc1594761633057.azure_database_for_postgresql_migration?tab=Overview&filters=country-unitedstates
+[quadrant_marketplace_implementation]:https://azuremarketplace.microsoft.com/en-us/marketplace/apps/quadrantresourcellc.quadrant_database_migration_to_oss_implementation?tab=Overview
+[quadrant_marketplace_assessment]:https://azuremarketplace.microsoft.com/en-us/marketplace/apps/quadrantresourcellc.qmigrator_db_migration_tool?tab=Overview
+[quadrant_marketplace_poc]:https://azuremarketplace.microsoft.com/en-us/marketplace/apps/quadrantresourcellc.database_migration_to_oss_proof_of_concept?tab=Overview
<!--Press links-->
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
[databene_linkedin]:https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.linkedin.com%2Fcompany%2Fdata-bene%2F&data=05%7C01%7Carianap%40microsoft.com%7C9619e9fb8f20426c479d08db4bcedd2c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638187124891347095%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=PwPDHeQHNYHVa%2FbdfEjbvlnCSFo9iFll1E9UeM3RBQs%3D&reserved=0 [solliance_linkedin]:https://www.linkedin.com/company/solliancenet/mycompany/ [improving_linkedin]:https://www.linkedin.com/company/improving-enterprises/
+[quadrant_linkedin]:https://www.linkedin.com/company/quadrant-resource-llc_2/
<!--Twitter links--> [snp_twitter]:https://twitter.com/snptechnologies
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
[databene_contact]:https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.data-bene.io%2Fen%23contact&data=05%7C01%7Carianap%40microsoft.com%7C9619e9fb8f20426c479d08db4bcedd2c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638187124891347095%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=LAv2lRHmJH0kk2tft7LpRwtefQEdTkzwbB2ptoQpt3w%3D&reserved=0 [solliance_contact]:https://solliance.net/Contact [improving_contact]:mailto:toren.huntley@improving.com
+[quadrant_contact]:mailto:migrations@quadrantresource.com
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
No. There's no action needed if your certificate file already has the **DigiCert
There are many tools that you can use. For example, DigiCert has a handy [tool](https://www.digicert.com/help/) that shows you the certificate chain of any server name. (This tool works with publicly accessible servers; it can't connect to a server that's contained in a virtual network (VNet).) Another tool you can use is OpenSSL on the command line. You can use this syntax to check certificates: ```bash
-openssl s_client -showcerts -connect <your-postgresql-server-name>:443
+openssl s_client -starttls postgres -showcerts -connect <your-postgresql-server-name>:5432
``` ### 14. What if I have further questions?
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com | | Azure Digital Twins (Microsoft.DigitalTwins) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net | | Azure HDInsight (Microsoft.HDInsight/clusters) | N/A | privatelink.azurehdinsight.net | azurehdinsight.net |
-| Azure Arc (Microsoft.HybridCompute) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> kubernetesconfiguration.azure.com |
+| Azure Arc (Microsoft.HybridCompute) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.dp.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> dp.kubernetesconfiguration.azure.com |
| Azure Media Services (Microsoft.Media) | keydelivery </br> liveevent </br> streamingendpoint | privatelink.media.azure.net | media.azure.net | | Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.net | {regionName}.kusto.windows.net | | Azure Static Web Apps (Microsoft.Web/staticSites) | staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net |
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md
Previously updated : 12/09/2022 Last updated : 09/28/2023
Several Azure resources have a dependency on a subscription or a directory. Depe
| Microsoft Dev Box | Yes | No | | You cannot transfer a dev box and its associated resources to a different directory. Once a subscription moves to another tenant, you will not be able to perform any actions on your dev box | | Azure Deployment Environments | Yes | No | | You cannot transfer an environment and its associated resources to a different directory. Once a subscription moves to another tenant, you will not be able to perform any actions on your environment | | Azure Service Bus | Yes | Yes | |You must delete, re-create, and attach the managed identities to the appropriate resource. You must re-create the role assignments. |
+| Azure Synapse Analytics Workspace | Yes | Yes | | You must update the tenant ID associated with the Synapse Analytics Workspace. If the workspace is associated with a Git repository, you must update the [workspace's Git configuration](../synapse-analytics/cicd/source-control.md#switch-to-a-different-git-repository). For more information, see [Recovering Synapse Analytics workspace after transferring a subscription to a different Azure AD directory (tenant)](../synapse-analytics/how-to-recover-workspace-after-tenant-move.md). |
> [!WARNING] > If you are using encryption at rest for a resource, such as a storage account or SQL database, that has a dependency on a key vault that is being transferred, it can lead to an unrecoverable scenario. If you have this situation, you should take steps to use a different key vault or temporarily disable customer-managed keys to avoid this unrecoverable scenario.
search Cognitive Search Skill Keyphrases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-keyphrases.md
For the example above, the output of your skill will be written to a new node in
#### document/myKeyPhrases ```json
- [
- "world’s glaciers",
- "huge rivers of ice",
- "Canadian Rockies",
- "iconic landscapes",
- "Mount Everest region",
- "Continued warming"
- ]
+[
+ "world’s glaciers",
+ "huge rivers of ice",
+ "Canadian Rockies",
+ "iconic landscapes",
+ "Mount Everest region",
+ "Continued warming"
+]
``` You may use "document/myKeyPhrases" as input into other skills, or as a source of an [output field mapping](cognitive-search-output-field-mapping.md).
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
To adjust the node count for a node type using an ARM Template, adjust the `vmIn
> The managed cluster provider will block scale adjustments and return an error if the scaling request violates required minimums. ```json
- {
- "apiVersion": "[variables('sfApiVersion')]",
- "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
- "location": "[resourcegroup().location]",
- "properties": {
- ...
- "vmInstanceCount": "[parameters('nodeTypeVmInstanceCount')]",
- ...
- }
- }
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "properties": {
+ ...
+ "vmInstanceCount": "[parameters('nodeTypeVmInstanceCount')]",
+ ...
+ }
} ```
To modify the OS image used for a node type using an ARM Template, adjust the `v
* The Service Fabric managed cluster resource apiVersion should be **2021-05-01** or later. ```json
- {
- "apiVersion": "[variables('sfApiVersion')]",
- "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
- "location": "[resourcegroup().location]",
- "properties": {
- ...
- "vmImagePublisher": "[parameters('vmImagePublisher')]",
- "vmImageOffer": "[parameters('vmImageOffer')]",
- "vmImageSku": "[parameters('vmImageSku')]",
- "vmImageVersion": "[parameters('vmImageVersion')]",
- ...
- }
- }
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "properties": {
+ ...
+ "vmImagePublisher": "[parameters('vmImagePublisher')]",
+ "vmImageOffer": "[parameters('vmImageOffer')]",
+ "vmImageSku": "[parameters('vmImageSku')]",
+ "vmImageVersion": "[parameters('vmImageVersion')]",
+ ...
+ }
} ```
To adjust the placement properties for a node type using an ARM Template, adjust
* The Service Fabric managed cluster resource apiVersion should be **2021-05-01** or later. ```json
- {
- "apiVersion": "[variables('sfApiVersion')]",
- "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
- "location": "[resourcegroup().location]",
- "properties": {
- "placementProperties": {
- "PremiumSSD": "true",
- "NodeColor": "green",
- "SomeProperty": "5"
- }
- }
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "properties": {
+ "placementProperties": {
+ "PremiumSSD": "true",
+ "NodeColor": "green",
+ "SomeProperty": "5"
+ }
+ }
} ```
Set-AzServiceFabricManagedNodeType -ResourceGroupName $rgName -ClusterName $clus
To modify the VM SKU size used for a node type using an ARM Template, adjust the `vmSize` property with the new value and do a cluster deployment for the setting to take effect. The managed cluster provider will reimage each instance by upgrade domain. For a list of SKU options, please refer to the [VM sizes - Azure Virtual Machines | Microsoft Learn](../virtual-machines/sizes.md). ```json
- {
- "apiVersion": "[variables('sfApiVersion')]",
- "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
- "location": "[resourcegroup().location]",
- "properties": {
- ...
- "vmSize": "[parameters('vmImageVersion')]",
- ...
- }
- }
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "properties": {
+ ...
+ "vmSize": "[parameters('vmSize')]",
+ ...
+ }
} ```+ ## Configure multiple managed disks+ Service Fabric managed clusters configure one managed disk by default. By configuring the following optional property and values, you can add more managed disks to node types within a cluster. You can specify the drive letter, disk type, and size per disk. Configure more managed disks by declaring the `additionalDataDisks` property and required parameters in your Resource Manager template as follows:
Configure more managed disks by declaring `additionalDataDisks` property and req
* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later. ```json
- {
- "apiVersion": "[variables('sfApiVersion')]",
- "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
- "location": "[resourcegroup().location]",
- "properties": {
- "additionalDataDisks": {
- "lun": "1",
- "diskSizeGB": "50",
- "diskType": "Standard_LRS",
- "diskLetter": "S"
- }
- }
- }
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "properties": {
+ "additionalDataDisks": {
+ "lun": "1",
+ "diskSizeGB": "50",
+ "diskType": "Standard_LRS",
+ "diskLetter": "S"
+ }
+ }
+}
``` See the [full list of available parameters](/azure/templates/microsoft.servicefabric/2021-11-01-preview/managedclusters)
Service Fabric managed clusters by default configure a Service Fabric data disk
* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later. ```json
- {
- "apiVersion": "[variables('sfApiVersion')]",
- "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
- "location": "[resourcegroup().location]",
- "properties": {
- "dataDiskLetter": "S"
- }
- }
- }
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "properties": {
+ "dataDiskLetter": "S"
+ }
+}
``` ## Next steps
service-fabric Service Fabric Cluster Config Upgrade Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-config-upgrade-azure.md
Azure clusters can be configured through the JSON Resource Manager template. To
4. Select **Edit** and update the `fabricSettings` JSON element and add a new element: ```json
- {
- "name": "Diagnostics",
- "parameters": [
- {
- "name": "MaxDiskQuotaInMB",
- "value": "65536"
- }
- ]
- }
+{
+ "name": "Diagnostics",
+ "parameters": [
+ {
+ "name": "MaxDiskQuotaInMB",
+ "value": "65536"
+ }
+ ]
+}
``` You can also customize cluster settings in one of the following ways with Azure Resource
service-fabric Service Fabric Cluster Config Upgrade Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-config-upgrade-windows-server.md
You can add, update, or remove settings in the `fabricSettings` section under th
For example, the following JSON adds a new setting *MaxDiskQuotaInMB* to the *Diagnostics* section under `fabricSettings`: ```json
- {
- "name": "Diagnostics",
- "parameters": [
- {
- "name": "MaxDiskQuotaInMB",
- "value": "65536"
- }
- ]
- }
+{
+ "name": "Diagnostics",
+ "parameters": [
+ {
+ "name": "MaxDiskQuotaInMB",
+ "value": "65536"
+ }
+ ]
+}
``` After you've modified the settings in your ClusterConfig.json file, [test the cluster configuration](#test-the-cluster-configuration) and then [upgrade the cluster configuration](#upgrade-the-cluster-configuration) to apply the settings to your cluster.
service-fabric Service Fabric Diagnostics Event Aggregation Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-aggregation-wad.md
Logs from additional channels are also available for collection, here are some o
* Operational Channel - Base: Enabled by default, high-level operations performed by Service Fabric and the cluster, including events for a node coming up, a new application being deployed, or an upgrade rollback, etc. For a list of events, refer to [Operational Channel Events](./service-fabric-diagnostics-event-generation-operational.md).
-```json
- scheduledTransferKeywordFilter: "4611686018427387904"
+ ```json
+ "scheduledTransferKeywordFilter": "4611686018427387904"
```+ * Operational Channel - Detailed: This includes health reports and load balancing decisions, plus everything in the base operational channel. These events are generated by either the system or your code by using the health or load reporting APIs such as [ReportPartitionHealth](/previous-versions/azure/reference/mt645153(v=azure.100)) or [ReportLoad](/previous-versions/azure/reference/mt161491(v=azure.100)). To view these events in Visual Studio's Diagnostic Event Viewer, add "Microsoft-ServiceFabric:4:0x4000000000000008" to the list of ETW providers.
-```json
- scheduledTransferKeywordFilter: "4611686018427387912"
+ ```json
+ "scheduledTransferKeywordFilter": "4611686018427387912"
``` * Data and Messaging Channel - Base: Critical logs and events generated in the messaging (currently only the ReverseProxy) and data path, in addition to detailed operational channel logs. These events are request processing failures and other critical issues in the ReverseProxy, as well as requests processed. **This is our recommendation for comprehensive logging**. To view these events in Visual Studio's Diagnostic Event Viewer, add "Microsoft-ServiceFabric:4:0x4000000000000010" to the list of ETW providers.
-```json
- scheduledTransferKeywordFilter: "4611686018427387928"
+ ```json
+ "scheduledTransferKeywordFilter": "4611686018427387928"
``` * Data & Messaging Channel - Detailed: Verbose channel that contains all the non-critical logs from data and messaging in the cluster and the detailed operational channel. For detailed troubleshooting of all reverse proxy events, refer to the [reverse proxy diagnostics guide](service-fabric-reverse-proxy-diagnostics.md). To view these events in Visual Studio's Diagnostic Event viewer, add "Microsoft-ServiceFabric:4:0x4000000000000020" to the list of ETW providers.
-```json
- scheduledTransferKeywordFilter: "4611686018427387944"
+ ```json
+ "scheduledTransferKeywordFilter": "4611686018427387944"
``` >[!NOTE]
Update the `EtwEventSourceProviderConfiguration` section in the template.json fi
For example, if your event source is named My-Eventsource, add the following code to place the events from My-Eventsource into a table named MyDestinationTableName. ```json
- {
- "provider": "My-Eventsource",
- "scheduledTransferPeriod": "PT5M",
- "DefaultEvents": {
- "eventDestination": "MyDestinationTableName"
- }
- }
+{
+ "provider": "My-Eventsource",
+ "scheduledTransferPeriod": "PT5M",
+ "DefaultEvents": {
+ "eventDestination": "MyDestinationTableName"
+ }
+}
``` To collect performance counters or event logs, modify the Resource Manager template by using the examples provided in [Create a Windows virtual machine with monitoring and diagnostics by using an Azure Resource Manager template](../virtual-machines/extensions/diagnostics-template.md?toc=/azure/virtual-machines/windows/toc.json). Then, republish the Resource Manager template.
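For orientation, the `EtwEventSourceProviderConfiguration` element shown above sits inside the diagnostics extension's `EtwProviders` section. The sketch below shows the assumed surrounding structure (element nesting per the standard WadCfg schema; the event source and table names are the hypothetical ones from the example above):

```json
"WadCfg": {
  "DiagnosticMonitorConfiguration": {
    "EtwProviders": {
      "EtwEventSourceProviderConfiguration": [
        {
          "provider": "My-Eventsource",
          "scheduledTransferPeriod": "PT5M",
          "DefaultEvents": {
            "eventDestination": "MyDestinationTableName"
          }
        }
      ]
    }
  }
}
```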
service-fabric Service Fabric Tutorial Standalone Create Service Fabric Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-standalone-create-service-fabric-cluster.md
Next, update the three ipAddress lines that occur in the file on lines 8, 15, an
After updating the nodes, they appear as follows: ```json
- {
- "nodeName": "vm0",
- "ipAddress": "172.31.27.1",
- "nodeTypeRef": "NodeType0",
- "faultDomain": "fd:/dc1/r0",
- "upgradeDomain": "UD0"
- }
+{
+ "nodeName": "vm0",
+ "ipAddress": "172.31.27.1",
+ "nodeTypeRef": "NodeType0",
+ "faultDomain": "fd:/dc1/r0",
+ "upgradeDomain": "UD0"
+}
``` Then you need to update a couple of the properties. On line 34, you need to modify the connection string for the diagnostic store it should look like this `"connectionstring": "C:\\ProgramData\\SF\\DiagnosticsStore"`
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 09/12/2023 Last updated : 09/28/2023
SUSE Linux Enterprise Server 12 | SP1, SP2, SP3, SP4, SP5 [(Supported kernel ve
SUSE Linux Enterprise Server 15 | 15, SP1, SP2, SP3, SP4 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 11 | SP3<br/><br/> Upgrade of replicating machines from SP3 to SP4 isn't supported. If a replicated machine has been upgraded, you need to disable replication and re-enable replication after the upgrade. SUSE Linux Enterprise Server 11 | SP4
-Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7, 9.1 <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/)).
+Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6)), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7 <br> **Note:** Support for Oracle Linux 9.1 is removed from the support matrix, as issues were observed while using Azure Site Recovery with Oracle Linux 9.1. <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/). Support for the rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/)).
> [!NOTE] > For Linux versions, Azure Site Recovery doesn't support custom OS kernels. Only the stock kernels that are part of the distribution minor version release/update are supported.
site-recovery Site Recovery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-overview.md
Title: About Azure Site Recovery
description: Provides an overview of the Azure Site Recovery service, and summarizes disaster recovery and migration deployment scenarios. Previously updated : 07/24/2023 Last updated : 09/20/2023
Site Recovery can manage replication for:
**Replication scenarios** | Replicate Azure VMs from <br/>1. One Azure region to another.<br/>2. Azure Public MEC to the Azure region it's connected to.<br/>3. One Azure Public MEC to another Public MEC connected to same Azure region.<br/><br/> Replicate on-premises VMware VMs, Hyper-V VMs, physical servers (Windows and Linux), Azure Stack VMs to Azure.<br/><br/> Replicate AWS Windows instances to Azure.<br/><br/> Replicate on-premises VMware VMs, Hyper-V VMs managed by System Center VMM, and physical servers to a secondary site. **Regions** | Review [supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=site-recovery) for Site Recovery. | **Replicated machines** | Review the replication requirements for [Azure VM](azure-to-azure-support-matrix.md#replicated-machine-operating-systems) replication, [on-premises VMware VMs and physical servers](vmware-physical-azure-support-matrix.md#replicated-machines), and [on-premises Hyper-V VMs](hyper-v-azure-support-matrix.md#replicated-vms).
-**Workloads** | You can replicate any workload running on a machine that's supported for replication. And, the Site Recovery team did app-specific tests for a [number of apps](site-recovery-workload.md#workload-summary).
+**Workloads** | You can replicate any workload running on a machine that's supported for replication. Learn more about the app-specific [workload summary](site-recovery-workload.md#workload-summary).
## Next steps
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recove
description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 08/07/2023 Last updated : 09/28/2023
Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7
Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> Ubuntu 22.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 isn't supported.). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 10, Debian 11 [(Review supported kernel versions)](#debian-kernel-versions). SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1, SP2, SP3, SP4 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. 
[Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 isn't supported. To upgrade, disable replication and re-enable after the upgrade. <br/>|
-Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7, 9.0, 9.1 <br/><br/> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>Running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/). <br><br> **Note**: Support for Oracle Linux versions `9.0` and `9.1` is only available for Modernized experience and not available for Classic experience.
+Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7 <br/><br/> **Note:** Support for Oracle Linux `9.0` and `9.1` is removed from the support matrix, as issues were observed using Azure Site Recovery with Oracle Linux 9.0 and 9.1. <br><br> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>Running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/). Support for the rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/).
> [!NOTE] >- For each of the Windows versions, Azure Site Recovery only supports [Long-Term Servicing Channel (LTSC)](/windows-server/get-started/servicing-channels-comparison#long-term-servicing-channel-ltsc) builds. [Semi-Annual Channel](/windows-server/get-started/servicing-channels-comparison#semi-annual-channel) releases are currently unsupported at this time.
spring-apps How To Use Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-accelerator.md
**This article applies to:** ❌ Basic/Standard ✔️ Enterprise
-This article shows you how to use [Application Accelerator for VMware Tanzu](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.3/tap/GUID-application-accelerator-about-application-accelerator.html) with the Azure Spring Apps Enterprise plan to bootstrap developing your applications in a discoverable and repeatable way.
+This article shows you how to use [Application Accelerator for VMware Tanzu](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.5/tap/application-accelerator-about-application-accelerator.html) (App Accelerator) with the Azure Spring Apps Enterprise plan to bootstrap developing your applications in a discoverable and repeatable way.
-Application Accelerator for VMware Tanzu helps you bootstrap developing your applications and deploying them in a discoverable and repeatable way. You can use Application Accelerator to create new projects based on published accelerator projects. For more information, see [Application Accelerator for VMware Tanzu](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.3/tap/GUID-application-accelerator-about-application-accelerator.html) in the VMware documentation.
+App Accelerator helps you bootstrap developing your applications and deploying them in a discoverable and repeatable way. You can use App Accelerator to create new projects based on published accelerator projects. For more information, see [Application Accelerator for VMware Tanzu](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.5/tap/application-accelerator-about-application-accelerator.html) in the VMware documentation.
## Prerequisites
az spring application-accelerator predefined-accelerator enable \
In addition to using the predefined accelerators, you can create your own accelerators. You can use any Git repository in Azure DevOps, GitHub, GitLab, or Bitbucket.
-Use to following steps to create and maintain your own accelerators:
+Use the following steps to create and maintain your own accelerators:
First, create a file named *accelerator.yaml* in the root directory of your Git repository.
-You can use the *accelerator.yaml* file to declare input options that users fill in using a form in the UI. These option values control processing by the template engine before it returns the zipped output files. If you don't include an *accelerator.yaml* file, the repository still works as an accelerator, but the files are passed unmodified to users. For more information, see [Creating an accelerator.yaml file](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.3/tap/GUID-application-accelerator-creating-accelerators-accelerator-yaml.html).
+You can use the *accelerator.yaml* file to declare input options that users fill in using a form in the UI. These option values control processing by the template engine before it returns the zipped output files. If you don't include an *accelerator.yaml* file, the repository still works as an accelerator, but the files are passed unmodified to users. For more information, see [Creating an accelerator.yaml file](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.5/tap/application-accelerator-creating-accelerators-accelerator-yaml.html).
Next, publish the new accelerator.
To create your own accelerator, open the **Accelerators** section and then selec
#### [Azure CLI](#tab/Azure-CLI)
-Use the following command to create your own accelerator in Azure CLI.
+Use the following command to create your own accelerator in Azure CLI:
```azurecli az spring application-accelerator customized-accelerator create \
The following table describes the customizable accelerator fields.
| **Git branch** | `git-branch` | The Git branch to check out and monitor for changes. You should specify only the Git branch, Git commit, or Git tag. | Optional | | **Git commit** | `git-commit` | The Git commit SHA to check out. You should specify only the Git branch, Git commit, or Git tag. | Optional | | **Git tag** | `git-tag` | The Git commit tag to check out. You should specify only the Git branch, Git commit, or Git tag. | Optional |
+| **Git sub path** | `git-sub-path` | The folder path inside the Git repository to consider as the root of the accelerator or fragment. | Optional |
| **Authentication type** | `N/A` | The authentication type of the accelerator source repository. The type can be `Public`, `Basic auth`, or `SSH`. | Required | | **User name** | `username` | The user name to access the accelerator source repository whose authentication type is `Basic auth`. | Required when the authentication type is `Basic auth`. | | **Password/Personal access token** | `password` | The password to access the accelerator source repository whose authentication type is `Basic auth`. | Required when the authentication type is `Basic auth`. |
The following table describes the customizable accelerator fields.
| **Host key** | `host-key` | The host key to access the accelerator source repository whose authentication type is `SSH`. | Required when the authentication type is `SSH`. | | **Host key algorithm** | `host-key-algorithm` | The host key algorithm to access the accelerator source repository whose authentication type is `SSH`. Can be `ecdsa-sha2-nistp256` or `ssh-rsa`. | Required when authentication type is `SSH`. | | **CA certificate name** | `ca-cert-name` | The CA certificate name to access the accelerator source repository with self-signed certificate whose authentication type is `Public` or `Basic auth`. | Required when a self-signed cert is used for the Git repo URL. |
+| **Type** | `type` | The type of customized accelerator. The type can be `Accelerator` or `Fragment`. The default value is `Accelerator`. | Optional |
To view all published accelerators, see the App Accelerators section of the **Developer Tools** page. Select the App Accelerator URL to view the published accelerators in Dev Tools Portal:
To view the newly published accelerator, refresh Dev Tools Portal.
> [!NOTE] > It might take a few seconds for Dev Tools Portal to refresh the catalog and add an entry for your new accelerator. The refresh interval is configured as `git-interval` when you create the accelerator. After you change the accelerator, it will also take time to be reflected in Dev Tools Portal. The best practice is to change the `git-interval` to speed up for verification after you apply changes to the Git repo.
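Per the note above, a shorter `git-interval` speeds up verification. As a hedged sketch of changing it after the fact with the Azure CLI (the `update` subcommand and the `--git-interval` flag name are assumptions based on the field table earlier in this article):

```azurecli
# Hypothetical: poll the Git repository more frequently while iterating.
az spring application-accelerator customized-accelerator update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <customized-accelerator-name> \
    --git-url <git-repo-URL> \
    --git-interval 10
```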
+### Reference a fragment in your own accelerators
+
+Writing and maintaining accelerators can become repetitive and verbose as new accelerators are added. Some people create new projects by copying existing ones and making modifications, but this process can be tedious and error-prone. To make the creation and maintenance of accelerators easier, Application Accelerator supports a feature named Composition that allows the reuse of parts of an accelerator, called *fragments*.
+
+Use the following steps to reference a fragment in your accelerator:
+
+1. Publish the new accelerator of type `Fragment` using the Azure portal or the Azure CLI.
+
+ #### [Azure portal](#tab/Portal)
+
+ To create a fragment accelerator, open the **Accelerators** section, select **Add Accelerator** under the **Customized Accelerators** section, and then select **Fragment**.
+
+ :::image type="content" source="media/how-to-use-accelerator/add-fragment.png" alt-text="Screenshot of the Azure portal that shows the Customized Accelerators of type `Fragment`." lightbox="media/how-to-use-accelerator/add-fragment.png":::
+
+ #### [Azure CLI](#tab/Azure-CLI)
+
+ Use the following command to create a customized accelerator of type `Fragment`:
+
+ ```azurecli
+ az spring application-accelerator customized-accelerator create \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --name <fragment-accelerator-name> \
+ --display-name <display-name> \
+ --type Fragment \
+ [--git-sub-path <sub project path>] \
+ --git-url <git-repo-URL>
+ ```
+
+1. Change the *accelerator.yaml* file in your accelerator project. Use the `imports` instruction in the `accelerator` section and the `InvokeFragment` instruction in the `engine` section to reference the fragment in the accelerator, as shown in the following example:
+
+ ```yaml
+ accelerator:
+ ...
+ # options for the UI
+ options:
+ ...
+ imports:
+ - name: <fragment-accelerator-name>
+ ...
+
+ engine:
+ chain:
+ ...
+ - merge:
+ - include: [ "**" ]
+ - type: InvokeFragment
+ reference: <fragment-accelerator-name>
+ ```
+
+1. Synchronize the change with the Dev Tools Portal.
+
+ To reflect the changes on the Dev Tools Portal more quickly, you can provide a value for the **Git interval** field of your customized accelerator. The **Git interval** value indicates how frequently the system checks for updates in the Git repository.
+
+1. Synchronize the change with your customized accelerator on the Azure portal by using the Azure portal or the Azure CLI.
+
+ #### [Azure portal](#tab/Portal)
+
+ The following list shows the two ways you can sync changes:
+
+ - Create or update your customized accelerator.
+ - Open the **Accelerators** section, and then select **Sync certificate**.
+
+ #### [Azure CLI](#tab/Azure-CLI)
+
+ Use the following command to sync changes for an accelerator:
+
+ ```azurecli
+ az spring application-accelerator customized-accelerator sync-cert \
+ --name <customized-accelerator-name> \
+ --service <service-instance-name> \
+ --resource-group <resource-group-name>
+ ```
+
+For more information, see [Use fragments in Application Accelerator](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.5/tap/application-accelerator-creating-accelerators-composition.html) in the VMware documentation.
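For context, a fragment repository carries its own *accelerator.yaml*, just like a full accelerator. The following is a minimal hypothetical sketch (the display name and option are invented for illustration) of a fragment that exposes one input option to the accelerators importing it:

```yaml
accelerator:
  displayName: Java version fragment   # hypothetical example fragment
  options:
    - name: javaVersion
      inputType: select
      choices:
        - value: "11"
        - value: "17"
      defaultValue: "17"
engine:
  merge:
    - include: [ "**" ]
```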
+ ### Use accelerators to bootstrap a new project
Use the following steps to bootstrap a new project using accelerators:
### Configure accelerators with a self-signed certificate
-When you set up a private Git repository and enable HTTPS with a self-signed certificate, you should configure the CA certificate name to the accelerator for client cert verification from the accelerator to the Git repository.
+When you set up a private Git repository and enable HTTPS with a self-signed certificate, you should configure the CA certificate name to the accelerator for client certificate verification from the accelerator to the Git repository.
Use the following steps to configure accelerators with a self-signed certificate:
As certificates expire, you need to rotate certificates in Spring Cloud Apps by
1. Import the certificates into Azure Spring Apps. For more information, see the [Import a certificate](how-to-use-tls-certificate.md#import-a-certificate) section of [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md). 1. Synchronize the certificates using the Azure portal or the Azure CLI.
-The accelerators will not automatically use the latest certificate. You should sync one or all certificates by using the Azure portal or the Azure CLI.
+The accelerators won't automatically use the latest certificate. You should sync one or all certificates by using the Azure portal or the Azure CLI.
#### [Azure portal](#tab/Portal)
You can enable App Accelerator under an existing Azure Spring Apps Enterprise pl
### [Azure portal](#tab/Portal)
-If a Dev tools public endpoint has already been exposed, you can enable App Accelerator, and then use <kbd>Ctrl</kbd>+<kbd>F5</kdb> to deactivate the browser cache to view it on the Dev Tools Portal.
+If a Dev tools public endpoint has already been exposed, you can enable App Accelerator, and then press <kbd>Ctrl</kbd>+<kbd>F5</kbd> to deactivate the browser cache to view it on the Dev Tools Portal.
Use the following steps to enable App Accelerator under an existing Azure Spring Apps Enterprise plan instance using the Azure portal:
storage Blobfuse2 Commands Completion Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-bash.md
Title: How to use the 'blobfuse2 completion bash' command to generate the autocompletion script for BlobFuse2 description: Learn how to use the completion bash command to generate the autocompletion script for BlobFuse2. Last updated 12/02/2022
storage Blobfuse2 Commands Completion Fish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-fish.md
Title: How to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2 description: Learn how to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2. Last updated 12/02/2022
storage Blobfuse2 Commands Completion Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-powershell.md
Title: How to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2 description: Learn how to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2. Last updated 12/02/2022
storage Blobfuse2 Commands Completion Zsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-zsh.md
Title: How to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2 description: Learn how to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2. Last updated 12/02/2022
storage Blobfuse2 Commands Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion.md
Title: How to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2 description: Learn how to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2. Last updated 12/02/2022
storage Blobfuse2 Commands Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-help.md
Title: How to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands description: Learn how to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands. Last updated 12/02/2022
storage Blobfuse2 Commands Mount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-all.md
Title: How to use the 'blobfuse2 mount all' command to mount all blob containers in a storage account as a Linux file system description: Learn how to use the 'blobfuse2 mount all' command to mount all blob containers in a storage account as a Linux file system. Last updated 12/02/2022
storage Blobfuse2 Commands Mount List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-list.md
Title: How to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points description: Learn how to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points. Last updated 12/02/2022
storage Blobfuse2 Commands Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md
Title: How to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points description: Learn how to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points. Last updated 12/02/2022
storage Blobfuse2 Commands Mountv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mountv1.md
Title: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file description: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file. Last updated 12/02/2022
storage Blobfuse2 Commands Secure Decrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-decrypt.md
Title: How to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file description: Learn how to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file. Last updated 12/02/2022
storage Blobfuse2 Commands Secure Encrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-encrypt.md
Title: How to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file description: Learn how to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file. Last updated 12/02/2022
storage Blobfuse2 Commands Secure Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-get.md
Title: How to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file description: Learn how to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file. Last updated 12/02/2022
storage Blobfuse2 Commands Secure Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-set.md
Title: How to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file description: Learn how to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file. Last updated 12/02/2022
storage Blobfuse2 Commands Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure.md
Title: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file description: Learn how to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file. Last updated 12/02/2022
storage Blobfuse2 Commands Unmount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount-all.md
Title: How to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system description: Learn how to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system. Last updated 12/02/2022
storage Blobfuse2 Commands Unmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount.md
Title: How to use the 'blobfuse2 unmount' command to unmount an existing mount point description: How to use the 'blobfuse2 unmount' command to unmount an existing mount point. Last updated 12/02/2022
storage Blobfuse2 Commands Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-version.md
Title: How to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one description: Learn how to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one. Last updated 12/02/2022
storage Blobfuse2 Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands.md
Title: How to use the BlobFuse2 command set description: Learn how to use the BlobFuse2 command set to mount blob storage containers as file systems on Linux, and manage them. Last updated 12/02/2022
storage Blobfuse2 Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-configuration.md
Title: How to configure settings for BlobFuse2 description: Learn how to configure settings for BlobFuse2.
storage Blobfuse2 Health Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-health-monitor.md
Title: How to use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage description: Learn how to use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage.
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
Title: How to mount an Azure Blob Storage container on Linux with BlobFuse2 description: Learn how to mount an Azure Blob Storage container on Linux with BlobFuse2.
storage Blobfuse2 Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-troubleshooting.md
Title: Troubleshoot issues in BlobFuse2 description: Learn how to troubleshoot issues in BlobFuse2.
storage Blobfuse2 What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md
Title: What is BlobFuse? - BlobFuse2 description: An overview of how to use BlobFuse to mount an Azure Blob Storage container through the Linux file system.
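Taken together, the BlobFuse2 articles above cover a small command set. As an illustrative sketch only (the mount path and configuration file are placeholders, and a valid BlobFuse2 configuration file is assumed), a typical mount-and-unmount sequence looks like:

```
# Mount a blob container as a file system, using a YAML config file
blobfuse2 mount <mount-path> --config-file=<path-to-config.yaml>

# Display existing BlobFuse2 mount points
blobfuse2 mount list

# Unmount when finished
blobfuse2 unmount <mount-path>
```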
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
For more information about pricing, see [Block Blob pricing](https://azure.micro
- Each rule can have up to 10 case-sensitive prefixes and up to 10 blob index tag conditions.
- If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
- A lifecycle management policy can't change the tier of a blob that uses an encryption scope.
- The delete action of a lifecycle management policy won't work with any blob in an immutable container. With an immutable policy, objects can be created and read, but not modified or deleted. For more information, see [Store business-critical blob data with immutable storage](./immutable-storage-overview.md).
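To make these rule constraints concrete, here's a minimal sketch of a lifecycle management policy (the rule name, prefix, and day counts are example values) that tiers block blobs under a prefix to cool after 30 days and deletes them after a year:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "example-move-and-expire-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```

Each rule's `prefixMatch` list is subject to the 10-prefix limit noted above.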
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Title: Actions and attributes for Azure role assignment conditions for Azure Blob Storage description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure Blob Storage. Last updated 08/10/2023
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-cli.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI - Azure ABAC" description: Add a role assignment condition to restrict access to blobs using Azure CLI and Azure attribute-based access control (Azure ABAC). Last updated 03/15/2023
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Title: Example Azure role assignment conditions for Blob Storage description: Example Azure role assignment conditions for Blob Storage. Last updated 05/09/2023
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal - Azure ABAC" description: Add a role assignment condition to restrict access to blobs using the Azure portal and Azure attribute-based access control (Azure ABAC). Last updated 03/15/2023
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell - Azure ABAC" description: Add a role assignment condition to restrict access to blobs using Azure PowerShell and Azure attribute-based access control (Azure ABAC). Last updated 03/15/2023
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-security.md
Title: Security considerations for Azure role assignment conditions in Azure Blob Storage description: Security considerations for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Last updated 05/09/2023
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Title: Authorize access to Azure Blob Storage using Azure role assignment conditions description: Authorize access to Azure Blob Storage and Azure Data Lake Storage Gen2 using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes. Last updated 04/21/2023
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-custom-domain-name.md
Title: Map a custom domain to an Azure Blob Storage endpoint description: Map a custom domain to a Blob Storage or web endpoint in an Azure storage account. Last updated 02/12/2021
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
Title: How to mount Azure Blob Storage as a file system on Linux with BlobFuse v1 description: Learn how to mount an Azure Blob Storage container with BlobFuse v1, a virtual file system driver on Linux. Last updated 12/02/2022
storage Configure Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/configure-network-routing-preference.md
Title: Configure network routing preference
description: Configure network routing preference for your Azure storage account to specify how network traffic is routed to your account from clients over the internet. Last updated 03/17/2021
storage Last Sync Time Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/last-sync-time-get.md
Title: Check the Last Sync Time property for a storage account
description: Learn how to check the Last Sync Time property for a geo-replicated storage account. The Last Sync Time property indicates the last time at which all writes from the primary region were successfully written to the secondary region. Last updated 07/20/2023
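The Last Sync Time property described above can also be read with the Azure CLI. A hedged sketch (the account and resource group names are placeholders):

```azurecli
az storage account show \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime \
    --output tsv
```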
storage Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/network-routing-preference.md
Title: Network routing preference
description: Network routing preference enables you to specify how network traffic is routed to your account from clients over the internet. Last updated 03/13/2023
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Title: Change how a storage account is replicated
description: Learn how to change how data in an existing storage account is replicated. Last updated 09/21/2023
storage Redundancy Regions Gzrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-regions-gzrs.md
Title: List of Azure regions that support geo-zone-redundant storage (GZRS)
description: List of Azure regions that support geo-zone-redundant storage (GZRS). Last updated 04/28/2023
storage Redundancy Regions Zrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-regions-zrs.md
Title: List of Azure regions that support zone-redundant storage (ZRS)
description: List of Azure regions that support zone-redundant storage (ZRS). Last updated 04/28/2023
storage Security Restrict Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-restrict-copy-operations.md
Title: Permitted scope for copy operations (preview) description: Learn how to use the "Permitted scope for copy operations (preview)" Azure storage account setting to limit the source accounts of copy operations to the same tenant or with private links to the same virtual network.
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Title: Azure storage disaster recovery planning and failover
description: Azure Storage supports account failover for geo-redundant storage accounts. Create a disaster recovery plan for your storage accounts if the endpoints in the primary region become unavailable. Last updated 09/22/2023
storage Storage Failover Customer Managed Unplanned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-unplanned.md
Title: How Azure Storage account customer-managed failover works
description: Azure Storage supports account failover for geo-redundant storage accounts to recover from a service endpoint outage. Learn what happens to your storage account and storage services during a customer-managed failover to the secondary region if the primary endpoint becomes unavailable. Last updated 09/22/2023
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Title: Initiate a storage account failover
description: Learn how to initiate an account failover in the event that the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account. Last updated 09/15/2023
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks description: Configure layered network security for your storage account by using the Azure Storage firewall. Last updated 08/15/2023
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Title: Use private endpoints
description: Overview of private endpoints for secure access to storage accounts from virtual networks. Last updated 06/22/2023
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Title: Data redundancy
description: Understand data redundancy in Azure Storage. Data in your Microsoft Azure Storage account is replicated for durability and high availability. Last updated 09/06/2023
storage Storage Require Secure Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-require-secure-transfer.md
Title: Require secure transfer to ensure secure connections
description: Learn how to require secure transfer for requests to Azure Storage. When you require secure transfer for a storage account, any requests originating from an insecure connection are rejected. Last updated 06/01/2021
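One way to require secure transfer, as described above, is with the Azure CLI. A sketch with placeholder names:

```azurecli
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --https-only true
```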
storage Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-client-version.md
Title: Configure Transport Layer Security (TLS) for a client application
description: Configure a client application to communicate with Azure Storage using a minimum version of Transport Layer Security (TLS). Last updated 12/29/2022 ms.devlang: csharp
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
Title: Enforce a minimum required version of Transport Layer Security (TLS) for
description: Configure a storage account to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Storage. Last updated 12/30/2022
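Enforcing a minimum TLS version on the account, as described above, can be sketched with the Azure CLI (account and resource group names are placeholders; accepted values follow the `TLS1_0`/`TLS1_1`/`TLS1_2` pattern):

```azurecli
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --min-tls-version TLS1_2
```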
storage Storage Files Identity Auth Hybrid Identities Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-identities-enable.md
There are two options for configuring directory and file-level permissions with
To configure directory and file-level permissions through Windows File Explorer, you also need to specify domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step is not required.
+> [!IMPORTANT]
+> You can set file- and directory-level ACLs for identities that aren't synced to Azure AD. However, these ACLs won't be enforced, because the Kerberos ticket used for authentication and authorization won't contain the unsynced identities. To enforce the ACLs, the identities must be synced to Azure AD.
+ > [!TIP]
> If Azure AD hybrid joined users from two different forests will be accessing the share, it's best to use icacls to configure directory and file-level permissions. This is because Windows File Explorer ACL configuration requires the client to be domain joined to the Active Directory domain that the storage account is joined to.
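If you take the icacls route mentioned above, a grant might look like the following sketch (the drive letter, directory, domain, and user are placeholders; `(OI)(CI)M` grants Modify permission that is inherited by subfolders and files):

```
icacls <mounted-drive-letter>:\<directory> /grant "<domain>\<username>:(OI)(CI)M"
```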
storage Queues Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac-attributes.md
Title: Actions and attributes for Azure role assignment conditions for Azure Que
description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure Queue Storage. Last updated 05/09/2023
storage Queues Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac.md
Title: Authorize access to queues using Azure role assignment conditions
description: Authorize access to Azure queues using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes. Last updated 10/19/2022
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
This section summarizes recent new features and improvements to machine learning
| November 2022 | **R Support (preview)** | Azure Synapse Analytics [now provides built-in R support for Apache Spark](./spark/apache-spark-r-language.md), currently in preview. For an example, [install an R library from CRAN and CRAN snapshots](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_16). |
| August 2022 | **SynapseML v.0.10.0** | New [release of SynapseML v0.10.0](https://github.com/microsoft/SynapseML/releases/tag/v0.10.0) (previously MMLSpark), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. Learn more about the [latest additions to SynapseML](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/exciting-new-release-of-synapseml/ba-p/3589606) and get started with [SynapseML](https://aka.ms/spark). |
| August 2022 | **.NET support** | SynapseML v0.10 [adds full support for .NET languages](https://devblogs.microsoft.com/dotnet/announcing-synapseml-for-dotnet/) like C# and F#. For a .NET SynapseML example, see [.NET Example with LightGBMClassifier](https://microsoft.github.io/SynapseML/docs/Reference/Quickstart%20-%20LightGBM%20in%20Dotnet/). |
-| August 2022 | **Azure Open AI Service support** | SynapseML now allows users to tap into 175-Billion parameter language models (GPT-3) from OpenAI that can generate and complete text and code near human parity. For more information, see [Azure OpenAI for Big Data](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/OpenAI/).|
+| August 2022 | **Azure OpenAI Service support** | SynapseML now allows users to tap into 175-Billion parameter language models (GPT-3) from OpenAI that can generate and complete text and code near human parity. For more information, see [Azure OpenAI for Big Data](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/OpenAI/).|
| August 2022 | **MLflow platform support** | SynapseML models now integrate with [MLflow](https://microsoft.github.io/SynapseML/docs/Use%20with%20MLFlow/Overview/) with full support for saving, loading, deployment, and [autologging](https://microsoft.github.io/SynapseML/docs/Use%20with%20MLFlow/Autologging/). |
| August 2022 | **SynapseML in Binder** | We know that Spark can be intimidating for first-time users, but fear not: with Binder, you can [explore and experiment with SynapseML in Binder](https://mybinder.org/v2/gh/microsoft/SynapseML/93d7ccf?labpath=notebooks%2Ffeatures) with zero setup, installation, infrastructure, or Azure account required. |
| June 2022 | **Distributed Deep Neural Network Training (preview)** | The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in preview. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 also now includes support for the most common deep learning libraries like TensorFlow and PyTorch. To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
virtual-desktop Service Principal Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/service-principal-assign-roles.md
Here's how to assign a role to the Azure Virtual Desktop service principal using
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. In the search box, enter *Azure Active Directory* and select the matching service entry.
+1. In the search box, enter *Microsoft Entra ID* and select the matching service entry.
1. On the Overview page, in the search box for **Search your tenant**, enter the application ID **9cdead84-a844-4324-93f2-b2e6bb768d07**.
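These portal steps have an Azure CLI equivalent. As a hedged sketch (the role name and subscription scope are placeholders you supply for your scenario), the assignee is the Azure Virtual Desktop service principal's application ID from the step above:

```azurecli
az role assignment create \
    --assignee 9cdead84-a844-4324-93f2-b2e6bb768d07 \
    --role "<role-name>" \
    --scope /subscriptions/<subscription-id>
```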
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-custom-image.md
In the scale set resource, add a `dependsOn` clause referring to the custom imag
In the `imageReference` of the scale set `storageProfile`, instead of specifying the publisher, offer, sku, and version of a platform image, specify the `id` of the `Microsoft.Compute/images` resource: ```json
- "virtualMachineProfile": {
- "storageProfile": {
- "imageReference": {
- "id": "[resourceId('Microsoft.Compute/images', 'myCustomImage')]"
- }
- },
- "osProfile": {
+ "virtualMachineProfile": {
+ "storageProfile": {
+ "imageReference": {
+ "id": "[resourceId('Microsoft.Compute/images', 'myCustomImage')]"
+ }
+ },
+ "osProfile": {
+ ...
+ }
+ }
``` In this example, use the `resourceId` function to get the resource ID of the image created in the same template. If you have created the managed disk image beforehand, you should provide the ID of that image instead. This ID must be of the form: `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Compute/images/<image-name>`.
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md
First, define `$schema` and `contentVersion` in the template. The `$schema` elem
{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json", "contentVersion": "1.0.0.0",
+}
``` ## Define parameters Next, define two parameters, `adminUsername` and `adminPassword`. Parameters are values you specify at the time of deployment. The `adminUsername` parameter is simply a `string` type, but because `adminPassword` is a secret, give it type `securestring`. Later, these parameters are passed into the scale set configuration. ```json
Next, define two parameters, `adminUsername` and `adminPassword`. Parameters are
} }, ``` ## Define variables Resource Manager templates also let you define variables to be used later in the template. The example doesn't use any variables, so the JSON object is empty. ```json
Resource Manager templates also let you define variables to be used later in the
Next is the resources section of the template. Here, you define what you actually want to deploy. Unlike `parameters` and `variables` (which are JSON objects), `resources` is a JSON list of JSON objects. ```json
- "resources": [
+ "resources": [
+ ...
+ ]
``` All resources require `type`, `name`, `apiVersion`, and `location` properties. This example's first resource has type [Microsoft.Network/virtualNetwork](/azure/templates/microsoft.network/virtualnetworks), name `myVnet`, and apiVersion `2018-11-01`. (To find the latest API version for a resource type, see the [Azure Resource Manager template reference](/azure/templates/).) ```json
- {
- "type": "Microsoft.Network/virtualNetworks",
- "name": "myVnet",
- "apiVersion": "2018-11-01",
+{
+ "type": "Microsoft.Network/virtualNetworks",
+ "name": "myVnet",
+ "apiVersion": "2018-11-01",
+}
``` ## Specify location To specify the location for the virtual network, use a [Resource Manager template function](../azure-resource-manager/templates/template-functions.md). This function must be enclosed in quotes and square brackets like this: `"[<template-function>]"`. In this case, use the `resourceGroup` function. It takes in no arguments and returns a JSON object with metadata about the resource group this deployment is being deployed to. The resource group is set by the user at the time of deployment. This value is then indexed into this JSON object with `.location` to get the location from the JSON object. ```json
- "location": "[resourceGroup().location]",
+ "location": "[resourceGroup().location]",
```

## Specify virtual network properties

Each Resource Manager resource has its own `properties` section for configurations specific to the resource. In this case, specify that the virtual network should have one subnet using the private IP address range `10.0.0.0/16`. A scale set is always contained within one subnet. It cannot span subnets.

```json
- "properties": {
- "addressSpace": {
- "addressPrefixes": [
- "10.0.0.0/16"
- ]
- },
- "subnets": [
- {
- "name": "mySubnet",
- "properties": {
- "addressPrefix": "10.0.0.0/16"
- }
- }
- ]
- }
- },
+ {
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.0.0.0/16"
+ ]
+ },
+ "subnets": [
+ {
+ "name": "mySubnet",
+ "properties": {
+ "addressPrefix": "10.0.0.0/16"
+ }
+ }
+ ]
+ }
+ },
```

## Add dependsOn list

In addition to the required `type`, `name`, `apiVersion`, and `location` properties, each resource can have an optional `dependsOn` list of strings. This list specifies which other resources from this deployment must finish before deploying this resource. In this case, there is only one element in the list: the virtual network from the previous example. You specify this dependency because the scale set needs the network to exist before creating any VMs. This way, the scale set can give these VMs private IP addresses from the IP address range previously specified in the network properties. The format of each string in the dependsOn list is `<type>/<name>`. Use the same `type` and `name` used previously in the virtual network resource definition.

```json
- {
- "type": "Microsoft.Compute/virtualMachineScaleSets",
- "name": "myScaleSet",
- "apiVersion": "2019-03-01",
- "location": "[resourceGroup().location]",
- "dependsOn": [
- "Microsoft.Network/virtualNetworks/myVnet"
- ],
+ {
+ "type": "Microsoft.Compute/virtualMachineScaleSets",
+ "name": "myScaleSet",
+ "apiVersion": "2019-03-01",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "Microsoft.Network/virtualNetworks/myVnet"
+ ],
+ ...
+ }
```

## Specify scale set properties

Scale sets have many properties for customizing the VMs in the scale set. For a full list of these properties, see the [template reference](/azure/templates/microsoft.compute/virtualmachinescalesets). For this tutorial, only a few commonly used properties are set.

### Supply VM size and capacity

The scale set needs to know what size of VM to create ("sku name") and how many such VMs to create ("sku capacity"). To see which VM sizes are available, see the [VM Sizes documentation](../virtual-machines/sizes.md).

```json
- "sku": {
- "name": "Standard_A1",
- "capacity": 2
- },
+ "sku": {
+ "name": "Standard_A1",
+ "capacity": 2
+ },
```

### Choose type of updates

The scale set also needs to know how to handle updates. Currently, there are three options: `Manual`, `Rolling`, and `Automatic`. For more information on the differences between these modes, see the documentation on [how to upgrade a scale set](./virtual-machine-scale-sets-upgrade-policy.md).

```json
- "properties": {
- "upgradePolicy": {
- "mode": "Manual"
- },
+ "properties": {
+ "upgradePolicy": {
+ "mode": "Manual"
+ },
+ }
```

### Choose VM operating system

The scale set needs to know what operating system to put on the VMs. Here, create the VMs with a fully patched Ubuntu 16.04-LTS image.

```json
- "virtualMachineProfile": {
- "storageProfile": {
- "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "16.04-LTS",
- "version": "latest"
- }
- },
+ "virtualMachineProfile": {
+ "storageProfile": {
+ "imageReference": {
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "16.04-LTS",
+ "version": "latest"
+ }
+ },
+ }
```

### Specify computerNamePrefix

The scale set deploys multiple VMs. Instead of specifying each VM name, specify `computerNamePrefix`. The scale set appends an index to the prefix for each VM, so VM names have the form `<computerNamePrefix>_<auto-generated-index>`.

In the following snippet, use the parameters from before to set the administrator username and password for all VMs in the scale set. This process uses the `parameters` template function. This function takes in a string that specifies which parameter to refer to and outputs the value for that parameter.

```json
- "osProfile": {
- "computerNamePrefix": "vm",
- "adminUsername": "[parameters('adminUsername')]",
- "adminPassword": "[parameters('adminPassword')]"
- },
+ "osProfile": {
+ "computerNamePrefix": "vm",
+ "adminUsername": "[parameters('adminUsername')]",
+ "adminPassword": "[parameters('adminPassword')]"
+ },
```

### Specify VM network configuration
You can get the ID of the virtual network containing the subnet by using the `resourceId` template function, which takes in the type and name of a resource and returns its fully qualified identifier.
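As an illustration, the value `resourceId` returns for the tutorial's network has the following shape (a sketch in Python; the subscription ID and resource group name are placeholders, not values from the template):

```python
# Shape of the fully qualified resource ID returned by resourceId() for
# the virtual network (placeholder subscription and resource group values).
vnet_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Network/virtualNetworks/myVnet"
)

# The segment after /providers/ matches the resource's <type>/<name> pair.
print(vnet_id.split("/providers/")[1])  # Microsoft.Network/virtualNetworks/myVnet
```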
However, the identifier of the virtual network is not enough. Provide the specific subnet that the scale set VMs should be in. To do this, concatenate `/subnets/mySubnet` to the ID of the virtual network. The result is the fully qualified ID of the subnet. Do this concatenation with the `concat` function, which takes in a series of strings and returns their concatenation.

```json
- "networkProfile": {
- "networkInterfaceConfigurations": [
- {
- "name": "myNic",
- "properties": {
- "primary": "true",
- "ipConfigurations": [
- {
- "name": "myIpConfig",
- "properties": {
- "subnet": {
- "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', 'myVnet'), '/subnets/mySubnet')]"
- }
- }
- }
- ]
- }
- }
- ]
- }
- }
- }
- }
- ]
-}
-
+ "networkProfile": {
+ "networkInterfaceConfigurations": [
+ {
+ "name": "myNic",
+ "properties": {
+ "primary": "true",
+ "ipConfigurations": [
+ {
+ "name": "myIpConfig",
+ "properties": {
+ "subnet": {
+ "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', 'myVnet'), '/subnets/mySubnet')]"
+ }
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
```

## Next steps
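Before deploying, the overall shape assembled in this tutorial can be sanity-checked with a short script (a sketch only; resource bodies are trimmed to the required properties, and the schema URL is the standard deployment-template schema):

```python
import json

# Skeleton mirroring the structure built above; resource bodies trimmed.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "adminUsername": {"type": "string"},
        "adminPassword": {"type": "securestring"},
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Network/virtualNetworks",
            "name": "myVnet",
            "apiVersion": "2018-11-01",
            "location": "[resourceGroup().location]",
        },
        {
            "type": "Microsoft.Compute/virtualMachineScaleSets",
            "name": "myScaleSet",
            "apiVersion": "2019-03-01",
            "location": "[resourceGroup().location]",
            "dependsOn": ["Microsoft.Network/virtualNetworks/myVnet"],
        },
    ],
}

# Every resource needs type, name, apiVersion, and location, and every
# dependsOn entry must name another resource as <type>/<name>.
names = {r["type"] + "/" + r["name"] for r in template["resources"]}
for resource in template["resources"]:
    assert {"type", "name", "apiVersion", "location"} <= resource.keys()
    for dependency in resource.get("dependsOn", []):
        assert dependency in names, dependency

print(json.dumps(template)[:11])  # {"$schema":
```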
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
This scope is integrated with [Update Manager](../update-center/overview.md), wh
- A minimum of 1 hour and 30 minutes is required for the maintenance window.
- The value of **Repeat** should be at least 6 hours.
+In rare cases, if the platform catch-up host update window coincides with the guest (VM) patching window and the guest patching window doesn't get sufficient time to run after the host update, the system shows a **Schedule timeout, waiting for an ongoing update to complete the resource** error, because the platform allows only a single update at a time.
+>[!IMPORTANT]
+> The minimum maintenance window has been increased from 1 hour 10 minutes to 1 hour 30 minutes, while the minimum repeat value has been set to 6 hours for new schedules. **Your existing schedules are not affected; however, we strongly recommend updating them to meet these new requirements.** To learn more, check out [Update Manager and scheduled patching](../update-center/scheduled-patching.md).

> [!NOTE]
-> In rare cases if platform catchup host update window happens to coincide with the guest (VM) patching window and if the guest patching window don't get sufficient time to execute after host update then the system would show **Schedule timeout, waiting for an ongoing update to complete the resource** error since only a single update is allowed by the platform at a time.
+> If you move a VM to a different resource group or subscription, the scheduled patching for the VM stops working as this scenario is currently unsupported by the system.
## Shut Down Machines
virtual-machines Move Virtual Machines Regional Zonal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-virtual-machines-regional-zonal-portal.md
To select the VMs for the move, follow these steps:
:::image type="content" source="./media/tutorial-move-regional-zonal/availability-scaling.png" alt-text="Screenshot of Availability + scaling option.":::

   Alternatively, in the **DemoTestVM1** overview pane, you can select **Availability + scale** > **Availability + scaling**.
- :::image type="content" source="./media/tutorial-move-regional-zonal/availability-scaling-home.png" alt-text="Screenshot of Availability + scaling homepage.":::
+ :::image type="content" source="./media/tutorial-move-regional-zonal/scaling-pane.png" alt-text="Screenshot of Availability + scaling pane.":::
+
### Select the target availability zones
To select the target availability zones, follow these steps:
1. Under **Target availability zone**, select the desired target availability zones for the VM. For example, Zone 1.
+ :::image type="content" source="./media/tutorial-move-regional-zonal/availability-scaling-home.png" alt-text="Screenshot of Availability + scaling homepage.":::
>[!Important]
>If you select an unsupported VM to move, the validation fails. In this case, you must restart the workflow with the correct selection of VMs. Refer to the [Support Matrix](../reliability/migrate-vm.md#support-matrix) to learn more about unsupported VM types.
To select the target availability zones, follow these steps:
To review the properties of the VM before you commit the move, follow these steps:

1. On the **Review properties** pane, review the VM properties.
+
#### VM properties

Find more information on the impact of the move on the VM properties.
virtual-machines Msv2 Mdsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/msv2-mdsv2-series.md
The Msv2 and Mdsv2 Medium Memory VM Series features Intel® Xeon® Platinum 8280
## Msv2 Medium Memory Diskless
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network egress bandwidth (Mbps) |
||||||||||
| Standard_M32ms_v2 | 32 | 875 | 0 | 32 | 20000/500 | 40000/1000 | 8 | 8000 |
| Standard_M64s_v2 | 64 | 1024 | 0 | 64 | 40000/1000 | 80000/2000 | 8 | 16000 |
The Msv2 and Mdsv2 Medium Memory VM Series features Intel® Xeon® Platinum 8280
## Mdsv2 Medium Memory with Disk
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disk | Max cached and temp storage throughput: IOPS / MBps | Burst cached and temp storage throughput: IOPS/MBps<sup>1</sup> | Max uncached disk throughput: IOPS/MBps | Burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disk | Max cached and temp storage throughput: IOPS / MBps | Burst cached and temp storage throughput: IOPS/MBps<sup>1</sup> | Max uncached disk throughput: IOPS/MBps | Burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network egress bandwidth (Mbps) |
||||||||||||
| Standard_M32dms_v2 | 32 | 875 | 1024 | 32 | 40000/400 | 40000/1000 | 20000/500 | 40000/1000 | 8 | 8000 |
| Standard_M64ds_v2 | 64 | 1024 | 2048 | 64 | 80000/800 | 80000/2000 | 40000/1000 | 80000/2000 | 8 | 16000 |
virtual-machines States Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/states-billing.md
The following table provides a description of each instance state and indicates whether that state is billed for instance usage.
Example of PowerState in JSON:

```json
- {
- "code": "PowerState/running",
- "level": "Info",
- "displayStatus": "VM running"
- }
+{
+ "code": "PowerState/running",
+ "level": "Info",
+ "displayStatus": "VM running"
+}
```

## Provisioning states
virtual-network-manager Create Virtual Network Manager Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-bicep.md
resource connectivityConfigurationMesh 'Microsoft.Network/networkManagers/connec
#### Deployment Script
-In order to deploy the configuration to the target network group, a Deployment Script is used to call the `Deploy-AzNetworkManagerCommit` PowerShell command. In addition to the Deployment Script, a User Assigned Identity is created and granted the 'Contributor' role on the target resources group.
+In order to deploy the configuration to the target network group, a Deployment Script is used to call the `Deploy-AzNetworkManagerCommit` PowerShell command. The Deployment Script needs an identity with sufficient permissions to execute the PowerShell script against the Virtual Network Manager, so the Bicep template creates a User Assigned Managed Identity and grants it the 'Contributor' role on the target resource group. For more information on Deployment Scripts and associated identities, see [Use deployment scripts in ARM templates](../azure-resource-manager/templates/deployment-script-template.md).
```bicep
@description('Create a Deployment Script resource to perform the commit/deployment of the Network Manager connectivity configuration.')
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Previously updated : 08/03/2023 Last updated : 09/27/2023
In Azure, VNet peering and connected groups are two methods of establishing conn
### Do security admin rules apply to Azure Private Endpoints?

Currently, security admin rules don't apply to Azure Private Endpoints that fall under the scope of a virtual network managed by Azure Virtual Network Manager.
-### How can I explicitly allow Azure SQL Managed Instance traffic before having deny rules?
-
-Azure SQL Managed Instance has some network requirements. If your security admin rules can block the network requirements, you can use the below sample rules to allow SQLMI traffic with higher priority than the deny rules that can block the traffic of SQL Managed Instance.
-
-#### Inbound rules
-
-| Port | Protocol | Source | Destination | Action |
-| - | -- | | -- | |
-| 9000, 9003, 1438, 1440, 1452 | TCP | SqlManagement | **VirtualNetwork** | Allow |
-| 9000, 9003 | TCP | CorpnetSaw | **VirtualNetwork** | Allow |
-| 9000, 9003 | TCP | CorpnetPublic | **VirtualNetwork** | Allow |
-| Any | Any | **VirtualNetwork** | **VirtualNetwork** | Allow |
-| Any | Any | **AzureLoadBalancer** | **VirtualNetwork** | Allow |
#### Outbound rules
virtual-network Ipv6 Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-virtual-machine-scale-set.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) Virtual Machine
The only step that is different from individual VMs is creating the network interface (NIC) configuration that uses the virtual machine scale set resource: networkProfile/networkInterfaceConfigurations. The JSON structure is similar to that of the Microsoft.Network/networkInterfaces object used for individual VMs, with the addition of setting the NIC and the IPv4 IpConfiguration as the primary interface using the **"primary": true** attribute, as seen in the following example:

```json
- "networkProfile": {
- "networkInterfaceConfigurations": [
- {
- "name": "[variables('nicName')]",
- "properties": {
- "primary": true,
+ "networkProfile": {
+ "networkInterfaceConfigurations": [
+ {
+ "name": "[variables('nicName')]",
+ "properties": {
+ "primary": true,
          "networkSecurityGroup": {
            "id": "[resourceId('Microsoft.Network/networkSecurityGroups','VmssNsg')]"
- },
- "ipConfigurations": [
- {
- "name": "[variables('ipConfigName')]",
- "properties": {
- "primary": true,
- "subnet": {
- "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'MyvirtualNetwork','Mysubnet')]"
- },
- "privateIPAddressVersion":"IPv4",
- "publicipaddressconfiguration": {
- "name": "pub1",
- "properties": {
- "idleTimeoutInMinutes": 15
- }
- },
- "loadBalancerBackendAddressPools": [
- {
- "id": "[resourceId('Microsoft.Network/loadBalancers/backendAddressPools', 'loadBalancer', 'bePool'))]"
- }
- ],
- "loadBalancerInboundNatPools": [
- {
- "id": "[resourceId('Microsoft.Network/loadBalancers/inboundNatPools', 'loadBalancer', 'natPool')]"
- }
- ]
- }
- },
- {
- "name": "[variables('ipConfigNameV6')]",
- "properties": {
- "subnet": {
- "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets','MyvirtualNetwork','Mysubnet')]"
- },
- "privateIPAddressVersion":"IPv6",
- "loadBalancerBackendAddressPools": [
- {
- "id": "[resourceId('Microsoft.Network/loadBalancers/backendAddressPools', 'loadBalancer','bePoolv6')]"
- }
- ],
- }
- }
- ]
- }
+ },
+ "ipConfigurations": [
+ {
+ "name": "[variables('ipConfigName')]",
+ "properties": {
+ "primary": true,
+ "subnet": {
+ "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'MyvirtualNetwork','Mysubnet')]"
+ },
+ "privateIPAddressVersion":"IPv4",
+ "publicipaddressconfiguration": {
+ "name": "pub1",
+ "properties": {
+ "idleTimeoutInMinutes": 15
+ }
+ },
+ "loadBalancerBackendAddressPools": [
+ {
+                  "id": "[resourceId('Microsoft.Network/loadBalancers/backendAddressPools', 'loadBalancer', 'bePool')]"
+ }
+ ],
+ "loadBalancerInboundNatPools": [
+ {
+ "id": "[resourceId('Microsoft.Network/loadBalancers/inboundNatPools', 'loadBalancer', 'natPool')]"
+ }
+ ]
}
- ]
- }
-
+ },
+ {
+ "name": "[variables('ipConfigNameV6')]",
+ "properties": {
+ "subnet": {
+ "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets','MyvirtualNetwork','Mysubnet')]"
+ },
+ "privateIPAddressVersion":"IPv6",
+ "loadBalancerBackendAddressPools": [
+ {
+ "id": "[resourceId('Microsoft.Network/loadBalancers/backendAddressPools', 'loadBalancer','bePoolv6')]"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
```

## Sample virtual machine scale set template JSON

To deploy a dual stack (IPv4 + IPv6) Virtual Machine Scale Set with a dual stack external Load Balancer and virtual network, view the sample template [here](https://azure.microsoft.com/resources/templates/ipv6-in-vnet-vmss/).
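The key constraint in the NIC configuration shown earlier, namely that the IPv4 `ipConfiguration` is the primary one while the IPv6 configuration is not, can be sketched as a quick structural check (illustrative configuration names; not part of the template):

```python
# Trimmed ipConfigurations mirroring the dual stack NIC above
# (names are illustrative placeholders).
ip_configurations = [
    {"name": "ipconfig-v4", "properties": {"primary": True, "privateIPAddressVersion": "IPv4"}},
    {"name": "ipconfig-v6", "properties": {"privateIPAddressVersion": "IPv6"}},
]

# Exactly one configuration is primary, and it must be the IPv4 one.
primaries = [c for c in ip_configurations if c["properties"].get("primary")]
assert len(primaries) == 1
assert primaries[0]["properties"]["privateIPAddressVersion"] == "IPv4"
print("dual stack NIC shape OK")
```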