Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md | workspace("AD-B2C-TENANT1").AuditLogs Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. By default, logs are retained for 30 days, but retention duration can be increased to up to two years. Learn how to [manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/cost-logs.md). After you select the pricing tier, you can [Change the data retention period](../azure-monitor/logs/data-retention-archive.md). +## Disable monitoring data collection ++To stop collecting logs to your Log Analytics workspace, delete the diagnostic settings you created. You'll continue to incur charges for retaining log data you've already collected into your workspace. If you no longer need the monitoring data you've collected, you can delete your Log Analytics workspace and the resource group you created for Azure Monitor. Deleting the Log Analytics workspace deletes all data in the workspace and prevents you from incurring additional data retention charges. ++## Delete Log Analytics workspace and resource group ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Make sure you're using the directory that contains your *Azure AD* tenant: + 1. Select the **Directories + subscriptions** icon in the portal toolbar. + 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select the **Switch** button next to it. +1. Choose the resource group that contains the Log Analytics workspace. This example uses a resource group named _azure-ad-b2c-monitor_ and a Log Analytics workspace named `AzureAdB2C`. +1. [Delete the Log Analytics workspace](../azure-monitor/logs/delete-workspace.md#azure-portal). +1. Select the **Delete** button to delete the resource group. ## Next steps - Find more samples in the Azure AD B2C [SIEM gallery](https://aka.ms/b2csiem). |
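The entry above references querying the B2C tenant's `AuditLogs` table in a linked Log Analytics workspace (for example, the cross-workspace form `workspace("AD-B2C-TENANT1").AuditLogs`). As a rough illustration of pulling the same data programmatically, here is a minimal sketch using the `azure-identity` and `azure-monitor-query` Python packages; the workspace ID and the seven-day window are placeholder assumptions, not values from the article.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# Hypothetical workspace ID of the Log Analytics workspace that receives
# the Azure AD B2C diagnostic settings (replace with your own).
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

client = LogsQueryClient(DefaultAzureCredential())

# Summarize B2C audit events by operation over the last 7 days.
query = "AuditLogs | summarize count() by OperationName"
response = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query=query,
    timespan=timedelta(days=7),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
else:
    # Partial results still expose whatever rows did come back.
    print("Query did not fully succeed:", response.status)
```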
active-directory-b2c | Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md | At the moment, the following countries/regions have the local data residence opt #### What do I need to do? -If you have an existing Azure AD B2C tenant, you need to opt in to start using Go-Local add-on. If you're creating a new Azure AD B2C tenant, you can enable Go-Local add-on when you create it. Learn how to [create your Azure AD B2C](tutorial-create-tenant.md) tenant. +|Tenant status | What to do | +|--|--| +| I have an existing tenant | You need to opt in to start using Go-Local add-on by using the steps in [Activate Go-Local add-on](tutorial-create-tenant.md#activate-azure-ad-b2c-go-local-add-on). | +| I'm creating a new tenant | You enable Go-Local add-on when you create your new Azure AD B2C tenant. Learn how to [create your Azure AD B2C](tutorial-create-tenant.md) tenant.| ## EU Data Boundary |
active-directory-b2c | Tutorial Create Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md | Before you create your Azure AD B2C tenant, you need to take the following consi - By default, each tenant can accommodate a total of **1.25 million** objects (user accounts and applications), but you can increase this limit to **5.25 million** objects when you add and verify a custom domain. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md). However, if you created your tenant before **September 2022**, this limit doesn't affect you, and your tenant will retain the size allocated to it at creation, that is, **50 million** objects. Learn how to [read your tenant usage](microsoft-graph-operations.md#tenant-usage). -- If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant first](./faq.yml?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant-). A role of at least *Subscription Administrator* is required. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name.+- If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant](./faq.yml?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant-) before you try again. You need a role of at least *Subscription Administrator*. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name. ## Prerequisites Before you create your Azure AD B2C tenant, you need to take the following consi - For **Organization name**, enter a name for your Azure AD B2C tenant. - For **Initial domain name**, enter a domain name for your Azure AD B2C tenant.- - For **Location**, select your country/region from the list. If the country/region you select has a [Go-Local add-on](data-residency.md#go-local-add-on) option, such as Japan or Australia, and you want to store your data exclusively within that country/region, select the **Store Azure AD Core Store data, components and service data in the location selected above** checkbox. Go-Local add-on is a paid add-on whose charge is added to your Azure AD B2C Premium P1 or P2 licenses charges, see [Billing model](billing.md#about-go-local-add-on). You can't change the data residency region after you create your Azure AD B2C tenant. + - For **Location**, select your country/region from the list. If the country/region you select has a [Go-Local add-on](data-residency.md#go-local-add-on) option, such as Japan or Australia, and you want to store your data exclusively within that country/region, select the **Store Azure AD Core Store data and Azure AD components and service data in the location selected above** checkbox. Go-Local add-on is a paid add-on whose charge is added to your Azure AD B2C Premium P1 or P2 licenses charges, see [Billing model](billing.md#about-go-local-add-on). You can't change the data residency region after you create your Azure AD B2C tenant. - For **Subscription**, select your subscription from the list. - For **Resource group**, select or search for the resource group that will contain the tenant. 
You can link multiple Azure AD B2C tenants to a single Azure subscription for bi > [!NOTE] > When an Azure AD B2C directory is created, an application called `b2c-extensions-app` is automatically created inside the new directory. Do not modify or delete it. The application is used by Azure AD B2C for storing user data. Learn more about [Azure AD B2C: Extensions app](extensions-app.md). +## Activate Azure AD B2C Go-Local add-on ++Azure AD B2C allows you to activate Go-Local add-on on an existing tenant as long as your tenant stores data in a country/region that has a local data residency option. To opt in to Go-Local add-on, use the following steps: ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Make sure you're using the directory that contains your Azure AD B2C tenant: ++ 1. In the Azure portal toolbar, select the **Directories + subscriptions** (:::image type="icon" source="./../active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false":::) icon. + + 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it. + +1. In the Azure portal, search for and select **Azure AD B2C**. ++1. On the tenant management page that appears, at the top of the page, select the **Enable data residency** link. ++ :::image type="content" source="media/tutorial-create-tenant/opt-in-go-local-add-on.png" alt-text="Screenshot of opt in to Azure AD B2C Go-Local add-on in Azure portal."::: ++1. On the **Data residency** pane that appears, select the **Store my directory and Azure AD data in \<Country\>** checkbox, then select the **Save** button. ++1. Close the **Data residency** pane. + ## Select your B2C tenant directory To start using your new Azure AD B2C tenant, you need to switch to the directory that contains the tenant: |
active-directory-domain-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md | Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
active-directory | Concept Authentication Default Enablement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md | The following table lists each setting that can be set to Microsoft managed and | [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Disabled |-| [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Disabled | +| [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled | As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication. |
active-directory | How To Mfa Authenticator Lite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md | Microsoft Authenticator Lite is another surface for Azure Active Directory (Azur Users receive a notification in Outlook mobile to approve or deny sign-in, or they can copy a TOTP to use during sign-in. >[!NOTE]->This is an important security enhancement for users authenticating via telecom transports. This feature is currently in the state 'Microsoft managed'. Until June 26, leaving the feature set to 'Microsoft managed' will have no impact on your users and the feature will remain turned off unless you explicitly change the state to enabled. The Microsoft managed value of this feature will be changed from 'disabled' to 'enabled' on June 26. We have made some changes to the feature configuration, so if you made an update before GA (5/17), please validate that the feature is in the correct state for your tenant prior to June 26. If you do not wish for this feature to be enabled on June 26, move the state to 'disabled' or set users to include and exclude groups. +>This is an important security enhancement for users authenticating via telecom transports. On June 26, the Microsoft managed value of this feature changed from 'disabled' to 'enabled'. If you no longer wish for this feature to be enabled, move the state from 'default' to 'disabled' or set users to include and exclude groups. ## Prerequisites -- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API.+- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the modern Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API. >[!TIP] >We recommend that you also enable [system-preferred multifactor authentication (MFA)](concept-system-preferred-multifactor-authentication.md) when you enable Authenticator Lite. With system-preferred MFA enabled, users try to sign-in with Authenticator Lite before they try less secure telephony methods like SMS or voice call. Users receive a notification in Outlook mobile to approve or deny sign-in, or th ## Enable Authenticator Lite -By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings). Until June 26, leaving the feature set to 'Microsoft managed' will have no impact on your users and the feature will remain turned off unless you explicitly change the state to enabled. The Microsoft managed value of this feature will be changed from 'disabled' to 'enabled' on June 26. We have made some changes to the feature configuration, so if you made an update before GA (5/17), please validate that the feature is in the correct state for your tenant prior to June 26. If you do not wish for this feature to be enabled on June 26, move the state to 'disabled' or set users to include and exclude groups. +By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings). 
On June 26, the Microsoft managed value of this feature changed from 'disabled' to 'enabled' -### Enablement Authenticator Lite in Azure portal UX +### Disabling Authenticator Lite in Azure portal UX -To enable Authenticator Lite in the Azure portal, complete the following steps: +To disable Authenticator Lite in the Azure portal, complete the following steps: 1. In the Azure portal, click Azure Active Directory > Security > Authentication methods > Microsoft Authenticator. In the Entra admin center, on the sidebar select Azure Active Directory > Protect & Secure > Authentication methods > Microsoft Authenticator. - 2. On the Enable and Target tab, click Yes and All users to enable the policy for everyone or add selected users and groups. Set the Authentication mode for these users/groups to Any or Push. + 2. On the Enable and Target tab, click Yes and All users to enable the Authenticator policy for everyone or add selected users and groups. Set the Authentication mode for these users/groups to Any or Push. Only users who are enabled for Microsoft Authenticator here can be enabled to use Authenticator Lite for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator downloaded on the same device Outlook is downloaded on will not be prompted to register for Authenticator Lite in Outlook. <img width="1112" alt="Entra portal Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png"> - 3. On the Configure tab, for **Microsoft Authenticator on companion applications**, change Status to Enabled, choose who to include or exclude from Authenticator Lite, and click Save. + 3. On the Configure tab, for **Microsoft Authenticator on companion applications**, change Status to Disabled, and click Save. <img width="664" alt="Authenticator Lite configuration settings" src="https://user-images.githubusercontent.com/108090297/228603364-53f2581f-a4e0-42ee-8016-79b23e5eff6c.png"> Authenticator Lite enforces number matching in every authentication. If your ten To learn more about verification notifications, see [Microsoft Authenticator authentication method](concept-authentication-authenticator-app.md). ## Common questions+### Are users on the legacy policy eligible for Authenticator Lite? +No, only those users configured for Authenticator app via the modern authentication methods policy are eligible for this experience. If your tenant is currently on the legacy policy and you are interested in this feature, please migrate your users to the modern auth policy. ### Does Authenticator Lite work as a broker app? No, Authenticator Lite is only available for push notifications and TOTP. Users that have Microsoft Authenticator on their device can't register Authentic ### SSPR Notifications TOTP codes from Outlook will work for SSPR, but the push notification will not work and will return an error. -### Authentication Strengths -If you have a configured authentication strength for MFA push, Authenticator Lite will not be allowed. This is a known issue that we are working to resolve. +### Logs are showing additional conditional access evaluations +The conditional access policies are evaluated each time a user opens their Outlook app, in order to determine whether the user is eligible to register for Authenticator Lite. These checks may appear in logs. ## Next steps |
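The entry above mentions that the Authenticator policy can also be edited through the Microsoft Graph API. The following Python sketch shows roughly what disabling the companion app (Authenticator Lite) setting might look like with the `requests` library; the beta endpoint, the `companionAppAllowedState` payload shape, and the access token are assumptions to verify against the Graph authentication methods policy reference before use.

```python
import requests

# Hypothetical placeholder: a Graph token with the
# Policy.ReadWrite.AuthenticationMethod permission is assumed.
ACCESS_TOKEN = "<graph-access-token>"

url = (
    "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/"
    "authenticationMethodConfigurations/MicrosoftAuthenticator"
)

# Assumed payload: set the companion app (Authenticator Lite) feature to 'disabled'.
payload = {
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "featureSettings": {
        "companionAppAllowedState": {
            "state": "disabled"
        }
    },
}

response = requests.patch(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Authenticator Lite (companion app) state updated:", response.status_code)
```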
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | If your organization uses the NPS extension to provide MFA to on-premises applic Trusted IP bypass works only from inside the company intranet. If you select the **All Federated Users** option and a user signs in from outside the company intranet, the user has to authenticate by using multi-factor authentication. The process is the same even if the user presents an AD FS claim. +>[!NOTE] +>If both per-user MFA and Conditional Access policies are configured in the tenant, you will need to add trusted IPs to the Conditional Access policy and update the MFA service settings. + #### User experience inside the corporate network When the trusted IPs feature is disabled, multi-factor authentication is required for browser flows. App passwords are required for older rich-client applications. |
active-directory | Howto Sspr Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md | We recommend that you don't sync your on-prem Active Directory admin accounts wi ### Environments with multiple identity management systems -Some environments have multiple identity management systems. On-premises identity managers, like Oracle AM and SiteMinder, require synchronization with AD for passwords. You can do this using a tool like the Password Change Notification Service (PCNS) with Microsoft Identity Manager (MIM). To find information on this more complex scenario, see the article [Deploy the MIM Password Change Notification Service on a domain controller](/microsoft-identity-manager/deploying-mim-password-change-notification-service-on-domain-controller). +Some environments have multiple identity management systems. On-premises identity managers, like Oracle IAM and SiteMinder, require synchronization with AD for passwords. You can do this using a tool like the Password Change Notification Service (PCNS) with Microsoft Identity Manager (MIM). To find information on this more complex scenario, see the article [Deploy the MIM Password Change Notification Service on a domain controller](/microsoft-identity-manager/deploying-mim-password-change-notification-service-on-domain-controller). ## Plan Testing and Support |
active-directory | Concept Conditional Access Cloud Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md | For more information on how to set up a sample policy for Microsoft Azure Manage > [!TIP] > For Azure Government, you should target the Azure Government Cloud Management API application. +### Microsoft Admin Portals (preview) ++When a Conditional Access policy targets the Microsoft Admin Portals cloud app, the policy is enforced for tokens issued to application IDs of the following Microsoft administrative portals: ++- Microsoft 365 Admin Center +- Exchange admin center +- Azure portal +- Microsoft Entra admin center +- Security and Microsoft Purview compliance portal ++Other Microsoft admin portals will be added over time. ++> [!NOTE] +> The Microsoft Admin Portals app applies to interactive sign-ins to the listed admin portals only. Sign-ins to the underlying resources or services like Microsoft Graph or Azure Resource Manager APIs are not covered by this application. Those resources are protected by the [Microsoft Azure Management](#microsoft-azure-management) app. This enables customers to move along the MFA adoption journey for admins without impacting automation that relies on APIs and PowerShell. When you are ready, Microsoft recommends using a [policy requiring administrators perform MFA always](howto-conditional-access-policy-admin-mfa.md) for comprehensive protection. + ### Other applications Administrators can add any Azure AD registered application to Conditional Access policies. These applications may include: Some applications don't appear in the picker at all. The only way to include the ### All cloud apps -Applying a Conditional Access policy to **All cloud apps** will result in the policy being enforced for all tokens issued to web sites and services. This option includes applications that aren't individually targetable in Conditional Access policy, such as Azure Active Directory. +Applying a Conditional Access policy to **All cloud apps** results in the policy being enforced for all tokens issued to web sites and services. This option includes applications that aren't individually targetable in Conditional Access policy, such as Azure Active Directory. In some cases, an **All cloud apps** policy could inadvertently block user access. These cases are excluded from policy enforcement and include: User actions are tasks that can be performed by a user. Currently, Conditional A - **Register security information**: This user action allows Conditional Access policy to enforce when users who are enabled for combined registration attempt to register their security information. More information can be found in the article, [Combined security information registration](../authentication/concept-registration-mfa-sspr-combined.md). +> [!NOTE] +> When applying a policy targeting user actions for register security information, if the user account is a guest from [Microsoft personal account (MSA)](../external-identities/microsoft-account.md), using the control 'Require multifactor authentication' will require the MSA user to register security information with the organization. If the guest user is from another provider such as [Google](../external-identities/google-federation.md), access will be blocked. 
+ - **Register or join devices**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. It provides granularity in configuring multifactor authentication for registering or joining devices instead of a tenant-wide policy that currently exists. There are three key considerations with this user action: - `Require multifactor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Azure AD device registration or not applicable to Azure AD device registration. - `Client apps`, `Filters for devices` and `Device state` conditions aren't available with this user action since they're dependent on Azure AD device registration to enforce Conditional Access policies. |
active-directory | Howto Remove App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-remove-app.md | Title: "How to: Remove a registered app from the Microsoft identity platform" -description: In this how-to, you learn how to remove an application registered with the Microsoft identity platform. +description: Learn how to remove an application registered with the Microsoft identity platform. Enterprise developers and software-as-a-service (SaaS) providers who have regist In the following sections, you learn how to: -* Remove an application authored by you or your organization -* Remove an application authored by another organization +- Remove an application authored by you or your organization +- Remove an application authored by another organization ## Prerequisites -* An [application registered in your Azure AD tenant](quickstart-register-app.md) +- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +- An [application registered in your Azure AD tenant](quickstart-register-app.md) ## Remove an application authored by you or your organization Applications that you or your organization have registered are represented by bo To delete an application, you must be listed as an owner of the application or have admin privileges. -1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. +1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites. 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which the app is registered. 1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations** and select the application that you want to configure. Once you've selected the app, you'll see the application's **Overview** page. +1. Under **Manage**, select **App registrations** and select the application that you want to configure. Once you've selected the app, you see the application's **Overview** page. 1. From the **Overview** page, select **Delete**. 1. Read the deletion consequences. Check the box if one appears at the bottom of the pane. 1. Select **Delete** to confirm that you want to delete the app. |
active-directory | Howto Restore App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restore-app.md | You can view your deleted applications, restore a deleted application, or perman Neither you nor Microsoft customer support can restore a permanently deleted application or an application deleted more than 30 days ago. -## Required permissions +## Prerequisites + You must have one of the following roles to permanently delete applications. - Global administrator You must have one of the following roles to restore applications. - Global administrator - Application owner -### View your deleted applications +## View your deleted applications + You can see all the applications in a soft deleted state. Only applications deleted less than 30 days ago can be restored. -#### To view your restorable applications -1. Sign in to the [Azure portal](https://portal.azure.com/). -2. Search and select **Azure Active Directory**, select **App registrations**, and then select the **Deleted applications (Preview)** tab. +To view your restorable applications: ++1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites. +1. Search for and select **Azure Active Directory**, select **App registrations**, and then select the **Deleted applications** tab. Review the list of applications. Only applications that have been deleted in the past 30 days are available to restore. If using the App registrations search preview, you can filter by the 'Deleted date' column to see only these applications. ## Restore a recently deleted application -When an app registration is deleted from the organization, the app is in a suspended state, and its configurations are preserved. When you restore an app registration, its configurations are also restored. However, if there were any organization-specific settings in **Enterprise applications** for the application's home tenant, those won't be restored. +When an app registration is deleted from the organization, the app is in a suspended state, and its configurations are preserved. When you restore an app registration, its configurations are also restored. However, if there were any organization-specific settings such as permission consents and user and group assignments for a certain organization stored in **Enterprise applications** for the application's home tenant, they're restored alongside the app registration. -This is because organization-specific settings are stored on a separate object, called the service principal. Settings held on the service principal include permission consents and user and group assignments for a certain organization; these configurations won't be restored when the app is restored. To learn how to restore the service principal with its previous configurations, see [Restore a recently deleted enterprise application](../manage-apps/restore-application.md). +To restore an application: --### To restore an application -1. On the **Deleted applications (Preview)** tab, search for and select one of the applications deleted less than 30 days ago. -2. Select **Restore app registration**. +1. Go to the **Deleted applications** tab. Search for and select one of the applications deleted less than 30 days ago. +1. Select **Restore app registration**. ## Permanently delete an application-You can manually permanently delete an application from your organization. 
A permanently deleted application can't be restored by you, another administrator, or by Microsoft customer support. However, this does not permanently delete the corresponding service principal. A service principal cannot be restored without having an active corresponding application, so the service principal can be manually deleted, which is also permanent. If no action is taken the service principal will be permanently deleted 30 days after deleting the application. -### To permanently delete an application +You can manually permanently delete an application from your organization. A permanently deleted application can't be restored by you, another administrator, or by Microsoft customer support. However, this doesn't permanently delete the corresponding service principal. The service principal can't be restored without having an active corresponding application, so the service principal can be manually deleted, which is also permanent. If no action is taken, the service principal will be permanently deleted 30 days after deleting the application. ++To permanently delete an application: -1. On the **Deleted applications (Preview)** tab, search for and select one of the available applications. -2. Select **Delete permanently**. -3. Read the warning text and select **Yes**. +1. Go to the **Deleted applications** tab. Search for and select one of the available applications. +1. Select **Delete permanently**. +1. Read the warning text and select **Yes**. ## Next steps+ After you've restored or permanently deleted your app, you can: - [Add an application](quickstart-register-app.md). |
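The restore steps above use the Azure portal. If it helps to see an equivalent through Microsoft Graph, the following Python sketch lists soft-deleted app registrations and restores one by its directory object ID via the `directory/deletedItems` endpoints; the access token and the object ID are placeholders, and this is offered as an illustrative alternative rather than the article's own procedure.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Hypothetical placeholder: a token with Application.ReadWrite.All is assumed.
HEADERS = {"Authorization": "Bearer <graph-access-token>"}

# List applications that are still in the soft-deleted (restorable) state.
deleted = requests.get(
    f"{GRAPH}/directory/deletedItems/microsoft.graph.application",
    headers=HEADERS,
    timeout=30,
)
deleted.raise_for_status()
for app in deleted.json().get("value", []):
    print(app["id"], app.get("displayName"))

# Restore one of them by its directory object ID (placeholder value).
object_id = "<deleted-application-object-id>"
restore = requests.post(
    f"{GRAPH}/directory/deletedItems/{object_id}/restore",
    headers=HEADERS,
    timeout=30,
)
restore.raise_for_status()
print("Restored:", restore.json().get("displayName"))
```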
active-directory | Migrate Off Email Claim Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-off-email-claim-authorization.md | This risk of unauthorized access has only been found in multi-tenant apps, as a To secure applications from mistakes with unverified email addresses, all new multi-tenant applications are automatically opted-in to a new default behavior that removes email addresses with unverified domain owners from tokens as of June 2023. This behavior is not enabled for single-tenant applications and multi-tenant applications with previous sign-in activity with domain-owner unverified email addresses. -Depending on your scenario, you may determine that your application's tokens should continue receiving unverified emails. While not recommended for most applications, you may disable the default behavior by setting the `removeUnverifiedEmailClaim` property in the [Authentication Behaviors Microsoft Graph API](/graph/api/resources/authenticationbehaviors). +Depending on your scenario, you may determine that your application's tokens should continue receiving unverified emails. While not recommended for most applications, you may disable the default behavior by setting the `removeUnverifiedEmailClaim` property in the [authenticationBehaviors object of the applications API in Microsoft Graph](/graph/applications-authenticationbehaviors). By setting `removeUnverifiedEmailClaim` to `false`, your application will receive `email` claims that are potentially unverified and subject users to account takeover risk. If you're disabling this behavior in order to not break user login flows, it's highly recommended to migrate to a uniquely identifying token claim mapping as soon as possible, as described in the guidance below. If your application uses `email` (or any other mutable claim) for authorization ## Next steps - To learn more about using claims-based authorization securely, see [Secure applications and APIs by validating claims](claims-validation.md)-- For more information about optional claims, see the [optional claims reference](./optional-claims-reference.md)+- For more information about optional claims, see the [optional claims reference](./optional-claims-reference.md) |
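To make the `removeUnverifiedEmailClaim` change above concrete, here is a hedged Python sketch that patches an application's `authenticationBehaviors` through Microsoft Graph. The beta endpoint, the application object ID, and the access token are assumptions for illustration; setting the property to `false` opts back into receiving potentially unverified `email` claims, which the article recommends against for most apps.

```python
import requests

# Hypothetical placeholders: an application object ID and a Graph token with
# Application.ReadWrite.All are assumed to exist already.
APP_OBJECT_ID = "<application-object-id>"
ACCESS_TOKEN = "<graph-access-token>"

url = (
    "https://graph.microsoft.com/beta/applications/"
    f"{APP_OBJECT_ID}/authenticationBehaviors"
)

# Opt out of the default behavior that strips unverified email claims.
# Most applications should leave the default in place instead.
payload = {"removeUnverifiedEmailClaim": False}

response = requests.patch(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("authenticationBehaviors updated:", response.status_code)
```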
active-directory | Msal Python Token Cache Serialization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-python-token-cache-serialization.md | Title: Custom token cache serialization (MSAL Python) -description: Learn how to serializing the token cache for MSAL for Python +description: Learn how to serialize the token cache using MSAL for Python -In MSAL Python, an in-memory token cache that persists for the duration of the app session, is provided by default when you create an instance of [ClientApplication](https://msal-python.readthedocs.io/en/latest/#confidentialclientapplication). +In Microsoft Authentication Library (MSAL) for Python, an in-memory token cache that persists for the duration of the app session is provided by default when you create an instance of [ClientApplication](/python/api/msal/msal.application.confidentialclientapplication). -Serialization of the token cache, so that different sessions of your app can access it, is not provided "out of the box." That's because MSAL Python can be used in app types that don't have access to the file system--such as Web apps. To have a persistent token cache in a MSAL Python app, you must provide custom token cache serialization. +Serialization of the token cache, so that different sessions of your app can access it, isn't provided "out of the box." MSAL for Python can be used in app types that don't have access to the file system--such as Web apps. To have a persistent token cache in an app that uses MSAL for Python, you must provide custom token cache serialization. -The strategies for serializing the token cache differ depending on whether you are writing a public client application (Desktop), or a confidential client application (web app, web API, or daemon app). +The strategies for serializing the token cache differ depending on whether you're writing a public client application (Desktop), or a confidential client application (web app, web API, or daemon app). ## Token cache for a public client application -Public client applications run on a user's device and manage tokens for a single user. In this case, you could serialize the entire cache into a file. Remember to provide file locking if your app, or another app, can access the cache concurrently. For a simple example of how to serialize a token cache to a file without locking, see the example in the [SerializableTokenCache](https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache) class reference documentation. +Public client applications run on a user's device and manage tokens for a single user. In this case, you could serialize the entire cache into a file. Remember to provide file locking if your app, or another app, can access the cache concurrently. For a simple example of how to serialize a token cache to a file without locking, see the example in the [SerializableTokenCache](/python/api/msal/msal.token_cache.serializabletokencache) class reference documentation. ## Token cache for a Web app (confidential client application) |
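As a companion to the serialization guidance above, the following is a minimal sketch of persisting the token cache for a public client app with MSAL for Python's `SerializableTokenCache`. The file path and client ID are placeholders, and as the article notes, this simple version does no file locking, so it only suits a single-process desktop scenario.

```python
import atexit
import os

import msal

CACHE_PATH = "msal_token_cache.bin"  # placeholder path, no file locking

cache = msal.SerializableTokenCache()
if os.path.exists(CACHE_PATH):
    with open(CACHE_PATH, "r") as f:
        cache.deserialize(f.read())

def _persist_cache():
    # Only write the file if the cache changed during this session.
    if cache.has_state_changed:
        with open(CACHE_PATH, "w") as f:
            f.write(cache.serialize())

atexit.register(_persist_cache)

app = msal.PublicClientApplication(
    "your-client-id",  # placeholder application (client) ID
    authority="https://login.microsoftonline.com/common",
    token_cache=cache,  # the cache is read and updated by token calls
)
```

Token acquisitions made through `app` then read from and write to `cache`, and the serialized state is written back to disk when the process exits.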
active-directory | Tutorial V2 Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-ios.md | -When you've completed the tutorial, your application will accept sign-ins of personal Microsoft accounts (including outlook.com, live.com, and others) and work or school accounts from any company or organization that uses Azure Active Directory. This tutorial is applicable to both iOS and macOS apps. Some steps are different between the two platforms. +When you've completed the tutorial, your application accepts sign-ins of personal Microsoft accounts (including outlook.com, live.com, and others) and work or school accounts from any company or organization that uses Azure Active Directory (Azure AD). This tutorial is applicable to both iOS and macOS apps. Some steps are different between the two platforms. In this tutorial: > [!div class="checklist"]-> * Create an iOS or macOS app project in *Xcode* -> * Register the app in the Azure portal -> * Add code to support user sign-in and sign-out -> * Add code to call the Microsoft Graph API -> * Test the app +> +> - Create an iOS or macOS app project in _Xcode_ +> - Register the app in the Azure portal +> - Add code to support user sign-in and sign-out +> - Add code to call the Microsoft Graph API +> - Test the app ## Prerequisites In this tutorial: ![Shows how the sample app generated by this tutorial works](../../../includes/media/active-directory-develop-guidedsetup-ios-introduction/iosintro.svg) -The app in this tutorial can sign in users and get data from Microsoft Graph on their behalf. This data will be accessed via a protected API (Microsoft Graph API in this case) that requires authorization and is protected by the Microsoft identity platform. +The app in this tutorial can sign in users and get data from Microsoft Graph on their behalf. This data is accessed via a protected API (Microsoft Graph API in this case) that requires authorization and is protected by the Microsoft identity platform. More specifically: -* Your app will sign in the user either through a browser or the Microsoft Authenticator. -* The end user will accept the permissions your application has requested. -* Your app will be issued an access token for the Microsoft Graph API. -* The access token will be included in the HTTP request to the web API. -* Process the Microsoft Graph response. +- Your app signs in the user either through a browser or the Microsoft Authenticator. +- The end user accepts the permissions your application has requested. +- Your app is issued an access token for the Microsoft Graph API. +- The access token is included in the HTTP request to the web API. +- Process the Microsoft Graph response. This sample uses the Microsoft Authentication Library (MSAL) to implement Authentication. MSAL will automatically renew tokens, deliver single sign-on (SSO) between other apps on the device, and manage the account(s). If you're using [Carthage](https://github.com/Carthage/Carthage), install `MSAL` github "AzureAD/microsoft-authentication-library-for-objc" "master" ``` -From a terminal window, in the same directory as the updated _Cartfile_, run the following command to have Carthage update the dependencies in your project. +From a terminal window, in the same directory as the updated _Cartfile_, run the following command to have Carthage update the dependencies in your project. 
iOS: You can also use Git Submodule, or check out the latest release to use as a fram ## Add your app registration -Next, we'll add your app registration to your code. +Next, we add your app registration to your code. First, add the following import statement to the top of the _ViewController.swift_ file and either _AppDelegate.swift_ or _SceneDelegate.swift_: var webViewParameters : MSALWebviewParameters? var currentAccount: MSALAccount? ``` -The only value you modify above is the value assigned to `kClientID` to be your [Application ID](./developer-glossary.md#application-client-id). This value is part of the MSAL Configuration data that you saved during the step at the beginning of this tutorial to register the application in the Azure portal. +The only value you modify is the value assigned to `kClientID` to be your [Application ID](./developer-glossary.md#application-client-id). This value is part of the MSAL Configuration data that you saved during the step at the beginning of this tutorial to register the application in the Azure portal. ## Configure Xcode project settings Add a new keychain group to your project **Signing & Capabilities**. The keychai ## For iOS only, configure URL schemes -In this step, you'll register `CFBundleURLSchemes` so that the user can be redirected back to the app after sign in. By the way, `LSApplicationQueriesSchemes` also allows your app to make use of Microsoft Authenticator. +In this step, you'll register `CFBundleURLSchemes` so that the user can be redirected back to the app after sign in. By the way, `LSApplicationQueriesSchemes` also allows your app to make use of Microsoft Authenticator. In Xcode, open _Info.plist_ as a source code file, and add the following inside of the `<dict>` section. Replace `[BUNDLE_ID]` with the value you used in the Azure portal. If you downloaded the code, the bundle identifier is `com.microsoft.identitysample.MSALiOS`. If you're creating your own project, select your project in Xcode and open the **General** tab. The bundle identifier appears in the **Identity** section. In Xcode, open _Info.plist_ as a source code file, and add the following inside ## Create your app's UI -Now create a UI that includes a button to call the Microsoft Graph API, another to sign out, and a text view to see some output by adding the following code to the `ViewController` class: +Now create a UI that includes a button to call the Microsoft Graph API, another to sign out, and a text view to see some output by adding the following code to the `ViewController` class: ### iOS UI Now, we can implement the application's UI processing logic and get tokens inter MSAL exposes two primary methods for getting tokens: `acquireTokenSilently()` and `acquireTokenInteractively()`. -- `acquireTokenSilently()` attempts to sign in a user and get tokens without user interaction as long as an account is present. `acquireTokenSilently()` require a valid `MSALAccount` which can be retrieved by using one of MSAL's account enumeration APIs. This tutorial uses `applicationContext.getCurrentAccount(with: msalParameters, completionBlock: {})` to retrieve the current account.+- `acquireTokenSilently()` attempts to sign in a user and get tokens without user interaction as long as an account is present. `acquireTokenSilently()` require a valid `MSALAccount`, which can be retrieved by using one of MSAL's account enumeration APIs. This tutorial uses `applicationContext.getCurrentAccount(with: msalParameters, completionBlock: {})` to retrieve the current account. 
- `acquireTokenInteractively()` always shows UI when attempting to sign in the user. It may use session cookies in the browser or an account in the Microsoft authenticator to provide an interactive-SSO experience. Add the following code to the `ViewController` class: #### Get a token interactively -The following code snippet gets a token for the first time by creating an `MSALInteractiveTokenParameters` object and calling `acquireToken`. Next you'll add code that: +The following code snippet gets a token for the first time by creating an `MSALInteractiveTokenParameters` object and calling `acquireToken`. Next you add code that: 1. Creates `MSALInteractiveTokenParameters` with scopes. 2. Calls `acquireToken()` with the created parameters. func acquireTokenInteractively() { The `promptType` property of `MSALInteractiveTokenParameters` configures the authentication and consent prompt behavior. The following values are supported: -- `.promptIfNecessary` (default) - The user is prompted only if necessary. The SSO experience is determined by the presence of cookies in the webview, and the account type. If multiple users are signed in, account selection experience is presented. *This is the default behavior*.+- `.promptIfNecessary` (default) - The user is prompted only if necessary. The SSO experience is determined by the presence of cookies in the webview, and the account type. If multiple users are signed in, account selection experience is presented. _This is the default behavior_. - `.selectAccount` - If no user is specified, the authentication webview presents a list of currently signed-in accounts for the user to select from. - `.login` - Requires the user to authenticate in the webview. Only one account may be signed-in at a time if you specify this value. - `.consent` - Requires the user to consent to the current set of scopes for the request. To acquire an updated token silently, add the following code to the `ViewControl Once you have a token, your app can use it in the HTTP header to make an authorized request to the Microsoft Graph: -| header key | value | -| - | | +| header key | value | +| - | - | | Authorization | Bearer \<access-token> | Add the following code to the `ViewController` class: To enable token caching: 1. Ensure your application is properly signed 1. Go to your Xcode Project Settings > **Capabilities tab** > **Enable Keychain Sharing** 1. Select **+** and enter one of the following **Keychain Groups**:- - iOS: `com.microsoft.adalcache` - - macOS: `com.microsoft.identity.universalstorage` + - iOS: `com.microsoft.adalcache` + - macOS: `com.microsoft.identity.universalstorage` ### Add helper methods+ Add the following helper methods to the `ViewController` class to complete the sample. ### iOS UI: -``` swift +```swift func updateLogging(text : String) { Use following code to read current device configuration, including whether devic ### Multi-account applications -This app is built for a single account scenario. MSAL also supports multi-account scenarios, but it requires more application work. You'll need to create UI to help users select which account they want to use for each action that requires tokens. Alternatively, your app can implement a heuristic to select which account to use by querying all accounts from MSAL. 
For example, see `accountsFromDeviceForParameters:completionBlock:` [API](https://azuread.github.io/microsoft-authentication-library-for-objc/Classes/MSALPublicClientApplication.html#/c:objc(cs)MSALPublicClientApplication(im)accountsFromDeviceForParameters:completionBlock:) +This app is built for a single account scenario. MSAL also supports multi-account scenarios, but it requires more application work. You need to create UI to help users select which account they want to use for each action that requires tokens. Alternatively, your app can implement a heuristic to select which account to use by querying all accounts from MSAL. For example, see `accountsFromDeviceForParameters:completionBlock:` [API](<https://azuread.github.io/microsoft-authentication-library-for-objc/Classes/MSALPublicClientApplication.html#/c:objc(cs)MSALPublicClientApplication(im)accountsFromDeviceForParameters:completionBlock:>) ## Test your app After you sign in, the app will display the data returned from the Microsoft Gra Learn more about building mobile apps that call protected web APIs in our multi-part scenario series. -> [!div class="nextstepaction"] +> [!div class="nextstepaction"] > [Scenario: Mobile application that calls web APIs](scenario-mobile-overview.md)+ |
active-directory | Overview Customers Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/overview-customers-ciam.md | Azure AD for customers represents the convergence of business-to-consumer (B2C) Learn more about the [security and governance](concept-security-customers.md) features that are available in a customer tenant. +## About Azure AD B2C ++If you're a new customer, you might be wondering which solution is a better fit, [Azure AD B2C](../../../active-directory-b2c/index.yml) or Microsoft Entra External ID (preview). Opt for the current Azure AD B2C product if: ++- You have an immediate need to deploy a production-ready build for customer-facing apps. + + > [!NOTE] + > Keep in mind that the next generation Microsoft Entra External ID platform represents the future of CIAM for Microsoft, and rapid innovation, new features and capabilities will be focused on this platform. By choosing the next generation platform from the start, you will receive the benefits of rapid innovation and a future-proof architecture. ++Opt for the next generation Microsoft Entra External ID platform if: ++- You're starting fresh building identities into apps or you're in the early stages of product discovery. +- The benefits of rapid innovation, new features and capabilities are a priority. + ## Next steps - Learn more about [planning for Azure AD for customers](concept-planning-your-solution.md). |
active-directory | Tenant Restrictions V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md | Suppose you use tenant restrictions to block access by default, but you want to :::image type="content" source="media/tenant-restrictions-v2/tenant-restrictions-external-users-organizational.png" alt-text="Screenshot showing selecting the external users allow access selections."::: -1. Under **Applies to**, choose either **All <your tenant> users and groups** or **Select <your tenant> users and groups**. If you choose **Select <your tenant> users and groups**, perform these steps for each user or group you want to add: +1. Under **Applies to**, choose either **All <organization> users and groups** or **Select <organization> users and groups**. If you choose **Select <organization> users and groups**, perform these steps for each user or group you want to add: - Select **Add external users and groups**. - In the **Select** pane, type the user name or group name in the search box. Suppose you use tenant restrictions to block access by default, but you want to - If you want to add more, select **Add** and repeat these steps. When you're done selecting the users and groups you want to add, select **Submit**. > [!NOTE]- > For our Microsoft Accounts example, we select **All Contoso users and groups**. + > For our Microsoft Accounts example, we select **All Microsoft Accounts users and groups**. :::image type="content" source="media/tenant-restrictions-v2/tenant-restrictions-external-users-organizational-applies-to.png" alt-text="Screenshot showing selecting the external users and groups selections."::: |
active-directory | Check Workflow Execution Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-workflow-execution-scope.md | -# Check execution user scope of a workflow +# Check execution user scope of a workflow + Workflow scheduling will automatically process the workflow for users meeting the workflow's execution conditions. This article walks you through the steps to check the users who fall into the execution scope of a workflow. For more information about execution conditions, see: [workflow basics](../governance/understanding-lifecycle-workflows.md#workflow-basics). |
active-directory | Lifecycle Workflow Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md | -# Lifecycle Workflows Custom Task Extension (Preview) +# Lifecycle Workflows custom task extension Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you're able to utilize the concept of custom task extensions to call out to external systems as part of a workflow. For example, when a user joins your organization, you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom task extensions to call out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md). The response can be authorized in one of the following ways: The high-level steps for the Azure Logic Apps integration are as follows: > [!NOTE]-> Creating a custom task extension and logic app through the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md). +> Creating a custom task extension and logic app through the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions](trigger-custom-task.md). - **Create a consumption-based Azure Logic App**: A consumption-based Azure Logic App that the custom task extension calls.-- **Configure the Azure Logic App so it's compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension. For more information, see: [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md)+- **Configure the Azure Logic App so it's compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension. For more information, see: [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md) - **Build your custom business logic within your Azure Logic App**: Set up your business logic within the Azure Logic App using Logic App designer. - **Create a lifecycle workflow customTaskExtension which holds necessary information about the Azure Logic App**: Creating a custom task extension that references the configured Azure Logic App. - **Update or create a Lifecycle workflow with the "Run a custom task extension" task, referencing your created customTaskExtension**: Adding the newly created custom task extension to a new workflow, or updating the information to an existing workflow. |
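For the "create a lifecycle workflow customTaskExtension" step listed above, a request along the following lines can register the extension against a Logic App through Microsoft Graph. This is a sketch only: the Lifecycle Workflows endpoint version, the `@odata.type` values, the subscription, resource group, Logic App name, and access token are all assumptions, so check the payload against the current customTaskExtension reference before relying on it.

```python
import requests

# Hypothetical placeholders describing the Logic App created for the extension.
payload = {
    "displayName": "Assign Teams number",
    "description": "Calls a Logic App that assigns a Teams phone number",
    "endpointConfiguration": {
        "@odata.type": "#microsoft.graph.identityGovernance.logicAppTriggerEndpointConfiguration",
        "subscriptionId": "<azure-subscription-id>",
        "resourceGroupName": "<resource-group-name>",
        "logicAppWorkflowName": "<logic-app-name>",
    },
    # Assumed shape: how the workflow authenticates its call to the Logic App.
    "authenticationConfiguration": {
        "@odata.type": "#microsoft.graph.azureAdTokenAuthentication",
        "resourceId": "<logic-app-resource-application-id>",
    },
    # A callback configuration would be added here if the workflow should
    # wait for a response from the Logic App instead of fire-and-forget.
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/customTaskExtensions",
    json=payload,
    headers={"Authorization": "Bearer <graph-access-token>"},  # placeholder token
    timeout=30,
)
response.raise_for_status()
print("Created customTaskExtension with ID:", response.json().get("id"))
```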
active-directory | Lifecycle Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md | For Microsoft Graph, the parameters for the **Generate Temporary Access Pass and ``` +### Send email to notify manager of user move ++When a user moves within your organization, Lifecycle Workflows allow you to send an email to the user's manager notifying them of the move. You're also able to customize the email that is sent to the user's manager. +++The Azure AD prerequisites to run the **Send email to notify manager of user move** task are: ++- A populated manager attribute for the user. +- A populated manager's mail attribute for the user. ++For Microsoft Graph, the parameters for the **Send email to notify manager of user move** task are as follows: ++|Parameter |Definition | +||| +|category | mover | +|displayName | Send email to notify manager of user move (Customizable by user) | +|description | Send email to notify user's manager of user move (Customizable by user) | +|taskDefinitionId | aab41899-9972-422a-9d97-f626014578b7 | ++```Example for usage within the workflow +{ + "category": "mover", + "continueOnError": true, + "displayName": "Send email to notify manager of user move", + "description": "Send email to notify user's manager of user move", + "isEnabled": true, + "taskDefinitionId": "aab41899-9972-422a-9d97-f626014578b7", + "arguments": [ + { + "name": "cc", + "value": "b47471b9-af8f-4a5a-bfa2-b78e82398f6e, a7a23ce0-909b-40b9-82cf-95d31f0aaca2" + }, + { + "name": "customSubject", + "value": "Reminder that {{userDisplayName}} has moved." + }, + { + "name": "customBody", + "value": "Hello {{managerDisplayName}}. \nThis is a reminder that {{userDisplayName}} has moved roles in the organization." + }, + { + "name": "locale", + "value": "en-us" + } + ] +} ++``` ++### Request user access package assignment ++Allows you to request an access package assignment for users. Access packages are bundles of resources, with specific access, that a user would need to accomplish tasks. For more information on access packages, see [What are access packages and what resources can I manage with them?](entitlement-management-overview.md#what-are-access-packages-and-what-resources-can-i-manage-with-them). ++You're able to customize the task name and task description for this task. You must also select an access package that is provided to the user, and the access package policy. ++For Microsoft Graph, the parameters for the **Request user access package assignment** task are as follows: ++|Parameter |Definition | +||| +|category | joiner | +|displayName | Request user access package assignment (Customizable by user) | +|description | Request user assignment to selected access package (Customizable by user) | +|taskDefinitionId | c1ec1e76-f374-4375-aaa6-0bb6bd4c60be | +|arguments | Argument contains two name parameters: "assignmentPolicyId" and "accessPackageId". 
| +++```Example for usage within the workflow +{ + "category": "joiner", + "description": "Request user assignment to selected access package", + "displayName": "Request user access package assignment", + "id": "c1ec1e76-f374-4375-aaa6-0bb6bd4c60be", + "parameters": [ + { + "name": "assignmentPolicyId", + "values": [], + "valueType": "string" + }, + { + "name": "accessPackageId", + "values": [], + "valueType": "string" + } + ] + } ++``` + ### Add user to groups For Microsoft Graph, the parameters for the **Remove users from all teams** task ``` +### Remove access package assignment for user ++Allows you to remove an access package assignment from users. Access packages are bundles of resources, with specific access, that a user would need to accomplish tasks. For more information on access packages, see [What are access packages and what resources can I manage with them?](entitlement-management-overview.md#what-are-access-packages-and-what-resources-can-i-manage-with-them). ++You're able to customize the task name and description for this task in the Azure portal. You must also select the access package that you want to unassign from users. ++For Microsoft Graph, the parameters for the **Remove access package assignment for user** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Remove access package assignment for user (Customizable by user) | +|description | Remove user assignment of selected access package (Customizable by user) | +|taskDefinitionId | 4a0b64f2-c7ec-46ba-b117-18f262946c50 | +|arguments | The argument contains one name parameter, "accessPackageId". | +++```Example for usage within the workflow +{ + "category": "leaver", + "description": "Remove user assignment of selected access package", + "displayName": "Remove access package assignment for user", + "id": "4a0b64f2-c7ec-46ba-b117-18f262946c50", + "parameters": [ + { + "name": "accessPackageId", + "values": [], + "valueType": "string" + } + ] +} +``` ++### Remove all access package assignments for user ++Allows you to remove all access package assignments from users. Access packages are bundles of resources, with specific access, that a user would need to accomplish tasks. For more information on access packages, see [What are access packages and what resources can I manage with them?](entitlement-management-overview.md#what-are-access-packages-and-what-resources-can-i-manage-with-them). ++You're able to customize the task name and description for this task in the Azure portal. ++For Microsoft Graph, the parameters for the **Remove all access package assignments for user** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Remove all access package assignments for user (Customizable by user) | +|description | Remove all access packages assigned to the user (Customizable by user) | +|taskDefinitionId | 42ae2956-193d-4f39-be06-691b8ac4fa1d | +++```Example for usage within the workflow +{ + "category": "leaver", + "description": "Remove all access packages assigned to the user", + "displayName": "Remove all access package assignments for user", + "id": "42ae2956-193d-4f39-be06-691b8ac4fa1d", + "parameters": [] +} +``` +++### Cancel all pending access package assignment requests for user ++Allows you to cancel all pending access package assignment requests for users. Access packages are bundles of resources, with specific access, that a user would need to accomplish tasks. 
For more information on access packages, see [What are access packages and what resources can I manage with them?](entitlement-management-overview.md#what-are-access-packages-and-what-resources-can-i-manage-with-them). ++You're able to customize the task name and description for this task in the Azure portal. ++For Microsoft Graph, the parameters for the **Cancel all pending access package assignment requests for user** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Cancel pending access package assignment requests for user (Customizable by user) | +|description | Cancel all pending access packages assignment requests for the user (Customizable by user) | +|taskDefinitionId | 498770d9-bab7-4e4c-b73d-5ded82a1d0b3 | +++```Example for usage within the workflow +{ + "category": "leaver", + "description": "Cancel all pending access packages assignment requests for the user", + "displayName": "Cancel pending access package assignment requests for user", + "id": "498770d9-bab7-4e4c-b73d-5ded82a1d0b3", + "parameters": [] +} +``` ++ ### Remove all license assignments from User Allows all direct license assignments to be removed from a user. For group-based license assignments, you would run a task to remove the user from the group the license assignment is part of. For Microsoft Graph, the parameters for the **Delete User** task are as follows: ``` -## Send email to manager before user's last day +### Send email to manager before user's last day Allows an email to be sent to a user's manager before their last day. You're able to customize the task name and the description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/send-email-before-last-day.png" alt-text="Screenshot of Workflows task: send email before user last day task."::: For Microsoft Graph the parameters for the **Send email before user's last day** ``` -## Send email on user's last day +### Send email on user's last day Allows an email to be sent to a user's manager on their last day. You're able to customize the task name and the description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/send-email-last-day.png" alt-text="Screenshot of Workflows task: task to send email last day."::: For Microsoft Graph, the parameters for the **Send email on user last day** task ``` -## Send email to user's manager after their last day +### Send email to user's manager after their last day Allows an email containing off-boarding information to be sent to the user's manager after their last day. You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/offboard-email-manager.png" alt-text="Screenshot of Workflows task: send off-boarding email to users manager after their last day."::: |
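Each of the tasks above is identified by its `taskDefinitionId`. As a hedged sketch (not part of the change itself), and assuming the Microsoft Graph v1.0 `identityGovernance/lifecycleWorkflows/taskDefinitions` endpoint, you could look up the definition behind one of these IDs, such as the *Send email to notify manager of user move* task, before adding it to a workflow:

```azurecli-interactive
# Sketch only: look up a built-in Lifecycle Workflows task definition by its taskDefinitionId.
# Assumes the Microsoft Graph v1.0 lifecycleWorkflows/taskDefinitions endpoint.
TASK_DEFINITION_ID="aab41899-9972-422a-9d97-f626014578b7"   # Send email to notify manager of user move

az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/taskDefinitions/${TASK_DEFINITION_ID}" \
  --query "{displayName:displayName, category:category, description:description}"
```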
active-directory | Lifecycle Workflow Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md | Title: Workflow Templates and categories + Title: Lifecycle Workflows templates and categories description: Conceptual article discussing workflow templates and categories with Lifecycle Workflows. Last updated 05/31/2023 -# Lifecycle Workflows templates +# Lifecycle Workflows templates and categories +Lifecycle Workflows allows you to automate the lifecycle management process for your organization by creating workflows that contain both built-in tasks, and custom task extensions. These workflows, and the tasks within them, all fall into categories based on the Joiner-Mover-Leaver(JML) model of lifecycle management. To make this process even more efficient, Lifecycle Workflows also provide you with templates, which you can use to accelerate the set up, creation, and configuration of common lifecycle management processes. You can create workflows based on these templates as is, or you can customize them even further to match the requirements for users within your organization. In this article you get the complete list of workflow templates, common template parameters, default template parameters for specific templates, and the list of compatible tasks for each template. For full task definitions, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). -Lifecycle Workflows allows you to automate the lifecycle management process for your organization by creating workflows that contain both built-in tasks, and custom task extensions. These workflows, and the tasks within them, all fall into categories based on the Joiner-Mover-Leaver(JML) model of lifecycle management. To make this process even more efficient, Lifecycle Workflows also provide you with templates, which you can use to accelerate the setup, creation, and configuration of common lifecycle management processes. You can create workflows based on these templates as is, or you can customize them even further to match the requirements for users within your organization. In this article you get the complete list of workflow templates, common template parameters, default template parameters for specific templates, and the list of compatible tasks for each template. For full task definitions, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). 
-## Lifecycle Workflow Templates +## Lifecycle Workflows built-in templates Lifecycle Workflows currently have six built-in templates you can use or customize: The list of templates is as follows: - [Onboard pre-hire employee](lifecycle-workflow-templates.md#onboard-pre-hire-employee) - [Onboard new hire employee](lifecycle-workflow-templates.md#onboard-new-hire-employee) - [Post-Onboarding of an employee](lifecycle-workflow-templates.md#post-onboarding-of-an-employee)+- [Real-time employee change](lifecycle-workflow-templates.md#real-time-employee-change) - [Real-time employee termination](lifecycle-workflow-templates.md#real-time-employee-termination) - [Pre-Offboarding of an employee](lifecycle-workflow-templates.md#pre-offboarding-of-an-employee) - [Offboard an employee](lifecycle-workflow-templates.md#offboard-an-employee) The default specific parameters and properties for the **Onboard pre-hire employ -|parameter |description |Customizable | +|Parameter |Description |Customizable | |||| |Category | Joiner | ❌ | |Trigger Type | Trigger and Scope Based | ❌ | The **Onboard new-hire employee** template is designed to configure tasks that a The default specific parameters for the **Onboard new hire employee** template are as follows: -|parameter |description |Customizable | +|Parameter |Description |Customizable | |||| |Category | Joiner | ❌ | |Trigger Type | Trigger and Scope Based | ❌ | The **Post-Onboarding of an employee** template is designed to configure tasks t The default specific parameters for the **Post-Onboarding of an employee** template are as follows: -|parameter |description |Customizable | +|Parameter |Description |Customizable | |||| |Category | Joiner | ❌ | |Trigger Type | Trigger and Scope Based | ❌ | The default specific parameters for the **Post-Onboarding of an employee** templ |Tasks | **Add User To Group**, **Add user to selected teams** | ✔️ | +### Real-time employee change ++The **Real-time employee change** template is designed to configure tasks that are completed immediately when an employee changes roles. +++The default specific parameters for the **Real-time employee change** template are as follows: ++|Parameter |Description |Customizable | +|||| +|Category | Mover | ❌ | +|Trigger Type | On-demand | ❌ | +|Tasks | **Run a Custom Task Extension** | ✔️ | ++> [!NOTE] +> As this template is designed to run on-demand, no execution condition is present. + ### Real-time employee termination The **Real-time employee termination** template is designed to configure tasks that are completed immediately when an employee is terminated. 
The **Real-time employee termination** template is designed to configure tasks t The default specific parameters for the **Real-time employee termination** template are as follows: -|parameter |description |Customizable | +|Parameter |Description |Customizable | |||| |Category | Leaver | ❌ | |Trigger Type | On-demand | ❌ | The **Pre-Offboarding of an employee** template is designed to configure tasks t The default specific parameters for the **Pre-Offboarding of an employee** template are as follows: -|parameter |description |Customizable | +|Parameter |Description |Customizable | |||| |Category | Leaver | ❌ | |Trigger Type | Trigger and Scope Based | ❌ | The **Offboard an employee** template is designed to configure tasks that are co The default specific parameters for the **Offboard an employee** template are as follows: -|parameter |description |Customizable | +|Parameter |Description |Customizable | |||| |Category | Leaver | ❌ | |Trigger Type | Trigger and Scope Based | ❌ | The **Post-Offboarding of an employee** template is designed to configure tasks The default specific parameters for the **Post-Offboarding of an employee** template are as follows: -|parameter |description |Customizable | +|Parameter |Description |Customizable | |||| |Category | Leaver | ❌ | |Trigger Type | Trigger and Scope Based | ❌ | |
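If you want to confirm programmatically which of these templates (including the new *Real-time employee change* template) are available in your tenant, a hedged sketch could use Azure CLI against the Microsoft Graph v1.0 `lifecycleWorkflows/workflowTemplates` endpoint; the endpoint and query are assumptions, not something stated in the change above:

```azurecli-interactive
# Sketch only: list the built-in Lifecycle Workflows templates and their categories.
# Assumes the Microsoft Graph v1.0 lifecycleWorkflows/workflowTemplates endpoint.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/workflowTemplates" \
  --query "value[].{displayName:displayName, category:category}" --output table
```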
active-directory | Concept Azure Ad Connect Sync User And Contacts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/concept-azure-ad-connect-sync-user-and-contacts.md | -In this topic, we'll go through how the default configuration behaves in certain topologies. We will go through the configuration and the Synchronization Rules Editor can be used to look at the configuration. +In this topic, we go through how the default configuration behaves in certain topologies. We go through the configuration, and the Synchronization Rules Editor can be used to look at the configuration. There are a few general rules the configuration assumes: * Regardless of which order we import from the source Active Directories, the end result should always be the same. * An active account will always contribute sign-in information, including **userPrincipalName** and **sourceAnchor**.-* A disabled account will contribute userPrincipalName and sourceAnchor, unless it's a linked mailbox, if there's no active account to be found. -* An account with a linked mailbox will never be used for userPrincipalName and sourceAnchor. It is assumed that an active account will be found later. +* A disabled account contributes userPrincipalName and sourceAnchor, unless it's a linked mailbox, if there's no active account to be found. +* An account with a linked mailbox will never be used for userPrincipalName and sourceAnchor. It's assumed that an active account will be found later. * A contact object might be provisioned to Azure AD as a contact or as a user. You don't really know until all source Active Directory forests have been processed. ## Groups+> [!NOTE] +> Keep in mind that when you add a user from another forest to a group, an anchor is created in the Active Directory where the group exists, inside a specific OU. This anchor is a foreign security principal and is stored inside the OU 'ForeignSecurityPrincipals'. If you don't synchronize this OU, the users are removed from the group membership. +> +> + Important points to be aware of when synchronizing groups from Active Directory to Azure AD: * Azure AD Connect excludes built-in security groups from directory synchronization. Important points to be aware of when synchronizing groups from Active Directory * An Active Directory group whose proxyAddress attribute has values *{"X500:/0=contoso.com/ou=users/cn=testgroup", "smtp:johndoe\@contoso.com"}* will also be mail-enabled in Azure AD. ## Contacts-Having contacts representing a user in a different forest is common after a merger & acquisition where a GALSync solution is bridging two or more Exchange forests. The contact object is always joining from the connector space to the metaverse using the mail attribute. If there's already a contact object or user object with the same mail address, the objects are joined together. This is configured in the rule **In from AD – Contact Join**. There is also a rule named **In from AD – Contact Common** with an attribute flow to the metaverse attribute **sourceObjectType** with the constant **Contact**. This rule has very low precedence so if any user object is joined to the same metaverse object, then the rule **In from AD – User Common** will contribute the value User to this attribute. With this rule, this attribute will have the value Contact if no user has been joined and the value User if at least one user has been found. 
+Having contacts representing a user in a different forest is common after a merger & acquisition where a GALSync solution is bridging two or more Exchange forests. The contact object is always joining from the connector space to the metaverse using the mail attribute. If there's already a contact object or user object with the same mail address, the objects are joined together. This is configured in the rule **In from AD – Contact Join**. There is also a rule named **In from AD – Contact Common** with an attribute flow to the metaverse attribute **sourceObjectType** with the constant **Contact**. This rule has low precedence so if any user object is joined to the same metaverse object, then the rule **In from AD – User Common** will contribute the value User to this attribute. With this rule, this attribute has the value Contact if no user has been joined and the value User if at least one user has been found. For provisioning an object to Azure AD, the outbound rule **Out to AAD – Contact Join** will create a contact object if the metaverse attribute **sourceObjectType** is set to **Contact**. If this attribute is set to **User**, then the rule **Out to AAD – User Join** will create a user object instead. It is possible that an object is promoted from Contact to User when more source Active Directories are imported and synchronized. -For example, in a GALSync topology we'll find contact objects for everyone in the second forest when we import the first forest. This will stage new contact objects in the Azure AD Connector. When we later import and synchronize the second forest, we'll find the real users and join them to the existing metaverse objects. We will then delete the contact object in Azure AD and create a new user object instead. +For example, in a GALSync topology we find contact objects for everyone in the second forest when we import the first forest. This stages new contact objects in the Azure AD Connector. When we later import and synchronize the second forest, we find the real users and join them to the existing metaverse objects. We will then delete the contact object in Azure AD and create a new user object instead. -If you have a topology where users are represented as contacts, make sure you select to match users on the mail attribute in the installation guide. If you select another option, then you will have an order-dependent configuration. Contact objects will always join on the mail attribute, but user objects will only join on the mail attribute if this option was selected in the installation guide. You could then end up with two different objects in the metaverse with the same mail attribute if the contact object was imported before the user object. During export to Azure AD, an error will be thrown. This behavior is by design and would indicate bad data or that the topology was not correctly identified during the installation. +If you have a topology where users are represented as contacts, make sure you select to match users on the mail attribute in the installation guide. If you select another option, then you have an order-dependent configuration. Contact objects will always join on the mail attribute, but user objects will only join on the mail attribute if this option was selected in the installation guide. You could then end up with two different objects in the metaverse with the same mail attribute if the contact object was imported before the user object. During export to Azure AD, an error is shown. 
This behavior is by design and would indicate bad data or that the topology was not correctly identified during the installation. ## Disabled accounts Disabled accounts are synchronized to Azure AD as well. Disabled accounts commonly represent resources in Exchange, for example conference rooms. The exception is users with a linked mailbox; as previously mentioned, these will never provision an account to Azure AD. -The assumption is that if a disabled user account is found, then we won't find another active account later and the object is provisioned to Azure AD with the userPrincipalName and sourceAnchor found. In case another active account will join to the same metaverse object, then its userPrincipalName and sourceAnchor will be used. +The assumption is that if a disabled user account is found, then we won't find another active account later and the object is provisioned to Azure AD with the userPrincipalName and sourceAnchor found. If another active account later joins the same metaverse object, then its userPrincipalName and sourceAnchor will be used. ## Changing sourceAnchor When an object has been exported to Azure AD, it's not allowed to change the sourceAnchor anymore. When the object has been exported, the metaverse attribute **cloudSourceAnchor** is set with the **sourceAnchor** value accepted by Azure AD. If **sourceAnchor** is changed and doesn't match **cloudSourceAnchor**, the rule **Out to AAD – User Join** will throw the error **sourceAnchor attribute has changed**. In this case, the configuration or data must be corrected so the same sourceAnchor is present in the metaverse again before the object can be synchronized again. |
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md | We recommend that you harden your Azure AD Connect server to decrease the securi ### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). * If you use a different installation of SQL Server, these requirements apply:- * Azure AD Connect support all mainstream supported SQL Server versions up to SQL Server 2019. Please refer to the [SQL Server lifecycle article](/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. SQL Server 2012 is no longer supported. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance. + * Azure AD Connect supports all mainstream supported SQL Server versions up to SQL Server 2019 running on Windows. Refer to the [SQL Server lifecycle article](/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. SQL Server 2012 is no longer supported. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance. * You must use a case-insensitive SQL collation. These collations are identified with a \_CI_ in their name. Using a case-sensitive collation identified by \_CS_ in their name *isn't supported*. * You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*. |
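If you point Azure AD Connect at your own SQL Server instance, it can help to confirm that the server collation is case-insensitive (`_CI_`) before you run the installation wizard. A minimal sketch, assuming the `sqlcmd` utility is installed and `SQLSERVER01` is a placeholder for your instance name:

```bash
# Sketch only: check that the SQL Server collation is case-insensitive (_CI_ in the name).
# SQLSERVER01 is a placeholder instance name; -E uses the current Windows credentials.
sqlcmd -S SQLSERVER01 -E -Q "SELECT SERVERPROPERTY('Collation') AS ServerCollation;"
```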
active-directory | Delete Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md | -When you delete and enterprise application, it will be held in a suspended state in the recycle bin for 30 days. During the 30 days, you can [Restore the application](restore-application.md). Deleted items are automatically hard deleted after the 30-day period. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml). +When you delete an enterprise application, it's held in a suspended state in the recycle bin for 30 days. During the 30 days, you can [Restore the application](restore-application.md). Deleted items are automatically hard deleted after the 30-day period. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml). ## Prerequisites To delete an enterprise application, you need: :::zone pivot="portal" 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites.-1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to delete. For example, **Azure AD SAML Toolkit 1**. +1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to delete. In this article, we use the **Azure AD SAML Toolkit 1** as an example. 1. In the **Manage** section of the left menu, select **Properties**. 1. At the top of the **Properties** pane, select **Delete**, and then select **Yes** to confirm you want to delete the application from your Azure AD tenant. Delete an enterprise application using [Graph Explorer](https://developer.micros -1. Record the ID of the enterprise app you want to delete. -1. Delete the enterprise application. +2. Record the ID of the enterprise app you want to delete. +3. Delete the enterprise application. # [HTTP](#tab/http) ```http |
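If you prefer the command line to Graph Explorer, the same Microsoft Graph call can be made with Azure CLI. This is only a sketch of the soft delete described above; the GUID is a placeholder for the service principal (enterprise application) object ID you recorded:

```azurecli-interactive
# Sketch only: soft delete an enterprise application (service principal) by its object ID.
# The GUID below is a placeholder; replace it with the ID you recorded.
az rest --method DELETE \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/00000000-0000-0000-0000-000000000000"
```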
active-directory | Restore Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md | zone_pivot_groups: enterprise-apps-minus-portal # Restore an enterprise application in Azure AD -In this article, you'll learn how to restore a soft deleted enterprise application in your Azure Active Directory (Azure AD) tenant. Soft deleted enterprise applications can be restored from the recycle bin within the first 30 days after their deletion. After the 30-day window, the enterprise application is permanently deleted and can't be restored. +In this article, you learn how to restore a soft deleted enterprise application in your Azure Active Directory (Azure AD) tenant. Soft deleted enterprise applications can be restored from the recycle bin within the first 30 days after their deletion. After the 30-day window, the enterprise application is permanently deleted and can't be restored. >[!IMPORTANT]->If you deleted an [application registration](../develop/howto-remove-app.md) in its home tenant through app registrations in the Azure portal, the enterprise application, which is its corresponding service principal also got deleted. If you restore the deleted application registration through the Azure portal, its corresponding service principal, won't be restored. Instead, this action will create a new service principal. Therefore, if you had configurations on the previous enterprise application, you can't restore them through the Azure portal. Use the workaround provided in this article to recover the deleted service principal and its previous configurations. +>If you deleted an [application registration](../develop/howto-remove-app.md) in its home tenant through app registrations in the Azure portal, the enterprise application, which is its corresponding service principal, was also deleted. If you restore the deleted application registration through the Azure portal, its corresponding service principal is also restored. You'll therefore be able to recover the service principal's previous configurations, except its previous policies such as conditional access policies, which aren't restored. [!INCLUDE [portal updates](../includes/portal-update.md)] To restore an enterprise application, you need: - A [soft deleted enterprise application](delete-application-portal.md) in your tenant. ## View restorable enterprise applications -To recover your enterprise application with its previous configurations, first delete the enterprise application that was restored through the Azure portal, then take the following steps to recover the soft deleted enterprise application. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml). +Take the following steps to recover a recently deleted enterprise application. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml). :::zone pivot="aad-powershell" |
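As a hedged command-line sketch of the same recovery flow (not part of the change above), you could list the soft deleted service principals still in the recycle bin and then restore one, assuming the Microsoft Graph `directory/deletedItems` endpoints and using a placeholder object ID:

```azurecli-interactive
# Sketch only: list soft deleted service principals still in the 30-day recycle bin,
# then restore one by its object ID (the GUID below is a placeholder).
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal" \
  --query "value[].{id:id, displayName:displayName}" --output table

az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/00000000-0000-0000-0000-000000000000/restore"
```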
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | Does cross-tenant synchronization support deprovisioning users? Does cross-tenant synchronization support restoring users? - If the user in the source tenant is restored, reassigned to the app, or meets the scoping condition again within 30 days of soft deletion, the user is restored in the target tenant.-- IT admins can also manually [restore](/azure/active-directory/fundamentals/active-directory-users-restore-../fundamentals/active-directory-users-restore.md) the user directly in the target tenant. +- IT admins can also manually [restore](/azure/active-directory/fundamentals/active-directory-users-restore) the user directly in the target tenant. How can I deprovision all the users that are currently in scope of cross-tenant synchronization? |
active-directory | Azure Pim Resource Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md | Title: View audit report for Azure resource roles in Privileged Identity Managem description: View activity and audit history for Azure resource roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Concept Pim For Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md | Title: Privileged Identity Management (PIM) for Groups description: How to manage Azure AD Privileged Identity Management (PIM) for Groups. documentationcenter: ''-+ ms.assetid: |
active-directory | Groups Activate Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md | Title: Activate your group membership or ownership in Privileged Identity Manage description: Learn how to activate your group membership or ownership in Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 6/7/2023-+ |
active-directory | Groups Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md | Title: Approve activation requests for group members and owners description: Learn how to approve activation requests for group members and owners in Azure AD Privileged Identity Management (PIM). -+ na Last updated 6/7/2023-+ |
active-directory | Groups Assign Member Owner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md | Title: Assign eligibility for a group in Privileged Identity Management description: Learn how to assign eligibility for a group in Privileged Identity Management. documentationcenter: ''-+ na Last updated 6/7/2023-+ |
active-directory | Groups Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md | Title: Audit activity history for group assignments in Privileged Identity Manag description: View activity and audit activity history for group assignments in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Groups Discover Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md | Title: Bring groups into Privileged Identity Management description: Learn how to bring groups into Privileged Identity Management. documentationcenter: ''-+ na Last updated 6/7/2023-+ |
active-directory | Groups Renew Extend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md | Title: Extend or renew PIM for groups assignments description: Learn how to extend or renew PIM for groups assignments. documentationcenter: ''-+ na Last updated 6/7/2023-+ |
active-directory | Groups Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md | Title: Configure PIM for Groups settings description: Learn how to configure PIM for Groups settings. documentationcenter: ''-+ na Last updated 6/7/2023-+ |
active-directory | Pim Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md | Title: API concepts in Privileged Identity management description: Information for understanding the APIs in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-approval-workflow.md | Title: Approve or deny requests for Azure AD roles in PIM description: Learn how to approve or deny requests for Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim Complete Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-complete-roles-and-resource-roles-review.md | Title: Complete an access review of Azure resource and Azure AD roles in PIM description: Learn how to complete an access review of Azure resource and Azure AD roles Privileged Identity Management in Azure Active Directory. documentationcenter: ''-+ editor: '' na Last updated 5/11/2023-+ |
active-directory | Pim Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-configure.md | Title: What is Privileged Identity Management? description: Provides an overview of Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim Create Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-roles-and-resource-roles-review.md | Title: Create an access review of Azure resource and Azure AD roles in PIM description: Learn how to create an access review of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim Deployment Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md | Title: Plan a Privileged Identity Management deployment description: Learn how to deploy Privileged Identity Management (PIM) in your Azure AD organization. documentationcenter: ''-+ editor: '' |
active-directory | Pim Email Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-email-notifications.md | Title: Email notifications in Privileged Identity Management (PIM) description: Describes email notifications in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 10/07/2021-+ |
active-directory | Pim Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-getting-started.md | Title: Start using PIM description: Learn how to enable and get started using Azure AD Privileged Identity Management (PIM) in the Azure portal. documentationcenter: ''-+ editor: '' |
active-directory | Pim How To Activate Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md | Title: Activate Azure AD roles in PIM description: Learn how to activate Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim How To Add Role To User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md | Title: Assign Azure AD roles in PIM description: Learn how to assign Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim How To Change Default Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md | Title: Configure Azure AD role settings in PIM description: Learn how to configure Azure AD role settings in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim How To Configure Security Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md | Title: Security alerts for Azure AD roles in PIM description: Configure security alerts for Azure AD roles Privileged Identity Management in Azure Active Directory. documentationcenter: ''-+ editor: '' |
active-directory | Pim How To Renew Extend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-renew-extend.md | Title: Renew Azure AD role assignments in PIM description: Learn how to extend or renew Azure Active Directory role assignments in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' na Last updated 06/24/2022-+ |
active-directory | Pim How To Use Audit Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-use-audit-log.md | Title: View audit log report for Azure AD roles in Azure AD PIM description: Learn how to view the audit log history for Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim Perform Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-perform-roles-and-resource-roles-review.md | Title: Perform an access review of Azure resource and Azure AD roles in PIM description: Learn how to review access of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' na Last updated 5/11/2023-+ |
active-directory | Pim Resource Roles Activate Your Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md | Title: Activate Azure resource roles in PIM description: Learn how to activate your Azure resource roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 4/14/2023-+ |
active-directory | Pim Resource Roles Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md | Title: Approve requests for Azure resource roles in PIM description: Learn how to approve or deny requests for Azure resource roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 06/24/2022-+ |
active-directory | Pim Resource Roles Assign Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md | Title: Assign Azure resource roles in Privileged Identity Management description: Learn how to assign Azure resource roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 07/29/2022-+ |
active-directory | Pim Resource Roles Configure Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md | Title: Configure security alerts for Azure roles in Privileged Identity Manageme description: Learn how to configure security alerts for Azure resource roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 3/29/2023-+ |
active-directory | Pim Resource Roles Configure Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md | Title: Configure Azure resource role settings in PIM description: Learn how to configure Azure resource role settings in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 6/7/2023-+ |
active-directory | Pim Resource Roles Custom Role Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-custom-role-policy.md | Title: Use Azure custom roles in PIM description: Learn how to use Azure custom roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 06/27/2022-+ |
active-directory | Pim Resource Roles Discover Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-discover-resources.md | Title: Discover Azure resources to manage in PIM description: Learn how to discover Azure resources to manage in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ na Last updated 06/27/2022-+ |
active-directory | Pim Resource Roles Overview Dashboards | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-overview-dashboards.md | Title: Resource dashboards for access reviews in PIM description: Describes how to use a resource dashboard to perform an access review in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: markwahl-msft na Last updated 06/27/2022-+ |
active-directory | Pim Resource Roles Renew Extend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-renew-extend.md | Title: Renew Azure resource role assignments in PIM description: Learn how to extend or renew Azure resource role assignments in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' na Last updated 10/19/2021-+ |
active-directory | Pim Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md | Title: Roles you cannot manage in Privileged Identity Management description: Describes the roles you cannot manage in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Pim Security Wizard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-security-wizard.md | Title: Azure AD roles Discovery and insights (preview) in Privileged Identity Ma description: Discovery and insights (formerly Security Wizard) help you convert permanent Azure AD role assignments to just-in-time assignments with Privileged Identity Management. documentationcenter: ''-+ editor: '' |
active-directory | Pim Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-troubleshoot.md | Title: Troubleshoot resource access denied in Privileged Identity Management description: Learn how to troubleshoot system errors with roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' |
active-directory | Subscription Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/subscription-requirements.md | Title: License requirements to use Privileged Identity Management description: Describes the licensing requirements to use Azure AD Privileged Identity Management (PIM). documentationcenter: ''-+ editor: '' ms.assetid: 34367721-8b42-4fab-a443-a2e55cdbf33d na Last updated 07/06/2022-+ |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | User<br/>(no admin role, but member or owner of a [role-assignable group](groups User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | | | | | :heavy_check_mark: | :heavy_check_mark: User Admin | | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Usage Summary Reports Reader | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:-All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: +All custom roles | | | | | :heavy_check_mark: | :heavy_check_mark: > [!IMPORTANT] > The [Partner Tier2 Support](#partner-tier2-support) role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). The [Partner Tier1 Support](#partner-tier1-support) role can reset passwords and invalidate refresh tokens for only non-administrators. These roles should not be used because they are deprecated. User<br/>(no admin role, but member or owner of a [role-assignable group](groups User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | | | :heavy_check_mark: | :heavy_check_mark: User Admin | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:-All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: +All custom roles | | | :heavy_check_mark: | :heavy_check_mark: ## Next steps |
aks | Aks Planned Maintenance Weekly Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-planned-maintenance-weekly-releases.md | Title: Use Planned Maintenance for your Azure Kubernetes Service (AKS) cluster weekly releases (preview) -description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS) for cluster weekly releases +description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS) for cluster weekly releases. Previously updated : 09/16/2021 Last updated : 06/27/2023 - # Use Planned Maintenance pre-created configurations to schedule Azure Kubernetes Service (AKS) weekly releases (preview) You can also be schedule with more fine-grained control using Planned Maintenanc ## Before you begin -This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. +This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [Azure portal][aks-quickstart-portal]. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] This article assumes that you have an existing AKS cluster. If you need an AKS c When you use Planned Maintenance, the following restrictions apply: - AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.-- Currently, performing maintenance operations are considered *best-effort only* and are not guaranteed to occur within a specified window.-- Updates cannot be blocked for more than seven days.+- Currently, performing maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window. +- Updates can't be blocked for more than seven days. -## Available pre-created public maintenance configurations for you to pick +## Available pre-created public maintenance configurations There are two general kinds of pre-created public maintenance configurations: -- For Weekday (Monday, Tuesday, Wednesday, Thursday), from 10 pm to 6 am next morning.-- For Weekend (Friday, Saturday, Sunday), from 10 pm to 6 am next morning.+- **For weekdays**: (Monday, Tuesday, Wednesday, Thursday), from 10 pm to 6 am the next morning. +- **For weekends**: (Friday, Saturday, Sunday), from 10 pm to 6 am the next morning. -For a list of pre-created public maintenance configurations on the weekday schedule, see below. For weekend schedules, replace `weekday` with `weekend`. +The following pre-created public maintenance configurations are available on the weekday and weekend schedules. For weekend schedules, replace `weekday` with `weekend`. |Configuration name| Time zone| |--|--| For a list of pre-created public maintenance configurations on the weekday sched ## Assign a public maintenance configuration to an AKS Cluster -Find the public maintenance configuration ID by name: -```azurecli-interactive -az maintenance public-configuration show --resource-name "aks-mrp-cfg-weekday_utc8" -``` -This call may prompt you to install the `maintenance` extension. Once done, you can proceed: --The output should look like the below example. 
Be sure to take note of the `id` field - -```json -{ -"duration": "08:00", -"expirationDateTime": null, -"extensionProperties": { -"maintenanceSubScope": "AKS" -}, -"id": "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8", -"installPatches": null, -"location": "westus2", -"maintenanceScope": "Resource", -"name": "aks-mrp-cfg-weekday_utc8", -"namespace": "Microsoft.Maintenance", -"recurEvery": "Week Monday,Tuesday,Wednesday,Thursday", -"startDateTime": "2022-08-01 22:00", -"systemData": null, -"tags": {}, -"timeZone": "China Standard Time", -"type": "Microsoft.Maintenance/publicMaintenanceConfigurations", -"visibility": "Public" -} -``` --Next, assign the public maintenance configuration to your AKS cluster using the ID: -```azurecli-interactive -az maintenance assignment create --maintenance-configuration-id "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8" --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters" -``` +1. Find the public maintenance configuration ID using the [`az maintenance public-configuration show`][az-maintenance-public-configuration-show] command. ++ ```azurecli-interactive + az maintenance public-configuration show --resource-name "aks-mrp-cfg-weekday_utc8" + ``` ++ > [!NOTE] + > You may be prompted to install the `maintenance` extension. ++ Your output should look like the following example output. Make sure you take note of the `id` field. ++ ```json + { + "duration": "08:00", + "expirationDateTime": null, + "extensionProperties": { + "maintenanceSubScope": "AKS" + }, + "id": "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8", + "installPatches": null, + "location": "westus2", + "maintenanceScope": "Resource", + "name": "aks-mrp-cfg-weekday_utc8", + "namespace": "Microsoft.Maintenance", + "recurEvery": "Week Monday,Tuesday,Wednesday,Thursday", + "startDateTime": "2022-08-01 22:00", + "systemData": null, + "tags": {}, + "timeZone": "China Standard Time", + "type": "Microsoft.Maintenance/publicMaintenanceConfigurations", + "visibility": "Public" + } + ``` ++2. Assign the public maintenance configuration to your AKS cluster using the [`az maintenance assignment create`][az-maintenance-assignment-create] command and specify the ID from the previous step for the `--maintenance-configuration-id` parameter. 
++ ```azurecli-interactive + az maintenance assignment create --maintenance-configuration-id "/subscriptions/0159df5c-b605-45a9-9876-36e17d5286e0/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/aks-mrp-cfg-weekday_utc8" --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters" + ``` + ## List all maintenance windows in an existing cluster-```azurecli-interactive -az maintenance assignment list --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters" -``` -## Delete a public maintenance configuration of an AKS cluster -```azurecli-interactive -az maintenance assignment delete --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters" -``` +- List all maintenance windows in an existing cluster using the [`az maintenance assignment list`][az-maintenance-assignment-list] command. ++ ```azurecli-interactive + az maintenance assignment list --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters" + ``` ++## Remove a public maintenance configuration from an AKS cluster ++- Remove a public maintenance configuration from a cluster using the [`az maintenance assignment delete`][az-maintenance-assignment-delete] command. ++ ```azurecli-interactive + az maintenance assignment delete --name assignmentName --provider-name "Microsoft.ContainerService" --resource-group myResourceGroup --resource-name myAKSCluster --resource-type "managedClusters" + ``` <!-- LINKS - Internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md-[aks-support-policies]: support-policies.md -[aks-faq]: faq.md -[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update -[az-feature-list]: /cli/azure/feature#az_feature_list -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli -[az-provider-register]: /cli/azure/provider#az_provider_register -[aks-upgrade]: upgrade-cluster.md [releases]:release-tracker.md-[planned-maintenance]: ./planned-maintenance.md +[planned-maintenance]: ./planned-maintenance.md +[az-maintenance-public-configuration-show]: /cli/azure/maintenance/public-configuration#az-maintenance-public-configuration-show +[az-maintenance-assignment-create]: /cli/azure/maintenance/assignment#az-maintenance-assignment-create +[az-maintenance-assignment-list]: /cli/azure/maintenance/assignment#az-maintenance-assignment-list +[az-maintenance-assignment-delete]: /cli/azure/maintenance/assignment#az-maintenance-assignment-delete |
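Before assigning one of the pre-created configurations, you might want to confirm which configuration names and time zones exist in your subscription. A small sketch using the same `maintenance` CLI extension; the JMESPath filter on `maintenanceSubScope` is an assumption based on the JSON output shown above:

```azurecli-interactive
# Sketch only: list the pre-created public maintenance configurations scoped to AKS,
# showing their names, recurrence, and time zones.
az maintenance public-configuration list \
  --query "[?extensionProperties.maintenanceSubScope=='AKS'].{name:name, recurEvery:recurEvery, timeZone:timeZone}" \
  --output table
```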
aks | Configure Azure Cni | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md | The IP address plan for an AKS cluster consists of a virtual network, at least o | Subnet | Must be large enough to accommodate the nodes, pods, and all Kubernetes and Azure resources that might be provisioned in your cluster. For example, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. The subnet size should also take into account upgrade operations or future scaling needs.<p />To calculate the *minimum* subnet size including an additional node for upgrade operations: `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)`<p/>Example for a 50 node cluster: `(51) + (51 * 30 (default)) = 1,581` (/21 or larger)<p/>Example for a 50 node cluster that also includes provision to scale up an additional 10 nodes: `(61) + (61 * 30 (default)) = 1,891` (/21 or larger)<p>If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to *30*. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see [how to configure the maximum number of pods per node](#configure-maximumnew-clusters) to set this value when you deploy your cluster. | | Kubernetes service address range | This range shouldn't be used by any network element on or connected to this virtual network. Service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. | | Kubernetes DNS service IP address | IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address. |-| Docker bridge address | The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge isn't used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. it's required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs. You must pick an address space that doesn't collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. Default of 172.17.0.1/16. You can reuse this range across different AKS clusters. | ## Maximum pods per node Although it's technically possible to specify a service address range within the **Kubernetes DNS service IP address**: The IP address for the cluster's DNS service. This address must be within the *Kubernetes service address range*. Don't use the first IP address in your address range. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address. -**Docker Bridge address**: The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge isn't used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. 
it's required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically which could conflict with other CIDRs. You must pick an address space that doesn't collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. - ## Configure networking - CLI When you create an AKS cluster with the Azure CLI, you can also configure Azure CNI networking. Use the following commands to create a new AKS cluster with Azure CNI networking enabled. az aks create \ --name myAKSCluster \ --network-plugin azure \ --vnet-subnet-id <subnet-id> \- --docker-bridge-address 172.17.0.1/16 \ --dns-service-ip 10.2.0.10 \ --service-cidr 10.2.0.0/24 \ --generate-ssh-keys az aks create \ ## Configure networking - portal +> [!NOTE] +> The Docker Bridge address field is no longer in use. + The following screenshot from the Azure portal shows an example of configuring these settings during AKS cluster creation: :::image type="content" source="../aks/media/networking-overview/portal-01-networking-advanced.png" alt-text="Screenshot from the Azure portal showing an example of configuring these settings during AKS cluster creation."::: |
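The subnet-sizing formula above maps directly onto the `--max-pods` value you pass at cluster creation. The following is a minimal sketch, assuming the placeholder resource group, cluster name, and `<subnet-id>` used elsewhere in this article; it creates an Azure CNI cluster with the default 30 pods per node, so a 50-node pool needs `(51) + (51 * 30) = 1,581` IPs (a /21 subnet or larger).

```azurecli-interactive
# Azure CNI cluster with an explicit maximum of 30 pods per node (the default).
# Size the delegated subnet using (nodes + 1) + ((nodes + 1) * max pods per node).
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <subnet-id> \
    --max-pods 30 \
    --dns-service-ip 10.2.0.10 \
    --service-cidr 10.2.0.0/24 \
    --generate-ssh-keys
```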
aks | Configure Kubenet Dual Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md | -AKS clusters can now be deployed in a dual-stack (using both IPv4 and IPv6 addresses) mode when using [kubenet][kubenet] networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6). +You can deploy your AKS clusters in a dual-stack mode when using [kubenet][kubenet] networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6). This article shows you how to use dual-stack networking with an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts]. ## Limitations-* Azure Route Tables have a hard limit of 400 routes per table. Because each node in a dual-stack cluster requires two routes, one for each IP address family, dual-stack clusters are limited to 200 nodes. ++* Azure route tables have a **hard limit of 400 routes per table**. + * Each node in a dual-stack cluster requires two routes, one for each IP address family, so **dual-stack clusters are limited to 200 nodes**. * In Azure Linux node pools, service objects are only supported with `externalTrafficPolicy: Local`.-* Dual-stack networking is required for the Azure Virtual Network and the pod CIDR - single stack IPv6-only isn't supported for node or pod IP addresses. Services can be provisioned on IPv4 or IPv6. -* Features **not supported on dual-stack kubenet** include: - * [Azure network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy) - * [Calico network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy) - * [NAT Gateway][nat-gateway] - * [Virtual nodes add-on](virtual-nodes.md#network-requirements) - * [Windows node pools](./windows-faq.md) +* Dual-stack networking is required for the Azure virtual network and the pod CIDR. + * Single stack IPv6-only isn't supported for node or pod IP addresses. Services can be provisioned on IPv4 or IPv6. +* The following features are **not supported on dual-stack kubenet**: + * [Azure network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy) + * [Calico network policies](use-network-policies.md#create-an-aks-cluster-and-enable-network-policy) + * [NAT Gateway][nat-gateway] + * [Virtual nodes add-on](virtual-nodes.md#network-requirements) + * [Windows node pools](./windows-faq.md) ## Prerequisites This article shows you how to use dual-stack networking with an AKS cluster. 
For ## Overview of dual-stack networking in Kubernetes -Kubernetes v1.23 brings stable upstream support for [IPv4/IPv6 dual-stack][kubernetes-dual-stack] clusters, including pod and service networking. Nodes and pods are always assigned both an IPv4 and an IPv6 address, while services can be single-stack on either address family or dual-stack. +Kubernetes v1.23 brings stable upstream support for [IPv4/IPv6 dual-stack][kubernetes-dual-stack] clusters, including pod and service networking. Nodes and pods are always assigned both an IPv4 and an IPv6 address, while services can be dual-stack or single-stack on either address family. AKS configures the required supporting services for dual-stack networking. This configuration includes: -* Dual-stack virtual network configuration (if managed Virtual Network is used) -* IPv4 and IPv6 node and pod addresses -* Outbound rules for both IPv4 and IPv6 traffic -* Load balancer setup for IPv4 and IPv6 services +* If using a managed virtual network, a dual-stack virtual network configuration. +* IPv4 and IPv6 node and pod addresses. +* Outbound rules for both IPv4 and IPv6 traffic. +* Load balancer setup for IPv4 and IPv6 services. ## Deploying a dual-stack cluster -Three new attributes are provided to support dual-stack clusters: -* `--ip-families` - takes a comma-separated list of IP families to enable on the cluster. - * Currently only `ipv4` or `ipv4,ipv6` are supported. -* `--pod-cidrs` - takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from. +The following attributes are provided to support dual-stack clusters: ++* **`--ip-families`**: Takes a comma-separated list of IP families to enable on the cluster. + * Only `ipv4` or `ipv4,ipv6` are supported. +* **`--pod-cidrs`**: Takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from. * The count and order of ranges in this list must match the value provided to `--ip-families`.- * If no values are supplied, the default values of `10.244.0.0/16,fd12:3456:789a::/64` will be used. -* `--service-cidrs` - takes a comma-separated list of CIDR notation IP ranges to assign service IPs from. + * If no values are supplied, the default value `10.244.0.0/16,fd12:3456:789a::/64` is used. +* **`--service-cidrs`**: Takes a comma-separated list of CIDR notation IP ranges to assign service IPs from. * The count and order of ranges in this list must match the value provided to `--ip-families`.- * If no values are supplied, the default values of `10.0.0.0/16,fd12:3456:789a:1::/108` will be used. + * If no values are supplied, the default value `10.0.0.0/16,fd12:3456:789a:1::/108` is used. * The IPv6 subnet assigned to `--service-cidrs` can be no larger than a /108. -### Deploy the cluster +## Deploy a dual-stack AKS cluster # [Azure CLI](#tab/azure-cli) -Deploying a dual-stack cluster requires passing the `--ip-families` parameter with the parameter value of `ipv4,ipv6` to indicate that a dual-stack cluster should be created. +1. Create an Azure resource group for the cluster using the [`az group create`][az-group-create] command. -1. First, create a resource group to create the cluster in: ```azurecli-interactive- az group create -l <Region> -n <ResourceGroupName> + az group create -l <region> -n <resourceGroupName> ``` -1. Then create the cluster itself: +2. Create a dual-stack AKS cluster using the [`az aks create`][az-aks-create] command with the `--ip-families` parameter set to `ipv4,ipv6`. 
+ ```azurecli-interactive- az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> --ip-families ipv4,ipv6 + az aks create -l <region> -g <resourceGroupName> -n <clusterName> --ip-families ipv4,ipv6 + ``` ++3. Once the cluster is created, get the cluster admin credentials using the [`az aks get-credentials`][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials -g <resourceGroupName> -n <clusterName> ``` # [Azure Resource Manager](#tab/azure-resource-manager) -When using an Azure Resource Manager template to deploy, pass `["IPv4", "IPv6"]` to the `ipFamilies` parameter to the `networkProfile` object. See the [Azure Resource Manager template documentation][deploy-arm-template] for help with deploying this template, if needed. --```json -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "clusterName": { - "type": "string", - "defaultValue": "aksdualstack" - }, - "location": { - "type": "string", - "defaultValue": "[resourceGroup().location]" - }, - "kubernetesVersion": { - "type": "string", - "defaultValue": "1.22.2" - }, - "nodeCount": { - "type": "int", - "defaultValue": 3 - }, - "nodeSize": { - "type": "string", - "defaultValue": "Standard_B2ms" - } - }, - "resources": [ +1. Create the ARM template and pass `["IPv4", "IPv6"]` to the `ipFamilies` parameter to the `networkProfile` object. ++ ```json {- "type": "Microsoft.ContainerService/managedClusters", - "apiVersion": "2021-10-01", - "name": "[parameters('clusterName')]", - "location": "[parameters('location')]", - "identity": { - "type": "SystemAssigned" + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "clusterName": { + "type": "string", + "defaultValue": "aksdualstack" + }, + "location": { + "type": "string", + "defaultValue": "[resourceGroup().location]" + }, + "kubernetesVersion": { + "type": "string", + "defaultValue": "1.22.2" + }, + "nodeCount": { + "type": "int", + "defaultValue": 3 + }, + "nodeSize": { + "type": "string", + "defaultValue": "Standard_B2ms" + } },- "properties": { - "agentPoolProfiles": [ - { - "name": "nodepool1", - "count": "[parameters('nodeCount')]", - "mode": "System", - "vmSize": "[parameters('nodeSize')]" + "resources": [ + { + "type": "Microsoft.ContainerService/managedClusters", + "apiVersion": "2021-10-01", + "name": "[parameters('clusterName')]", + "location": "[parameters('location')]", + "identity": { + "type": "SystemAssigned" + }, + "properties": { + "agentPoolProfiles": [ + { + "name": "nodepool1", + "count": "[parameters('nodeCount')]", + "mode": "System", + "vmSize": "[parameters('nodeSize')]" + } + ], + "dnsPrefix": "[parameters('clusterName')]", + "kubernetesVersion": "[parameters('kubernetesVersion')]", + "networkProfile": { + "ipFamilies": [ + "IPv4", + "IPv6" + ] + } }- ], - "dnsPrefix": "[parameters('clusterName')]", - "kubernetesVersion": "[parameters('kubernetesVersion')]", - "networkProfile": { - "ipFamilies": [ - "IPv4", - "IPv6" - ] }- } + ] }- ] -} -``` + ``` ++2. Once the cluster is created, get the cluster admin credentials using the [`az aks get-credentials`][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials -g <resourceGroupName> -n <clusterName> + ``` ++> [!NOTE] +> For more information on deploying ARM templates, see the [Azure Resource Manager documentation][deploy-arm-template]. 
# [Bicep](#tab/bicep) -When using a Bicep template to deploy, pass `["IPv4", "IPv6"]` to the `ipFamilies` parameter to the `networkProfile` object. See the [Bicep template documentation][deploy-bicep-template] for help with deploying this template, if needed. --```bicep -param clusterName string = 'aksdualstack' -param location string = resourceGroup().location -param kubernetesVersion string = '1.22.2' -param nodeCount int = 3 -param nodeSize string = 'Standard_B2ms' --resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-10-01' = { - name: clusterName - location: location - identity: { - type: 'SystemAssigned' - } - properties: { - agentPoolProfiles: [ - { - name: 'nodepool1' - count: nodeCount - mode: 'System' - vmSize: nodeSize +1. Create the Bicep template and pass `["IPv4", "IPv6"]` to the `ipFamilies` parameter to the `networkProfile` object. ++ ```bicep + param clusterName string = 'aksdualstack' + param location string = resourceGroup().location + param kubernetesVersion string = '1.22.2' + param nodeCount int = 3 + param nodeSize string = 'Standard_B2ms' ++ resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-10-01' = { + name: clusterName + location: location + identity: { + type: 'SystemAssigned' + } + properties: { + agentPoolProfiles: [ + { + name: 'nodepool1' + count: nodeCount + mode: 'System' + vmSize: nodeSize + } + ] + dnsPrefix: clusterName + kubernetesVersion: kubernetesVersion + networkProfile: { + ipFamilies: [ + 'IPv4' + 'IPv6' + ] + } }- ] - dnsPrefix: clusterName - kubernetesVersion: kubernetesVersion - networkProfile: { - ipFamilies: [ - 'IPv4' - 'IPv6' - ] }- } -} -``` + ``` -+2. Once the cluster is created, get the cluster admin credentials using the [`az aks get-credentials`][az-aks-get-credentials] command. -Finally, after the cluster has been created, get the admin credentials: + ```azurecli-interactive + az aks get-credentials -g <resourceGroupName> -n <clusterName> + ``` -```azurecli-interactive -az aks get-credentials -g <ResourceGroupName> -n <ClusterName> -a -``` +> [!NOTE] +> For more information on deploying Bicep templates, see the [Bicep template documentation][deploy-bicep-template]. -### Inspect the nodes to see both IP families + -Once the cluster is provisioned, confirm that the nodes are provisioned with dual-stack networking: +## Inspect the nodes to see both IP families -```bash-interactive -kubectl get nodes -o=custom-columns="NAME:.metadata.name,ADDRESSES:.status.addresses[?(@.type=='InternalIP')].address,PODCIDRS:.spec.podCIDRs[*]" -``` +* Once the cluster is provisioned, confirm the nodes are provisioned with dual-stack networking using the `kubectl get nodes` command. -The output from the `kubectl get nodes` command will show that the nodes have addresses and pod IP assignment space from both IPv4 and IPv6. + ```bash-interactive + kubectl get nodes -o=custom-columns="NAME:.metadata.name,ADDRESSES:.status.addresses[?(@.type=='InternalIP')].address,PODCIDRS:.spec.podCIDRs[*]" + ``` -```output -NAME ADDRESSES PODCIDRS -aks-nodepool1-14508455-vmss000000 10.240.0.4,2001:1234:5678:9abc::4 10.244.0.0/24,fd12:3456:789a::/80 -aks-nodepool1-14508455-vmss000001 10.240.0.5,2001:1234:5678:9abc::5 10.244.1.0/24,fd12:3456:789a:0:1::/80 -aks-nodepool1-14508455-vmss000002 10.240.0.6,2001:1234:5678:9abc::6 10.244.2.0/24,fd12:3456:789a:0:2::/80 -``` + The output from the `kubectl get nodes` command shows the nodes have addresses and pod IP assignment space from both IPv4 and IPv6. 
++ ```output + NAME ADDRESSES PODCIDRS + aks-nodepool1-14508455-vmss000000 10.240.0.4,2001:1234:5678:9abc::4 10.244.0.0/24,fd12:3456:789a::/80 + aks-nodepool1-14508455-vmss000001 10.240.0.5,2001:1234:5678:9abc::5 10.244.1.0/24,fd12:3456:789a:0:1::/80 + aks-nodepool1-14508455-vmss000002 10.240.0.6,2001:1234:5678:9abc::6 10.244.2.0/24,fd12:3456:789a:0:2::/80 + ``` ## Create an example workload -### Deploy an nginx web server +Once the cluster has been created, you can deploy your workloads. This article walks you through an example workload deployment of an NGINX web server. ++### Deploy an NGINX web server -Once the cluster has been created, workloads can be deployed as usual. A simple example webserver can be created using the following command: +# [kubectl](#tab/kubectl) -# [`kubectl create`](#tab/kubectl) +1. Create an NGINX web server using the `kubectl create deployment nginx` command. ++ ```bash-interactive + kubectl create deployment nginx --image=nginx:latest --replicas=3 + ``` ++2. View the pod resources using the `kubectl get pods` command. ++ ```bash-interactive + kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status" + ``` -```bash-interactive -kubectl create deployment nginx --image=nginx:latest --replicas=3 -``` + The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready. ++ ```output + NAME IPs NODE READY + nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True + nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True + nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True + ``` # [YAML](#tab/yaml) -```yml -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app: nginx - name: nginx -spec: - replicas: 3 - selector: - matchLabels: - app: nginx - template: +1. Create an NGINX web server using the following YAML manifest. ++ ```yml + apiVersion: apps/v1 + kind: Deployment metadata: labels: app: nginx+ name: nginx spec:- containers: - - image: nginx:latest - name: nginx -``` + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - image: nginx:latest + name: nginx + ``` -+2. View the pod resources using the `kubectl get pods` command. -Using the following `kubectl get pods` command will show that the pods have both IPv4 and IPv6 addresses (note that the pods will not show IP addresses until they are ready): + ```bash-interactive + kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status" + ``` -```bash-interactive -kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status" -``` + The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready. 
-``` -NAME IPs NODE READY -nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True -nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True -nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True -``` + ```output + NAME IPs NODE READY + nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True + nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True + nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True + ``` ++ -### Expose the workload via a `LoadBalancer`-type service +## Expose the workload via a `LoadBalancer` type service > [!IMPORTANT]-> There are currently two limitations pertaining to IPv6 services in AKS. These are both preview limitations and work is underway to remove them. -> * Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic cannot be routed to a pod and thus traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` will fail. IPv6 services MUST be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node, in order to function. -> * Only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service will only receive a public IP for its first listed IP family. In order to provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6. +> There are currently **two limitations** pertaining to IPv6 services in AKS. These are both preview limitations and work is underway to remove them. +> +> 1. Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fail. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node. +> 2. Only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6. -IPv6 services in Kubernetes can be exposed publicly similarly to an IPv4 service. +# [kubectl](#tab/kubectl) -# [`kubectl expose`](#tab/kubectl) +1. Expose the NGINX deployment using the `kubectl expose deployment nginx` command. -```bash-interactive -kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer' -kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"]}}' -``` + ```bash-interactive + kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer' + kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"]}}' + ``` -``` -service/nginx-ipv4 exposed -service/nginx-ipv6 exposed -``` + You receive an output that shows the services have been exposed. ++ ```output + service/nginx-ipv4 exposed + service/nginx-ipv6 exposed + ``` ++2. 
Once the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services using the `kubectl get services` command. ++ ```bash-interactive + kubectl get services + ``` ++ ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + nginx-ipv4 LoadBalancer 10.0.88.78 20.46.24.24 80:30652/TCP 97s + nginx-ipv6 LoadBalancer fd12:3456:789a:1::981a 2603:1030:8:5::2d 80:32002/TCP 63s + ``` ++3. Verify functionality via a command-line web request from an IPv6 capable host. Azure Cloud Shell isn't IPv6 capable. ++ ```bash-interactive + SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + curl -s "http://[${SERVICE_IP}]" | head -n5 + ``` ++ ```html + <!DOCTYPE html> + <html> + <head> + <title>Welcome to nginx!</title> + <style> + ``` # [YAML](#tab/yaml) -```yml --apiVersion: v1 -kind: Service -metadata: - labels: - app: nginx - name: nginx-ipv4 -spec: - externalTrafficPolicy: Cluster - ports: - - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: nginx - type: LoadBalancer --apiVersion: v1 -kind: Service -metadata: - labels: - app: nginx - name: nginx-ipv6 -spec: - externalTrafficPolicy: Cluster - ipFamilies: - - IPv6 - ports: - - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: nginx - type: LoadBalancer --``` +1. Expose the NGINX deployment using the following YAML manifest. -+ ```yml + + apiVersion: v1 + kind: Service + metadata: + labels: + app: nginx + name: nginx-ipv4 + spec: + externalTrafficPolicy: Cluster + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginx + type: LoadBalancer + + apiVersion: v1 + kind: Service + metadata: + labels: + app: nginx + name: nginx-ipv6 + spec: + externalTrafficPolicy: Cluster + ipFamilies: + - IPv6 + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginx + type: LoadBalancer + ``` -Once the deployment has been exposed and the `LoadBalancer` services have been fully provisioned, `kubectl get services` will show the IP addresses of the +2. Once the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services using the `kubectl get services` command. -```bash-interactive -kubectl get services -``` + ```bash-interactive + kubectl get services + ``` -```output -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -nginx-ipv4 LoadBalancer 10.0.88.78 20.46.24.24 80:30652/TCP 97s -nginx-ipv6 LoadBalancer fd12:3456:789a:1::981a 2603:1030:8:5::2d 80:32002/TCP 63s -``` + ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + nginx-ipv4 LoadBalancer 10.0.88.78 20.46.24.24 80:30652/TCP 97s + nginx-ipv6 LoadBalancer fd12:3456:789a:1::981a 2603:1030:8:5::2d 80:32002/TCP 63s + ``` -Next, we can verify functionality via a command-line web request from an IPv6 capable host (note that Azure Cloud Shell is not IPv6 capable): +3. Verify functionality via a command-line web request from an IPv6 capable host. Azure Cloud Shell isn't IPv6 capable. 
-```bash-interactive -SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}') -curl -s "http://[${SERVICE_IP}]" | head -n5 -``` + ```bash-interactive + SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + curl -s "http://[${SERVICE_IP}]" | head -n5 + ``` -```html -<!DOCTYPE html> -<html> -<head> -<title>Welcome to nginx!</title> -<style> -``` + ```html + <!DOCTYPE html> + <html> + <head> + <title>Welcome to nginx!</title> + <style> + ``` + <!-- LINKS - External --> [kubernetes-dual-stack]: https://kubernetes.io/docs/concepts/services-networking/dual-stack/ curl -s "http://[${SERVICE_IP}]" | head -n5 [kubenet]: ./configure-kubenet.md [aks-out-of-tree]: ./out-of-tree.md [nat-gateway]: ../virtual-network/nat-gateway/nat-overview.md-[install-azure-cli]: /cli/azure/install-azure-cli [aks-network-concepts]: concepts-network.md-[aks-network-nsg]: concepts-network.md#network-security-groups [az-group-create]: /cli/azure/group#az_group_create-[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create -[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac -[az-network-vnet-show]: /cli/azure/network/vnet#az_network_vnet_show -[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_show -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create [az-aks-create]: /cli/azure/aks#az_aks_create-[byo-subnet-route-table]: #bring-your-own-subnet-and-route-table-with-kubenet -[develop-helm]: quickstart-helm.md -[use-helm]: kubernetes-helm.md -[virtual-nodes]: virtual-nodes-cli.md -[vnet-peering]: ../virtual-network/virtual-network-peering-overview.md -[express-route]: ../expressroute/expressroute-introduction.md -[network-comparisons]: concepts-network.md#compare-network-models -[custom-route-table]: ../virtual-network/manage-route-table.md -[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi -[az-provider-register]: /cli/azure/provider#az-provider-register -[az-feature-register]: /cli/azure/feature#az-feature-register -[az-feature-show]: /cli/azure/feature#az-feature-show +[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials |
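The `--pod-cidrs` and `--service-cidrs` attributes described above can also be set explicitly instead of relying on the defaults. The following is a minimal sketch, assuming the same placeholder names as the earlier `az aks create` example; it passes one range per IP family, in the same order as `--ip-families`, using the default values quoted in this article.

```azurecli-interactive
az aks create \
    --location <region> \
    --resource-group <resourceGroupName> \
    --name <clusterName> \
    --ip-families ipv4,ipv6 \
    --pod-cidrs 10.244.0.0/16,fd12:3456:789a::/64 \
    --service-cidrs 10.0.0.0/16,fd12:3456:789a:1::/108 \
    --generate-ssh-keys
```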
aks | Dapr Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md | After learning about Dapr and some of the challenges it solves, try [Deploying a [dapr-docs]: https://docs.dapr.io/ [dapr-blocks]: https://docs.dapr.io/concepts/building-blocks-concept/ [dapr-secrets-block]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/-[dapr-msi]: https://docs.dapr.io/developing-applications/integrations/azure/authenticating-azure +[dapr-msi]: https://docs.dapr.io/developing-applications/integrations/azure/azure-authentication |
aks | Dapr Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md | az k8s-extension show --cluster-type managedClusters \ > * `global.ha.*` > * `dapr_placement.*` >-> HA is enabled enabled by default. Disabling it requires deletion and recreation of the extension. +> HA is enabled by default. Disabling it requires deletion and recreation of the extension. To update your Dapr configuration settings, recreate the extension with the desired state. For example, assume we've previously created and installed the extension using the following configuration: |
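The concrete configuration referenced at the end of this change isn't reproduced in this digest. As an illustrative sketch only (not the article's own example), recreating the Dapr extension with a desired high-availability state might look like the following, assuming placeholder cluster and resource group names and the `k8s-extension` Azure CLI extension:

```azurecli-interactive
# Recreate the Dapr extension with HA explicitly enabled; per the note above,
# global.ha.* can only be changed by deleting and recreating the extension.
az k8s-extension create \
    --cluster-type managedClusters \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name dapr \
    --extension-type Microsoft.Dapr \
    --auto-upgrade-minor-version true \
    --configuration-settings "global.ha.enabled=true"
```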
aks | Deploy Marketplace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md | az provider register --namespace Microsoft.KubernetesConfiguration --wait ## Verify the deployment +### [Portal](#tab/azure-portal) ++Verify the deployment navigating to the cluster you recently installed the extension on, then navigate to "Extensions + Applications", where you'll see the extension status: ++ :::image type="content" source="./media/deploy-marketplace/verify-inline.png" lightbox="./media/deploy-marketplace/verify.png" alt-text="The Azure portal page for the A K S cluster is shown. 'Extensions + Applications' is selected, and the deployed extension is listed."::: + ### [Azure CLI](#tab/azure-cli) Verify the deployment by using the following command to list the extensions that are running on your cluster: Verify the deployment by using the following command to list the extensions that az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` -### [Portal](#tab/azure-portal) --Verify the deployment navigating to the cluster you recently installed the extension on, then navigate to "Extensions + Applications", where you'll see the extension status: -- :::image type="content" source="./media/deploy-marketplace/verify-inline.png" lightbox="./media/deploy-marketplace/verify.png" alt-text="The Azure portal page for the A K S cluster is shown. 'Extensions + Applications' is selected, and the deployed extension is listed."::: - ## Manage the offer lifecycle For lifecycle management, an Azure Kubernetes offer is represented as a cluster Purchasing an offer from Azure Marketplace creates a new instance of the extension on your AKS cluster. -### [Azure CLI](#tab/azure-cli) --You can view the extension instance from the cluster by using the following command: --```azurecli-interactive -az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters -``` - ### [Portal](#tab/azure-portal) First, navigate to an existing cluster, then select "Extensions + applications": To manage settings of your installed extension, you can edit the configuration s ![Screenshot of Cluster-extension-config-settings.](media/deploy-marketplace/cluster-extension-config-settings.png) +### [Azure CLI](#tab/azure-cli) ++You can view the extension instance from the cluster by using the following command: ++```azurecli-interactive +az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters +``` ++++++ To monitor billing and usage information for the offer that you deployed: You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster. -### [Azure CLI](#tab/azure-cli) -```azurecli-interactive -az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters -``` ### [Portal](#tab/azure-portal) Select an application, then select the uninstall button to remove the extension :::image type="content" source="./media/deploy-marketplace/uninstall-inline.png" alt-text="The Azure portal page for the A K S cluster is shown. The deployed extension is listed with the 'uninstall' button highlighted." 
lightbox="./media/deploy-marketplace/uninstall.png"::: +### [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters +``` ++ ## Troubleshooting If you experience issues, see the [troubleshooting checklist for failed deployme ++- Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli) ++++ |
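When scripting the lifecycle steps above, it can help to check a single extension's provisioning state rather than listing everything. A minimal sketch, assuming the same placeholder names used in this article and that the extension resource exposes a `provisioningState` property:

```azurecli-interactive
# Returns "Succeeded" once the Marketplace extension has finished installing.
az k8s-extension show \
    --name <extension-name> \
    --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --query provisioningState \
    --output tsv
```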
aks | Developer Best Practices Pod Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-pod-security.md | This article focused on how to secure your pods. To implement some of these area <!-- EXTERNAL LINKS --> [linux-capabilities]: http://man7.org/linux/man-pages/man7/capabilities.7.html-[selinux-labels]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#selinuxoptions-v1-core +[selinux-labels]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#selinuxoptions-v1-core [aks-associated-projects]: https://awesomeopensource.com/projects/aks?categoryPage=11 [azure-sdk-download]: https://azure.microsoft.com/downloads/ |
aks | Egress Udr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-udr.md | Using outbound type is an advanced networking scenario and requires proper netwo AKS doesn't automatically configure egress paths if `userDefinedRouting` is set, which means you must configure the egress. -When you don't use standard load balancer (SLB) architecture, you must establish explicit egress. You must deploy your AKS cluster into an existing virtual network with a subnet that has been previously configured. This architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy so a public IP assigned to the standard load balancer or appliance can handle the Network Address Translation (NAT). +When you don't use standard load balancer (SLB) architecture, you must establish explicit egress. You must deploy your AKS cluster into an existing virtual network with a subnet that has been previously configured. This architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, or proxy, so a public IP assigned to the standard load balancer or appliance can handle the Network Address Translation (NAT). ### Load balancer creation with `userDefinedRouting` |
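Because AKS doesn't configure egress paths when `userDefinedRouting` is set, the cluster must be created in a subnet whose route table already points default traffic at your firewall or network virtual appliance. A minimal sketch of the cluster creation step, assuming placeholder names and a pre-configured `<subnet-id>`:

```azurecli-interactive
# The subnet's route table must already send 0.0.0.0/0 to your firewall/NVA;
# AKS creates no outbound path for you with this outbound type.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --vnet-subnet-id <subnet-id> \
    --outbound-type userDefinedRouting \
    --generate-ssh-keys
```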
aks | Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md | This article shows you how to create an Azure Kubernetes Service (AKS) cluster w ## Create an AKS cluster with a managed NAT gateway * Create an AKS cluster with a new managed NAT gateway using the [`az aks create`][az-aks-create] command with the `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` parameters. If you want the NAT gateway to operate out of a specific availability zone, specify the zones using `--zones`.-* A single NAT gateway resource cannot be used across multiple availability zones. To ensure zone-resiliency, it is recommended to deploy a NAT gateway resource to each availability zone and assign to subnets containing AKS clusters in each zone. For more information on this deployment model, see [NAT gateway for each zone](/azure/nat-gateway/nat-availability-zones#zonal-nat-gateway-resource-for-each-zone-in-a-region-to-create-zone-resiliency). +* A managed NAT gateway resource cannot be used across multiple availability zones. When you deploy a managed NAT gateway instance, it is deployed to "no zone". "No zone" NAT gateway resources are deployed to a single availability zone for you by Azure. For more information on the non-zonal deployment model, see [non-zonal NAT gateway](/azure/nat-gateway/nat-availability-zones#non-zonal). ```azurecli-interactive az aks create \ This article shows you how to create an Azure Kubernetes Service (AKS) cluster w ``` > [!IMPORTANT]- > If no zone is configured for NAT gateway, the default zone placement is "no zone", in which Azure places NAT gateway into a zone for you. + > Zonal configuration for your NAT gateway resource can be done with user-assigned NAT gateway resources. See [Create an AKS cluster with a user-assigned NAT gateway](#create-an-aks-cluster-with-a-user-assigned-nat-gateway) for more details. > If no value for the outbound IP address is specified, the default value is one. ### Update the number of outbound IP addresses |
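This change ends at the section on updating the number of outbound IP addresses. As a minimal sketch only (the article's own example may differ), an update on a `managedNATGateway` cluster might look like the following, assuming the placeholder names used above:

```azurecli-interactive
# Scale a managedNATGateway cluster to five outbound IPs with a 4-minute idle timeout.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --nat-gateway-managed-outbound-ip-count 5 \
    --nat-gateway-idle-timeout 4
```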
aks | Open Service Mesh Deploy Addon Az Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md | Title: Install the Open Service Mesh add-on by using the Azure CLI -description: Use Azure CLI commands to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster. + Title: Install the Open Service Mesh (OSM) add-on using Azure CLI +description: Use Azure CLI to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster. Previously updated : 11/10/2021 Last updated : 06/27/2023 -# Install the Open Service Mesh add-on by using the Azure CLI +# Install the Open Service Mesh (OSM) add-on using Azure CLI -This article shows you how to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster and verify that it's installed and running. +This article shows you how to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster. The OSM add-on installs the OSM mesh on your cluster. The OSM mesh is a service mesh that provides traffic management, policy enforcement, and telemetry collection for your applications. For more information about the OSM mesh, see [Open Service Mesh](https://openservicemesh.io/). > [!IMPORTANT]-> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM: -> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.5 -> * of OSM. -> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM. -> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM. +> Based on the version of Kubernetes your cluster runs, the OSM add-on installs a different version of OSM: +> +> - If your cluster runs a Kubernetes version *1.24.0 or greater*, the OSM add-on installs OSM version *1.2.5*. +> - If your cluster runs a Kubernetes version *between 1.23.5 and 1.24.0*, the OSM add-on installs OSM version *1.1.3*. +> - If your cluster runs a Kubernetes version *below 1.23.5*, the OSM add-on installs OSM version *1.0.0*. ## Prerequisites -* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). -* [Azure CLI installed](/cli/azure/install-azure-cli). +- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). +- [Azure CLI installed](/cli/azure/install-azure-cli). ## Install the OSM add-on on your cluster -To install the OSM add-on, use `--enable-addons open-service-mesh` when creating or updating a cluster. +1. If you don't have one already, create an Azure resource group using the [`az group create`][az-group-create] command. -The following example creates a *myResourceGroup* resource group. Then it creates a *myAKSCluster* cluster with three nodes and the OSM add-on. + ```azurecli-interactive + az group create --name myResourceGroup --location eastus + ``` -```azurecli-interactive -az group create --name myResourceGroup --location eastus +2. Create a new AKS cluster with the OSM add-on installed using the [`az aks create`][az-aks-create] command and specify `open-service-mesh` for the `--enable-addons` parameter. 
-az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --enable-addons open-service-mesh -``` --For existing clusters, use `az aks enable-addons`. The following code shows an example. + ```azurecli-interactive + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --enable-addons open-service-mesh + ``` > [!IMPORTANT] > You can't enable the OSM add-on on an existing cluster if an OSM mesh is already on your cluster. Uninstall any existing OSM meshes on your cluster before enabling the OSM add-on.--```azurecli-interactive -az aks enable-addons \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --addons open-service-mesh -``` +> +> When installing on an existing clusters, use the [`az aks enable-addons`][az-aks-enable-addons] command. The following code shows an example: +> +> ```azurecli-interactive +> az aks enable-addons \ +> --resource-group myResourceGroup \ +> --name myAKSCluster \ +> --addons open-service-mesh +> ``` ## Get the credentials for your cluster -Get the credentials for your AKS cluster by using the `az aks get-credentials` command. The following example command gets the credentials for *myAKSCluster* in the *myResourceGroup* resource group: --```azurecli-interactive -az aks get-credentials --resource-group myResourceGroup --name myAKSCluster -``` --## Verify that the OSM add-on is installed on your cluster --To see if the OSM add-on is installed on your cluster, verify that the `enabled` value is `true` for `openServiceMesh` under `addonProfiles`. The following example shows the status of the OSM add-on for *myAKSCluster* in *myResourceGroup*: --```azurecli-interactive -az aks show --resource-group myResourceGroup --name myAKSCluster --query 'addonProfiles.openServiceMesh.enabled' -``` --## Verify that the OSM mesh is running on your cluster --You can verify the version, status, and configuration of the OSM mesh that's running on your cluster. Use `kubectl` to display the image version of the *osm-controller* deployment. For example: --```azurecli-interactive -kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}' -``` --The following example output shows version *0.11.1* of the OSM mesh: --```output -$ kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}' -mcr.microsoft.com/oss/openservicemesh/osm-controller:v0.11.1 -``` --To verify the status of the OSM components running on your cluster, use `kubectl` to show the status of the `app.kubernetes.io/name=openservicemesh.io` deployments, pods, and services. For example: --```azurecli-interactive -kubectl get deployments -n kube-system --selector app.kubernetes.io/name=openservicemesh.io -kubectl get pods -n kube-system --selector app.kubernetes.io/name=openservicemesh.io -kubectl get services -n kube-system --selector app.kubernetes.io/name=openservicemesh.io -``` --> [!IMPORTANT] -> If any pods have a status other than `Running`, such as `Pending`, your cluster might not have enough resources to run OSM. Review the sizing for your cluster, such as the number of nodes and the virtual machine's SKU, before continuing to use OSM on your cluster. --To verify the configuration of your OSM mesh, use `kubectl get meshconfig`. 
For example: --```azurecli-interactive -kubectl get meshconfig osm-mesh-config -n kube-system -o yaml -``` --The following example output shows the configuration of an OSM mesh: --```yaml -apiVersion: config.openservicemesh.io/v1alpha1 -kind: MeshConfig -metadata: - creationTimestamp: "0000-00-00A00:00:00A" - generation: 1 - name: osm-mesh-config - namespace: kube-system - resourceVersion: "2494" - uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31 -spec: - certificate: - serviceCertValidityDuration: 24h - featureFlags: - enableEgressPolicy: true - enableMulticlusterMode: false - enableWASMStats: true - observability: - enableDebugServer: true - osmLogLevel: info - tracing: - address: jaeger.osm-system.svc.cluster.local - enable: false - endpoint: /api/v2/spans - port: 9411 - sidecar: - configResyncInterval: 0s - enablePrivilegedInitContainer: false - envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3 - initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1 - logLevel: error - maxDataPlaneConnections: 0 - resources: {} - traffic: - enableEgress: true - enablePermissiveTrafficPolicyMode: true - inboundExternalAuthorization: - enable: false - failureModeAllow: false - statPrefix: inboundExtAuthz - timeout: 1s - useHTTPSIngress: false -``` --The preceding example shows `enablePermissiveTrafficPolicyMode: true`, which means OSM has permissive traffic policy mode enabled. With this mode enabled in your OSM mesh: --* The [SMI][smi] traffic policy enforcement is bypassed. -* OSM automatically discovers services that are a part of the service mesh. -* OSM creates traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services. +- Get the credentials for your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials --resource-group myResourceGroup --name myAKSCluster + ``` ++## Verify the OSM add-on is installed on your cluster ++- Verify the OSM add-on is installed on your cluster using the [`az aks show`][az-aks-show] command with and specify `'addonProfiles.openServiceMesh.enabled'` for the `--query` parameter. In the output, under `addonProfiles`, the `enabled` value should show as `true` for `openServiceMesh`. ++ ```azurecli-interactive + az aks show --resource-group myResourceGroup --name myAKSCluster --query 'addonProfiles.openServiceMesh.enabled' + ``` ++## Verify the OSM mesh is running on your cluster ++1. Verify the version, status, and configuration of the OSM mesh running on your cluster using the `kubectl get deployment` command and display the image version of the *osm-controller* deployment. ++ ```azurecli-interactive + kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}' + ``` ++ The following example output shows version *0.11.1* of the OSM mesh: ++ ```output + mcr.microsoft.com/oss/openservicemesh/osm-controller:v0.11.1 + ``` ++2. Verify the status of the OSM components running on your cluster using the following `kubectl` commands to show the status of the `app.kubernetes.io/name=openservicemesh.io` deployments, pods, and services. 
++ ```azurecli-interactive + kubectl get deployments -n kube-system --selector app.kubernetes.io/name=openservicemesh.io + kubectl get pods -n kube-system --selector app.kubernetes.io/name=openservicemesh.io + kubectl get services -n kube-system --selector app.kubernetes.io/name=openservicemesh.io + ``` ++ > [!IMPORTANT] + > If any pods have a status other than `Running`, such as `Pending`, your cluster might not have enough resources to run OSM. Review the sizing for your cluster, such as the number of nodes and the virtual machine's SKU, before continuing to use OSM on your cluster. ++3. Verify the configuration of your OSM mesh using the `kubectl get meshconfig` command. ++ ```azurecli-interactive + kubectl get meshconfig osm-mesh-config -n kube-system -o yaml + ``` ++ The following example output shows the configuration of an OSM mesh: ++ ```yaml + apiVersion: config.openservicemesh.io/v1alpha1 + kind: MeshConfig + metadata: + creationTimestamp: "0000-00-00A00:00:00A" + generation: 1 + name: osm-mesh-config + namespace: kube-system + resourceVersion: "2494" + uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31 + spec: + certificate: + serviceCertValidityDuration: 24h + featureFlags: + enableEgressPolicy: true + enableMulticlusterMode: false + enableWASMStats: true + observability: + enableDebugServer: true + osmLogLevel: info + tracing: + address: jaeger.osm-system.svc.cluster.local + enable: false + endpoint: /api/v2/spans + port: 9411 + sidecar: + configResyncInterval: 0s + enablePrivilegedInitContainer: false + envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3 + initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1 + logLevel: error + maxDataPlaneConnections: 0 + resources: {} + traffic: + enableEgress: true + enablePermissiveTrafficPolicyMode: true + inboundExternalAuthorization: + enable: false + failureModeAllow: false + statPrefix: inboundExtAuthz + timeout: 1s + useHTTPSIngress: false + ``` ++ The example output shows `enablePermissiveTrafficPolicyMode: true`, which means OSM has permissive traffic policy mode enabled. With this mode enabled in your OSM mesh: ++ - The [SMI][smi] traffic policy enforcement is bypassed. + - OSM automatically discovers services that are a part of the service mesh. + - OSM creates traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services. ## Delete your cluster -When you no longer need the cluster, use the `az group delete` command to remove the resource group, the cluster, and all related resources: +- When you no longer need the cluster, you can delete it using the [`az group delete`][az-group-delete] command, which removes the resource group, the cluster, and all related resources. -```azurecli-interactive -az group delete --name myResourceGroup --yes --no-wait -``` + ```azurecli-interactive + az group delete --name myResourceGroup --yes --no-wait + ``` -Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall]. +> [!NOTE] +> Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall]. ## Next steps -This article showed you how to install the OSM add-on on an AKS cluster, and then verify that it's installed and running. 
With the OSM add-on installed on your cluster, you can [deploy a sample application][osm-deploy-sample-app] or [onboard an existing application][osm-onboard-app] to work with your OSM mesh. +This article showed you how to install the OSM add-on on an AKS cluster and verify it's installed and running. With the OSM add-on installed on your cluster, you can [deploy a sample application][osm-deploy-sample-app] or [onboard an existing application][osm-onboard-app] to work with your OSM mesh. -[aks-ephemeral]: cluster-configuration.md#ephemeral-os -[osm-sample]: open-service-mesh-deploy-new-application.md +<!-- LINKS --> [osm-uninstall]: open-service-mesh-uninstall-add-on.md [smi]: https://smi-spec.io/ [osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/ [osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/+[az-group-create]: /cli/azure/group#az_group_create +[az-aks-create]: /cli/azure/aks#az_aks_create +[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials +[az-aks-show]: /cli/azure/aks#az_aks_show +[az-group-delete]: /cli/azure/group#az_group_delete +[az-aks-enable-addons]: /cli/azure/aks#az_aks_enable_addons |
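If you want to remove only the OSM add-on rather than deleting the whole resource group, the linked uninstall article covers the full procedure; a minimal sketch of the core step, assuming the same resource group and cluster names used above:

```azurecli-interactive
# Remove the OSM add-on from the cluster. See the linked uninstall article for
# cleaning up remaining OSM resources such as injected sidecars and CRDs.
az aks disable-addons \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --addons open-service-mesh
```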
aks | Open Service Mesh Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-troubleshoot.md | Title: Troubleshooting Open Service Mesh -description: How to troubleshoot Open Service Mesh + Title: Troubleshoot the Open Service Mesh (OSM) add-on for Azure Kubernetes Service (AKS) +description: How to troubleshoot the Open Service Mesh (OSM) add-on for Azure Kubernetes Service (AKS). Previously updated : 8/26/2021 Last updated : 06/27/2023 -# Open Service Mesh (OSM) AKS add-on Troubleshooting Guides +# Troubleshoot the Open Service Mesh (OSM) add-on for Azure Kubernetes Service (AKS) -When you deploy the OSM AKS add-on, you could possibly experience problems associated with configuration of the service mesh. The following guide will assist you on how to troubleshoot errors and resolve common problems. +When you deploy the Open Service Mesh (OSM) add-on for Azure Kubernetes Service (AKS), you may experience problems associated with the service mesh configuration. The article explores common troubleshooting errors and how to resolve them. -## Verifying and Troubleshooting OSM components +## Verifying and troubleshooting OSM components -### Check OSM Controller Deployment, Pod, and Service +### Check OSM Controller deployment, pod, and service -```azurecli-interactive -kubectl get deployment,pod,service -n kube-system --selector app=osm-controller -``` +* Check the OSM Controller deployment, pod, and service health using the `kubectl get deployment,pod,service` command. -A healthy OSM Controller would look like this: + ```azurecli-interactive + kubectl get deployment,pod,service -n kube-system --selector app=osm-controller + ``` -```Output -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/osm-controller 2/2 2 2 3m4s + A healthy OSM controller gives an output similar to the following example output: -NAME READY STATUS RESTARTS AGE -pod/osm-controller-65bd8c445c-zszp4 1/1 Running 0 2m -pod/osm-controller-65bd8c445c-xqhmk 1/1 Running 0 16s + ```output + NAME READY UP-TO-DATE AVAILABLE AGE + deployment.apps/osm-controller 2/2 2 2 3m4s -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/osm-controller ClusterIP 10.96.185.178 <none> 15128/TCP,9092/TCP,9091/TCP 3m4s -service/osm-validator ClusterIP 10.96.11.78 <none> 9093/TCP 3m4s -``` + NAME READY STATUS RESTARTS AGE + pod/osm-controller-65bd8c445c-zszp4 1/1 Running 0 2m + pod/osm-controller-65bd8c445c-xqhmk 1/1 Running 0 16s -> [!NOTE] -> For the osm-controller services the CLUSTER-IP would be different. The service NAME and PORT(S) must be the same as the example above. + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/osm-controller ClusterIP 10.96.185.178 <none> 15128/TCP,9092/TCP,9091/TCP 3m4s + service/osm-validator ClusterIP 10.96.11.78 <none> 9093/TCP 3m4s + ``` ++ > [!NOTE] + > For the `osm-controller` services, the CLUSTER-IP is different. The service NAME and PORT(S) must be the same as the example output. ++### Check OSM Injector deployment, pod, and service ++* Check the OSM Injector deployment, pod, and service health using the `kubectl get deployment,pod,service` command. 
++ ```azurecli-interactive + kubectl get deployment,pod,service -n kube-system --selector app=osm-injector + ``` -### Check OSM Injector Deployment, Pod, and Service + A healthy OSM Injector gives an output similar to the following example output: -```azurecli-interactive -kubectl get deployment,pod,service -n kube-system --selector app=osm-injector -``` + ```output + NAME READY UP-TO-DATE AVAILABLE AGE + deployment.apps/osm-injector 2/2 2 2 4m37s -A healthy OSM Injector would look like this: + NAME READY STATUS RESTARTS AGE + pod/osm-injector-5c49bd8d7c-b6cx6 1/1 Running 0 4m21s + pod/osm-injector-5c49bd8d7c-dx587 1/1 Running 0 4m37s -```Output -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/osm-injector 2/2 2 2 4m37s + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/osm-injector ClusterIP 10.96.236.108 <none> 9090/TCP 4m37s + ``` -NAME READY STATUS RESTARTS AGE -pod/osm-injector-5c49bd8d7c-b6cx6 1/1 Running 0 4m21s -pod/osm-injector-5c49bd8d7c-dx587 1/1 Running 0 4m37s +### Check OSM Bootstrap deployment, pod, and service -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/osm-injector ClusterIP 10.96.236.108 <none> 9090/TCP 4m37s -``` +* Check the OSM Bootstrap deployment, pod, and service health using the `kubectl get deployment,pod,service` command. -### Check OSM Bootstrap Deployment, Pod, and Service + ```azurecli-interactive + kubectl get deployment,pod,service -n kube-system --selector app=osm-bootstrap + ``` -```azurecli-interactive -kubectl get deployment,pod,service -n kube-system --selector app=osm-bootstrap -``` + A healthy OSM Bootstrap gives an output similar to the following example output: -A healthy OSM Bootstrap would look like this: + ```output + NAME READY UP-TO-DATE AVAILABLE AGE + deployment.apps/osm-bootstrap 1/1 1 1 5m25s -```Output -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/osm-bootstrap 1/1 1 1 5m25s + NAME READY STATUS RESTARTS AGE + pod/osm-bootstrap-594ffc6cb7-jc7bs 1/1 Running 0 5m25s -NAME READY STATUS RESTARTS AGE -pod/osm-bootstrap-594ffc6cb7-jc7bs 1/1 Running 0 5m25s + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/osm-bootstrap ClusterIP 10.96.250.208 <none> 9443/TCP,9095/TCP 5m25s + ``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/osm-bootstrap ClusterIP 10.96.250.208 <none> 9443/TCP,9095/TCP 5m25s -``` +### Check validating and mutating webhooks -### Check Validating and Mutating webhooks +1. Check the OSM Validating Webhook using the `kubectl get ValidatingWebhookConfiguration` command. -```azurecli-interactive -kubectl get ValidatingWebhookConfiguration --selector app=osm-controller -``` + ```azurecli-interactive + kubectl get ValidatingWebhookConfiguration --selector app=osm-controller + ``` -A healthy OSM Validating Webhook would look like this: + A healthy OSM Validating Webhook gives an output similar to the following example output: -```Output -NAME WEBHOOKS AGE -aks-osm-validator-mesh-osm 1 81m -``` + ```output + NAME WEBHOOKS AGE + aks-osm-validator-mesh-osm 1 81m + ``` -```azurecli-interactive -kubectl get MutatingWebhookConfiguration --selector app=osm-injector -``` +2. Check the OSM Mutating Webhook using the `kubectl get MutatingWebhookConfiguration` command. 
-A healthy OSM Mutating Webhook would look like this: +    ```azurecli-interactive
+    kubectl get MutatingWebhookConfiguration --selector app=osm-injector
+    ```
-```Output
-NAME                  WEBHOOKS   AGE
-aks-osm-webhook-osm   1          102m
-```
+    A healthy OSM Mutating Webhook gives an output similar to the following example output:
-### Check for the service and the CA bundle of the Validating webhook
+    ```output
+    NAME                  WEBHOOKS   AGE
+    aks-osm-webhook-osm   1          102m
+    ```
-```azurecli-interactive
-kubectl get ValidatingWebhookConfiguration aks-osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service'
-```
+### Check for the service and CA bundle of the Validating Webhook
-A well configured Validating Webhook Configuration would look exactly like this:
+* Check for the service and CA bundle of the OSM Validating Webhook using the `kubectl get ValidatingWebhookConfiguration` command with `aks-osm-validator-mesh-osm` and `jq '.webhooks[0].clientConfig.service'`.
-```json
-{
-  "name": "osm-config-validator",
-  "namespace": "kube-system",
-  "path": "/validate-webhook",
-  "port": 9093
-}
-```
+    ```azurecli-interactive
+    kubectl get ValidatingWebhookConfiguration aks-osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service'
+    ```
-### Check for the service and the CA bundle of the Mutating webhook
+    A well-configured Validating Webhook configuration looks like the following example JSON output:
-```azurecli-interactive
-kubectl get MutatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
-```
+    ```json
+    {
+      "name": "osm-config-validator",
+      "namespace": "kube-system",
+      "path": "/validate-webhook",
+      "port": 9093
+    }
+    ```
-A well configured Mutating Webhook Configuration would look exactly like this:
+### Check for the service and CA bundle of the Mutating Webhook
-```json
-{
-  "name": "osm-injector",
-  "namespace": "kube-system",
-  "path": "/mutate-pod-creation",
-  "port": 9090
-}
-```
+* Check for the service and CA bundle of the OSM Mutating Webhook using the `kubectl get MutatingWebhookConfiguration` command with `aks-osm-webhook-osm` and `jq '.webhooks[0].clientConfig.service'`. 
++ ```azurecli-interactive + kubectl get MutatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service' + ``` ++ A well-configured Mutating Webhook configuration looks like the following example JSON output: ++ ```json + { + "name": "osm-injector", + "namespace": "kube-system", + "path": "/mutate-pod-creation", + "port": 9090 + } + ``` ### Check the `osm-mesh-config` resource -Check for the existence: --```azurecli-interactive -kubectl get meshconfig osm-mesh-config -n kube-system -``` --Check the content of the OSM MeshConfig --```azurecli-interactive -kubectl get meshconfig osm-mesh-config -n kube-system -o yaml -``` --``` -apiVersion: config.openservicemesh.io/v1alpha1 -kind: MeshConfig -metadata: - creationTimestamp: "0000-00-00A00:00:00A" - generation: 1 - name: osm-mesh-config - namespace: kube-system - resourceVersion: "2494" - uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31 -spec: - certificate: - serviceCertValidityDuration: 24h - featureFlags: - enableEgressPolicy: true - enableMulticlusterMode: false - enableWASMStats: true - observability: - enableDebugServer: true - osmLogLevel: info - tracing: - address: jaeger.kube-system.svc.cluster.local - enable: false - endpoint: /api/v2/spans - port: 9411 - sidecar: - configResyncInterval: 0s - enablePrivilegedInitContainer: false - envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3 - initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1 - logLevel: error - maxDataPlaneConnections: 0 - resources: {} - traffic: - enableEgress: true - enablePermissiveTrafficPolicyMode: true - inboundExternalAuthorization: - enable: false - failureModeAllow: false - statPrefix: inboundExtAuthz - timeout: 1s - useHTTPSIngress: false -``` --`osm-mesh-config` resource values: +1. Check the OSM MeshConfig resource exists using the `kubectl get meshconfig` command. ++ ```azurecli-interactive + kubectl get meshconfig osm-mesh-config -n kube-system + ``` ++2. Check the contents of the OSM MeshConfig resource using the `kubectl get meshconfig` command with `-o yaml`. 
++ ```azurecli-interactive + kubectl get meshconfig osm-mesh-config -n kube-system -o yaml + ``` ++ ```output + apiVersion: config.openservicemesh.io/v1alpha1 + kind: MeshConfig + metadata: + creationTimestamp: "0000-00-00A00:00:00A" + generation: 1 + name: osm-mesh-config + namespace: kube-system + resourceVersion: "2494" + uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31 + spec: + certificate: + serviceCertValidityDuration: 24h + featureFlags: + enableEgressPolicy: true + enableMulticlusterMode: false + enableWASMStats: true + observability: + enableDebugServer: true + osmLogLevel: info + tracing: + address: jaeger.kube-system.svc.cluster.local + enable: false + endpoint: /api/v2/spans + port: 9411 + sidecar: + configResyncInterval: 0s + enablePrivilegedInitContainer: false + envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3 + initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1 + logLevel: error + maxDataPlaneConnections: 0 + resources: {} + traffic: + enableEgress: true + enablePermissiveTrafficPolicyMode: true + inboundExternalAuthorization: + enable: false + failureModeAllow: false + statPrefix: inboundExtAuthz + timeout: 1s + useHTTPSIngress: false + ``` ++#### `osm-mesh-config` resource values | Key | Type | Default Value | Kubectl Patch Command Examples | |--|||--| spec: | spec.featureFlags.enableIngressBackendPolicy | bool | `"true"` | `kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"featureFlags":{"enableIngressBackendPolicy":"true"}}}' --type=merge` | | spec.featureFlags.enableEnvoyActiveHealthChecks | bool | `"false"` | `kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"featureFlags":{"enableEnvoyActiveHealthChecks":"false"}}}' --type=merge` | --### Check Namespaces +### Check namespaces > [!NOTE]-> The kube-system namespace will never participate in a service mesh and will never be labeled and/or annotated with the key/values below. +> The `kube-system` namespace never participates in a service mesh and is never labeled and/or annotated with the following key/values. -We use the `osm namespace add` command to join namespaces to a given service mesh. -When a k8s namespace is part of the mesh (or for it to be part of the mesh) the following must be true: +The `osm namespace add` command allows you to join namespaces to a given service mesh. When you want a K8s namespace to be part of the mesh, it must have the following annotation and label. -View the annotations with +1. View the annotations using the `kubectl get namespace` command with `jq '.metadata.annotations'`. -```azurecli-interactive -kubectl get namespace bookbuyer -o json | jq '.metadata.annotations' -``` + ```azurecli-interactive + kubectl get namespace bookbuyer -o json | jq '.metadata.annotations' + ``` -The following annotation must be present: + You must see the following annotation in the output: -```Output -{ - "openservicemesh.io/sidecar-injection": "enabled" -} -``` + ```output + { + "openservicemesh.io/sidecar-injection": "enabled" + } + ``` -View the labels with +2. View the labels using the `kubectl get namespaces` command with `jq '.metadata.labels'`. 
-```azurecli-interactive -kubectl get namespace bookbuyer -o json | jq '.metadata.labels' -``` + ```azurecli-interactive + kubectl get namespace bookbuyer -o json | jq '.metadata.labels' + ``` -The following label must be present: + You must see the following label in the output: -```Output -{ - "openservicemesh.io/monitored-by": "osm" -} -``` + ```output + { + "openservicemesh.io/monitored-by": "osm" + } + ``` -If a namespace is not annotated with `"openservicemesh.io/sidecar-injection": "enabled"` or not labeled with `"openservicemesh.io/monitored-by": "osm"` the OSM Injector will not add Envoy sidecars. +If a namespace doesn't have the `"openservicemesh.io/sidecar-injection": "enabled"` annotation or the `"openservicemesh.io/monitored-by": "osm"` label, the OSM Injector doesn't add Envoy sidecars. > [!NOTE]-> After `osm namespace add` is called only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restart deployment ...` --### Verify OSM CRDs: +> After `osm namespace add` is called, only **new** pods are injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restart deployment ...` -Check whether the cluster has the required CRDs: +### Verify OSM CRDs -```azurecli-interactive -kubectl get crds -``` +1. Check the cluster has the required CRDs using the `kubectl get crds` command. -We must have the following installed on the cluster: + ```azurecli-interactive + kubectl get crds + ``` -- egresses.policy.openservicemesh.io-- httproutegroups.specs.smi-spec.io -- ingressbackends.policy.openservicemesh.io-- meshconfigs.config.openservicemesh.io-- multiclusterservices.config.openservicemesh.io-- tcproutes.specs.smi-spec.io-- trafficsplits.split.smi-spec.io-- traffictargets.access.smi-spec.io+ The following CRDs must be installed on the cluster: -Get the versions of the SMI CRDs installed with this command: + * egresses.policy.openservicemesh.io + * httproutegroups.specs.smi-spec.io + * ingressbackends.policy.openservicemesh.io + * meshconfigs.config.openservicemesh.io + * multiclusterservices.config.openservicemesh.io + * tcproutes.specs.smi-spec.io + * trafficsplits.split.smi-spec.io + * traffictargets.access.smi-spec.io -```azurecli-interactive -osm mesh list -``` +2. Get the versions of the SMI CRDs installed using the `osm mesh list` command. 
-Expected output: + ```azurecli-interactive + osm mesh list + ``` -``` -MESH NAME MESH NAMESPACE VERSION ADDED NAMESPACES -osm kube-system v0.11.1 + Your output should look similar to the following example output: -MESH NAME MESH NAMESPACE SMI SUPPORTED -osm kube-system HTTPRouteGroup:v1alpha4,TCPRoute:v1alpha4,TrafficSplit:v1alpha2,TrafficTarget:v1alpha3 + ```output + MESH NAME MESH NAMESPACE VERSION ADDED NAMESPACES + osm kube-system v0.11.1 -To list the OSM controller pods for a mesh, please run the following command passing in the mesh's namespace - kubectl get pods -n <osm-mesh-namespace> -l app=osm-controller -``` + MESH NAME MESH NAMESPACE SMI SUPPORTED + osm kube-system HTTPRouteGroup:v1alpha4,TCPRoute:v1alpha4,TrafficSplit:v1alpha2,TrafficTarget:v1alpha3 -OSM Controller v0.11.1 requires the following versions: + To list the OSM controller pods for a mesh, please run the following command passing in the mesh's namespace + kubectl get pods -n <osm-mesh-namespace> -l app=osm-controller + ``` -- traffictargets.access.smi-spec.io - [v1alpha3](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-access/v1alpha3/traffic-access.md)-- httproutegroups.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#httproutegroup)-- tcproutes.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#tcproute)-- udproutes.specs.smi-spec.io - Not supported-- trafficsplits.split.smi-spec.io - [v1alpha2](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-split/v1alpha2/traffic-split.md)-- \*.metrics.smi-spec.io - [v1alpha1](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-metrics/v1alpha1/traffic-metrics.md)+ OSM Controller v0.11.1 requires the following versions: + * traffictargets.access.smi-spec.io - [v1alpha3](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-access/v1alpha3/traffic-access.md) + * httproutegroups.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#httproutegroup) + * tcproutes.specs.smi-spec.io - [v1alpha4](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md#tcproute) + * udproutes.specs.smi-spec.io - Not supported + * trafficsplits.split.smi-spec.io - [v1alpha2](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-split/v1alpha2/traffic-split.md) + * \*.metrics.smi-spec.io - [v1alpha1](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-metrics/v1alpha1/traffic-metrics.md) ### Certificate management -Information on how OSM issues and manages certificates to Envoy proxies running on application pods can be found on the [OpenServiceMesh docs site](https://docs.openservicemesh.io/docs/guides/certificates/). +For more information on how OSM issues and manages certificates to Envoy proxies running on application pods, see the [OSM certificates guide](https://docs.openservicemesh.io/docs/guides/certificates/). ### Upgrading Envoy -When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. 
Information regarding how to update the envoy version can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/getting_started/) on the OpenServiceMesh docs site. +When you create a new pod in a namespace monitored by the add-on, OSM injects an [Envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. For more information on how to update the Envoy version, see the [OSM upgrade guide](https://docs.openservicemesh.io/docs/getting_started/). |
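The OSM troubleshooting entry above notes that after `osm namespace add` only newly created pods receive an Envoy sidecar, and that existing pods must be restarted. The following sketch shows how that restart might look for the `bookbuyer` namespace used in the examples; the deployment name `bookbuyer` is an assumption for illustration.

```azurecli-interactive
# Join the namespace to the mesh; only pods created after this point get the Envoy sidecar.
osm namespace add bookbuyer

# Restart an existing deployment (assumed to be named "bookbuyer") so its pods are
# re-created and picked up by the OSM Injector.
kubectl rollout restart deployment bookbuyer -n bookbuyer

# Confirm the restarted pods now report the injected sidecar container (READY 2/2).
kubectl get pods -n bookbuyer
```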
aks | Operator Best Practices Container Image Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-container-image-management.md | Title: Operator best practices - Container image management in Azure Kubernetes Services (AKS) -description: Learn the cluster operator best practices for how to manage and secure container images in Azure Kubernetes Service (AKS) +description: Learn the cluster operator best practices for how to manage and secure container images in Azure Kubernetes Service (AKS). Previously updated : 03/11/2021- Last updated : 06/27/2023 # Best practices for container image management and security in Azure Kubernetes Service (AKS) -Container and container image security is a major priority while you develop and run applications in Azure Kubernetes Service (AKS). Containers with outdated base images or unpatched application runtimes introduce a security risk and possible attack vector. --Minimize risks by integrating and running scan and remediation tools in your containers at build and runtime. The earlier you catch the vulnerability or outdated base image, the more secure your cluster. +Container and container image security is a major priority when developing and running applications in Azure Kubernetes Service (AKS). Containers with outdated base images or unpatched application runtimes introduce security risks and possible attack vectors. You can minimize these risks by integrating and running scan and remediation tools in your containers at build and runtime. The earlier you catch the vulnerability or outdated base image, the more secure your application is. -In this article, *"containers"* means both: -* The container images stored in a container registry. -* The running containers. +In this article, *"containers"* refers to both the container images stored in a container registry and running containers. This article focuses on how to secure your containers in AKS. You learn how to: > [!div class="checklist"]+> > * Scan for and remediate image vulnerabilities. > * Automatically trigger and redeploy container images when a base image is updated. -You can also read the best practices for [cluster security][best-practices-cluster-security] and for [pod security][best-practices-pod-security]. +* You can read the best practices for [cluster security][best-practices-cluster-security] and [pod security][best-practices-pod-security]. +* You can use [Container security in Defender for Cloud][security-center-containers] to help scan your containers for vulnerabilities. [Azure Container Registry integration][security-center-acr] with Defender for Cloud helps protect your images and registry from vulnerabilities. -You can also use [Container security in Defender for Cloud][security-center-containers] to help scan your containers for vulnerabilities. [Azure Container Registry integration][security-center-acr] with Defender for Cloud helps protect your images and registry from vulnerabilities. +## Secure the images and runtime -## Secure the images and run time --> **Best practice guidance** +> **Best practice guidance** >-> Scan your container images for vulnerabilities. Only deploy validated images. Regularly update the base images and application runtime. Redeploy workloads in the AKS cluster. +> * Scan your container images for vulnerabilities. +> * Only deploy validated images. +> * Regularly update the base images and application runtime. +> * Redeploy workloads in the AKS cluster. 
++When adopting container-based workloads, you want to verify the security of images and runtime used to build your own applications. To help avoid introducing security vulnerabilities into your deployments, you can use the following best practices: -When adopting container-based workloads, you'll want to verify the security of images and runtime used to build your own applications. How do you avoid introducing security vulnerabilities into your deployments? -* Include in your deployment workflow a process to scan container images using tools such as [Twistlock][twistlock] or [Aqua][aqua]. +* Include in your deployment workflow a process to scan container images using tools, such as [Twistlock][twistlock] or [Aqua][aqua]. * Only allow verified images to be deployed. ![Scan and remediate container images, validate, and deploy](media/operator-best-practices-container-security/scan-container-images-simplified.png) For example, you can use a continuous integration and continuous deployment (CI/ ## Automatically build new images on base image update -> **Best practice guidance** +> **Best practice guidance** > > As you use base images for application images, use automation to build new images when the base image is updated. Since updated base images typically include security fixes, update any downstream application container images. -Each time a base image is updated, you should also update any downstream container images. Integrate this build process into validation and deployment pipelines such as [Azure Pipelines][azure-pipelines] or Jenkins. These pipelines make sure that your applications continue to run on the updated based images. Once your application container images are validated, the AKS deployments can then be updated to run the latest, secure images. +Each time a base image is updated, you should also update any downstream container images. Integrate this build process into validation and deployment pipelines such as [Azure Pipelines][azure-pipelines] or Jenkins. These pipelines ensure your applications continue to run on the updated based images. Once your application container images are validated, you can then update AKS deployments to run the latest secure images. Azure Container Registry Tasks can also automatically update container images when the base image is updated. With this feature, you build a few base images and keep them updated with bug and security fixes. For more information about base image updates, see [Automate image builds on bas ## Next steps -This article focused on how to secure your containers. To implement some of these areas, see the following articles: +This article focused on how to secure your containers. To implement some of these areas, see the following article: * [Automate image builds on base image update with Azure Container Registry Tasks][acr-base-image-update] |
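The entry above describes using Azure Container Registry Tasks to rebuild application images automatically when a base image is updated. The following is a minimal sketch of such a task; the registry name `myregistry`, the GitHub repository, and the personal access token placeholder are assumptions, and the trigger settings should be adjusted to your own pipeline.

```azurecli-interactive
# Sketch: an ACR task that rebuilds the application image when the tracked base image
# (or the source repository) is updated. Names and the token are assumptions.
az acr task create \
    --registry myregistry \
    --name rebuild-on-base-update \
    --image myapp:{{.Run.ID}} \
    --context https://github.com/<org>/<repo>.git#main \
    --file Dockerfile \
    --git-access-token <personal-access-token> \
    --base-image-trigger-enabled true
```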
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
aks | Workload Identity Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md | After a few minutes, the command completes and returns JSON-formatted informatio > [!NOTE] > When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups]. +## Update an existing AKS cluster ++You can update an AKS cluster using the [az aks update][az aks update] command with the `--enable-oidc-issuer` and the `--enable-workload-identity` parameter to use the OIDC Issuer and enable workload identity. The following example updates a cluster named *myAKSCluster*: ++```azurecli-interactive +az aks update -g "${RESOURCE_GROUP}" -n myAKSCluster --enable-oidc-issuer --enable-workload-identity +``` ++## Retrieve the OIDC Issuer URL + To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the arguments `-n`, which is the name of the cluster: ```bash In this article, you deployed a Kubernetes cluster and configured it to use a wo [aks-identity-concepts]: concepts-identity.md [az-account]: /cli/azure/account [az-aks-create]: /cli/azure/aks#az-aks-create+[az aks update]: /cli/azure/aks#az-aks-update [aks-two-resource-groups]: faq.md#why-are-two-resource-groups-created-with-aks [az-account-set]: /cli/azure/account#az-account-set [az-identity-create]: /cli/azure/identity#az-identity-create |
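The workload identity entry above enables the OIDC issuer and then retrieves the issuer URL for later federation steps. A minimal sketch of capturing that URL follows, assuming the cluster name `myAKSCluster` and the `RESOURCE_GROUP` variable used earlier; the environment variable name is chosen here for illustration.

```azurecli-interactive
# Capture the cluster's OIDC issuer URL in an environment variable (variable name is an
# assumption) so it can be reused when creating the federated identity credential.
export AKS_OIDC_ISSUER="$(az aks show --name myAKSCluster --resource-group "${RESOURCE_GROUP}" \
    --query "oidcIssuerProfile.issuerUrl" --output tsv)"

echo "${AKS_OIDC_ISSUER}"
```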
api-management | Api Management Howto Mutual Certificates For Clients | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md | Using key vault certificates is recommended because it helps improve API Managem ## Prerequisites * If you have not created an API Management service instance yet, see [Create an API Management service instance](get-started-create-service-instance.md).-* You need access to the certificate and the password for management in an Azure key vault or upload to the API Management service. The certificate must be in **PFX** format. Self-signed certificates are allowed. +* You need access to the certificate and the password for management in an Azure key vault or upload to the API Management service. The certificate must be in either CER or PFX format. Self-signed certificates are allowed. If you use a self-signed certificate, also install trusted root and intermediate [CA certificates](api-management-howto-ca-certificates.md) in your API Management instance. Using key vault certificates is recommended because it helps improve API Managem [!INCLUDE [api-management-client-certificate-key-vault](../../includes/api-management-client-certificate-key-vault.md)] + > [!NOTE] + > If you only wish to use the certificate to authenticate the client with API Management, you can upload a CER file. + ## Enable API Management instance to receive and verify client certificates ### Developer, Basic, Standard, or Premium tier |
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
api-management | Validate Azure Ad Token Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md | -The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Azure Active Directory service. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable. +The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Azure Active Directory service for a specified set of principals in the directory. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable. > [!NOTE] > To validate a JWT that was provided by another identity provider, API Management also provides the generic [`validate-jwt`](validate-jwt-policy.md) policy. |
api-management | Virtual Network Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md | Outbound access on port `53` is required for communication with DNS servers. If ### FQDN dependencies -To operate properly, each [self-hosted gateway](self-hosted-gateway-overview.md) needs outbound connectivity on port 443 to the following endpoints associated with its cloud-based API Management instance: +To operate properly, the API Management service needs outbound connectivity on port 443 to the following endpoints associated with its cloud-based API Management instance: | Description | Required | Notes | |:|:|:| |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | Title: App Service Environment overview description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 06/08/2023 Last updated : 06/27/2023 App Service Environment v3 is available in the following regions: | North Central US | ✅ | | ✅ | | North Europe | ✅ | ✅ | ✅ | | Norway East | ✅ | ✅ | ✅ | -| Norway West | ✅ | | ✅ | +| Norway West | ✅ | | ✅ | +| Poland Central | ✅ | | | | Qatar Central | ✅ | ✅ | | | South Africa North | ✅ | ✅ | ✅ | | South Africa West | ✅ | | ✅ | |
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
application-gateway | Configuration Frontend Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md | Only one public IP address and one private IP address is supported. You choose t A frontend IP address is associated to a *listener*, which checks for incoming requests on the frontend IP. >[!NOTE] -> You can create private and public listeners with the same port number (Preview feature). However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an inbound rule with **Destination IP addresses** as your application gateway subnet's IP prefix. +> You can create private and public listeners with the same port number (Preview feature). However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an allow-inbound rule with **Destination IP addresses** as your application gateway's Public and Private frontend IPs. When using the same port, your application gateway changes the "Destination" of the inbound flow to the frontend IPs of your gateway. > > **Inbound Rule**: > - Source: (as per your requirement)-> - Destination IP addresses: IP prefix of your application gateway subnet. -> - Destination Port: (as per listener configuration) +> - Destination: Public and Private frontend IPs of your application gateway. +> - Destination Port: (as per configured listeners) > - Protocol: TCP > -> **Outbound Rule**: (no specific requirement) +> **Outbound Rule**: +> - (no specific requirement) > [!IMPORTANT] > **The default domain name behavior for V1 SKU**: |
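The frontend IP entry above calls for an allow-inbound NSG rule whose destination is the gateway's public and private frontend IPs when both listener types share a port. A sketch of such a rule follows; the resource names, priority, frontend IP values, and port 443 are assumptions for illustration.

```azurecli-interactive
# Allow inbound TCP 443 to both frontend IPs of the application gateway.
# The NSG name, priority, and IP addresses are assumptions; set the source per your requirement.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myAppGwSubnetNsg \
    --name Allow-AppGw-Frontend-443 \
    --priority 300 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes '*' \
    --destination-address-prefixes 20.10.10.10 10.0.0.10 \
    --destination-port-ranges 443
```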
application-gateway | Mutual Authentication Certificate Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-certificate-management.md | -In order to configure mutual authentication with the client, or client authentication, Application Gateway requires a trusted client CA certificate chain to be uploaded to the gateway. If you have multiple certificate chains, you'll need to create the chains separately and upload them as different files on the Application Gateway. In this article, you'll learn how to export a trusted client CA certificate chain that you can use in your client authentication configuration on your gateway. +In order to configure mutual authentication with the client, or client authentication, Application Gateway requires a trusted client CA certificate chain to be uploaded to the gateway. If you have multiple certificate chains, you need to create the chains separately and upload them as different files on the Application Gateway. In this article, you learn how to export a trusted client CA certificate chain that you can use in your client authentication configuration on your gateway. ## Prerequisites An existing client certificate is required to generate the trusted client CA cer ## Export trusted client CA certificate -Trusted client CA certificate is required to allow client authentication on Application Gateway. In this example, we will use a TLS/SSL certificate for the client certificate, export its public key and then export the CA certificates from the public key to get the trusted client CA certificates. We'll then concatenate all the client CA certificates into one trusted client CA certificate chain. +Trusted client CA certificate is required to allow client authentication on Application Gateway. In this example, we use a TLS/SSL certificate for the client certificate, export its public key and then export the CA certificates from the public key to get the trusted client CA certificates. We then concatenate all the client CA certificates into one trusted client CA certificate chain. The following steps help you export the .pem or .cer file for your certificate: The following steps help you export the .pem or .cer file for your certificate: 1. To obtain a .cer file from the certificate, open **Manage user certificates**. Locate the certificate, typically in 'Certificates - Current User\Personal\Certificates', and right-click. Click **All Tasks**, and then click **Export**. This opens the **Certificate Export Wizard**. If you can't find the certificate under Current User\Personal\Certificates, you may have accidentally opened "Certificates - Local Computer", rather than "Certificates - Current User"). If you want to open Certificate Manager in current user scope using PowerShell, you type *certmgr* in the console window. - > [!div class="mx-imgBorder"] - > ![Screenshot shows the Certificate Manager with Certificates selected and a contextual menu with All tasks, then Export selected.](./media/certificates-for-backend-authentication/export.png) + :::image type="content" source="./media/certificates-for-backend-authentication/export.png" alt-text="Screenshot shows the Certificate Manager with Certificates selected and a contextual menu with All tasks, then Export selected."::: -2. In the Wizard, click **Next**. - > [!div class="mx-imgBorder"] - > ![Export certificate](./media/certificates-for-backend-authentication/exportwizard.png) +1. In the Wizard, click **Next**. -3. 
Select **No, do not export the private key**, and then click **Next**. - > [!div class="mx-imgBorder"] - > ![Do not export the private key](./media/certificates-for-backend-authentication/notprivatekey.png) + :::image type="content" source="./media/certificates-for-backend-authentication/exportwizard.png" alt-text="Screenshot of export certificate."::: -4. On the **Export File Format** page, select **Base-64 encoded X.509 (.CER).**, and then click **Next**. - > [!div class="mx-imgBorder"] - > ![Base-64 encoded](./media/certificates-for-backend-authentication/base64.png) +1. Select **No, do not export the private key**, and then click **Next**. -5. For **File to Export**, **Browse** to the location to which you want to export the certificate. For **File name**, name the certificate file. Then, click **Next**. + :::image type="content" source="./media/certificates-for-backend-authentication/notprivatekey.png" alt-text="Screenshot of do not export the private key."::: - > [!div class="mx-imgBorder"] - > ![Screenshot shows the Certificate Export Wizard where you specify a file to export.](./media/certificates-for-backend-authentication/browse.png) +1. On the **Export File Format** page, select **Base-64 encoded X.509 (.CER).**, and then click **Next**. -6. Click **Finish** to export the certificate. + :::image type="content" source="./media/certificates-for-backend-authentication/base64.png" alt-text="Screenshot of Base-64 encoded."::: - > [!div class="mx-imgBorder"] - > ![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish-screen.png) +1. For **File to Export**, **Browse** to the location to which you want to export the certificate. For **File name**, name the certificate file. Then, click **Next**. -7. Your certificate is successfully exported. + :::image type="content" source="./media/certificates-for-backend-authentication/browse.png" alt-text="Screenshot shows the Certificate Export Wizard where you specify a file to export."::: - > [!div class="mx-imgBorder"] - > ![Screenshot shows the Certificate Export Wizard with a success message.](./media/certificates-for-backend-authentication/success.png) +1. Click **Finish** to export the certificate. + + :::image type="content" source="./media/certificates-for-backend-authentication/finish-screen.png" alt-text="Screenshot shows the Certificate Export Wizard after you complete the file export."::: - The exported certificate looks similar to this: +1. Your certificate is successfully exported. - > [!div class="mx-imgBorder"] - > ![Screenshot shows a certificate symbol.](./media/certificates-for-backend-authentication/exported.png) + :::image type="content" source="./media/certificates-for-backend-authentication/success.png" alt-text="Screenshot shows the Certificate Export Wizard with a success message."::: ++ The exported certificate looks similar to this: + :::image type="content" source="./media/certificates-for-backend-authentication/exported.png" alt-text="Screenshot shows a certificate symbol."::: ### Export CA certificate(s) from the public certificate -Now that you've exported your public certificate, you will now export the CA certificate(s) from your public certificate. If you only have a root CA, you'll only need to export that certificate. However, if you have 1+ intermediate CAs, you'll need to export each of those as well. +Now that you've exported your public certificate, you'll now export the CA certificate(s) from your public certificate. 
If you only have a root CA, you'll only need to export that certificate. However, if you have 1+ intermediate CAs, you need to export each of those as well. 1. Once the public key has been exported, open the file. - > [!div class="mx-imgBorder"] - > ![Open authorization certificate](./media/certificates-for-backend-authentication/openAuthcert.png) + :::image type="content" source="./media/certificates-for-backend-authentication/openAuthcert.png" alt-text="Screenshot of Open authorization certificate."::: - > [!div class="mx-imgBorder"] - > ![about certificate](./media/mutual-authentication-certificate-management/general.png) + :::image type="content" source="./media/mutual-authentication-certificate-management/general.png" alt-text="Screenshot of about certificate."::: 1. Select the Certification Path tab to view the certification authority. - > [!div class="mx-imgBorder"] - > ![cert details](./media/mutual-authentication-certificate-management/cert-details.png) + :::image type="content" source="./media/mutual-authentication-certificate-management/cert-details.png" alt-text="Screenshot of certificate details."::: 1. Select the root certificate and click on **View Certificate**. - > [!div class="mx-imgBorder"] - > ![cert path](./media/mutual-authentication-certificate-management/root-cert.png) + :::image type="content" source="./media/mutual-authentication-certificate-management/root-cert.png" alt-text="Screenshot of certificate path."::: You should see the root certificate details. - > [!div class="mx-imgBorder"] - > ![cert info](./media/mutual-authentication-certificate-management/root-cert-details.png) + :::image type="content" source="./media/mutual-authentication-certificate-management/root-cert-details.png" alt-text="Screenshot of certificate info."::: 1. Select the **Details** tab and click **Copy to File...** - > [!div class="mx-imgBorder"] - > ![copy root cert](./media/mutual-authentication-certificate-management/root-cert-copy-to-file.png) + :::image type="content" source="./media/mutual-authentication-certificate-management/root-cert-copy-to-file.png" alt-text="Screenshot of copy root certificate."::: -1. At this point, you've extracted the details of the root CA certificate from the public certificate. You'll see the **Certificate Export Wizard**. Follow steps 2-7 from the previous section ([Export public certificate](./mutual-authentication-certificate-management.md#export-public-certificate)) to complete the Certificate Export Wizard. +1. At this point, you've extracted the details of the root CA certificate from the public certificate. You see the **Certificate Export Wizard**. Follow steps 2-7 from the previous section ([Export public certificate](./mutual-authentication-certificate-management.md#export-public-certificate)) to complete the Certificate Export Wizard. 1. Now repeat steps 2-6 from this current section ([Export CA certificate(s) from the public certificate](./mutual-authentication-certificate-management.md#export-ca-certificates-from-the-public-certificate)) for all intermediate CAs to export all intermediate CA certificates in the Base-64 encoded X.509(.CER) format. 
- > [!div class="mx-imgBorder"] - > ![intermediate cert](./media/mutual-authentication-certificate-management/intermediate-cert.png) + :::image type="content" source="./media/mutual-authentication-certificate-management/intermediate-cert.png" alt-text="Screenshot of intermediate certificate."::: For example, you would repeat steps 2-6 from this section on the *MSIT CAZ2* intermediate CA to extract it as its own certificate. Now that you've exported your public certificate, you will now export the CA cer Your resulting combined certificate should look something like the following: - > [!div class="mx-imgBorder"] - > ![combined cert](./media/mutual-authentication-certificate-management/combined-cert.png) + :::image type="content" source="./media/mutual-authentication-certificate-management/combined-cert.png" alt-text="Screenshot of combined certificate."::: ## Next steps |
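The certificate-management entry above ends by concatenating the exported root and intermediate CA certificates into a single trusted client CA chain. Where OpenSSL is available, a shell sketch of that step might look like the following; the file names are assumptions.

```bash
# Combine the Base-64 (PEM) encoded CA certificates exported earlier into one chain file.
cat intermediate-ca.cer root-ca.cer > trusted-client-ca-chain.cer

# Optionally confirm that a client certificate validates against the combined chain.
openssl verify -CAfile trusted-client-ca-chain.cer client-public.cer
```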
application-gateway | Tutorial Ingress Controller Add On Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md | az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id If you'd like to use Azure portal to enable AGIC add-on, go to [(https://aka.ms/azure/portal/aks/agic)](https://aka.ms/azure/portal/aks/agic) and navigate to your AKS cluster through the portal link. From there, go to the Networking tab within your AKS cluster. You'll see an application gateway ingress controller section, which allows you to enable/disable the ingress controller add-on using the Azure portal. Select the box next to **Enable ingress controller**, and then select the application gateway you created, **myApplicationGateway** from the dropdown menu. Select **Save**. +> [!CAUTION] +> When you use an application gateway in a different resource group, the managed identity created **_ingressapplicationgateway-{AKSNAME}_** once this add-on is enabled in the AKS nodes resource group must have Contributor role set in the Application Gateway resource as well as Reader role set in the Application Gateway resource group. + :::image type="content" source="./media/tutorial-ingress-controller-add-on-existing/portal-ingress-controller-add-on.png" alt-text="Screenshot showing how to enable application gateway ingress controller from the networking page of the Azure Kubernetes Service."::: ## Peer the two virtual networks together |
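The AGIC entry above cautions that, when the application gateway lives in a different resource group, the add-on's managed identity needs Contributor on the gateway and Reader on the gateway's resource group. A sketch of those role assignments with the Azure CLI follows; the node resource group, gateway, resource group, and subscription values are assumptions.

```azurecli-interactive
# Look up the principal ID of the AGIC add-on identity in the AKS node resource group
# (the node resource group name shown here is an assumption for illustration).
principalId=$(az identity show \
    --name ingressapplicationgateway-myCluster \
    --resource-group MC_myResourceGroup_myCluster_eastus \
    --query principalId --output tsv)

# Contributor on the application gateway itself.
appgwId=$(az network application-gateway show \
    --name myApplicationGateway \
    --resource-group myAppGwResourceGroup \
    --query id --output tsv)
az role assignment create --assignee "$principalId" --role Contributor --scope "$appgwId"

# Reader on the application gateway's resource group.
az role assignment create --assignee "$principalId" --role Reader \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myAppGwResourceGroup"
```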
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 # |
azure-arc | Quickstart Connect Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md | Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 03/13/2023 Last updated : 06/27/2023 ms.devlang: azurecli For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable ## Prerequisites -In addition to the prerequisites below, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md). +In addition to these prerequisites, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md). ### [Azure CLI](#tab/azure-cli) ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resource ## Connect an existing Kubernetes cluster -Run the following command to connect your cluster. This command deploys the Azure Arc agents to the cluster and installs Helm v. 3.6.3 to the .azure folder of the deployment machine. This Helm 3 installation is only used for Azure Arc, and it does not remove or change any previously installed versions of Helm on the machine. +Run the following command to connect your cluster. This command deploys the Azure Arc agents to the cluster and installs Helm v. 3.6.3 to the `.azure` folder of the deployment machine. This Helm 3 installation is only used for Azure Arc, and it doesn't remove or change any previously installed versions of Helm on the machine. In this example, the cluster's name is AzureArcTest1. If your cluster is behind an outbound proxy server, requests must be routed via ### [Azure CLI](#tab/azure-cli) -1. Set the environment variables needed for Azure CLI to use the outbound proxy server: +1. On the deployment machine, set the environment variables needed for Azure CLI to use the outbound proxy server: ```bash export HTTP_PROXY=<proxy-server-ip-address>:<port> If your cluster is behind an outbound proxy server, requests must be routed via export NO_PROXY=<cluster-apiserver-ip-address>:<port> ``` -2. Run the connect command with the `proxy-https` and `proxy-http` parameters specified. If your proxy server is set up with both HTTP and HTTPS, be sure to use `--proxy-http` for the HTTP proxy and `--proxy-https` for the HTTPS proxy. If your proxy server only uses HTTP, you can use that value for both parameters. +2. On the Kubernetes cluster, run the connect command with the `proxy-https` and `proxy-http` parameters specified. If your proxy server is set up with both HTTP and HTTPS, be sure to use `--proxy-http` for the HTTP proxy and `--proxy-https` for the HTTPS proxy. If your proxy server only uses HTTP, you can use that value for both parameters. ```azurecli az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file> If your cluster is behind an outbound proxy server, requests must be routed via ### [Azure PowerShell](#tab/azure-powershell) -1. Set the environment variables needed for Azure PowerShell to use the outbound proxy server: +1. 
On the deployment machine, set the environment variables needed for Azure PowerShell to use the outbound proxy server: ```powershell $Env:HTTP_PROXY = "<proxy-server-ip-address>:<port>" If your cluster is behind an outbound proxy server, requests must be routed via $Env:NO_PROXY = "<cluster-apiserver-ip-address>:<port>" ``` -2. Run the connect command with the proxy parameter specified: +2. On the Kubernetes cluster, run the connect command with the proxy parameter specified: ```azurepowershell New-AzConnectedKubernetes -ClusterName <cluster-name> -ResourceGroupName <resource-group> -Location eastus -Proxy 'https://<proxy-server-ip-address>:<port>' For outbound proxy servers where only a trusted certificate needs to be provided > [!NOTE] >-> * `--custom-ca-cert` is an alias for `--proxy-cert`. Either parameters can be used interchangeably. Passing both parameters in the same command will honour the one passed last. +> * `--custom-ca-cert` is an alias for `--proxy-cert`. Either parameters can be used interchangeably. Passing both parameters in the same command will honor the one passed last. ### [Azure CLI](#tab/azure-cli) az connectedk8s connect --name <cluster-name> --resource-group <resource-group> ### [Azure PowerShell](#tab/azure-powershell) -The ability to pass in the proxy certificate only without the proxy server endpoint details is not yet supported via PowerShell. +The ability to pass in the proxy certificate only without the proxy server endpoint details isn't currently supported via PowerShell. |
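After the connect command in the entry above completes, you can verify that the cluster and its agents came up correctly. A sketch follows, assuming the cluster name `AzureArcTest1` from the example; the resource group name is an assumption.

```azurecli-interactive
# Check the Arc-enabled cluster resource (resource group name is an assumption).
az connectedk8s show --name AzureArcTest1 --resource-group AzureArcTest --output table

# The Azure Arc agents run in the azure-arc namespace on the cluster.
kubectl get deployments,pods -n azure-arc
```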
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-cache-for-redis | Cache How To Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md | -This article provides a guide for importing and exporting data with Azure Cache for Redis and provides the answers to commonly asked questions. +This article provides a guide for importing and exporting data with Azure Cache for Redis and provides the answers to commonly asked questions. ## Scope of availability This article provides a guide for importing and exporting data with Azure Cache ## Compatibility - Data is exported as an RDB page blob in the _Premium_ tier. In the _Enterprise_ and _Enterprise Flash_ tiers, data is exported as a .gz block blob.-- Caches running Redis 4.0 support RDB version 8 and below. Caches running Redis 6.0 support RDB version 9 and below. +- Caches running Redis 4.0 support RDB version 8 and below. Caches running Redis 6.0 support RDB version 9 and below. - Exported backups from newer versions of Redis (for example, Redis 6.0) can't be imported into older versions of Redis (for example, Redis 4.0) - RDB files from _Premium_ tier caches can be imported into _Enterprise_ and _Enterprise Flash_ tier caches. This article provides a guide for importing and exporting data with Azure Cache Use import to bring Redis compatible RDB files from any Redis server running in any cloud or environment, including Redis running on Linux, Windows, or any cloud provider such as Amazon Web Services and others. Importing data is an easy way to create a cache with prepopulated data. During the import process, Azure Cache for Redis loads the RDB files from Azure storage into memory and then inserts the keys into the cache. > [!NOTE]-> Before beginning the import operation, ensure that your Redis Database (RDB) file or files are uploaded into page or block blobs in Azure storage, in the same region and subscription as your Azure Cache for Redis instance. For more information, see [Get started with Azure Blob storage](../storage/blobs/storage-quickstart-blobs-dotnet.md). If you exported your RDB file using the [Azure Cache for Redis Export](#export) feature, your RDB file is already stored in a page blob and is ready for importing. +> Before beginning the import operation, ensure that your Redis Database (RDB) file or files are uploaded into page or block blobs in Azure storage, in the same region and subscription as your Azure Cache for Redis instance. If you are using managed identity for authentication, the storage account can be in a different subscription. For more information, see [Get started with Azure Blob storage](../storage/blobs/storage-quickstart-blobs-dotnet.md). If you exported your RDB file using the [Azure Cache for Redis Export](#export) feature, your RDB file is already stored in a page blob and is ready for importing. > [!IMPORTANT] > Currently, importing from Redis Enterprise tier to Premium tier is not supported. 
Use import to bring Redis compatible RDB files from any Redis server running in :::image type="content" source="./media/cache-how-to-import-export-data/cache-import-blobs.png" alt-text="Screenshot showing the Import button to select to begin the import."::: You can monitor the progress of the import operation by following the notifications from the Azure portal, or by viewing the events in the [activity log](../azure-monitor/essentials/activity-log.md).- + > [!IMPORTANT] > Activity log support is not yet available in the Enterprise tiers.- > + > :::image type="content" source="./media/cache-how-to-import-export-data/cache-import-data-import-complete.png" alt-text="Screenshot showing the import progress in the notifications area."::: Export allows you to export the data stored in Azure Cache for Redis to Redis co :::image type="content" source="./media/cache-how-to-import-export-data/cache-export-data-choose-storage-container.png" alt-text="Screenshot showing Export data selected in the Resource menu"::: -2. Select **Choose Storage Container** and to display a list of available storage accounts. Select the storage account you want. The storage account must be in the same subscription and region as your cache. +2. Select **Choose Storage Container** and to display a list of available storage accounts. Select the storage account you want. The storage account must be in the same region as your cache. If you're using managed identity for authentication, the storage account can be in a different subscription. Otherwise, the storage account must be in the same subscription as your cache. > [!IMPORTANT] > The _import_ and _export_ features are available only in the _Premium_, _Enterpr ### Can I import data from any Redis server? -Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To do import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. +Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To do import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. -For example, you might want to: +For example, you might want to: 1. Export the data from your production cache. For example, you might want to: ### What RDB versions can I import? -For more information on supported RDB versions used with import, see the [compatibility section](#compatibility). +For more information on supported RDB versions used with import, see the [compatibility section](#compatibility). ### Is my cache available during an Import/Export operation? Some pricing tiers have different [databases limits](cache-configure.md#database The Azure Cache for Redis _persistence_ feature is primarily a data durability feature. Conversely, the _import/export_ functionality is designed as a method to make periodic data backups for point-in-time recovery. <!-- Kyle I rewrote this based on another convo. Also I want the primary answer to be in the first paragraph. 
-->-When _persistence_ is configured, your cache persists a snapshot of the data to disk, based on a configurable backup frequency. The data is written with a Redis-proprietary binary format. If a catastrophic event occurs that disables both the primary and the replica caches, the cache data is restored automatically using the most recent snapshot. +When _persistence_ is configured, your cache persists a snapshot of the data to disk, based on a configurable backup frequency. The data is written with a Redis-proprietary binary format. If a catastrophic event occurs that disables both the primary and the replica caches, the cache data is restored automatically using the most recent snapshot. -Data persistence is designed for disaster recovery. It isn't intended as a point-in-time recovery mechanism. +Data persistence is designed for disaster recovery. It isn't intended as a point-in-time recovery mechanism. -- On the Premium tier, the data persistence file is stored in Azure Storage, but the file can't be imported into a different cache. -- On the Enterprise tiers, the data persistence file is stored in a mounted disk that isn't user-accessible. +- On the Premium tier, the data persistence file is stored in Azure Storage, but the file can't be imported into a different cache. +- On the Enterprise tiers, the data persistence file is stored in a mounted disk that isn't user-accessible. If you want to make periodic data backups for point-in-time recovery, we recommend using the _import/export_ functionality. For more information, see [How to configure data persistence for Azure Cache for Redis](cache-how-to-premium-persistence.md). If you want to make periodic data backups for point-in-time recovery, we recomme Yes, see the following instructions for the _Premium_ tier: -- PowerShell instructions [to import Redis data](cache-how-to-manage-redis-cache-powershell.md#to-import-an-azure-cache-for-redis) and [to export Redis data](cache-how-to-manage-redis-cache-powershell.md#to-export-an-azure-cache-for-redis). +- PowerShell instructions [to import Redis data](cache-how-to-manage-redis-cache-powershell.md#to-import-an-azure-cache-for-redis) and [to export Redis data](cache-how-to-manage-redis-cache-powershell.md#to-export-an-azure-cache-for-redis). - Azure CLI instructions to [import Redis data](/cli/azure/redis#az-redis-import) and [export Redis data](/cli/azure/redis#az-redis-export) For the _Enterprise_ and _Enterprise Flash_ tiers: -- PowerShell instructions [to import Redis data](/powershell/module/az.redisenterprisecache/import-azredisenterprisecache) and [to export Redis data](/powershell/module/az.redisenterprisecache/export-azredisenterprisecache). +- PowerShell instructions [to import Redis data](/powershell/module/az.redisenterprisecache/import-azredisenterprisecache) and [to export Redis data](/powershell/module/az.redisenterprisecache/export-azredisenterprisecache). - Azure CLI instructions to [import Redis data](/cli/azure/redisenterprise/database#az-redisenterprise-database-import) and [export Redis data](/cli/azure/redisenterprise/database#az-redisenterprise-database-export) ### I received a timeout error during my Import/Export operation. What does it mean? More information here - [Managed identity for storage accounts - Azure Cache for ### Can I import or export data from a storage account in a different subscription than my cache? 
-In the _Premium_ tier, you can import and export data from a storage account in a different subscription than your cache, but you must use [managed identity](cache-managed-identity.md) as the authentication method. You will need to select the chosen subscription holding the storage account when configuring the import or export. +In the _Premium_ tier, you can import and export data from a storage account in a different subscription than your cache, but you must use [managed identity](cache-managed-identity.md) as the authentication method. You'll need to select the subscription that holds the storage account when you configure the import or export. ## Next steps |
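As a supplement to the PowerShell and Azure CLI instructions summarized in the row above, the following hedged sketch shows what a scripted export/import round trip can look like for a Premium-tier cache. The cache name, resource group, prefix, and SAS URIs are illustrative placeholders, not values from the article.

```console
# Export the cache contents as RDB blobs into a storage container (Premium tier).
az redis export --name myPremiumCache --resource-group myResourceGroup --prefix backup --container "<CONTAINER_SAS_URI>"

# Import previously exported RDB blobs into a Premium-tier cache.
az redis import --name myPremiumCache --resource-group myResourceGroup --files "<BLOB_SAS_URI>"
```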
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | The following example demonstrates the use of `HttpRequestData` and `HttpRespons This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core including [HttpRequest], [HttpResponse], and [IActionResult]. Use of this feature for local testing requires [Core Tools version 4.0.5198 or later](./functions-run-local.md). This model is not available to [apps targeting .NET Framework][supported-versions], which should instead leverage the [built-in model](#built-in-http-model). > [!NOTE]-> Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available. In the initial preview versions of the integration package, route info is missing from the `HttpRequest` and `HttpContext` objects, and accessing route parameters should be done through the `FunctionContext` object or via parameter injection. +> Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available. -1. Add a reference to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore) to your project. +1. Add a reference to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package, version 1.0.0-preview2 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore/1.0.0-preview2) to your project. - You must also update your project to use [version 1.10.0 or later of Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.10.0) and [version 1.14.1 or later of Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.14.1). + You must also update your project to use [version 1.11.0 or later of Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.11.0) and [version 1.16.0 or later of Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.16.0). -2. In your `Program.cs` file, update the host builder configuration to include the `UseAspNetCoreIntegration()` and `ConfigureAspNetCoreIntegration()` methods. The following example shows a minimal setup without other customizations: +2. In your `Program.cs` file, update the host builder configuration to use `ConfigureFunctionsWebApplication()` instead of `ConfigureFunctionsWorkerDefaults()`. The following example shows a minimal setup without other customizations: ```csharp using Microsoft.Extensions.Hosting; using Microsoft.Azure.Functions.Worker; var host = new HostBuilder()- .ConfigureFunctionsWorkerDefaults(workerApplication => - { - workerApplication.UseAspNetCoreIntegration(); - }) - .ConfigureAspNetCoreIntegration() + .ConfigureFunctionsWebApplication() .Build(); host.Run(); ``` - > [!NOTE] - > Initial preview versions of the integration package require both `UseAspNetCoreIntegration()` and `ConfigureAspNetCoreIntegration()` to be called, but these setup steps are not yet finalized. - 3. You can then update your HTTP-triggered functions to use the ASP.NET Core types. 
The following example shows `HttpRequest` and an `IActionResult` used for a simple "hello, world" function: ```csharp |
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | Requires that [`FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED`](#function ## ENABLE\_ORYX\_BUILD -Indicates whether the [Oryx build system](https://github.com/microsoft/Oryx) is used during deployment. `ENABLE_ORYX_BUILD` must be set to `true` when doing remote build deployments to Linux. For more information, see [Remote build on Linux](functions-deployment-technologies.md#remote-build-on-linux). +Indicates whether the [Oryx build system](https://github.com/microsoft/Oryx) is used during deployment. `ENABLE_ORYX_BUILD` must be set to `true` when doing remote build deployments to Linux. For more information, see [Remote build](functions-deployment-technologies.md#remote-build). |Key|Sample value| ||| The previous command requires you to upgrade to version 2.40 of the Azure CLI. #### Custom images -When you create and maintain your own custom linux container for your function app, the `linuxFxVersion` value is also in the format `DOCKER|<IMAGE_URI>`, as in the following example: +When you create and maintain your own custom linux container for your function app, the `linuxFxVersion` value is instead in the format `DOCKER|<IMAGE_URI>`, as in the following example: ``` linuxFxVersion = "DOCKER|contoso.com/azurefunctionsimage:v1.0.0" ```-For more information, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md). +This indicates the registry source of the deployed container. For more information, see [Working with containers and Azure Functions](functions-how-to-custom-container.md). [!INCLUDE [functions-linux-custom-container-note](../../includes/functions-linux-custom-container-note.md)] |
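To make the `linuxFxVersion` discussion above concrete, here's a hedged Azure CLI sketch for reading and setting the value on a Linux function app. The app name, resource group, and image URI are placeholders rather than values taken from the article.

```console
# Show the current linuxFxVersion value for a function app.
az functionapp config show --name <APP_NAME> --resource-group <RESOURCE_GROUP> --query linuxFxVersion

# Point the function app at a custom container image in your own registry.
az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --linux-fx-version "DOCKER|contoso.com/azurefunctionsimage:v1.0.0"
```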
azure-functions | Functions Bindings Http Webhook Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md | The following example shows an HTTP trigger that returns a "hello world" respons :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Http/HttpFunction.cs" id="docsnippet_http_trigger"::: -The following examples shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration-preview): +The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration-preview): ```csharp [Function("HttpFunction")] The key can be included in a query string variable named `code`, as above. It ca You can allow anonymous requests, which do not require keys. You can also require that the master key is used. You change the default authorization level by using the `authLevel` property in the binding JSON. For more information, see [Trigger - configuration](#configuration). > [!NOTE]-> When running functions locally, authorization is disabled regardless of the specified authorization level setting. After publishing to Azure, the `authLevel` setting in your trigger is enforced. Keys are still required when running [locally in a container](functions-create-function-linux-custom-image.md#build-the-container-image-and-test-locally). +> When running functions locally, authorization is disabled regardless of the specified authorization level setting. After publishing to Azure, the `authLevel` setting in your trigger is enforced. Keys are still required when running [locally in a container](functions-create-container-registry.md#build-the-container-image-and-verify-locally). #### Secure an HTTP endpoint in production |
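As a quick illustration of the key-based authorization described above, the following hedged example calls a deployed HTTP trigger with its access key, first in the `code` query string and then in the `x-functions-key` header. The host name, function name, and key are placeholders.

```console
# Pass the function key in the "code" query string parameter.
curl "https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?code=<FUNCTION_KEY>"

# Equivalent request with the key sent in the x-functions-key header instead.
curl -H "x-functions-key: <FUNCTION_KEY>" "https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>"
```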
azure-functions | Functions Core Tools Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md | func init <PROJECT_FOLDER> When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with this name. Otherwise, the current folder is used. -`func init` supports the following options, which are version 3.x/2.x-only, unless otherwise noted: +`func init` supports the following options, which don't support version 1.x unless otherwise noted: | Option | Description | | | -- | | **`--csx`** | Creates .NET functions as C# script, which is the version 1.x behavior. Valid only with `--worker-runtime dotnet`. |-| **`--docker`** | Creates a Dockerfile for a container using a base image that is based on the chosen `--worker-runtime`. Use this option when you plan to publish to a custom Linux container. | -| **`--docker-only`** | Adds a Dockerfile to an existing project. Prompts for the worker-runtime if not specified or set in local.settings.json. Use this option when you plan to publish an existing project to a custom Linux container. | +| **`--docker`** | Creates a Dockerfile for a container using a base image that is based on the chosen `--worker-runtime`. Use this option when you plan to deploy a containerized function app. | +| **`--docker-only`** | Adds a Dockerfile to an existing project. Prompts for the worker-runtime if not specified or set in local.settings.json. Use this option when you plan to deploy a containerized function app and the project already exists. | | **`--force`** | Initialize the project even when there are existing files in the project. This setting overwrites existing files with the same name. Other files in the project folder aren't affected. | | **`--language`** | Initializes a language-specific project. Currently supported when `--worker-runtime` set to `node`. Options are `typescript` and `javascript`. You can also use `--worker-runtime javascript` or `--worker-runtime typescript`. | | **`--managed-dependencies`** | Installs managed dependencies. Currently, only the PowerShell worker runtime supports this functionality. | When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with | > [!NOTE]-> When you use either `--docker` or `--dockerfile` options, Core Tools automatically create the Dockerfile for C#, JavaScript, Python, and PowerShell functions. For Java functions, you must manually create the Dockerfile. Use the Azure Functions [image list](https://github.com/Azure/azure-functions-docker) to find the correct base image for your container that runs Azure Functions. +> When you use either `--docker` or `--dockerfile` options, Core Tools automatically create the Dockerfile for C#, JavaScript, Python, and PowerShell functions. For Java functions, you must manually create the Dockerfile. For more information, see [Creating containerized function apps](functions-how-to-custom-container.md#creating-containerized-function-apps). ## func logs The following publish options apply, based on version: | **`--build`**, **`-b`** | Performs build action when deploying to a Linux function app. Accepts: `remote` and `local`. | | **`--build-native-deps`** | Skips generating the `.wheels` folder when publishing Python function apps. | | **`--csx`** | Publish a C# script (.csx) project. |-| **`--force`** | Ignore pre-publishing verification in certain scenarios. | +| **`--force`** | Ignore prepublishing verification in certain scenarios. 
| | **`--dotnet-cli-params`** | When publishing compiled C# (.csproj) functions, the core tools calls `dotnet build --output bin/publish`. Any parameters passed to this will be appended to the command line. | |**`--list-ignored-files`** | Displays a list of files that are ignored during publishing, which is based on the `.funcignore` file. | | **`--list-included-files`** | Displays a list of files that are published, which is based on the `.funcignore` file. | func azure storage fetch-connection-string <STORAGE_ACCOUNT_NAME> ## func deploy -The `func deploy` command is deprecated. Please instead use [`func kubernetes deploy`](#func-kubernetes-deploy). +The `func deploy` command is deprecated. Instead use [`func kubernetes deploy`](#func-kubernetes-deploy). ## func durable delete-task-hub |
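To ground the publish options listed above, here's a hedged sketch of two common `func azure functionapp publish` invocations; the app and slot names are placeholders, and the behavior of each flag is described in the option tables in the row above.

```console
# Publish the current project to an existing function app, forcing a remote build.
func azure functionapp publish <APP_NAME> --build remote

# Publish to a named deployment slot instead of the production slot.
func azure functionapp publish <APP_NAME> --slot staging
```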
azure-functions | Functions Create Container Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-container-registry.md | + + Title: Create Azure Functions in a local Linux container +description: Get started with Azure Functions by creating a containerized function app on your local computer and publishing the image to a container registry. Last updated : 06/23/2023++zone_pivot_groups: programming-languages-set-functions +++# Create a function app in a local Linux container ++This article shows you how to use Azure Functions Core Tools to create your first function in a Linux container on your local computer, verify the function locally, and then publish the containerized function to a container registry. From a container registry, you can easily deploy your containerized functions to Azure. ++For a complete example of deploying containerized functions to Azure, which includes the steps in this article, see one of the following articles: +++ [Create your first containerized Azure Functions on Azure Container Apps](functions-deploy-container-apps.md)++ [Create your first containerized Azure Functions](functions-deploy-container.md)++ [Create your first containerized Azure Functions on Azure Arc (preview)](create-first-function-arc-custom-container.md)++You can also create a function app in the Azure portal by using an existing containerized function app from a container registry. For more information, see [Azure portal create using containers](functions-how-to-custom-container.md#azure-portal-create-using-containers). +++## Next steps ++> [!div class="nextstepaction"] +> [Working with containers and Azure Functions](./functions-how-to-custom-container.md) |
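The article summarized above walks through creating a containerized function app locally and pushing the image to a registry. The condensed sketch below shows the general shape of that workflow; the worker runtime, Docker ID, image name, and port mapping are placeholder assumptions rather than steps copied from the article.

```console
# Create a Functions project with a generated Dockerfile, then build the image locally.
func init LocalFunctionsProject --worker-runtime node --docker
cd LocalFunctionsProject
docker build --tag <DOCKER_ID>/azurefunctionsimage:v1.0.0 .

# Verify the container locally, then push it to your registry.
docker run -p 8080:80 -it <DOCKER_ID>/azurefunctionsimage:v1.0.0
docker push <DOCKER_ID>/azurefunctionsimage:v1.0.0
```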
azure-functions | Functions Custom Handlers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-custom-handlers.md | To implement a custom handler, you need the following aspects to your applicatio - A *function.json* file for each function (inside a folder that matches the function name) - A command, script, or executable, which runs a web server -The following diagram shows how these files look on the file system for a function named "MyQueueFunction" and an custom handler executable named *handler.exe*. +The following diagram shows how these files look on the file system for a function named "MyQueueFunction" and a custom handler executable named *handler.exe*. ```bash | /MyQueueFunction The route for the order function here is `/api/hello`, same as the original requ ## Deploying -A custom handler can be deployed to every Azure Functions hosting option. If your handler requires operating system or platform dependencies (such as a language runtime), you may need to use a [custom container](./functions-create-function-linux-custom-image.md). +A custom handler can be deployed to every Azure Functions hosting option. If your handler requires operating system or platform dependencies (such as a language runtime), you may need to use a [custom container](./functions-how-to-custom-container.md). -When creating a function app in Azure for custom handlers, we recommend you select .NET Core as the stack. A "Custom" stack for custom handlers will be added in the future. +When creating a function app in Azure for custom handlers, we recommend you select .NET Core as the stack. To deploy a custom handler app using Azure Functions Core Tools, run the following command. You can also use this strategy in your CI/CD pipelines to run automated tests on ### Execution environment -Custom handlers run in the same environment as a typical Azure Functions app. Test your handler to ensure the environment contains all the dependencies it needs to run. For apps that require additional dependencies, you may need to run them using a [custom container image](functions-create-function-linux-custom-image.md) hosted on Azure Functions [Premium plan](functions-premium-plan.md). +Custom handlers run in the same environment as a typical Azure Functions app. Test your handler to ensure the environment contains all the dependencies it needs to run. For apps that require additional dependencies, you may need to run them using a [custom container image](./functions-how-to-custom-container.md) hosted on Azure Functions [Premium plan](functions-premium-plan.md). ### Get support |
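The Core Tools deployment command referenced above is truncated in this digest. As general context only, deploying a project to an existing function app with Core Tools typically looks like the following hedged sketch; the app name is a placeholder, and the exact command and options should be confirmed against the full article.

```console
# Publish the custom handler project to an existing function app in Azure.
func azure functionapp publish <FUNCTION_APP_NAME>
```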
azure-functions | Functions Deploy Container Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deploy-container-apps.md | Title: Create your first containerized Azure Functions on Azure Container Apps description: Get started with Azure Functions on Azure Container Apps by deploying your first function app from a Linux image in a container registry. Previously updated : 05/07/2023 Last updated : 05/25/2023 zone_pivot_groups: programming-languages-set-functions zone_pivot_groups: programming-languages-set-functions # Create your first containerized functions on Azure Container Apps -In this article, you create a function app running in a Linux container and deploy it to an [Azure Container Apps](../container-apps/overview.md) environment from a container registry. +In this article, you create a function app running in a Linux container and deploy it to an Azure Container Apps environment from a container registry. By deploying to Container Apps, you are able to integrate your function apps into cloud-native microservices. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md). ++This article shows you how to use Functions tools to create your first function running in a Linux container, verify the functions locally, and then deploy the container to a Container Apps environment. Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account, which you can minimize by [cleaning-up resources](#clean-up-resources) when you're done. |
azure-functions | Functions Deployment Technologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md | When you deploy using an external package URL and the contents of the package ch ### Remote build -Azure Functions can automatically perform builds on the code it receives after zip deployments. These builds behave slightly differently depending on whether your app is running on Windows or Linux. Remote builds are not performed when an app has previously been set to run in [Run From Package](run-functions-from-deployment-package.md) mode. To learn how to use remote build, navigate to [zip deploy](#zip-deploy). +Azure Functions can automatically perform builds on the code it receives after zip deployments. These builds behave slightly differently depending on whether your app is running on Windows or Linux. -> [!NOTE] -> If you're having issues with remote build, it might be because your app was created before the feature was made available (August 1, 2019). Try creating a new function app, or running `az functionapp update -g <RESOURCE_GROUP_NAME> -n <APP_NAME>` to update your function app. This command might take two tries to succeed. +# [Windows](#tab/windows) -#### Remote build on Windows --All function apps running on Windows have a small management app, the SCM (or [Kudu](https://github.com/projectkudu/kudu)) site. This site handles much of the deployment and build logic for Azure Functions. +All function apps running on Windows have a small management app, the SCM site provided by [Kudu](https://github.com/projectkudu/kudu). This site handles much of the deployment and build logic for Azure Functions. When an app is deployed to Windows, language-specific commands, like `dotnet restore` (C#) or `npm install` (JavaScript) are run. -#### Remote build on Linux +# [Linux](#tab/linux) -To enable remote build on Linux, the following [application settings](functions-how-to-use-azure-function-app-settings.md#settings) must be set: +To enable remote build on Linux, you must set the following in your application settings: -+ `ENABLE_ORYX_BUILD=true` -+ `SCM_DO_BUILD_DURING_DEPLOYMENT=true` ++ [`ENABLE_ORYX_BUILD=true`](functions-app-settings.md#enable_oryx_build)++ [`SCM_DO_BUILD_DURING_DEPLOYMENT=true`](functions-app-settings.md#scm_do_build_during_deployment) By default, both [Azure Functions Core Tools](functions-run-local.md) and the [Azure Functions Extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) perform remote builds when deploying to Linux. Because of this, both tools automatically create these settings for you in Azure. When apps are built remotely on Linux, they [run from the deployment package](run-functions-from-deployment-package.md). -##### Consumption plan --Linux function apps running in the Consumption plan don't have an SCM/Kudu site, which limits the deployment options. However, function apps on Linux running in the Consumption plan do support remote builds. + -##### Dedicated and Premium plans +The following considerations apply when using remote builds during deployment: -Function apps running on Linux in the [Dedicated (App Service) plan](dedicated-plan.md) and the [Premium plan](functions-premium-plan.md) also have a limited SCM/Kudu site. ++ Remote builds are supported for function apps running on Linux in the Consumption plan, however they don't have an SCM/Kudu site, which limits deployment options. 
++ Function apps running on Linux in a [Premium plan](functions-premium-plan.md) or in a [Dedicated (App Service) plan](dedicated-plan.md) do have an SCM/Kudu site, but it's limited compared to Windows.++ Remote builds aren't performed when an app has previously been set to run in [run-from-package](run-functions-from-deployment-package.md) mode. To learn how to use remote build in these cases, see [Zip deploy](#zip-deploy).++ You may have issues with remote build when your app was created before the feature was made available (August 1, 2019). For older apps, either create a new function app or run `az functionapp update --resource-group <RESOURCE_GROUP_NAME> --name <APP_NAME>` to update your function app. This command might take two tries to succeed. ### App content storage Use zip deploy to push a .zip file that contains your function app to Azure. Opt You can deploy a function app running in a Linux container. ->__How to use it:__ Create your functions in a Linux container then deploy the container to a Premium or Dedicated plan in Azure Functions or another container host. Use the [Azure Functions Core Tools](functions-run-local.md#) to create a Dockerfile for your project that you use to build a containerized function app. You can use the container in the following deployments: +>__How to use it:__ [Create your functions in a Linux container](functions-create-container-registry.md) then deploy the container to a Premium or Dedicated plan in Azure Functions or another container host. Use the [Azure Functions Core Tools](functions-run-local.md#) to create a customized Dockerfile for your project that you use to build a containerized function app. You can use the container in the following deployments: >->+ Deploy to Azure Functions resources you create in the Azure portal. For **Publish**, select **Docker Image**, and then configure the container. Enter the location where the image is hosted. Requires either [Premium plan](functions-premium-plan.md) or [Dedicated (App Service) plan](dedicated-plan.md) hosting. +>+ Deploy to Azure Functions resources you create in the Azure portal. For more information, see [Azure portal create using containers](functions-how-to-custom-container.md#azure-portal-create-using-containers). >+ Deploy to Azure Functions resources you create from the command line. Requires either a Premium or Dedicated (App Service) plan. To learn how, see [Create your first containerized Azure Functions](functions-deploy-container.md). >+ Deploy to Azure Container Apps (preview). To learn how, see [Create your first containerized Azure Functions on Azure Container Apps](functions-deploy-container-apps.md). >+ Deploy to Azure Arc (preview). To learn how, see [Create your first containerized Azure Functions on Azure Arc (preview)](create-first-function-arc-custom-container.md). >+ Deploy to a Kubernetes cluster. You can deploy to a cluster using [Azure Functions Core Tools](functions-run-local.md). Use the [`func kubernetes deploy`](functions-core-tools-reference.md#func-kubernetes-deploy) command. -> >__When to use it:__ Use the Docker container option when you need more control over the Linux environment where your function app runs and where the container is hosted. This deployment mechanism is available only for functions running on Linux. |
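The two application settings required for remote build on Linux, listed in the row above, can be applied with a single Azure CLI command. This is a hedged illustration with placeholder names; it isn't part of the documented change.

```console
# Enable remote (Oryx) builds for a Linux function app during zip deployment.
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings ENABLE_ORYX_BUILD=true SCM_DO_BUILD_DURING_DEPLOYMENT=true
```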
azure-functions | Functions How To Azure Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md | You'll need to create a separate release pipeline to deploy to Azure Functions. ## Deploy a container -You can automatically deploy your code to Azure Functions as a custom container after every successful build. To learn more about containers, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md). +You can automatically deploy your code to Azure Functions as a custom container after every successful build. To learn more about containers, see [Working with containers and Azure Functions](./functions-how-to-custom-container.md). + ### Deploy with the Azure Function App for Container task # [YAML](#tab/yaml/) |
azure-functions | Functions How To Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-custom-container.md | Title: Working with Azure Functions in containers description: Learn how to work with function apps running in Linux containers. Previously updated : 05/09/2023 Last updated : 06/14/2023 zone_pivot_groups: functions-container-hosting To learn more about deployments to Azure Container Apps, see [Azure Container Ap ## Creating containerized function apps -Functions maintains a set of [lanuage-specific base images](https://mcr.microsoft.com/catalog?search=functions) that you can use to generate your containerized function apps. When you create a Functions project using [Azure Functions Core Tools](./functions-run-local.md) and include the [`--docker` option](./functions-core-tools-reference.md#func-init), Core Tools also generates a .Dockerfile that is used to create your container from the correct base image. +Functions makes it easy to deploy and run your function apps as Linux containers, which you create and maintain. Functions maintains a set of [language-specific base images](https://mcr.microsoft.com/catalog?search=functions) that you can use when creating containerized function apps. +++For a complete example of how to create the local containerized function app from the command line and publish the image to a container registry, see [Create a function app in a local container](functions-create-container-registry.md). ++### Generate the Dockerfile ++Functions tooling provides a Docker option that generates a Dockerfile with your functions code project. You can use this file with Docker to create your functions in a container that derives from the correct base image (language and version). ++The way you create a Dockerfile depends on how you create your project. ++# [Command line](#tab/core-tools) +++ When you create a Functions project using [Azure Functions Core Tools](./functions-run-local.md), include the `--docker` option when you run the [`func init`](./functions-core-tools-reference.md#func-init) command, as in the following example:++ ```console + func init --docker + ``` ++ You can also add a Dockerfile to an existing project by using the `--docker-only` option when you run the [`func init`](./functions-core-tools-reference.md#func-init) command in an existing project folder, as in the following example:++ ```console + func init --docker-only + ``` ++For a complete example, see [Create a function app in a local container](functions-create-container-registry.md#create-and-test-the-local-functions-project). ++# [Visual Studio Code](#tab/vs-code) ++The Azure Functions extension for Visual Studio Code doesn't provide a way to create a Dockerfile when you create the project. However, you can instead create the Dockerfile for an existing project by using the `--docker-only` option when you run the [`func init`](./functions-core-tools-reference.md#func-init) command in the Terminal window of an existing project folder, as in the following example: ++```console +func init --docker-only +``` ++# [Visual Studio](#tab/vs) +++ When you create a Functions project, make sure to check the **Enable Docker** option on the **Additional Information** page of the new project dialog. 
+++ You can always add a Dockerfile to an existing project by using the `--docker-only` option when you run the [`func init`](./functions-core-tools-reference.md#func-init) command in the Terminal windows of an existing project folder, as in the following example:++ ```console + func init --docker-only + ``` ++++### Creating your function app in a container ++With a Core Tools-generated Dockerfile in your code project, you can use Docker to create the containerized function app on your local computer. The following `docker build` command creates an image of your containerized functions from the project in the local directory: ++```console +docker build --tag <DOCKER_ID>/<IMAGE_NAME>:v1.0.0 . +``` ++For an example of how to create the container, see [Build the container image and verify locally](functions-create-container-registry.md#build-the-container-image-and-verify-locally). ## Update an image in the registry -When you make changes to your functions code project, you need to rebuild the container locally and republish the updated image to your chosen container registry. The following command rebuilds the image from the root folder with an updated version number and pushed to your registry: +When you make changes to your functions code project or need to update to the latest base image, you need to rebuild the container locally and republish the updated image to your chosen container registry. The following command rebuilds the image from the root folder with an updated version number and pushes it to your registry: # [Azure Container Registry](#tab/acr) In this example, `<IMAGE_NAME>` is the full name of the new image with version. :::zone pivot="azure-functions" You should also consider [enabling continuous deployment](#enable-continuous-deployment-to-azure). ::: zone-end+## Azure portal create using containers ++When you create a function app in the [Azure portal](https://portal.azure.com), you can also create a deployment of the function app from an existing container image. The following steps create and deploy a function app from an [existing container image](#creating-your-function-app-in-a-container). ++1. From the Azure portal menu or the **Home** page, select **Create a resource**. ++1. In the **New** page, select **Compute** > **Function App**. ++1. On the **Basics** page, use the function app settings as specified in the following table: ++ | Setting | Suggested value | Description | + | | - | -- | + | **Subscription** | Your subscription | The subscription in which you create your function app. | + | **[Resource Group](../azure-resource-manager/management/overview.md)** | *myResourceGroup* | Name for the new resource group in which you create your function app. You should create a resource group because there are [known limitations when creating new function apps in an existing resource group](functions-scale.md#limitations-for-creating-new-function-apps-in-an-existing-resource-group).| + | **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. | + | **Do you want to deploy code or container image?**| Container image | Deploy a containerized function app from a registry. To create a function app in registry, see [Create a function app in a local container](functions-create-container-registry.md). | + |**Region**| Preferred region | Select a [region](https://azure.microsoft.com/regions/) that's near you or near other services that your functions can access. | +4. 
In **[Hosting options and plans](functions-scale.md)**, choose **Functions Premium**. ++ :::image type="content" source="media/functions-how-to-custom-container/function-app-create-container-functions-premium.png" alt-text="Screenshot of the Basics tab in the Azure portal when creating a function app for hosting a container in a Functions Premium plan."::: + + This creates a function app hosted by Azure Functions in the [Premium plan](functions-premium-plan.md), which supports dynamic scaling. You can also choose to run in an **App Service plan**, but in this kind of dedicated plan you must manage the [scaling of your function app](functions-scale.md). +4. In **[Hosting options and plans](functions-scale.md)**, choose **Azure Container Apps Environment plan**. ++ :::image type="content" source="media/functions-how-to-custom-container/function-app-create-container-apps-hosting.png" alt-text="Portal create Basics tab for a containerized function app hosted in Azure Container Apps."::: ++ This creates a new **Azure Container Apps Environment** resource to host your function app container. By default, the environment is created in a Consumption plan without zone redundancy, to minimize costs. You can also choose an existing Container Apps environment. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md). +5. Accept the default options of creating a new storage account on the **Storage** tab and a new Application Insight instance on the **Monitoring** tab. You can also choose to use an existing storage account or Application Insights instance. ++6. Select the **Deployment** tab and unselect **Use quickstart image**. If you don't do this, the function app is deployed from the base image for your function app language. ++7. Choose your **Image type**, public or private. Choose **Private** if you're using Azure Container Registry or some other private registry. Supply the **Image** name, including the registry prefix. If you're using a private registry, provide the image registry authentication credentials. + +8. Select **Review + create** to review the app configuration selections. ++9. On the **Review + create** page, review your settings, and then select **Create** to provision the function app and deploy your container image from the registry. + ## Work with images in Azure Functions When your function app container is deployed from a registry, Functions maintains information about the source image. Use the following commands to get data about the image or change the deployment image used: |
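The rebuild-and-push command referenced in the "Update an image in the registry" section earlier in this row is truncated in this digest. As a hedged sketch of the Azure Container Registry variant (the registry name, image name, and version tag are placeholders):

```console
# Rebuild the image with a new version tag and push it to Azure Container Registry.
az acr login --name <REGISTRY_NAME>
docker build --tag <REGISTRY_NAME>.azurecr.io/azurefunctionsimage:v1.0.1 .
docker push <REGISTRY_NAME>.azurecr.io/azurefunctionsimage:v1.0.1
```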
azure-functions | Functions Infrastructure As Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md | resource functionApp 'Microsoft.Web/sites@2022-03-01' = { ### Custom Container Image -If you're [deploying a custom container image](./functions-create-function-linux-custom-image.md), you must specify it with `linuxFxVersion` and include configuration that allows your image to be pulled, as in [Web App for Containers](../app-service/index.yml). Also, set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `false`, since your app content is provided in the container itself: +If you're [deploying a custom container image](./functions-how-to-custom-container.md), you must specify it with `linuxFxVersion` and include configuration that allows your image to be pulled, as in [Web App for Containers](../app-service/index.yml). Also, set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `false`, since your app content is provided in the container itself: # [Bicep](#tab/bicep) |
azure-functions | Functions Kubernetes Keda | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-kubernetes-keda.md | The Azure Functions runtime provides flexibility in hosting where and how you wa The Azure Functions service is made up of two key components: a runtime and a scale controller. The Functions runtime runs and executes your code. The runtime includes logic on how to trigger, log, and manage function executions. The Azure Functions runtime can run *anywhere*. The other component is a scale controller. The scale controller monitors the rate of events that are targeting your function, and proactively scales the number of instances running your app. To learn more, see [Azure Functions scale and hosting](functions-scale.md). -Kubernetes-based Functions provides the Functions runtime in a [Docker container](functions-create-function-linux-custom-image.md) with event-driven scaling through KEDA. KEDA can scale in to 0 instances (when no events are occurring) and out to *n* instances. It does this by exposing custom metrics for the Kubernetes autoscaler (Horizontal Pod Autoscaler). Using Functions containers with KEDA makes it possible to replicate serverless function capabilities in any Kubernetes cluster. These functions can also be deployed using [Azure Kubernetes Services (AKS) virtual nodes](../aks/virtual-nodes-cli.md) feature for serverless infrastructure. +Kubernetes-based Functions provides the Functions runtime in a [Docker container](functions-create-container-registry.md) with event-driven scaling through KEDA. KEDA can scale in to 0 instances (when no events are occurring) and out to *n* instances. It does this by exposing custom metrics for the Kubernetes autoscaler (Horizontal Pod Autoscaler). Using Functions containers with KEDA makes it possible to replicate serverless function capabilities in any Kubernetes cluster. These functions can also be deployed using [Azure Kubernetes Services (AKS) virtual nodes](../aks/virtual-nodes-cli.md) feature for serverless infrastructure. ## Managing KEDA and functions in Kubernetes To run Functions on your Kubernetes cluster, you must install the KEDA component + Azure Functions Core Tools: using the [`func kubernetes install` command](functions-core-tools-reference.md#func-kubernetes-install). -+ Helm: there are various ways to install KEDA in any Kubernetes cluster, including Helm. Deployment options are documented on the [KEDA site](https://keda.sh/docs/deploy/). ++ Helm: there are various ways to install KEDA in any Kubernetes cluster, including Helm. Deployment options are documented on the [KEDA site](https://keda.sh/docs/deploy/). ## Deploying a function app to Kubernetes You can use Azure Functions that expose HTTP triggers, but KEDA doesn't directly ## Next Steps For more information, see the following resources: -* [Create a function using a custom image](functions-create-function-linux-custom-image.md) +* [Working with containers and Azure Functions](./functions-how-to-custom-container.md) * [Code and test Azure Functions locally](functions-develop-local.md) * [How the Azure Function Consumption plan works](functions-scale.md) |
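To ground the KEDA workflow mentioned above, here's a hedged sketch of installing the KEDA components into the current cluster context and deploying a containerized functions project with Core Tools; the namespace, deployment name, and registry are placeholders.

```console
# Install the KEDA components into the Kubernetes cluster in the current kubectl context.
func kubernetes install --namespace keda

# Build, push, and deploy the containerized function app with event-driven scaling.
func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <DOCKER_ID>
```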
azure-functions | Functions Recover Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md | Errors can occur when the container image being referenced is unavailable or fai You need to correct any errors that prevent the container from starting for the function app run correctly. -When the container image can't be found, you'll see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli#manual-version-updates-on-linux) to change the container image being referenced. If you've deployed a [custom container image](functions-create-function-linux-custom-image.md), you need to fix the image and redeploy the updated version to the referenced registry. +When the container image can't be found, you'll see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli#manual-version-updates-on-linux) to change the container image being referenced. If you've deployed a [custom container image](./functions-how-to-custom-container.md), you need to fix the image and redeploy the updated version to the referenced registry. ### App container has conflicting ports |
azure-functions | Functions Reference Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md | The main project folder, *<project_root>*, can contain the following files: * *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md). * *.vscode/*: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings). * *.venv/*: (Optional) Contains a Python virtual environment used by local development.-* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md). +* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](./functions-how-to-custom-container.md). * *tests/*: (Optional) Contains the test cases of your function app. * *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor setting, *.venv/* to ignore the local Python virtual environment, *tests/* to ignore test cases, and *local.settings.json* to prevent local app settings from being published. The main project folder, *<project_root>*, can contain the following files: * *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md). * *local.settings.json*: Used to store app settings and connection strings when it's running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). * *requirements.txt*: Contains the list of Python packages the system installs when it publishes to Azure.-* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md). +* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](./functions-how-to-custom-container.md). ::: zone-end When you deploy your project to a function app in Azure, the entire contents of the main project folder, *<project_root>*, should be included in the package, but not the folder itself, which means that *host.json* should be in the package root. We recommend that you maintain your tests in a folder along with other functions (in this example, *tests/*). For more information, see [Unit testing](#unit-testing). |
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | Title: Work with Azure Functions Core Tools description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you run them on Azure Functions. ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 06/23/2023 Last updated : 06/26/2023 zone_pivot_groups: programming-languages-set-functions The following considerations apply to project initialization: + When you don't provide a project name, the current folder is initialized. -+ If you plan to deploy your project as a function app in a Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md). ++ If you plan to deploy your project as a function app running in a Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function app in a local container](functions-create-container-registry.md#create-and-test-the-local-functions-project). If you forget to do this, you can always generate the Dockerfile for the project later by using the `func init --docker-only` command. ::: zone pivot="programming-language-csharp" + Core Tools lets you create function app projects for the .NET runtime as either [in-process](functions-dotnet-class-library.md) or [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure. mvn clean package mvn azure-functions:run ``` ::: zone-end ``` func start ``` ::: zone-end +The way you start the host depends on your runtime version: +# [v4.x](#tab/v2) +``` +func start +``` +# [v1.x](#tab/v1) +``` +func host start +``` + ::: zone pivot="programming-language-typescript" ``` npm install npm start ::: zone-end ::: zone pivot="programming-language-python" This command must be [run in a virtual environment](./create-first-function-cli-python.md).->[!NOTE] -> Version 1.x of the Functions runtime instead requires `func host start`. To learn more, see [Azure Functions Core Tools reference](functions-core-tools-reference.md?tabs=v1#func-start). When the Functions host starts, it outputs the URL of HTTP-triggered functions, like in the following example: Job host started Http Function MyHttpTrigger: http://localhost:7071/api/MyHttpTrigger </pre> ->[!IMPORTANT] ->By default, when running locally authorization isn't enforced for HTTP endpoints. This means that all local HTTP requests are handled as `authLevel = "anonymous"`. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). You can use the `--enableAuth` option to require authorization when running locally. For more information, see [`func start`](./functions-core-tools-reference.md?tabs=v2#func-start) +### Considerations when running locally ++Keep in mind the following considerations when running your functions locally: +++ By default, authorization isn't enforced locally for HTTP endpoints. This means that all local HTTP requests are handled as `authLevel = "anonymous"`. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). 
You can use the `--enableAuth` option to require authorization when running locally. For more information, see [`func start`](./functions-core-tools-reference.md?tabs=v2#func-start)+++ While there is local storage emulation available, it's often best to validate your triggers and bindings against live services in Azure. You can maintain the connections to these services in the local.settings.json project file. For more information, see [Local settings file](functions-develop-local.md#local-settings-file). Make sure to keep test and production data separate when testing against live Azure services. +++ You can trigger non-HTTP functions locally without connecting to a live service. For more information, see [Non-HTTP triggered functions](#non-http-triggered-functions).+++ When you include your Application Insights connection information in the local.settings.json file, local log data is written to the specific Application Insights instance. To keep local telemetry data separate from production data, consider using a separate Application Insights instance for development and testing. ### Passing test data to a function The following considerations apply to this kind of deployment: + Your project is deployed such that it [runs from the deployment package](run-functions-from-deployment-package.md). To disable this recommended deployment mode, use the [`--nozip` option][func azure functionapp publish]. ++ To publish to a specific named slot in your function app, use the [`--slot` option](functions-core-tools-reference.md#func-azure-functionapp-publish). + + Java uses Maven to publish your local project to Azure. Instead, use the following command to publish to Azure: `mvn azure-functions:deploy`. Azure resources are created during initial deployment. + You get an error when you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription. ### Kubernetes cluster -Functions also lets you define your Functions project to run in a Docker container. Use the [`--docker` option][func init] of `func init` to generate a Dockerfile for your specific language. This file is then used when creating a container to deploy. To learn how to publish a custom container to Azure without Kubernetes, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md). +Functions also lets you define your Functions project to run in a Docker container. Use the [`--docker` option][func init] of `func init` to generate a Dockerfile for your specific language. This file is then used when creating a container to deploy. For more information, see [Working with containers and Azure Functions](functions-how-to-custom-container.md). Core Tools can be used to deploy your project as a custom container image to a Kubernetes cluster. |
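As a hedged companion to the local-run considerations above, the following sketch starts the host and then triggers a non-HTTP function through the local admin endpoint; the function name and test payload are placeholders.

```console
# Start the Functions host locally (v4.x Core Tools).
func start

# In a second terminal, trigger a non-HTTP function (for example, a queue trigger)
# by posting test data to the local admin endpoint.
curl --request POST -H "Content-Type: application/json" --data '{"input":"sample queue data"}' http://localhost:7071/admin/functions/<FUNCTION_NAME>
```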
azure-linux | Quickstart Azure Resource Manager Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-resource-manager-template.md | For more information about creating SSH keys, see [Create and manage SSH keys fo ## Review the template -The following deployment uses an ARM template from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.kubernetes/aks-mariner). +The following deployment uses an ARM template from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.kubernetes/aks-azure-linux). ```json { az group delete --name testAzureLinuxCluster --yes --no-wait In this quickstart, you deployed an Azure Linux Container Host cluster. To learn more about the Azure Linux Container Host, and walk through a complete cluster deployment and management example, continue to the Azure Linux Container Host tutorial. > [!div class="nextstepaction"]-> [Azure Linux Container Host tutorial](./tutorial-azure-linux-create-cluster.md) +> [Azure Linux Container Host tutorial](./tutorial-azure-linux-create-cluster.md) |
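For context on the quickstart above, deploying the referenced ARM template from the command line generally follows the pattern below. The resource group name matches the one used in the clean-up step, but the location, template URI, and parameter values are illustrative placeholders.

```console
# Create a resource group and deploy the Azure Linux quickstart ARM template into it.
az group create --name testAzureLinuxCluster --location eastus
az deployment group create --resource-group testAzureLinuxCluster --template-uri "<QUICKSTART_TEMPLATE_URI>" --parameters "<PARAMETER_FILE_OR_KEY_VALUE_PAIRS>"
```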
azure-maps | Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md | The following example shows how to update a dataset, create a new tileset, and d > [!div class="nextstepaction"] > [Tutorial: Creating a Creator indoor map] -> [!div class="nextstepaction"] -> [Create custom styles for indoor maps] - <!-- Internal Links -> [Convert a drawing package]: #convert-a-drawing-package [Custom styling service]: #custom-styling-preview |
azure-maps | Migrate From Bing Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md | The following table provides a high-level list of Bing Maps features and the rel | Location Recognition | ✓ | | Locations (forward/reverse geocoding) | ✓ | | Optimized Itinerary Routes | Planned |-| Snap to roads | ✓ | +| Snap to roads | <sup>1</sup> | | Spatial Data Services (SDS) | Partial | | Time Zone | ✓ | | Traffic Incidents | ✓ | | Configuration driven maps | N/A | +<sup>1</sup> While there is no direct replacement for the Bing Maps *Snap to road* service, this functionality can be implemented using the Azure Maps [Route - Get Route Directions] REST API. For a complete code sample demonstrating the snap to road functionality, see the [Basic snap to road logic] sample that demonstrates how to snap individual points to the rendered roads on the map. Also see the [Snap points to logical route path] sample that shows how to snap points to the road network to form a logical path. + Bing Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and highly secure, Azure Active Directory authentication. ## Licensing considerations Learn the details of how to migrate your Bing Maps application with these articl [Microsoft learning center shows]: https://aka.ms/AzureMapsVideos [Azure Maps Blog]: https://aka.ms/AzureMapsTechBlog [Azure Maps Feedback (UserVoice)]: https://aka.ms/AzureMapsFeedback+[Basic snap to road logic]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=basic-snap-to-road-logic +[Snap points to logical route path]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=snap-points-to-logical-route-path +[Route - Get Route Directions]: https://learn.microsoft.com/rest/api/maps/route/get-route-directions |
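The footnote above points to the Route - Get Route Directions REST API as the building block for snap-to-road behavior. A hedged request sketch follows; the coordinates and subscription key are placeholders, and the snapping logic itself lives in the linked samples.

```console
# Request route directions between two points; the returned geometry follows the road
# network and can be used to snap nearby points, as shown in the linked samples.
curl "https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.6062,-122.3321:47.6205,-122.3493&subscription-key=<SUBSCRIPTION_KEY>"
```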
azure-maps | Spatial Io Connect Wfs Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md | The following features are supported by the `WfsClient` class: The `atlas.io.ogc.WfsClient` class in the spatial IO module makes it easy to query a WFS service and convert the responses into GeoJSON objects. This GeoJSON object can then be used for other mapping purposes. -The [Simple WFS example] sample shows how to easily query a Web Feature Service (WFS) and renders the returned features on the map. +The [Simple WFS example] sample shows how to easily query a Web Feature Service (WFS) and renders the returned features on the map. For the source code for this sample, see [Simple WFS example source code]. :::image type="content" source="./media/spatial-io-connect-wfs-service/simple-wfs-example.png"alt-text="A screenshot that shows the results of a WFS overlay on a map."::: The specification for the WFS standard makes use of OGC filters. The filters bel - `PropertyIsNil` - `PropertyIsBetween` -The [WFS filter example] sample demonstrates the use of different filters with the WFS client. +The [WFS filter example] sample demonstrates the use of different filters with the WFS client. For the source code for this sample, see [WFS filter example source code]. :::image type="content" source="./media/spatial-io-connect-wfs-service/wfs-filter-example.png"alt-text="A screenshot that shows The WFS filter sample that demonstrates the use of different filters with the WFS client."::: The [WFS filter example] sample demonstrates the use of different filters with t ## WFS service explorer -The [WFS service explorer] sample is a simple tool for exploring WFS services on Azure Maps. +The [WFS service explorer] sample is a simple tool for exploring WFS services on Azure Maps. For the source code for this sample, see [WFS service explorer source code]. :::image type="content" source="./media/spatial-io-connect-wfs-service/wfs-service-explorer.png"alt-text="A screenshot that shows a simple tool for exploring WFS services on Azure Maps."::: See the following articles for more code samples to add to your maps: [Simple WFS example]: https://samples.azuremaps.com/spatial-io-module/simple-wfs-example [WFS filter example]: https://samples.azuremaps.com/spatial-io-module/wfs-filter-examples [WFS service explorer]: https://samples.azuremaps.com/spatial-io-module/wfs-service-explorer++[Simple WFS example source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Simple%20WFS%20example/Simple%20WFS%20example.html +[WFS filter example source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/WFS%20filter%20examples/WFS%20filter%20examples.html +[WFS service explorer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/WFS%20service%20explorer/WFS%20service%20explorer.html |
azure-maps | Spatial Io Read Write Spatial Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md | The result from the read function is a `SpatialDataSet` object. This object exte ## Examples of reading spatial data -The [Load spatial data] sample shows how to read a spatial data set, and render it on the map using the `SimpleDataLayer` class. The code uses a GPX file pointed to by a URL. For the source code of this sample, see [Load spatial data source]. +The [Load spatial data] sample shows how to read a spatial data set, and render it on the map using the `SimpleDataLayer` class. The code uses a GPX file pointed to by a URL. For the source code of this sample, see [Load spatial data source code]. :::image type="content" source="./media/spatial-io-read-write-spatial-data/load-spatial-data.png"alt-text="A screenshot that shows the snap grid on map."::: The [Load spatial data] sample shows how to read a spatial data set, and render The next code demo shows how to read and load KML, or KMZ, to the map. KML can contain ground overlays, which will be in the form of an `ImageLyaer` or `OgcMapLayer`. These overlays must be added on the map separately from the features. Additionally, if the data set has custom icons, those icons need to be loaded to the maps resources before the features are loaded. -The [Load KML onto map] sample shows how to load KML or KMZ files onto the map. For the source code of this sample, see [Load KML onto map source]. +The [Load KML onto map] sample shows how to load KML or KMZ files onto the map. For the source code of this sample, see [Load KML onto map source code]. :::image type="content" source="./media/spatial-io-read-write-spatial-data/load-kml-onto-map.png"alt-text="A screenshot that shows a map with a KML ground overlay."::: function InitMap() There are two main write functions in the spatial IO module. The `atlas.io.write` function generates a string, while the `atlas.io.writeCompressed` function generates a compressed zip file. The compressed zip file would contain a text-based file with the spatial data in it. Both of these functions return a promise to add the data to the file. And, they both can write any of the following data: `SpatialDataSet`, `DataSource`, `ImageLayer`, `OgcMapLayer`, feature collection, feature, geometry, or an array of any combination of these data types. When writing using either functions, you can specify the wanted file format. If the file format isn't specified, then the data will be written as KML. -The [Spatial data write options] sample is a tool that demonstrates the majority of the write options that can be used with the `atlas.io.write` function. For the source code of this sample, see [Spatial data write options source]. +The [Spatial data write options] sample is a tool that demonstrates the majority of the write options that can be used with the `atlas.io.write` function. For the source code of this sample, see [Spatial data write options source code]. 
:::image type="content" source="./media/spatial-io-read-write-spatial-data/spatial-data-write-options.png"alt-text="A screenshot that shows The Spatial data write options sample that demonstrates most of the write options used with the atlas.io.write function."::: The [Spatial data write options] sample is a tool that demonstrates the majority ## Example of writing spatial data -The [Drag and drop spatial files onto map] sample allows you to drag and drop one or more KML, KMZ, GeoRSS, GPX, GML, GeoJSON or CSV files onto the map. For the source code of this sample, see [Drag and drop spatial files onto map source]. +The [Drag and drop spatial files onto map] sample allows you to drag and drop one or more KML, KMZ, GeoRSS, GPX, GML, GeoJSON or CSV files onto the map. For the source code of this sample, see [Drag and drop spatial files onto map source code]. :::image type="content" source="./media/spatial-io-read-write-spatial-data/drag-and-drop-spatial-files-onto-map.png" alt-text="A screenshot that shows a map with a panel to the left of the map that enables you to drag and drop one or more KML, KMZ, GeoRSS, GPX, GML, GeoJSON or CSV files onto the map."::: Well-known text can be read using the `atlas.io.ogc.WKT.read` function, and writ ## Examples of reading and writing Well-Known Text (WKT) -The [Read Well Known Text] sample shows how to read the well-known text string `POINT(-122.34009 47.60995)` and render it on the map using a bubble layer. For the source code of this sample, see [Read Well Known Text source]. +The [Read Well Known Text] sample shows how to read the well-known text string `POINT(-122.34009 47.60995)` and render it on the map using a bubble layer. For the source code of this sample, see [Read Well Known Text source code]. :::image type="content" source="./media/spatial-io-read-write-spatial-data/read-well-known-text.png" alt-text="A screenshot that shows how to read Well Known Text (WKT) as GeoJSON and render it on a map using a bubble layer."::: The [Read Well Known Text] sample shows how to read the well-known text string ` <iframe height='500' scrolling='no' title='Read Well-Known Text' src='//codepen.io/azuremaps/embed/XWbabLd/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/XWbabLd/'>Read Well-Known Text</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe> --> -The [Read and write Well Known Text] sample demonstrates how to read and write Well Known Text (WKT) strings as GeoJSON. For the source code of this sample, see [Read and write Well Known Text source]. +The [Read and write Well Known Text] sample demonstrates how to read and write Well Known Text (WKT) strings as GeoJSON. For the source code of this sample, see [Read and write Well Known Text source code]. 
:::image type="content" source="./media/spatial-io-read-write-spatial-data/read-and-write-well-known-text.png" alt-text="A screenshot showing the sample that demonstrates how to read and write Well Known Text (WKT) strings as GeoJSON."::: See the following articles for more code samples to add to your maps: [Add an OGC map layer](spatial-io-add-ogc-map-layer.md) [Load spatial data]: https://samples.azuremaps.com/spatial-io-module/load-spatial-data-(simple)-[Load spatial data source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20spatial%20data%20(simple)/Load%20spatial%20data%20(simple).html +[Load spatial data source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20spatial%20data%20(simple)/Load%20spatial%20data%20(simple).html [Load KML onto map]: https://samples.azuremaps.com/spatial-io-module/load-kml-onto-map-[Load KML onto map source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20KML%20onto%20map/Load%20KML%20onto%20map.html +[Load KML onto map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Load%20KML%20onto%20map/Load%20KML%20onto%20map.html [Spatial data write options]: https://samples.azuremaps.com/spatial-io-module/spatial-data-write-options-[Spatial data write options source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Spatial%20data%20write%20options/Spatial%20data%20write%20options.html +[Spatial data write options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Spatial%20data%20write%20options/Spatial%20data%20write%20options.html [Drag and drop spatial files onto map]: https://samples.azuremaps.com/spatial-io-module/drag-and-drop-spatial-files-onto-map-[Drag and drop spatial files onto map source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Drag%20and%20drop%20spatial%20files%20onto%20map/Drag%20and%20drop%20spatial%20files%20onto%20map.html +[Drag and drop spatial files onto map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Drag%20and%20drop%20spatial%20files%20onto%20map/Drag%20and%20drop%20spatial%20files%20onto%20map.html [Read Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-well-known-text-[Read Well Known Text source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20Well%20Known%20Text/Read%20Well%20Known%20Text.html +[Read Well Known Text source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20Well%20Known%20Text/Read%20Well%20Known%20Text.html [Read and write Well Known Text]: https://samples.azuremaps.com/spatial-io-module/read-and-write-well-known-text-[Read and write Well Known Text source]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20and%20write%20Well%20Known%20Text/Read%20and%20write%20Well%20Known%20Text.html ++++[Read and write Well Known Text source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Read%20and%20write%20Well%20Known%20Text/Read%20and%20write%20Well%20Known%20Text.html |
azure-monitor | Azure Monitor Agent Troubleshoot Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md | Follow the steps below to troubleshoot the latest version of the Azure Monitor a ## Issues collecting Syslog-For more information on how to troubleshoot syslog issues with Azure Monitor Agent see [here](azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md). +For more information on how to troubleshoot syslog issues with Azure Monitor Agent, see [here](azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md). - The quality of service (QoS) file `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos` provides CSV-format 15-minute aggregations of the processed events and contains the information on the amount of the processed syslog events in the given timeframe. **This file is useful in tracking Syslog event ingestion drops**. For more information on how to troubleshoot syslog issues with Azure Monitor Age 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'. 3. Validate the layout of the Syslog collection workflow to ensure all necessary pieces are in place and accessible: 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user).- 1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, as messages from these inputs will not be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` will not be forward. + 1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, as messages from these inputs won't be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` won't be forwarded. 2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user). 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively. 4. Check for a corresponding drop in count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If such a drop isn't indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'. For more information on how to troubleshoot syslog issues with Azure Monitor Age > Ensure to remove trace flag setting **-T 0x2002** after the debugging session, since it generates many trace statements that could fill up the disk more quickly or make visually parsing the log file difficult. 6. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA fails to collect syslog events' and **Problem type** as 'I need help with Azure Monitor Linux Agent'. 
+## Troubleshooting issues on Arc-enabled servers +If, after checking the basic troubleshooting steps, you don't see the Azure Monitor Agent emitting logs, or you find **'Failed to get MSI token from IMDS endpoint'** errors in the `mdsd.err` log file, it's likely that the `syslog` user isn't a member of the `himds` group. Add the `syslog` user to the `himds` group if it isn't already a member. If necessary, create the `syslog` user and the `syslog` group, and make sure that the user is in that group. For more information, see the Azure Arc-enabled server authentication requirements [here](../../azure-arc/servers/managed-identity-authentication.md). [!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)] |
azure-monitor | Data Collection Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-firewall.md | - Title: Collect Firewall logs with Azure Monitor Agent -description: Configure collection of Windows Firewall logs on virtual machines with Azure Monitor Agent. - Previously updated : 6/1/2023-------# Collect Firewall logs with Azure Monitor Agent (Private Preview [signup here](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR5HgP8BLvCpLhecdvdpZy8VUQ0VCRVg2STY0UkYyOU9RNkU3Qk80VkFOMS4u)) -Windows Firewall is a Microsoft Windows application that filters information coming to your system from the Internet and blocks potentially harmful programs. It is also known as Microsoft Defender Firewall in Windows 10 version 2004 and later. You can turn it on or off by following these steps: -- Select Start, then open Settings-- Under Update & Security, select Windows Security, Firewall & network protection.-- Select a network profile: domain, private, or public.-- Under Microsoft Defender Firewall, switch the setting to On or Off.--## Prerequisites -To complete this procedure, you need: -- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.-- A Virtual Machine, Virtual Machine Scale Set, or Arc-enabled on-premises machine that is running firewall. --## Create a data collection rule to collect firewall logs -The [data collection rule](../essentials/data-collection-rule-overview.md) defines: -- Which source log files Azure Monitor Agent scans for new events.-- How Azure Monitor transforms events during ingestion.-- The destination Log Analytics workspace and table to which Azure Monitor sends the data.--You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Analytics workspace. --> [!NOTE] -> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md). --To create the data collection rule in the Azure portal: -1. On the **Monitor** menu, select **Data Collection Rules**. -1. Select **Create** to create a new data collection rule and associations. - [ ![Screenshot that shows the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox) -1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**: - - **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant. - - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types. - -**Data Collection End Point** select a previously created data [collection end point](../essentials/data-collection-endpoint-overview.md). 
- [ ![Screenshot that shows the Basics tab of the Data Collection Rule screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox) -1. On the **Resources** tab: - 1. Select **+ Add resources** and associate resources with the data collection rule. Resources can be Virtual Machines, Virtual Machine Scale Sets, and Azure Arc for servers. The Azure portal installs Azure Monitor Agent on resources that don't already have it installed. -- > [!IMPORTANT] - > The portal enables system-assigned managed identity on the target resources, along with existing user-assigned identities, if there are any. For existing applications, unless you specify the user-assigned identity in the request, the machine defaults to using system-assigned identity instead. - - If you need network isolation using private links, select existing endpoints from the same region for the respective resources or [create a new endpoint](../essentials/data-collection-endpoint-overview.md). - 1. Select **Enable Data Collection Endpoints**. - 1. Select a data collection endpoint for each of the resources associate to the data collection rule. -- [ ![Screenshot that shows the Resources tab of the Data Collection Rule screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox) --1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination. -1. Select **Firewall Logs**. -- [ ![Screenshot that shows the Azure portal form to select firewall logs in a data collection rule.](media/data-collection-rule-azure-monitor-agent/firewall-data-collection-rule.png)](media/data-collection-rule-azure-monitor-agent/firewall-data-collection-rule.png#lightbox) --1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming. -- [ ![Screenshot that shows the Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox) --1. Select **Review + create** to review the details of the data collection rule and association with the set of virtual machines. -1. Select **Create** to create the data collection rule. --> [!NOTE] -> It can take up to 5 minutes for data to be sent to the destinations after you create the data collection rule. ---### Sample log queries --- **Count the firewall log entries by URL for the host www.contoso.com.**- - ```kusto - WindowsFirewall - | where csHost=="www.contoso.com" - | summarize count() by csUriStem - ``` --## Troubleshoot -Use the following steps to troubleshoot the collection of firewall logs. --### Run Azure Monitor Agent troubleshooter -To test your configuration and share logs with Microsoft [use the Azure Monitor Agent Troubleshooter](use-azure-monitor-agent-troubleshooter.md) --### Check if any firewall logs have been received -Start by checking if any records have been collected for your firewall logs by running the following query in Log Analytics. 
If the query doesn't return records, check the other sections for possible causes. This query looks for entries in the last two days, but you can modify for another time range. --``` kusto -WindowsFirewall -| where TimeGenerated > ago(48h) -| order by TimeGenerated desc -``` --### Verify that firewall logs are being created -Look at the timestamps of the log files and open the latest to see that latest timestamps are present in the log files. The default location for firewall log files is C:\windows\system32\logfiles\firewall\pfirewall.log --## Next steps -Learn more about: -- [Azure Monitor Agent](azure-monitor-agent-overview.md).-- [Data collection rules](../essentials/data-collection-rule-overview.md).-- [Data collection endpoints](../essentials/data-collection-endpoint-overview.md) |
azure-monitor | Data Collection Text Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md | To complete this procedure, you need: - Do delineate the end of a record with an end of line. - Do use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported. - Do create a new log file every day so that you can remove old files easily. - - Do clean up all log files older than 2 days in the monitored directory. Azure Monitor Agent does not delete old log files and tracking them uses up Agent resources. - - Do Not overwrite an existing file with new data. You should only append new data to the file. - - Do Not rename a file and open a new file with the same name to log to. - - Do Not rename or copy large log files in to the monitored directory. If you must, do not exceed 50MB per minute - - Do Not rename files in the monitored directory to a new name that is also in the monitored directory. This can cause incorrect ingestion behavior. + - Do clean up all log files in the monitored directory. Tracking many log files can drive up agent CPU and memory usage. Wait for at least 2 days to allow ample time for all logs to be processed. + - Do Not overwrite an existing file with new records. You should only append new records to the end of the file. Overwriting will cause data loss. + - Do Not rename a file to a new name and then open a new file with the same name. This could cause data loss. + - Do Not rename or copy large log files that match the file scan pattern into the monitored directory. If you must, do not exceed 50 MB per minute. + - Do Not rename a file that matches the file scan pattern to a new name that also matches the file scan pattern. This will cause duplicate data to be ingested. ## Create a custom table The column names used here are for example only. The column names for your log w Use the following steps to troubleshoot collection of text logs. ## Troubleshooting Tool-Use the [Asure monitor troubleshooter tool](use-azure-monitor-agent-troubleshooter.md) to look for common issues and share results with Microsoft. +Use the [Azure monitor troubleshooter tool](use-azure-monitor-agent-troubleshooter.md) to look for common issues and share results with Microsoft. ### Check if any custom logs have been received Start by checking if any records have been collected for your custom log table by running the following query in Log Analytics. If records aren't returned, check the other sections for possible causes. This query looks for entries in the last two days, but you can modify it for another time range. It can take 5-7 minutes for new data from your tables to be uploaded. Only new data will be uploaded; any log file last written to prior to the DCR being created won't be uploaded. A hedged sketch of running this check programmatically follows this row. |
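The troubleshooting step above, checking whether any records have reached the custom table, can also be run programmatically. The following TypeScript sketch uses the `@azure/monitor-query` package for that check; the workspace ID and table name are placeholders, and the exact option shapes should be verified against the package documentation.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

// Placeholders: your Log Analytics workspace ID and custom table name.
const workspaceId = "<workspace-guid>";
const tableName = "MyTextLogs_CL";

async function checkForRecentRecords(): Promise<void> {
  const client = new LogsQueryClient(new DefaultAzureCredential());

  // Same idea as the portal check: look for any records from the last two days.
  const result = await client.queryWorkspace(
    workspaceId,
    `${tableName} | where TimeGenerated > ago(2d) | take 10`,
    { duration: "P2D" } // ISO 8601 duration covering the last two days
  );

  if (result.status === LogsQueryResultStatus.Success) {
    const rowCount = result.tables[0]?.rows.length ?? 0;
    console.log(rowCount > 0
      ? "Custom log records are arriving."
      : "No records yet - check the DCR, the file scan pattern, and the agent status.");
  }
}

checkForRecentRecords().catch(console.error);
```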
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | To send custom metrics using micrometer: </dependency> ``` -1. Use the Micrometer [global registry](https://micrometer.io/docs/concepts#_global_registry) to create a meter: +1. Use the Micrometer [global registry](https://micrometer.io/?/docs/concepts#_global_registry) to create a meter: ```java static final Counter counter = Metrics.counter("test.counter"); |
azure-monitor | Prometheus Rule Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md | The rule group contains the following properties. | `location` | True | string | Resource location from regions supported in the preview | | `properties.description` | False | string | Rule group description | | `properties.scopes` | True | string[] | Target Azure Monitor workspace. Only one scope currently supported |-| `properties.ebabled` | False | boolean | Enable/disable group. Default is true. | +| `properties.enabled` | False | boolean | Enable/disable group. Default is true. | | `properties.clusterName` | False | string | Apply rule to data from a specific cluster. Default is apply to all data in workspace. | | `properties.interval` | False | string | Group evaluation interval. Default = PT1M | |
azure-monitor | Code Optimizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations.md | Before you can use Code Optimizations on your application: - Verify your application: - Is .NET. - Uses [Application Insights](../app/app-insights-overview.md).+ - Is collecting profiles. ## Application Insights Profiler vs. Code Optimizations |
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | Azure Monitor Logs offers two log data plans that let you reduce log ingestion a This article describes Azure Monitor's log data plans and explains how to configure the log data plan of the tables in your Log Analytics workspace. ++## Permissions ++To set a table's log data plan, you must have at least [contributor rights](../logs/manage-access.md#azure-rbac). + ## Compare the Basic and Analytics log data plans The following table summarizes the Basic and Analytics log data plans. Configure a table for Basic logs if: ## Set a table's log data plan -You can switch a table's plan once a week. +When you change a table's plan from Analytics to Basic, Log Analytics immediately archives any data that's older than eight days, up to the original data retention of the table. In other words, the total retention period of the table remains unchanged, unless you explicitly [modify the archive period](../logs/data-retention-archive.md). +When you change a table's plan from Basic to Analytics, the changes take effect on existing data in the table immediately. ++> [!NOTE] +> You can switch a table's plan once a week. # [Portal](#tab/portal-1) To configure a table for Basic logs or Analytics logs in the Azure portal: |
azure-monitor | Data Retention Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md | Retention policies define when to remove or archive data in a [Log Analytics wor This article describes how to configure data retention and archiving. +## Permissions ++To configure data retention and archiving, you must have at least [contributor rights](../logs/manage-access.md#azure-rbac). + ## How retention and archiving work Each workspace has a default retention policy that's applied to all tables. You can set a different retention policy on individual tables. During the interactive retention period, data is available for monitoring, troub Archived data stays in the same table, alongside the data that's available for interactive queries. When you set a total retention period that's longer than the interactive retention period, Log Analytics automatically archives the relevant data immediately at the end of the retention period. -If you change the archive settings on a table with existing data, the relevant data in the table is also affected immediately. For example, you might have an existing table with 30 days of interactive retention and no archive period. You decide to change the retention policy to eight days of interactive retention and one year total retention. Log Analytics immediately archives any data that's older than eight days. - You can access archived data by [running a search job](search-jobs.md) or [restoring archived logs](restore.md). > [!NOTE] > The archive period can only be set at the table level, not at the workspace level. +### Adjustments to retention and archive settings + When you shorten an existing retention policy, Azure Monitor waits 30 days before removing the data, so you can revert the change and prevent data loss in the event of an error in configuration. You can [purge data](#purge-retained-data) immediately when required. +When you increase the retention policy, the new retention period applies to all data that's already been ingested into the table and hasn't yet been purged or removed. ++If you change the archive settings on a table with existing data, the relevant data in the table is also affected immediately. For example, you might have an existing table with 180 days of interactive retention and no archive period. You decide to change the retention policy to 90 days of interactive retention without changing the total retention period of 180 days. Log Analytics immediately archives any data that's older than 90 days and none of the data is deleted. + ## Configure the default workspace retention policy You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can set a different policy for specific tables by [configuring the retention and archive policy at the table level](#set-retention-and-archive-policy-by-table). If you're on the *free* tier, you'll need to upgrade to the paid tier to change the data retention period. To set the default workspace retention policy: By default, all tables in your workspace inherit the workspace's interactive retention setting and have no archive policy. You can modify the retention and archive policies of individual tables, except for workspaces in the legacy Free Trial pricing tier. -You can keep data in interactive retention between 4 and 730 days. You can set the archive period for a total retention time of up to 2,556 days (seven years). 
+The Analytics log data plan includes 30 days of interactive retention. You can increase the interactive retention period to up to 730 days at an [additional cost](https://azure.microsoft.com/pricing/details/monitor/). If needed, you can reduce the interactive retention period to as low as four days using the API or CLI. However, because 30 days are included in the ingestion price, lowering the retention period below 30 days doesn't reduce costs. You can set the archive period to a total retention time of up to 2,556 days (seven years). A hedged sketch of setting these values through the Tables API follows this row. # [Portal](#tab/portal-1) |
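As a concrete illustration of the example above (90 days of interactive retention inside a 180-day total retention period), the following TypeScript sketch patches a table through the Log Analytics Tables ARM API. The subscription, resource group, workspace, and table names are placeholders, and the `api-version` is an assumption to check against the Tables - Update REST reference.

```typescript
import { DefaultAzureCredential } from "@azure/identity";

// Placeholders: fill in your own subscription, resource group, workspace, and table.
const subscriptionId = "<subscription-id>";
const resourceGroup = "<resource-group>";
const workspaceName = "<workspace-name>";
const tableName = "AzureDiagnostics";

async function setTableRetention(): Promise<void> {
  const credential = new DefaultAzureCredential();
  const token = await credential.getToken("https://management.azure.com/.default");

  const url =
    `https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroup}` +
    `/providers/Microsoft.OperationalInsights/workspaces/${workspaceName}/tables/${tableName}` +
    `?api-version=2022-10-01`; // assumed api-version; confirm the current one

  // 90 days interactive retention, 180 days total; the difference is the archive period.
  const response = await fetch(url, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${token.token}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      properties: { retentionInDays: 90, totalRetentionInDays: 180 }
    })
  });

  if (!response.ok) {
    throw new Error(`Table update failed: ${response.status} ${await response.text()}`);
  }
}

setTableRetention().catch(console.error);
```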
azure-monitor | Logs Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md | Log Analytics workspace data export continuously exports data that's sent to you ## Limitations -- Custom logs created using the [HTTP Data Collector API](./data-collector-api.md) and the dataSources API can't be exported. This includes text logs consumed by Log Analytics agent. You can export custom logs created using [data collection rules](./logs-ingestion-api-overview.md), including text-based logs.-- Data export will gradually support more tables, but is currently limited to the tables specified in the [supported tables](#supported-tables) section.+- Custom logs created using the [HTTP Data Collector API](./data-collector-api.md) can't be exported, including text-based logs consumed by Log Analytics agent. Custom logs created using [data collection rules](./logs-ingestion-api-overview.md), including text-based logs, can be exported. +- Data export will gradually support more tables, but is currently limited to tables specified in the [supported tables](#supported-tables) section. - You can define up to 10 enabled rules in your workspace; each can include multiple tables. You can create more rules in the workspace in a disabled state. - Destinations must be in the same region as the Log Analytics workspace. - The storage account must be unique across rules in the workspace. - Table names can be 60 characters long when you're exporting to a storage account. They can be 47 characters when you're exporting to event hubs. Tables with longer names won't be exported.-- Currently, data export isn't supported in China.+- Export to Premium Storage Account isn't supported. ## Data completeness Data export is optimized to move large data volume to your destinations. The export operation might fail if the destination doesn't have sufficient capacity or is unavailable. In the event of failure, the retry process continues for up to 12 hours. For more information about destination limits and recommended alerts, see [Create or update a data export rule](#create-or-update-a-data-export-rule). If the destinations are still unavailable after the retry period, the data is discarded. In certain cases, retry can cause duplication of a fraction of the exported records. Don't use an existing storage account that has other non-monitoring data to bett To send data to an immutable storage account, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this article, including enabling protected append blobs writes. -The storage account must be StorageV1 or later and in the same region as your workspace. If you need to replicate your data to other storage accounts in other regions, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including GRS and GZRS. +The storage account can't be Premium, must be StorageV1 or later, and must be located in the same region as your workspace. If you need to replicate your data to other storage accounts in other regions, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including GRS and GZRS. Data is sent to storage accounts as it reaches Azure Monitor and exported to destinations located in a workspace region. 
A container is created for each table in the storage account with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would send to a container named *am-SecurityEvent*. |
azure-monitor | Manage Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md | The factors that define the data you can access are described in the following t | [Access mode](#access-mode) | Method used to access the workspace. Defines the scope of the data available and the access control mode that's applied. | | [Access control mode](#access-control-mode) | Setting on the workspace that defines whether permissions are applied at the workspace or resource level. | | [Azure role-based access control (RBAC)](#azure-rbac) | Permissions applied to individuals or groups of users for the workspace or resource sending data to the workspace. Defines what data you have access to. |-| [Table-level Azure RBAC](#set-table-level-read-access) | Optional permissions that define specific data types in the workspace that you can access. Apply to all users no matter your access mode or access control mode. | +| [Table-level Azure RBAC](#set-table-level-read-access) | Optional permissions that define specific data types in the workspace that you can access. Can be applied to all access modes or access control modes. | ## Access mode The *access mode* refers to how you access a Log Analytics workspace and defines There are two access modes: -- **Workspace-context**: You can view all logs in the workspace for which you have permission. Queries in this mode are scoped to all data in all tables in the workspace. This access mode is used when logs are accessed with the workspace as the scope, such as when you select **Logs** on the **Azure Monitor** menu in the Azure portal.+- **Workspace-context**: You can view all logs in the workspace for which you have permission. Queries in this mode are scoped to all data in tables that you have access to in the workspace. This access mode is used when logs are accessed with the workspace as the scope, such as when you select **Logs** on the **Azure Monitor** menu in the Azure portal. - **Resource-context**: When you access the workspace for a particular resource, resource group, or subscription, such as when you select **Logs** from a resource menu in the Azure portal, you can view logs for only resources in all tables that you have access to. Queries in this mode are scoped to only data associated with that resource. This mode also enables granular Azure RBAC. Workspaces use a resource-context log model where every log record emitted by an Azure resource is automatically associated with this resource. Records are only available in resource-context queries if they're associated with the relevant resource. To check this association, run a query and verify that the [_ResourceId](./log-standard-columns.md#_resourceid) column is populated. Each workspace can have multiple accounts associated with it. Each account can h | View workspace basic properties and enter the workspace pane in the portal. | `Microsoft.OperationalInsights/workspaces/read` | | Query logs by using any interface. | `Microsoft.OperationalInsights/workspaces/query/read` | | Access all log types by using queries. | `Microsoft.OperationalInsights/workspaces/query/*/read` |-| Access a specific log table. | `Microsoft.OperationalInsights/workspaces/query/<table_name>/read` | +| Access a specific log table - legacy method | `Microsoft.OperationalInsights/workspaces/query/<table_name>/read` | | Read the workspace keys to allow sending logs to this workspace.
| `Microsoft.OperationalInsights/workspaces/sharedKeys/action` | | Add and remove monitoring solutions. | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/*` <br> `Microsoft.OperationsManagement/*` <br> `Microsoft.Automation/*` <br> `Microsoft.Resources/deployments/*/write`<br><br>These permissions need to be granted at resource group or subscription level. | | View data in the **Backup** and **Site Recovery** solution tiles. | Administrator/Co-administrator<br><br>Accesses resources deployed by using the classic deployment model. | When users query logs from a workspace by using [resource-context access](#acces | Permission | Description | | - | -- |-| `Microsoft.Insights/logs/<tableName>/read`<br><br>Examples:<br>`Microsoft.Insights/logs/*/read`<br>`Microsoft.Insights/logs/Heartbeat/read` | Ability to view all log data for the resource | +| `Microsoft.Insights/logs/*/read` | Ability to view all log data for the resource | +| `Microsoft.Insights/logs/<tableName>/read`<br>Example:<br>`Microsoft.Insights/logs/Heartbeat/read` | Ability to view specific table for this resource - legacy method | | `Microsoft.Insights/diagnosticSettings/write` | Ability to configure diagnostics setting to allow setting up logs for this resource | The `/read` permission is usually granted from a role that includes _\*/read or_ _\*_ permissions, such as the built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) and [Contributor](../../role-based-access-control/built-in-roles.md#contributor) roles. Custom roles that include specific actions or dedicated built-in roles might not include this permission. In addition to using the built-in roles for a Log Analytics workspace, you can c ## Set table-level read access +Table-level access allows you to let specific people read data only from a specific set of tables. It applies to both workspace-context and resource-context access. There are two methods to define table-level permissions: +* By assigning permissions to the table sub-resource under the workspace resource - this is the recommended method that is described in this section. This method is currently in **preview**. +* By assigning special actions that contain the table name to the workspace resource - this is the legacy method that is described in the next section. It has some limitations around custom log tables. ++Table-level RBAC is applied during query execution. It does not apply to metadata retrieval calls. For that reason, tables will appear in the list of tables even if they are not available to the user. ++> [!NOTE] +> The recommended table-level access method described here does not apply during preview to Microsoft Sentinel Detection Rules. These rules might have access to more tables than intended. ++To apply table-level RBAC for a user, make the following two assignments: ++1. Assign the user the ability to read the workspace details and to run a query, without granting the ability to read data from any table. To do this, assign a special custom role on the workspace that has only the following actions (a hedged example of this role definition appears after this row): + - `Microsoft.OperationalInsights/workspaces/read` + - `Microsoft.OperationalInsights/workspaces/query/read` + - `Microsoft.OperationalInsights/workspaces/analytics/query/action` + - `Microsoft.OperationalInsights/workspaces/search/action` + +2. Assign the user read permissions on the specific table sub-resource. Any role that includes */read is sufficient, such as the **Reader** role or the **Log Analytics Reader** role.
Because a table is a sub-resource of the workspace, workspace admins can also perform actions on a specific table. ++> [!WARNING] +> If the user has other assignments on the workspace, directly or via inheritance (for example, the user has Reader on the subscription that contains the workspace), the user will be able to access all tables in the workspace. +++ To create a [custom role](../../role-based-access-control/custom-roles.md) that lets specific users or groups read data from specific tables in a workspace: 1. Create a custom role that grants users permission to execute queries in the Log Analytics workspace, based on the built-in Azure Monitor Logs **Reader** role: To create a [custom role](../../role-based-access-control/custom-roles.md) that ### Legacy method of setting table-level read access -[Azure custom roles](../../role-based-access-control/custom-roles.md) let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode). +The legacy method of table-level access also uses [Azure custom roles](../../role-based-access-control/custom-roles.md) to let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode). To define access to a particular table, create a [custom role](../../role-based-access-control/custom-roles.md): Custom tables store data you collect from data sources such as [text logs](../ag > [!NOTE] > Tables created by the [Logs ingestion API](../essentials/../logs/logs-ingestion-api-overview.md) don't yet support table-level RBAC. -You can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions: +Using the legacy method of table-level access, you can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions: ``` "Actions": [ You can't grant access to individual custom log tables at the table level, but y ], ``` -An alternative approach to managing access to custom logs is to assign them to an Azure resource and manage access by using resource-context access control. Include the resource ID by specifying it in the [x-ms-AzureResourceId](../logs/data-collector-api.md#request-headers) header when data is ingested to Log Analytics via the [HTTP Data Collector API](../logs/data-collector-api.md). The resource ID must be valid and have access rules applied to it. After the logs are ingested, they're accessible to users with read access to the resource. --Some custom logs come from sources that aren't directly associated to a specific resource. In this case, create a resource group to manage access to these logs. The resource group doesn't incur any cost, but it gives you a valid resource ID to control access to the custom logs. --For example, if a specific firewall is sending custom logs, create a resource group called *MyFireWallLogs*. Make sure that the API requests contain the resource ID of *MyFireWallLogs*. 
The firewall log records are then accessible only to users who were granted access to *MyFireWallLogs* or those users with full workspace access. --#### Considerations +### Considerations regarding table-level access - If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data. - If a user is granted per-table access but no other permissions, they can access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role. |
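The custom role described in the recommended table-level access steps above can be captured as a role definition document. The following TypeScript sketch builds that document from the actions listed in the row; the role name and assignable scope are hypothetical placeholders, and the exact JSON shape expected by your tooling (for example, `az role definition create --role-definition @role.json`) should be double-checked.

```typescript
// Placeholder scope; adjust to where the custom role should be assignable.
const assignableScope = "/subscriptions/<subscription-id>";

// Custom role that grants workspace read/query rights without access to table data.
// Table access is then granted separately on each table sub-resource, for example
// by assigning the built-in Reader role on that table.
const tableLevelQueryRole = {
  Name: "Log Analytics workspace query (no table data)", // hypothetical name
  Description: "Read workspace details and run queries; table access is granted per table.",
  Actions: [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/read",
    "Microsoft.OperationalInsights/workspaces/analytics/query/action",
    "Microsoft.OperationalInsights/workspaces/search/action"
  ],
  NotActions: [],
  AssignableScopes: [assignableScope]
};

// Serialize to JSON for use with the Azure CLI or the role definitions REST API.
console.log(JSON.stringify(tableLevelQueryRole, null, 2));
```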
azure-monitor | Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md | Last updated 10/01/2022 # Restore logs in Azure Monitor The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. This article describes how to restore data, query that data, and then dismiss the data when you're done. +## Permissions ++To restore data from an archived table, you need `Microsoft.OperationalInsights/workspaces/tables/write` and `Microsoft.OperationalInsights/workspaces/restoreLogs/write` permissions to the Log Analytics workspace, for example, as provided by the [Log Analytics Contributor built-in role](../logs/manage-access.md#built-in-roles). + ## When to restore logs Use the restore operation to query data in [Archived Logs](data-retention-archive.md). You can also use the restore operation to run powerful queries within a specific time range on any Analytics table when the log queries you run on the source table can't complete within the log query timeout of 10 minutes. |
azure-monitor | Search Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md | Search jobs are asynchronous queries that fetch records into a new search table > [!NOTE] > The search job feature is currently not supported for workspaces with [customer-managed keys](customer-managed-keys.md). +## Permissions ++To run a search job, you need `Microsoft.OperationalInsights/workspaces/tables/write` and `Microsoft.OperationalInsights/workspaces/searchJobs/write` permissions to the Log Analytics workspace, for example, as provided by the [Log Analytics Contributor built-in role](../logs/manage-access.md#built-in-roles). + ## When to use search jobs Use a search job when the log query timeout of 10 minutes isn't sufficient to search through large volumes of data or if you're running a slow query. |
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172) * [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload using Azure NetApp Files](../virtual-machines/workloads/sap/dbms_guide_ibm.md#using-azure-netapp-files) * [DB2 Installation Guide on Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/db2-installation-guide-on-anf/ba-p/3709437)+* [SAP ASE 16.0 on Azure NetApp Files for SAP Workloads on SLES15](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-ase-16-0-on-azure-netapp-files-for-sap-workloads-on-sles15/ba-p/3729496) ### SAP IQ-NLS |
azure-netapp-files | Double Encryption At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md | Azure NetApp Files double encryption at rest is supported for the following regi * For the cost of using Azure NetApp Files double encryption at rest, see the [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) page. * You can't convert volumes in a single-encryption capacity pool to use double encryption at rest. However, you can copy data in a single-encryption volume to a volume created in a capacity pool that is configured with double encryption. * For capacity pools created with double encryption at rest, volume names in the capacity pool are visible only to volume owners for maximum security.-+* Using double encryption at rest might have performance impacts based on the workload type and frequency. The performance impact can range from a minimal 1-2% to possibly 15% or higher, depending on the workload profile. ## Next steps -* [Create a capacity pool for Azure NetApp Files](azure-netapp-files-set-up-capacity-pool.md) +* [Create a capacity pool for Azure NetApp Files](azure-netapp-files-set-up-capacity-pool.md) |
azure-netapp-files | Volume Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-delete.md | + + Title: Delete an Azure NetApp Files volume | Microsoft Docs +description: Describes how to delete an Azure NetApp Files volume. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 06/22/2023+++# Delete an Azure NetApp Files volume ++This article describes how to delete an Azure NetApp Files volume. ++> [!IMPORTANT] +> If the volume you want to delete is in a replication relationship, follow the steps in [Delete source or destination volumes](cross-region-replication-delete.md#delete-source-or-destination-volumes). ++## Before you begin ++* Stop any applications that may be using the volume. Unmount the volume from all hosts before deleting it. +* Remove the volume from automounter configurations such as `fstab`. ++## Delete a volume ++1. In the Azure portal, under the storage service, select **Volumes**. Locate the volume you want to delete. +2. Right-click the volume name and select **Delete**. ++ ![Screenshot that shows right-click menu for deleting a volume.](../media/azure-netapp-files/volume-delete.png) ++## Next steps ++* [Delete volume replications or volumes](cross-region-replication-delete.md) +* [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-relay | Ip Firewall Virtual Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/ip-firewall-virtual-networks.md | The template takes one parameter: **ipMask**, which is a single IPv4 address or To deploy the template, follow the instructions for [Azure Resource Manager](../azure-resource-manager/templates/deploy-powershell.md). -## Trusted services -The following services are the trusted services for Azure Relay. -- Azure Event Grid-- Azure IoT Hub-- Azure Stream Analytics-- Azure Monitor-- Azure API Management-- Azure Synapse-- Azure Data Explorer-- Azure IoT Central-- Azure Healthcare Data Services-- Azure Digital Twins-- Azure Arc ## Next steps To learn about other network security-related features, see [Network security](network-security.md). |
azure-relay | Private Link Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/private-link-service.md | Your private endpoint and virtual network must be in the same region. When you s Your private endpoint uses a private IP address in your virtual network. -### Steps -For step-by-step instructions on creating a new Azure Relay namespace and entities in it, see [Create an Azure Relay namespace using the Azure portal](relay-create-namespace-portal.md). +### Configure private access for a Relay namespace +The following procedure provides step-by-step instructions for disabling public access to a Relay namespace and then adding a private endpoint to the namespace. + 1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the search bar, type in **Relays**. 3. Select the **namespace** from the list to which you want to add a private endpoint. 4. On the left menu, select the **Networking** tab under **Settings**.-5. Select the **Private endpoint connections** tab at the top of the page -6. Select the **+ Private Endpoint** button at the top of the page. +1. On the **Networking** page, for **Public network access**, select **Disabled** if you want the namespace to be accessed only via private endpoints. +1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-services) to bypass this firewall. ++ :::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled."::: +1. Select the **Private endpoint connections** tab at the top of the page +1. Select the **+ Private Endpoint** button at the top of the page. :::image type="content" source="./media/private-link-service/add-private-endpoint-button.png" alt-text="Screenshot showing the selection of the Add private endpoint button on the Private endpoint connections tab of the Networking page."::: 7. On the **Basics** page, follow these steps: Aliases: <namespace-name>.servicebus.windows.net - Maximum number of Azure Relay namespaces with private endpoints per subscription: 64. - Network Security Group (NSG) rules and User-Defined Routes don't apply to Private Endpoint. For more information, see [Azure Private Link service: Limitations](../private-link/private-link-service-overview.md#limitations) + ## Next Steps - Learn more about [Azure Private Link](../private-link/private-link-service-overview.md) |
azure-resource-manager | Bicep Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md | To use the restore command, you must have Bicep CLI version **0.4.1008 or later* To manually restore the external modules for a file, use: ```azurecli-az bicep restore <bicep-file> [--force] +az bicep restore --file <bicep-file> [--force] ``` The Bicep file you provide is the file you wish to deploy. It must contain a module that links to a registry. For example, you can restore the following file: |
azure-resource-manager | Parameter Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md | Title: Create parameters files for Bicep deployment description: Create parameters file for passing in values during deployment of a Bicep file Previously updated : 06/22/2023 Last updated : 06/26/2023 # Create parameters files for Bicep deployment Last updated 06/22/2023 Rather than passing parameters as inline values in your script, you can use a Bicep parameters file with the `.bicepparam` file extension or a JSON parameters file that contains the parameter values. This article shows how to create parameters files. > [!NOTE]-> The Bicep parameters file is only supported in Bicep CLI version 0.18.4 or newer. +> The Bicep parameters file is only supported in [Bicep CLI](./install.md) version 0.18.4 or newer, and [Azure CLI](/azure/install-azure-cli.md) version 2.47.0 or newer. A single Bicep file can have multiple Bicep parameters files associated with it. However, each Bicep parameters file is intended for one particular Bicep file. This relationship is established using the `using` statement within the Bicep parameters file. For more information, see [Bicep parameters file](#parameters-file). |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
azure-vmware | Concepts Network Design Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md | Title: Concepts - Network design considerations description: Learn about network design considerations for Azure VMware Solution Previously updated : 1/10/2023 Last updated : 6/26/2023 # Azure VMware Solution network design considerations Due to asymmetric routing, connectivity issues can occur when Azure VMware Solut For AS-Path Prepend, consider the following: > [!div class="checklist"]-> * The key point is that you must prepend **Public** ASN numbers to influence how AVS route's traffic back to on-premises. If you prepend using _Private_ ASN, AVS will ignore the prepend, and the ECMP behavior above will occur. Even if you operate a Private BGP ASN on-premises, it's still possible to configure your on-premises devices to utilizes Public ASN when prepending routes outbound, to ensure compatibility with Azure VMware Solution. +> * The key point is that you must prepend **Public** ASN numbers to influence how Azure VMware Solution routes traffic back to on-premises. If you prepend using _Private_ ASN, Azure VMware Solution will ignore the prepend, and the ECMP behavior above will occur. Even if you operate a Private BGP ASN on-premises, it's still possible to configure your on-premises devices to utilize a Public ASN when prepending routes outbound, to ensure compatibility with Azure VMware Solution. > * Both or all circuits are connected to Azure VMware Solution through ExpressRoute Global Reach. > * The same netblocks are being advertised from two or more circuits. > * You wish to use AS-Path Prepend to force Azure VMware solution to prefer one circuit over another.-> * Use either 2-byte or 4-byte public ASN numbers. If you don't own a public ASN for prepending, open a [Microsoft support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) to explore further options. +> * Use either 2-byte or 4-byte public ASN numbers. ## Management VMs and default routes from on-premises |
azure-vmware | Concepts Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md | Title: Concepts - Network interconnectivity description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 5/6/2023 Last updated : 6/27/2023 The diagram below shows the basic network interconnectivity established at the t > When connecting **production** Azure VMware Solution private clouds to an Azure virtual network, an ExpressRoute virtual network gateway with the Ultra Performance Gateway SKU should be used with FastPath enabled to achieve 10Gbps connectivity. Less critical environments can use the Standard or High Performance Gateway SKUs for slower network performance. > [!NOTE]-> If connecting more than four Azure VMware Solution private clouds in the same Azure region to the same Azure virtual network is a requirement, use [Azure VMware Solution Interconnect](connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region. +> If connecting more than four Azure VMware Solution private clouds in the same Azure region to the same Azure virtual network is a requirement, use [AVS Interconnect](connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region. ## On-premises interconnectivity The diagram below shows the on-premises to private cloud interconnectivity, whic - Hot/Cold vSphere vMotion between on-premises and Azure VMware Solution. - On-premises to Azure VMware Solution private cloud management access. For full interconnectivity to your private cloud, you need to enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments to your private cloud. For more information on the procedures, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md). > [!IMPORTANT]-> Customers should not advertise bogon routes over ExpressRoute from on-premises or their Azure VNET. Examples of bogon routes include 0.0.0.0/5 or 192.0.0.0/3. +> Customers should not advertise bogon routes over ExpressRoute from on-premises or their Azure VNet. Examples of bogon routes include 0.0.0.0/5 or 192.0.0.0/3. ## Route advertisement guidelines to Azure VMware Solution |
azure-vmware | Concepts Private Clouds Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md | Title: Concepts - Private clouds and clusters description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 1/10/2023 Last updated : 6/27/2023 A private cloud includes clusters with: As with other resources, private clouds are installed and managed from within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. There's a logical relationship between Azure subscriptions, Azure VMware Solution private clouds, vSAN clusters, and hosts. -The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters. -+The diagram below describes the architectural components of the Azure VMware Solution. +++Each Azure VMware Solution architectural component has the following function: ++- Azure Subscription: Used to provide controlled access, budget, and quota management for the Azure VMware Solution. +- Azure Region: Physical locations around the world where we group data centers into Availability Zones (AZs) and then group AZs into regions. +- Azure Resource Group: Container used to place Azure services and resources into logical groups. +- Azure VMware Solution Private Cloud: Uses VMware software, including vCenter Server, NSX-T Data Center software-defined networking, vSAN software-defined storage, and Azure bare-metal ESXi hosts to provide compute, networking, and storage resources. +- Azure VMware Solution Resource Cluster: Uses VMware software, including vSAN software-defined storage, and Azure bare-metal ESXi hosts to provide compute, networking, and storage resources for customer workloads by scaling out the Azure VMware Solution private cloud. +- VMware HCX: Provides mobility, migration, and network extension services. +- VMware Site Recovery: Provides Disaster Recovery automation and storage replication services with VMware vSphere Replication. Third party Disaster Recovery solutions Zerto Disaster Recovery and JetStream Software Disaster Recovery are also supported. +- Dedicated Microsoft Enterprise Edge (D-MSEE): Router that provides connectivity between Azure cloud and the Azure VMware Solution private cloud instance. +- Azure Virtual Network (VNet): Private network used to connect Azure services and resources together. +- Azure Route Server: Enables network appliances to exchange dynamic route information with Azure networks. +- Azure Virtual Network Gateway: Cross premises gateway for connecting Azure services and resources to other private networks using IPSec VPN, ExpressRoute, and VNet to VNet. +- Azure ExpressRoute: Provides high-speed private connections between Azure data centers and on-premises or colocation infrastructure. +- Azure Virtual WAN (vWAN): Aggregates networking, security, and routing functions together into a single unified Wide Area Network (WAN). ## Hosts |
azure-vmware | Configure Dhcp Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md | description: Learn how to configure DHCP by using either NSX-T Manager to host a Previously updated : 10/17/2022 Last updated : 6/26/2023 # Customer intent: As an Azure service administrator, I want to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server. If you want to use NSX-T Data Center to host your DHCP server, you'll create a D ### Create a DHCP server -1. In NSX-T Manager, select **Networking** > **DHCP**, and then select **Add Server**. --1. Select **DHCP** for the **Server Type**, provide the server name and IP address, and select **Save**. +1. In NSX-T Manager, select **Networking** > **DHCP**, and then select **Add DHCP Profile**. - :::image type="content" source="./media/manage-dhcp/dhcp-server-settings.png" alt-text="Screenshot showing how to add a DHCP server in NSX-T Manager." border="true"::: +1. Select **Add DHCP Profile**, enter a name, and select **Save**. NOTE: An IP address is not required if none is entered NSX-T Manager will set one. -1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**. + :::image type="content" source="./media/manage-dhcp/dhcp-server-settings.png" alt-text="Screenshot showing how to add a DHCP Profile in NSX-T Manager." border="true"::: - :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the NSX-T Data Center Tier-1 Gateway for using a DHCP server." border="true"::: +1. Under **Networking** > **Tier-1 Gateways**, select the gateway where the segments are connected that DHCP is required. Edit the Tier-1 Gateway by clicking on the three ellipses and choose **Edit**. -1. Select **No IP Allocation Set** to add a subnet. +1. Select **Set DHCP Configuration**, select **DHCP Server** and then select the DHCP Server Profile created earlier. Click **Save**, then **Close Editing**. - :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the NSX-T Data Center Tier-1 Gateway for using a DHCP server." border="true"::: + :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the NSX-T Data Center Tier-1 Gateway for using a DHCP server." border="true"::: -1. For **Type**, select **DHCP Local Server**. +1. Navigate to **Networking** > **Segments** and find the segment where DHCP is required. Click on **Edit** then **Set DHCP Config**. -1. For the **DHCP Server**, select **Default DHCP**, and then select **Save**. +1. Select **Gateway DHCP Server** for DHCP Type, add a DHCP range, and click **Apply**. -1. Select **Save** again and then select **Close Editing**. + :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the NSX-T Data Center Tier-1 Gateway for using a DHCP server." border="true"::: ### Add a network segment |
azure-vmware | Deploy Disaster Recovery Using Jetstream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md | In this article, you'll implement JetStream DR for your Azure VMware Solution pr To learn more about JetStream DR, see: -- [JetStream Solution brief](https://www.jetstreamsoft.com/2020/09/28/solution-brief-disaster-recovery-for-avs/)+- [JetStream Solution brief](https://www.jetstreamsoft.com/2020/09/28/disaster-recovery-for-avs/) - [JetStream DR on Azure Marketplace](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/jetstreamsoftware1596597632545.jsdravs-111721) -- [JetStream knowledge base articles](https://www.jetstreamsoft.com/resources/knowledge-base/)- ## Core components of the JetStream DR solution | Items | Description | |
azure-vmware | Ecosystem Disaster Recovery Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-disaster-recovery-vms.md | We currently offer customers the possibility to implement their disaster recover Following our principle of giving customers the choice to apply their investments in skills and technology, we've collaborated with some of the leading partners in the industry. You can find more information about their solutions in the links below:-- [Jetstream](https://www.jetstreamsoft.com/2020/09/28/solution-brief-disaster-recovery-for-avs/)+- [Jetstream](https://www.jetstreamsoft.com/2020/09/28/disaster-recovery-for-avs/) - [Zerto](https://www.zerto.com/solutions/use-cases/disaster-recovery/)-- [RiverMeadow](https://www.rivermeadow.com/disaster-recovery-azure-blob)+- [RiverMeadow](https://www.rivermeadow.com/disaster-recovery-azure-blob) |
backup | Azure Backup Architecture For Sap Hana Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md | In the following sections you'll learn about different SAP HANA setups and their :::image type="content" source="./media/azure-backup-architecture-for-sap-hana-backup/azure-network-with-udr-and-nva-or-azure-firewall-and-private-endpoint-or-service-endpoint.png" alt-text="Diagram showing the SAP HANA setup if Azure network with UDR + NVA / Azure Firewall + Private Endpoint or Service Endpoint."::: -### Backup architecture for database with HANA System Replication +### Backup architecture for database with HANA System Replication (preview) The backup service resides in both the physical nodes of the HSR setup. Once you confirm that these nodes are in a replication group (using the [pre-registration script](sap-hana-database-with-hana-system-replication-backup.md#run-the-preregistration-script)), Azure Backup groups the nodes logically, and creates a single backup item during protection configuration. This section provides you with an understanding about the backup process of an H - Learn about the supported configurations and scenarios in the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). - Learn about how to [backup SAP HANA databases in Azure VMs](backup-azure-sap-hana-database.md).-- Learn about how to [backup SAP HANA System Replication databases in Azure VMs](sap-hana-database-with-hana-system-replication-backup.md).-- Learn about how to [backup SAP HANA databases' snapshot instances in Azure VMs](sap-hana-database-instances-backup.md).+- Learn about how to [backup SAP HANA System Replication databases in Azure VMs (preview)](sap-hana-database-with-hana-system-replication-backup.md). +- Learn about how to [backup SAP HANA databases' snapshot instances in Azure VMs (preview)](sap-hana-database-instances-backup.md). |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
backup | Quick Backup Hana Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-hana-cli.md | -# Quickstart: Back up SAP HANA System Replication on Azure VMs using Azure CLI +# Quickstart: Back up SAP HANA System Replication on Azure VMs using Azure CLI (preview) This quickstart describes how to protect SAP HANA System Replication (HSR) using Azure CLI. |
backup | Quick Restore Hana Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-restore-hana-cli.md | -# Quickstart: Restore SAP HANA System Replication on Azure VMs using Azure CLI +# Quickstart: Restore SAP HANA System Replication on Azure VMs using Azure CLI (preview) This quickstart describes how to restore SAP HANA System Replication (HSR) using Azure CLI. |
backup | Sap Hana Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md | Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 06/20/2023 Last updated : 06/27/2023 Azure Backup supports the backup of SAP HANA databases to Azure. This article su | **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, and SP4 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, 8.6, and 9.0. | | | **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS 04, SPS 05 Rev <= 59, SPS 06 (validated for encryption enabled scenarios as well), and SPS 07. | | | **Encryption** | SSLEnforce, HANA data encryption | |-| **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesn't fail over to the secondary node automatically. Configuring backup should be done separately for each node. | | **HANA Instances** | A single SAP HANA instance on a single Azure VM – scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. | | **HANA database types** | Single Database Container (SDC) ON 1.x, Multi-Database Container (MDC) on 2.x | MDC in HANA 1.x | | **HANA database size** | HANA databases of size <= 8 TB (this isn't the memory size of the HANA system) | |
backup | Sap Hana Database About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-about.md | You can use [an Azure VM backup](backup-azure-vms-introduction.md) to back up th 1. Restore the database into the VM from the [Azure SAP HANA database backup](sap-hana-db-restore.md#restore-to-a-point-in-time-or-to-a-recovery-point) to your intended point in time. -## Back up a HANA system with replication enabled +## Back up a HANA system with replication enabled (preview) Azure Backup now supports backing up databases that have HSR enabled. This means that backups are managed automatically when a failover occurs, which eliminates the necessity for manual intervention. Backup also offers immediate protection with no remedial full backups, so you can protect HANA instances or HSR setup nodes as a single HSR container. As per SAP recommendation, it's mandatory to have weekly full snapshots for all Learn how to: - [Back up SAP HANA databases on Azure VMs](backup-azure-sap-hana-database.md).-- [Back up SAP HANA System Replication databases on Azure VMs](sap-hana-database-with-hana-system-replication-backup.md).-- [Back up SAP HANA database snapshot instances on Azure VMs](sap-hana-database-instances-backup.md).+- [Back up SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-with-hana-system-replication-backup.md). +- [Back up SAP HANA database snapshot instances on Azure VMs (preview)](sap-hana-database-instances-backup.md). - [Restore SAP HANA databases on Azure VMs](./sap-hana-db-restore.md). - [Manage SAP HANA databases that are backed up by using Azure Backup](./sap-hana-db-manage.md). |
backup | Sap Hana Database Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md | -Azure Backup now supports backup and restore of SAP HANA System Replication (HSR) databases. +Azure Backup now supports backup and restore of SAP HANA System Replication (HSR) instance (preview). >[!Note] >- The restore process for HANA databases with HSR is the same as the restore process for HANA databases without HSR. As per SAP advisories, you can restore databases with HSR mode as *standalone* databases. If the target system has the HSR mode enabled, first disable the mode, and then restore the database. |
backup | Sap Hana Database With Hana System Replication Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md | -# Back up SAP HANA System Replication databases on Azure VMs +# Back up SAP HANA System Replication databases on Azure VMs (preview) SAP HANA databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. This article describes how you can back up SAP HANA databases that are running on Azure virtual machines (VMs) to an Azure Backup Recovery Services vault by using [Azure Backup](backup-overview.md). You can run an on-demand backup using SAP HANA native clients to local file-syst ## Next steps -- [Restore SAP HANA System Replication databases on Azure VMs](sap-hana-database-restore.md)-- [About backing up SAP HANA System Replication databases on Azure VMs](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled)+- [Restore SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-restore.md) +- [About backing up SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview) |
backup | Tutorial Sap Hana Backup Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md | -This tutorial describes how to back up SAP HANA database and SAP HANA System Replication (HSR) instances using Azure CLI. +This tutorial describes how to back up SAP HANA database instance and SAP HANA System Replication (HSR) instance (preview) using Azure CLI. Azure CLI is used to create and manage Azure resources from the Command Line or through scripts. This documentation details how to back up an SAP HANA database and trigger on-demand backups - all using Azure CLI. You can also perform these steps using the [Azure portal](./backup-azure-sap-hana-database.md). -Azure Backup also supports backup and restore of SAP HANA System Replication (HSR). - This document assumes that you already have an SAP HANA database installed on an Azure VM. (You can also [create a VM using Azure CLI](../virtual-machines/linux/quick-create-cli.md)). For more information on the supported scenarios, see the [support matrix](./sap-hana-backup-support-matrix.md#scenario-support) for SAP HANA. Location Name ResourceGroup westus2 saphanaVault saphanaResourceGroup ``` -# [HSR database](#tab/hsr-database) +# [HSR (preview)](#tab/hsr) -To create the Recovery Services vault for HSR database instance protection, run the following command: +To create the Recovery Services vault for HSR instance protection, run the following command: ```azurecli az backup vault create --resource-group hanarghsr2 --name hanavault10 --location westus2 To register and protect database instance, follow these steps: > The column ΓÇ£nameΓÇ¥ in the above output refers to the container name. This container name will be used in the next sections to enable backups and trigger them. Which in this case, is *VMAppContainer;Compute;saphanaResourceGroup;saphanaVM*. -# [HSR database](#tab/hsr-database) +# [HSR (preview)](#tab/hsr) To register and protect database instance, follow these steps: To get container name, run the following command. [Learn about this CLI command] ``` -# [HSR database](#tab/hsr-database) +# [HSR (preview)](#tab/hsr) To enable database instance backup, follow these steps: The response will give you the job name. This job name can be used to track the >[!NOTE] >Log backups are automatically triggered and managed by SAP HANA internally. -# [HSR database](#tab/hsr-database) +# [HSR (preview)](#tab/hsr) To run an on-demand backup, run the following command: |
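The row above is truncated before the commands it introduces, so the following is only an editor's sketch of what an on-demand full backup for a protected HANA item generally looks like with `az backup`. It reuses the vault and container names quoted earlier in the row; the item name, backup type, and retention date are hypothetical and should be taken from your own `az backup item list` output and the linked tutorial.

```azurecli
# Illustrative values only; copy real container and item names from "az backup item list".
az backup protection backup-now \
  --resource-group saphanaResourceGroup \
  --vault-name saphanaVault \
  --container-name "VMAppContainer;Compute;saphanaResourceGroup;saphanaVM" \
  --item-name "saphanadatabase;hxe;hxe" \
  --backup-management-type AzureWorkload \
  --backup-type Full \
  --retain-until 01-07-2023 \
  --output table
```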
backup | Tutorial Sap Hana Restore Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md | -This tutorial describes how to restore SAP HANA database and SAP HANA System Replication (HSR) instances using Azure CLI. +This tutorial describes how to restore SAP HANA database instance and SAP HANA System Replication (HSR) instance (preview) using Azure CLI. Azure CLI is used to create and manage Azure resources from the command line or through scripts. This documentation details how to restore a backed-up SAP HANA database on an Azure VM - using Azure CLI. You can also perform these steps using the [Azure portal](./sap-hana-db-restore.md). -Azure Backup also supports backup and restore of SAP HANA System Replication (HSR). - >[!Note] >- Original Location Recovery (OLR) is currently not supported for HSR. >- Restore to HSR instance isn't supported. However, restore only to HANA instance is supported. DefaultRangeRecoveryPoint AzureWorkload As you can see, the list above contains three recovery points: one each for full, differential, and log backup. -# [HSR database](#tab/hsr-database) +# [HSR (preview)](#tab/hsr) To view the available recovery points, run the following command: Name Resource The response will give you the job name. This job name can be used to track the job status using [az backup job show](/cli/azure/backup/job#az-backup-job-show) cmdlet. -# [HSR database](#tab/hsr-database) +# [HSR (preview)](#tab/hsr) To start the restore operation, run the following command: |
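The restore command itself is cut off in the digest above, so here is a small adjacent sketch instead: listing the recovery points for a protected item and then tracking a job by name. It uses the same placeholder vault and container names as the backup tutorial row; the item name and job name are hypothetical.

```azurecli
# Placeholder names throughout; take real values from "az backup item list" and the restore response.
az backup recoverypoint list \
  --resource-group saphanaResourceGroup \
  --vault-name saphanaVault \
  --container-name "VMAppContainer;Compute;saphanaResourceGroup;saphanaVM" \
  --item-name "saphanadatabase;hxe;hxe" \
  --backup-management-type AzureWorkload \
  --output table

# Track the restore (or any backup) job by the name returned from the request.
az backup job show \
  --resource-group saphanaResourceGroup \
  --vault-name saphanaVault \
  --name <job-name-from-restore-response> \
  --output table
```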
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | You can learn more about the new releases by bookmarking this page or by [subscr ## Updates summary -- June 2023- - [Support for backup of SAP HANA System Replication is now generally available](#support-for-backup-of-sap-hana-system-replication-is-now-generally-available) - April 2023 - [Microsoft Azure Backup Server v4 is now generally available](#microsoft-azure-backup-server-v4-is-now-generally-available) - March 2023 You can learn more about the new releases by bookmarking this page or by [subscr - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) -## Support for backup of SAP HANA System Replication is now generally available --Azure Backup now supports backup of HANA database with HANA System Replication. Now, the log backups from the new primary node are accepted immediately; thus provides continuous database automatic protection, --This eliminates the need of manual intervention to continue backups on the new primary node during a failover. With the elimination of the need to trigger full backups for every failover, you can save costs and reduce time for continue protection. --For more information, see [Back up a HANA system with replication enabled (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled). - ## Microsoft Azure Backup Server v4 is now generally available Azure Backup now provides Microsoft Azure Backup Server (MABS) v4, the latest edition of on-premises backup solution. Azure Backup now supports backup of HANA database with HANA System Replication. This eliminates the need of manual intervention to continue backups on the new primary node during a failover. With the elimination of the need to trigger full backups for every failover, you can save costs and reduce time for continue protection. -For more information, see [Back up a HANA system with replication enabled (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled). +For more information, see [Back up a HANA system with replication enabled (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview). ## Built-in Azure Monitor alerting for Azure Backup is now generally available |
bastion | Shareable Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md | By default, users in your org will have only read access to shared links. If a u * Bastion must be configured to use the **Standard** SKU for this feature. You can update the SKU from Basic to Standard when you configure the shareable links feature. -* The VNet contains the VM resource to which you want to create a shareable link. +* The VNet in which the Bastion resource is deployed or a directly peered VNet contains the VM resource to which you want to create a shareable link. ## Enable Shareable Link feature |
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 23-06 | [5027219] | Latest Cumulative Update(LCU) | 5.82 | Jun 13, 2023 | +| Rel 23-06 OOB | [5028623] | Latest Cumulative Update(LCU) | 5.82 | Jun 23, 2023 | | Rel 23-06 | [5027225] | Latest Cumulative Update(LCU) | 7.26 | Jun 13, 2023 | | Rel 23-06 | [5027222] | Latest Cumulative Update(LCU) | 6.58 | Jun 13, 2023 | | Rel 23-06 | [5027140] | .NET Framework 3.5 Security and Quality Rollup | 2.138 | Jun 13, 2023 | | Rel 23-06 | [5027134] | .NET Framework 4.6.2 Security and Quality Rollup | 2.138 | Jun 13, 2023 |+| Rel 23-06 OOB | [5028591] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | 2.138 | Jun 22, 2023 | | Rel 23-06 | [5027141] | .NET Framework 3.5 Security and Quality Rollup | 4.118 | Jun 13, 2023 | | Rel 23-06 | [5027133] | .NET Framework 4.6.2 Security and Quality Rollup | 4.118 | Jun 13, 2023 |+| Rel 23-06 OOB | [5028590] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | 4.118 | Jun 22, 2023 | | Rel 23-06 | [5027138] | .NET Framework 3.5 Security and Quality Rollup | 3.126 | Jun 13, 2023 | | Rel 23-06 | [5027132] | .NET Framework 4.6.2 Security and Quality Rollup | 3.126 | Jun 13, 2023 |+| Rel 23-06 OOB | [5028589] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | 3.126 | Jun 22, 2023 | +| Rel 23-06 | [5027123] | .NET Framework 4.8 Security and Quality Rollup  | 5.82 | Jun 13, 2023 | +| Rel 23-06 OOB | [5028580] | .NET Framework Rollup 4.8 /.NET Standalone Update  | 5.82 | Jun 22, 2023 | | Rel 23-06 | [5027131] | . 
NET Framework 4.7.2 Cumulative Update | 6.58 | Jun 13, 2023 |+| Rel 23-06 OOB | [5028588] | .NET Framework Rollup - 4.6-4.7.2 / .NET Standalone Update | 6.58 | Jun 22, 2023 | | Rel 23-06 | [5027127] | .NET Framework 4.8 Security and Quality Rollup | 7.26 | Jun 13, 2023 |+| Rel 23-06 OOB | [5028584] | .NET Framework Rollup - 4.8 / .NET Standalone Update | 7.26 | Jun 22, 2023 | | Rel 23-06 | [5027275] | Monthly Rollup | 2.138 | Jun 13, 2023 | | Rel 23-06 | [5027283] | Monthly Rollup | 3.126 | Jun 13, 2023 | | Rel 23-06 | [5027271] | Monthly Rollup | 4.118 | Jun 13, 2023 | The following tables show the Microsoft Security Response Center (MSRC) updates | Rel 23-06 | [5017397] | Servicing Stack Update LKG | 2.138 | Sep 13, 2022 | | Rel 23-06 | [4494175] | Microcode | 5.82 | Sep 1, 2020 | | Rel 23-06 | [4494174] | Microcode | 6.58 | Sep 1, 2020 |-| Rel 23-06 | 5027396 | Servicing Stack Update | 7.26 | | -| Rel 23-06 | 5023789 | Servicing Stack Update | 6.58 | | +| Rel 23-06 | [5027396] | Servicing Stack Update | 7.26 | | +| Rel 23-06 | [5023789] | Servicing Stack Update | 6.58 | | -[5027219]: https://support.microsoft.com/kb/5027219 +[5028623]: https://support.microsoft.com/kb/5028623 [5027225]: https://support.microsoft.com/kb/5027225 [5027222]: https://support.microsoft.com/kb/5027222 [5027140]: https://support.microsoft.com/kb/5027140 [5027134]: https://support.microsoft.com/kb/5027134+[5028591]: https://support.microsoft.com/kb/5028591 [5027141]: https://support.microsoft.com/kb/5027141 [5027133]: https://support.microsoft.com/kb/5027133+[5028590]: https://support.microsoft.com/kb/5028590 [5027138]: https://support.microsoft.com/kb/5027138 [5027132]: https://support.microsoft.com/kb/5027132+[5028589]: https://support.microsoft.com/kb/5028589 +[5027123]: https://support.microsoft.com/kb/5027123 +[5028580]: https://support.microsoft.com/kb/5028580 [5027131]: https://support.microsoft.com/kb/5027131+[5028588]: https://support.microsoft.com/kb/5028588 [5027127]: https://support.microsoft.com/kb/5027127+[5028584]: https://support.microsoft.com/kb/5028584 [5027275]: https://support.microsoft.com/kb/5027275 [5027283]: https://support.microsoft.com/kb/5027283 [5027271]: https://support.microsoft.com/kb/5027271 |
cognitive-services | How To Custom Voice Create Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md | To create a custom neural voice in Speech Studio, follow these steps for one of 1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list. 1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data. 1. Select **Next**.-1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances for the default style. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements). +1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements). 1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models. 1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model. 1. Select **Next**. To create a custom neural voice in Speech Studio, follow these steps for one of 1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list. 1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data. 1. Select **Next**.-1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements). +1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. 
Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements). 1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models. 1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model. 1. Select **Next**. To create a custom neural voice in Speech Studio, follow these steps for one of 1. Select **Next**. 1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data. 1. Select **Next**.-1. Optionally, you can check the box next to **Add my own test script** and select test scripts to upload. Each training generates 100 sample audios for the default style and 20 for each preset style automatically, to help you test the model with a default script. You can also provide your own test script with up to 100 utterances. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements). +1. Each training generates 100 sample audios for the default style and 20 for each preset style automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the default style at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements). 1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models. 1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model. 1. Select **Next**. |
cognitive-services | How To Pronunciation Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md | pronunciationAssessmentConfig?.phonemeAlphabet = "IPA" With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes. -For example, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". The actual spoken phonemes could be "h ə l oʊ". In the following assessment result, the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2. +For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2. ```json {- "Phoneme": "ɛ", - "PronunciationAssessment": { - "AccuracyScore": 47.0, - "NBestPhonemes": [ - { - "Phoneme": "ə", - "Score": 100.0 - }, - { - "Phoneme": "l", - "Score": 52.0 - }, - { - "Phoneme": "ɛ", - "Score": 47.0 - }, - { - "Phoneme": "h", - "Score": 17.0 - }, - { - "Phoneme": "æ", - "Score": 2.0 - } - ] - }, - "Offset": 11100000, - "Duration": 500000 -}, + "Id": "bbb42ea51bdb46d19a1d685e635fe173", + "RecognitionStatus": 0, + "Offset": 7500000, + "Duration": 13800000, + "DisplayText": "Hello.", + "NBest": [ + { + "Confidence": 0.975003, + "Lexical": "hello", + "ITN": "hello", + "MaskedITN": "hello", + "Display": "Hello.", + "PronunciationAssessment": { + "AccuracyScore": 100, + "FluencyScore": 100, + "CompletenessScore": 100, + "PronScore": 100 + }, + "Words": [ + { + "Word": "hello", + "Offset": 7500000, + "Duration": 13800000, + "PronunciationAssessment": { + "AccuracyScore": 99.0, + "ErrorType": "None" + }, + "Syllables": [ + { + "Syllable": "hɛ", + "PronunciationAssessment": { + "AccuracyScore": 91.0 + }, + "Offset": 7500000, + "Duration": 4100000 + }, + { + "Syllable": "loʊ", + "PronunciationAssessment": { + "AccuracyScore": 100.0 + }, + "Offset": 11700000, + "Duration": 9600000 + } + ], + "Phonemes": [ + { + "Phoneme": "h", + "PronunciationAssessment": { + "AccuracyScore": 98.0, + "NBestPhonemes": [ + { + "Phoneme": "h", + "Score": 100.0 + }, + { + "Phoneme": "oʊ", + "Score": 52.0 + }, + { + "Phoneme": "ə", + "Score": 35.0 + }, + { + "Phoneme": "k", + "Score": 23.0 + }, + { + "Phoneme": "æ", + "Score": 20.0 + } + ] + }, + "Offset": 7500000, + "Duration": 3500000 + }, + { + "Phoneme": "ɛ", + "PronunciationAssessment": { + "AccuracyScore": 47.0, + "NBestPhonemes": [ + { + "Phoneme": "ə", + "Score": 100.0 + }, + { + "Phoneme": "l", + "Score": 52.0 + }, + { + "Phoneme": "ɛ", + "Score": 47.0 + }, + { + "Phoneme": "h", + "Score": 17.0 + }, + { + "Phoneme": "æ", + "Score": 2.0 + } + ] + }, + "Offset": 11100000, + "Duration": 500000 + }, + { + "Phoneme": "l", + "PronunciationAssessment": { + "AccuracyScore": 100.0, + "NBestPhonemes": [ + { + "Phoneme": "l", + "Score": 100.0 + 
}, + { + "Phoneme": "oʊ", + "Score": 46.0 + }, + { + "Phoneme": "ə", + "Score": 5.0 + }, + { + "Phoneme": "ɛ", + "Score": 3.0 + }, + { + "Phoneme": "u", + "Score": 1.0 + } + ] + }, + "Offset": 11700000, + "Duration": 1100000 + }, + { + "Phoneme": "oʊ", + "PronunciationAssessment": { + "AccuracyScore": 100.0, + "NBestPhonemes": [ + { + "Phoneme": "oʊ", + "Score": 100.0 + }, + { + "Phoneme": "d", + "Score": 29.0 + }, + { + "Phoneme": "t", + "Score": 24.0 + }, + { + "Phoneme": "n", + "Score": 22.0 + }, + { + "Phoneme": "l", + "Score": 18.0 + } + ] + }, + "Offset": 12900000, + "Duration": 8400000 + } + ] + } + ] + } + ] +} ``` To indicate whether, and how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`. |
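To make the "take the highest-confidence candidate for each expected phoneme" idea concrete, here is a small shell sketch. It assumes the JSON response shown above is saved as `result.json`, that `jq` is installed, and that the request set `NBestPhonemeCount` so every phoneme carries an `NBestPhonemes` list sorted by descending score. For the sample response it reconstructs the spoken sound `h ə l oʊ`.

```bash
# Reconstruct the most likely spoken phoneme sequence per word from result.json.
# NBestPhonemes is assumed to be ordered by descending confidence score.
jq -r '
  .NBest[0].Words[]
  | .Word as $w
  | [ .Phonemes[] | .PronunciationAssessment.NBestPhonemes[0].Phoneme ]
  | "\($w): \(join(" "))"
' result.json
# Output for the sample above: hello: h ə l oʊ
```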
cognitive-services | Translator Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-disconnected-containers.md | -* The Docker `pull` command you'll use to download the container. +* The Docker `pull` command to download the container. * How to validate that a container is running. * How to send queries to the container's endpoint, once it's running. docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translati ## Configure the container to run in a disconnected environment -Now that you've downloaded your container, you'll need to execute the `docker run` command with the following parameters: +Now that you've downloaded your container, you need to execute the `docker run` command with the following parameters: -* **`DownloadLicense=True`**. This parameter will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use the license file in corresponding approved container. +* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use the license file in corresponding approved container. * **`Languages={language list}`**. You must include this parameter to download model files for the [languages](../language-support.md) you want to translate. > [!IMPORTANT] The following example shows the formatting for the `docker run` command with pla | Placeholder | Value | Format| |-|-|| | `[image]` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/host/license:/path/to/license/directory` | - | `{MODEL_MOUNT_PATH}`| The path where the machine translation models will be downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`| +| `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` | + | `{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`| | `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, in the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |-| `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, in the Azure portal. |`{string}`| +| `{API_KEY}` | The key for your Text Translation resource. You can find it on your resource's **Key and endpoint** page, in the Azure portal. |`{string}`| | `{LANGUAGES_LIST}` | List of language codes separated by commas. It's mandatory to have English (en) language as part of the list.| `en`, `fr`, `it`, `zu`, `uk` | | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` | Placeholder | Value | Format| | `[image]`| The container image you want to use. 
| `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` | `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `16g` | | `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |-| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/host/translator/license:/path/to/license/directory` | -|`{MODEL_MOUNT_PATH}`| The path where the machine translation models will be downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`| +| `{LICENSE_MOUNT}` | The path where the license is located and mounted. | `/host/translator/license:/path/to/license/directory` | +|`{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`| |`{MODELS_DIRECTORY_LIST}`|List of comma separated directories each having a machine translation model. | `/usr/local/models/enu_esn_generalnn_2022240501,/usr/local/models/esn_enu_generalnn_2022240501` | | `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` | | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` | When operating Docker containers in a disconnected environment, the container wi #### Arguments for storing logs -When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs will be stored: +When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs are stored: **Example `docker run` command** The container provides two endpoints for returning records regarding its usage. #### Get all records -The following endpoint will provide a report summarizing all of the usage collected in the mounted billing record directory. +The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory. 
```HTTP https://<service>/records/usage-logs/ https://<service>/records/usage-logs/ `http://localhost:5000/records/usage-logs` -The usage-logs endpoint will return a JSON response similar to the following example: +The usage-logs endpoint returns a JSON response similar to the following example: ```json { The usage-logs endpoint will return a JSON response similar to the following exa #### Get records for a specific month -The following endpoint will provide a report summarizing usage over a specific month and year: +The following endpoint provides a report summarizing usage over a specific month and year: ```HTTP https://<service>/records/usage-logs/{MONTH}/{YEAR} ``` -This usage-logs endpoint will return a JSON response similar to the following example: +This usage-logs endpoint returns a JSON response similar to the following example: ```json { This usage-logs endpoint will return a JSON response similar to the following ex ### Purchase a different commitment plan for disconnected containers -Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan, however you can purchase more unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan. +Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you're charged the full price immediately. During the commitment period, you can't change your commitment plan, however you can purchase more unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan. You can choose a different commitment plan in the **Commitment tier pricing** settings of your resource under the **Resource Management** section. ### End a commitment plan - If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You'll be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You'll have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you do so, you won't be charged for the following year. + If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You're still able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you do so, you avoid charges for the following year. ## Troubleshooting -Run the container with an output mount and logging enabled. These settings will enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container. +Run the container with an output mount and logging enabled. 
These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container. > [!TIP] > For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../containers/disconnected-container-faq.yml). |
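Pulling the table's placeholders together, a first run that downloads the license might look roughly like the following. This is an editor's sketch: the host paths, languages, endpoint, and key are placeholders, and the `Eula`/`Billing`/`ApiKey` and `Mounts:License` argument names are assumed from the common Cognitive Services container convention, so confirm the exact syntax in the linked article before using it.

```bash
# Illustrative only; verify argument names and values against the full article.
docker run --rm -it -p 5000:5000 \
  -v /host/license:/path/to/license/directory \
  -v /host/translator/models:/usr/local/models \
  mcr.microsoft.com/azure-cognitive-services/translator/text-translation \
  Eula=accept \
  Billing=https://<your-custom-subdomain>.cognitiveservices.azure.com \
  ApiKey=<your-api-key> \
  DownloadLicense=True \
  Mounts:License=/path/to/license/directory \
  Languages=en,fr,es
```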
cognitive-services | Commitment Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/commitment-tier.md | For more information, see [Azure Cognitive Services pricing](https://azure.micro ## Create a new resource -> [!NOTE] -> To purchase and use a commitment plan, your resource must have the Standard pricing tier. You cannot purchase a commitment plan (or see the option) for a resource that is on the free tier. - 1. Sign into the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Cognitive Services or Applied AI services listed above. -2. Enter the applicable information to create your resource. Be sure to select the standard pricing tier. +2. Enter the applicable information to create your resource. Be sure to select the standard pricing tier. + > [!NOTE] + > If you intend to purchase a commitment tier for disconnected container usage, you will need to request separate access and select the **Commitment tier disconnected containers** pricing tier. See the [disconnected containers](./containers/disconnected-containers.md) article for more information + :::image type="content" source="media/commitment-tier/create-resource.png" alt-text="A screenshot showing resource creation on the Azure portal." lightbox="media/commitment-tier/create-resource.png"::: -3. Once your resource is created, you will be able to change your pricing from pay-as-you-go, to a commitment plan. +3. Once your resource is created, you'll be able to change your pricing from pay-as-you-go, to a commitment plan. ## Purchase a commitment plan by updating your Azure resource 1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure subscription. 2. In your Azure resource for one of the applicable features listed above, select **Commitment tier pricing**.-- > [!NOTE] - > You will only see the option to purchase a commitment plan if the resource is using the standard pricing tier. - 3. Select **Change** to view the available commitments for hosted API and container usage. Choose a commitment plan for one or more of the following offerings: * **Web**: web-based APIs, where you send data to Azure for processing. * **Connected container**: Docker containers that enable you to [deploy Cognitive services on premises](cognitive-services-container-support.md), and maintain an internet connection for billing and metering. For more information, see [Azure Cognitive Services pricing](https://azure.micro 4. In the window that appears, select both a **Tier** and **Auto-renewal** option. - * **Commitment tier** - The commitment tier for the feature. The commitment tier will be enabled immediately when you click **Purchase** and you will be charged the commitment amount on a pro-rated basis. + * **Commitment tier** - The commitment tier for the feature. The commitment tier is enabled immediately when you select **Purchase** and you will be charged the commitment amount on a pro-rated basis. - * **Auto-renewal** - Choose how you want to renew, change, or cancel the current commitment plan starting with the next billing cycle. If you decide to auto-renew, the **Auto-renewal date** is the date (in your local timezone) when you will be charged for the next billing cycle. This date coincides with the start of the calendar month. + * **Auto-renewal** - Choose how you want to renew, change, or cancel the current commitment plan starting with the next billing cycle. 
If you decide to auto-renew, the **Auto-renewal date** is the date (in your local timezone) when you'll be charged for the next billing cycle. This date coincides with the start of the calendar month. > [!CAUTION] > Once you click **Purchase** you will be charged for the tier you select. Once purchased, the commitment plan is non-refundable. For more information, see [Azure Cognitive Services pricing](https://azure.micro ## Overage pricing -If you use the resource above the quota provided, you will be charged for the additional usage as per the overage amount mentioned in the commitment tier. +If you use the resource above the quota provided, you'll be charged for the additional usage as per the overage amount mentioned in the commitment tier. ## Purchase a different commitment plan -The commitment plans have a calendar month commitment period. You can purchase a commitment plan at any time from the default pay-as-you-go pricing model. When you purchase a plan, you will be charged a pro-rated price for the remaining month. During the commitment period, you cannot change the commitment plan for the current month. However, you can choose a different commitment plan for the next calendar month. The billing for the next month would happen on the first day of the next month. +The commitment plans have a calendar month commitment period. You can purchase a commitment plan at any time from the default pay-as-you-go pricing model. When you purchase a plan, you'll be charged a pro-rated price for the remaining month. During the commitment period, you can't change the commitment plan for the current month. However, you can choose a different commitment plan for the next calendar month. The billing for the next month would happen on the first day of the next month. If you need a larger commitment plan than any of the ones offered, contact `csgate@microsoft.com`. ## End a commitment plan -If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You will be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of each month to end a commitment plan, and not be charged for the following month. +If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You'll be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of each month to end a commitment plan, and not be charged for the following month. ## See also |
cognitive-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md | These users are the gatekeepers for the Language applications in production envi * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) :::column-end::: |
cognitive-services | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md | See [training modes](how-to/train-model.md#training-modes) for more information. Yes, all the APIs are available. * [Authoring APIs](https://aka.ms/clu-authoring-apis)-* [Prediction API](https://aka.ms/clu-runtime-api) +* [Prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation) ## Next steps |
cognitive-services | Deploy Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-model.md | -Once you are satisfied with how your model performs, it's ready to be deployed, and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](https://aka.ms/clu-runtime-api). +Once you are satisfied with how your model performs, it's ready to be deployed, and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation). ## Prerequisites See [project development lifecycle](../overview.md#project-development-lifecycle ## Deploy model -After you have reviewed the model's performance and decide it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/clu-runtime-api). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum on 10 deployments in your project. +After you have reviewed the model's performance and decide it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum on 10 deployments in your project. # [Language Studio](#tab/language-studio) |
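As a rough illustration of querying a deployment through the prediction API referenced above, the following curl sketch targets the 2023-04-01 `analyze-conversations` route. The endpoint, key, project name, and the `production` deployment name are placeholders, and the exact request body shape should be confirmed against the linked REST reference.

```bash
# Hypothetical endpoint, key, and project values - replace with your own.
ENDPOINT="https://<your-language-resource>.cognitiveservices.azure.com"
KEY="<your-language-resource-key>"

curl -X POST "${ENDPOINT}/language/:analyze-conversations?api-version=2023-04-01" \
  -H "Ocp-Apim-Subscription-Key: ${KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "kind": "Conversation",
        "analysisInput": {
          "conversationItem": { "id": "1", "participantId": "1", "text": "Book me a flight to Cairo" }
        },
        "parameters": { "projectName": "<your-project>", "deploymentName": "production" }
      }'
```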
cognitive-services | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/faq.md | Yes, only for predictions, and [samples are available](https://aka.ms/cluSampleC Yes, all the APIs are available. * [Authoring APIs](https://aka.ms/clu-authoring-apis)-* [Prediction API](https://aka.ms/clu-runtime-api) +* [Prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation) ## Next steps |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md | Follow these steps to get the most out of your model: 5. **Improve the model**: After reviewing the model's performance, you can then learn how you can improve the model. -6. **Deploy the model**: Deploying a model makes it available for use via the [prediction API](https://aka.ms/clu-runtime-api). +6. **Deploy the model**: Deploying a model makes it available for use via the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation). 7. **Predict intents**: Use your custom model to predict intents from user's utterances. As you use orchestration workflow, see the following reference documentation and |Development option / language |Reference documentation |Samples | |||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |-|REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-runtime-api) | | +|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation) | | |C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) | |Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) | |
cognitive-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md | Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
communication-services | Chat Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/chat-metrics.md | + + Title: Chat metrics definitions for Azure Communication Service ++description: This document covers definitions of chat metrics available in the Azure portal. ++++ Last updated : 06/23/2023+++++# Chat metrics overview ++Azure Communication Services currently provides metrics for all ACS primitives. [Azure Metrics Explorer](../../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that Chat requests emit. ++## Where to find metrics ++Primitives in Azure Communication Services emit metrics for API requests. These metrics can be found in the Metrics tab under your Communication Services resource. You can also create permanent dashboards using the workbooks tab under your Communication Services resource. ++## Metric definitions ++All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`. ++More information on supported aggregation types and time series aggregations can be found in [Advanced features of Azure Metrics Explorer](../../../azure-monitor/essentials/metrics-charts.md#aggregation). ++- **Operation** - All operations or routes that can be called on the Azure Communication Services Chat gateway. +- **Status Code** - The status code response sent after the request. +- **StatusSubClass** - The status code series sent after the response. ++### Chat API request metric operations ++The following operations are available on Chat API request metrics: ++| Operation / Route | Description | +| -- | - | +| GetChatMessage | Gets a message by message ID. | +| ListChatMessages | Gets a list of chat messages from a thread. | +| SendChatMessage | Sends a chat message to a thread. | +| UpdateChatMessage | Updates a chat message. | +| DeleteChatMessage | Deletes a chat message. | +| GetChatThread | Gets a chat thread. | +| ListChatThreads | Gets the list of chat threads of a user. | +| UpdateChatThread | Updates a chat thread's properties. | +| CreateChatThread | Creates a chat thread. | +| DeleteChatThread | Deletes a thread. | +| GetReadReceipts | Gets read receipts for a thread. | +| SendReadReceipt | Sends a read receipt event to a thread, on behalf of a user. | +| SendTypingIndicator | Posts a typing event to a thread, on behalf of a user. | +| ListChatThreadParticipants | Gets the members of a thread. | +| AddChatThreadParticipants | Adds thread members to a thread. If members already exist, no change occurs. | +| RemoveChatThreadParticipant | Removes a member from a thread. | +++If a request is made to an operation that isn't recognized, you receive a "Bad Route" value response. +## Next steps ++- Learn more about [Data Platform Metrics](../../../azure-monitor/essentials/data-platform-metrics.md). |
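As a sketch of pulling these API request metrics outside the portal, the following Azure CLI call requests hourly counts split by the Operation dimension. The resource ID is a placeholder and the metric name `APIRequestChat` is an assumption - confirm the exact metric name in the Metrics tab of your Communication Services resource.

```bash
# Hourly Chat API request counts for an ACS resource, split by Operation.
# Placeholder resource ID; metric name is assumed - verify it in the portal's metric picker.
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Communication/communicationServices/<acs-name>"

az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric "APIRequestChat" \
  --aggregation Count \
  --interval PT1H \
  --dimension Operation
```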
communication-services | Call Automation Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/call-automation-metrics.md | + + Title: Call automation metrics definitions for Azure Communication Service ++description: This document covers definitions of call automation metrics available in the Azure portal. ++++ Last updated : 06/23/2023+++++# Call automation metrics overview ++Azure Communication Services currently provides metrics for all ACS primitives. [Azure Metrics Explorer](../../../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that call automation requests emit. ++## Where to find metrics ++Primitives in Azure Communication Services emit metrics for API requests. These metrics can be found in the Metrics tab under your Communication Services resource. You can also create permanent dashboards using the workbooks tab under your Communication Services resource. ++## Metric definitions ++All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`. ++More information on supported aggregation types and time series aggregations can be found in [Advanced features of Azure Metrics Explorer](../../../../azure-monitor/essentials/metrics-charts.md#aggregation). ++- **Operation** - All operations or routes that can be called on the Azure Communication Services Call Automation gateway. +- **Status Code** - The status code response sent after the request. +- **StatusSubClass** - The status code series sent after the response. ++### Call Automation API requests ++The following operations are available on Call Automation API request metrics: ++| Operation / Route | Description | +| -- | - | +| Create Call | Create an outbound call to a user. | +| Answer Call | Answer an inbound call. | +| Redirect Call | Redirect an inbound call to another user. | +| Reject Call | Reject an inbound call. | +| Transfer Call To Participant | Transfer 1:1 call to another user. | +| Play | Play audio to call participants. | +| PlayPrompt | Play a prompt to users as part of the Recognize action. | +| Recognize | Recognize user input from call participants. | +| Add Participants | Add a participant to a call. | +| Remove Participants | Remove a participant from a call. | +| HangUp Call | Hang up your call leg. | +| Terminate Call | End the call for all participants. | +| Get Call | Get details about a call. | +| Get Participant | Get details on a call participant. | +| Get Participants | Get all participants in a call. | +| Delete Call | Delete a call. | +| Cancel All Media Operations | Cancel all ongoing or queued media operations in a call. | ++++## Next steps ++- Learn more about [Data Platform Metrics](../../../../azure-monitor/essentials/data-platform-metrics.md). |
communication-services | Monitor Direct Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/monitoring-troubleshooting-telephony/monitor-direct-routing.md | + + Title: "Monitor Azure Communication Services direct routing" + Last updated : 06/22/2023+++audience: ITPro +++description: Learn how to monitor Azure Communication Services direct routing configuration, including Session Border Controllers, cloud components, and Telecom trunks. +++# Monitor direct routing ++This article describes how to monitor your direct routing configuration. ++The ability to make and receive calls by using direct routing involves the following components: ++- Session Border Controllers (SBCs) +- Direct routing components in the Microsoft Cloud +- Telecom trunks ++If you have difficulties troubleshooting issues, you can open a support case with your SBC vendor or Microsoft. ++Microsoft is working on providing more tools for troubleshooting and monitoring. Check the documentation periodically for updates. ++## Monitoring availability of Session Border Controllers using Session Initiation Protocol (SIP) OPTIONS messages ++Azure Communication Services direct routing uses SIP OPTIONS sent by the Session Border Controller to monitor SBC health. There are no actions required from the Azure administrator to enable the SIP OPTIONS monitoring. ++## Monitor with Azure portal and SBC logs ++In some cases, especially during the initial pairing, there might be issues related to misconfiguration of the SBCs or the direct routing service. ++You can use the following tools to monitor your configuration: ++- Azure portal +- SBC logs ++In the direct routing section of the Azure portal, you can check [SBC connection status](../direct-routing-provisioning.md#session-border-controller-connection-status). +If calls can be made, you can also check [Azure Monitor logs](../../analytics/logs/voice-and-video-logs.md) that provide descriptive SIP error codes. ++SBC logs are also a great source of data for troubleshooting. Refer to your SBC vendor's documentation for how to configure and collect those logs. ++## Next steps ++[Troubleshoot direct routing connectivity](./troubleshoot-tls-certificate-sip-options.md) +[Troubleshoot outbound calling](./troubleshoot-outbound-calls.md) |
communication-services | Troubleshoot Outbound Calls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/monitoring-troubleshooting-telephony/troubleshoot-outbound-calls.md | + + Title: Troubleshoot Azure Communication Services direct routing outbound calls issues +description: Learn how to troubleshoot Azure Communication Services direct routing potential issues that affect outbound calls. Last updated : 06/22/2023++++audience: ITPro +++++# Issues that affect outbound direct routing calls ++You might experience various issues when you use direct routing to make outbound calls from an app built on Azure Communication Services (ACS) Software Development Kit (SDK) to a Session Border Controller (SBC). These issues include: ++- An incorrect or anonymous caller ID is displayed to the call recipient. +- A connection to the SBC isn't established. +- Some users are unable to make calls. +- No users in a tenant are able to make calls. ++This article discusses potential causes of these issues, and provides resolutions that you can try. ++## Incorrect caller ID displayed to the recipient ++When you use direct routing, the caller ID information that is delivered to the call recipient is listed in the `From` and `P-Asserted-Identity` headers in the Session Initiation Protocol (SIP) options message. ++The `From` header contains any of the following items: ++- The phone number that's used as an `alternateCallerId` property of a `startCall` method in [Client Calling SDK](../../../quickstarts/telephony/pstn-call.md). + If an `alternateCallerId` wasn't provided, it's replaced with "anonymous". +- The phone number string that's passed when creating a `PhoneNumberIdentifier` object in [Call Automation SDK](../../../how-tos/call-automation/actions-for-call-control.md#make-an-outbound-call) +- The phone number of the original caller if an Call Automation SDK [redirects the call](../../../how-tos/call-automation/actions-for-call-control.md#redirect-a-call). +- The phone number selected as a Caller ID in Omnichannel Agent client application. ++The `P-Asserted-Identity` header contains the phone number of the user who is billed for the call. The `Privacy:id` indicates that the information in the header has to be hidden from the call recipient. ++### Cause ++If the information in the `From` and `P-Asserted-Identity` headers doesn't match, and if the Public Switched Telephone Network (PSTN) prioritizes the `P-Asserted-Identity` header information over the `From` header information, then incorrect information is displayed. ++### Resolution ++To make sure that the correct caller ID is displayed to the call recipient, configure the SBC to either remove the `P-Asserted-Identity` header from the SIP INVITE message or modify its contents. ++## Connection to the SBC not established ++Sometimes, calls reach the SBC but no connection is established. In this situation, when the SBC receives a SIP OPTIONS message from Microsoft, it returns a failure message that includes error codes in the range of 400 to 699. ++Any of the following causes might prevent a connection to the SBC. ++### Cause 1 ++The SIP failure message is coming from another telephony device that is on the same network as the SBC. ++### Resolution 1 ++Troubleshoot the other device to fix the error. If you need assistance, contact the device vendor. ++### Cause 2 ++Your PSTN provider is experiencing some issue and is sending the SIP failure message. 
This is most likely the case if the failure error code is SIP 403 or SIP 404. ++### Resolution 2 ++Contact your PSTN provider for support to fix the issue. ++### Cause 3 ++The issue isn't coming from another device on the network or by your PSTN provider. However, the cause is otherwise unknown. ++### Resolution 3 ++Contact the SBC vendor support to fix the issue. ++## Some users are unable to make calls ++If the connection between the Microsoft and the SBC is working correctly, but some users or applications can't make calls, the issue might be an incorrect scope of an Azure Communication Services access token ++### Cause 1 ++Azure Communication Services access token was created with a chat scope. ++### Resolution 1 ++Make sure that all the Azure Communication Services access tokens that are used for making calls are generated [with a `voip` scope](../../identity-model.md#access-tokens). ++### Cause 2 ++None of the patterns in the Voice Routes match the dialed number. ++### Resolution 2 ++Make sure that the following conditions are true: ++- There's a pattern in the Voice Route that matches the dialed number. +- The SBC that's specified for the Voice Route is **Online**. If it's **Inactive**, either set it up to become **Online** or select a different SBC that is **Online** ++### Cause 3 ++The SBC isn't responding to SIP OPTIONS messages because some device on the network, such as a firewall, is blocking the messages. ++### Resolution 3 ++Make sure that the SIP Signaling IPs and FQDNs are allowed on all network devices that connect the SBC to the internet. The IP addresses that must be allowed are listed at [SIP Signaling: FQDNs](../direct-routing-infrastructure.md#sip-signaling-fqdns). ++## Related articles ++- [Troubleshoot direct routing connectivity](./troubleshoot-tls-certificate-sip-options.md) +- [Plan for Azure direct routing](../direct-routing-infrastructure.md) +- [Pair the Session Border Controller and configure voice routing](../direct-routing-provisioning.md) +- [Outbound call to a phone number](../../../quickstarts/telephony/pstn-call.md) |
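As a quick, hedged check for Resolution 3 above, the following probe tests outbound TCP reachability to the Microsoft SIP proxy signaling FQDNs from a host in the same network segment as the SBC. Port 5061 is assumed to be the TLS signaling port used by your trunk; failures usually point at a firewall blocking the SIP signaling FQDNs.

```bash
# Probe outbound connectivity to the direct routing SIP proxy FQDNs on the assumed TLS port 5061.
for fqdn in sip.pstnhub.microsoft.com sip2.pstnhub.microsoft.com sip3.pstnhub.microsoft.com; do
  if timeout 5 bash -c "exec 3<>/dev/tcp/${fqdn}/5061" 2>/dev/null; then
    echo "${fqdn}:5061 reachable"
  else
    echo "${fqdn}:5061 NOT reachable - check firewall rules for the SIP signaling FQDNs"
  fi
done
```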
communication-services | Troubleshoot Tls Certificate Sip Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/monitoring-troubleshooting-telephony/troubleshoot-tls-certificate-sip-options.md | + + Title: "Troubleshoot Azure Communication Services direct routing TLS certificate and SIP OPTIONS issues" + Last updated : 06/22/2023+++audience: ITPro +++description: Learn how to troubleshoot Azure Communication Services direct routing connectivity with Session Border Controllers - TLS certificate and SIP OPTIONS issues. +++# Session Border Controller (SBC) connectivity issues ++When you set up direct routing, you might experience the following Session Border Controller (SBC) connectivity issues: ++- Session Initiation Protocol (SIP) OPTIONS aren't received. +- Transport Layer Security (TLS) connection problems occur. +- The SBC doesn't respond. +- The SBC is marked as inactive in the Azure portal. ++The following conditions are most likely to cause such issues: ++- A TLS certificate experiences problems. +- An SBC isn't configured correctly for direct routing. ++This article lists some common issues that are related to SIP OPTIONS and TLS certificates, and provides resolutions that you can try. ++## Overview of the SIP OPTIONS process ++- The SBC sends a TLS connection request that includes a TLS certificate to the SIP proxy server Fully Qualified Domain Name (FQDN) (for example, **sip.pstnhub.microsoft.com**). ++- The SIP proxy checks the connection request. ++ - If the request isn't valid, the TLS connection is closed and the SIP proxy doesn't receive SIP OPTIONS from the SBC. + - If the request is valid, the TLS connection is established, and the SBC sends SIP OPTIONS to the SIP proxy. ++- After the SIP proxy receives SIP OPTIONS, it checks the Record-Route to determine whether the SBC FQDN belongs to a known Communication resource. If the FQDN information isn't detected there, the SIP proxy checks the Contact header. ++- If the SBC FQDN is detected and recognized, the SIP proxy sends a **200 OK** message by using the same TLS connection. ++- The SIP proxy sends SIP OPTIONS to the SBC FQDN that is listed in the Contact header of the SIP OPTIONS received from the SBC. ++- After receiving SIP OPTIONS from the SIP proxy, the SBC responds by sending a **200 OK** message. This step confirms that the SBC is healthy. ++- As the final step, the SBC is marked as **Online** in the Azure portal. ++## SIP OPTIONS issues ++After the TLS connection is successfully established, and the SBC is able to send and receive messages to and from the SIP proxy, there might still be problems that affect the format or content of SIP OPTIONS. ++### SBC doesn't receive a "200 OK" response from SIP proxy ++This situation might occur if you're using an older version of TLS. To enforce stricter security, enable TLS 1.2. ++Make sure that your SBC certificate isn't self-signed and that you got it from a [trusted Certificate Authority (CA)](../direct-routing-infrastructure.md#sbc-certificates-and-domain-names). ++If you're using TLS version 1.2 or higher, and your SBC certificate is valid, then the issue might occur because the FQDN is misconfigured in your SIP profile and not recognized as belonging to any Communication resource. Check for the following conditions, and fix any errors that you find: ++- The FQDN provided by the SBC in the Record-Route or Contact header is different from what is configured in the Azure Communication resource. 
+- The Contact header contains an IP address instead of the FQDN. +- The domain isn't [fully validated](../../../how-tos/telephony/domain-validation.md). If you add an FQDN that wasn't validated previously, you must validate it. ++### SBC receives "200 OK" response but not SIP OPTIONS ++The SBC receives the **200 OK** response from the SIP proxy but not the SIP OPTIONS that were sent from the SIP proxy. If this error occurs, make sure that the FQDN that's listed in the Record-Route or Contact header is correct and resolves to the correct IP address. ++Another possible cause for this issue might be firewall rules that are preventing incoming traffic. Make sure that firewall rules are configured to allow incoming connections from all [SIP proxy signaling IP addresses](../direct-routing-infrastructure.md#sip-signaling-fqdns). ++### SBC status is intermittently inactive ++This issue might occur if: + +- The SBC is configured to send SIP OPTIONS not to FQDNs but to the specific IP addresses that they resolve to. During maintenance or outages, these IP addresses might change to a different datacenter. Therefore, the SBC is sending SIP OPTIONS to an inactive or unresponsive datacenter. To resolve the issue: ++ - Make sure that the SBC is discoverable and configured to send SIP OPTIONS to only FQDNs. + - Make sure that all devices in the route, such as SBCs and firewalls, are configured to allow communication to and from all Microsoft SIP signaling FQDNs. + - To provide a failover option when the connection from an SBC is made to a datacenter that's experiencing an issue, the SBC must be configured to use all three SIP proxy FQDNs: ++ - sip.pstnhub.microsoft.com + - sip2.pstnhub.microsoft.com + - sip3.pstnhub.microsoft.com ++ > [!NOTE] + > Devices that support DNS names can use sip-all.pstnhub.microsoft.com to resolve to all possible IP addresses. ++ For more information, see [SIP Signaling: FQDNs](../direct-routing-infrastructure.md#sip-signaling-fqdns). ++- The installed root or intermediate certificate isn't part of the SBC certificate chain issuer. When the SBC starts the three-way handshake during the authentication process, the Azure service is unable to validate the certificate chain on the SBC and resets the connection. The SBC may be able to authenticate again as soon as the public root certificate is loaded again on the service cache or the certificate chain is fixed on the SBC. Make sure that the intermediate and root certificates installed on the SBC are correct. + + For more information about certificates, see [SBC certificates and domain names](../direct-routing-infrastructure.md#sbc-certificates-and-domain-names). + +### FQDN doesn't match the contents of CN or SAN in the provided certificate ++This issue occurs if a wildcard doesn't match a lower-level subdomain. For example, the wildcard `*.contoso.com` would match `sbc1.contoso.com`, but not `sbc.acs.contoso.com`. You can't have multiple levels of subdomains under a wildcard. If the FQDN doesn't match the Common Name (CN) or Subject Alternate Name (SAN) in the provided certificate, request a new certificate that matches your domain names. ++For more information about certificates, see [SBC certificates and domain names](../direct-routing-infrastructure.md#sbc-certificates-and-domain-names). ++## TLS connection issues ++If the TLS connection is closed right away and SIP OPTIONS aren't received from the SBC, or if **200 OK** isn't received from the SBC, then the problem might be with the TLS version. 
The TLS version configured on the SBC should be 1.2 or higher. ++### SBC certificate is self-signed or not from a trusted CA ++If the SBC certificate is self-signed, it isn't valid. Make sure that the SBC certificate is obtained from a trusted Certificate Authority (CA). ++For a list of supported CAs, see [SBC certificates and domain names](../direct-routing-infrastructure.md#sbc-certificates-and-domain-names). ++### SBC doesn't trust SIP proxy certificate ++If the SBC doesn't trust the SIP proxy certificate, download and install the Baltimore CyberTrust root certificate **and** the DigiCert Global Root G2 certificates on the SBC. To download those certificates, see [Microsoft 365 encryption chains](/microsoft-365/compliance/encryption-office-365-certificate-chains). ++For a list of supported CAs, see [SBC certificates and domain names](../direct-routing-infrastructure.md#sbc-certificates-and-domain-names). ++### SBC certificate is invalid ++If the SBC connection status in the Azure portal indicates that the SBC certificate is expired, request or renew the certificate from a trusted Certificate Authority (CA). Then, install it on the SBC. For a list of supported CAs, see [SBC certificates and domain names](../direct-routing-infrastructure.md#sbc-certificates-and-domain-names). + +When you renew the SBC certificate, you must remove the TLS connections that were established from the SBC to Microsoft with the old certificate and re-establish them with the new certificate. Doing so ensures that certificate expiration warnings aren't triggered in the Azure portal. +To remove the old TLS connections, restart the SBC during a time frame that has low traffic, such as a maintenance window. If you can't restart the SBC, contact the vendor for instructions to force the closure of all old TLS connections. ++### SBC certificate or intermediary certificates are missing in the SBC TLS "Hello" message ++Check that a valid SBC certificate and all required intermediate certificates are installed correctly, and that the TLS connection settings on the SBC are correct. ++Sometimes, even if everything looks correct, a closer examination of the packet capture might reveal that the TLS certificate isn't provided to the Microsoft infrastructure. ++### SBC connection is interrupted ++The TLS connection is interrupted or not set up even though the certificates and SBC settings experience no issues. ++One of the intermediary devices (such as a firewall or a router) on the path between the SBC and the Microsoft network might close the TLS connection. Check for any connection issues within your managed network, and fix them. ++## Related articles ++- [Monitor direct routing](./monitor-direct-routing.md) +- [Plan for Azure direct routing](../direct-routing-infrastructure.md) +- [Pair the Session Border Controller and configure voice routing](../direct-routing-provisioning.md) +- [Outbound call to a phone number](../../../quickstarts/telephony/pstn-call.md) |
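A minimal command-line sketch for checking several of the certificate conditions above, assuming `sbc1.contoso.com:5061` stands in for your SBC FQDN and its TLS signaling port:

```bash
# Inspect the certificate chain and TLS version that the SBC presents on its SIP TLS port.
# Replace sbc1.contoso.com:5061 with your own SBC FQDN and port.
openssl s_client -connect sbc1.contoso.com:5061 -servername sbc1.contoso.com -showcerts </dev/null
# In the output, confirm that the handshake negotiates TLS 1.2 or higher, that the leaf
# certificate's CN/SAN matches the SBC FQDN, and that all intermediate certificates are
# included in the chain the SBC sends.
```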
communication-services | Add Custom Verified Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-custom-verified-domains.md | Click **Next** once you've completed this step. ### Configure sender authentication for custom domain+To configure sender authentication for your domains, additional DNS records need to be added to your domain. Below, we provide steps where Azure Communication Services will offer records that should be added to your DNS. However, depending on whether the domain you are registering is a root domain or a subdomain, you will need to add the records to the respective zone or make appropriate alterations to the records that we generate. ++As an example, let's consider adding SPF and DKIM records for the custom domain "sales.us.notification.azurecommtest.net." The following are different methods for adding these records to the DNS, depending on the level of the Zone where the records are being added. ++1. Zone: **sales.us.notification.azurecommtest.net** ++ | Record | Type | Name | Value | + | | | | | + |SPF | TXT | sales.us.notification.azurecommtest.net | v=spf1 include:spf.protection.outlook.com -all | + | DKIM | CNAME | selector1-azurecomm-prod-net._domainkey | selector1-azurecomm-prod-net._domainkey.azurecomm.net | + | DKIM2 | CNAME | selector2-azurecomm-prod-net._domainkey | selector2-azurecomm-prod-net._domainkey.azurecomm.net | ++The records that get generated in our portal assumes that you will be adding these records in DNS in this Zone **sales.us.notification.azurecommtest.net**. ++2. Zone: **us.notification.azurecommtest.net** ++ | Record | Type | Name | Value | + | | | | | + |SPF | TXT | sales | v=spf1 include:spf.protection.outlook.com -all | + | DKIM | CNAME | selector1-azurecomm-prod-net._domainkey.**sales** | selector1-azurecomm-prod-net._domainkey.azurecomm.net | + | DKIM2 | CNAME | selector2-azurecomm-prod-net._domainkey.**sales** | selector2-azurecomm-prod-net._domainkey.azurecomm.net | + +3. Zone: **notification.azurecommtest.net** ++ | Record | Type | Name | Value | + | | | | | + |SPF | TXT | sales.us | v=spf1 include:spf.protection.outlook.com -all | + | DKIM | CNAME | selector1-azurecomm-prod-net._domainkey.**sales.us** | selector1-azurecomm-prod-net._domainkey.azurecomm.net | + | DKIM2 | CNAME | selector2-azurecomm-prod-net._domainkey.**sales.us** | selector2-azurecomm-prod-net._domainkey.azurecomm.net | + ++ +4. Zone: **azurecommtest.net** ++ | Record | Type | Name | Value | + | | | | | + |SPF | TXT | sales.us.notification | v=spf1 include:spf.protection.outlook.com -all | + | DKIM | CNAME | selector1-azurecomm-prod-net._domainkey.**sales.us.notification** | selector1-azurecomm-prod-net._domainkey.azurecomm.net | + | DKIM2 | CNAME | selector2-azurecomm-prod-net._domainkey.**sales.us.notification** | selector2-azurecomm-prod-net._domainkey.azurecomm.net | + +++#### Adding SPF and DKIM Records ++ 1. Navigate to **Provision Domains** and confirm that **Domain Status** is in "Verified" state. 2. You can add SPF and DKIM by clicking **Configure**. Add the following TXT record and CNAME records to your domain's registrar or DNS hosting provider. Refer to the [adding DNS records in popular domain registrars table](#cname-records) for information on how to add a TXT & CNAME record for your DNS provider. -Click **Next** once you've completed this step. + Click **Next** once you've completed this step. 
:::image type="content" source="./media/email-domains-custom-spf.png" alt-text="Screenshot that shows the D N S records that you need to add for S P F validation for your verified domains."::: :::image type="content" source="./media/email-domains-custom-dkim-1.png" alt-text="Screenshot that shows the D N S records that you need to add for D K I M."::: :::image type="content" source="./media/email-domains-custom-dkim-2.png" alt-text="Screenshot that shows the D N S records that you need to add for additional D K I M records."::: |
communication-services | Pstn Call | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/pstn-call.md | If you want to clean up and remove a Communication Services subscription, you ca For more information, see the following articles: - Learn about [Calling SDK capabilities](../voice-video-calling/getting-started-with-calling.md)-- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md) |
communication-services | File Sharing Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md | For this quickstart, we'll be modifying files inside of the `src` folder. ### Install the Package -Use the `npm install` command to install the Azure Communication Services UI Library for JavaScript. +Use the `npm install` command to install the beta Azure Communication Services UI Library for JavaScript. ```bash -npm install @azure/communication-react +npm install @azure/communication-react@1.5.1-beta.5 ``` You may also want to: - [Add chat to your app](../quickstarts/chat/get-started.md) - [Creating user access tokens](../quickstarts/identity/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md)-- [Learn about authentication](../concepts/authentication.md)+- [Learn about authentication](../concepts/authentication.md) |
connectors | Connectors Create Api Sqlazure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md | tags: connectors [!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)] -This article shows how to access your SQL database from a workflow in Azure Logic Apps with the SQL Server connector. You can then create automated workflows that run when triggered by events in your SQL database or in other systems and run actions to manage your SQL data and resources. +This how-to guide shows how to access your SQL database from a workflow in Azure Logic Apps with the SQL Server connector. You can then create automated workflows that run when triggered by events in your SQL database or in other systems and run actions to manage your SQL data and resources. For example, your workflow can run actions that get, insert, and delete data or that can run SQL queries and stored procedures. Your workflow can check for new records in a non-SQL database, do some processing work, use the results to create new records in your SQL database, and send email alerts about the new records. For more information, review the [SQL Server managed connector reference](/conne * To connect to an on-premises SQL server, the following extra requirements apply, based on whether you have a Consumption or Standard logic app workflow. - * Consumption logic app workflow + * Consumption workflow * In multi-tenant Azure Logic Apps, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) installed on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). * In an ISE, you don't need the on-premises data gateway for SQL Server Authentication and non-Windows Authentication connections, and you can use the ISE-versioned SQL Server connector. For Windows Authentication, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). The ISE-version connector doesn't support Windows Authentication, so you have to use the regular SQL Server managed connector. - * Standard logic app workflow + * Standard workflow You can use the SQL Server built-in connector or managed connector. For more information, review the [SQL Server managed connector reference](/conne The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create logic app workflows: -* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md) +* Consumption workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md) -* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md) +* Standard workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md) ### [Consumption](#tab/consumption) -1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. +1. 
In the [Azure portal](https://portal.azure.com), open your Consumption logic app and blank workflow in the designer. -1. Find and select the [SQL Server trigger](/connectors/sql/#trigger) that you want to use. +1. In the designer, under the search box, select **Standard**. Then, [follow these general steps](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger) to add the [SQL Server managed trigger you want](/connectors/sql/#triggers). - 1. On the designer, under the search box, select **Standard**. + This example continues with the trigger named **When an item is created**. - 1. In the search box, enter **sql server**. +1. If prompted, provide the [information for your connection](#create-connection). When you're done, select **Create**. - 1. From the triggers list, select the SQL trigger that you want. +1. After the trigger information box appears, provide the necessary information required by [your selected trigger](/connectors/sql/#triggers). - This example continues with the trigger named **When an item is created**. + For this example, in the trigger named **When an item is created**, provide the values for the SQL server name and database name, if you didn't previously provide them. Otherwise, from the **Table name** list, select the table that you want to use. Select the **Frequency** and **Interval** to set the schedule for the trigger to check for new items. - ![Screenshot showing the Azure portal, Consumption logic app workflow designer, search box with "sql server", and "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-consumption.png) --1. Provide the [information for your connection](#create-connection). When you're done, select **Create**. --1. Provide the information required by [your selected trigger](/connectors/sql/#triggers). + ![Screenshot shows Consumption workflow designer and managed action named When an item is created.](./media/connectors-create-api-sqlazure/when-item-created-consumption.png) 1. If any other properties are available for this trigger, open the **Add new parameter** list, and select those properties relevant to your scenario. The following steps use the Azure portal, but with the appropriate Azure Logic A ### [Standard](#tab/standard) -1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. --1. Find and select the SQL Server trigger that you want to use. -- 1. On the designer, select **Choose an operation**. -- 1. Under the **Choose an operation** search box, select either of the following options: -- * **Built-in** to view the [SQL Server built-in connector triggers](/azure/logic-apps/connectors/built-in/reference/sql/#triggers) -- * **Azure** to view the [SQL Server managed connector triggers](/connectors/sql/#triggers) -- 1. In the search box, enter **sql server**. -- 1. From the triggers list, select the SQL trigger that you want. -- * [Built-in connector triggers](/azure/logic-apps/connectors/built-in/reference/sql/#triggers) +1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and blank workflow in the designer. - This example selects the built-in trigger named **When a row is inserted**. +1. 
In the designer, [follow these general steps](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger) to find and add the SQL Server [built-in trigger](/azure/logic-apps/connectors/built-in/reference/sql/#triggers) or [managed trigger](/connectors/sql/#triggers) you want. - ![Screenshot showing Standard workflow designer and the selected built-in trigger named When a row is inserted.](./media/connectors-create-api-sqlazure/select-trigger-built-in-standard.png) + For example, you might select the built-in trigger named **When a row is inserted** or the managed trigger named **When a row is created**. This example continues with the built-in trigger named **When a row is inserted**. - * [Managed connector triggers](/connectors/sql/#triggers) +1. If prompted, provide the [information for your connection](#create-connection). When you're done, select **Create**. - This example selects the built-in trigger named **When a row is created**. +1. After the trigger information box appears, provide the information required by your selected [built-in trigger](/azure/logic-apps/connectors/built-in/reference/sql/#triggers) or [managed trigger](/connectors/sql/#triggers). - ![Screenshot showing Standard workflow designer and the selected managed trigger named When a row is created.](./media/connectors-create-api-sqlazure/select-trigger-managed-standard.png) + For this example, in the trigger named **When a row is inserted**, from the **Table name** list, select the table that you want to use. -1. Provide the [information for your connection](#create-connection). When you're done, select **Create**. --1. Provide the information required by your selected [built-in trigger](/azure/logic-apps/connectors/built-in/reference/sql/#triggers) or [managed trigger](/connectors/sql/#triggers). -- The following example continues with the built-in trigger named **When a row is inserted**. From the **Table name** list, select the table that you want to use. -- ![Screenshot showing Standard workflow designer and the built-in action named When a row is inserted.](./media/connectors-create-api-sqlazure/when-row-inserted-standard.png) + ![Screenshot shows Standard workflow designer and built-in action named When a row is inserted.](./media/connectors-create-api-sqlazure/when-row-inserted-standard.png) 1. If any other properties are available for this trigger, open the **Add new parameter** list, and select those properties relevant to your scenario. The following steps use the Azure portal, but with the appropriate Azure Logic A 1. When you're done, save your workflow. On the designer toolbar, select **Save**. ++ When you save your workflow, this step automatically publishes your updates to your deployed logic app, which is live in Azure. With only a trigger, your workflow just checks the SQL database based on your specified schedule. You have to [add an action](#add-sql-action) that responds to the trigger. 
When you save your workflow, this step automatically publishes your updates to y The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use Visual Studio to edit Consumption logic app workflows or Visual Studio Code to the following tools to edit logic app workflows: -* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md) +* Consumption workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md) -* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md) +* Standard workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md) In this example, the logic app workflow starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from an SQL database. ### [Consumption](#tab/consumption) -1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. --1. Find and select the [SQL Server action](/connectors/sql/#actions) that you want to use. -- This example continues with the action named **Get row**. -- 1. Under the trigger or action where you want to add the SQL action, select **New step**. -- Or, to add an action between existing steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**. +1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer. - 1. Under the **Choose an operation** search box, select **Standard**. +1. In the designer, [follow these general steps](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action) to add the [SQL Server managed action you want](/connectors/sql/#actions). - 1. In the search box, enter **sql server**. + This example continues with the action named **Get row**, which gets a single record. - 1. From the actions list, select the SQL Server action that you want. +1. If prompted, provide the [information for your connection](#create-connection). When you're done, select **Create**. - This example uses the **Get row** action, which gets a single record. +1. After the action information box appears, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want. - ![Screenshot showing the Azure portal, workflow designer for Consumption logic app, the search box with "sql server", and "Get row" selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-consumption.png) + For this example, the table name is **SalesLT.Customer**. -1. Provide the [information for your connection](#create-connection). When you're done, select **Create**. --1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want. -- In this example, the table name is **SalesLT.Customer**. 
-- ![Screenshot showing Consumption workflow designer and the "Get row" action with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-consumption.png) + ![Screenshot shows Consumption workflow designer and action named Get row with the example table name and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-consumption.png) This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/). In this example, the logic app workflow starts with the [Recurrence trigger](../ ### [Standard](#tab/standard) -1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. --1. Find and select the SQL Server action that you want to use. -- 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**. -- Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**. -- 1. Under the **Choose an operation** search box, select either of the following options: -- * **Built-in** to view the [SQL Server built-in connector actions](/azure/logic-apps/connectors/built-in/reference/sql/#actions) -- * **Azure** to view the [SQL Server managed connector actions](/connectors/sql/#actions) -- 1. In the search box, enter **sql server**. +1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and workflow in the designer. - 1. From the actions list, select the SQL Server action that you want to use. +1. In the designer, [follow these general steps](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action) to find and add the SQL Server [built-in action](/azure/logic-apps/connectors/built-in/reference/sql/#actions) or [managed action](/connectors/sql/#actions) you want. - * [Built-in connector actions](/azure/logic-apps/connectors/built-in/reference/sql/#actions) + For example, you might select the built-in action named **Execute query** or the managed action named **Get row**, which gets a single record. This example continues with the managed action named **Get row**. - This example selects the built-in action named **Execute query**. +1. If prompted, provide the [information for your connection](#create-connection). When you're done, select **Create**. - ![Screenshot showing the designer search box with "sql server" and "Built-in" selected underneath with the "Execute query" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-execute-query-action-standard.png) +1. After the action information box appears, provide the values for the SQL server name and database name, if you didn't previously provide them. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want. - * [Managed connector actions](/connectors/sql/#actions) + For this example, the table name is **SalesLT.Customer**. - This example selects the action named **Get row**, which gets a single record. 
-- ![Screenshot showing the designer search box with "sql server" and "Azure" selected underneath with the "Get row" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-standard.png) --1. Provide the [information for your connection](#create-connection). When you're done, select **Create**. --1. Provide the information required by your selected action. -- The following example continues with the managed action named **Get row**. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In this example, the table name is **SalesLT.Customer**. In the **Row id** property, enter the ID for the record that you want. -- ![Screenshot showing Standard workflow designer and managed action "Get row" with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png) + ![Screenshot shows Standard workflow designer and managed action named Get row with example table name and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png) This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, review the [managed connector's reference page](/connectors/sql/). In the connection information box, complete the following steps: The following examples show how the connection information box might appear if you use the SQL Server *managed* connector and select **Azure AD Integrated** authentication: - * Consumption logic app workflows + **Consumption workflows** - ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" cloud connection information with selected authentication type for Consumption.](./media/connectors-create-api-sqlazure/select-azure-ad-sql-cloud-consumption.png) + ![Screenshot shows Azure portal, Consumption workflow, and SQL Server cloud connection information with selected authentication type.](./media/connectors-create-api-sqlazure/select-azure-ad-sql-cloud-consumption.png) - * Standard logic app workflows + **Standard workflows** - ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" cloud connection information with selected authentication type for Standard.](./media/connectors-create-api-sqlazure/select-azure-ad-sql-cloud-standard.png) + ![Screenshot shows Azure portal, Standard workflow, and SQL Server cloud connection information with selected authentication type.](./media/connectors-create-api-sqlazure/select-azure-ad-sql-cloud-standard.png) 1. After you select **Azure AD Integrated**, select **Sign in**. Based on whether you use Azure SQL Database or SQL Managed Instance, select your user credentials for authentication. In the connection information box, complete the following steps: > `Server=tcp:{your-server-address}.database.windows.net,1433;Initial Catalog={your-database-name};Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;` > > * By default, tables in system databases are filtered out, so they might not automatically appear when you select a system database. 
As an alternative, you can manually enter the table name after you select **Enter custom value** from the database list.- > This database information box looks similar to the following example: - * Consumption logic app workflows + **Consumption workflows** - ![Screenshot showing SQL cloud database cloud information with sample values for Consumption.](./media/connectors-create-api-sqlazure/azure-sql-database-information-consumption.png) + ![Screenshot shows SQL cloud database cloud information with sample values for Consumption.](./media/connectors-create-api-sqlazure/azure-sql-database-information-consumption.png) - * Standard logic app workflows + **Standard workflows** - ![Screenshot showing SQL cloud database information with sample values for Standard.](./media/connectors-create-api-sqlazure/azure-sql-database-information-standard.png) + ![Screenshot shows SQL cloud database information with sample values for Standard.](./media/connectors-create-api-sqlazure/azure-sql-database-information-standard.png) 1. Now, continue with the steps that you haven't completed yet in either [Add a SQL trigger](#add-sql-trigger) or [Add a SQL action](#add-sql-action). In the connection information box, complete the following steps: The following examples show how the connection information box might appear if you select **Windows** authentication. - * Consumption logic app workflows + **Consumption workflows** - ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" on-premises connection information with selected authentication for Consumption.](./media/connectors-create-api-sqlazure/select-windows-authentication-consumption.png) + ![Screenshot shows Azure portal, Consumption workflow, and SQL Server on-premises connection information with selected authentication.](./media/connectors-create-api-sqlazure/select-windows-authentication-consumption.png) - * Standard logic app workflows + **Standard workflows** - ![Screenshot showing the Azure portal, workflow designer, and "SQL Server" on-premises connection information with selected authentication for Standard.](./media/connectors-create-api-sqlazure/select-windows-authentication-standard.png) + ![Screenshot shows Azure portal, Standard workflow, and SQL Server on-premises connection information with selected authentication.](./media/connectors-create-api-sqlazure/select-windows-authentication-standard.png) 1. When you're ready, select **Create**. Sometimes, you work with result sets so large that the connector doesn't return When you call a stored procedure by using the SQL Server connector, the returned output is sometimes dynamic. In this scenario, follow these steps: -1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. +1. In the [Azure portal](https://portal.azure.com), open your logic app and workflow in the designer. 1. View the output format by performing a test run. Copy and save your sample output. -1. In the designer, under the action where you call the stored procedure, add a new action. --1. In the **Choose an operation** box, find and select the action named [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action). +1. In the designer, under the action where you call the stored procedure, add the built-in action named [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action). 1. In the **Parse JSON** action, select **Use sample payload to generate schema**. 
When you call a stored procedure by using the SQL Server connector, the returned 1. When you're done, save your workflow. -1. To reference the JSON content properties, click inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want. +1. To reference the JSON content properties, select inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want. ## Next steps * [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) * [Built-in connectors for Azure Logic Apps](built-in.md)++ |
container-apps | Blue Green Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/blue-green-deployment.md | + + Title: Blue-Green Deployment in Azure Container Apps +description: Minimize downtime and reduce the risks associated with new releases by using Blue/Green deployment in Azure Container Apps. +++++ Last updated : 06/23/2023++zone_pivot_groups: azure-cli-bicep +++# Blue-Green Deployment in Azure Container Apps ++[Blue-Green Deployment](https://martinfowler.com/bliki/BlueGreenDeployment.html) is a software release strategy that aims to minimize downtime and reduce the risk associated with deploying new versions of an application. In a blue-green deployment, two identical environments, referred to as "blue" and "green," are set up. One environment (blue) is running the current application version and one environment (green) is running the new application version. ++Once the green environment is tested, live traffic is directed to it, and the blue environment is used to deploy a new application version during the next deployment cycle. ++You can enable blue-green deployment in Azure Container Apps by combining [container apps revisions](revisions.md), [traffic weights](traffic-splitting.md), and [revision labels](revisions.md#revision-labels). +++You use revisions to create instances of the blue and green versions of the application. ++| Revision | Description | +||| +| *Blue* revision | The revision labeled as *blue* is the currently running and stable version of the application. This revision is the one that users interact with, and it's the target of production traffic. | +| *Green* revision | The revision labeled as *green* is a copy of the *blue* revision except it uses a newer version of the app code and possibly a new set of environment variables. It doesn't receive any production traffic initially but is accessible via a labeled fully qualified domain name (FQDN). | ++After you test and verify the new revision, you can then point production traffic to the new revision. If you encounter issues, you can easily roll back to the previous version. ++| Actions | Description | +||| +| Testing and verification | The *green* revision is thoroughly tested and verified to ensure that the new version of the application functions as expected. This testing may involve various tasks, including functional tests, performance tests, and compatibility checks. | +| Traffic switch | Once the *green* revision passes all the necessary tests, a traffic switch is performed so that the *green* revision starts serving production load. This switch is done in a controlled manner, ensuring a smooth transition. | +| Rollback | If problems occur in the *green* revision, you can revert the traffic switch, routing traffic back to the stable *blue* revision. This rollback ensures minimal impact on users if there are issues in the new version. The *green* revision is still available for the next deployment. | +| Role change | The roles of the blue and green revisions change after a successful deployment to the *green* revision. During the next release cycle, the *green* revision represents the stable production environment while the new version of the application code is deployed and tested in the *blue* revision. | ++This article shows you how to implement blue-green deployment in a container app. To run the following examples, you need a container app environment where you can create a new app.
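If you don't already have an environment, the following is a minimal sketch of how you might create one with the Azure CLI before running the examples in this article. The resource group name, environment name, and location are placeholders (assumptions for illustration), not values defined by this article.

```azurecli
# Minimal sketch: create a resource group and a Container Apps environment to host the sample app.
# The containerapp CLI extension is required for the commands used throughout this article.
az extension add --name containerapp --upgrade

export RESOURCE_GROUP=<RESOURCE_GROUP>
export APP_ENVIRONMENT_NAME=<APP_ENVIRONMENT_NAME>
export LOCATION=eastus   # placeholder region

# Create the resource group and the Container Apps environment.
az group create --name $RESOURCE_GROUP --location $LOCATION

az containerapp env create \
  --name $APP_ENVIRONMENT_NAME \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION
```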
++> [!NOTE] +> Refer to [containerapps-blue-green repository](https://github.com/Azure-Samples/containerapps-blue-green) for a complete example of a github workflow that implements blue-green deployment for Container Apps. ++## Create a container app with multiple active revisions enabled ++The container app must have the `configuration.activeRevisionsMode` property set to `multiple` to enable traffic splitting. To get deterministic revision names, you can set the `template.revisionSuffix` configuration setting to a string value that uniquely identifies a release. For example you can use build numbers, or git commits short hashes. ++For the following commands, a set of commit hashes was used. +++```azurecli +export APP_NAME=<APP_NAME> +export APP_ENVIRONMENT_NAME=<APP_ENVIRONMENT_NAME> +export RESOURCE_GROUP=<RESOURCE_GROUP> ++# A commitId that is assumed to correspond to the app code currently in production +export BLUE_COMMIT_ID=fb699ef +# A commitId that is assumed to correspond to the new version of the code to be deployed +export GREEN_COMMIT_ID=c6f1515 ++# create a new app with a new revision +az containerapp create --name $APP_NAME \ + --environment $APP_ENVIRONMENT_NAME \ + --resource-group $RESOURCE_GROUP \ + --image mcr.microsoft.com/k8se/samples/test-app:$BLUE_COMMIT_ID \ + --revision-suffix $BLUE_COMMIT_ID \ + --env-vars REVISION_COMMIT_ID=$BLUE_COMMIT_ID \ + --ingress external \ + --target-port 80 \ + --revisions-mode multiple ++# Fix 100% of traffic to the revision +az containerapp ingress traffic set \ + --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --revision-weight $APP_NAME--$BLUE_COMMIT_ID=100 ++# give that revision a label 'blue' +az containerapp revision label add \ + --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --label blue \ + --revision $APP_NAME--$BLUE_COMMIT_ID +``` ++++Save the following code into a file named `main.bicep`. ++```bicep +targetScope = 'resourceGroup' +param location string = resourceGroup().location ++@minLength(1) +@maxLength(64) +@description('Name of containerapp') +param appName string ++@minLength(1) +@maxLength(64) +@description('Container environment name') +param containerAppsEnvironmentName string ++@minLength(1) +@maxLength(64) +@description('CommitId for blue revision') +param blueCommitId string ++@maxLength(64) +@description('CommitId for green revision') +param greenCommitId string = '' ++@maxLength(64) +@description('CommitId for the latest deployed revision') +param latestCommitId string = '' ++@allowed([ + 'blue' + 'green' +]) +@description('Name of the label that gets 100% of the traffic') +param productionLabel string = 'blue' ++var currentCommitId = !empty(latestCommitId) ? latestCommitId : blueCommitId ++resource containerAppsEnvironment 'Microsoft.App/managedEnvironments@2022-03-01' existing = { + name: containerAppsEnvironmentName +} ++resource blueGreenDeploymentApp 'Microsoft.App/containerApps@2022-11-01-preview' = { + name: appName + location: location + tags: { + blueCommitId: blueCommitId + greenCommitId: greenCommitId + latestCommitId: currentCommitId + productionLabel: productionLabel + } + properties: { + environmentId: containerAppsEnvironment.id + configuration: { + maxInactiveRevisions: 10 // Remove old inactive revisions + activeRevisionsMode: 'multiple' // Multiple active revisions mode is required when using traffic weights + ingress: { + external: true + targetPort: 80 + traffic: !empty(blueCommitId) && !empty(greenCommitId) ? 
[ + { + revisionName: '${appName}--${blueCommitId}' + label: 'blue' + weight: productionLabel == 'blue' ? 100 : 0 + } + { + revisionName: '${appName}--${greenCommitId}' + label: 'green' + weight: productionLabel == 'green' ? 100 : 0 + } + ] : [ + { + revisionName: '${appName}--${blueCommitId}' + label: 'blue' + weight: 100 + } + ] + } + } + template: { + revisionSuffix: currentCommitId + containers:[ + { + image: 'mcr.microsoft.com/k8se/samples/test-app:${currentCommitId}' + name: appName + resources: { + cpu: json('0.5') + memory: '1.0Gi' + } + env: [ + { + name: 'REVISION_COMMIT_ID' + value: currentCommitId + } + ] + } + ] + } + } +} ++output fqdn string = blueGreenDeploymentApp.properties.configuration.ingress.fqdn +output latestRevisionName string = blueGreenDeploymentApp.properties.latestRevisionName +``` ++Deploy the app with the Bicep template using this command: ++```azurecli +export APP_NAME=<APP_NAME> +export APP_ENVIRONMENT_NAME=<APP_ENVIRONMENT_NAME> +export RESOURCE_GROUP=<RESOURCE_GROUP> ++# A commitId that is assumed to belong to the app code currently in production +export BLUE_COMMIT_ID=fb699ef +# A commitId that is assumed to belong to the new version of the code to be deployed +export GREEN_COMMIT_ID=c6f1515 ++# create a new app with a blue revision +az deployment group create \ + --name createapp-$BLUE_COMMIT_ID \ + --resource-group $RESOURCE_GROUP \ + --template-file main.bicep \ + --parameters appName=$APP_NAME blueCommitId=$BLUE_COMMIT_ID containerAppsEnvironmentName=$APP_ENVIRONMENT_NAME \ + --query properties.outputs.fqdn +``` +++## Deploy a new revision and assign labels ++The *blue* label currently refers to a revision that takes the production traffic arriving on the app's FQDN. The *green* label refers to a new version of an app that is about to be rolled out into production. A new commit hash identifies the new version of the app code. The following command deploys a new revision for that commit hash and marks it with *green* label. +++```azurecli +#create a second revision for green commitId +az containerapp update --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --image mcr.microsoft.com/k8se/samples/test-app:$GREEN_COMMIT_ID \ + --revision-suffix $GREEN_COMMIT_ID \ + --set-env-vars REVISION_COMMIT_ID=$GREEN_COMMIT_ID ++#give that revision a 'green' label +az containerapp revision label add \ + --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --label green \ + --revision $APP_NAME--$GREEN_COMMIT_ID +``` +++```azurecli +#deploy a new version of the app to green revision +az deployment group create \ + --name deploy-to-green-$GREEN_COMMIT_ID \ + --resource-group $RESOURCE_GROUP \ + --template-file main.bicep \ + --parameters appName=$APP_NAME blueCommitId=$BLUE_COMMIT_ID greenCommitId=$GREEN_COMMIT_ID latestCommitId=$GREEN_COMMIT_ID productionLabel=blue containerAppsEnvironmentName=$APP_ENVIRONMENT_NAME \ + --query properties.outputs.fqdn +``` +++The following example shows how the traffic section is configured. The revision with the *blue* `commitId` is taking 100% of production traffic while the newly deployed revision with *green* `commitId` doesn't take any production traffic. 
++```json +{ + "traffic": [ + { + "revisionName": "<APP_NAME>--fb699ef", + "weight": 100, + "label": "blue" + }, + { + "revisionName": "<APP_NAME>--c6f1515", + "weight": 0, + "label": "green" + } + ] +} +``` ++The newly deployed revision can be tested by using the label-specific FQDN: ++```azurecli +#get the containerapp environment default domain +export APP_DOMAIN=$(az containerapp env show -g $RESOURCE_GROUP -n $APP_ENVIRONMENT_NAME --query properties.defaultDomain -o tsv | tr -d '\r\n') ++#Test the production FQDN +curl -s https://$APP_NAME.$APP_DOMAIN/api/env | jq | grep COMMIT ++#Test the blue label FQDN +curl -s https://$APP_NAME---blue.$APP_DOMAIN/api/env | jq | grep COMMIT ++#Test the green label FQDN +curl -s https://$APP_NAME---green.$APP_DOMAIN/api/env | jq | grep COMMIT +``` ++## Send production traffic to the green revision ++After confirming that the app code in the *green* revision works as expected, 100% of production traffic is sent to that revision. The *green* revision now becomes the production revision. +++```azurecli +# set 100% of traffic to green revision +az containerapp ingress traffic set \ + --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --label-weight blue=0 green=100 +``` ++++```azurecli +# make green the prod revision +az deployment group create \ + --name make-green-prod-$GREEN_COMMIT_ID \ + --resource-group $RESOURCE_GROUP \ + --template-file main.bicep \ + --parameters appName=$APP_NAME blueCommitId=$BLUE_COMMIT_ID greenCommitId=$GREEN_COMMIT_ID latestCommitId=$GREEN_COMMIT_ID productionLabel=green containerAppsEnvironmentName=$APP_ENVIRONMENT_NAME \ + --query properties.outputs.fqdn +``` +++The following example shows how the `traffic` section is configured after this step. The *green* revision with the new application code takes all the user traffic while the *blue* revision with the old application version doesn't accept user requests. ++```json +{ + "traffic": [ + { + "revisionName": "<APP_NAME>--fb699ef", + "weight": 0, + "label": "blue" + }, + { + "revisionName": "<APP_NAME>--c6f1515", + "weight": 100, + "label": "green" + } + ] +} +``` ++## Roll back the deployment if there were problems ++If, after running in production, the new revision is found to have bugs, you can roll back to the previous good state. After the rollback, 100% of the traffic is sent to the old version in the *blue* revision and that revision is designated as the production revision again. +++```azurecli +# set 100% of traffic to blue revision +az containerapp ingress traffic set \ + --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --label-weight blue=100 green=0 +``` ++++```azurecli +# rollback traffic to blue revision +az deployment group create \ + --name rollback-to-blue-$GREEN_COMMIT_ID \ + --resource-group $RESOURCE_GROUP \ + --template-file main.bicep \ + --parameters appName=$APP_NAME blueCommitId=$BLUE_COMMIT_ID greenCommitId=$GREEN_COMMIT_ID latestCommitId=$GREEN_COMMIT_ID productionLabel=blue containerAppsEnvironmentName=$APP_ENVIRONMENT_NAME \ + --query properties.outputs.fqdn +``` +++After the bugs are fixed, the new version of the application is deployed as a *green* revision again. The *green* version eventually becomes the production revision. ++## Next deployment cycle ++Now the *green* label marks the revision currently running the stable production code. ++During the next deployment cycle, the *blue* label identifies the revision with the new application version being rolled out to production.
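Before starting the next cycle, you might confirm how traffic is currently split and which revisions are active. The following is a minimal sketch, assuming the app name, resource group, and variables defined earlier in this article:

```azurecli
# Sketch: inspect the current traffic split (labels and weights) for the app.
az containerapp ingress traffic show \
  --name $APP_NAME \
  --resource-group $RESOURCE_GROUP

# Sketch: list the app's revisions to confirm which commit-based revision names exist and are active.
az containerapp revision list \
  --name $APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --output table
```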
++The following commands demonstrate how to prepare for the next deployment cycle. +++```azurecli +# set the new commitId +export BLUE_COMMIT_ID=ad1436b ++# create a third revision for blue commitId +az containerapp update --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --image mcr.microsoft.com/k8se/samples/test-app:$BLUE_COMMIT_ID \ + --revision-suffix $BLUE_COMMIT_ID \ + --set-env-vars REVISION_COMMIT_ID=$BLUE_COMMIT_ID ++# give that revision a 'blue' label +az containerapp revision label add \ + --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --label blue \ + --revision $APP_NAME--$BLUE_COMMIT_ID +``` +++```azurecli +# set the new commitId +export BLUE_COMMIT_ID=ad1436b ++# deploy new version of the app to blue revision +az deployment group create \ + --name deploy-to-blue-$BLUE_COMMIT_ID \ + --resource-group $RESOURCE_GROUP \ + --template-file main.bicep \ + --parameters appName=$APP_NAME blueCommitId=$BLUE_COMMIT_ID greenCommitId=$GREEN_COMMIT_ID latestCommitId=$BLUE_COMMIT_ID productionLabel=green containerAppsEnvironmentName=$APP_ENVIRONMENT_NAME \ + --query properties.outputs.fqdn +``` +++## Next steps ++> [!div class="nextstepaction"] +> [Traffic Weights](traffic-splitting.md) |
container-apps | Log Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-options.md | You can choose between these logs destinations: - An Azure partner monitoring solution such as, Datadog, Elastic, Logz.io and others. For more information, see [Partner solutions](../partner-solutions/overview.md). - **None**: You can disable the storage of log data. When disabled, you can still view real-time container logs via the **Logs stream** feature in your container app. For more information, see [Log streaming](log-streaming.md). -> [!NOTE] -> Azure Monitor is not currently supported in the Consumption + Dedicated plan structure. - When *None* or the *Azure Monitor* destination is selected, the **Logs** menu item providing the Log Analytics query editor in the Azure portal is disabled. ## Configure options via the Azure portal |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
container-apps | Revisions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md | You can use revisions to: - Release a new version of your app. - Quickly revert to an earlier version of your app. - Split traffic between revisions for [A/B testing](https://wikipedia.org/wiki/A/B_testing).-- Gradually phase in a new revision in blue-green deployments. For more information about blue-green deployment, see [BlueGreenDeployment](https://martinfowler.com/bliki/BlueGreenDeployment.html).+- Gradually phase in a new revision in blue-green deployments. For more information about blue-green deployment, see [blue-green deployment](blue-green-deployment.md). ## Revision lifecycle |
container-apps | Traffic Splitting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/traffic-splitting.md | zone_pivot_groups: arm-azure-cli-portal By default, when ingress is enabled, all traffic is routed to the latest deployed revision. When you enable [multiple revision mode](revisions.md#revision-modes) in your container app, you can split incoming traffic between active revisions. -Traffic splitting is useful for testing updates to your container app. You can use traffic splitting to gradually phase in a new revision in [blue-green deployments](https://martinfowler.com/bliki/BlueGreenDeployment.html) or in [A/B testing](https://wikipedia.org/wiki/A/B_testing). +Traffic splitting is useful for testing updates to your container app. You can use traffic splitting to gradually phase in a new revision in [blue-green deployments](blue-green-deployment.md) or in [A/B testing](https://wikipedia.org/wiki/A/B_testing). Traffic splitting is based on the weight (percentage) of traffic that is routed to each revision. The combined weight of all traffic split rules must equal 100%. You can specify revision by revision name or [revision label](revisions.md#revision-labels). The following example template applies labels to different revisions. ## Next steps > [!div class="nextstepaction"]-> [Configure ingress](ingress-how-to.md) +> [Blue-green deployment](blue-green-deployment.md) |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Container Registry Firewall Access Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-firewall-access-rules.md | After you set up dedicated data endpoints for your registry, you can enable clie ## Configure client firewall rules for MCR -If you need to access Microsoft Container Registry (MCR) from behind a firewall, see the guidance to configure [MCR client firewall rules](https://github.com/microsoft/containerregistry/blob/master/client-firewall-rules.md). MCR is the primary registry for all Microsoft-published docker images, such as Windows Server images. +If you need to access Microsoft Container Registry (MCR) from behind a firewall, see the guidance to configure [MCR client firewall rules](https://github.com/microsoft/containerregistry/blob/main/docs/client-firewall-rules.md). MCR is the primary registry for all Microsoft-published docker images, such as Windows Server images. ## Next steps |
container-registry | Container Registry Oras Artifacts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md | In this article, a graph of supply chain artifacts is created, discovered, promo <!-- LINKS - external --> [docker-install]: https://www.docker.com/get-started/-[oci-artifact-manifest]: https://github.com/opencontainers/image-spec/blob/main/artifact.md/ +[oci-artifact-manifest]: https://github.com/opencontainers/image-spec/blob/main/manifest.md [oci-artifact-referrers]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers/ [oci-spec]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md/ [oci-1_1-spec]: https://github.com/opencontainers/distribution-spec/releases/tag/v1.1.0-rc1 |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
cosmos-db | How To Configure Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md | By default, adding a private endpoint to an existing account results in a short 1. Configure your new private endpoint. 1. Remove the firewall rules set in step 1. +> [!NOTE] +> If you have running applications using the Azure Cosmos DB SDKs, there might be transient timeouts during the configuration update. Make sure your application is designed to be [resilient to transient connectivity failures](./nosql/conceptual-resilient-sdk-applications.md) and has retry logic in place in case it's needed. + ## Port range when using direct mode When you use Private Link with an Azure Cosmos DB account through a direct mode connection, you need to ensure that the full range of TCP ports (0 - 65535) is open. |
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
cost-management-billing | Capabilities Allocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-allocation.md | + + Title: Cost allocation +description: This article helps you understand the cost allocation capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Cost allocation ++This article helps you understand the cost allocation capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Cost allocation refers to the process of attributing and assigning costs to specific departments, teams, and projects within an organization.** ++Identify the most critical attributes to report against based on stakeholder needs. Consider the different reporting structures within the organization and how you'll handle change over time. Consider engineering practices that may introduce different types of cost that need to be analyzed independently. ++Establish and maintain a mapping of cloud and on-premises costs to each attribute and apply governance policies to ensure data is appropriately tagged in advance. Define a process for how to handle tagging gaps and misses. ++Cost allocation is the foundational element of cost accountability and enables organizations to gain visibility into the financial impact of their cloud solutions and related activities and initiatives. ++## Getting started ++When you first start managing cost in the cloud, you use the native "allocation" tools to organize subscriptions and resources to align to your primary organizational reporting structure. For anything beyond that, [tags](../../azure-resource-manager/management/tag-resources.md) can augment cloud resources and their usage to add business context, which is critical for any cost allocation strategy. ++Cost allocation is usually an afterthought and requires some level of cleanup when introduced. You need a plan to implement your cost allocation strategy. We recommend outlining that plan first to get alignment and possibly prototyping on a small scale to demonstrate the value. ++- Decide how you want to manage access to the cloud. + - At what level in the organization do you want to centrally provision access to the cloud: departments, teams, projects, or applications? High levels require more governance and low levels require more management. + - What [cloud scope](../costs/understand-work-scopes.md) do you want to provision for this level? + - Billing scopes are used to organize costs between and within invoices. + - [Management groups](../../governance/management-groups/overview.md) are used to organize costs for resource management. You can optimize management groups for policy assignment or organizational reporting. + - Subscriptions provide engineers with the most flexibility to build the solutions they need but can also come with more management and governance requirements due to this freedom. + - Resource groups enable engineers to deploy some solutions but may require more support when solutions require multiple resource groups or options to be enabled at the subscription level. +- How do you want to use management groups? + - Organize subscriptions into environment-based management groups to optimize for policy assignment. 
Management groups allow policy admins to manage policies at the top level but block the ability to perform cross-subscription reporting without an external solution, which increases your data analysis and showback efforts. + - Organize subscriptions into management groups based on the organizational hierarchy to optimize for organizational reporting. Management groups allow leaders within the organization to view costs more naturally from the portal but require policy admins to use tag-based policies, which increases policy and governance efforts. Also keep in mind you may have multiple organizational hierarchies and management groups only support one. +- [Define a comprehensive tagging strategy](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging) that aligns with your organization's cost allocation objectives. + - Consider the specific attributes that are relevant for cost attribution, such as: + - How to map costs back to financial constructs, for example, cost center? + - Can you map back to every level in the organizational hierarchy, for example, business unit, department, division, and team? + - Who is accountable for the service, for example, business owner and engineering owner? + - What effort does this map to, for example, project and application? + - What is the engineering purpose of this resource, for example, environment, component, and purpose? + - Clearly communicate tagging guidelines to all stakeholders. +- Once defined, it's time to implement your cost allocation strategy. + - Consider a top-down approach that prioritizes getting departmental costs in place before optimizing at the lowest project and environment level. You may want to implement it in phases, depending on how broad and deep your organization is. + - Enable [tag inheritance in Cost Management](../costs/enable-tag-inheritance.md) to copy subscription and resource group tags into cost data only. It doesn't change tags on your resources. + - Use Azure Policy to [enforce your tagging strategy](../../azure-resource-manager/management/tag-policies.md), automate the application of tags at scale, and track compliance status. Use compliance as a KPI for your tagging strategy. A command-line sketch of applying tags follows at the end of this section. + - If you need to move costs between subscriptions or resource groups, or add or change tags, [configure allocation rules in Cost Management](../costs/allocate-costs.md). Cost allocation is covered in detail in [Managing shared costs](capabilities-shared-cost.md). + - Consider [grouping related resources together with the "cm-resource-parent" tag](../costs/enable-preview-features-cost-management-labs.md#group-related-resources-in-the-cost-analysis-preview) to view costs together in Cost analysis. + - Distribute responsibility for any remaining changes to scale out and drive efficiencies. +- Make note of any unallocated costs or costs that should be split but couldn't be. You cover it as part of [Managing shared costs](capabilities-shared-cost.md). ++Once all resources are tagged and/or organized into the appropriate resource groups and subscriptions, you can report against that data as part of [Data analysis and showback](capabilities-analysis-showback.md). ++Keep in mind that tagging takes time to apply, review, and clean up. Expect to go through multiple tagging cycles after everyone has visibility into the cost data. Many people don't realize there's a problem until they have visibility, which is why FinOps is so important.
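As an illustration only, here's a minimal sketch of one way to apply cost allocation tags to an existing resource group from the Azure CLI. The subscription ID, resource group name, and tag names and values (costCenter, env, app) are hypothetical placeholders rather than an organizational standard.

```azurecli
# Hypothetical example: merge cost allocation tags onto a resource group.
# Replace the subscription ID, resource group name, and tag values with your own.
az tag update \
  --resource-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>" \
  --operation Merge \
  --tags costCenter=CC-1234 env=prod app=contoso-web
```

With tag inheritance enabled in Cost Management, tags applied at the resource group level like this are also copied to the cost data for the resources the group contains, without changing the tags on those resources.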
++## Building on the basics ++At this point, you have a cost allocation strategy with detailed cloud management and tagging requirements. Tagging should be automatically enforced or at least tracked with compliance KPIs. As you move beyond the basics, consider the following points: ++- Fill any gaps unmet by native tools. + - At a minimum, filling this gap requires reporting outside the portal, where tagging gaps can be merged with other data. + - If tagging gaps need to be resolved directly in the data, you need to implement [Data ingestion and normalization](capabilities-ingestion-normalization.md). +- Consider other costs that aren't yet covered or might be tracked separately. + - Strive to drive consistency across data sources to align tagging implementations. When not feasible, implement cleanup as part of [Data ingestion and normalization](capabilities-ingestion-normalization.md) or reallocate costs as part of your overarching cost allocation strategy. +- Regularly review and refine your cost allocation strategy. + - Consider this process as part of your reporting feedback loop. If your cost allocation strategy is falling short, the feedback you get may not be directly associated with cost allocation or metadata. It may instead be related to reporting. Watch out for this feedback and ensure the feedback is addressed at the most appropriate layer. + - Ensure naming, metadata, and hierarchy requirements are being used consistently and effectively throughout your environment. + - Consider other KPIs to track and monitor success of your cost allocation strategy. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Cost allocation (metadata & hierarchy) capability](https://www.finops.org/framework/capabilities/cost-allocation/) article in the FinOps Framework documentation. ++## Next steps ++- [Data analysis and showback](capabilities-analysis-showback.md) +- [Managing shared costs](capabilities-shared-cost.md) |
cost-management-billing | Capabilities Analysis Showback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-analysis-showback.md | + + Title: Data analysis and showback +description: This article helps you understand the data analysis and showback capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Data analysis and showback ++This article helps you understand the data analysis and showback capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Data analysis refers to the practice of analyzing and interpreting data related to cloud usage and costs. Showback refers to enabling cost visibility throughout an organization.** ++Provides transparency and visibility into cloud usage and costs across different departments, teams, and projects. Organizational alignment requires cost allocation metadata and hierarchies, and enabling visibility requires structured access control against these hierarchies. ++Data analysis and showback require a deep understanding of organizational needs to provide an appropriate level of detail to each stakeholder. Consider the following points: ++- Level of knowledge and experience each stakeholder has +- Different types of reporting and analytics you can provide +- Assistance they need to answer their questions ++With the right tools, data analysis and showback enable stakeholders to understand how resources are used, track cost trends, and make informed decisions regarding resource allocation, optimization, and budget planning. ++## When to prioritize ++Data analysis and showback are a common part of your iterative process. Some examples of when you want to prioritize data analysis and showback include: ++- New datasets become available, which need to be prepared for stakeholders. +- New requirements are raised to add or update reports. +- Implementing more cost visibility measures to drive awareness. ++If you're new to FinOps, we recommend starting with data analysis and showback using native cloud tools as you learn more about the data and the specific needs of your stakeholders. You revisit this capability again as you adopt new tools and datasets, which could be ingested into a custom data store or used by a third-party solution from the Marketplace. ++## Before you begin ++Before you can effectively analyze usage and costs, you need to familiarize yourself with [how you're charged for the services you use](https://azure.microsoft.com/pricing#product-pricing). Understanding the factors that contribute to costs such as compute, storage, networking, data transfer, or executions helps you understand what you ultimately get billed. Understanding how your service usage aligns with the various pricing models also helps you understand what you get billed. These patterns vary between services, which can result in unexpected charges if you don't fully understand how you're charged and how you can stop billing. ++>[!NOTE] +> For example, many people understand "VMs are not billed when they're not running." However, this is only partially true. There's a slight nuance for VMs where a "stopped" VM _will_ continue to charge you, because the cloud provider is still reserving that capacity for you. To stop billing, you must "deallocate" the VM. 
But you also need to remember that compute time isn't the only charge for a VM; you're also charged for network bandwidth, disk storage, and other connected resources. In the simplest example, a deallocated VM will always charge you for disk storage, even if the VM is not running. Depending on what other services you have connected, there could be other charges as well. This is why it's important to understand how the services and features you use will charge you. ++We also recommend learning about [how cost data is tracked, stored, and refreshed in Microsoft Cost Management](../costs/understand-cost-mgt-data.md). Some examples include: ++- Which subscription types (or offers) are supported. For instance, data for classic CSP and sponsorship subscriptions isn't available in Cost Management and must be obtained from other data sources. +- Which charges are included. For instance, taxes aren't included. +- How tags are used and tracked. For instance, some resources don't support tags and [tag inheritance](../costs/enable-tag-inheritance.md) must be enabled manually to inherit tags from subscriptions and resource groups. +- When to use "actual" and "amortized" cost. + - "Actual" cost shows charges as they were or as they'll get shown on the invoice. Use actual costs for invoice reconciliation. + - "Amortized" cost shows the effective cost of resources that used a commitment-based discount (reservation or savings plan). Use amortized costs for cost allocation, to "smooth out" large purchases that may look like usage spikes, and for numerous commitment-based discount scenarios. +- How credits are applied. For instance, credits are applied when the invoice is generated and not when usage is tracked. ++Understanding your cost data is critical to enable accurate and meaningful showback to all stakeholders. ++## Getting started ++When you first start managing cost in the cloud, you use the native tools: ++- [Cost analysis](../costs/quick-acm-cost-analysis.md) helps you explore and get quick answers about your costs. +- [Power BI](/power-bi/connect-data/desktop-connect-azure-cost-management) helps you build advanced reports merged with other cloud or business data. +- [Billing](../manage/index.yml) helps you review invoices and manage credits. +- [Azure Monitor](../../azure-monitor/overview.md) helps you analyze resource usage metrics, logs, and traces. +- [Azure Resource Graph](../../governance/resource-graph/overview.md) helps you explore resource configuration, changes, and relationships. ++As a starting point, we focus on tools available in the Azure portal and Microsoft 365 admin center. ++- Familiarize yourself with the [built-in views in Cost analysis](../costs/cost-analysis-built-in-views.md), concentrate on your top cost contributors, and drill in to understand what factors are contributing to that cost. + - Use the Services view to understand the larger services (not individual cloud resources) that have been purchased or are being used within your environment. This view is helpful for some stakeholders to get a high-level understanding of what's being used when they may not know the technical details of how each resource is contributing to business goals. + - Use the Subscriptions and Resource groups views to identify which departments, teams, or projects are incurring the highest cost, based on how you've organized your resources. + - Use the Resources view to identify which deployed resources are incurring the highest cost.
+ - Use the Reservations view to review utilization for a billing account or billing profile or to break down usage to the individual resources that received the reservation discount. + - Always use the view designed to answer your question. Avoid using the most detailed view to answer all questions, as it's slower and requires more work to find the answer you need. + - Use drilldown, filtering, and grouping to narrow down to the data you need, including the cost meters of an individual resource. +- [Save and share customized views](../costs/save-share-views.md) to revisit them later, collaborate with stakeholders, and drive awareness of current costs. + - Use private views for yourself and shared views for others to see and manage. + - Pin views to the Azure portal dashboard to create a heads-up display when you sign into the portal. + - Download an image of the chart and copy a link to the view to provide quick access from external emails, documents, etc. Note recipients are required to sign in and have access to the cost data. + - Download summarized data to share with others who don't have direct access. + - Subscribe to scheduled alerts to send emails with a chart and/or data to stakeholders on a daily, weekly, or monthly basis. +- As you review costs, make note of questions that you can't answer with the raw cloud usage and cost data. Feed this back into your cost allocation strategy to ensure more metadata is added via tags and labels. +- Use the different tools optimized to provide the details you need to understand the holistic picture of your resource cost and usage. + - [Analyze resource usage metrics in Azure Monitor](../../azure-monitor/essentials/tutorial-metrics.md). + - [Review resource configuration changes in Azure Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md). +- If you need to build more advanced reports or merge cost data with other cloud or business data, [connect to Cost Management data in Power BI](/power-bi/connect-data/desktop-connect-azure-cost-management). ++## Building on the basics ++At this point, you're likely productively utilizing the native reporting and analysis solutions in the portal and have possibly started building advanced reports in Power BI. As you move beyond the basics, consider the following to help you scale your reporting and analysis capabilities: ++- Talk to your stakeholders to ensure you have a firm understanding of their end goals. + - Differentiate between "tasks" and "goals." Tasks are performed to accomplish goals and will change as technology and our use of it evolves, while goals are more consistent over time. + - Think about what they'll do after you give them the data. Can you help them achieve that through automation or providing links to other tools or reports? How can they rationalize cost data against other business metrics (the benefits their resources are providing)? + - Do you have all the data you need to facilitate their goals? If not, consider ingesting other datasets to streamline their workflow. Adding other datasets is a common reason for moving from in-portal reporting into a custom or third-party solution to support other datasets. +- Consider reporting needs of each capability. Some examples include: + - Cost breakdowns aligned to cost allocation metadata and hierarchies. + - Optimization reports tuned to specific services and pricing models. + - Commitment-based discount utilization, coverage, savings, and chargeback. 
+ - Reports to track and drill into KPIs across each capability. +- How can you make your reporting and KPIs an inherent part of day-to-day business and operations? + - Promote dashboards and KPIs at recurring meetings and reviews. + - Consider both bottom-up and top-down approaches to drive FinOps through data. + - Use alerting systems and collaboration tools to raise awareness of costs on a recurring basis. +- Regularly evaluate the quality of the data and reports. + - Consider introducing a feedback mechanism to learn how stakeholders are using reports and when the reports can't or aren't meeting their needs. Use it as a KPI for your reports. + - Focus heavily on data quality and consistency. Many issues surfaced within the reporting tools result from the underlying data ingestion, normalization, and cost allocation processes. Channel the feedback to the right stakeholders and raise awareness of and resolve issues that are impacting end-to-end cost visibility, accountability, and optimization. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Data analysis and showback capability](https://www.finops.org/framework/capabilities/analysis-showback/) article in the FinOps Framework documentation. ++## Next steps ++- [Forecasting](capabilities-forecasting.md) +- [Managing anomalies](capabilities-anomalies.md) +- [Budget management](capabilities-budgets.md) |
cost-management-billing | Capabilities Anomalies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-anomalies.md | + + Title: Managing anomalies +description: This article helps you understand the managing anomalies capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Managing anomalies ++This article helps you understand the managing anomalies capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Managing anomalies refers to the practice of detecting and addressing abnormal or unexpected cost and usage patterns in a timely manner.** ++Use automated tools to detect anomalies and notify stakeholders. Review usage trends periodically to reveal anomalies automated tools may have missed. ++Investigate changes in application behaviors, resource utilization, and resource configuration to uncover the root cause of the anomaly. ++With a systematic approach to anomaly detection, analysis, and resolution, organizations can minimize unexpected costs that impact budgets and business operations. And, they can even identify and prevent security and reliability incidents that can surface in cost data. ++## Getting started ++When you first start managing cost in the cloud, you use the native tools available in the portal. ++- Start with proactive alerts. + - [Subscribe to anomaly alerts](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) for each subscription in your environment to receive email alerts when an unusual spike or drop has been detected in your normalized usage based on historical usage. + - Consider [subscribing to scheduled alerts](../costs/save-share-views.md#subscribe-to-scheduled-alerts) to share a chart of the recent cost trends with stakeholders. It can help you drive awareness as costs change over time and potentially catch changes the anomaly model may have missed. + - Consider [creating a budget in Cost Management](../costs/tutorial-acm-create-budgets.md) to track that specific scope or workload. Specify filters and set alerts for both actual and forecast costs for finer-grained targeting. +- Review costs periodically, using detailed cost breakdowns, usage analytics, and visualizations to identify potential anomalies that may have been missed. + - Use smart views in Cost analysis to [review anomaly insights](../understand/analyze-unexpected-charges.md#identify-cost-anomalies) that were automatically detected for each subscription. + - Use customizable views in Cost analysis to [manually find unexpected changes](../understand/analyze-unexpected-charges.md#manually-find-unexpected-cost-changes). + - Consider [saving custom views](../costs/save-share-views.md) that show cost over time for specific workloads to save time. + - Consider creating more detailed usage reports using [Power BI](/power-bi/connect-data/desktop-connect-azure-cost-management). +- Once an anomaly is identified, take appropriate actions to address it. + - Review the anomaly details with the engineers who manage the related cloud resources. Some autodetected "anomalies" are planned or at least known resource configuration changes as part of building and managing cloud services. + - If you need lower-level usage details, review resource utilization in [Azure Monitor metrics](../../azure-monitor/essentials/metrics-getting-started.md). 
+ - If you need resource details, review [resource configuration changes in Azure Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md). ++## Building on the basics ++At this point, you have automated alerts configured and ideally views and reports saved to streamline periodic checks. ++- Establish and automate KPIs, such as: + - Number of anomalies each month or quarter. + - Total cost impact of anomalies each month or quarter + - Response time to detect and resolve anomalies. + - Number of false positives and false negatives. +- Expand coverage of your anomaly detection and response process to include all costs. +- Define, document, and automate workflows to guide the response process when anomalies are detected. +- Foster a culture of continuous learning, innovation, and collaboration. + - Regularly review and refine anomaly management processes based on feedback, industry best practices, and emerging technologies. + - Promote knowledge sharing and cross-functional collaboration to drive continuous improvement in anomaly detection and response capabilities. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Managing anomalies capability](https://www.finops.org/framework/capabilities/manage-anomalies/) article in the FinOps Framework documentation. ++## Next steps ++- [Budget management](capabilities-budgets.md) |
cost-management-billing | Capabilities Budgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-budgets.md | + + Title: Budget management +description: This article helps you understand the budget management capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/23/2023+++++++# Budget management ++This article helps you understand the budget management capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Budget management refers to the process of overseeing and tracking financial plans and limits over a given period to effectively manage and control spending.** ++Analyze historical usage and cost trends and adjust for future plans to estimate monthly, quarterly, and yearly costs that are realistic and achievable. Repeat for each level in the organization for a complete picture of organizational budgets. ++Configure alerting and automated actions to notify stakeholders and protect against budget overages. Investigate unexpected variance to budget and take appropriate actions. Review and adjust budgets regularly to ensure they remain accurate and reflect any changes in the organization's financial situation. ++Effective budget management helps ensure organizations operate within their means and are able to achieve financial goals. Unexpected costs can impact external business decisions and initiatives that could have widespread impact. ++## Getting started ++When you first start managing cost in the cloud, you may not have your financial budgets mapped to every subscription and resource group. You may not even have the budget mapped to your billing account yet. It's okay. Start by configuring cost alerts. The exact amount you use isn't as important as having _something_ to let you know when costs are escalating. ++- Start by [creating a monthly budget in Cost Management](../costs/tutorial-acm-create-budgets.md) at the primary scope you manage, whether that's a billing account, management group, subscription, or resource group. + - If you're not sure where to start, set your budget amount based on the cost of the previous months. You can also set it to be explicitly higher than what you intend, to catch an exceedingly high jump in costs, if you're not concerned with smaller moves. No matter what you set, you can always change it later. + - If you do want to provide a more realistic alert threshold, see [Estimate the initial cost of your cloud project](/azure/well-architected/cost/design-initial-estimate). + - Configure one or more alerts on actual or forecast cost to be sent to stakeholders. + - If you need to proactively stop billing before costs exceed a certain threshold on a subscription or resource group, [execute an automated action when alerts are triggered](../manage/cost-management-budget-scenario.md). +- If you have concerns about rollover costs from one month to the next as they accumulate for the quarter or year, create quarterly and yearly budgets. +- If you're not concerned about "overage," but would still like to stay informed about costs, [save a view in Cost analysis](../costs/save-share-views.md) and [subscribe to scheduled alerts](../costs/save-share-views.md#subscribe-to-scheduled-alerts). Then share a chart of the cost trends to stakeholders. It can help you drive accountability and awareness as costs change over time before you go over budget. 
+- Consider [subscribing to anomaly alerts](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) for each subscription to ensure everyone is aware of anomalies as they're identified. +- Repeat these steps to configure alerts for the stakeholders of each scope and application you want to be monitored for maximum visibility and accountability. +- Consider reviewing costs against your budget periodically to ensure costs remain on track with your expectations. ++## Building on the basics ++So far, you've defined granular and targeted cost alerts for each scope and application and ideally review your cost as a KPI with all stakeholders at regular meetings. Consider the following points to further refine your budget management process: ++- Refine the budget granularity to enable more targeted oversight. +- Encourage all teams to take ownership of their budget allocations and expenses. + - Educate them about the impact of their actions on the overall budget and empower them to make informed decisions. +- Streamline the process for making budget adjustments, ensuring teams easily understand and follow it. +- [Automate budget creation](../automate/automate-budget-creation.md) with new subscriptions and resource groups. +- If not done earlier, use automation tools like Azure Logic Apps or Alerts to [execute automated actions when budget alerts are triggered](../manage/cost-management-budget-scenario.md). Tools can be especially helpful on test subscriptions. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see to the [Budget management](https://www.finops.org/framework/capabilities/budget-management) article in the FinOps Framework documentation. ++## Next steps ++- [Forecasting](capabilities-forecasting.md) +- [Onboarding workloads](capabilities-workloads.md) +- [Chargeback and finance integration](capabilities-chargeback.md) |
cost-management-billing | Capabilities Chargeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-chargeback.md | + + Title: Chargeback and finance integration +description: This article helps you understand the chargeback and finance integration capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/23/2023+++++++# Chargeback and finance integration ++This article helps you understand the chargeback and finance integration capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Chargeback refers to the process of billing internal teams for their respective cloud costs. Finance integration involves leveraging existing internal finance tools and processes.** ++Plan the chargeback model with IT and Finance departments. Use the organizational cost allocation strategy that factors in how stakeholders agreed to account for shared costs and commitment-based discounts. ++Use existing tools and processes to manage cloud costs as part of organizational finances. Chargeback is represented in the accounting system, [budgets](capabilities-budgets.md) are managed through the budget system, and so on. ++Chargeback and finance integration enables increased transparency, more direct accountability for the costs each department incurs, and reduced overhead costs. ++## Before you begin ++Chargeback, cost allocation, and showback are all important components of your FinOps practice. While you can implement them in any order, we generally recommend most organizations start with [showback](capabilities-analysis-showback.md) to ensure each team has visibility of the charges they're responsible for, at least at a cloud scope level. Then implement [cost allocation](capabilities-allocation.md) to align cloud costs to the organizational reporting hierarchies, and lastly implement chargeback based on that cost allocation strategy. Consider reviewing the [Data analysis and showback](capabilities-analysis-showback.md) and [Cost allocation](capabilities-allocation.md) capabilities if you haven't implemented them yet. You may also find the [Managing shared costs](capabilities-shared-cost.md) and [Managing commitment-based discounts](capabilities-commitment-discounts.md) capabilities helpful in implementing a complete chargeback solution. ++## Getting started ++Chargeback and finance integration is all about integrating with your own internal tools. Consider the following points: ++- Collaborate with stakeholders across finance, business, and technology to plan and prepare for chargeback. +- Document how chargeback works and be prepared for questions. +- Use the organizational [cost allocation](capabilities-allocation.md) strategy that factors in how stakeholders agreed to account for [shared costs](capabilities-shared-cost.md) and [commitment-based discounts](capabilities-commitment-discounts.md). + - If you haven't established one, consider simpler chargeback models that are fair and agreed upon by all stakeholders. +- Use existing tools and processes to manage cloud costs as part of organizational finances. ++## Building on the basics ++At this point, you have a basic chargeback model that all stakeholders have agreed to.
As you move beyond the basics, consider the following points: ++- Consider implementing a one-way sync from your budget system to [Cost Management budgets](../automate/automate-budget-creation.md) to use automated alerts based on machine learning forecasts. +- If you track manual forecasts, consider creating Cost Management budgets for your forecast values as well. That way, you get tracking and alerting for your forecast that's separate from your budget. +- Automate your [cost allocation](capabilities-allocation.md) strategy through tagging. +- Expand coverage of [shared costs](capabilities-shared-cost.md) and [commitment-based discounts](capabilities-commitment-discounts.md) if not already included. +- Fully integrate chargeback and showback reporting with the organization's finance tools. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Chargeback and finance integration capability](https://www.finops.org/framework/capabilities/chargeback/) article in the FinOps Framework documentation. ++## Next steps ++- [Data analysis and showback](capabilities-analysis-showback.md) +- [Managing shared costs](capabilities-shared-cost.md) +- [Managing commitment-based discounts](capabilities-commitment-discounts.md) |
cost-management-billing | Capabilities Commitment Discounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-commitment-discounts.md | + + Title: Managing commitment-based discounts +description: This article helps you understand the managing commitment-based discounts capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/23/2023+++++++# Managing commitment-based discounts ++This article helps you understand the managing commitment-based discounts capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Managing commitment-based discounts is the practice of obtaining reduced rates on cloud services by committing to a certain level of usage or spend over a specific period.** ++Review daily usage and cost trends to estimate how much you expect to use or spend over the next one to five years. Use [Forecasting](capabilities-forecasting.md) and account for future plans. ++Commit to specific hourly usage targets to receive discounted rates and save up to 72% with [Azure reservations](../reservations/save-compute-costs-reservations.md). Or for more flexibility, commit to a specific hourly spend to save up to 65% with [Azure savings plans for compute](../savings-plan/savings-plan-compute-overview.md). Reservation discounts can be applied to resources of the specific type, SKU, and location only. Savings plan discounts are applied to a family of compute resources across types, SKUs, and locations. The extra specificity with reservations is what drives more favorable discounting. ++Adopting a commitment-based strategy allows organizations to reduce their overall cloud costs while maintaining the same or higher usage by taking advantage of discounts on the resources they already use. ++## Before you begin ++While you can save by using reservations and savings plans, there's also a risk that you may not end up using that capacity. You could end up underutilizing the commitment and lose money. While losing money is rare, it's possible. We recommend starting small and making targeted, high-confidence decisions. We also recommend not waiting too long to decide how to approach commitment-based discounts when you do have consistent usage, because delaying means you're effectively losing money. Start small and learn as you go. But first, learn how [reservation](../reservations/reservation-discount-application.md) and [savings plan](../savings-plan/discount-application.md) discounts are applied. ++Before you purchase either a reservation or a savings plan, consider the usage you want to commit to. If you have high confidence that you'll maintain a specific level of usage for that type, SKU, and location, strongly consider starting with a reservation. For maximum flexibility, you can use savings plans to cover a wide range of compute costs by committing to a specific hourly spend instead of hourly usage. ++## Getting started ++Microsoft offers several tools to help you identify when you should consider purchasing reservations or savings plans. You can choose whether you want to start by analyzing usage or by reviewing the system-generated recommendations based on your historical usage and cost. We recommend starting with the recommendations to focus your initial efforts: ++- One of the most common starting points is [Azure Advisor cost recommendations](../../advisor/advisor-reference-cost-recommendations.md).
+- For more flexibility, you can view and filter recommendations in the [reservation](../reservations/reserved-instance-purchase-recommendations.md) and [savings plan](../savings-plan/purchase-recommendations.md#purchase-recommendations-in-the-azure-portal) purchase experiences. +- Lastly, you can also view reservation recommendations in [Power BI](/power-bi/connect-data/desktop-connect-azure-cost-management). +- After you know what to look for, you can [analyze your usage data](../reservations/determine-reservation-purchase.md#analyze-usage-data) to look for the specific usage you want to purchase a reservation for. ++After purchasing commitments, you can: ++- View utilization from the [reservation](../reservations/reservation-utilization.md) or [savings plan](../savings-plan/view-utilization.md) page in the portal. + - Consider expanding the scope or enabling instance size flexibility (when available) to increase utilization and maximize savings of an existing commitment. + - [Configure reservation utilization alerts](../costs/reservation-utilization-alerts.md) to notify stakeholders if utilization drops below a desired threshold. +- View showback and chargeback reports for [reservations](../reservations/charge-back-usage.md) and [savings plans](../savings-plan/charge-back-costs.md). ++## Building on the basics ++At this point, you have commitment-based discounts in place. As you move beyond the basics, consider the following points: ++- Configure commitments to automatically renew for [reservations](../reservations/reservation-renew.md) and [savings plans](../savings-plan/renew-savings-plan.md). +- Calculate cost savings for [reservations](../reservations/calculate-ea-reservations-savings.md) and [savings plans](../savings-plan/calculate-ea-savings-plan-savings.md). +- If you use multiple accounts, clouds, or providers, expand coverage of your commitment-based discounts efforts to include all accounts. + - Consider implementing a consistent utilization and coverage monitoring system that covers all accounts. +- Establish a process for centralized purchasing of commitment-based offers, assigning responsibility to a dedicated team or individual. +- Consider programmatically aligning governance policies with commitments to prioritize SKUs and locations that are covered by reservations and aren't fully utilized when deploying new applications. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Managing commitment-based discounts capability](https://www.finops.org/framework/capabilities/manage-commitment-based-discounts/) article in the FinOps Framework documentation. ++## Next steps ++- [Data analysis and showback](capabilities-analysis-showback.md) +- [Cloud policy and governance](capabilities-policy.md) |
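To make the underutilization risk described in the article above concrete, the following self-contained Python sketch compares pay-as-you-go cost with a committed, discounted rate at different utilization levels and computes the break-even utilization. The hourly rate and the 40% discount are hypothetical round numbers, not published Azure pricing.

```python
# Illustrative only: break-even math for a usage commitment (reservation-style discount).
# The rates and discount below are hypothetical, not actual Azure prices.

def commitment_savings(payg_rate: float, discount: float, hours: float, utilization: float) -> dict:
    """Compare paying as you go vs. committing to `hours` of usage at a discounted rate.

    payg_rate:   pay-as-you-go price per hour
    discount:    fractional discount for committing (e.g., 0.40 for 40% off)
    hours:       hours committed over the period
    utilization: fraction of committed hours actually used (0.0 - 1.0)
    """
    committed_rate = payg_rate * (1 - discount)
    committed_cost = committed_rate * hours            # paid whether or not it's used
    payg_cost = payg_rate * hours * utilization        # only pay for what you use
    return {
        "committed_cost": committed_cost,
        "payg_cost": payg_cost,
        "savings": payg_cost - committed_cost,
        "break_even_utilization": committed_rate / payg_rate,  # utilization where costs match
    }

if __name__ == "__main__":
    # Hypothetical: $1.00/hr pay-as-you-go, 40% discount, 730 hours in a month.
    for utilization in (1.0, 0.8, 0.6, 0.5):
        result = commitment_savings(payg_rate=1.00, discount=0.40, hours=730, utilization=utilization)
        print(f"{utilization:.0%} utilization -> savings ${result['savings']:.2f} "
              f"(break-even at {result['break_even_utilization']:.0%})")
```

With a 40% discount, the commitment only loses money when sustained utilization drops below 60%, which is why starting with small, high-confidence commitments is the safer first step.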
cost-management-billing | Capabilities Culture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-culture.md | + + Title: Establishing a FinOps culture +description: This article helps you understand the Establishing a FinOps culture capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Establishing a FinOps culture ++This article helps you understand the Establishing a FinOps culture capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Establishing a FinOps culture is about fostering a mindset of accountability and collaboration to accelerate and drive business value with cloud technology.** ++Evangelize the importance of a cost-aware culture that prioritizes driving business value over minimizing costs. Set clear expectations and goals for all stakeholders that are aligned with the mission and encourage accountability and responsibility for all actions taken. ++Lead with data. Establish and promote success metrics aligned with individual teams' goals. ++Establishing a FinOps culture gets the entire organization moving in the same direction and accelerates business goals through more efficient workflows and better team collaboration. Everyone can make more informed decisions together and increase operational flexibility. ++## Getting started ++When you first start, not all stakeholders are familiar with what FinOps is and their role within it. Consider the following to get off the ground: ++- Start by finding enthusiasts who are passionate about FinOps, cost optimization, efficiency, or data-driven use of technology to accelerate business goals. + - Build an informal [steering committee](capabilities-structure.md) and meet weekly or monthly to agree on goals, formulate strategy and tactics, and collaborate on the execution. +- Research your stakeholders and organizations. + - Understand what motivates them through their mission and success criteria. + - Learn about the challenges they face and look for opportunities for FinOps to help address them. + - Identify potential promoters and detractors and empathize with why they would or wouldn't support your efforts. Factor both sides into your strategy. +- Identify an initial sponsor and prepare a pitch that explains how your strategy leads to a positive impact on their mission and success criteria. Present your plan with clear asks and next steps. + - You're creating a mini startup. Do your research around how to prepare for these early meetings. + - Utilize [FinOps Foundation resources](https://www.finops.org/resources) to build your pitch, strategy, and more. + - Use the [FinOps community](https://www.finops.org/community/getting-started/) to share their knowledge and experience. They've been where you are. +- Dual-track your FinOps efforts: Drive lightweight FinOps initiatives with large returns while you cultivate your community. Nothing is better proof than data. + - Promote and celebrate your wins with early adopters. ++- Expand and formalize your steering committee as you develop broader sponsorship across business, finance, and engineering. ++## Building on the basics ++At this point, you have a steering committee that has early wins under its belt with basic support from the core stakeholder groups. 
As you move beyond the basics, consider the following points: ++- Define and document your operating model and evolve your strategy as a collaborative community. +- Brainstorm metrics and tactics that can demonstrate value and inspire different stakeholders through effective communication. +- Consider tools that can help self-promote your successes, like reports and dashboards. +- Share regular updates that celebrate small wins to demonstrate value. +- Look for opportunities to scale through other organizational priorities and initiatives. +- Explore ways to "go big" and launch a fully supported FinOps practice with a central team. Learn from other successful initiatives within the organization. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Establishing a FinOps culture capability](https://www.finops.org/framework/capabilities/establish-finops-culture/) article in the FinOps Framework documentation. ++## Next steps ++- [Establishing a FinOps decision and accountability structure](capabilities-structure.md) +- [Cloud policy and governance](capabilities-policy.md) +- [FinOps education and enablement](capabilities-education.md) |
cost-management-billing | Capabilities Education | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-education.md | + + Title: FinOps education and enablement +description: This article helps you understand the FinOps education and enablement capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# FinOps education and enablement ++This article helps you understand the FinOps education and enablement capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**FinOps education and enablement refers to the process of providing training, resources, and support to help individuals and teams within an organization adopt FinOps practices.** ++Identify and share available training content with stakeholders. Create a central repository for training resources and provide introductory material that aligns with your FinOps processes. ++Consider marketing initiatives that drive awareness, encourage discussion and sharing of lessons learned, or get people actively participating and learning (for example, a hackathon or innovation sprint). Focus on the value FinOps brings and share data from your early successes. ++Provide a direct channel to get help and support as people are learning. Be responsive and establish a feedback loop to learn from help and support initiatives. ++By formalizing FinOps education and enablement, stakeholders develop the knowledge and skills needed to effectively manage and optimize cloud usage and costs. Organizations see: ++- Accelerated adoption of FinOps practices, leading to improved financial performance +- Increased agility +- Better alignment between cloud spending and business goals ++## Getting started ++Implementing a plan for FinOps education and enablement is like most other training and development efforts. Consider the following points: ++- If you're new to training and development, research common approaches and best practices. +- Use existing online resources from [Microsoft](https://azure.microsoft.com/solutions/finops), [FinOps Foundation](https://finops.org/), and others. +- Research and build targeted content and marketing strategies around common pain points experienced by your organization. + - Consider focusing on a few key areas of interest to make more progress. + - Experiment with different lightweight approaches to see what works best within your organization. +- Target the core areas that are critical for FinOps, like: + - Cross-functional collaboration between finance, engineering, and business teams. + - Cloud-specific knowledge and terminology. + - Continuous improvement best practices around monitoring, analyzing, and optimizing cloud usage and costs. +- Consider activities like formal training programs (for example, [FinOps Foundation training](https://learn.finops.org/)), on-the-job training, mentoring, coaching, and self-directed learning. +- Explore targeted learning tools that could help accelerate efforts. +- Use available collaboration tools like Teams, Viva Engage, and SharePoint. +- Find multiple avenues to promote the program (for example, hackathons, lunch and learns). +- Track and measure success to demonstrate the value of your training and development efforts. +- Consider specific training for nontechnical roles, such as finance and business teams or senior leadership.
++## Building on the basics ++At this point, you have a central repository for training content and targeted initiatives to drive awareness and encourage collaboration. As you move beyond the basics, consider the following points: ++- Expand coverage to more or all capabilities and document processes and key contacts. +- Track telemetry and establish a feedback loop to learn from how the learning resources and the help and support workflows are being used. + - Review findings regularly and factor them into future plans. +- Consider establishing an official internal support channel to provide help and support. +- Seek out and engage stakeholders within your organization, including senior-level sponsors and cultivated supporters, to build momentum. +- Identify people with a passion for cost optimization and data-driven decision making to be part of the [FinOps steering committee](capabilities-structure.md). ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [FinOps education and enablement capability](https://www.finops.org/framework/capabilities/education-enablement/) article in the FinOps Framework documentation. ++## Next steps ++- [Establishing a FinOps decision and accountability structure](capabilities-structure.md) +- [Establishing a FinOps culture](capabilities-culture.md) |
cost-management-billing | Capabilities Efficiency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-efficiency.md | + + Title: Resource utilization and efficiency +description: This article helps you understand the resource utilization and efficiency capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/23/2023+++++++# Resource utilization and efficiency ++This article helps you understand the resource utilization and efficiency capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Resource utilization and efficiency refers to the process of ensuring cloud services are utilized and tuned to maximize business value and minimize wasteful spending.** ++Review how services are being used and ensure each is maximizing return on investment. Evaluate and implement best practices and recommendations. ++Every cost should have direct or indirect traceability back to business value. Eliminate fully "optimized" resources that aren't contributing to business value. ++Resource utilization and efficiency maximize the business value of cloud costs by avoiding unnecessary costs that don't contribute to the mission, which in turn increases return on investment and profitability. ++## Getting started ++When you first start managing cost in the cloud, you use the native tools to drive efficiency and optimize costs in the portal. ++- Review and implement [Azure Advisor cost recommendations](../../advisor/advisor-reference-cost-recommendations.md). + - Azure Advisor gives you high-confidence recommendations based on your usage. Azure Advisor is always the best place to start when looking to optimize any workload. + - Consider [subscribing to Azure Advisor alerts](../../advisor/advisor-alerts-portal.md) to get notified when there are new cost recommendations. +- Review your usage and purchase [commitment-based discounts](capabilities-commitment-discounts.md) when it makes sense. +- Take advantage of Azure Hybrid Benefit for [Windows](/windows-server/get-started/azure-hybrid-benefit), [Linux](../../virtual-machines/linux/azure-hybrid-benefit-linux.md), and [SQL Server](/azure/azure-sql/azure-hybrid-benefit). +- Review and implement [Cloud Adoption Framework costing best practices](/azure/cloud-adoption-framework/govern/cost-management/best-practices). +- Review and implement [Azure Well-Architected Framework cost optimization guidance](/azure/well-architected/cost/overview). +- Familiarize yourself with the services you use, how you're charged, and what service-specific cost optimization options you have. + - You can discover the services you use from the Azure portal All resources page or from the [Services view in Cost analysis](../costs/cost-analysis-built-in-views.md#break-down-product-and-service-costs). + - Explore the [Azure pricing pages](https://azure.microsoft.com/pricing) and [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator) to learn how each service charges you. Use them to identify options that may reduce costs. For example, shared infrastructure and commitment discounts. + - Review service documentation to learn about any cost-related features that could help you optimize your environment or improve cost visibility. Some examples: + - Choose [spot VMs](/azure/well-architected/cost/optimize-vm#spot-vms) for low priority, interruptible workloads. 
+ - Avoid [cross-region data transfer](/azure/well-architected/cost/design-regions#traffic-across-billing-zones-and-regions). ++## Building on the basics ++At this point, you've implemented all the basic cost optimization recommendations and tuned applications to meet the most fundamental best practices. As you move beyond the basics, consider the following points: ++- Automate cost recommendations using [Azure Resource Graph](../../advisor/resource-graph-samples.md). +- Implement the [Workload management and automation capability](capabilities-workloads.md) for more optimizations. +- Stay abreast of emerging technologies, tools, and industry best practices to further optimize resource utilization. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Resource utilization and efficiency capability](https://www.finops.org/framework/capabilities/utilization-efficiency/) article in the FinOps Framework documentation. ++## Next steps ++- [Managing commitment-based discounts](capabilities-commitment-discounts.md) +- [Workload management and automation](capabilities-workloads.md) +- [Measuring unit cost](capabilities-unit-costs.md) |
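The "Automate cost recommendations using Azure Resource Graph" step above can be sketched in a few lines of Python. The query follows the pattern used in the Advisor Resource Graph samples; the subscription ID and `api-version` are placeholders, and the response shape should be verified against the Resource Graph REST reference.

```python
# Minimal sketch: pull Azure Advisor *cost* recommendations through Azure Resource Graph.
# Assumes `pip install azure-identity requests`; subscription ID and api-version are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
API_VERSION = "2021-03-01"  # assumption; confirm against the Resource Graph REST reference

QUERY = """
advisorresources
| where type == 'microsoft.advisor/recommendations'
| where properties.category == 'Cost'
| project id, subscriptionId, resourceGroup,
          problem = tostring(properties.shortDescription.problem),
          solution = tostring(properties.shortDescription.solution)
"""

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.post(
    "https://management.azure.com/providers/Microsoft.ResourceGraph/resources",
    params={"api-version": API_VERSION},
    headers={"Authorization": f"Bearer {token}"},
    json={"subscriptions": [SUBSCRIPTION_ID], "query": QUERY},
)
resp.raise_for_status()

# With the object-array result format, `data` is a list of dicts; older api-versions may
# return a table shape instead, so adjust the parsing if needed.
for row in resp.json().get("data", []):
    print(f"{row['resourceGroup']}: {row['problem']} -> {row['solution']}")
```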
cost-management-billing | Capabilities Forecasting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-forecasting.md | + + Title: Forecasting +description: This article helps you understand the forecasting capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023++++++++# Forecasting ++This article helps you understand the forecasting capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Forecasting involves analyzing historical trends and future plans to predict costs, understand the impact on current budgets, and influence future budgets.** ++Analyze historical usage and cost trends to identify any patterns you expect to change. Augment that with future plans to generate an informed forecast. ++Periodically review forecasts against the current budgets to identify risk and initiate remediation efforts. Establish a plan to balance budgets across teams and departments and factor the learnings into future budgets. ++With an accurate, detailed forecast, organizations are better prepared to adapt to future change. ++## Before you begin ++Before you can effectively forecast future usage and costs, you need to familiarize yourself with [how you're charged for the services you use](https://azure.microsoft.com/pricing#product-pricing). ++Understanding how changes to your usage patterns affect future costs is informed by: +- Understanding the factors that contribute to costs (for example, compute, storage, networking, and data transfer) +- Understanding how your usage of a service aligns with the various pricing models (for example, pay-as-you-go, reservations, and Azure Hybrid Benefit) ++## Getting started ++When you first start managing cost in the cloud, you use the native Cost analysis experience in the portal. ++The simplest option is to [use Cost analysis to project future costs](../costs/cost-analysis-common-uses.md#view-forecast-costs) using the Daily costs or Accumulated costs view. If you have consistent usage with little to no anomalies or large variations, it may be all you need. ++If you do see anomalies or large (possibly expected) variations in costs, you may want to customize the view to build a more accurate forecast. To do so, you need to analyze the data and filter out anything that might skew the results. ++- Use Cost analysis to analyze historical trends and identify abnormalities. + - Before you start, determine if you're interested in your costs as they're billed or if you want to forecast the effective costs after accounting for commitment-based discounts. If you want the effective cost, [change the view to use amortized cost](../costs/customize-cost-analysis-views.md#switch-between-actual-and-amortized-cost). + - Start with the Daily costs view, then change the date range to look back as far as you're interested in looking forward. For instance, if you want to predict the next 12 months, then set the date range to the last 12 months. + - Filter out all purchases (`Charge type = Purchase`). Make a note of them as you need to forecast them separately. + - Group costs to identify new and old (deleted) subscriptions, resource groups, and resources. + - If you see any deleted items, filter them out. + - If you see any that are new, make note of them and then filter them out. You forecast them separately. Consider saving your view under a new name as one way to "remember" them for later.
+ - If you have future dates included in your view, you may notice the forecast is starting to level out. It happens because the abnormalities are no longer being factored into the algorithm. + - If you see any large spikes or dips, group the data by one of the [grouping options](../costs/group-filter.md) to identify what the cause was. + - Try different options until you discover the cause using the same approach as you would in [finding unexpected changes in cost](../understand/analyze-unexpected-charges.md#manually-find-unexpected-cost-changes). + - If you want to find the exact change that caused the cost spike (or dip), use tools like [Azure Monitor](../../azure-monitor/overview.md) or [Resource Graph](../../governance/resource-graph/how-to/get-resource-changes.md) in a separate window or browser tab. + - If the change was a segregated charge and shouldn't be factored into the forecast, filter it out. Be careful not to filter out other costs as it will skew the forecast. If necessary, start by forecasting a smaller scope to minimize risk of filtering more and repeat the process per scope. + - If the change is in a scope that shouldn't get filtered out, make note of that scope and then filter it out. You forecast them separately. + - Consider filtering out any subscriptions, resource groups, or resources that were reconfigured during the period and may not reflect an accurate picture of future costs. Make note of them so you can forecast them separately. + - At this point, you should have a fairly clean picture of consistent costs. +- Change the date range to look at the future period. For example, the next 12 months. + - If interested in the total accumulated costs for the period, change the granularity to `Accumulated`. +- Make note of the forecast, then repeat this process for each of the datasets that were filtered out. + - You may need to shorten the future date range to ensure the historical anomaly or resource change doesn't affect the forecast. If the forecast is affected, manually project the future costs based on the daily or monthly run rate. +- Next factor in any changes you plan to make to your environment. + - This part can be a little tricky and needs to be handled separately per workload. + - Start by filtering down to only the workload that is changing. If the planned change only impacts a single meter, like the number of uptime hours a VM may have or total data stored in a storage account, then filter down to that meter. + - Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator) to determine the difference between what you have today and what you intend to have. Then, take the difference and manually apply that to your cost projections for the intended period. + - Repeat the process for each of the expected changes. ++Whichever approach worked best for you, compare your forecast with your current budget to see where you're at today. If you filtered data down to a smaller scope or workload: ++- Consider [creating a budget in Cost Management](../costs/tutorial-acm-create-budgets.md) to track that specific scope or workload. Specify filters and set alerts for both actual and forecast costs. +- [Save a view in Cost analysis](../costs/save-share-views.md) to monitor that cost and budget over time. +- Consider [subscribing to scheduled alerts](../costs/save-share-views.md#subscribe-to-scheduled-alerts) for this view to share a chart of the cost trends with stakeholders. 
It can help you drive accountability and awareness as costs change over time before you go over budget. +- Consider [subscribing to anomaly alerts](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) for each subscription to ensure everyone is aware of anomalies as they're identified. ++Consider reviewing forecasts monthly or quarterly to ensure you remain on track with your expectations. ++## Building on the basics ++At this point, you have a manual process for generating a forecast. As you move beyond the basics, consider the following points: ++- Expand coverage of your forecast calculations to include all costs. +- If ingesting cost data into a separate system, use or introduce a forecast capability that spans all of your cost data. Consider using [Automated Machine Learning (AutoML)](../../machine-learning/how-to-auto-train-forecast.md) to minimize your effort. +- Integrate forecast projections into internal budgeting tools. +- Automate cost variance detection and mitigation. + - Implement automated processes to identify and address cost variances in real-time. + - Establish workflows or mechanisms to investigate and mitigate the variances promptly, ensuring cost control and alignment with forecasted budgets. +- Build custom forecast and budget reporting against actuals that's available to all stakeholders. +- If you're [measuring unit costs](capabilities-unit-costs.md), consider establishing a forecast for your unit costs to better understand whether you're trending towards higher or lower cost vs. revenue. +- Establish and automate KPIs, such as: + - Cost vs. forecast to measure the accuracy of the forecast algorithm. + - It can only be performed when there are expected usage patterns and no anomalies. + - Target \<12% variance when there are no anomalies. + - Cost vs. forecast to measure whether costs were on target. + - It's evaluated whether there are anomalies or not to measure the performance of the cloud solution. + - Target 12-20% variance where \<12% would be an optimized team, project, or workload. + - Number of unexpected anomalies during the period that caused cost to go outside the expected range. + - Time to react to forecast alerts. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Forecasting capability](https://www.finops.org/framework/capabilities/forecasting) article in the FinOps Framework documentation. ++## Next steps ++- Budget management +- Managing commitment-based discounts |
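The manual run-rate projection mentioned in the forecasting steps above is straightforward to reproduce once you have daily cost data, for example from a Cost Management export. The following pandas sketch uses made-up numbers and a simple trailing-average extrapolation, which is intentionally cruder than the built-in Cost analysis forecast.

```python
# Illustrative run-rate projection from daily cost data (all numbers are made up).
# A trailing-average extrapolation; the built-in Cost analysis forecast is more sophisticated.
import pandas as pd

# Stand-in for rows from a daily cost export: one row per day with total cost.
daily = pd.DataFrame({
    "date": pd.date_range("2024-06-01", periods=30, freq="D"),
    "cost": [310, 305, 298, 322, 340, 335, 330, 328, 315, 312,
             318, 325, 333, 329, 331, 340, 345, 338, 336, 342,
             350, 348, 352, 355, 349, 353, 360, 358, 362, 365],
})

trailing_days = 14                               # base the run rate on the last two weeks
run_rate = daily.sort_values("date").tail(trailing_days)["cost"].mean()

days_remaining = 31                              # days left in the projection window
month_to_date = daily["cost"].sum()
projected_total = month_to_date + run_rate * days_remaining

budget = 22000                                   # hypothetical budget for the window
print(f"Run rate: ${run_rate:,.0f}/day")
print(f"Projected total: ${projected_total:,.0f} vs budget ${budget:,.0f}")
print("Over budget" if projected_total > budget else "Within budget")
```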
cost-management-billing | Capabilities Frameworks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-frameworks.md | + + Title: FinOps and intersecting frameworks +description: This article helps you understand the FinOps and intersecting frameworks capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# FinOps and intersecting frameworks ++This article helps you understand the FinOps and intersecting frameworks capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**FinOps and intersecting frameworks refers to integrating FinOps practices with other frameworks and methodologies used by an organization.** ++Identify what frameworks and methodologies are used within your organization. Learn about the processes and benefits each framework provides and how they overlap with FinOps. Develop a plan for how processes can be aligned to achieve collective goals. ++## Getting started ++Implementation of this capability is highly dependent on how your organization has adopted each of the following frameworks and methodologies and what tools you've selected for each. See the following articles for details: ++- [IT Asset Management (ITAM)](https://www.finops.org/framework/capabilities/finops-itam/) by FinOps Foundation +- [Sustainability](https://www.finops.org/framework/capabilities/finops-sustainability/) by FinOps Foundation +- [Sustainability workloads](/azure/well-architected/sustainability/sustainability-get-started) +- IT Service Management + - [Azure Monitor integration](../../azure-monitor/alerts/itsmc-overview.md) + - [Azure DevOps and ServiceNow](/azure/devops/pipelines/release/approvals/servicenow) ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [FinOps and intersecting frameworks capability](https://www.finops.org/framework/capabilities/finops-intersection/) article in the FinOps Framework documentation. ++## Next steps ++- [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/overview) +- [Microsoft Azure Well-Architected Framework](/azure/well-architected/) |
cost-management-billing | Capabilities Ingestion Normalization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-ingestion-normalization.md | + + Title: Data ingestion and normalization +description: This article helps you understand the data ingestion and normalization capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Data ingestion and normalization ++This article helps you understand the data ingestion and normalization capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++_Data ingestion and normalization refers to the process of collecting, transforming, and organizing data from various sources into a single, easily accessible repository._ ++Gather cost, utilization, performance, and other business data from cloud providers, vendors, and on-premises systems. Gathering the data can include: ++- Internal IT data. For example, from a configuration management database (CMDB) or IT asset management (ITAM) systems. +- Business-specific data, like organizational hierarchies and metrics that map cloud costs to or quantify business value. For example, revenue, as defined by your organizational and divisional mission statements. ++Consider how data gets reported and plan for data standardization requirements to support reporting on similar data from multiple sources, like cost data from multiple clouds or account types. Prefer open standards and interoperability with and across providers, vendors, and internal tools. It may also require restructuring data in a logical and meaningful way by categorizing or tagging data so it can be easily accessed, analyzed, and understood. ++When armed with a comprehensive collection of cost and usage information tied to business value, organizations can empower stakeholders and accelerate the goals of other FinOps capabilities. Stakeholders are able to make more informed decisions, leading to more efficient use of resources and potentially significant cost savings. ++## Before you begin ++While data ingestion and normalization are critical to long-term efficiency and effectiveness of any FinOps practice, it isn't a blocking requirement for your initial set of FinOps investments. If it is your first iteration through the FinOps lifecycle, consider lighter-weight capabilities that can deliver quicker return on investment, like [Data analysis and showback](capabilities-analysis-showback.md). Data ingestion and normalization can require significant time and effort depending on account size and complexity. We recommend focusing on this process once you have the right level of understanding of the effort and commitment from key stakeholders to support that effort. ++## Getting started ++When you first start managing cost in the cloud, you use the native tools available in the portal or through Power BI. If you need more, you may download the data for local analysis, or possibly build a small report or merge it with another dataset. Eventually, you need to automate this process, which is where "data ingestion" comes in. As a starting point, we focus on ingesting cost data into a common data store. ++- Before you ingest cost data, think about your reporting needs. + - Talk to your stakeholders to ensure you have a firm understanding of what they need. Try to understand their motivations and goals to ensure the data or reporting helps them. 
+ - Identify the data you need, where you can get the data from, and who can give you access. Make note of any common datasets that may require normalization. + - Determine the level of granularity required and how often the data needs to be refreshed. Daily cost data can be a challenge to manage for a large account. Consider monthly aggregates to reduce costs and increase query performance and reliability if that meets your reporting needs. +- Consider using a third-party FinOps platform. + - Review the available [third-party solutions in the Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/searchQuery/cost). +- Select the [cost details solution](../automate/usage-details-best-practices.md) that is right for you. We recommend scheduled exports, which push cost data to a storage account on a daily or monthly basis. + - If you use daily exports, notice that data is pushed into a new file each day. Ensure that you only select the latest day when reporting on costs. +- Determine if you need a data integration or workflow technology to process data. + - In an early phase, you may be able to keep data in the exported storage account without other processing. We recommend that you keep the data there for small accounts with lightweight requirements and minimal customization. + - If you need to ingest data into a more advanced data store or perform data cleanup or normalization, you may need to implement a data pipeline. [Choose a data pipeline orchestration technology](/azure/architecture/data-guide/technology-choices/pipeline-orchestration-data-movement). +- Determine what your data storage requirements are. + - In an early phase, we recommend using the exported storage account for simplicity and lower cost. + - If you need an advanced query engine or expect to hit data size limitations within your reporting tools, you should consider ingesting data into an analytical data store. [Choose an analytical data store](/azure/architecture/data-guide/technology-choices/analytical-data-stores). ++## Building on the basics ++At this point, you have a data pipeline and are ingesting data into a central data repository. As you move beyond the basics, consider the following points: ++- Normalize data to a standard schema to support aligning and blending data from multiple sources. + - For cost data, we recommend using the [FinOps Open Cost & Usage Specification (FOCUS) schema](https://finops.org/focus). +- Complement cloud cost data with organizational hierarchies and budgets. + - Consider labeling or tagging requirements to map cloud costs to organizational hierarchies. +- Enrich cloud resource and solution data with internal CMDB or ITAM data. +- Consider what internal business and revenue metrics are needed to map cloud costs to business value. +- Determine what other datasets are required based on your reporting needs: + - Cost and pricing + - [Azure retail prices](/rest/api/cost-management/retail-prices/azure-retail-prices) for pay-as-you-go rates without organizational discounts. + - [Price sheets](/rest/api/cost-management/price-sheet/download) for organizational pricing for Microsoft Customer Agreement accounts. + - [Price sheets](/rest/api/consumption/price-sheet/get) for organizational pricing for Enterprise Agreement accounts. + - [Balance summary](/rest/api/consumption/balances/get-by-billing-account) for Enterprise Agreement monetary commitment balance. 
+ - Commitment-based discounts + - [Reservation details](/rest/api/cost-management/generate-reservation-details-report) for recommendation details. + - [Benefit utilization summaries](/rest/api/cost-management/generate-benefit-utilization-summaries-report) for savings plans. + - Utilization and efficiency + - [Resource Graph](/rest/api/azureresourcegraph/resourcegraph(2020-04-01-preview)/resources/resources) for Azure Advisor recommendations. + - [Monitor metrics](/rest/api/monitor/metrics-data-plane/batch) for resource usage. + - Resource details + - [Resource Graph](/rest/api/azureresourcegraph/resourcegraph(2020-04-01-preview)/resources/resources) for resource details. + - [Resource changes](/rest/api/resources/changes/list) to list resource changes from the past 14 days. + - [Subscriptions](/rest/api/resources/subscriptions/list) to list subscriptions. + - [Tags](/rest/api/resources/tags/list) for tags that have been applied to resources and resource groups. + - [Azure service-specific APIs](/rest/api/azure/) for lower-level configuration and utilization details. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Data ingestion and normalization capability](https://www.finops.org/framework/capabilities/data-normalization/) article in the FinOps Framework documentation. ++## Next steps ++- Read about [Cost allocation](capabilities-allocation.md) to learn how to allocate costs to business units and applications. +- Read about [Data analysis and showback](capabilities-analysis-showback.md) to learn how to analyze and report on costs. |
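As a small illustration of the normalization guidance above, the following pandas sketch renames a handful of columns from a hypothetical Azure cost export into FOCUS-style names. Export column names vary by account type and the FOCUS column set is much broader than shown, so treat this mapping as an assumption to adjust against your own export schema and the FOCUS specification.

```python
# Illustrative normalization of a cost export into a few FOCUS-style columns.
# Column names on both sides are assumptions; adjust to your export schema and the FOCUS spec.
import pandas as pd

# Stand-in for rows read from an exported cost details file (e.g., pd.read_csv("export.csv")).
export = pd.DataFrame({
    "date": ["2024-06-01", "2024-06-01"],
    "costInBillingCurrency": [12.34, 0.56],
    "billingCurrency": ["USD", "USD"],
    "meterCategory": ["Virtual Machines", "Storage"],
    "resourceId": ["/subscriptions/xxx/.../vm1", "/subscriptions/xxx/.../stor1"],
    "subscriptionId": ["xxx", "xxx"],
})

column_map = {                      # export column -> FOCUS-style column (assumed mapping)
    "date": "ChargePeriodStart",
    "costInBillingCurrency": "BilledCost",
    "billingCurrency": "BillingCurrency",
    "meterCategory": "ServiceName",
    "resourceId": "ResourceId",
    "subscriptionId": "SubAccountId",
}

focus = export.rename(columns=column_map)[list(column_map.values())]
focus["ChargePeriodStart"] = pd.to_datetime(focus["ChargePeriodStart"], utc=True)
focus["ProviderName"] = "Microsoft"  # constant column so multicloud data can be blended

print(focus)
```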
cost-management-billing | Capabilities Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-onboarding.md | + + Title: Onboarding workloads +description: This article helps you understand the onboarding workloads capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Onboarding workloads ++This article helps you understand the onboarding workloads capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Onboarding workloads refers to the process of bringing new and existing applications into the cloud based on their financial and technical feasibility.** ++Establish a process to incorporate new and existing projects into the cloud and your FinOps practice. Introduce new stakeholders to the FinOps culture and approach. ++Assess projects' technical feasibility given current cloud resources and capabilities and financial feasibility given the return on investment, current budget, and projected forecast. ++A streamlined onboarding process ensures teams have a smooth transition into the cloud without sacrificing technical, financial, or business principles or goals and minimizing disruptions to business operations. ++## Getting started ++Onboarding projects is an internal process that depends solely on your technical, financial, and business governance policies. ++- Start by familiarizing yourself with existing governance policies and onboarding processes within the organization. + - Should FinOps be added to an existing onboarding process? + - Are there working processes you can use or copy? + - Are there any stakeholders who can help you get your process stood up? + - Who has access to provision new workloads in the cloud? How are you notified that they're created? + - What governance measures exist to structure and tag new cloud resources? For example, Azure Policy enforcing tagging requirements. +- In the beginning, keep it simple and focus on the basics. + - Introduce new stakeholders to the FinOps Framework by having them review [What is FinOps](overview-finops.md). + - Help them learn your culture and processes. + - Determine if you have the budget. + - Ensure the team runs through the [Forecasting capability](capabilities-forecasting.md) to estimate costs. + - Evaluate whether the budget has capacity for the estimated cost. + - Request department heads reprioritize existing projects to find capacity either by using capacity from under-utilized projects or by deprioritizing existing projects. + - Escalate through leadership as needed until budget capacity is established. + - Consider updating forecasts within the scope of the budget changes to ensure feasibility. ++## Building on the basics ++At this point, you have a simple process where stakeholders are introduced to FinOps, and new projects are at least being vetted against budget capacity. As you move beyond the basics, consider the following points: ++- Automate the onboarding process. + - Consider requiring simple FinOps training. + - Consider budget change request and approval process that automates reprioritization and change notification to stakeholders. +- Introduce technical feasibility into the approval process. 
Some considerations to include: + - Cost efficiency: Implementation/migration, infrastructure, support + - Resiliency: Performance, reliability, security + - Sustainability: Carbon footprint ++## Developing a process ++Document your onboarding process. Use existing tools and processes where available and strive to automate as much as possible to make the process lightweight, effortless, and seamless. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Onboarding workloads capability](https://www.finops.org/framework/capabilities/onboarding-workloads/) article in the FinOps Framework documentation. ++## Next steps ++- [Forecasting](capabilities-forecasting.md) +- [Cloud policy and governance](capabilities-policy.md) |
cost-management-billing | Capabilities Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-policy.md | + + Title: Cloud policy and governance +description: This article helps you understand the cloud policy and governance capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Cloud policy and governance ++This article helps you understand the cloud policy and governance capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Cloud policy and governance refers to the process of defining, implementing, and monitoring a framework of rules that guide an organization's FinOps efforts.** ++Define your governance goals and success metrics. Review and document how existing policies are updated to account for FinOps efforts. Review with all stakeholders to get buy-in and endorsement. ++Establish a rollout plan that starts with audit rules and slowly (and safely) expands coverage to drive compliance without negatively impacting engineering efforts. ++Implementing a policy and governance strategy enables organizations to sustainably implement FinOps at scale. Policy and governance can act as a multiplier to FinOps efforts by building them natively into day-to-day operations. ++## Getting started ++When you first start managing cost in the cloud, you use the native compliance tracking and enforcement tools. ++- Review your existing FinOps processes to identify opportunities for policy to automate enforcement. Some examples: + - [Enforce your tagging strategy](../../governance/policy/tutorials/govern-tags.md) to support different capabilities, like: + - Organizational reporting hierarchy tags for [cost allocation](capabilities-allocation.md). + - Financial reporting tags for [chargeback](capabilities-chargeback.md). + - Environment and application tags for [workload management](capabilities-workloads.md). + - Business and application owners for [anomalies](capabilities-anomalies.md). + - Monitor required and suggested alerting for [anomalies](capabilities-anomalies.md) and [budgets](capabilities-budgets.md). + - Block or audit the creation of more expensive resource SKUs (for example, E-series virtual machines). + - Implementation of cost recommendations and unused resources for [utilization and efficiency](capabilities-efficiency.md). + - Application of Azure Hybrid Benefit for [utilization and efficiency](capabilities-efficiency.md). + - Monitor [commitment-based discounts](capabilities-commitment-discounts.md) coverage. +- Identify what policies can be automated through [Azure Policy](../../governance/policy/overview.md) and which need other tooling. +- Review and [implement built-in policies](../../governance/policy/assign-policy-portal.md) that align with your needs and goals. +- Start small with audit policies and expand slowly (and safely) to ensure engineering efforts aren't negatively impacted. + - Test rules before you roll them out and consider a staged rollout where each stage has enough time to get used and garner feedback. Start small. ++## Building on the basics ++At this point, you have a basic set of policies in place that are being managed across the organization. As you move beyond the basics, consider the following points: ++- Formalize compliance reporting and promote within leadership conversations across stakeholders. 
+ - Map governance efforts to FinOps efficiencies that can be mapped back to more business value with less effort. +- Expand coverage of more scenarios. + - Consider evaluating ways to quantify the impact of each rule in cost and/or business value. +- Integrate policy and governance into every conversation to establish a plan for how you want to automate the tracking and application of new policies. +- Consider advanced governance scenarios outside of Azure Policy. Build monitoring solutions using systems like [Power Automate](/power-automate/getting-started) or [Logic Apps](../../logic-apps/logic-apps-overview.md). ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Cloud policy and governance capability](https://www.finops.org/framework/capabilities/policy-governance/) article in the FinOps Framework documentation. ++## Next steps ++- [Establishing a FinOps culture](capabilities-culture.md) +- [Workload management and automation](capabilities-workloads.md) |
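To illustrate the "start small with audit policies" guidance above, here's a minimal Python sketch that creates a custom policy definition that only audits (rather than denies) resources missing a cost allocation tag. The subscription ID, tag name, and `api-version` are placeholders, and built-in tag policies may already cover this scenario, so check those before adding a custom definition.

```python
# Minimal sketch: a custom Azure Policy definition that *audits* resources missing a tag.
# Subscription ID, tag name, and api-version are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
TAG_NAME = "cost-center"                                   # example tag used for cost allocation
API_VERSION = "2021-06-01"  # assumption; confirm against the Policy REST reference

definition = {
    "properties": {
        "displayName": f"Audit resources missing the '{TAG_NAME}' tag",
        "policyType": "Custom",
        "mode": "Indexed",  # only evaluates resource types that support tags and location
        "policyRule": {
            "if": {"field": f"tags['{TAG_NAME}']", "exists": "false"},
            "then": {"effect": "audit"},  # audit first; consider deny only after a safe rollout
        },
    }
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/providers/Microsoft.Authorization/policyDefinitions/audit-{TAG_NAME}-tag")
resp = requests.put(url, params={"api-version": API_VERSION},
                    headers={"Authorization": f"Bearer {token}"}, json=definition)
resp.raise_for_status()
print(resp.json()["id"])
```

A definition on its own doesn't evaluate anything; it still needs a policy assignment at the scope you want to audit.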
cost-management-billing | Capabilities Shared Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-shared-cost.md | + + Title: Managing shared cost +description: This article helps you understand the managing shared cost capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Managing shared cost ++This article helps you understand the managing shared cost capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Managing shared cost refers to the process of redistributing the cost of shared services to the teams and applications that utilized them.** ++Identify shared costs and develop an allocation plan that defines the rules and methods for dividing the shared costs fairly and equitably. Track and report shared costs and their allocation to the relevant stakeholders. Regularly review and update the allocation plan to ensure it remains accurate and fair. ++Effectively managing shared costs reduces overhead, increases transparency and accountability, and better aligns cloud costs to business value while maximizing the efficiencies and cost savings from shared services. ++## Before you begin ++Before you start, it's important to have a clear understanding of your organization's goals and priorities when it comes to managing shared costs. Keep in mind that not all shared costs may need to be redistributed, and some are more effectively managed by other means. Carefully evaluate each shared cost to determine the most appropriate approach for your organization. ++This guide doesn't cover commitment-based discounts, like reservations and savings plans. For details about how to handle showback and chargeback for them, refer to [Managing commitment-based discounts](capabilities-commitment-discounts.md). ++## Getting started ++When you first start managing cost in the cloud, you use the native allocation tools to manage shared costs. Start by identifying shared costs and how they should be handled. ++- If your organization previously implemented the [Cost allocation capability](capabilities-allocation.md), refer back to any notes about unallocated or shared costs. +- Notify stakeholders that you're evaluating shared costs and request details about any known scenarios. Self-identification can save you significant time and effort. +- Review the services that have been purchased and are being used with the [Services view in Cost analysis](../costs/cost-analysis-built-in-views.md#break-down-product-and-service-costs). +- Familiarize yourself with each service to determine whether it's designed for, or could be used for, shared resources. A few examples of commonly shared services are: + - Application hosting services, like Azure Kubernetes Service, Azure App Service, and Azure Virtual Desktop. + - Observability tools, like Azure Monitor and Log Analytics. + - Management and security tools, like Microsoft Defender for Cloud and DevTest Labs. + - Networking services, like ExpressRoute. + - Database services, like Cosmos DB and SQL databases. + - Collaboration and productivity tools, like Microsoft 365. +- Contact stakeholders who are responsible for the potentially shared services. Make sure they understand how the services are shared and how their costs are allocated today. If the costs aren't accounted for, discuss how allocation could or should be done.
+- Use [cost allocation rules in Microsoft Cost Management](../costs/allocate-costs.md) to redistribute shared costs based on static percentages or compute, network, or storage costs. +- Regularly review and update allocation rules to ensure they remain accurate and fair. ++## Building on the basics ++At this point, your simple cost allocation scenarios may be addressed. You're left with more complicated scenarios that require more effort to accurately quantify and redistribute. As you move beyond the basics, consider the following points: ++- Establish and track common KPIs, like the percentage of unallocated shared costs. +- Use utilization data from [Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md) where possible to understand service usage. +- Consider using application telemetry to quantify the distribution of shared costs. It's discussed more in [Measuring unit costs](capabilities-unit-costs.md). +- Automate the process of identifying the percentage breakdown of shared costs and consider using allocation rules in Cost Management to redistribute the costs. +- Automate cost allocation rules to update their respective percentages based on changing usage patterns. +- Consider sharing targeted reporting about the distribution of shared costs with relevant stakeholders. +- Build a reporting process to raise awareness of and drive accountability for unallocated shared costs. +- Share guidance with stakeholders on how they can optimize shared costs. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Managing shared cost](https://www.finops.org/framework/capabilities/manage-shared-cloud-cost/) article in the FinOps Framework documentation. ++## Next steps ++- [Data analysis and showback](capabilities-analysis-showback.md) +- [Chargeback and finance integration](capabilities-chargeback.md) +- [Measuring unit costs](capabilities-unit-costs.md) |
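As an illustration of the allocation approach described above, here's a small, self-contained Python sketch (not from the article) that splits a shared monthly cost across teams in proportion to a usage metric, which is the same idea a static-percentage cost allocation rule applies. All names and figures are made up for the example.

```python
# A minimal, illustrative sketch: redistribute a shared monthly cost across teams in
# proportion to a usage metric. All team names and numbers are hypothetical.
from typing import Dict

def allocate_shared_cost(total_cost: float, usage_by_team: Dict[str, float]) -> Dict[str, float]:
    """Split total_cost across teams proportionally to their share of usage."""
    total_usage = sum(usage_by_team.values())
    if total_usage == 0:
        raise ValueError("No usage recorded; treat the cost as unallocated overhead instead.")
    return {
        team: round(total_cost * usage / total_usage, 2)
        for team, usage in usage_by_team.items()
    }

# Example: a shared cluster costing 12,000 per month, split by pod CPU hours.
shared_cluster_cost = 12_000.00
cpu_hours = {"team-checkout": 5_400, "team-search": 3_100, "team-recs": 1_500}
print(allocate_shared_cost(shared_cluster_cost, cpu_hours))
# {'team-checkout': 6480.0, 'team-search': 3720.0, 'team-recs': 1800.0}
```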
cost-management-billing | Capabilities Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-structure.md | + + Title: Establishing a FinOps decision and accountability structure +description: This article helps you understand the establishing a FinOps decision and accountability structure capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/22/2023+++++++# Establishing a FinOps decision and accountability structure ++This article helps you understand the establishing a FinOps decision and accountability structure capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Establishing a FinOps decision and accountability structure involves defining roles and responsibilities, bridging gaps between teams, and enabling cross-functional collaboration and conflict resolution.** ++Define the roles, responsibilities, and activities required to effectively manage cost within the organization. Delegate accountability and decision-making authority to a cross-functional steering committee that can provide balanced oversight for technical, financial, and business priorities. ++Describe the steering committee "chain of command" and how information moves within the company, aligning with the organization's goals and objectives. Document the principles and processes needed to address challenges and resolve conflicts. ++Establishing a FinOps steering committee can help stakeholders within an organization align on a single process. The committee can also decide on the "rules of engagement" to effectively adopt and drive FinOps, all while ensuring accountability, fairness, and transparency, and making sure senior decision makers can make informed decisions quickly. ++## Getting started ++When you first start managing cost in the cloud, you may not need to build a FinOps steering committee. The need for a more formal process increases as your organization grows and adopts the cloud more. Consider the following starting points: ++- Start a recurring meeting with representatives from finance, business, and engineering teams. + - If you have a central team responsible for cost management, consider having them chair the committee. +- Discuss and document the roles and responsibilities of each committee member. + - The FinOps Foundation proposes one potential [responsibility assignment matrix (RACI model)](https://www.finops.org/wg/adopting-finops/#accountability-and-expectations-by-team-raci--daci-modeling). +- Collaborate on [planning your first FinOps iteration](conduct-finops-iteration.md). + - Make notes where there are differing perspectives and opinions. Discuss those topics for alignment in the future. + - Start small and find common ground to enable the committee to execute a successful iteration. It's OK if you don't solve every problem. + - Document decisions and outline processes, key contacts, and required activities. Documentation can be a small checklist in early stages. Focus on winning as one rather than documenting everything and executing perfectly. ++## Building on the basics ++At this point, you have a regular cadence of meetings, but not much structure. As you move beyond the basics, consider the following points: ++- Review the [FinOps Framework guidance](https://www.finops.org/framework/capabilities/decision-accountability-structure/) for how to best scale out your FinOps steering committee efforts. 
+- Review the Cloud Adoption Framework guidance for tips on how to [drive organizational alignment](/azure/cloud-adoption-framework/organize) on a larger scale. You may find opportunities to align with other governance initiatives. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, see the [Establishing a FinOps decision and accountability structure capability](https://www.finops.org/framework/capabilities/decision-accountability-structure/) article in the FinOps Framework documentation. ++## Next steps ++- [Onboarding workloads](capabilities-workloads.md) |
cost-management-billing | Capabilities Unit Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-unit-costs.md | + + Title: Measuring unit costs +description: This article helps you understand the measuring unit costs capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/23/2023+++++++# Measuring unit costs ++This article helps you understand the measuring unit costs capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++ **_Measuring unit costs refers to the process of calculating the cost of a single unit of a business that can show the business value of the cloud._** ++Identify what a single unit is for your business, like a sale transaction for an ecommerce site or a user for a social app. Map each unit to the cloud services that support it. Split the cost of shared infrastructure by using utilization data to quantify the total cost of each unit. ++Measuring unit costs provides insights into profitability and allows organizations to make data-driven business decisions regarding cloud investments. Unit economics is what ties the cloud to measurable business value. ++## Before you begin ++Before you can effectively measure unit costs, you need to familiarize yourself with [how you're charged for the services you use](https://azure.microsoft.com/pricing#product-pricing). Understanding the factors that contribute to costs helps you break down the usage and costs and map them to individual units. Cost-contributing factors include compute, storage, networking, and data transfer. How your service usage aligns with the various pricing models (for example, pay-as-you-go, reservations, and Azure Hybrid Benefit) also impacts your costs. ++## Getting started ++Measuring unit costs isn't a simple task. Unit economics requires a deep understanding of your architecture and needs multiple datasets to pull together the full picture. The exact data you need depends on the services you use and the telemetry you have in place. ++- Start with application telemetry. + - The more comprehensive your application telemetry is, the simpler unit economics can be to generate. Log when critical functions are executed and how long they run. You can use that to deduce the run time of each unit, or of a function that correlates back to the unit. + - When application telemetry isn't directly possible, consider workarounds that can log telemetry, like [API Management](../../api-management/api-management-key-concepts.md) or even [configuring alert rules in Azure Monitor](../../azure-monitor/alerts/alerts-create-new-alert-rule.md) that trigger [action groups](../../azure-monitor/alerts/action-groups.md) that log the telemetry. The goal is to get all usage telemetry into a single, consistent data store. + - If you don't have telemetry in place, consider setting up [Application Insights](../../azure-monitor/app/app-insights-overview.md), which is an extension of Azure Monitor. +- Use [Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md) to pull resource utilization data. + - If you don't have telemetry, see what metrics are available in Azure Monitor that can map your application usage to the costs. You need anything that can break down the usage of your resources to give you an idea of what percentage of the billed usage was from one unit vs. another. 
+ - If you don't see the data you need in metrics, also check [logs and traces in Azure Monitor](../../azure-monitor/overview.md#data-platform). It may not be a direct correlation to usage but might be able to give you some indication of usage. +- Use service-specific APIs to get detailed usage telemetry. + - Every service uses Azure Monitor for a core set of logs and metrics. Some services also provide more detailed monitoring and utilization APIs to get more details than are available in Azure Monitor. Explore [Azure service documentation](../../index.yml) to find the right API for the services you use. +- Using the data you've collected, quantify the percentage of usage coming from each unit. + - Use pricing and usage data to facilitate this effort. It's typically best done after [Data ingestion and normalization](capabilities-ingestion-normalization.md) due to the high amount of data required to calculate accurate unit costs. + - Some amount of usage isn't mapped back to a unit. There are several ways to account for this cost, like distributing based on those known usage percentages or treating it as overhead cost that should be minimized separately. ++## Building on the basics ++- Automate any aspects of the unit cost calculation that haven't been fully automated. +- Consider expanding unit cost calculations to include other costs, like external licensing, on-premises operational costs, and labor. +- Build unit costs into business KPIs to maximize the value of the data you've collected. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Measuring unit costs capability](https://www.finops.org/framework/capabilities/measure-unit-costs/) article in the FinOps Framework documentation. ++## Next steps ++- [Data analysis and showback](capabilities-analysis-showback.md) +- [Managing shared costs](capabilities-shared-cost.md) |
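To illustrate the calculation these steps lead up to, the following Python sketch (an illustration, not the article's method) combines a month of service costs with telemetry-derived usage shares to estimate a blended cost per unit. The service names, percentages, and unit counts are hypothetical.

```python
# An illustrative sketch: estimate a blended cost per unit (for example, per transaction)
# by attributing a share of each service's monthly cost to the unit and dividing by the
# number of units delivered. All values below are made up for the example.
from typing import Dict

def unit_cost(costs_by_service: Dict[str, float],
              usage_share_by_service: Dict[str, float],
              units_delivered: int) -> float:
    """Sum each service's cost attributed to the unit, then divide by units delivered."""
    attributed = sum(
        cost * usage_share_by_service.get(service, 0.0)
        for service, cost in costs_by_service.items()
    )
    return attributed / units_delivered

monthly_costs = {"app-service-plan": 2_300.0, "sql-database": 1_700.0, "storage": 400.0}
# Fraction of each service's usage driven by the 'checkout transaction' unit,
# derived from telemetry (hypothetical values).
share_from_checkout = {"app-service-plan": 0.35, "sql-database": 0.50, "storage": 0.20}
print(f"Cost per checkout: ${unit_cost(monthly_costs, share_from_checkout, 90_000):.4f}")
```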
cost-management-billing | Capabilities Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-workloads.md | + + Title: Workload management and automation +description: This article helps you understand the workload management and automation capability within the FinOps Framework and how to implement that in the Microsoft Cloud. +keywords: ++ Last updated : 06/23/2023+++++++# Workload management and automation ++This article helps you understand the workload management and automation capability within the FinOps Framework and how to implement that in the Microsoft Cloud. ++## Definition ++**Workload management and automation refers to running resources only when necessary and at the level or capacity needed for the active workload.** ++Tag resources based on their up-time requirements. Review resource usage patterns and determine if they can be scaled down or even shut down (to stop billing) during off-peak hours. Consider cheaper alternatives to reduce costs. ++An effective workload management and automation plan can significantly reduce costs by adjusting configuration to match supply to demand dynamically, ensuring the most effective utilization. ++## Getting started ++When you first start working with a service, consider the following points: ++- Can the service be stopped (and, if so, does stopping it stop billing)? + - If the service can't be stopped, review alternatives to determine if there are any options that can be stopped to stop billing. + - Pay close attention to noncompute charges that may continue to be billed when a resource is stopped so you're not surprised. Storage is a common example of a cost that continues to be charged even if a compute resource that was using the storage is no longer running. +- Does the service support serverless compute? + - Serverless compute tiers can reduce costs when not active. Some examples: [Azure SQL Database](/azure/azure-sql/database/serverless-tier-overview), [Azure SignalR Service](/azure/azure-signalr/concept-service-mode), [Cosmos DB](../../cosmos-db/serverless.md), [Synapse Analytics](../../synapse-analytics/sql/on-demand-workspace-overview.md), [Azure Databricks](/azure/databricks/serverless-compute/). +- Does the service support autostop or autoshutdown functionality? + - Some services support autostop natively, like [Microsoft Dev Box](../../dev-box/how-to-configure-stop-schedule.md), [Azure DevTest Labs](../../devtest-labs/devtest-lab-auto-shutdown.md), [Azure Lab Services](../../lab-services/how-to-configure-auto-shutdown-lab-plans.md), and [Azure Load Testing](../../load-testing/how-to-define-test-criteria.md#auto-stop-configuration). + - If you use a service that supports being stopped, but not autostopping, consider using a lightweight flow in [Power Automate](/power-automate/getting-started) or [Logic Apps](../../logic-apps/logic-apps-overview.md). +- Does the service support autoscaling? + - If the service supports [autoscaling](/azure/architecture/best-practices/auto-scaling), configure it to scale based on your application's needs. + - Autoscaling can work with autostop behavior for maximum efficiency. +- Consider automatically stopping and manually starting nonproduction resources during work hours to avoid unnecessary costs. + - Avoid automatically starting nonproduction resources that aren't used every day. + - If you choose to autostart, be aware of vacations and holidays where resources may get started automatically but not be used. 
+ - Consider tagging manually stopped resources. [Save a query in Azure Resource Graph](../../governance/resource-graph/first-query-portal.md) or a view in the All resources list and pin it to the Azure portal dashboard to ensure all resources are stopped. +- Consider architectural models such as containers and serverless to only use resources when they're needed, and to drive maximum efficiency in key services. ++## Building on the basics ++At this point, you have set up autoscaling and autostop behaviors. As you move beyond the basics, consider the following points: ++- Automate the process of automatically scaling or stopping resources that don't support it or have more complex requirements. + - Consider using automation services, like [Azure Automation](../../automation/automation-solution-vm-management.md) or [Azure Functions](../../azure-functions/start-stop-vms/overview.md). +- [Assign an "Env" or Environment tag](../../azure-resource-manager/management/tag-resources.md) to identify which resources are for development, testing, staging, production, etc. + - Prefer assigning tags at a subscription or resource group level. Then enable the [tag inheritance policy for Azure Policy](../../governance/policy/samples/built-in-policies.md#tags) and [Cost Management tag inheritance](../costs/enable-tag-inheritance.md) to cover resources that don't emit tags with usage data. + - Consider setting up automated scripts to stop resources with specific up-time profiles (for example, stop developer VMs during off-peak hours if they haven't been used in 2 hours). + - Document up-time expectations based on specific tag values and what happens when the tag isn't present. + - [Use Azure Policy to track compliance](../../governance/policy/how-to/get-compliance-data.md) with the tag policy. + - Use Azure Policy to enforce specific configuration rules based on environment. + - Consider using "override" tags to bypass the standard policy when needed. Track the costs and report them to stakeholders to ensure accountability. +- Consider establishing and tracking KPIs for low-priority workloads, like development servers. ++## Learn more at the FinOps Foundation ++This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Workload management and automation capability](https://www.finops.org/framework/capabilities/workload-management-automation) article in the FinOps Framework documentation. ++## Next steps ++- [Resource utilization and efficiency](capabilities-efficiency.md) +- [Cloud policy and governance](capabilities-policy.md) |
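As a concrete (and purely illustrative) example of the automation described above, the sketch below deallocates virtual machines tagged `Env=dev` by calling the Azure Resource Manager REST API; in practice you'd run something like this on a schedule from Azure Automation or an Azure Function. The subscription ID is a placeholder and the `api-version` is an assumption that may need adjusting.

```python
# A minimal sketch (an assumption-laden illustration, not the article's solution):
# deallocate virtual machines tagged Env=dev outside working hours using the
# Azure Resource Manager REST API with azure-identity and requests.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"  # placeholder
API_VERSION = "2023-03-01"  # assumed Microsoft.Compute api-version
ARM = "https://management.azure.com"

def arm_headers() -> dict:
    token = DefaultAzureCredential().get_token(f"{ARM}/.default").token
    return {"Authorization": f"Bearer {token}"}

def deallocate_dev_vms() -> None:
    # List all VMs in the subscription, then deallocate the ones tagged Env=dev.
    vms_url = (
        f"{ARM}/subscriptions/{SUBSCRIPTION}/providers/Microsoft.Compute/virtualMachines"
        f"?api-version={API_VERSION}"
    )
    vms = requests.get(vms_url, headers=arm_headers(), timeout=30).json().get("value", [])
    for vm in vms:
        if (vm.get("tags") or {}).get("Env", "").lower() == "dev":
            deallocate_url = f"{ARM}{vm['id']}/deallocate?api-version={API_VERSION}"
            resp = requests.post(deallocate_url, headers=arm_headers(), timeout=30)
            print(f"Deallocate {vm['name']}: HTTP {resp.status_code}")

if __name__ == "__main__":
    deallocate_dev_vms()
```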
cost-management-billing | Determine Reservation Purchase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/determine-reservation-purchase.md | Enterprise Agreement customers can use the VM RI Coverage reports for VMs and pu Reservation purchase recommendations are available in [Azure Advisor](https://portal.azure.com/#blade/Microsoft_Azure_Expert/AdvisorMenuBlade/overview). -- Advisor has only single-subscription scope recommendations.-- Advisor recommendations are calculated using 30-day look-back period. The projected savings are for a three-year reservation term.-- If you purchase a shared-scope reservation, Advisor reservation purchase recommendations can take up to 30 days to disappear.+- Advisor has only single-subscription scope recommendations. If you want to see recommendations for the entire billing scope (Billing account or billing profile), then: +- In the Azure portal, navigate to Reservations > Add and then select the type that you want to see the recommendations for. +- The recommendations quantity and savings are for a three-year reservation, where available. If a three-year reservation isn't sold for the service, the recommendation is calculated using the one-year reservation price. +- The recommendation calculations include any special discounts that you might have on your on-demand usage rates. +- If you purchase a shared-scope reservation, Advisor reservation purchase recommendations can take up to five days to disappear. +- Azure classic compute resources such as classic VMs are explicitly excluded from reservation recommendations. Microsoft recommends that users avoid making long-term commitments to legacy services that are being deprecated. ## Recommendations using APIs Use the [Reservation Recommendations](/rest/api/consumption/reservationrecommend - [Manage Azure Reservations](manage-reserved-vm-instance.md) - [Understand reservation usage for your subscription with pay-as-you-go rates](understand-reserved-instance-usage.md) - [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)-- [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md)+- [Windows software costs not included with reservations](reserved-instance-windows-software-costs.md) |
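To show what the Reservation Recommendations API mentioned above can look like in practice, here's a hedged Python sketch that lists recommendations for a subscription. The subscription ID is a placeholder, the `api-version` is an assumption, and the field names are based on the documented response shape, so verify them against the API reference before relying on them.

```python
# An illustrative sketch: pull reservation purchase recommendations for a subscription
# with the Consumption REST API. Subscription ID is a placeholder; api-version assumed.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"  # placeholder
URL = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/providers/Microsoft.Consumption/reservationRecommendations"
    "?api-version=2021-10-01"  # assumed api-version
)

def list_reservation_recommendations() -> None:
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
    response = requests.get(URL, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    response.raise_for_status()
    for rec in response.json().get("value", []):
        props = rec.get("properties", {})
        print(
            f"{rec.get('sku', 'unknown SKU')}: "
            f"recommended quantity {props.get('recommendedQuantity')}, "
            f"term {props.get('term')}, "
            f"estimated net savings {props.get('netSavings')}"
        )

if __name__ == "__main__":
    list_reservation_recommendations()
```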
cost-management-billing | Reserved Instance Purchase Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reserved-instance-purchase-recommendations.md | More information about the recommendation appears when you select **See details* The chart and estimated values change when you increase the recommended quantity. When you increase the reservation quantity, your savings are reduced because you end up with reduced reservation use. In other words, you pay for reservations that aren't fully used. -If you lower the reservation quantity, your savings are also reduced. Although utilization is increased, there might be periods when your reservations don't fully cover your use. Usage beyond your reservation quantity is used by more expensive pay-as-you-go resources. The following example image illustrates the point. We've manually reduced the reservation quantity to 4. The reservation utilization is increased, but the overall savings are reduced because pay-as-you go costs are present. +If you lower the reservation quantity, your savings are also reduced. Although utilization is increased, there might be periods when your reservations don't fully cover your use. Usage beyond your reservation quantity is used by more expensive pay-as-you-go resources. The following example image illustrates the point. We've manually reduced the reservation quantity to 4. The reservation utilization is increased, but the overall savings are reduced because pay-as-you-go costs are present. :::image type="content" source="./media/reserved-instance-purchase-recommendations/recommended-quantity-details-changed.png" alt-text="Example showing changed reservation purchase recommendation details" ::: Reservation purchase recommendations are available in Azure Advisor. Keep in min - Advisor has only single-subscription scope recommendations. If you want to see recommendations for the entire billing scope (Billing account or billing profile), then: - In the Azure portal, navigate to **Reservations** > **Add** and then select the type that you want to see the recommendations for.-- Recommendations available in Advisor consider your past 30-day usage trend. - The recommendations quantity and savings are for a three-year reservation, where available. If a three-year reservation isn't sold for the service, the recommendation is calculated using the one-year reservation price. - The recommendation calculations include any special discounts that you might have on your on-demand usage rates. - If you purchase a shared-scope reservation, Advisor reservation purchase recommendations can take up to five days to disappear. |
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 06/15/2023 Last updated : 06/26/2023 zone_pivot_groups: connect-aws-accounts The native cloud connector requires: - (Optional) Select **Configure**, to edit the configuration as required. + > [!NOTE] + > The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of ["Disconnected" or "Expired"](https://learn.microsoft.com/azure/azure-arc/servers/overview)) will be removed after 7 days. This process removes irrelevant Azure Arc entities, ensuring only Azure Arc servers related to existing instances are displayed. + 1. By default, the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan. > [!Note] |
defender-for-cloud | Quickstart Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md | To have full visibility to Microsoft Defender for Servers security content, ensu - **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with Defender for Servers plan enabled that aren't connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines. + > [!NOTE] + > The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of ["Disconnected" or "Expired"](https://learn.microsoft.com/azure/azure-arc/servers/overview)) will be removed after 7 days. This process removes irrelevant Azure Arc entities, ensuring only Azure Arc servers related to existing instances are displayed. + - Ensure you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud). - Other extensions should be enabled on the Arc-connected machines. |
defender-for-cloud | Recommendations Reference Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-aws.md | Title: Reference table for all Microsoft Defender for Cloud recommendations for AWS resources description: This article lists Microsoft Defender for Cloud's security recommendations that help you harden and protect your AWS resources. Previously updated : 01/24/2023 Last updated : 06/27/2023 # Security recommendations for AWS resources - a reference guide -This article lists the recommendations you might see in Microsoft Defender for Cloud if you've connected an -AWS account from the **Environment settings** page. The recommendations shown in your environment depend -on the resources you're protecting and your customized configuration. +This article lists the recommendations you might see in Microsoft Defender for Cloud if you've connected an AWS account from the **Environment settings** page. The recommendations shown in your environment depend on the resources you're protecting and your customized configuration. To learn about how to respond to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). impact on your secure score. [!INCLUDE [asc-recs-aws-container](../../includes/mdfc/mdfc-recs-aws-container.md)] +### Data plane recommendations ++All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under AWS after [enabling the Azure policy extension](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening). + ## <a name='recs-aws-data'></a> AWS Data recommendations [!INCLUDE [asc-recs-aws-data](../../includes/mdfc/mdfc-recs-aws-data.md)] |
defender-for-cloud | Recommendations Reference Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-gcp.md | Title: Reference table for all Microsoft Defender for Cloud recommendations for GCP resources description: This article lists Microsoft Defender for Cloud's security recommendations that help you harden and protect your GCP resources. Previously updated : 01/24/2023 Last updated : 06/27/2023 # Security recommendations for GCP resources - a reference guide -This article lists the recommendations you might see in Microsoft Defender for Cloud if you've connected a -GCP project from the **Environment settings** page. The recommendations shown in your environment depend -on the resources you're protecting and your customized configuration. +This article lists the recommendations you might see in Microsoft Defender for Cloud if you've connected a GCP project from the **Environment settings** page. The recommendations shown in your environment depend on the resources you're protecting and your customized configuration. To learn about how to respond to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). impact on your secure score. [!INCLUDE [asc-recs-gcp-container](../../includes/mdfc/mdfc-recs-gcp-container.md)] +### Data plane recommendations ++All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under GCP after [enabling the Azure policy extension](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening). ++ ## <a name='recs-gcp-data'></a> GCP Data recommendations [!INCLUDE [asc-recs-gcp-data](../../includes/mdfc/mdfc-recs-gcp-data.md)] |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you can find them in the [What's |--|--| | [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023 | [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | July 2023 |-| [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | July 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | July 2023 | | [General availability release of agentless container posture in Defender CSPM](#general-availability-ga-release-of-agentless-container-posture-in-defender-cspm) | July 2023 | | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | July 2023 | |
devtest | Concepts Gitops Azure Devtest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/concepts-gitops-azure-devtest.md | Title: GitOps & Azure Dev/Test offer description: Use GitOps in association with Azure Dev/Test--++ ms.prod: visual-studio-windows Last updated 10/20/2021 |
devtest | Concepts Security Governance Devtest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/concepts-security-governance-devtest.md | Title: Security, governance, and Azure Dev/Test subscriptions description: Manage security and governance within your organization's Dev/Test subscriptions. --++ ms.prod: visual-studio-windows Last updated 10/20/2021 |
devtest | How To Add Users Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-add-users-directory.md | Title: Add users to your Azure Dev/Test developer directory tenant description: A how-to guide for adding users to your Azure credit subscription and managing their access with role-based controls.--++ ms.prod: visual-studio-windows Last updated 10/12/2021 |
devtest | How To Change Directory Tenants Visual Studio Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-change-directory-tenants-visual-studio-azure.md | Title: Change directory tenants with your individual VSS Azure subscriptions description: Change directory tenants with your Azure subscriptions.--++ ms.prod: visual-studio-windows Last updated 10/12/2021 |
devtest | How To Manage Monitor Devtest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-manage-monitor-devtest.md | Title: Managing and monitoring your Azure Dev/Test subscriptions description: Manage your Azure Dev/Test subscriptions with the flexibility of Azure's cloud environment. This guide also covers Azure Monitor to help maximize availability and performance for applications and services.--++ ms.prod: visual-studio-windows Last updated 10/12/2021 |
devtest | How To Manage Reliability Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-manage-reliability-performance.md | Title: Manage reliability and performance with Azure Dev/Test subscriptions description: Build reliability into your applications with Dev/Test subscriptions. --++ ms.prod: visual-studio-windows Last updated 10/12/2021 |
devtest | How To Remove Credit Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-remove-credit-limits.md | Title: Removing credit limits and changing Azure Dev/Test offers description: How to remove credit limits and change Azure Dev/Test offers. Switch from pay-as-you-go to another offer.--++ ms.prod: visual-studio-windows Last updated 10/04/2021 |
devtest | How To Sign Into Azure With Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-sign-into-azure-with-github.md | Title: Sign into Azure Dev/Test with your GitHub credentials description: Sign into an individual Monthly Azure Credit Subscription using GitHub credentials.--++ Last updated 10/12/2021 ms.prod: visual-studio-windows |
devtest | Overview What Is Devtest Offer Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/overview-what-is-devtest-offer-visual-studio.md | Title: What is Azure Dev/Test offer? description: Use the Azure Dev/Test offer to get Azure credits for Visual Studio subscribers. ms.prod: visual-studio-windows--++ Last updated 10/12/2021 adobe-target: true |
devtest | Quickstart Create Enterprise Devtest Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/quickstart-create-enterprise-devtest-subscriptions.md | Title: Creating Enterprise Azure Dev/Test subscriptions description: Create Enterprise and Organizational Azure Dev/Test subscriptions for teams and large organizations.--++ ms.prod: visual-studio-windows Last updated 10/20/2021 |
devtest | Quickstart Individual Credit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/quickstart-individual-credit.md | Title: Start using individual Azure Dev/Test credit description: As a Visual Studio subscriber, learn how to access an Azure Credit subscription.--++ Last updated 11/24/2021 ms.prod: visual-studio-windows |
devtest | Troubleshoot Expired Removed Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/troubleshoot-expired-removed-subscription.md | Title: Troubleshoot expired Visual Studio subscription description: Learn how to renew an expired subscription, purchase a new one, or transfer your Azure resources.--++ Last updated 12/15/2021 ms.prod: visual-studio-windows |
dns | Dns Zones Records | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md | A domain name registrar is an organization that allows you to purchase a domain Azure DNS provides a globally distributed and high-availability name server infrastructure that you can use to host your domain. By hosting your domains in Azure DNS, you can manage your DNS records with the same credentials, APIs, tools, billing, and support as your other Azure services. -Azure DNS currently doesn't support purchasing of domain names. If you want to purchase a domain name, you need to use a third-party domain name registrar. The registrar typically charges a small annual fee. The domains can then be hosted in Azure DNS for management of DNS records. See [Delegate a Domain to Azure DNS](dns-domain-delegation.md) for details. +Azure DNS currently doesn't support purchasing of domain names. For an annual fee, you can buy a domain name by using [App Service domains](../app-service/manage-custom-dns-buy-domain.md#buy-and-map-an-app-service-domain) or a third-party domain name registrar. Your domains then can be hosted in Azure DNS for record management. For more information, see [Delegate a domain to Azure DNS](dns-domain-delegation.md). ## DNS zones |
energy-data-services | Concepts Csv Parser Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-csv-parser-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview csv parser ingestion workflow concept + Title: Microsoft Azure Data Manager for Energy csv parser ingestion workflow concept description: Learn how to use CSV parser ingestion. -A CSV Parser [DAG](https://airflow.apache.org/docs/apache-airflow/1.10.12/concepts.html#dags) allows a customer to load data into Microsoft Azure Data Manager for Energy Preview instance based on a custom schema that is, a schema that doesn't match the [OSDU™](https://osduforum.org) Well Known Schema (WKS). Customers must create and register the custom schema using the Schema service before loading the data. +A CSV Parser [DAG](https://airflow.apache.org/docs/apache-airflow/1.10.12/concepts.html#dags) allows a customer to load data into Microsoft Azure Data Manager for Energy instance based on a custom schema that is, a schema that doesn't match the [OSDU™](https://osduforum.org) Well Known Schema (WKS). Customers must create and register the custom schema using the Schema service before loading the data. -A CSV Parser DAG implements an ELT (Extract Load and Transform) approach to data loading, that is, data is first extracted from the source system in a CSV format, and it's loaded into the Azure Data Manager for Energy Preview instance. It could then be transformed to the [OSDU™](https://osduforum.org) Well Known Schema using a mapping service. +A CSV Parser DAG implements an ELT (Extract Load and Transform) approach to data loading, that is, data is first extracted from the source system in a CSV format, and it's loaded into the Azure Data Manager for Energy instance. It could then be transformed to the [OSDU™](https://osduforum.org) Well Known Schema using a mapping service. ## What does CSV ingestion do?-A CSV Parser DAG allows the customers to load the CSV data into the Microsoft Azure Data Manager for Energy Preview instance. It parses each row of a CSV file and creates a storage metadata record. It performs `schema validation` to ensure that the CSV data conforms to the registered custom schema. It automatically performs `type coercion` on the columns based on the schema data type definition. It generates `unique id` for each row of the CSV record by combining source, entity type and a Base64 encoded string formed by concatenating natural key(s) in the data. It performs `unit conversion` by converting declared frame of reference information into appropriate persistable reference using the Unit service. It performs `CRS conversion` for spatially aware columns based on the Frame of Reference (FoR) information present in the schema. It creates `relationships` metadata as declared in the source schema. Finally, it `persists` the metadata record using the Storage service. +A CSV Parser DAG allows the customers to load the CSV data into the Microsoft Azure Data Manager for Energy instance. It parses each row of a CSV file and creates a storage metadata record. It performs `schema validation` to ensure that the CSV data conforms to the registered custom schema. It automatically performs `type coercion` on the columns based on the schema data type definition. It generates `unique id` for each row of the CSV record by combining source, entity type and a Base64 encoded string formed by concatenating natural key(s) in the data. 
It performs `unit conversion` by converting declared frame of reference information into appropriate persistable reference using the Unit service. It performs `CRS conversion` for spatially aware columns based on the Frame of Reference (FoR) information present in the schema. It creates `relationships` metadata as declared in the source schema. Finally, it `persists` the metadata record using the Storage service. ## CSV parser ingestion components The CSV Parser DAG workflow is made up of the following -* **File service** facilitates the management of files in the Azure Data Manager for Energy Preview instance. It allows the user to securely upload, discovery and download files from the data platform. -* **Schema service** facilitates the management of schemas in the Azure Data Manager for Energy Preview instance. It allows the user to create, fetch and search for schemas in the data platform. +* **File service** facilitates the management of files in the Azure Data Manager for Energy instance. It allows the user to securely upload, discovery and download files from the data platform. +* **Schema service** facilitates the management of schemas in the Azure Data Manager for Energy instance. It allows the user to create, fetch and search for schemas in the data platform. * **Storage Service** facilitates the storage of metadata information for domain entities ingested into the data platform. It also raises storage record change events that allow downstream services to perform operations on ingested metadata records. * **Unit Service** facilitates the management and conversion of units-* **Workflow service** facilitates the management of workflows in the Azure Data Manager for Energy Preview instance. It's a wrapper service on top of the Airflow orchestration engine. +* **Workflow service** facilitates the management of workflows in the Azure Data Manager for Energy instance. It's a wrapper service on top of the Airflow orchestration engine. ### CSV ingestion components diagram To execute the CSV Parser DAG workflow, the user must have a valid authorization The below workflow diagram illustrates the CSV Parser DAG workflow: :::image type="content" source="media/concepts-csv-parser-ingestion/csv-ingestion-sequence-diagram.png" alt-text="Screenshot of the CSV ingestion sequence diagram." lightbox="media/concepts-csv-parser-ingestion/csv-ingestion-sequence-diagram-expanded.png"::: -To execute the CSV Parser DAG workflow, the user must first create and register the schema using the workflow service. Once the schema is created, the user then uses the File service to upload the CSV file to the Microsoft Azure Data Manager for Energy Preview instances, and also creates the storage record of file generic kind. The file service then provides a file ID to the user, which is used while triggering the CSV Parser workflow using the Workflow service. The Workflow service provides a run ID, which the user could use to track the status of the CSV Parser workflow run. +To execute the CSV Parser DAG workflow, the user must first create and register the schema using the workflow service. Once the schema is created, the user then uses the File service to upload the CSV file to the Microsoft Azure Data Manager for Energy instances, and also creates the storage record of file generic kind. The file service then provides a file ID to the user, which is used while triggering the CSV Parser workflow using the Workflow service. 
The Workflow service provides a run ID, which the user could use to track the status of the CSV Parser workflow run. OSDU™ is a trademark of The Open Group. |
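As a rough illustration of the trigger-and-track flow described above, the following Python sketch calls the Workflow service to start a CSV parser run and then checks its status. The host, token, data partition, file ID, workflow name (`csv_parser`), and `executionContext` fields are assumptions modeled on the OSDU Workflow service API and should be confirmed against your instance's API reference.

```python
# A hedged sketch of the flow described above: trigger the CSV parser DAG through the
# Workflow service and check the run status. Endpoint paths, the workflow name, and the
# executionContext fields are assumptions to verify against your instance.
import requests

BASE_URL = "https://<your-instance>.energy.azure.com"      # placeholder host
DATA_PARTITION = "<data-partition-id>"                      # placeholder partition
ACCESS_TOKEN = "<bearer-token>"                             # obtained separately
FILE_ID = "<file-id-returned-by-file-service>"              # placeholder

HEADERS = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "data-partition-id": DATA_PARTITION,
    "Content-Type": "application/json",
}

def trigger_csv_parser_run() -> str:
    url = f"{BASE_URL}/api/workflow/v1/workflow/csv_parser/workflowRun"
    body = {"executionContext": {"dataPartitionId": DATA_PARTITION, "id": FILE_ID}}
    response = requests.post(url, headers=HEADERS, json=body, timeout=30)
    response.raise_for_status()
    return response.json()["runId"]

def get_run_status(run_id: str) -> str:
    url = f"{BASE_URL}/api/workflow/v1/workflow/csv_parser/workflowRun/{run_id}"
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    return response.json().get("status", "unknown")

run_id = trigger_csv_parser_run()
print(f"Run {run_id} status: {get_run_status(run_id)}")
```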
energy-data-services | Concepts Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-ddms.md | OSDU™ Technical Standard defines the following types of OSDU™ applic | Architecture Compliance | The OSDU™ Standard | The OSDU™ Standard | ISV | | Examples | OS CRS <br /> Wellbore DDMS | ESRI CRS <br /> Petrel DS | Petrel | ## Who did we build this for? -**IT Developers** build systems to connect data to domain applications (internal and external – for example, Petrel) which enables data managers to deliver projects to geoscientists. The DDMS suite on Azure Data Manager for Energy Preview helps automate these workflows and eliminates time spent managing updates. +**IT Developers** build systems to connect data to domain applications (internal and external – for example, Petrel) which enables data managers to deliver projects to geoscientists. The DDMS suite on Azure Data Manager for Energy helps automate these workflows and eliminates time spent managing updates. -**Geoscientists** use domain applications for key Exploration and Production workflows such as Seismic interpretation and Well tie analysis. While these users won't directly interact with the DDMS, their expectations for data performance and accessibility will drive requirements for the DDMS in the Foundation Tier. Azure will enable geoscientists to stream cross domain data instantly in OSDU™ compatible applications (for example, Petrel) connected to Azure Data Manager for Energy Preview. +**Geoscientists** use domain applications for key Exploration and Production workflows such as Seismic interpretation and Well tie analysis. While these users won't directly interact with the DDMS, their expectations for data performance and accessibility will drive requirements for the DDMS in the Foundation Tier. Azure will enable geoscientists to stream cross domain data instantly in OSDU™ compatible applications (for example, Petrel) connected to Azure Data Manager for Energy. **Data managers** spend a significant amount of time fulfilling requests for data retrieval and delivery. The Seismic, Wellbore, and Petrel Data Services enable them to discover and manage data in one place while tracking version changes as derivatives are created. ## Platform landscape -Azure Data Manager for Energy Preview is an OSDU™ compatible product, meaning that its landscape and release model are dependent on OSDU™. +Azure Data Manager for Energy is an OSDU™ compatible product, meaning that its landscape and release model are dependent on OSDU™. -Currently, OSDU™ certification and release process are not fully defined yet and this topic should be defined as a part of the Azure Data Manager for Energy Preview Foundation Architecture. +Currently, OSDU™ certification and release process are not fully defined yet and this topic should be defined as a part of the Azure Data Manager for Energy Foundation Architecture. -OSDU™ R3 M8 is the base for the scope of the Azure Data Manager for Energy Preview Foundation Private Preview – as a latest stable, tested version of the platform. +OSDU™ R3 M8 is the base for the scope of the Azure Data Manager for Energy Foundation Private – as the latest stable, tested version of the platform. 
## Learn more: OSDU™ DDMS community principles -[OSDU™ community DDMS Overview](https://community.opengroup.org/osdu/documentation/-/wikis/OSDU™-(C)/Design-and-Implementation/Domain-&-Data-Management-Services#ddms-requirements) provides an extensive overview of DDMS motivation and community requirements from a user, technical, and business perspective. These principles are extended to Azure Data Manager for Energy Preview. +[OSDU™ community DDMS Overview](https://community.opengroup.org/osdu/documentation/-/wikis/OSDU™-(C)/Design-and-Implementation/Domain-&-Data-Management-Services#ddms-requirements) provides an extensive overview of DDMS motivation and community requirements from a user, technical, and business perspective. These principles are extended to Azure Data Manager for Energy. ## DDMS requirements |
energy-data-services | Concepts Entitlements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md | Title: Microsoft Azure Data Manager for Energy Preview entitlement concepts -description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy Preview + Title: Microsoft Azure Data Manager for Energy entitlement concepts +description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy -Access management is a critical function for any service or resource. Entitlement service helps you manage who has access to your Azure Data Manager for Energy Preview instance, what they can do with it, and what services they have access to. -+Access management is a critical function for any service or resource. The entitlements service helps you manage who has access to your Azure Data Manager for Energy instance, what they can do with it, and what services they have access to. ## Groups -The entitlements service of Azure Data Manager for Energy Preview allows you to create groups, and an entitlement group defines permissions on services/data sources for your Azure Data Manager for Energy Preview instance. Users added by you to that group obtain the associated permissions. +The entitlements service of Azure Data Manager for Energy allows you to create groups; an entitlement group defines permissions on services and data sources for your Azure Data Manager for Energy instance. Users you add to a group obtain the associated permissions. The main motivation for the entitlements service is data authorization, but the functionality enables three use cases: |
energy-data-services | Concepts Index And Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-index-and-search.md | Title: Microsoft Azure Data Manager for Energy Preview - index and search workflow concepts + Title: Microsoft Azure Data Manager for Energy - index and search workflow concepts description: Learn how to use indexing and search workflows -# Azure Data Manager for Energy Preview indexing and search workflows +# Azure Data Manager for Energy indexing and search workflows All data and associated metadata ingested into the platform are indexed to enable search. The metadata is accessible to ensure awareness even when the data isn't available. - ## Indexer Service The `Indexer Service` provides a mechanism for indexing documents that contain structured and unstructured data. |
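To make the indexing and search workflow above concrete, here's a hedged Python sketch that queries indexed metadata records through the Search service. The endpoint path and payload follow the OSDU Search v2 API, but the host, token, data partition, and `kind` value are placeholders to confirm against your instance.

```python
# A hedged sketch: query indexed metadata records through the Search service.
# Host, token, partition, and kind are placeholders; verify the endpoint and payload
# against your instance's API reference.
import requests

BASE_URL = "https://<your-instance>.energy.azure.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <access-token>",           # obtained separately
    "data-partition-id": "<data-partition-id>",
    "Content-Type": "application/json",
}

def search_records(kind: str, query: str, limit: int = 10) -> dict:
    body = {"kind": kind, "query": query, "limit": limit}
    response = requests.post(f"{BASE_URL}/api/search/v2/query", headers=HEADERS, json=body, timeout=30)
    response.raise_for_status()
    return response.json()

results = search_records("osdu:wks:master-data--Well:1.0.0", "data.FacilityName:\"Example\"")
print(f"Matched {results.get('totalCount', 0)} records")
for record in results.get("results", []):
    print(record.get("id"))
```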
energy-data-services | Concepts Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview manifest ingestion concepts + Title: Microsoft Azure Data Manager for Energy manifest ingestion concepts description: This article describes manifest ingestion concepts -Manifest-based file ingestion provides end-users and systems a robust mechanism for loading metadata about datasets in Azure Data Manager for Energy Preview instance. This metadata is indexed by the system and allows the end-user to search the datasets. +Manifest-based file ingestion provides end-users and systems a robust mechanism for loading metadata about datasets in Azure Data Manager for Energy instance. This metadata is indexed by the system and allows the end-user to search the datasets. Manifest-based file ingestion is an opaque ingestion that doesn't parse or understand the file contents. It creates a metadata record based on the manifest and makes the record searchable. ## What is a Manifest? A manifest is a JSON document that has a pre-determined structure for capturing entities defined as 'kind', that is, registered as schemas with the Schema service - [Well-known Schema (WKS) definitions](https://community.opengroup.org/osdu/dat#manifest-schemas). Any arrays are ordered. should there be interdependencies, the dependent items m ## Manifest-based file ingestion workflow -Azure Data Manager for Energy Preview instance has out-of-the-box support for Manifest-based file ingestion workflow. `Osdu_ingest` Airflow DAG is pre-configured in your instance. +Azure Data Manager for Energy instance has out-of-the-box support for Manifest-based file ingestion workflow. `Osdu_ingest` Airflow DAG is pre-configured in your instance. ### Manifest-based file ingestion workflow components The Manifest-based file ingestion workflow consists of the following components: The Manifest-based file ingestion workflow consists of the following components: * **Search Service** is used to perform referential integrity check during the manifest ingestion process. ### Pre-requisites-Before running the Manifest-based file ingestion workflow, customers must ensure that the user accounts running the workflow have access to the core services (Search, Storage, Schema, Entitlement and Legal) and Workflow service (see [Entitlement roles](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md) for details). As part of Azure Data Manager for Energy Preview instance provisioning, the OSDU™ standard schemas and associated reference data are pre-loaded. Customers must ensure that the user account used for ingesting the manifests is included in appropriate owners and viewers ACLs. Customers must ensure that manifests are configured with correct legal tags, owners and viewers ACLs, reference data, etc. +Before running the Manifest-based file ingestion workflow, customers must ensure that the user accounts running the workflow have access to the core services (Search, Storage, Schema, Entitlement and Legal) and Workflow service (see [Entitlement roles](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md) for details). As part of Azure Data Manager for Energy instance provisioning, the OSDU™ standard schemas and associated reference data are pre-loaded. 
Customers must ensure that the user account used for ingesting the manifests is included in appropriate owners and viewers ACLs. Customers must ensure that manifests are configured with correct legal tags, owners and viewers ACLs, reference data, etc. ### Workflow sequence The following illustration shows the Manifest-based file ingestion workflow: |
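The row above describes manifests as JSON records that carry a `kind`, owners/viewers ACLs, and legal tags. As a rough illustration only, a record of that general shape might look like the following sketch; the group names, legal tag, and `kind` value are hypothetical and this isn't a validated OSDU schema.

```python
# Illustrative only: a minimal metadata record of the shape a manifest-based
# ingestion might produce. Field names follow the concepts called out above
# (kind, owners/viewers ACLs, legal tags); the values are hypothetical.
record = {
    "kind": "osdu:wks:master-data--Well:1.0.0",  # assumed 'kind' value for illustration
    "acl": {
        "owners": ["data.default.owners@contoso.dataservices.energy"],   # hypothetical group
        "viewers": ["data.default.viewers@contoso.dataservices.energy"],  # hypothetical group
    },
    "legal": {
        "legaltags": ["contoso-public-usa-dataset"],  # hypothetical legal tag
    },
    "data": {"FacilityName": "Example Well 01"},
}

# A pre-flight check, like the ACL and legal-tag requirements described above,
# might confirm the required blocks are present before the manifest is submitted.
assert record["kind"] and record["acl"]["owners"] and record["legal"]["legaltags"]
```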
energy-data-services | How To Convert Segy To Ovds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md | Title: Microsoft Azure Data Manager for Energy Preview - How to convert a segy to ovds file + Title: Microsoft Azure Data Manager for Energy - How to convert a segy to ovds file description: This article explains how to convert a SGY file to oVDS file format In this article, you will learn how to convert SEG-Y formatted data to the Open [OSDU™ SEG-Y to oVDS conversion](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/tree/release/0.15) - ## Prerequisites 1. Download and install [Postman](https://www.postman.com/) desktop app. 2. Import the [oVDS Conversions.postman_collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M9/Azure-M9/Services/DDMS/oVDS_Conversions.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly-3. Ensure that an Azure Data Manager for Energy Preview instance is created already +3. Ensure that an Azure Data Manager for Energy instance is created already 4. Clone the **sdutil** repo as shown below: ```markdown |
energy-data-services | How To Convert Segy To Zgy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md | Title: Microsoft Azure Data Manager for Energy Preview - How to convert segy to zgy file + Title: Microsoft Azure Data Manager for Energy - How to convert segy to zgy file description: This article describes how to convert a SEG-Y file to a ZGY file -3. Ensure that your Azure Data Manager for Energy Preview instance is created already +3. Ensure that your Azure Data Manager for Energy instance is created already 4. Clone the **sdutil** repo as shown below: ```markdown git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git |
energy-data-services | How To Enable Cors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-enable-cors.md | Title: How to enable CORS - Azure Data Manager for Energy Preview + Title: How to enable CORS - Azure Data Manager for Energy description: Guide on CORS in Azure data manager for Energy and how to set up CORS -# Use CORS for resource sharing in Azure Data Manager for Energy Preview -This document is to help you as user of Azure Data Manager for Energy preview to set up CORS policies. +# Use CORS for resource sharing in Azure Data Manager for Energy +This document is to help you as user of Azure Data Manager for Energy to set up CORS policies. ## What is CORS? CORS (Cross Origin Resource Sharing) is an HTTP feature that enables a web application running under one domain to access resources in another domain. In order to reduce the possibility of cross-site scripting attacks, all modern web browsers implement a security restriction known as same-origin policy, which prevents a web page from calling APIs in a different domain. CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin.-You can set CORS rules for each Azure Data Manager for Energy Preview instance. When you set CORS rules for the instance it gets applied automatically across all the services and storage accounts linked with Azure Data Manager for Energy Preview services. Once you set the CORS rules, then a properly authorized request made against the service evaluates from a different domain to determine whether it's allowed according to the rules you've specified. +You can set CORS rules for each Azure Data Manager for Energy instance. When you set CORS rules for the instance, it gets applied automatically across all the services and storage accounts linked with your Azure Data Manager for Energy resource. Once you set the CORS rules, then a properly authorized request made against the service evaluates from a different domain to determine whether it's allowed according to the rules you've specified. -## Enabling CORS on Azure Data Manager for Energy instance Preview +## Enabling CORS on Azure Data Manager for Energy instance -1. Create an **Azure Data Manager for Energy Preview** instance. +1. Create an **Azure Data Manager for Energy** instance. 2. Select the **Resource Sharing(CORS)** tab. [![Screenshot of Resource Sharing(CORS) tab while creating Azure Data Manager for Energy.](media/how-to-enable-cors/enable-cors-1.png)](media/how-to-enable-cors/enable-cors-1.png#lightbox) You can set CORS rules for each Azure Data Manager for Energy Preview instance. 7. The other values of CORS policy like **Allowed Methods**, **Allowed Headers**, **Exposed Headers**, **Max age in seconds** are set with default values displayed on the screen. 7. Next, select “**Review+Create**” after completing other tabs. 8. Select the "**Create**" button. -9. An **Azure Data Manager for Energy Preview** instance is created with CORS policy. +9. An **Azure Data Manager for Energy** instance is created with CORS policy. 10. Next, once the instance is created the CORS policy set can be viewed in instance **overview** page. 11. You can navigate to **Resource Sharing(CORS)** and see that CORS is enabled with required **Allowed Origins**. [![Screenshot of viewing the CORS policy set out.](media/how-to-enable-cors/enable-cors-3.png)](media/how-to-enable-cors/enable-cors-3.png#lightbox) You can set CORS rules for each Azure Data Manager for Energy Preview instance. 
## How are CORS rules evaluated? CORS rules are evaluated as follows: 1. First, the origin domain of the request is checked against the domains listed for the AllowedOrigins element. -2. If the origin domain is included in the list, or all domains are allowed with the wildcard character '*', then rules evaluation proceeds. If the origin domain isn't included, then the request fails. +2. Rules evaluation proceeds if the origin domain is included in the list or all domains are allowed with the wildcard character (*). If the origin domain isn't included, the request fails. ## Limitations on CORS policy The following limitations apply to CORS rules: |
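The evaluation rule described above can be modeled with a small helper. This is a simplified sketch of the documented behavior (origin checked against the AllowedOrigins list, with `*` allowing all), not the service's actual implementation.

```python
def origin_allowed(request_origin: str, allowed_origins: list[str]) -> bool:
    """Simplified model of the evaluation described above: the request's origin
    must appear in AllowedOrigins, or all origins must be allowed with '*'."""
    if "*" in allowed_origins:
        return True
    # Origins are compared as whole values (scheme + host + optional port).
    return request_origin in allowed_origins

# Usage
print(origin_allowed("https://app.contoso.com", ["https://app.contoso.com"]))  # True
print(origin_allowed("https://evil.example", ["https://app.contoso.com"]))     # False
```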
energy-data-services | Quickstart Create Microsoft Energy Data Services Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/quickstart-create-microsoft-energy-data-services-instance.md | Title: Create a Microsoft Azure Data Manager for Energy Preview instance -description: Quickly create an Azure Data Manager for Energy Preview instance + Title: Create a Microsoft Azure Data Manager for Energy instance +description: Quickly create an Azure Data Manager for Energy instance Last updated 08/18/2022 -# Quickstart: Create an Azure Data Manager for Energy Preview Preview instance +# Quickstart: Create an Azure Data Manager for Energy instance +Get started by creating an Azure Data Manager for Energy instance on Azure portal on a web browser. You first register an Azure application on Active Directory and then use the application ID to create an Azure Data Manager for Energy instance in your chosen Azure Subscription and region. -Get started by creating an Azure Data Manager for Energy Preview instance on Azure portal on a web browser. You first register an Azure application on Active Directory and then use the application ID to create an Azure Data Manager for Energy Preview instance in your chosen Azure Subscription and region. +The setup of Azure Data Manager for Energy instance can be triggered using a simple interface on Azure portal and takes about 50 minutes to complete. -The setup of Azure Data Manager for Energy Preview instance can be triggered using a simple interface on Azure portal and takes about 50 minutes to complete. --Azure Data Manager for Energy Preview is a managed "Platform as a service (PaaS)" offering from Microsoft that builds on top of the [OSDU™](https://osduforum.org/) Data Platform. Azure Data Manager for Energy Preview lets you ingest, transform, and export subsurface data by letting you connect your consuming in-house or third-party applications. +Azure Data Manager for Energy is a managed "Platform as a service (PaaS)" offering from Microsoft that builds on top of the [OSDU™](https://osduforum.org/) Data Platform. Azure Data Manager for Energy lets you ingest, transform, and export subsurface data by letting you connect your consuming in-house or third-party applications. ## Prerequisites | Prerequisite | Details | | | - |-Active Azure Subscription | You'll need the Azure subscription ID in which you want to install Azure Data Manager for Energy Preview. You need to have appropriate permissions to create Azure resources in this subscription. -Application ID | You'll need an [application ID](../active-directory/develop/application-model.md) (often referred to as "App ID" or a "client ID"). This application ID will be used for authentication to Azure Active Directory and will be associated with your Azure Data Manager for Energy Preview instance. You can [create an application ID](../active-directory/develop/quickstart-register-app.md) by navigating to Active directory and selecting *App registrations* > *New registration*. +Active Azure Subscription | You'll need the Azure subscription ID in which you want to install Azure Data Manager for Energy. You need to have appropriate permissions to create Azure resources in this subscription. +Application ID | You'll need an [application ID](../active-directory/develop/application-model.md) (often referred to as "App ID" or a "client ID"). 
This application ID will be used for authentication to Azure Active Directory and will be associated with your Azure Data Manager for Energy instance. You can [create an application ID](../active-directory/develop/quickstart-register-app.md) by navigating to Active directory and selecting *App registrations* > *New registration*. Client Secret | Sometimes called an application password, a client secret is a string value that your app can use in place of a certificate to identify itself. You can [create a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret) by selecting *Certificates & secrets* > *Client secrets* > *New client secret*. Record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page. -## Create an Azure Data Manager for Energy Preview instance +## Create an Azure Data Manager for Energy instance 1. Save your **Application (client) ID** and **client secret** from Azure Active Directory to refer to them later in this quickstart. -1. Sign in to [Microsoft Azure Marketplace](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden) -- > [!IMPORTANT] - > *Azure Data Manager for Energy Preview* is accessible on the Azure Marketplace only if you use the above Azure portal link. +1. Sign in to [Microsoft Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) -1. If you have access to multiple tenants, use the *Directories + subscriptions* filter in the top menu to switch to the tenant in which you want to install Azure Data Manager for Energy Preview. +1. If you have access to multiple tenants, use the *Directories + subscriptions* filter in the top menu to switch to the tenant in which you want to install Azure Data Manager for Energy. -1. Use the search bar in the Azure Marketplace (not the global Azure search bar on top of the screen) to search for *Azure Data Manager for Energy Preview*. +1. Use the search bar in the Azure Marketplace (not the global Azure search bar on top of the screen) to search for *Azure Data Manager for Energy*. - [![Screenshot of the search result on Azure Marketplace that shows Azure Data Manager for Energy Preview. Azure Data Manager for Energy Preview shows as a card.](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png)](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png#lightbox) + [![Screenshot of the search result on Azure Marketplace that shows Azure Data Manager for Energy. Azure Data Manager for Energy shows as a card.](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png)](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png#lightbox) -1. In the search page, select *Create* on the card titled "Azure Data Manager for Energy Preview(Preview)". +1. In the search page, select *Create* on the card titled "Azure Data Manager for Energy". -1. A new window appears. Complete the *Basics* tab by choosing the *subscription*, *resource group*, and the *region* in which you want to create your instance of Azure Data Manager for Energy Preview. Enter the *App ID* that you created during the prerequisite steps. +1. A new window appears. 
Complete the *Basics* tab by choosing the *subscription*, *resource group*, and the *region* in which you want to create your instance of Azure Data Manager for Energy. Enter the *App ID* that you created during the prerequisite steps. - [![Screenshot of the basic details page after you select 'create' for Azure Data Manager for Energy Preview. This page allows you to enter both instance and data partition details.](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png)](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png#lightbox) + [![Screenshot of the basic details page after you select 'create' for Azure Data Manager for Energy. This page allows you to enter both instance and data partition details.](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png)](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png#lightbox) Some naming conventions to guide you at this step: Client Secret | Sometimes called an application password, a client secret is a s | -- | | Instance name | Only alphanumeric characters are allowed, and the value must be 1-15 characters long. The name is **not** case-sensitive. One resource group can't have two instances with the same name. Application ID | Enter the valid Application ID that you generated and saved in the last section.- Data Partition name | Name should be 1-10 char long consisting of lowercase alphanumeric characters and hyphens. It should start with an alphanumeric character and not contain consecutive hyphens. The data partition names that you chose are automatically prefixed with your Azure Data Manager for Energy Preview instance name. This compound name will be used to refer to your data partition in application and API calls. + Data Partition name | Name should be 1-10 char long consisting of lowercase alphanumeric characters and hyphens. It should start with an alphanumeric character and not contain consecutive hyphens. The data partition names that you chose are automatically prefixed with your Azure Data Manager for Energy instance name. This compound name will be used to refer to your data partition in application and API calls. > [!NOTE]- > Azure Data Manager for Energy Preview instance and data partition names, once created, cannot be changed later. + > Azure Data Manager for Energy instance and data partition names, once created, cannot be changed later. ++1. Move to the next tab, *Networking*, and configure as needed. Learn more about [setting up a Private Endpoint in Azure Data Manager for Energy](../energy-data-services/how-to-set-up-private-links.md) + + [![Screenshot of the networking tab on the create workflow. This tab shows that customers can disable private access to their Azure Data Manager for Energy.](media/quickstart-create-microsoft-energy-data-services-instance/networking-tab.png)](media/quickstart-create-microsoft-energy-data-services-instance/networking-tab.png#lightbox) ++1. Move to the next tab, *Encryption*, and configure as needed. Learn how to encrypt your data with [customer managed encryption keys](../energy-data-services/how-to-manage-data-security-and-encryption.md), and manage data security using [managed identities in Azure Data Manager for Energy](../energy-data-services/how-to-use-managed-identity.md). + [![Screenshot of the tags tab on the create workflow. 
This tab shows the two options that customers have for data encryption.](media/quickstart-create-microsoft-energy-data-services-instance/encryption-tab.png)](media/quickstart-create-microsoft-energy-data-services-instance/encryption-tab.png#lightbox) -1. Select **Next: Tags** and enter any tags that you would want to specify. If nothing, this field can be left blank. - > [!TIP] - > Tags are metadata elements attached to resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named `Environment`. To identify the resources deployed to production, give them a value of `Production`. [Learn more](../azure-resource-manager/management/tag-resources.md?tabs=json). +1. Navigate to the *Tags* tab and enter any tags that you would want to specify. If nothing, this field can be left blank. [![Screenshot of the tags tab on the create workflow. Any number of tags can be added and will show up in the list.](media/quickstart-create-microsoft-energy-data-services-instance/input-tags.png)](media/quickstart-create-microsoft-energy-data-services-instance/input-tags.png#lightbox) +1. Navigate to the *Resource Sharing (CORS)* tab and configure as needed. Learn more about [Use CORS for resource sharing in Azure Data Manager for Energy](../energy-data-services/how-to-enable-cors.md). ++ [![Screenshot of the cross-origin resource sharing tab on the create workflow. Multiple CORS policies can be added to the list on this tab.](media/quickstart-create-microsoft-energy-data-services-instance/cors-tab.png)](media/quickstart-create-microsoft-energy-data-services-instance/cors-tab.png#lightbox) + + 1. Select Next: **Review + Create**. 1. Once the basic validation tests pass (validation takes a few seconds), review the Terms and Basic Details. [![Screenshot of the review tab. It shows that data validation happens before you start deployment.](media/quickstart-create-microsoft-energy-data-services-instance/validation-check-after-entering-details.png)](media/quickstart-create-microsoft-energy-data-services-instance/validation-check-after-entering-details.png#lightbox) -1. This step is optional. You can download an Azure Resource Manager (ARM) template and use it for automated deployments of Azure Data Manager for Energy Preview in future. Select *Download a template for automation* located on the bottom-right of the screen. -- [![Screenshot to help locate the link to download Azure Resource Manager template for automation. It is available on the bottom right of the *review + create* tab.](media/quickstart-create-microsoft-energy-data-services-instance/download-template-automation.png)](media/quickstart-create-microsoft-energy-data-services-instance/download-template-automation.png#lightbox) -- [![Screenshot of the template that opens up when you select 'download template for automation'. Options are available to download or deploy from this page.](media/quickstart-create-microsoft-energy-data-services-instance/automate-deploy-resource-using-azure-resource-manager.png)](media/quickstart-create-microsoft-energy-data-services-instance/automate-deploy-resource-using-azure-resource-manager.png#lightbox) +1. Optional step: You can download an Azure Resource Manager (ARM) template and use it for automated deployments of Azure Data Manager for Energy in future. Select *View automation template* located in the *Review + create* tab. + 1. Select **Create** to start the deployment. 1. 
Wait while the deployment happens in the background. Review the details of the instance created. - [![Screenshot of the deployment completion page. Options are available to view details of the deployment.](media/quickstart-create-microsoft-energy-data-services-instance/deployment-complete.png)](media/quickstart-create-microsoft-energy-data-services-instance/deployment-complete.png#lightbox) + [![Screenshot of the deployment progress page. Options are available to view details of the deployment.](media/quickstart-create-microsoft-energy-data-services-instance/deployment-progress.png)](media/quickstart-create-microsoft-energy-data-services-instance/deployment-progress.png#lightbox) - [![Screenshot of the overview of Azure Data Manager for Energy Preview instance page. Details as such data partitions, instance URI, and app ID are accessible.](media/quickstart-create-microsoft-energy-data-services-instance/overview-energy-data-services.png)](media/quickstart-create-microsoft-energy-data-services-instance/overview-energy-data-services.png#lightbox) - -## Delete an Azure Data Manager for Energy Preview instance + You will find the newly created Azure Data Manager for Energy resource in your resource group. Select it to open the resource UI on portal. + + + [![Screenshot of the overview of Azure Data Manager for Energy instance page. Details as such data partitions, instance URI, and app ID are accessible.](media/quickstart-create-microsoft-energy-data-services-instance/overview-data-manager-for-energy.png)](media/quickstart-create-microsoft-energy-data-services-instance/overview-data-manager-for-energy.png#lightbox) ++ +## Delete an Azure Data Manager for Energy instance -Deleting a Microsoft Energy Data instance also deletes any data that you've ingested. This action is permanent and the ingested data can't be recovered. To delete an Azure Data Manager for Energy Preview instance, complete the following steps: +Deleting a Microsoft Energy Data instance also deletes any data that you've ingested. This action is permanent and the ingested data can't be recovered. To delete an Azure Data Manager for Energy instance, complete the following steps: 1. Sign in to the Azure portal and delete the *resource group* in which these components are installed. -2. This step is optional. Go to Azure Active Directory and delete the *app registration* that you linked to your Azure Data Manager for Energy Preview instance. +2. This step is optional. Go to Azure Active Directory and delete the *app registration* that you linked to your Azure Data Manager for Energy instance. OSDU™ is a trademark of The Open Group. ## Next steps-After provisioning an Azure Data Manager for Energy Preview instance, you can learn about user management on this instance. +After provisioning an Azure Data Manager for Energy instance, you can learn about user management on this instance. > [!div class="nextstepaction"] > [How to manage users](how-to-manage-users.md) |
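The naming rules listed in the quickstart row above are easy to pre-check on the client side. The following is a rough validation sketch based only on the constraints stated there (instance name: alphanumeric, 1-15 characters, not case-sensitive; data partition name: 1-10 characters, lowercase alphanumeric and hyphens, starting with an alphanumeric character, no consecutive hyphens); the service's own validation remains authoritative.

```python
import re

def valid_instance_name(name: str) -> bool:
    # Stated rule: only alphanumeric characters, 1-15 characters, not case-sensitive.
    return re.fullmatch(r"[A-Za-z0-9]{1,15}", name) is not None

def valid_partition_name(name: str) -> bool:
    # Stated rules: 1-10 characters, lowercase alphanumeric and hyphens,
    # starts with an alphanumeric character, no consecutive hyphens.
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{0,9}", name):
        return False
    return "--" not in name

# Usage
print(valid_instance_name("ContosoADME"))  # True
print(valid_partition_name("dp1-test"))    # True
print(valid_partition_name("-dp1"))        # False (must start with an alphanumeric character)
```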
event-grid | Azure Active Directory Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/azure-active-directory-events.md | When an event is triggered, the Event Grid service sends data about that event t ### Microsoft.Graph.UserUpdated event ```json-[{ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce", "type": "Microsoft.Graph.UserUpdated", "source": "/tenants/<tenant-id>/applications/<application-id>", When an event is triggered, the Event Grid service sends data about that event t "subscriptionId": "<microsoft-graph-subscription-id>", "tenantId": "<tenant-id>" }-}] +} ``` ### Microsoft.Graph.UserDeleted event ```json-[{ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce", "type": "Microsoft.Graph.UserDeleted", "source": "/tenants/<tenant-id>/applications/<application-id>", When an event is triggered, the Event Grid service sends data about that event t "subscriptionId": "<microsoft-graph-subscription-id>", "tenantId": "<tenant-id>" }-}] +} ``` ### Microsoft.Graph.GroupUpdated event ```json-[{ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce", "type": "Microsoft.Graph.GroupUpdated", "source": "/tenants/<tenant-id>/applications/<application-id>", When an event is triggered, the Event Grid service sends data about that event t "subscriptionId": "<microsoft-graph-subscription-id>", "tenantId": "<tenant-id>" }-}] +} ``` ### Microsoft.Graph.GroupDeleted event ```json-[{ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce", "type": "Microsoft.Graph.GroupDeleted", "source": "/tenants/<tenant-id>/applications/<application-id>", When an event is triggered, the Event Grid service sends data about that event t "subscriptionId": "<microsoft-graph-subscription-id>", "tenantId": "<tenant-id>" }-}] +} ``` |
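The change above delivers each sample as a single JSON object rather than an array. A minimal handler sketch for such a payload, dispatching on the `type` field shown in the samples, could look like the following; it assumes the Graph subscription details sit under a `data` property, which the excerpt above truncates, so adjust to the schema you actually receive.

```python
import json

def handle_graph_event(body: str) -> None:
    """Minimal sketch of handling one of the payloads shown above. Assumes the
    body is a single JSON object (not an array); the 'data' key is an assumption."""
    event = json.loads(body)
    event_type = event.get("type", "")
    if event_type in ("Microsoft.Graph.UserUpdated", "Microsoft.Graph.UserDeleted"):
        print(f"User change {event['id']} from {event['source']}")
    elif event_type in ("Microsoft.Graph.GroupUpdated", "Microsoft.Graph.GroupDeleted"):
        print(f"Group change {event['id']} from {event['source']}")
    else:
        print(f"Unhandled event type: {event_type}")
```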
event-grid | Communication Services Voice Video Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md | When an event is triggered, the Event Grid service sends data about that event t This section contains an example of what that data would look like for each event. -> [!IMPORTANT] -> Call Recording feature is still in a Public Preview - ### Microsoft.Communication.RecordingFileStatusUpdated ```json |
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
event-hubs | Private Link Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/private-link-service.md | If you already have an Event Hubs namespace, you can create a private link conne 1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the search bar, type in **event hubs**. 3. Select the **namespace** from the list to which you want to add a private endpoint.-1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Select **Disabled** if you want the namespace to be accessed only via private endpoints. +1. On the **Networking** page, for **Public network access**, select **Disabled** if you want the namespace to be accessed only via private endpoints. +1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-microsoft-services) to bypass this firewall. - Here are more details about options available in the **Public network access** page: - - **Disabled**. This option disables any public access to the namespace. The namespace is accessible only through [private endpoints](private-link-service.md). - - **Selected networks**. This option enables public access to the namespace using an access key from selected networks. -- > [!IMPORTANT] - > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only. - - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range. + :::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled."::: 1. Switch to the **Private endpoint connections** tab. 1. Select the **+ Private Endpoint** button at the top of the page. |
expressroute | Expressroute Howto Macsec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-macsec.md | Each ExpressRoute Direct instance has two physical ports. You can choose to enab > * GcmAes256 > * GcmAesXpn128 > * GcmAesXpn256+ > * The recommendation is to configure encryption with xpn ciphers to avoid intermittent session drops observed with non-xpn ciphers on high speed links. > 1. Set MACsec secrets and cipher and associate the user identity with the port so that the ExpressRoute management code can access the MACsec secrets if needed. |
firewall | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md | You can associate [multiple public IP addresses](deploy-multi-public-ip-powershe This enables the following scenarios: - **DNAT** - You can translate multiple standard port instances to your backend servers. For example, if you have two public IP addresses, you can translate TCP port 3389 (RDP) for both IP addresses.-- **SNAT** - More ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. Azure Firewall uses the primary public IP address first before it uses the other associated public IP addresses for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.+- **SNAT** - More ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. At this time, Azure Firewall randomly selects the source public IP address to use for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration. ## Azure Monitor logging |
firewall | Integrate With Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md | -Azure Firewall uses the primary public IP address first before it uses the other associated public IP addresses. If your traffic workload requires a large volume of SNAT ports for connecting outbound, Azure Firewall uses the primary public IP address until it exhausts the SNAT ports with that primary public IP address. Then it starts using the other public IP addresses. --One of the challenges with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall does use a primary public IP address before using the other associated public IP addresses. But if you have multiple public IP addresses associated with your firewall, you'll need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes. +One of the challenges with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes. A better option to scale and dynamically allocate outbound SNAT ports is to use an [Azure NAT Gateway](../virtual-network/nat-gateway/nat-overview.md). It provides 64,512 SNAT ports per public IP address and supports up to 16 public IP addresses. This effectively provides up to 1,032,192 outbound SNAT ports. Azure NAT Gateway also [dynamically allocates SNAT ports](/azure/nat-gateway/nat-gateway-resource#nat-gateway-dynamically-allocates-snat-ports) on a subnet level, so all the SNAT ports provided by its associated IP addresses are available on demand to provide outbound connectivity. |
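As a quick arithmetic check on the figures quoted above:

```python
# 64,512 SNAT ports per public IP address across the 16 public IP addresses
# a NAT gateway supports yields the total quoted in the paragraph above.
ports_per_ip = 64_512
max_ips = 16
print(ports_per_ip * max_ips)  # 1032192
```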
frontdoor | How To Configure Https Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md | You can also choose to use your own TLS certificate. Your TLS certificate must m If you already have a certificate, you can upload it to your key vault. Otherwise, create a new certificate directly through Azure Key Vault from one of the partner certificate authorities (CAs) that Azure Key Vault integrates with. +> [!WARNING] +> Azure Front Door currently only supports Key Vault accounts in the same subscription as the Front Door configuration. Choosing a Key Vault under a different subscription than your Front Door will result in a failure. + > [!NOTE] > Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms. Also, your certificate must have a complete certificate chain with leaf and intermediate certificates, and the root certification authority (CA) must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/01/2023 Last updated : 06/21/2023 |
healthcare-apis | Deploy Manual Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-portal.md | Under the **Destination** tab, use these values to enter the destination propert * Next, select the **Resolution type**. - **Resolution type** specifies how MedTech service associates device data with FHIR Device resources and FHIR Patient resources. MedTech reads device and patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/patient-definitions.html#Patient.identifier). + **Resolution type** specifies how the MedTech service associates device data with FHIR Device resources and FHIR Patient resources. The MedTech service reads Device and Patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/r4/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/r4/patient-definitions.html#Patient.identifier). If an [encounter identifier](https://hl7.org/fhir/r4/encounter-definitions.html#Encounter.identifier) is specified and extracted from the device data, it's linked to the observation if an encounter exists on the FHIR service with that identifier. If the encounter identifier is successfully normalized, but no FHIR Encounter exists with that encounter identifier, a **FhirResourceNotFound** exception is thrown. Device and Patient resources can be resolved by choosing a **Resolution type** of **Create** and **Lookup**: - **Create** - If **Create** was selected, and device or patient resources are missing when you're reading data, new resources are created using the identifiers included in the device message. + If **Create** was selected, and Device or Patient resources are missing when the MedTech service is reading the device data, new resources are created using the identifiers included in the device data. - **Lookup** - If **Lookup** was selected, and device or patient resources are missing, an error occurs, and the data isn't processed. The errors **DeviceNotFoundException** and/or a **PatientNotFoundException** error is generated, depending on the type of resource not found. + If **Lookup** was selected, and Device or Patient resources are missing, an error occurs, and the device data isn't processed. A **DeviceNotFoundException** and/or a **PatientNotFoundException** error is generated, depending on the type of resource not found. * For the **Destination mapping** field, accept the default **Destination mapping**. The FHIR destination mapping is addressed in the [Post-deployment](#post-deployment) section of this quickstart. + The **Destination** tab should now look something like this after you've filled it out: :::image type="content" source="media\deploy-manual-portal\completed-destination-tab.png" alt-text="Screenshot of Destination tab filled out correctly." lightbox="media\deploy-manual-portal\completed-destination-tab.png"::: |
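The resolution behavior described above matches on FHIR Device and Patient identifiers. As a rough illustration, a FHIR R4 Device resource carrying the identifier that a **Lookup** would resolve against might look like the following sketch; the identifier system URI and values are hypothetical.

```python
# Illustrative FHIR R4 Device resource of the kind the lookup described above
# resolves by identifier. The system URI and identifier values are hypothetical.
device_resource = {
    "resourceType": "Device",
    "identifier": [
        {
            "system": "https://contoso.example/devices",  # hypothetical identifier system
            "value": "device-001",                        # device identifier carried in the device data
        }
    ],
    "patient": {"reference": "Patient/patient-123"},      # link to the associated Patient resource
}
```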
iot-develop | Quickstart Devkit Espressif Esp32 Freertos Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos-iot-hub.md | -You'll complete the following tasks: +You complete the following tasks: * Install a set of embedded development tools for programming an ESP32 DevKit * Build an image and flash it onto the ESP32 DevKit-* Use Azure CLI to create and manage an Azure IoT hub that the ESP32 DevKit will securely connect to +* Use Azure CLI to create and manage an Azure IoT hub that the ESP32 DevKit connects to * Use Azure IoT Explorer to register a device with your IoT hub, view device properties, view device telemetry, and call direct commands on the device ## Prerequisites For Windows 10 and 11, make sure long paths are enabled. git config --system core.longpaths true ``` -## Create the cloud components --### Create an IoT hub --You can use Azure CLI to create an IoT hub that handles events and messaging for your device. --To create an IoT hub: --1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter. - - If you're using Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash), and select the option to open in a new tab. - - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI. --1. Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *azure-iot* extension to the current version. -- ```azurecli-interactive - az extension add --upgrade --name azure-iot - ``` --1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region. -- > [!NOTE] - > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations). -- ```azurecli - az group create --name MyResourceGroup --location centralus - ``` --1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub. -- *YourIotHubName*. Replace this placeholder in the code with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name. -- The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub). -- ```azurecli - az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2 - ``` --1. After the IoT hub is created, view the JSON output in the console, and copy the `hostName` value to use in a later step. The `hostName` value looks like the following example: -- `{Your IoT hub name}.azure-devices.net` --### Configure IoT Explorer --In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you created, and to read plug and play models from the public model repository. 
--To add a connection to your IoT hub: --1. In your CLI app, run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the connection string for your IoT hub. -- ```azurecli - az iot hub connection-string show --hub-name {YourIoTHubName} - ``` --1. Copy the connection string without the surrounding quotation characters. -1. In Azure IoT Explorer, select **IoT hubs** on the left menu. -1. Select **+ Add connection**. -1. Paste the connection string into the **Connection string** box. -1. Select **Save**. -- :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-add-connection.png" alt-text="Screenshot of adding a connection in IoT Explorer."::: --If the connection succeeds, IoT Explorer switches to the **Devices** view. --To add the public model repository: --1. In IoT Explorer, select **Home** to return to the home view. -1. On the left menu, select **IoT Plug and Play Settings**, then select **+Add** and select **Public repository** from the drop-down menu. -1. An entry appears for the public model repository at `https://devicemodels.azure.com`. -- :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-add-public-repository.png" alt-text="Screenshot of adding the public model repository in IoT Explorer."::: --1. Select **Save**. --### Register a device --In this section, you create a new device instance and register it with the IoT hub you created. You'll use the connection information for the newly registered device to securely connect your physical device in a later section. --To register a device: --1. From the home view in IoT Explorer, select **IoT hubs**. -1. The connection you previously added should appear. Select **View devices in this hub** below the connection properties. -1. Select **+ New** and enter a device ID for your device; for example, `mydevice`. Leave all other properties the same. -1. Select **Create**. -- :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-device-created.png" alt-text="Screenshot of Azure IoT Explorer device identity."::: --1. Use the copy buttons to copy the **Device ID** and **Primary key** fields. --Before continuing to the next section, save each of the following values retrieved from earlier steps, to a safe location. You use these values in the next section to configure your device. --* `hostName` -* `deviceId` -* `primaryKey` - ## Prepare the device-To connect the ESP32 DevKit to Azure, you'll modify configuration settings, build the image, and flash the image to the device. +To connect the ESP32 DevKit to Azure, you modify configuration settings, build the image, and flash the image to the device. ### Set up the environment To launch the ESP-IDF environment: To add configuration to connect to Azure IoT Hub: To save the configuration:-1. Select <kbd>Shift</kbd>+<kbd>S</kbd> to open the save options. This lets you save the configuration to a file named *skconfig* in the current *.\aziotkit* directory. +1. Select <kbd>Shift</kbd>+<kbd>S</kbd> to open the save options. This menu lets you save the configuration to a file named *skconfig* in the current *.\aziotkit* directory. 1. Select <kbd>Enter</kbd> to save the configuration. 1. Select <kbd>Enter</kbd> to dismiss the acknowledgment message. 1. Select <kbd>Q</kbd> to quit the configuration menu. 
To confirm that the device connects to Azure IoT Central: ## View device properties -You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you'll use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the ESP32 DevKit. These capabilities rely on the device model published for the ESP32 DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device. +You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the ESP32 DevKit. These capabilities rely on the device model published for the ESP32 DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device. To access IoT Plug and Play components for the device in IoT Explorer: |
iot-develop | Tutorial Use Mqtt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-use-mqtt.md | Now that you've learned how to use the Mosquitto MQTT library to communicate wit > [!div class="nextstepaction"] > [Communicate with your IoT hub using the MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md)+> [!div class="nextstepaction"] +> [MQTT Application samples](https://github.com/Azure-Samples/MqttApplicationSamples) |
iot-dps | Concepts Device Oem Security Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-oem-security-practices.md | For more information, see [provisioning](about-iot-dps.md#provisioning-process) ## Resources In addition to the recommended security practices in this article, Azure IoT provides resources to help with selecting secure hardware and creating secure IoT deployments: -- Azure IoT [security best practices](../iot/iot-security-best-practices.md) to guide the deployment process. +- Azure IoT [security best practices](../iot/iot-overview-security.md) to guide the deployment process. - The [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) offers a service to help create secure IoT deployments. - For help with evaluating your hardware environment, see the whitepaper [Evaluating your IoT Security](https://download.microsoft.com/download/D/3/9/D3948E3C-D5DC-474E-B22F-81BA8ED7A446/Evaluating_Your_IOT_Security_whitepaper_EN_US.pdf). - For help with selecting secure hardware, see [The Right Secure Hardware for your IoT Deployment](https://download.microsoft.com/download/C/0/5/C05276D6-E602-4BB1-98A4-C29C88E57566/The_right_secure_hardware_for_your_IoT_deployment_EN_US.pdf). |
iot-edge | How To Access Host Storage From Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-host-storage-from-module.md | description: Use environment variables and create options to enable module acces Previously updated : 06/22/2023 Last updated : 06/26/2023 For production scenarios, use a persistent storage location on the host filesyst To set up system modules to use persistent storage: -1. For both IoT Edge hub and IoT Edge agent, add an environment variable called **storageFolder** that points to a directory in the module. +1. For both IoT Edge hub and IoT Edge agent, add an environment variable called **StorageFolder** that points to a directory in the module. 1. For both IoT Edge hub and IoT Edge agent, add binds to connect a local directory on the host machine to a directory in the module. For example: :::image type="content" source="./media/how-to-access-host-storage-from-module/offline-storage-1-4.png" alt-text="Screenshot that shows how to add create options and environment variables for local storage."::: Your deployment manifest would be similar to the following: "systemModules": { "edgeAgent": { "env": {- "storageFolder": { + "StorageFolder": { "value": "/tmp/edgeAgent" } }, Your deployment manifest would be similar to the following: }, "edgeHub": { "env": {- "storageFolder": { + "StorageFolder": { "value": "/tmp/edgeHub" } }, Your deployment manifest would be similar to the following: ### Automatic host system permissions management -On version 1.4 and newer, there's no need for manually setting ownership or permissions for host storage backing the `storageFolder`. Permissions and ownership are automatically managed by the system modules during startup. +On version 1.4 and newer, there's no need for manually setting ownership or permissions for host storage backing the `StorageFolder`. Permissions and ownership are automatically managed by the system modules during startup. > [!NOTE] > Automatic permission management of host bound storage only applies to system modules, IoT Edge agent and Edge hub. For custom modules, manual management of permissions and ownership of bound host storage is required if the custom module container isn't running as `root` user. |
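The row above pairs the `StorageFolder` environment variable with a host bind in the module's create options. A schematic edge hub fragment showing both, expressed as a Python dict for readability, follows; the host path is only an example, and in a real deployment manifest `createOptions` is a JSON-encoded string rather than a nested object.

```python
# Schematic edgeHub fragment pairing the StorageFolder environment variable with a
# host bind, as described above. Host path is an example; adjust to your device.
edge_hub_fragment = {
    "env": {
        "StorageFolder": {"value": "/tmp/edgeHub"}
    },
    "settings": {
        "image": "mcr.microsoft.com/azureiotedge-hub:1.4",
        "createOptions": {                      # JSON-encoded string in an actual manifest
            "HostConfig": {
                "Binds": ["/srv/edgeHub:/tmp/edgeHub"]  # host directory : module directory
            }
        },
    },
}
```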
iot-edge | How To Manage Device Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md | description: How to install and manage certificates on an Azure IoT Edge device Previously updated : 4/18/2023 Last updated : 6/23/2023 If your PKI provider provides a `.cer` file, it may contain the same certificate * If it's in DER (binary) format, convert it to PEM with `openssl x509 -in cert.cer -out cert.pem`. * Use the PEM file as the trust bundle. For more information about the trust bundle, see the next section. +> [!IMPORTANT] +> Your PKI infrastructure should support RSA-2048 bit keys and EC P-256 keys. For example, your EST servers should support these key types. You can use other key types, but we only test RSA-2048 bit keys and EC P-256 keys. +> + ## Permission requirements The following table lists the file and directory permissions required for the IoT Edge certificates. The preferred directory for the certificates is `/var/aziot/certs/` and `/var/aziot/secrets/` for keys. |
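The row above calls out RSA-2048 and EC P-256 as the tested key types. Purely as an illustration of those key types (not part of the IoT Edge tooling), the Python `cryptography` package can generate both:

```python
from cryptography.hazmat.primitives.asymmetric import ec, rsa

# The two key types called out above as the tested configurations.
ec_key = ec.generate_private_key(ec.SECP256R1())                           # EC P-256 (secp256r1/prime256v1)
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # RSA-2048

print(ec_key.curve.name, rsa_key.key_size)  # secp256r1 2048
```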
iot-edge | Nested Virtualization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md | This is the baseline approach for any Windows VM that hosts Azure IoT Edge for L If you're using Windows Server or Azure Stack HCI, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server). ## Deployment on Windows VM on VMware ESXi-Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions can host Azure IoT Edge for Linux on Windows on top of a Windows virtual machine. Read [VMware KB2009916](https://kb.vmware.com/s/article/2009916) for more information on VMware ESXi nested virtualization support. +Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-vcenter-server-67-release-notes.html) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions can host Azure IoT Edge for Linux on Windows on top of a Windows virtual machine. Read [VMware KB2009916](https://kb.vmware.com/s/article/2009916) for more information on VMware ESXi nested virtualization support. To set up an Azure IoT Edge for Linux on Windows on a VMware ESXi Windows virtual machine, use the following steps: 1. Create a Windows virtual machine on the VMware ESXi host. For more information about VMware VM deployment, see [VMware - Deploying Virtual Machines](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-39D19B2B-A11C-42AE-AC80-DDA8682AB42C.html). |
iot-hub | Policy Reference |