Updates from: 06/21/2023 01:10:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
This article describes how to enable Microsoft Entra Permissions Management in y
To enable Permissions Management in your organization: - You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).-- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
+- You must be eligible for or have an active assignment to the *Permissions Management Administrator* role as a user in that tenant.
## How to enable Permissions Management on your Azure AD tenant 1. In your browser: 1. Go to [Entra services](https://entra.microsoft.com) and use your credentials to sign in to [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
- 1. If you aren't already authenticated, sign in as a global administrator user.
- 1. If needed, activate the global administrator role in your Azure AD tenant.
+ 1. If you aren't already authenticated, sign in as a *Permissions Management Administrator* user.
+ 1. If needed, activate the *Permissions Management Administrator* role in your Azure AD tenant.
1. In the Azure portal, select **Permissions Management**, and then select the link to purchase a license or begin a trial.
active-directory Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-settings.md
Title: View personal and organization information in Permissions Management
-description: How to view personal and organization information in the Account settings dashboard in Permissions Management.
+ Title: View personal and organization information in Microsoft Entra Permissions Management
+description: How to view personal and organization information in the Account settings dashboard in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/19/2023 # View personal and organization information The **Account settings** dashboard in Permissions Management allows you to view personal information, passwords, and account preferences.
-This information can't be modified because the user information is pulled from Azure AD. Only **User Session Time(min)**
+This information can't be modified because the user information is pulled from Azure AD; only **User Session Time(min)** can be changed.
## View personal information
-1. In the Permissions Management home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account Settings**.
+- From the Permissions Management home page, select the down arrow to the right of the **User** (your initials) menu, then select **Account Settings**.
- The **Personal Information** box displays your **First Name**, **Last Name**, and the **Email Address** that was used to register your account on Permissions Management.
+ The **Personal Information** box displays your **First Name**, **Last Name**, and the **Email Address** used to register your account with Permissions Management.
## View current organization information
-1. In the Permissions Management home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account Settings**.
+1. From the Permissions Management home page, select the down arrow to the right of the **User** (your initials) menu, then select **Account Settings**.
The **Current Organization Information** displays the **Name** of your organization, the **Tenant ID** box, and the **User Session Timeout (min)**.
-1. To change duration of the **User Session Timeout (min)**, select **Edit** (the pencil icon), and then enter the number of minutes before you want a user session to time out.
-1. Select the check mark to confirm your new setting.
+1. To change the duration of the **User Session Timeout (min)**, select **Edit** (the pencil icon), then enter the number of minutes before you want a user session to time out.
+1. Select the checkmark to confirm your new setting.
## Next steps
active-directory Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-audit-trail.md
Title: Filter and query user activity in Permissions Management
-description: How to filter and query user activity in Permissions Management.
+ Title: Filter and query user activity in Microsoft Entra Permissions Management
+description: How to filter and query user activity in Microsoft Entra Permissions Management.
Previously updated : 02/23/2022 Last updated : 06/19/2023
There are several different query parameters you can configure individually or i
- To delete a function line in a query, select **Delete** (the minus sign **-** icon). - To create multiple queries at one time, select **Add New Tab** to the right of the **Query** tabs that are displayed.
- You can open a maximum number of six query tab pages at the same time. A message will appear when you've reached the maximum.
+ You can open a maximum of six query tab pages at the same time. A message appears when you've reached the maximum.
## Create a query with specific parameters
The **Operator** menu displays the following options depending on the identity y
1. To view a list of all resources, hover over **Multiple**. - **Resource Type**: Displays the type of resource, for example, *Key* (encryption key) or *Bucket* (storage).
- - **Task Name**: The name of the task that was performed by the identity.
+ - **Task Name**: The name of the task performed by the identity.
An exclamation mark (**!**) next to the task name indicates that the task failed.
The **Operator** menu displays the following options depending on the identity y
- **Query Name**: Displays the name of the saved query. - **Query Type**: Displays whether the query is a *System* query or a *Custom* query.
- - **Schedule**: Displays how often a report will be generated. You can schedule a one-time report or a monthly report.
+ - **Schedule**: Displays how often a report is generated. You can schedule a one-time report or a monthly report.
- **Next On**: Displays the date and time the next report will be generated. - **Format**: Displays the output format for the report, for example, CSV. - **Last Modified On**: Displays the date on which the query was last modified.
active-directory Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-dashboard.md
Title: View data about the activity in your authorization system in Permissions Management
-description: How to view data about the activity in your authorization system in the Permissions Management Dashboard in Permissions Management.
+ Title: View data about the activity in your authorization system
+description: How to view data about the activity in your authorization system in the Microsoft Entra Permissions Management Dashboard.
Previously updated : 01/25/2023 Last updated : 06/19/2023 # View data about the activity in your authorization system
-The Permissions Management **Dashboard** provides an overview of the authorization system and account activity being monitored. You can use this dashboard to view data collected from your Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) authorization systems.
+The Permissions Management **Dashboard** provides an overview of the authorization system and account activity being monitored. Use this dashboard to view data collected from your Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) authorization systems.
## View data about your authorization system
-1. In the Permissions Management home page, select **Dashboard**.
+1. From the Permissions Management home page, select **Dashboard**.
1. From the **Authorization systems type** dropdown, select **AWS**, **Azure**, or **GCP**. 1. Select the **Authorization System** box to display a **List** of accounts and **Folders** available to you. 1. Select the accounts and folders you want, and then select **Apply**.
active-directory Product Data Billable Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-billable-resources.md
Title: View current billable resources in your authorization systems
-description: How to view current billable resources in your authorization system in Permissions Management.
+description: How to view current billable resources in your authorization system in Microsoft Entra Permissions Management.
Previously updated : 01/25/2023 Last updated : 06/19/2023
Gain insight into current billable resources listed in your authorization system. In Microsoft Entra Permissions Management, a billable resource is defined as a cloud service that uses compute or memory and requires a license. The Permissions Management Billable Resources tab shows you which resources are in your authorization system, and how many of them you're being billed for.
-Here is the current list of resources per cloud provider. This list is subject to change as cloud providers add more services in the future.
+Here's the current list of resources per cloud provider. This list is subject to change as cloud providers add more services in the future.
:::image type="content" source="media/onboard-enable-tenant/billable-resources.png" alt-text="A table of current Microsoft billable resources." lightbox="media/onboard-enable-tenant/billable-resources.png":::
active-directory Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-sources.md
Previously updated : 01/25/2023 Last updated : 06/19/2023
You can use the **Data Collectors** dashboard in Permissions Management to view
## Next steps -- For information about viewing an inventory of created resources and licensing information for your authorization system, see [Display an inventory of created resources and licenses for your authorization system](product-data-inventory.md)
+- To view an inventory of created resources and licensing information for your authorization system, see [Display an inventory of created resources and licenses for your authorization system](product-data-inventory.md)
active-directory Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-define-permission-levels.md
Title: Define and manage users, roles, and access levels in Permissions Management
-description: How to define and manage users, roles, and access levels in Permissions Management User management dashboard.
+ Title: Define and manage users, roles, and access levels in Microsoft Entra Permissions Management
+description: How to define and manage users, roles, and access levels in the Permissions Management User management dashboard.
Previously updated : 02/23/2022 Last updated : 06/19/2023
Follow this process to invite users if the customer hasn't enabled SAML integrat
### Invite a user to Permissions Management
-Inviting a user to Permissions Management adds the user to the system and allows system administrators to assign permissions to those users. Follow the steps below to invite a user to Permissions Management.
+Inviting a user to Permissions Management adds the user to the system and allows system administrators to assign permissions to those users. Follow these steps to invite a user to Permissions Management.
1. To invite a user to Permissions Management, select the down caret icon next to the **User** icon on the right of the screen, and then select **User Management**. 2. From the **Users** tab, select **Invite User**.
Creating a permission directly in Permissions Management allows system administr
## Next steps -- For information about how to view user management information, see [Manage users with the User management dashboard](ui-user-management.md).-- For information about how to create group-based permissions, see [Create group-based permissions](how-to-create-group-based-permissions.md).
+- To view user management information, see [Manage users with the User management dashboard](ui-user-management.md).
+- To create group-based permissions, see [Create group-based permissions](how-to-create-group-based-permissions.md).
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
After the Azure Function app is created, create an HTTP trigger function. The HT
1. Within your **Function App**, from the menu select **Functions**. 1. From the top menu, select **+ Create**. 1. In the **Create Function** window, leave the **Development environment** property as **Develop in portal**, and then select the **HTTP trigger** template.
-1. Under **Template details**, enter *CustomExtensionsAPI* for the **New Function** property.
+1. Under **Template details**, enter *CustomAuthenticationExtensionsAPI* for the **New Function** property.
1. For the **Authorization level**, select **Function**. 1. Select **Create**
active-directory How To Use App Roles Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-use-app-roles-customers.md
The following table shows which features are currently available.
| Change security group members using the Microsoft Entra admin center | Yes | | Change security group members using the Microsoft Graph API | Yes | | Scale up to 50,000 users and 50,000 groups | Not currently available |
-| Add 50,000 users to at least two groups | Not currently available |
+| Add 50,000 users to at least two groups | Not currently available |
+
+## Next steps
+
+- Learn how to [Use role-based access control in your web application](how-to-web-app-role-based-access-control.md).
active-directory How To Web App Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-role-based-access-control.md
+
+ Title: Use role-based access control in your Node.js web application
+description: Learn how to configure groups and user roles in your customer's tenant, so you can receive them as claims in a security token for your Node.js application
+++++++++ Last updated : 06/16/2023+++
+# Use role-based access control in your Node.js web application
+
+Role-based access control (RBAC) is a mechanism to enforce authorization in applications. Azure Active Directory (Azure AD) for customers allows you to define application roles for your application and assign those roles to users and groups. The roles you assign to a user or group define their level of access to the resources and operations in your application. When Azure AD for customers issues a security token for an authenticated user, it includes the names of the roles you've assigned to the user or group in the security token's roles claim.
+
+You can also configure your Azure AD for customers tenant to return the group memberships of the user. Developers can then use security groups to implement RBAC in their applications, where the memberships of the user in specific groups are interpreted as their role memberships.
+
+Once you assign users and groups to roles, the *roles* claim is emitted in your security token. However, to emit the *groups* membership claim in security tokens, you need additional configuration in your customer's tenant.
+
+In this article, you learn how to receive user roles, group membership, or both as claims in a security token for your Node.js web app.
+
+## Prerequisites
+
+- A security group in your customer's tenant. If you've not done so, [create one](../../roles/groups-create-eligible.md#azure-portal).
+
+- If you've not done so, complete the steps in the [Using role-based access control for applications](how-to-use-app-roles-customers.md) article. That article shows you how to create roles for your application, assign users and groups to those roles, add members to a group, and add a group claim to a security token. Learn more about [ID tokens](../../develop/id-tokens.md) and [access tokens](../../develop/access-tokens.md).
+
+- If you've not done so, complete the steps in [Sign in users in your own Node.js web application](how-to-web-app-node-sign-in-overview.md).
+
+## Receive groups and roles claims in your Node.js web app
+
+Once you configure your customer's tenant, you can retrieve your *roles* and *groups* claims in your client app. The *roles* and *groups* claims are both present in the ID token and the access token, but your client app only needs to check for these claims in the ID token to implement authorization on the client side. The API app can also retrieve these claims when it receives the access token.
+
+You can check your *roles* claim value as shown in the following code snippet:
+
+```javascript
+const msal = require('@azure/msal-node');
+const { msalConfig, TENANT_SUBDOMAIN, REDIRECT_URI, POST_LOGOUT_REDIRECT_URI } = require('../authConfig');
+
+...
+class AuthProvider {
+...
+ async handleRedirect(req, res, next) {
+ const authCodeRequest = {
+ ...req.session.authCodeRequest,
+ code: req.body.code, // authZ code
+ codeVerifier: req.session.pkceCodes.verifier, // PKCE Code Verifier
+ };
+
+ try {
+ const msalInstance = this.getMsalInstance(this.config.msalConfig);
+ const tokenResponse = await msalInstance.acquireTokenByCode(authCodeRequest, req.body);
+ let roles = tokenResponse.idTokenClaims.roles;
+
+ //Check roles
+ if (roles && roles.includes("Orders.Manager")) {
+ //This user can view the ID token claims page.
+ //Return here so the fallback redirect below doesn't also run.
+ return res.redirect('/id');
+ }
+
+ //User can only view the index page.
+ res.redirect('/');
+ } catch (error) {
+ next(error);
+ }
+ }
+...
+}
+
+```
+
+If you assign a user to multiple roles, the `roles` string contains all roles separated by a comma, such as `Orders.Manager,Store.Manager,...`. Make sure you build your application to handle the following conditions:
+
+- The absence of the `roles` claim in the token
+- The user hasn't been assigned to any role
+- Multiple values in the `roles` claim when you assign a user to multiple roles
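+
+As a minimal sketch of handling these conditions (not part of the sample app itself), you could normalize the claim before checking it. The `requiredRole` value is a placeholder for one of your own app roles, and the normalization assumes the claim may arrive either as an array or as a comma-separated string.
+
+```javascript
+// Sketch only: defensively check an app role from the ID token claims.
+// `tokenResponse` is the result of msalInstance.acquireTokenByCode, as in the earlier example.
+function userHasRole(tokenResponse, requiredRole) {
+    const claim = tokenResponse.idTokenClaims.roles;
+
+    // Condition 1: the roles claim is absent from the token.
+    if (!claim) {
+        return false;
+    }
+
+    // Normalize: the claim may be an array or a comma-separated string.
+    const roles = Array.isArray(claim) ? claim : claim.split(',');
+
+    // Condition 2: the user isn't assigned to any role.
+    if (roles.length === 0) {
+        return false;
+    }
+
+    // Condition 3: multiple roles assigned - check whether the required one is present.
+    return roles.map(role => role.trim()).includes(requiredRole);
+}
+```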
+
+You can also check your *groups* claim value as shown in the following code snippet:
+
+```javascript
+const tokenResponse = await msalInstance.acquireTokenByCode(authCodeRequest, req.body);
+let groups = tokenResponse.idTokenClaims.groups;
+```
+The groups claim value is the group's *objectId*. If a user is a member of multiple groups, the `groups` string contains all groups separated by a comma, such as `7f0621bc-b758-44fa-a2c6-...,6b35e65d-f3c8-4c6e-9538-...`.
+
+> [!NOTE]
+> If you assign a user [Azure AD built-in roles](../../roles/permissions-reference.md), commonly known as directory roles, those roles appear in the *groups* claim of the security token.
+
+## Handle groups overage
+
+To ensure that the size of the security token doesn't exceed the HTTP header size limit, Azure AD for customers limits the number of object IDs that it includes in the *groups* claim. The overage limit is **150 for SAML tokens and 200 for JWT tokens**. It's possible to exceed this limit if a user belongs to many groups, and you request all the groups.
+
+### Detect group overage in your source code
+
+If you can't avoid groups overages, then you need to handle them in your code. When you exceed the overage limit, the token doesn't contain the *groups* claim. Instead, the token contains a *_claim_names* claim that includes a *groups* member. So, you need to check for the existence of the *_claim_names* claim to tell that an overage has occurred. The following code snippet shows you how to detect a groups overage:
+
+```javascript
+const tokenResponse = await msalInstance.acquireTokenByCode(authCodeRequest, req.body);
+
+if(tokenResponse.idTokenClaims.hasOwnProperty('_claim_names') && tokenResponse.idTokenClaims['_claim_names'].hasOwnProperty('groups')) {
+ //overage has occurred
+}
+```
+
+Use the instructions in the [Configuring group claims and app roles in tokens](/security/zero-trust/develop/configure-tokens-group-claims-app-roles#group-overages) article to learn how to request the full groups list when a groups overage occurs.
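+
+The linked article is the authoritative guidance. As an illustration only, the following sketch shows one way a client could fetch the full membership list from Microsoft Graph after detecting an overage. It assumes Node.js 18 or later (for the global `fetch`), and that you can acquire a Microsoft Graph access token with a permission that allows reading the user's memberships, such as *GroupMember.Read.All*; confirm the endpoint and required permission against the linked guidance.
+
+```javascript
+// Sketch only: query Microsoft Graph for the signed-in user's groups when an overage occurs.
+async function getGroupsOnOverage(graphAccessToken) {
+    const response = await fetch('https://graph.microsoft.com/v1.0/me/memberOf', {
+        headers: { Authorization: `Bearer ${graphAccessToken}` },
+    });
+
+    if (!response.ok) {
+        throw new Error(`Graph request failed: ${response.status}`);
+    }
+
+    const body = await response.json();
+
+    // Keep only group objects and return their object IDs, matching the groups claim format.
+    // Note: results can be paged (@odata.nextLink); follow the link for large membership lists.
+    return body.value
+        .filter(obj => obj['@odata.type'] === '#microsoft.graph.group')
+        .map(group => group.id);
+}
+```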
+
+## How to use groups and roles values in your Node.js web app
+
+In the client app, you can verify whether a signed-in user has the necessary roles to access a protected route or call an API endpoint by checking the `roles` claim in the ID token. To implement this protection in your app, you can build guards by using custom middleware.
+
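+The following sketch shows one way to build such a guard with Express middleware. It isn't part of the sample app: it assumes your sign-in flow stored the ID token claims on the session (shown here as `req.session.account.idTokenClaims`, a placeholder you should adjust to your own implementation) and that the *roles* claim is an array.
+
+```javascript
+// Sketch only: an Express middleware guard for role-based route protection.
+function requireRole(requiredRole) {
+    return (req, res, next) => {
+        // Adjust this lookup to wherever your app stores the signed-in user's claims.
+        const claims = req.session?.account?.idTokenClaims;
+        const roles = (claims && claims.roles) || [];
+
+        if (roles.includes(requiredRole)) {
+            return next(); // The user holds the required app role.
+        }
+
+        return res.status(403).send('Forbidden: missing required role.');
+    };
+}
+
+// Usage: only users assigned the Orders.Manager role can reach this route.
+// app.get('/orders', requireRole('Orders.Manager'), (req, res) => res.render('orders'));
+```
+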
+In your service app (API app), you can also protect the API endpoints. After you [validate the access token](../../develop/access-tokens.md#validate-tokens) sent by the client app, you can check for the *roles* or *groups* claims in the payload of the access token.
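+
+As a sketch of what that API-side check could look like, assuming the `jsonwebtoken` and `jwks-rsa` packages are used for validation (the key discovery URI, audience, and issuer below are placeholders you must replace with values from your own tenant's OpenID Connect metadata):
+
+```javascript
+// Sketch only: validate an incoming access token, then check its roles claim.
+const jwt = require('jsonwebtoken');
+const jwksClient = require('jwks-rsa');
+
+// Placeholder metadata values - take the real ones from your tenant's OpenID Connect configuration.
+const client = jwksClient({ jwksUri: 'https://<metadata-keys-endpoint>' });
+
+function getSigningKey(header, callback) {
+    client.getSigningKey(header.kid, (err, key) => {
+        callback(err, key && key.getPublicKey());
+    });
+}
+
+function callerHasRole(accessToken, requiredRole) {
+    return new Promise((resolve, reject) => {
+        jwt.verify(
+            accessToken,
+            getSigningKey,
+            { audience: '<api-application-id>', issuer: '<expected-issuer>' },
+            (err, payload) => {
+                if (err) return reject(err);
+                // The roles (and groups, if configured) claims appear in the validated payload.
+                resolve((payload.roles || []).includes(requiredRole));
+            }
+        );
+    });
+}
+```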
+
+## Do I use App Roles or Groups?
+
+In this article, you have learned that you can use *App Roles* or *Groups* to implement RBAC in your application. The preferred approach is to use app roles, because they provide more granular control when you manage access and permissions at the application level. For more information on how to choose an approach, see [Choose an approach](../../develop/custom-rbac-for-developers.md#choose-an-approach).
+
+## Next steps
+
+- Learn more about [Configuring group claims and app roles in tokens](/security/zero-trust/develop/configure-tokens-group-claims-app-roles).
active-directory Overview Customers Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/overview-customers-ciam.md
Previously updated : 05/07/2023 Last updated : 06/20/2023
Microsoft Entra External ID for customers, also known as Azure Active Directory
[!INCLUDE [preview-alert](../customers/includes/preview-alert/preview-alert-ciam.md)]
-## Add customized sign-in to your customer-facing apps
-
-Azure AD for customers is intended for businesses that want to make applications available to their customers using the Microsoft Entra platform for identity and access.
--- **Add sign-up and sign-in pages to your apps.** Quickly add intuitive, user-friendly sign-up and sign-up experiences for your customer apps. With a single identity, a customer can securely access all the applications you want them to use.--- **Add single sign-on (SSO) with social and enterprise identities.** Customers can choose a social, enterprise, or managed identity to sign in with a username and password, email, or one-time passcode.--- **Add your company branding to the sign-up page.** Customize the look and feel of your sign-up and sign-in experiences, including both the default experience and the experience for specific browser languages.--- **Easily customize and extend your sign-up flows.** Tailor your identity user flows to your needs. Choose the attributes you want to collect from a customer during sign-up, or add your own custom attributes. If the information your app needs is contained in an external system, create custom authentication extensions to collect and add data to authentication tokens.--- **Integrate multiple app languages and platforms.** With Microsoft Entra, you can quickly set up and deliver secure, branded authentication flows for multiple app types, platforms, and languages.--- **Provide self-service account management.** Customers can register for your online services by themselves, manage their profile, delete their account, enroll in a multifactor authentication (MFA) method, or reset their password with no admin or help desk assistance.-
-Learn more about [adding sign-in and sign-up to your app](concept-planning-your-solution.md) and [customizing the sign-in look and feel](concept-branding-customers.md).
-
-## Manage apps and users in a dedicated customer tenant
+## Create a dedicated tenant for your customer scenarios
-Azure AD for customers uses the standard tenant model and overlays it with customized onboarding journeys for workforce or customer scenarios. B2B collaboration is part of workforce configurations. With the introduction of Azure AD for customers, Microsoft Entra now offers two different types of tenants that you can create and manage: *workforce tenants* and *customer tenants*.
+When getting started with Azure AD for customers, you first create a tenant that will contain your customer-facing apps, resources, and directory of customer accounts.
-A **workforce tenant** contains your employees and the apps and resources that are internal to your organization. If you've worked with Azure Active Directory, a workforce tenant is the type of tenant you're already familiar with. You might already have an existing workforce tenant for your organization.
-
-In contrast, a **customer tenant** represents your customer-facing app, resources, and directory of customer accounts. A customer tenant is distinct and separate from your workforce tenant. A customer tenant is the first resource you need to create to get started with Azure AD for customers. To establish a CIAM solution for a customer-facing app or service, you create a new customer tenant. A customer tenant contains:
+If you've worked with Azure Active Directory, you're already familiar with using an Azure AD tenant that contains your employee directory, internal apps, and other organizational resources. With Azure AD for customers, you create a distinct tenant that follows the standard Azure AD tenant model but is configured for customer scenarios. This tenant contains:
- **A directory**: The directory stores your customers' credentials and profile data. When a customer signs up for your app, a local account is created for them in your customer tenant.
In contrast, a **customer tenant** represents your customer-facing app, resource
- **Encryption keys**: Add and manage encryption keys for signing and validating tokens, client secrets, certificates, and passwords.
-There are two types of user accounts you can manage in a customer tenant:
+There are two types of user accounts you can manage in your customer tenant:
- **Customer account**: Accounts that represent the customers who access your applications. - **Admin account**: Users with work accounts can manage resources in a tenant, and with an administrator role, can also manage tenants. Users with work accounts can create new consumer accounts, reset passwords, block/unblock accounts, and set permissions or assign an account to a security group. Learn more about managing [customer accounts](how-to-manage-customer-accounts.md) and [admin accounts](how-to-manage-admin-accounts.md) in your customer tenant.
+## Add customized sign-in to your customer-facing apps
+
+Azure AD for customers is intended for businesses that want to make applications available to their customers using the Microsoft Entra platform for identity and access.
+
+- **Add sign-up and sign-in pages to your apps.** Quickly add intuitive, user-friendly sign-up and sign-in experiences for your customer apps. With a single identity, a customer can securely access all the applications you want them to use.
+
+- **Add single sign-on (SSO) with social and enterprise identities.** Customers can choose a social, enterprise, or managed identity to sign in with a username and password, email, or one-time passcode.
+
+- **Add your company branding to the sign-up page.** Customize the look and feel of your sign-up and sign-in experiences, including both the default experience and the experience for specific browser languages.
+
+- **Easily customize and extend your sign-up flows.** Tailor your identity user flows to your needs. Choose the attributes you want to collect from a customer during sign-up, or add your own custom attributes. If the information your app needs is contained in an external system, create custom authentication extensions to collect and add data to authentication tokens.
+
+- **Integrate multiple app languages and platforms.** With Microsoft Entra, you can quickly set up and deliver secure, branded authentication flows for multiple app types, platforms, and languages.
+
+- **Provide self-service account management.** Customers can register for your online services by themselves, manage their profile, delete their account, enroll in a multifactor authentication (MFA) method, or reset their password with no admin or help desk assistance.
+
+Learn more about [adding sign-in and sign-up to your app](concept-planning-your-solution.md) and [customizing the sign-in look and feel](concept-branding-customers.md).
## Design user flows for self-service sign-up
active-directory What Is Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date | |||:|
-|[Microsoft Authenticator Lite for Outlook mobile](../../active-directory/authentication/how-to-mfa-authenticator-lite.md)|Feature change|Jun 9, 2023|
|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Sometime after GA| |[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023|
-|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023|
-|[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023|
+|[Azure AD Graph API](https://aka.ms/aadgraphupdate)|Retirement|Jun 30, 2023|
|[My Apps improvements](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jun 30, 2023| |[Terms of Use experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jul 2023|
+|[Azure AD PowerShell and MSOnline PowerShell](https://aka.ms/aadgraphupdate)|Deprecation|Mar 30, 2024|
|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024| |[Legacy MFA & SSPR policy](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024| |['Require approved client app' Conditional Access Grant](https://aka.ms/RetireApprovedClientApp)|Retirement|Mar 31, 2026|
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date | |||:|
+|[Microsoft Authenticator Lite for Outlook mobile](../../active-directory/authentication/how-to-mfa-authenticator-lite.md)|Feature change|Jun 9, 2023|
|[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023| |[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023| |Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|
active-directory Entitlement Management Access Package Auto Assignment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md
You can use rules to determine access package assignment based on user properties in Azure Active Directory (Azure AD), part of Microsoft Entra. In Entitlement Management, an access package can have multiple policies, and each policy establishes how users get an assignment to the access package, and for how long. As an administrator, you can establish a policy for automatic assignments by supplying a membership rule, that Entitlement Management will follow to create and remove assignments automatically. Similar to a [dynamic group](../enterprise-users/groups-create-rule.md), when an automatic assignment policy is created, user attributes are evaluated for matches with the policy's membership rule. When an attribute changes for a user, these automatic assignment policy rules in the access packages are processed for membership changes. Assignments to users are then added or removed depending on whether they meet the rule criteria.
-You can have at most one automatic assignment policy in an access package, and the policy can only be created by an administrator.
+You can have at most one automatic assignment policy in an access package, and the policy can only be created by an administrator. (Catalog owners and access package managers cannot create automatic assignment policies.)
This article describes how to create an access package automatic assignment policy for an existing access package. ## Before you begin
-You'll need to have attributes populated on the users who will be in scope for being assigned access. The attributes you can use in the rules criteria of an access package assignment policy are those attributes listed in [supported properties](../enterprise-users/groups-dynamic-membership.md#supported-properties), along with [extension attributes and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties). These attributes can be brought into Azure AD from [Graph](/graph/api/resources/user), an HR system such as [SuccessFactors](../app-provisioning/sap-successfactors-integration-reference.md), [Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) or [Azure AD Connect sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
+You'll need to have attributes populated on the users who will be in scope for being assigned access. The attributes you can use in the rules criteria of an access package assignment policy are those attributes listed in [supported properties](../enterprise-users/groups-dynamic-membership.md#supported-properties), along with [extension attributes and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties). These attributes can be brought into Azure AD from [Graph](/graph/api/resources/user), an HR system such as [SuccessFactors](../app-provisioning/sap-successfactors-integration-reference.md), [Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) or [Azure AD Connect sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md). The rules can include up to 5000 users per policy.
## Create an automatic assignment policy (Preview)
To create a policy for an access package, you need to start from the access pack
> [!NOTE] > In this preview, Entitlement management will automatically create a dynamic security group corresponding to each policy, in order to evaluate the users in scope. This group should not be modified except by Entitlement Management itself. This group may also be modified or deleted automatically by Entitlement Management, so don't use this group for other applications or scenarios.
-1. Azure AD will evaluate the users in the organization that are in scope of this rule, and create assignments for those users who don't already have assignments to the access package. It may take several minutes for the evaluation to occur, or for subsequent updates to user's attributes to be reflected in the access package assignments.
+1. Azure AD will evaluate the users in the organization that are in scope of this rule, and create assignments for those users who don't already have assignments to the access package. A policy can include at most 5000 users in its rule. It may take several minutes for the evaluation to occur, or for subsequent updates to user's attributes to be reflected in the access package assignments.
## Create an automatic assignment policy programmatically (Preview)
active-directory Entitlement Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-troubleshoot.md
You can only cancel a pending request that hasn't yet been delivered or whose de
1. In the request details pane, select **Cancel request**.
+## Automatic assignment policies
+
+* Each automatic assignment policy can include at most 5000 users in scope of its rule. Additional users in scope of the rule may not be assigned access.
+ ## Multiple policies * Entitlement management follows least privilege best practices. When a user requests access to an access package that has multiple policies that apply, entitlement management includes logic to help ensure stricter or more specific policies are prioritized over generic policies. If a policy is generic, entitlement management might not display the policy to the requestor or might automatically select a stricter policy.
active-directory Concept Sign In Diagnostics Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-in-diagnostics-scenarios.md
Title: Sign in diagnostics for Azure AD scenarios
-description: Lists the scenarios that are supported by the sign-in diagnostics for Azure AD.
+ Title: Common sign-in diagnostics AD scenarios
+description: Learn about the scenarios supported by the sign-in diagnostics for Azure AD.
Previously updated : 11/04/2022 Last updated : 06/19/2023
# Sign in diagnostics for Azure AD scenarios
+<a name="supported-scenarios"></a>
-You can use the sign-in diagnostic for Azure AD to analyze what happened during a sign-in attempt and get recommendations for resolving problems without needing to involve Microsoft support.
+You can use the sign-in diagnostic for Azure Active Directory (Azure AD) to analyze what happened during a sign-in attempt and get recommendations for resolving problems without needing to involve Microsoft support.
This article gives you an overview of the types of scenarios you can identify and resolve when using this tool.
-## Supported scenarios
+## How to access the Sign-in Diagnostics
-The sign-in diagnostic for Azure AD provides you with support for the following scenarios:
+There are three ways to access the Sign-in Diagnostics tool: from the Diagnose and solve problems area, the Azure AD sign-in logs, and when creating a new support request. For more information, see [How to use the sign-in diagnostics](howto-use-sign-in-diagnostics.md).
+## Conditional Access
-- **Conditional Access**
+Conditional Access policies are used to apply the right access controls when needed to keep your organization secure. Because Conditional Access policies can be used to grant or block access to resources, they often show up in the sign-in diagnostic.
- - Blocked by conditional access
+- [Blocked by Conditional access](../conditional-access/concept-conditional-access-grant.md#block-access)
+ - Your Conditional Access policies prevented the user from signing in.
- - Failed conditional access
+- [Failed Conditional access](../conditional-access/troubleshoot-conditional-access.md#select-all-consequences):
+ - It's possible your Conditional Access policies are too strict.
+ - Review your configurations for complete sets of users, groups, and apps.
+ - Make sure you understand the implications of restricting access from certain types of devices.
- - Multifactor authentication (MFA) from conditional access
+- [Multi-factor authentication (MFA) from Conditional access](../conditional-access/concept-conditional-access-grant.md#require-multifactor-authentication):
+ - Your Conditional Access policies triggered the MFA process for the user.
- - B2B Blocked Sign-In Due to Conditional Access
+- [B2B blocked sign-in due to Conditional Access](../external-identities/authentication-conditional-access.md#conditional-access-for-external-users):
+ - You have a Conditional Access policy in place to block external identities from signing in.
-- **Multifactor Authentication (MFA)**
+## Multi-factor authentication
- - MFA from other requirements
+### MFA from other requirements
- - MFA proof up required
+If the sign-in diagnostic results showed MFA from a requirement other than Conditional Access, you may have MFA enabled on a per-user basis. We [recommend converting per-user MFA to Conditional Access](recommendation-turn-off-per-user-mfa.md). The sign-in diagnostic provides details around the source of the MFA interruption and the result of the interaction.
- - MFA proof up required (risky sign-in location)
+### MFA "proofup"
-- **Correct & Incorrect Credentials**
+Another common scenario occurs when MFA interrupts sign-in attempts. When you run the sign-in diagnostic, information about "proofup" is provided in the diagnostic results. This error appears when users are setting up MFA for the first time and don't complete the setup or their configuration wasn't set up ahead of time.
- - Successful sign-in
+![Screenshot of the diagnostic results for MFA proofup.](media/concept-sign-in-diagnostics-scenarios/diagnostic-result-mfa-proofup.png)
- - Account locked
+## Correct & incorrect credentials
- - Invalid username or password
+### Successful sign-in
-- **Enterprise Apps**
+In some cases, you want to know if sign-in events *aren't* interrupted by Conditional Access or MFA, but they *should* be. The sign-in diagnostic tool provides details about sign-in events that should be interrupted, but aren't.
- - Enterprise apps service provider
+### Account locked
- - Enterprise apps configuration
+Another common scenario is when a user attempts to sign in with incorrect credentials too many times. This error happens when too many password-based sign-in attempts have occurred with incorrect credentials. The diagnostic results provide information for the administrator to determine where the attempts are coming from and if they're legitimate user sign-in attempts or not. Running the sign-in diagnostic provides details about the apps, the number of attempts, the device used, the operating system, and the IP address. For more information, see [Azure AD Smart Lockout](../authentication/howto-password-smart-lockout.md).
-- **Other Scenarios**
+### Invalid username or password
- - Security defaults
-
- - Error code insights
+If a user tried to sign in using an invalid username or password, the sign-in diagnostic helps the administrator determine the source of the problem. The source could be a user entering incorrect credentials, or a client or application that has cached an old password and is resubmitting it. The sign-in diagnostic provides details about the apps, the number of attempts, the device used, the operating system, and the IP address.
- - Legacy authentication
-
- - B2B blocked sign-in due to conditional access
-
- - Blocked by risk policy
-
- - Pass Through Authentication
-
- - Seamless single sign-on
---
-## Conditional access
--
-### Blocked by conditional access
-
-In this scenario, a sign-in attempt has been blocked by a conditional access policy.
--
-![Screenshot showing access configuration with Block access selected.](./media/concept-sign-in-diagnostics-scenarios/block-access.png)
-
-The diagnostic section for this scenario shows details about the user sign-in event and the applied policies.
-
-
-
-### Failed conditional access
-
-This scenario is typically a result of a sign-in attempt that failed because the requirements of a conditional access policy were not satisfied. Common examples are:
---
-![Screenshot showing access configuration with common policy examples and Grant access selected.](./media/concept-sign-in-diagnostics-scenarios/require-controls.png)
--- Require hybrid Azure AD joined device --- Require approved client app --- Require app protection policy -
-The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
-
-
-
-### MFA from conditional access
-
-In this scenario, a conditional access policy has the requirement to sign in using multifactor authentication set.
---
-![Screenshot showing access configuration with Require multifactor authentication selected.](./media/concept-sign-in-diagnostics-scenarios/require-mfa.png)
-
-The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
-
-
-
-
-
-## Multifactor authentication
-
-### MFA from other requirements
-
-In this scenario, a multifactor authentication requirement wasn't enforced by a conditional access policy. For example, multifactor authentication on a per-user basis.
---
-![Screenshot showing multifactor authentication per user configuration.](./media/concept-sign-in-diagnostics-scenarios/mfa-per-user.png)
-
-The intent of this diagnostic scenario is to provide more details about:
--- The source of the interrupted multifactor authentication --- The result of the client interaction -
-You can also view all details of the user sign-in attempt.
-
-
-
-### MFA proof up required
-
-In this scenario, sign-in attempts were interrupted by requests to set up multifactor authentication. This setup is also known as proof up.
-
-
-
-Multifactor authentication proof up occurs when a user is required to use multifactor authentication but has not configured it yet, or an administrator has required the user to configure it.
-
-
-
-The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up.
-
-
-
-### MFA proof up required (risky sign-in location)
-
-In this scenario, sign-in attempts were interrupted by a request to set up multifactor authentication from a risky sign-in location.
-
-
-
-The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up, specifically from a network location that doesn't appear risky.
-
-
-
-An example of this scenario is when policy requires that the user setup MFA only from trusted network locations but the user is signing in from an untrusted network location.
-
-
-
-## Correct & incorrect credential
-
-### Successful sign-in
-
-In this scenario, sign-in events are not interrupted by conditional access or multifactor authentication.
-
-
-
-This diagnostic scenario provides details about user sign-in events that are expected to be interrupted due to conditional access policies or multifactor authentication.
-
-
-
-### The account is locked
-
-In this scenario, a user signed-in with incorrect credentials too many times. This scenario happens when too many password-based sign-in attempts have occurred with incorrect credentials. The diagnostic scenario provides information for the admin to determine where the attempts are coming from and if they are legitimate user sign-in attempts or not.
-
-
-
-This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system, and the IP address.
-
-
-
-More information about this topic can be found in the Azure AD Smart Lockout documentation.
-
-
-
-
-
-### Invalid username or password
-
-In this scenario, a user tried to sign in using an invalid username or password. The diagnostic is intended to allow an administrator to determine if the problem is with a user entering incorrect credentials, or a client and/or application(s), which have cached an old password and are resubmitting it.
-
-
-
-This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system and the IP address.
-
-
-
-## Enterprise app
+## Enterprise apps
In enterprise applications, there are two points where problems may occur: - The identity provider (Azure AD) application configuration -- The service provider (application service, also known as SaaS application) side-
-
+- The service provider (application service, also known as SaaS application) configuration
-Diagnostics for these problems address which side of the problem should be looked at for resolution and what to do.
-
-
+Diagnostics for these problems address which side of the problem should be looked at for resolution and what to do.
### Enterprise apps service provider
-In this scenario, a user tried to sign in to an application. The sign-in failed due to a problem with the application (also known as service provider) side of the sign-in flow. Problems detected by this diagnosis typically must be resolved by changing the configuration or fixing problems on the application service.
-
-Resolution for this scenario means signing into the other service and changing some configuration per the diagnostic guidance.
-
-
-
-### Enterprise apps configuration
+If the error occurred when a user tried to sign in to an application, the sign-in failed due to a problem with the service provider (application) side of the sign-in flow. Problems detected by the sign-in diagnostic typically must be resolved by changing the configuration or fixing problems on the application service. Resolution for this scenario means you need to sign in to the other service and change some configuration per the diagnostic guidance.
-In this scenario, a sign-in failed due to an application configuration issue for the Azure AD side of the application.
-
-
+### Enterprise apps configuration
-Resolution for this scenario requires reviewing and updating the configuration of the application in the Enterprise Applications blade entry for the application.
-
-
+Sign-in can fail due to an application configuration issue for the Azure AD side of the application. In these situations, resolution requires reviewing and updating the configuration of the application in the Enterprise Applications page for the application.
## Other scenarios ### Security defaults
-This scenario covers sign-in events where the userΓÇÖs sign-in was interrupted due to security defaults settings. Security defaults enforce best practice security for your organization and require multifactor authentication (MFA) to be configured and used in many scenarios to prevent password sprays, replay attacks and phishing attempts from being successful.
+Sign-in events can be interrupted due to security defaults settings. Security defaults enforce best practice security for your organization. One best practice is to require MFA to be configured and used to prevent password sprays, replay attacks, and phishing attempts from being successful.
For more information, see [What are security defaults?](../fundamentals/concept-fundamentals-security-defaults.md) ### Error code insights
-When an event does not have a contextual analysis in the sign-in diagnostic an updated error code explanation and relevant content may be shown. The error code insights contain detailed text about the scenario, how to remediate the problem, and any content to read regarding the problem.
+When an event doesn't have a contextual analysis in the sign-in diagnostic, an updated error code explanation and relevant content may be shown. The error code insights contain detailed text about the scenario, how to remediate the problem, and any content to read regarding the problem.
### Legacy authentication
-This diagnostics scenario diagnosis a sign-in event which was blocked or interrupted since the client was attempting to use Basic (also known as Legacy) Authentication.
+This scenario involves a sign-in event that was blocked or interrupted because the client was attempting to use Legacy (or Basic) Authentication.
-Preventing legacy authentication sign-in is recommended as the best practice for security. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI cannot enforce multifactor authentication (MFA), which makes them preferred entry points for adversaries to attack your organization.
+Preventing legacy authentication sign-in is recommended as the best practice for security. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI can't enforce MFA, which makes them preferred entry points for adversaries to attack your organization.
For more information, see [How to block legacy authentication to Azure AD with Conditional Access](../conditional-access/block-legacy-authentication.md).
-### B2B blocked sign-in due to conditional access
+### B2B blocked sign-in due to Conditional access
-This diagnostic scenario detects a blocked or interrupted sign-in due to the user being from another organization-a B2B sign-in-where a Conditional Access policy requires that the client's device is joined to the resource tenant.
+This diagnostic scenario detects a blocked or interrupted sign-in due to the user being from another organization. For example, a B2B sign-in where a Conditional Access policy requires that the client's device is joined to the resource tenant.
For more information, see [Conditional Access for B2B collaboration users](../external-identities/authentication-conditional-access.md).
For more information, see [How to configure and enable risk policies](../identit
Because pass-through authentication is an integration of on-premises and cloud authentication technologies, it can be difficult to determine where the problem lies. This diagnostic is intended to make these scenarios easier to diagnose and resolve.
-This diagnostic scenario identifies user specific sign-in issues when the authentication method being used is pass through authentication (PTA) and there is a PTA specific error. Errors due to other problems-even when PTA authentication is being used-will still be diagnosed correctly.
-
-The diagnostic shows contextual information about the failure and the user signing in, additional reasons why the sign-in failed, and recommended actions the admin can take to resolve the problem. For more information, see [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md).
+This diagnostic scenario identifies user-specific sign-in issues when the authentication method being used is pass-through authentication (PTA) and there's a PTA-specific error. Errors due to other problems, even when PTA authentication is being used, will still be diagnosed correctly.
+The diagnostic results show contextual information about the failure and the user signing in. The results could show other reasons why the sign-in failed, and recommended actions the admin can take to resolve the problem. For more information, see [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md).
-### Seamless single sign on
+### Seamless single sign-on
-Seamless single sign on integrates Kerberos authentication with cloud authentication. Because this scenario involves two authentication protocols it can be difficult to understand where a failure point lies when sign-in problems occur. This diagnostic is intended to make these scenarios easier to diagnose and resolve.
+Seamless single sign-on integrates Kerberos authentication with cloud authentication. Because this scenario involves two authentication protocols, it can be difficult to understand where a failure point lies when sign-in problems occur. This diagnostic is intended to make these scenarios easier to diagnose and resolve.
-This diagnostic scenario examines the context of the sign-in failure and specific failure cause, contextual information on the sign-in attempt, and suggested actions which the admin can take-on premises or in the cloud-to resolve the problem. For more information, see [Troubleshoot Azure Active Directory Seamless Single Sign-On](../hybrid/tshoot-connect-sso.md).
-
------
+This diagnostic scenario examines the context of the sign-in failure and specific failure cause. The diagnostic results could include contextual information on the sign-in attempt, and suggested actions the admin can take. For more information, see [Troubleshoot Azure Active Directory Seamless single sign-on](../hybrid/tshoot-connect-sso.md).
## Next steps -- [What is the sign-in diagnostic in Azure AD?](overview-sign-in-diagnostics.md)
+- [How to use the sign-in diagnostic](howto-use-sign-in-diagnostics.md)
+- [How to troubleshoot sign-in errors](howto-troubleshoot-sign-in-errors.md)
active-directory Howto Troubleshoot Sign In Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md
Title: How to troubleshoot sign-in errors reports
+ Title: How to troubleshoot sign-in errors
description: Learn how to troubleshoot sign-in errors using Azure Active Directory reports in the Azure portal
Previously updated : 02/16/2023 Last updated : 06/19/2023 - # How to: Troubleshoot sign-in errors using Azure Active Directory reports
-The [sign-ins report](concept-sign-ins.md) in Azure Active Directory (Azure AD) enables you to find answers to questions around managing access to the applications in your organization, including:
+The Azure Active Directory (Azure AD) sign-in logs enable you to find answers to questions around managing access to the applications in your organization, including:
- What is the sign-in pattern of a user? - How many users have signed in over a week? - What's the status of these sign-ins? -
-In addition, the sign-ins report can also help you troubleshoot sign-in failures for users in your organization. In this guide, you learn how to isolate a sign-in failure in the sign-ins report, and use it to understand the root cause of the failure.
+The sign-in logs can also help you troubleshoot sign-in failures for users in your organization. In this guide, you learn how to isolate a sign-in failure in the sign-in logs and use it to understand the root cause of the failure. Some common sign-in errors are also described.
## Prerequisites You need:
-* An Azure AD tenant with a premium (P1/P2) license. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition.
-* A user, who is in the **global administrator**, **security administrator**, **security reader**, or **reports reader** role for the tenant. In addition, any user can access their own sign-ins.
+* An Azure AD tenant with a Premium P1/P2 license.
+* A user with the **Global Administrator**, **Security Administrator**, **Security Reader**, or **Reports Reader** role for the tenant.
+* In addition, any user can access their own sign-ins from https://mysignins.microsoft.com.
+
+## Gather sign-in details
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with the least privileged role required.
+1. Go to **Azure AD** > **Sign-ins**.
+1. Use the filters to narrow down the results:
+ - Search by username if you're troubleshooting a specific user.
+ - Search by application if you're troubleshooting issues with a specific app.
+ - Select **Failure** from the **Status** menu to display only failed sign-ins.
+1. Select the failed sign-in you want to investigate to open the details window.
+1. Explore the details on each tab. You may want to save a few details for further troubleshooting. These details are highlighted in the screenshot following the list.
+ - Correlation ID
+ - Sign-in error code
+ - Failure reason
+ - Username, User ID, and Sign-in identifier
+
+ ![Screenshot of the sign-in details, with several details highlighted.](media/howto-troubleshoot-sign-in-errors/sign-in-activity-details.png)
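+
+If you prefer to gather the same details from the command line, the following is a minimal sketch that reads the sign-in logs through Microsoft Graph with `az rest`. The user name is a placeholder, and it assumes your account and the Azure CLI have permission to read the sign-in logs (the Microsoft Graph `AuditLog.Read.All` permission).
+
+```azurecli-interactive
+# List a user's ten most recent sign-ins with the fields used for troubleshooting.
+# Replace user@contoso.com with the user you're investigating.
+az rest --method GET \
+  --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=userPrincipalName eq 'user@contoso.com'&\$top=10" \
+  --query "value[].{time:createdDateTime, app:appDisplayName, errorCode:status.errorCode, failureReason:status.failureReason, correlationId:correlationId}"
+```
+
+Failed attempts report a nonzero `errorCode`; successful sign-ins report `0`.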
+
+## Troubleshoot sign-in errors
+
+With sign-in details gathered, you should explore the results and troubleshoot the issue.
+
+### Failure reason and additional details
+
+The **Failure reason** and **Additional Details** fields often give you what you need to resolve the issue. The **Failure reason** describes the error, while **Additional Details** provides more context and often tells you how to resolve it.
+
+![Screenshot of the activity details, with the failure reason and details highlighted.](media/howto-troubleshoot-sign-in-errors/sign-in-activity-details-failure-reason.png)
+
+The following failure reasons and details are common:
+
+- The failure reason **Authentication failed during the strong authentication request** doesn't provide much to troubleshoot, but the additional details field says the user didn't complete the MFA prompt. Have the user sign in again and complete the MFA prompts.
+- The failure reason **The Federation Service failed to issue an OAuth Primary Refresh Token** provides a good starting point, and the additional details briefly explain how authentication works in this scenario and tell you to make sure that device sync is enabled.
+- A common failure reason is **Error validating credentials due to invalid username or password**. The user entered something incorrectly and needs to try again.
-## Troubleshoot sign-in errors using the sign-ins report
+### Sign-in error codes
-1. Navigate to the [Azure portal](https://portal.azure.com) and select your directory.
-2. Select **Azure Active Directory** and select **Sign-ins** from the **Monitoring** section.
-3. Use the provided filters to narrow down the failure, either by the username or object identifier, application name or date. In addition, select **Failure** from the **Status** drop-down to display only the failed sign-ins.
+If you need more specifics, you can use the **sign-in error code** for further research.
- ![Filter results](./media/howto-troubleshoot-sign-in-errors/filters.png)
-
-4. Identify the failed sign-in you want to investigate. Select it to open up the other details window with more information about the failed sign-in. Note down the **Sign-in error code** and **Failure reason**.
+- Enter the error code into the **[Error code lookup tool](https://login.microsoftonline.com/error)** to get the error code description and remediation information.
+- Search for an error code in the **[sign-ins error codes reference](../develop/reference-aadsts-error-codes.md)**.
- ![Select record](./media/howto-troubleshoot-sign-in-errors/sign-in-failures.png)
-
-5. You can also find this information in the **Troubleshooting and support** tab in the details window.
+The following error codes are associated with sign-in events, but this list isn't exhaustive:
- ![Troubleshooting and support](./media/howto-troubleshoot-sign-in-errors/troubleshooting-and-support.png)
+- **50058**: User is authenticated but not yet signed in.
+ - This error code appears for sign-in attempts when the user didn't complete the sign-in process.
+ - Because the user didn't sign in completely, the User field may display an Object ID or a globally unique identifier (GUID) instead of a username.
+ - In some of these situations, the User ID shows up like "00000000-0000-0000".
-6. The failure reason describes the error. For example, in the above scenario, the failure reason is **Invalid username or password or Invalid on-premises username or password**. The fix is to simply sign-in again with the correct username and password.
+- **90025**: An internal Azure AD service hit its retry allowance to sign the user in.
+ - This error often happens without the user noticing and is usually resolved automatically.
+ - If it persists, have the user sign in again.
-7. You can get additional information, including ideas for remediation, by searching for the error code, **50126** in this example, in the [sign-ins error codes reference](../develop/reference-error-codes.md).
+- **500121**: User didn't complete the MFA prompt.
+ - This error often appears if the user hasn't completed setting up MFA.
+ - Instruct the user to complete the MFA setup process and then sign in again.
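+
+To see how widespread one of these errors is across your tenant, you can filter the sign-in logs by error code. The following is a minimal sketch using Microsoft Graph through `az rest`; error code **50126** (invalid username or password) is used as an example, and it assumes your account has permission to read the sign-in logs.
+
+```azurecli-interactive
+# List recent sign-ins that failed with a specific error code (50126 is an example).
+az rest --method GET \
+  --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=status/errorCode eq 50126&\$top=25" \
+  --query "value[].{time:createdDateTime, user:userPrincipalName, app:appDisplayName}"
+```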
-8. If all else fails, or the issue persists despite taking the recommended course of action, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) following the steps in the **Troubleshooting and support** tab.
+If all else fails, or the issue persists despite taking the recommended course of action, [open a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). For more information, see [how to get support for Azure AD](../fundamentals/how-to-get-support.md).
## Next steps * [Sign-ins error codes reference](./concept-sign-ins.md)
-* [Sign-ins report overview](concept-sign-ins.md)
+* [Sign-ins report overview](concept-sign-ins.md)
+* [How to use the Sign-in diagnostics](howto-use-sign-in-diagnostics.md)
active-directory Howto Use Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-sign-in-diagnostics.md
++
+ Title: How to use the Sign-in diagnostic
+description: Information on how to use the Sign-in diagnostic in Azure Active Directory.
+++++++ Last updated : 06/19/2023++++
+# Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.
+++
+# What is the Sign-in diagnostic in Azure AD?
+
+Determining the reason for a failed sign-in can quickly become a challenging task. You need to analyze what happened during the sign-in attempt, and research the available recommendations to resolve the issue. Ideally, you want to resolve the issue without involving others, such as Microsoft support. If you're in a situation like this, you can use the Sign-in diagnostic, a tool that helps you investigate sign-ins in Azure AD.
+
+This article gives you an overview of what the Sign-in diagnostic is and how you can use it to troubleshoot sign-in related errors.
+
+## How it works
+
+In Azure AD, sign-in attempts are controlled by:
+
+- **Who** performed a sign-in attempt.
+- **How** a sign-in attempt was performed.
+
+For example, you can configure Conditional Access policies that enable administrators to configure all aspects of the tenant when they sign in from the corporate network. But the same user might be blocked when they sign in to the same account from an untrusted network.
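+
+As an illustration of the kind of policy involved, the following is a minimal sketch that creates such a policy in report-only mode through Microsoft Graph. The policy shape follows the Graph `conditionalAccessPolicy` resource, the display name is a placeholder, and it assumes an account with the `Policy.ReadWrite.ConditionalAccess` permission; treat it as an example, not a recommendation.
+
+```azurecli-interactive
+# Report-only policy: require MFA for all users and apps outside trusted locations.
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
+  --headers "Content-Type=application/json" \
+  --body '{
+    "displayName": "Require MFA outside trusted locations (example)",
+    "state": "enabledForReportingButNotEnforced",
+    "conditions": {
+      "users": { "includeUsers": ["All"] },
+      "applications": { "includeApplications": ["All"] },
+      "locations": { "includeLocations": ["All"], "excludeLocations": ["AllTrusted"] }
+    },
+    "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
+  }'
+```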
+
+Due to the greater flexibility of the system to respond to a sign-in attempt, you might end up in scenarios where you need to troubleshoot sign-ins. The Sign-in diagnostic tool enables diagnosis of sign-in issues by:
+
+- Analyzing data from sign-in events and flagged sign-ins.
+- Displaying information about what happened.
+- Providing recommendations to resolve problems.
+
+## How to access it
+
+To use the Sign-in diagnostic, you must be signed into the tenant as a **Global Reader** or **Global Administrator**. With the correct access level, you can start the Sign-in diagnostic from more than one place.
+
+Flagged sign-in events can also be reviewed from the Sign-in diagnostic. Flagged sign-in events are captured *after* a user has enabled flagging during their sign-in experience. For more information, see [flagged sign-ins](overview-flagged-sign-ins.md).
+
+### From Diagnose and Solve Problems
+
+You can start the Sign-in diagnostic from the **Diagnose and Solve Problems** area of Azure AD. From Diagnose and Solve Problems you can review any flagged sign-in events or search for a specific sign-in event. You can also start this process from the Conditional Access Diagnose and Solve Problems area.
+
+**To search for sign-in events**:
+1. Go to **Azure AD** or **Azure AD Conditional Access** > **Diagnose and Solve Problems**.
+1. Select the **All Sign-In Events** tab to start a search.
+1. Enter as many details as possible into the search fields.
+ - **User**: Provide the name or email address of who made the sign-in attempt.
+ - **Application**: Provide the application display name or application ID.
+ - **correlationId** or **requestId**: These details can be found in the error report or the sign-in log details.
+ - **Date and time**: Provide a date and time to find sign-in events that occurred within 48 hours.
+1. Select the **Next** button.
+1. Explore the results and take action as necessary.
+
+### From the Sign-in logs
+
+You can start the Sign-in diagnostic from a specific sign-in event in the Sign-in logs. When you start the process from a specific sign-in event, the diagnostics start right away. You aren't prompted to enter details first.
+
+1. Go to **Azure AD** > **Sign-in logs** and select a sign-in event.
+ - You can filter your list to make it easier to find specific sign-in events.
+1. From the Activity Details window that opens, select the **Launch the Sign-in diagnostic** link.
+
+ ![Screenshot showing how to launch sign-in diagnostics from Azure AD.](./media/overview-sign-in-diagnostics/sign-in-logs-link.png)
+1. Explore the results and take action as necessary.
+
+### From a support request
+
+If you're in the middle of creating a support request *and* the options you selected are related to sign-in activity, you'll be prompted to run the Sign-in diagnostics during the support request process.
+
+1. Go to **Azure AD** > **Diagnose and Solve Problems**.
+1. Select the appropriate fields as necessary. For example:
+ - **Service type**: Azure Active Directory Sign-in and Multi-Factor Authentication
+ - **Problem type**: Multi-Factor Authentication
+ - **Problem subtype**: Unable to sign-in to an application due to MFA
+1. Explore the results and take action as necessary.
+
+ ![Screenshot of the support request fields that start the sign-in diagnostics.](media/howto-use-sign-in-diagnostics/sign-in-support-request.png)
+
+## How to use the diagnostic results
+
+After the Sign-in diagnostic completes its search, a few things appear on the screen:
+
+- The **Authentication Summary** lists all of the events that match the details you provided.
+ - Select the **View Columns** option in the upper-right corner of the summary to change the columns that appear.
+- The **Diagnostic Results** describe what happened during the sign-in events.
+ - Scenarios could include MFA requirements from a Conditional Access policy, sign-in events that may need to have a Conditional Access policy applied, or a large number of failed sign-in attempts over the past 48 hours.
+ - Related content and links to troubleshooting tools may be provided.
+ - Read through the results to identify any actions that you can take.
+ - Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
+
+ ![Screenshot of the diagnostic Results for a scenario.](media/howto-use-sign-in-diagnostics/diagnostic-result-mfa-proofup.png)
+
+- Provide feedback on the results to help improve the feature.
+
+## Next steps
+
+- [Sign in diagnostics for Azure AD scenarios](concept-sign-in-diagnostics-scenarios.md)
+- [Learn about flagged sign-ins](overview-flagged-sign-ins.md)
+- [Troubleshoot sign-in errors](howto-troubleshoot-sign-in-errors.md)
active-directory Overview Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-sign-in-diagnostics.md
-- Title: What is the sign-in diagnostic for Azure Active Directory?
-description: Provides a general overview of the sign-in diagnostic in Azure Active Directory.
------- Previously updated : 11/01/2022---
-# Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.
---
-# What is the sign-in diagnostic in Azure AD?
-
-Determining the reason for a failed sign-in can quickly become a challenging task. You need to analyze what happened during the sign-in attempt, and research the available recommendations to resolve the issue. Ideally, you want to resolve the issue without involving others such as Microsoft support. If you are in a situation like this, you can use the sign-in diagnostic in Azure AD, a tool that helps you investigating sign-ins in Azure AD.
-
-This article gives you an overview of what the diagnostic is and how you can use it to troubleshoot sign-in related errors.
--
-## How it works
-
-In Azure AD, sign-in attempts are controlled by:
--- **Who** - The user performing a sign-in attempt.-- **How** - How a sign-in attempt was performed.-
-For example, you can configure conditional access policies that enable administrators to configure all aspects of the tenant when they sign in from the corporate network. But the same user might be blocked when they sign into the same account from an untrusted network.
-
-Due to the greater flexibility of the system to respond to a sign-in attempt, you might end-up in scenarios where you need to troubleshoot sign-ins. The sign-in diagnostic is a tool that is designed to enable self-diagnosis of sign-in issues by:
--- Analyzing data from sign-in events. --- Displaying information about what happened. --- Providing recommendations to resolve problems. -
-To start and complete the diagnostic process, you need to:
-
-1. **Identify event** - Start the diagnostic and review the flagged events users are asking assistance for, or enter information about the sign-in event to be investigated.
-
-2. **Select event** - Select an event based on the information shared.
-
-3. **Take action** - Review diagnostic results and perform steps.
---
-### Identify event
-
-The diagnostic allows two methods to find events to investigate:
--- Sign-in failures users have [flagged for assistance](overview-flagged-sign-ins.md). -- Search for specific events by the user and other criteria. -
-Flagged sign-ins are automatically presented in a list of up to 100. You can run diagnostics on an event immediately by clicking it.
-
-You can search a specific event by selecting the search tab even when flagged sign-ins are present.
-When searching for specific events, you can filter based on the following options:
--- Name of the user --- Application --- Correlation ID or request ID --- Date and time ---
-### Select event
-
-For flagged sign-ins, or when a search has been done, Azure AD retrieves all matching sign-in events and presents them in an authentication summary list view.
--
-![Screenshot showing the authentication summary list.](./media/overview-sign-in-diagnostics/review-sign-ins.png)
-
-You can change the content displayed in the columns based on your preference. Examples are:
--- Risk details-- Conditional access status-- Location-- Resource ID-- User type-- Authentication details-
-### Take action
-
-For the selected sign-in event, you get a diagnostic result. Read through the results to identify action that you can take to fix the problem. These results add recommended steps and shed light on relevant information such as the related policies, sign-in details, and supportive documentation. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
--
-![Screenshot showing the diagnostic results.](./media/overview-sign-in-diagnostics/diagnostic-results.png)
---
-## How to access it
-
-To use the diagnostic, you must be signed into the tenant as a Global Administrator or a Global Reader.
-
-With the correct access level, you can find the diagnostic in various places:
-
-**Option A**: Diagnose and Solve Problems
-
-![Screenshot showing how to launch sign-in diagnostics from conditional access.](./media/overview-sign-in-diagnostics/troubleshoot-link.png)
--
-1. Open **Azure Active Directory AAD or Azure AD Conditional Access**.
-
-1. From the main menu, select **Diagnose & Solve Problems**.
-
-1. From the **Troubleshooters** section, select the **Troubleshoot** button from the sign-in diagnostic tile.
-
-
-**Option B**: Sign-in Events
-
-![Screenshot showing how to launch sign-in diagnostics from Azure AD.](./media/overview-sign-in-diagnostics/sign-in-logs-link.png)
----
-1. Open Azure Active Directory.
-
-2. On the main menu, in the **Monitoring** section, select **Sign-ins**.
-
-3. From the list of sign-ins, select a sign-in with a **Failure** status. You can filter your list by Status to make it easier to find failed sign-ins.
-
-4. The **Activity Details: Sign-ins** tab will open for the selected sign-in. Select the dotted icon to view more menu icons. Select the **Troubleshooting and support** tab.
-
-5. Select the link to **Launch the Sign-in Diagnostic**.
-
-
-
-**Option C**: Support Case
-
-The diagnostic can also be found when creating a support case to give you the opportunity to self-diagnose before resorting to submitting a case.
---
-## Next steps
--- [Sign in diagnostics for Azure AD scenarios](concept-sign-in-diagnostics-scenarios.md)
active-directory Cernercentral Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cernercentral-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* A Cerner Central tenant > [!NOTE]
-> Azure Active Directory integrates with Cerner Central using the [SCIM](http://www.simplecloud.info/) protocol.
+> Azure Active Directory integrates with Cerner Central using the SCIM protocol.
## Assigning users to Cerner Central
active-directory Linkedinelevate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinelevate-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* An administrator account in LinkedIn Elevate with access to the LinkedIn Account Center > [!NOTE]
-> Azure Active Directory integrates with LinkedIn Elevate using the [SCIM](http://www.simplecloud.info/) protocol.
+> Azure Active Directory integrates with LinkedIn Elevate using the SCIM protocol.
## Assigning users to LinkedIn Elevate
active-directory Linkedinsalesnavigator Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinsalesnavigator-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* An administrator account in LinkedIn Sales Navigator with access to the LinkedIn Account Center > [!NOTE]
-> Azure Active Directory integrates with LinkedIn Sales Navigator using the [SCIM](http://www.simplecloud.info/) protocol.
+> Azure Active Directory integrates with LinkedIn Sales Navigator using the SCIM protocol.
## Assigning users to LinkedIn Sales Navigator
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Expand the PVC by increasing the `spec.resources.requests.storage` field running
kubectl patch pvc pvc-azuredisk --type merge --patch '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}' ```
+> [!NOTE]
+> Shrinking persistent volumes is currently not supported. Trying to patch an existing PVC with a smaller size than the current one leads to the following error message:
+> `The persistentVolumeClaim "pvc-azuredisk" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value.`
+ The output of the command resembles the following example: ```output
aks Developer Best Practices Pod Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-pod-security.md
Work with your cluster operator to determine what security context settings you
To limit the risk of credentials being exposed in your application code, avoid the use of fixed or shared credentials. Credentials or keys shouldn't be included directly in your code. If these credentials are exposed, the application needs to be updated and redeployed. A better approach is to give pods their own identity and way to authenticate themselves, or automatically retrieve credentials from a digital vault.
-#### Use an Azure AD workload identity (preview)
+#### Use an Azure AD workload identity
A workload identity is an identity used by an application running on a pod that can authenticate itself against other Azure services that support it, such as Storage or SQL. It integrates with the capabilities native to Kubernetes to federate with external identity providers. In this security model, the AKS cluster acts as token issuer, Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library using the [Azure SDK][azure-sdk-download] or the [Microsoft Authentication Library][microsoft-authentication-library] (MSAL).
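
The following is a minimal sketch of setting up that federation with the Azure CLI. The identity, namespace, and service account names are placeholders, and it assumes the AKS cluster has the OIDC issuer and workload identity features enabled.

```azurecli-interactive
# Get the cluster's OIDC issuer URL (assumes the OIDC issuer is enabled on the cluster).
AKS_OIDC_ISSUER=$(az aks show --name myAKSCluster --resource-group myResourceGroup \
  --query "oidcIssuerProfile.issuerUrl" --output tsv)

# Create a user-assigned managed identity for the workload.
az identity create --name myWorkloadIdentity --resource-group myResourceGroup

# Federate the identity with the Kubernetes service account the pod runs under.
az identity federated-credential create \
  --name myFederatedCredential \
  --identity-name myWorkloadIdentity \
  --resource-group myResourceGroup \
  --issuer "$AKS_OIDC_ISSUER" \
  --subject "system:serviceaccount:my-namespace:my-service-account"
```

The pod then references that service account and uses the Azure Identity client library to exchange its projected service account token for an Azure AD token, as described above.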
This article focused on how to secure your pods. To implement some of these area
[apparmor-seccomp]: operator-best-practices-cluster-security.md#secure-container-access-to-resources [microsoft-authentication-library]: ../active-directory/develop/msal-overview.md [workload-identity-overview]: workload-identity-overview.md
-[aks-keyvault-csi-driver]: csi-secrets-store-driver.md
+[aks-keyvault-csi-driver]: csi-secrets-store-driver.md
aks Network Observability Byo Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-byo-cli.md
+
+ Title: "Setup of Network Observability for Azure Kubernetes Service (AKS) - BYO Prometheus and Grafana"
+description: Get started with AKS Network Observability for your AKS cluster using BYO Prometheus and Grafana.
+++++ Last updated : 06/20/2023+++
+# Setup of Network Observability for Azure Kubernetes Service (AKS) - BYO Prometheus and Grafana
+
+AKS Network Observability is used to collect the network traffic data of your AKS cluster. Network Observability enables a centralized platform for monitoring application and network health. Prometheus collects AKS Network Observability metrics, and Grafana visualizes them. Both Cilium and non-Cilium data planes are supported. In this article, learn how to enable the Network Observability add-on and use BYO Prometheus and Grafana to visualize the scraped metrics.
+
+For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Installations of BYO Prometheus and Grafana.
++
+### Install the `aks-preview` Azure CLI extension
++
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `NetworkObservabilityPreview` feature flag
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview"
+```
+
+Use [az feature show](/cli/azure/feature#az-feature-show) to check the registration status of the feature flag:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview"
+```
+
+Wait for the feature to say **Registered** before proceeding with the article.
+
+```output
+{
+ "id": "/subscriptions/23250d6d-28f0-41dd-9776-61fc80805b6e/providers/Microsoft.Features/providers/Microsoft.ContainerService/features/NetworkObservabilityPreview",
+ "name": "Microsoft.ContainerService/NetworkObservabilityPreview",
+ "properties": {
+ "state": "Registering"
+ },
+ "type": "Microsoft.Features/providers/features"
+}
+```
+When the feature is registered, refresh the registration of the Microsoft.ContainerService resource provider with [az provider register](/cli/azure/provider#az-provider-register):
+
+```azurecli-interactive
+az provider register -n Microsoft.ContainerService
+```
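+
+If you script this setup, the following is a minimal sketch (assuming a bash shell) that polls the feature state and then refreshes the resource provider, instead of rerunning the commands manually:
+
+```azurecli-interactive
+# Poll until the feature flag reports "Registered", then refresh the resource provider.
+while [ "$(az feature show --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview" --query properties.state --output tsv)" != "Registered" ]; do
+  echo "Waiting for feature registration..."
+  sleep 30
+done
+az provider register -n Microsoft.ContainerService
+```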
+
+## Create a resource group
+
+A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group with [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **myResourceGroup** in the **eastus** location:
+
+```azurecli-interactive
+az group create \
+ --name myResourceGroup \
+ --location eastus
+```
+
+## Create AKS cluster
+
+Create an AKS cluster with [az aks create](/cli/azure/aks#az-aks-create) command. The following example creates an AKS cluster named **myAKSCluster** in the **myResourceGroup** resource group:
+
+# [**Non-Cilium**](#tab/non-cilium)
+
+Non-Cilium clusters support the enablement of Network Observability on an existing cluster or during the creation of a new cluster.
+
+## New cluster
+
+Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create an AKS cluster with Network Observability and non-Cilium.
+
+```azurecli-interactive
+az aks create \
+ --name myAKSCluster \
+ --resource-group myResourceGroup \
+ --location eastus \
+ --generate-ssh-keys \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --pod-cidr 192.168.0.0/16 \
+ --enable-network-observability
+```
+
+## Existing cluster
+
+Use [az aks update](/cli/azure/aks#az-aks-update) to enable Network Observability on an existing cluster.
+
+```azurecli-interactive
+az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-network-observability
+```
+
+# [**Cilium**](#tab/cilium)
+
+Use the following example to create an AKS cluster with Network Observability and Cilium.
+
+```azurecli-interactive
+az aks create \
+ --name myAKSCluster \
+ --resource-group myResourceGroup \
+ --generate-ssh-keys \
+ --location eastus \
+ --max-pods 250 \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --network-dataplane cilium \
+ --node-count 2 \
+ --pod-cidr 192.168.0.0/16
+```
+++
+## Get cluster credentials
+
+```azurecli-interactive
+az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
+```
+
+## Enable visualization on Grafana
+
+Use the following example to configure scrape jobs on Prometheus and enable visualization on Grafana for your AKS cluster.
++
+# [**Non-Cilium**](#tab/non-cilium)
+
+> [!NOTE]
+> The following section requires installations of Prometheus and Grafana.
+
+1. Add the following scrape job to your existing Prometheus configuration and restart your Prometheus server:
+
+ ```yml
+ scrape_configs:
+ - job_name: "network-obs-pods"
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_container_name]
+ action: keep
+ regex: kappie(.*)
+ - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
+ separator: ":"
+ regex: ([^:]+)(?::\d+)?
+ target_label: __address__
+ replacement: ${1}:${2}
+ action: replace
+ - source_labels: [__meta_kubernetes_pod_node_name]
+ action: replace
+ target_label: instance
+ metric_relabel_configs:
+ - source_labels: [__name__]
+ action: keep
+ regex: (.*)
+ ```
+
+1. In **Targets** of Prometheus, verify the **network-obs-pods** are present.
+
+1. Sign in to Grafana and import Network Observability dashboard with ID [18814](https://grafana.com/grafana/dashboards/18814/).
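+
+To confirm from the command line that Prometheus picked up the **network-obs-pods** job from the previous step, you can query the Prometheus targets API. This is a sketch, assuming your Prometheus server is reachable through a port forward on port 9090; the namespace and service name are placeholders for your own installation.
+
+```azurecli-interactive
+# Forward the Prometheus server port, then list active targets and look for the network-obs-pods job.
+kubectl port-forward --namespace prometheus svc/prometheus-server 9090:9090 &
+curl -s "http://localhost:9090/api/v1/targets?state=active" | grep network-obs-pods
+```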
+
+# [**Cilium**](#tab/cilium)
+
+> [!NOTE]
+> The following section requires installations of Prometheus and Grafana.
+
+1. Add the following scrape job to your existing Prometheus configuration and restart your Prometheus server:
+
+ ```yml
+ scrape_configs:
+ - job_name: 'kubernetes-pods'
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
+ action: keep
+ regex: true
+ - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
+ action: replace
+ regex: (.+):(?:\d+);(\d+)
+ replacement: ${1}:${2}
+ target_label: __address__
+ ```
+
+1. In **Targets** of Prometheus, verify the **kubernetes-pods** are present.
+
+1. Sign in to Grafana and import the dashboard with ID [16611-cilium-metrics](https://grafana.com/grafana/dashboards/16611-cilium-metrics/).
+++
+## Clean up resources
+
+If you're not going to continue to use this application, delete the AKS cluster and the other resources created in this article with the following example:
+
+```azurecli-interactive
+ az group delete \
+ --name myResourceGroup
+```
+
+## Next steps
+
+In this how-to article, you learned how to install and enable AKS Network Observability for your AKS cluster.
+
+- For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).
+
+- To create an AKS cluster with Network Observability and managed Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) Azure managed Prometheus and Grafana](network-observability-managed-cli.md).
+
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
+
+ Title: "Setup of Network Observability for Azure Kubernetes Service (AKS) - Azure managed Prometheus and Grafana"
+description: Get started with AKS Network Observability for your AKS cluster using Azure managed Prometheus and Grafana.
+++++ Last updated : 06/20/2023+++
+# Setup of Network Observability for Azure Kubernetes Service (AKS) - Azure managed Prometheus and Grafana
+
+AKS Network Observability is used to collect the network traffic data of your AKS cluster. Network Observability enables a centralized platform for monitoring application and network health. Prometheus collects AKS Network Observability metrics, and Grafana visualizes them. Both Cilium and non-Cilium data planes are supported. In this article, learn how to enable the Network Observability add-on and use Azure managed Prometheus and Grafana to visualize the scraped metrics.
+
+For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++
+### Install the `aks-preview` Azure CLI extension
++
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `NetworkObservabilityPreview` feature flag
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview"
+```
+
+Use [az feature show](/cli/azure/feature#az-feature-show) to check the registration status of the feature flag:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview"
+```
+
+Wait for the feature to say **Registered** before proceeding with the article.
+
+```output
+{
+ "id": "/subscriptions/23250d6d-28f0-41dd-9776-61fc80805b6e/providers/Microsoft.Features/providers/Microsoft.ContainerService/features/NetworkObservabilityPreview",
+ "name": "Microsoft.ContainerService/NetworkObservabilityPreview",
+ "properties": {
+ "state": "Registering"
+ },
+ "type": "Microsoft.Features/providers/features"
+}
+```
+When the feature is registered, refresh the registration of the Microsoft.ContainerService resource provider with [az provider register](/cli/azure/provider#az-provider-register):
+
+```azurecli-interactive
+az provider register -n Microsoft.ContainerService
+```
+
+## Create a resource group
+
+A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group with [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **myResourceGroup** in the **eastus** location:
+
+```azurecli-interactive
+az group create \
+ --name myResourceGroup \
+ --location eastus
+```
+
+## Create AKS cluster
+
+Create an AKS cluster with [az aks create](/cli/azure/aks#az-aks-create). The following example creates an AKS cluster named **myAKSCluster** in the **myResourceGroup** resource group:
+
+# [**Non-Cilium**](#tab/non-cilium)
+
+Non-Cilium clusters support the enablement of Network Observability on an existing cluster or during the creation of a new cluster.
+
+Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create an AKS cluster with Network Observability and non-Cilium.
+
+## New cluster
+
+```azurecli-interactive
+az aks create \
+ --name myAKSCluster \
+ --resource-group myResourceGroup \
+ --location eastus \
+ --generate-ssh-keys \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --pod-cidr 192.168.0.0/16 \
+ --enable-network-observability
+```
+
+## Existing cluster
+
+Use [az aks update](/cli/azure/aks#az-aks-update) to enable Network Observability for an existing cluster.
+
+```azurecli-interactive
+az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-network-observability
+```
+
+# [**Cilium**](#tab/cilium)
+
+Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create an AKS cluster with Network Observability and Cilium.
+
+```azurecli-interactive
+az aks create \
+ --name myAKSCluster \
+ --resource-group myResourceGroup \
+ --generate-ssh-keys \
+ --location eastus \
+ --max-pods 250 \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --network-dataplane cilium \
+ --node-count 2 \
+ --pod-cidr 192.168.0.0/16
+```
+++
+## Azure managed Prometheus and Grafana
+
+Use the following example to install and enable Prometheus and Grafana for your AKS cluster.
+
+### Create Azure Monitor resource
+
+```azurecli-interactive
+az resource create \
+ --resource-group myResourceGroup \
+ --namespace microsoft.monitor \
+ --resource-type accounts \
+ --name myAzureMonitor \
+ --location eastus \
+ --properties '{}'
+```
+
+### Create Grafana instance
+
+Use [az grafana create](/cli/azure/grafana#az-grafana-create) to create a Grafana instance. The name of the Grafana instance must be unique. Replace **myGrafana** with a unique name for your Grafana instance.
+
+```azurecli-interactive
+az grafana create \
+ --name myGrafana \
+ --resource-group myResourceGroup
+```
+
+### Place the Grafana and Azure Monitor resource IDs in variables
+
+Use [az grafana show](/cli/azure/grafana#az-grafana-show) to place the Grafana resource ID in a variable. Use [az resource show](/cli/azure/resource#az-resource-show) to place the Azure Monitor resource ID in a variable. Replace **myGrafana** with the name of your Grafana instance.
+
+```azurecli-interactive
+grafanaId=$(az grafana show \
+ --name myGrafana \
+ --resource-group myResourceGroup \
+ --query id \
+ --output tsv)
+
+azuremonitorId=$(az resource show \
+ --resource-group myResourceGroup \
+ --name myAzureMonitor \
+ --resource-type "Microsoft.Monitor/accounts" \
+ --query id \
+ --output tsv)
+```
+
+### Link Azure Monitor and Grafana to AKS cluster
+
+Use [az aks update](/cli/azure/aks#az-aks-update) to link the Azure Monitor and Grafana resources to your AKS cluster.
+
+```azurecli-interactive
+az aks update \
+ --name myAKSCluster \
+ --resource-group myResourceGroup \
+ --enable-azuremonitormetrics \
+ --azure-monitor-workspace-resource-id $azuremonitorId \
+ --grafana-resource-id $grafanaId
+```
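+
+To confirm the link succeeded, you can inspect the cluster's monitoring profile. A quick check, assuming the `azureMonitorProfile` property path in the `az aks show` output:
+
+```azurecli-interactive
+az aks show \
+  --name myAKSCluster \
+  --resource-group myResourceGroup \
+  --query "azureMonitorProfile.metrics.enabled"
+```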
+++
+## Get cluster credentials
+
+```azurecli-interactive
+az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
+```
++
+## Enable visualization on Grafana
+
+# [**Non-Cilium**](#tab/non-cilium)
+
+> [!NOTE]
+> The following section requires deployments of Azure managed Prometheus and Grafana.
+
+1. Use the following example to verify the Azure Monitor pods are running.
+
+ ```azurecli-interactive
+ kubectl get po -owide -n kube-system | grep ama-
+ ```
+
+ ```output
+ ama-metrics-5bc6c6d948-zkgc9 2/2 Running 0 (21h ago) 26h
+ ama-metrics-ksm-556d86b5dc-2ndkv 1/1 Running 0 (26h ago) 26h
+ ama-metrics-node-lbwcj 2/2 Running 0 (21h ago) 26h
+ ama-metrics-node-rzkzn 2/2 Running 0 (21h ago) 26h
+ ama-metrics-win-node-gqnkw 2/2 Running 0 (26h ago) 26h
+ ama-metrics-win-node-tkrm8 2/2 Running 0 (26h ago) 26h
+ ```
+
+1. Use the ID [18814](https://grafana.com/grafana/dashboards/18814/) to import the dashboard from Grafana's public dashboard repo.
+
+1. Verify the Grafana dashboard is visible.
+
+# [**Cilium**](#tab/cilium)
+
+> [!NOTE]
+> The following section requires deployments of Azure managed Prometheus and Grafana.
+
+1. Use the following example to create a YAML file named **`ama-cilium-configmap.yaml`**. Copy the code in the example into the file you created.
+
+ ```yaml
+ scrape_configs:
+ - job_name: "cilium-pods"
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_container_name]
+ action: keep
+ regex: cilium(.*)
+ - source_labels:
+ [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
+ separator: ":"
+ regex: ([^:]+)(?::\d+)?
+ target_label: __address__
+ replacement: ${1}:${2}
+ action: replace
+ - source_labels: [__meta_kubernetes_pod_node_name]
+ action: replace
+ target_label: instance
+ - source_labels: [__meta_kubernetes_pod_label_k8s_app]
+ action: keep
+ regex: cilium
+ - source_labels: [__meta_kubernetes_pod_name]
+ action: replace
+ regex: (.*)
+ target_label: pod
+ metric_relabel_configs:
+ - source_labels: [__name__]
+ action: keep
+ regex: (.*)
+ ```
+
+1. To create the `configmap`, use the following example:
+
+ ```azurecli-interactive
+ kubectl create configmap ama-metrics-prometheus-config-node \
+ --from-file=./ama-cilium-configmap.yaml \
+ --namespace kube-system
+ ```
+
+1. Once the Azure Monitor pods have been deployed on the cluster, port forward to the `ama` pod to verify the pods are being scraped. Use the following example to port forward to the pod:
+
+ ```azurecli-interactive
+ kubectl port-forward -n kube-system $(kubectl get po -n kube-system -l dsName=ama-metrics-node -oname | head -n 1) 9090:9090
+ ```
+
+1. In **Targets** of Prometheus, verify the **cilium-pods** are present.
+
+1. Sign in to Grafana and import dashboards with the following ID [16611-cilium-metrics](https://grafana.com/grafana/dashboards/16611-cilium-metrics/).
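+
+With the port forward from the earlier step still active, you can also check for the **cilium-pods** job from the command line. A short sketch, assuming the forwarded pod serves the standard Prometheus API on port 9090:
+
+```azurecli-interactive
+curl -s "http://localhost:9090/api/v1/targets?state=active" | grep cilium-pods
+```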
+++
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the AKS cluster and the other resources created in this article with the following example:
+
+```azurecli-interactive
+ az group delete \
+ --name myResourceGroup
+```
+
+## Next steps
+
+In this how-to article, you learned how to install and enable AKS Network Observability for your AKS cluster.
+
+- For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).
+
+- To create an AKS cluster with Network Observability and BYO Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) BYO Prometheus and Grafana](network-observability-byo-cli.md).
aks Network Observability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-overview.md
+
+ Title: What is Azure Kubernetes Service (AKS) Network Observability? (Preview)
+description: An overview of network observability for Azure Kubernetes Service (AKS).
+++++ Last updated : 06/20/2023++
+# What is Azure Kubernetes Service (AKS) Network Observability? (Preview)
+
+Kubernetes is a powerful tool for managing containerized applications. As containerized environments grow in complexity, it can be difficult to identify and troubleshoot networking issues in a Kubernetes cluster.
+
+Network observability is an important part of maintaining a healthy and performant Kubernetes cluster. By collecting and analyzing data about network traffic, you can gain insights into how your cluster is operating and identify potential problems before they cause outages or performance degradation.
++
+## Overview of Network Observability add-on in AKS
+
+> [!IMPORTANT]
+> AKS Network Observability is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The Network Observability add-on operates seamlessly on both Cilium and non-Cilium data planes. It empowers customers with enterprise-grade capabilities for DevOps and SecOps. The solution offers cluster network administrators, cluster security administrators, and DevOps engineers a centralized way to monitor network issues in their clusters.
+
+When the Network Observability add-on is enabled, it allows for the collection and conversion of useful metrics into Prometheus format, which can then be visualized in Grafana. There are two options available for using Prometheus and Grafana in this context: Azure managed [Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview) and [Grafana](/azure/azure-monitor/visualize/grafana-plugin) or BYO Prometheus and Grafana.
+
+* **Azure managed Prometheus and Grafana:** This option involves using a managed service provided by Azure. The managed service takes care of the infrastructure and maintenance of Prometheus and Grafana, allowing you to focus on configuring and visualizing your metrics. This option is convenient if you prefer not to manage the underlying infrastructure.
+
+* **BYO Prometheus and Grafana:** Alternatively, you can choose to set up your own Prometheus and Grafana instances. In this case, you're responsible for provisioning and managing the infrastructure required to run Prometheus and Grafana. Install and configure Prometheus to scrape the metrics generated by the Network Observability add-on and store them. Similarly, Grafana needs to be set up to connect to Prometheus and visualize the collected data.
+
+## Metrics
+
+The Network Observability add-on currently supports only node-level metrics on both Linux and Windows platforms. The following table outlines the metrics generated by the Network Observability add-on.
+
+| Metric Name | Description | Labels | Linux | Windows |
+|-|-|--|-||
+| **kappie_forward_count** | Total forwarded packet count | Direction, NodeName, Cluster | Yes | Yes |
+| **kappie_forward_bytes** | Total forwarded byte count | Direction, NodeName, Cluster | Yes | Yes |
+| **kappie_drop_count** | Total dropped packet count | Reason, Direction, NodeName, Cluster | Yes | Yes |
+| **kappie_drop_bytes** | Total dropped byte count | Reason, Direction, NodeName, Cluster | Yes | Yes |
+| **kappie_tcp_state** | TCP active socket count by TCP state. | State, NodeName, Cluster | Yes | Yes |
+| **kappie_tcp_connection_remote** | TCP active socket count by remote address. | Address, Port, NodeName, Cluster | Yes | No |
+| **kappie_tcp_connection_stats** | TCP connection statistics. (ex: Delayed ACKs, TCPKeepAlive, TCPSackFailures) | Statistic, NodeName, Cluster | Yes | Yes |
+| **kappie_tcp_flag_counters** | TCP packets count by flag. | Flag, NodeName, Cluster | Yes | Yes |
+| **kappie_ip_connection_stats** | IP connection statistics. | Statistic, NodeName, Cluster | Yes | No |
+| **kappie_udp_connection_stats** | UDP connection statistics. | Statistic, NodeName, Cluster | Yes | No |
+| **kappie_udp_active_sockets** | UDP active socket count | NodeName, Cluster | Yes | No |
+| **kappie_interface_stats** | Interface statistics. | InterfaceName, Statistic, NodeName, Cluster | Yes | Yes |
+
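+As an example of how these metrics can be consumed, the following sketch queries the drop counters through the standard Prometheus HTTP API. It assumes your Prometheus server is reachable at `localhost:9090`, for instance through a port forward.
+
+```azurecli-interactive
+# Query the per-node dropped packet rate over the last five minutes.
+curl -s --get "http://localhost:9090/api/v1/query" \
+  --data-urlencode "query=rate(kappie_drop_count[5m])"
+```
+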
+## Limitations
+
+* Pod level metrics aren't supported.
+
+* The deployment of the Network Observability add-on on Mariner 1.0 is currently unsupported.
+
+## Scale
+
+Certain scale limitations apply when you use Azure managed Prometheus and Grafana. For more information, see [Scrape Prometheus metrics at scale in Azure Monitor](/azure/azure-monitor/essentials/prometheus-metrics-scrape-scale)
+
+## Next steps
+
+- For more information about Azure Kubernetes Service (AKS), see [What is Azure Kubernetes Service (AKS)?](/azure/aks/intro-kubernetes).
+
+- To create an AKS cluster with Network Observability and Azure managed Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) Azure managed Prometheus and Grafana](network-observability-managed-cli.md).
+
+- To create an AKS cluster with Network Observability and BYO Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) BYO Prometheus and Grafana](network-observability-byo-cli.md).
+
aks Open Service Mesh Uninstall Add On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-uninstall-add-on.md
Title: Uninstall the Open Service Mesh (OSM) add-on
-description: Deploy Open Service Mesh on Azure Kubernetes Service (AKS) using Azure CLI
+ Title: Uninstall the Open Service Mesh (OSM) add-on from your Azure Kubernetes Service (AKS) cluster
+description: How to uninstall the Open Service Mesh on Azure Kubernetes Service (AKS) using Azure CLI.
Previously updated : 11/10/2021- Last updated : 06/19/2023 # Uninstall the Open Service Mesh (OSM) add-on from your Azure Kubernetes Service (AKS) cluster
This article shows you how to uninstall the OSM add-on and related resources fro
## Disable the OSM add-on from your cluster
-Disable the OSM add-on in your cluster using `az aks disable-addon`. For example:
+* Disable the OSM add-on from your cluster using the [`az aks disable-addon`][az-aks-disable-addon] command and the `--addons` parameter.
-```azurecli-interactive
-az aks disable-addons \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --addons open-service-mesh
-```
+ ```azurecli-interactive
+ az aks disable-addons \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --addons open-service-mesh
+ ```
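+
+To confirm the add-on is disabled before removing the remaining resources, you can check the cluster's add-on profile. A quick check, assuming the `addonProfiles.openServiceMesh` property path in the `az aks show` output:
+
+```azurecli-interactive
+az aks show \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --query "addonProfiles.openServiceMesh.enabled"
+```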
-The above example removes the OSM add-on from the *myAKSCluster* in *myResourceGroup*.
+## Remove OSM resources
-## Remove additional OSM resources
+* Uninstall the remaining resources on the cluster using the `osm uninstall cluster-wide-resources` command.
-After the OSM add-on is disabled, use `osm uninstall cluster-wide-resources` to uninstall the remaining resource on the cluster. For example:
+ ```console
+ osm uninstall cluster-wide-resources
+ ```
-```console
-osm uninstall cluster-wide-resources
-```
+ > [!NOTE]
+ > For version 1.1, the command is `osm uninstall mesh --delete-cluster-wide-resources`
-> [!NOTE]
-> For version 1.1, the command is `osm uninstall mesh --delete-cluster-wide-resources`
+ > [!IMPORTANT]
+ > You must remove these additional resources after you disable the OSM add-on. Leaving these resources on your cluster may cause issues if you enable the OSM add-on again in the future.
-> [!IMPORTANT]
-> You must remove these additional resources after you disable the OSM add-on. Leaving these resources on your cluster may cause issues if you enable the OSM add-on again in the future.
+## Next steps
+
+Learn more about [Open Service Mesh][osm].
+
+<!-- LINKS - Internal -->
+[az-aks-disable-addon]: /cli/azure/aks#az_aks_disable_addons
+[osm]: ./open-service-mesh-about.md
aks Out Of Tree https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/out-of-tree.md
Title: Enable Cloud Controller Manager (preview)
-description: Learn how to enable the Out of Tree cloud provider (preview)
+ Title: Enable Cloud Controller Manager (preview) on your Azure Kubernetes Service (AKS) cluster
+description: Learn how to enable the Out of Tree cloud provider (preview) on your Azure Kubernetes Service (AKS) cluster.
Previously updated : 04/08/2022 Last updated : 06/19/2023
-# Enable Cloud Controller Manager (preview)
+# Enable Cloud Controller Manager (preview) on your Azure Kubernetes Service (AKS) cluster
As a cloud provider, Microsoft Azure works closely with the Kubernetes community to support our infrastructure on behalf of users.
-Previously, cloud provider integration with Kubernetes was "in-tree", where any changes to cloud specific features would follow the standard Kubernetes release cycle. When issues were fixed or enhancements were rolled out, they would need to be within the Kubernetes community's release cycle.
+Previously, cloud provider integration with Kubernetes was *in-tree*, where any changes to cloud specific features would follow the standard Kubernetes release cycle. When issues were fixed or enhancements were rolled out, they would need to be within the Kubernetes community's release cycle.
-The Kubernetes community is now adopting an *out-of-tree* model, where the cloud providers control their releases independently of the core Kubernetes release schedule through the [cloud-provider-azure][cloud-provider-azure] component. As part of this cloud-provider-azure component, we are also introducing a cloud-node-manager component, which is a component of the Kubernetes node lifecycle controller. This component is deployed by a DaemonSet in the *kube-system* namespace.
+The Kubernetes community is now adopting an ***out-of-tree*** model, where cloud providers control releases independently of the core Kubernetes release schedule through the [cloud-provider-azure][cloud-provider-azure] component. As part of this cloud-provider-azure component, we're also introducing a cloud-node-manager component, which is a component of the Kubernetes node lifecycle controller. A DaemonSet in the *kube-system* namespace deploys this component.
The Cloud Storage Interface (CSI) drivers are included by default in Kubernetes version 1.21 and higher. > [!NOTE]
-> When you enable the Cloud Controller Manager (preview) on your AKS cluster, it also enables the out of tree CSI drivers.
-
-The Cloud Controller Manager (preview) is the default controller from Kubernetes 1.22, supported by AKS. If your cluster is running a version earlier than 1.22, perform the following steps.
+> When you enable the Cloud Controller Manager (preview) on your AKS cluster, it also enables the out-of-tree CSI drivers.
## Prerequisites You must have the following resources installed:
-* The Azure CLI
-* Kubernetes version 1.20.x and higher
+* The Azure CLI. For more information, see [Install the Azure CLI][install-azure-cli].
+* Kubernetes version 1.20.x or higher.
## Install the aks-preview Azure CLI extension [!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-To install the aks-preview extension, run the following command:
+1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command.
-```azurecli
-az extension add --name aks-preview
-```
+ ```azurecli
+ az extension add --name aks-preview
+ ```
-Run the following command to update to the latest version of the extension released:
+2. Update to the latest version of the extension released using the [`az extension update`][az-extension-update] command.
-```azurecli
-az extension update --name aks-preview
-```
+ ```azurecli
+ az extension update --name aks-preview
+ ```
## Register the 'EnableCloudControllerManager' feature flag
-Register the `EnableCloudControllerManager` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+1. Register the `EnableCloudControllerManager` feature flag using the [`az feature register`][az-feature-register] command.
+
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager"
+ ```
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager"
-```
+ It takes a few minutes for the status to show *Registered*.
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager"
-```
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager"
+ ```
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
## Create a new AKS cluster with Cloud Controller Manager
-To create a cluster using the Cloud Controller Manager, run the following command. Include the parameter `EnableCloudControllerManager=True` as a customer header to the Azure API using the Azure CLI.
+* Create a new AKS cluster with Cloud Controller Manager using the [`az aks create`][az-aks-create] command and pass the parameter `EnableCloudControllerManager=True` through `--aks-custom-headers`.
-```azurecli-interactive
-az aks create -n aks -g myResourceGroup --aks-custom-headers EnableCloudControllerManager=True
-```
+ ```azurecli-interactive
+ az aks create -n aks -g myResourceGroup --aks-custom-headers EnableCloudControllerManager=True
+ ```
## Upgrade an AKS cluster to Cloud Controller Manager on an existing cluster
-To upgrade a cluster to use the Cloud Controller Manager, run the following command. Include the parameter `EnableCloudControllerManager=True` as a customer header to the Azure API using the Azure CLI.
+* Upgrade an existing AKS cluster to Cloud Controller Manager using the [`az aks upgrade`][az-aks-upgrade] command and pass the parameter `EnableCloudControllerManager=True` through `--aks-custom-headers`.
-```azurecli-interactive
-az aks upgrade -n aks -g myResourceGroup -k <version> --aks-custom-headers EnableCloudControllerManager=True
-```
+ ```azurecli-interactive
+ az aks upgrade -n aks -g myResourceGroup -k <version> --aks-custom-headers EnableCloudControllerManager=True
+ ```
## Verify component deployment
-To view this component, run the following Azure CLI command:
+* Verify the component deployment using the following `kubectl get po` command.
-```azurecli-interactive
-kubectl get po -n kube-system | grep cloud-node-manager
-```
+ ```azurecli-interactive
+ kubectl get po -n kube-system | grep cloud-node-manager
+ ```
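    The output should look similar to the following example. Pod names, counts, and ages vary by cluster; the values shown here are illustrative, with one `cloud-node-manager` pod per node:

    ```output
    cloud-node-manager-7sclj   1/1     Running   0          5m
    cloud-node-manager-dx5hq   1/1     Running   0          5m
    cloud-node-manager-nnkrq   1/1     Running   0          5m
    ```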
## Next steps

-- For more information on CSI drivers, and the default behavior for Kubernetes versions higher than 1.21, review [documentation][csi-docs].
-
-- You can find more information about the Kubernetes community direction regarding out of tree providers on the [community blog post][community-blog].
+* For more information on CSI drivers, and the default behavior for Kubernetes versions higher than 1.21, review the [CSI documentation][csi-docs].
+* For more information on the Kubernetes community direction regarding out-of-tree providers, see the [community blog post][community-blog].
<!-- LINKS - internal -->
[az-provider-register]: /cli/azure/provider#az-provider-register
[az-feature-register]: /cli/azure/feature#az-feature-register
[az-feature-show]: /cli/azure/feature#az-feature-show
[csi-docs]: csi-storage-drivers.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-upgrade]: /cli/azure/aks#az-aks-upgrade
<!-- LINKS - External -->
[community-blog]: https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
description: Learn how to use Planned Maintenance to schedule and control cluste
Last updated 01/17/2023--++

# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview)
aks Reduce Latency Ppg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/reduce-latency-ppg.md
Title: Use proximity placement groups to reduce latency for Azure Kubernetes Service (AKS) clusters
-description: Learn how to use proximity placement groups to reduce latency for your AKS cluster workloads.
+description: Learn how to use proximity placement groups to reduce latency for your Azure Kubernetes Service (AKS) cluster workloads.
Previously updated : 10/19/2020 Last updated : 06/19/2023
-# Reduce latency with proximity placement groups
+# Use proximity placement groups to reduce latency for Azure Kubernetes Service (AKS) clusters
-> [!Note]
-> When using proximity placement groups on AKS, colocation only applies to the agent nodes. Node to node and the corresponding hosted pod to pod latency is improved. The colocation does not affect the placement of a cluster's control plane.
+> [!NOTE]
+> When using proximity placement groups on AKS, colocation only applies to the agent nodes. Node to node and the corresponding hosted pod to pod latency is improved. The colocation doesn't affect the placement of a cluster's control plane.
-When deploying your application in Azure, spreading Virtual Machine (VM) instances across regions or availability zones creates network latency, which may impact the overall performance of your application. A proximity placement group is a logical grouping used to make sure Azure compute resources are physically located close to each other. Some applications like gaming, engineering simulations, and high-frequency trading (HFT) require low latency and tasks that complete quickly. For high-performance computing (HPC) scenarios such as these, consider using [proximity placement groups](../virtual-machines/co-location.md#proximity-placement-groups) (PPG) for your cluster's node pools.
+When deploying your application in Azure, spreading virtual machine (VM) instances across regions or availability zones creates network latency, which may impact the overall performance of your application. A proximity placement group is a logical grouping used to make sure Azure compute resources are physically located close to one another. Some applications, such as gaming, engineering simulations, and high-frequency trading (HFT), require low latency and tasks that can complete quickly. For similar high-performance computing (HPC) scenarios, consider using [proximity placement groups](../virtual-machines/co-location.md#proximity-placement-groups) (PPG) for your cluster's node pools.
## Before you begin
-This article requires that you are running the Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This article requires Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
### Limitations
-* A proximity placement group can map to at most one availability zone.
+* A proximity placement group can map to only *one* availability zone.
* A node pool must use Virtual Machine Scale Sets to associate a proximity placement group.
* A node pool can associate a proximity placement group at node pool create time only.

## Node pools and proximity placement groups
-The first resource you deploy with a proximity placement group attaches to a specific data center. Additional resources deployed with the same proximity placement group are colocated in the same data center. Once all resources using the proximity placement group have been stopped (deallocated) or deleted, it's no longer attached.
+The first resource you deploy with a proximity placement group attaches to a specific data center. Any extra resources you deploy with the same proximity placement group are colocated in the same data center. Once all resources using the proximity placement group are stopped (deallocated) or deleted, it's no longer attached.
-* Many node pools can be associated with a single proximity placement group.
-* A node pool may only be associated with a single proximity placement group.
+* You can associate multiple node pools with a single proximity placement group.
+* You can only associate a node pool with a single proximity placement group.
### Configure proximity placement groups with availability zones

> [!NOTE]
-> While proximity placement groups require a node pool to use at most one availability zone, the [baseline Azure VM SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/) is still in effect for VMs in a single zone.
+> While proximity placement groups require a node pool to use only *one* availability zone, the [baseline Azure VM SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/) is still in effect for VMs in a single zone.
-Proximity placement groups are a node pool concept and associated with each individual node pool. Using a PPG resource has no impact on AKS control plane availability. This can impact how a cluster should be designed with zones. To ensure a cluster is spread across multiple zones the following design is recommended.
+Proximity placement groups are a node pool concept and associated with each individual node pool. Using a PPG resource has no impact on AKS control plane availability. This behavior can affect how you should design your cluster with zones. To ensure a cluster is spread across multiple zones, we recommend using the following design:
-* Provision a cluster with the first system pool using 3 zones and no proximity placement group associated. This ensures the system pods land in a dedicated node pool which will spread across multiple zones.
-* Add additional user node pools with a unique zone and proximity placement group associated to each pool. An example is nodepool1 in zone 1 and PPG1, nodepool2 in zone 2 and PPG2, nodepool3 in zone 3 with PPG3. This ensures at a cluster level, nodes are spread across multiple zones and each individual node pool is colocated in the designated zone with a dedicated PPG resource.
+* Provision a cluster with the first system pool using *three* zones and no proximity placement group associated to ensure the system pods land in a dedicated node pool, which spreads across multiple zones.
+* Add extra user node pools with a unique zone and proximity placement group associated to each pool. An example is *nodepool1* in zone one with PPG1, *nodepool2* in zone two with PPG2, and *nodepool3* in zone three with PPG3. This configuration ensures that, at a cluster level, nodes are spread across multiple zones and each individual node pool is colocated in the designated zone with a dedicated PPG resource.
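A sketch of this layout with the Azure CLI is shown below. It assumes the cluster and three proximity placement groups already exist; the pool names and the *myPPG1ResourceID*-style values are placeholders:

```azurecli-interactive
# One user node pool per availability zone, each with its own PPG (illustrative names and IDs)
az aks nodepool add -g myResourceGroup --cluster-name myAKSCluster --name nodepool1 --zones 1 --ppg myPPG1ResourceID
az aks nodepool add -g myResourceGroup --cluster-name myAKSCluster --name nodepool2 --zones 2 --ppg myPPG2ResourceID
az aks nodepool add -g myResourceGroup --cluster-name myAKSCluster --name nodepool3 --zones 3 --ppg myPPG3ResourceID
```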
## Create a new AKS cluster with a proximity placement group
-The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *centralus* region. An AKS cluster named *myAKSCluster* is then created using the [az aks create][az-aks-create] command.
-
-Accelerated networking greatly improves networking performance of virtual machines. Ideally, use proximity placement groups in conjunction with accelerated networking. By default, AKS uses accelerated networking on [supported virtual machine instances](../virtual-network/accelerated-networking-overview.md?toc=/azure/virtual-machines/linux/toc.json#limitations-and-constraints), which include most Azure virtual machine with two or more vCPUs.
-
-Create a new AKS cluster with a proximity placement group associated to the first system node pool:
-
-```azurecli-interactive
-# Create an Azure resource group
-az group create --name myResourceGroup --location centralus
-```
-Run the following command, and store the ID that is returned:
-
-```azurecli-interactive
-# Create proximity placement group
-az ppg create -n myPPG -g myResourceGroup -l centralus -t standard
-```
-
-The command produces output, which includes the *id* value you need for upcoming CLI commands:
-
-```output
-{
- "availabilitySets": null,
- "colocationStatus": null,
- "id": "/subscriptions/yourSubscriptionID/resourceGroups/myResourceGroup/providers/Microsoft.Compute/proximityPlacementGroups/myPPG",
- "location": "centralus",
- "name": "myPPG",
- "proximityPlacementGroupType": "Standard",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "type": "Microsoft.Compute/proximityPlacementGroups",
- "virtualMachineScaleSets": null,
- "virtualMachines": null
-}
-```
-
-Use the proximity placement group resource ID for the *myPPGResourceID* value in the below command:
-
-```azurecli-interactive
-# Create an AKS cluster that uses a proximity placement group for the initial system node pool only. The PPG has no effect on the cluster control plane.
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --ppg myPPGResourceID
-```
+Accelerated networking greatly improves networking performance of virtual machines. Ideally, use proximity placement groups with accelerated networking. By default, AKS uses accelerated networking on [supported virtual machine instances](../virtual-network/accelerated-networking-overview.md?toc=/azure/virtual-machines/linux/toc.json#limitations-and-constraints), which include most Azure virtual machines with two or more vCPUs.
+
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location centralus
+ ```
+
+2. Create a proximity placement group using the [`az ppg create`][az-ppg-create] command. Make sure to note the ID value in the output.
+
+ ```azurecli-interactive
+ az ppg create -n myPPG -g myResourceGroup -l centralus -t standard
+ ```
+
+ The command produces an output similar to the following example output, which includes the *ID* value you need for upcoming CLI commands.
+
+ ```output
+ {
+ "availabilitySets": null,
+ "colocationStatus": null,
+ "id": "/subscriptions/yourSubscriptionID/resourceGroups/myResourceGroup/providers/Microsoft.Compute/proximityPlacementGroups/myPPG",
+ "location": "centralus",
+ "name": "myPPG",
+ "proximityPlacementGroupType": "Standard",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "type": "Microsoft.Compute/proximityPlacementGroups",
+ "virtualMachineScaleSets": null,
+ "virtualMachines": null
+ }
+ ```
+
+3. Create an AKS cluster using the [`az aks create`][az-aks-create] command and replace the *myPPGResourceID* value with your proximity placement group resource ID from the previous step.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --ppg myPPGResourceID
+ ```
## Add a proximity placement group to an existing cluster
-You can add a proximity placement group to an existing cluster by creating a new node pool. You can then optionally migrate existing workloads to the new node pool, and then delete the original node pool.
+You can add a proximity placement group to an existing cluster by creating a new node pool. You can then optionally migrate existing workloads to the new node pool and delete the original node pool.
-Use the same proximity placement group that you created earlier, and this will ensure agent nodes in both node pools in your AKS cluster are physically located in the same data center.
+Use the same proximity placement group that you created earlier to ensure agent nodes in both node pools in your AKS cluster are physically located in the same data center.
-Use the resource ID from the proximity placement group you created earlier, and add a new node pool with the [`az aks nodepool add`][az-aks-nodepool-add] command:
+* Create a new node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command and replace the *myPPGResourceID* value with your proximity placement group resource ID.
-```azurecli-interactive
-# Add a new node pool that uses a proximity placement group, use a --node-count = 1 for testing
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --node-count 1 \
- --ppg myPPGResourceID
-```
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 1 \
+ --ppg myPPGResourceID
+ ```
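    Optionally, confirm that the new node pool is associated with the proximity placement group (a sketch; the exact property name in the CLI output can vary by version, so the filter matches case-insensitively):

    ```azurecli-interactive
    az aks nodepool show \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name mynodepool | grep -i proximityplacementgroup
    ```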
## Clean up
-To delete the cluster, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:
+* Delete the Azure resource group along with its resources using the [`az group delete`][az-group-delete] command.
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
+ ```azurecli-interactive
+ az group delete --name myResourceGroup --yes --no-wait
+ ```
## Next steps
-* Learn more about [proximity placement groups][proximity-placement-groups].
+Learn more about [proximity placement groups][proximity-placement-groups].
<!-- LINKS - Internal -->
-[azure-ad-rbac]: azure-ad-rbac.md
-[aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
[azure-cli-install]: /cli/azure/install-azure-cli
-[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
-[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
[proximity-placement-groups]: ../virtual-machines/co-location.md#proximity-placement-groups
[az-aks-create]: /cli/azure/aks#az_aks_create
-[system-pool]: ./use-system-pools.md
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
-[az-aks-create]: /cli/azure/aks#az_aks_create
[az-group-create]: /cli/azure/group#az_group_create
[az-group-delete]: /cli/azure/group#az_group_delete
+[az-ppg-create]: /cli/azure/ppg#az_ppg_create
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Title: Supported Kubernetes versions in Azure Kubernetes Service (AKS).
description: Learn the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS). Last updated 11/21/2022--++
View the upcoming version releases on the AKS Kubernetes release calendar. To se
> [!NOTE]
> AKS follows 12 months of support for a generally available (GA) Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](./supported-kubernetes-versions.md#faq).
-For the past release history, see [Kubernetes history](https://en.wikipedia.org/wiki/Kubernetes#History).
+For the past release history, see [Kubernetes history](https://github.com/kubernetes/kubernetes/releases).
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|-|--|--|-|
aks Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-tags.md
Title: Use Azure tags in Azure Kubernetes Service (AKS)
description: Learn how to use Azure provider tags to track resources in Azure Kubernetes Service (AKS). Previously updated : 05/26/2022 Last updated : 06/16/2023

# Use Azure tags in Azure Kubernetes Service (AKS)
-With Azure Kubernetes Service (AKS), you can set Azure tags on an AKS cluster and its related resources by using Azure Resource Manager, through the Azure CLI. For some resources, you can also use Kubernetes manifests to set Azure tags. Azure tags are a useful tracking resource for certain business processes, such as *chargeback*.
+With Azure Kubernetes Service (AKS), you can set Azure tags on an AKS cluster and its related resources using Azure Resource Manager and the Azure CLI. You can also use Kubernetes manifests to set Azure tags for certain resources. Azure tags are a useful tracking resource for certain business processes, such as *chargeback*.
This article explains how to set Azure tags for AKS clusters and related resources.

## Before you begin
-It's a good idea to understand what happens when you set and update Azure tags with AKS clusters and their related resources. For example:
+Review the following information before you begin:
-* Tags set on an AKS cluster apply to all resources that are related to the cluster, but not the node pools. This operation overwrites the values of existing keys.
+* Tags set on an AKS cluster apply to all resources related to the cluster, but not the node pools. This operation overwrites the values of existing keys.
* Tags set on a node pool apply only to resources related to that node pool. This operation overwrites the values of existing keys. Resources outside that node pool, including resources for the rest of the cluster and other node pools, are unaffected.
-* Public IPs, files, and disks can have tags set by Kubernetes through a Kubernetes manifest. Tags set in this way will maintain the Kubernetes values, even if you update them later by using another method. When public IPs, files, or disks are removed through Kubernetes, any tags that are set by Kubernetes are removed. Tags on those resources that aren't tracked by Kubernetes remain unaffected.
+* Public IPs, files, and disks can have tags set by Kubernetes through a Kubernetes manifest. Tags set in this way maintain the Kubernetes values, even if you update them later using a different method. When you remove public IPs, files, or disks through Kubernetes, any tags set by Kubernetes are removed. The tags on those resources that Kubernetes doesn't track remain unaffected.
### Prerequisites
-* The Azure CLI version 2.0.59 or later, installed and configured.
-
- To find your version, run `az --version`. If you need to install it or update your version, see [Install Azure CLI][install-azure-cli].
-* Kubernetes version 1.20 or later, installed.
+* The Azure CLI version 2.0.59 or later. To find your version, run `az --version`. If you need to install it or update your version, see [Install Azure CLI][install-azure-cli].
+* Kubernetes version 1.20 or later.
### Limitations
-* Azure tags have keys that are case-insensitive for operations, such as when you're retrieving a tag by searching the key. In this case, a tag with the specified key will be updated or retrieved regardless of casing. Tag values are case-sensitive.
+* Azure tags have keys that are case-insensitive for operations, such as when you're retrieving a tag by searching the key. In this case, a tag with the specified key is updated or retrieved regardless of casing. Tag values are case-sensitive.
* In AKS, if multiple tags are set with identical keys but different casing, the tags are used in alphabetical order. For example, `{"Key1": "val1", "kEy1": "val2", "key1": "val3"}` results in `Key1` and `val1` being set.
-* For shared resources, tags aren't able to determine the split in resource usage on their own.
+* For shared resources, tags can't determine the split in resource usage on their own.
+
+## Azure tags and AKS clusters
+
+When you create or update an AKS cluster with the `--tags` parameter, the following are assigned the Azure tags that you specified:
+
+* The AKS cluster itself and its related resources:
+ * Route table
+ * Public IP
+ * Load balancer
+ * Network security group
+ * Virtual network
+ * AKS-managed kubelet msi
+ * AKS-managed add-on msi
+ * Private DNS zone associated with the *private cluster*
+ * Private endpoint associated with the *private cluster*
+* The node resource group
-## Add tags to the cluster
+> [!NOTE]
+> Azure Private DNS only supports 15 tags. For more information, see [Tag resources](../azure-resource-manager/management/tag-resources.md).
-When you create or update an AKS cluster with the `--tags` parameter, the following are assigned the Azure tags that you've specified:
+## Create or update tags on an AKS cluster
-* The AKS cluster
-* The node resource group
-* The route table that's associated with the cluster
-* The public IP that's associated with the cluster
-* The load balancer that's associated with the cluster
-* The network security group that's associated with the cluster
-* The virtual network that's associated with the cluster
-* The AKS managed kubelet msi associated with the cluster
-* The AKS managed addon msi associated with the cluster
-* The private DNS zone associated with the private cluster
-* The private endpoint associated with the private cluster
+### Create a new AKS cluster
-> [!NOTE]
-> Azure Private DNS only supports 15 tags. [tag resources](../azure-resource-manager/management/tag-resources.md).
+> [!IMPORTANT]
+> If you're using existing resources when you create a new cluster, such as an IP address or route table, the `az aks create` command overwrites the set of tags. If you delete the cluster later, any tags set by the cluster are removed.
-To create a cluster and assign Azure tags, run `az aks create` with the `--tags` parameter, as shown in the following command. Running the command creates a *myAKSCluster* in the *myResourceGroup* with the tags *dept=IT* and *costcenter=9999*.
+1. Create a cluster and assign Azure tags using the [`az aks create`][az-aks-create] command with the `--tags` parameter.
-> [!NOTE]
-> To set tags on the initial node pool, the virtual machine scale set, and each virtual machine scale set instance that's associated with the initial node pool, also set the `--nodepool-tags` parameter.
+ > [!NOTE]
+ > To set tags on the initial node pool, the virtual machine scale set, and each virtual machine scale set instance associated with the initial node pool, you can also set the `--nodepool-tags` parameter.
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --tags dept=IT costcenter=9999 \
- --generate-ssh-keys
-```
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --tags dept=IT costcenter=9999 \
+ --generate-ssh-keys
+ ```
-> [!IMPORTANT]
-> If you're using existing resources when you're creating a new cluster, such as an IP address or route table, `az aks create` overwrites the set of tags. If you delete that cluster later, any tags set by the cluster will be removed.
-
-Verify that the tags have been applied to the cluster and related resources. The cluster tags for *myAKSCluster* are shown in the following example:
-
-```output
-$ az aks show -g myResourceGroup -n myAKSCluster --query '[tags]'
-{
- "clusterTags": {
- "costcenter": "9999",
- "dept": "IT"
- }
-}
-```
-
-To update the tags on an existing cluster, run `az aks update` with the `--tags` parameter. Running the command updates the *myAKSCluster* with the tags *team=alpha* and *costcenter=1234*.
--
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --tags team=alpha costcenter=1234
-```
-
-Verify that the tags have been applied to the cluster. For example:
-
-```output
-$ az aks show -g myResourceGroup -n myAKSCluster --query '[tags]'
-{
- "clusterTags": {
- "costcenter": "1234",
- "team": "alpha"
- }
-}
-```
+2. Verify the tags have been applied to the cluster and its related resources using the [`az aks show`][az-aks-show] command.
+
+ ```azurecli-interactive
+ az aks show -g myResourceGroup -n myAKSCluster --query '[tags]'
+ ```
+
+ The following example output shows the tags applied to the cluster:
+
+ ```output
+ {
+ "clusterTags": {
+ "dept": "IT",
+ "costcenter": "9999"
+ }
+ }
+ ```
+
+### Update an existing AKS cluster
> [!IMPORTANT]
-> Setting tags on a cluster by using `az aks update` overwrites the set of tags. For example, if your cluster has the tags *dept=IT* and *costcenter=9999* and you use `az aks update` with the tags *team=alpha* and *costcenter=1234*, the new list of tags would be *team=alpha* and *costcenter=1234*.
+> Setting tags on a cluster using the `az aks update` command overwrites the set of tags. For example, if your cluster has the tags *dept=IT* and *costcenter=9999*, and you use `az aks update` with the tags *team=alpha* and *costcenter=1234*, the new list of tags would be *team=alpha* and *costcenter=1234*.
-## Adding tags to node pools
+1. Update the tags on an existing cluster using the [`az aks update`][az-aks-update] command with the `--tags` parameter.
-You can apply an Azure tag to a new or existing node pool in your AKS cluster. Tags applied to a node pool are applied to each node within the node pool and are persisted through upgrades. Tags are also applied to new nodes that are added to a node pool during scale-out operations. Adding a tag can help with tasks such as policy tracking or cost estimation.
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --tags team=alpha costcenter=1234
+ ```
-When you create or update a node pool with the `--tags` parameter, the tags that you specify are assigned to the following resources:
-
-* The node pool
-* The virtual machine scale set and each virtual machine scale set instance that's associated with the node pool
-
-To create a node pool with an Azure tag, run `az aks nodepool add` with the `--tags` parameter. Running the following command creates a *tagnodepool* node pool with the tags *abtest=a* and *costcenter=5555* in the *myAKSCluster*.
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name tagnodepool \
- --node-count 1 \
- --tags abtest=a costcenter=5555 \
- --no-wait
-```
-
-Verify that the tags have been applied to the *tagnodepool* node pool.
-
-```output
-$ az aks show -g myResourceGroup -n myAKSCluster --query 'agentPoolProfiles[].{nodepoolName:name,tags:tags}'
-[
- {
- "nodepoolName": "nodepool1",
- "tags": null
- },
- {
- "nodepoolName": "tagnodepool",
- "tags": {
- "abtest": "a",
- "costcenter": "5555"
- }
- }
-]
-```
+2. Verify the tags have been applied to the cluster and its related resources using the [`az aks show`][az-aks-show] command.
-To update a node pool with an Azure tag, run `az aks nodepool update` with the `--tags` parameter. Running the following command updates the *tagnodepool* node pool with the tags *appversion=0.0.2* and *costcenter=4444* in the *myAKSCluster*, which already has the tags *abtest=a* and *costcenter=5555*.
+ ```azurecli-interactive
+ az aks show -g myResourceGroup -n myAKSCluster --query '[tags]'
+ ```
-```azurecli-interactive
-az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name tagnodepool \
- --tags appversion=0.0.2 costcenter=4444 \
- --no-wait
-```
+ The following example output shows the tags applied to the cluster:
-> [!IMPORTANT]
-> Setting tags on a node pool by using `az aks nodepool update` overwrites the set of tags. For example, if your node pool has the tags *abtest=a* and *costcenter=5555*, and you use `az aks nodepool update` with the tags *appversion=0.0.2* and *costcenter=4444*, the new list of tags would be *appversion=0.0.2* and *costcenter=4444*.
-
-Verify that the tags have been updated on the nodepool.
-
-```output
-$ az aks show -g myResourceGroup -n myAKSCluster --query 'agentPoolProfiles[].{nodepoolName:name,tags:tags}'
-[
- {
- "nodepoolName": "nodepool1",
- "tags": null
- },
- {
- "nodepoolName": "tagnodepool",
- "tags": {
- "appversion": "0.0.2",
- "costcenter": "4444"
+ ```output
+ {
+ "clusterTags": {
+ "team": "alpha",
+ "costcenter": "1234"
+ }
}
- }
-]
-```
+ ```
-## Add tags by using Kubernetes
+## Add tags to node pools
-You can apply Azure tags to public IPs, disks, and files by using a Kubernetes manifest.
+You can apply an Azure tag to a new or existing node pool in your AKS cluster. Tags applied to a node pool are applied to each node within the node pool and are persisted through upgrades. Tags are also applied to new nodes that are added to a node pool during scale-out operations. Adding a tag can help with tasks such as policy tracking or cost estimation.
-For public IPs, use *service.beta.kubernetes.io/azure-pip-tags* under *annotations*. For example:
+When you create or update a node pool with the `--tags` parameter, the tags you specify are assigned to the following resources:
-```yml
-apiVersion: v1
-kind: Service
-metadata:
- annotations:
- service.beta.kubernetes.io/azure-pip-tags: costcenter=3333,team=beta
-spec:
- ...
-```
+* The node pool.
+* The virtual machine scale set and each virtual machine scale set instance associated with the node pool.
-For files and disks, use *tags* under *parameters*. For example:
+### Create a new node pool
-```yml
-
-apiVersion: storage.k8s.io/v1
-...
-parameters:
- ...
- tags: costcenter=3333,team=beta
-...
-```
+1. Create a node pool with an Azure tag using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--tags` parameter.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name tagnodepool \
+ --node-count 1 \
+ --tags abtest=a costcenter=5555 \
+ --no-wait
+ ```
+
+2. Verify that the tags have been applied to the node pool using the [`az aks show`][az-aks-show] command.
+
+ ```azurecli-interactive
+ az aks show -g myResourceGroup -n myAKSCluster --query 'agentPoolProfiles[].{nodepoolName:name,tags:tags}'
+ ```
+
+ The following example output shows the tags applied to the node pool:
+
+ ```output
+ [
+ {
+ "nodepoolName": "nodepool1",
+ "tags": null
+ },
+ {
+ "nodepoolName": "tagnodepool",
+ "tags": {
+ "abtest": "a",
+ "costcenter": "5555"
+ }
+ }
+ ]
+ ```
+
+### Update an existing node pool
+
+> [!IMPORTANT]
+> Setting tags on a node pool using the `az aks nodepool update` command overwrites the set of tags. For example, if your node pool has the tags *abtest=a* and *costcenter=5555*, and you use `az aks nodepool update` with the tags *appversion=0.0.2* and *costcenter=4444*, the new list of tags would be *appversion=0.0.2* and *costcenter=4444*.
+
+1. Update a node pool with an Azure tag using the [`az aks nodepool update`][az-aks-nodepool-update] command.
+
+ ```azurecli-interactive
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name tagnodepool \
+ --tags appversion=0.0.2 costcenter=4444 \
+ --no-wait
+ ```
+
+2. Verify the tags have been applied to the node pool using the [`az aks show`][az-aks-show] command.
+
+ ```azurecli-interactive
+ az aks show -g myResourceGroup -n myAKSCluster --query 'agentPoolProfiles[].{nodepoolName:name,tags:tags}'
+ ```
+
+ The following example output shows the tags applied to the node pool:
+
+ ```output
+ [
+ {
+ "nodepoolName": "nodepool1",
+ "tags": null
+ },
+ {
+ "nodepoolName": "tagnodepool",
+ "tags": {
+ "appversion": "0.0.2",
+ "costcenter": "4444"
+ }
+ }
+ ]
+ ```
+
+## Add tags using Kubernetes
> [!IMPORTANT]
-> Setting tags on files, disks, and public IPs by using Kubernetes updates the set of tags. For example, if your disk has the tags *dept=IT* and *costcenter=5555*, and you use Kubernetes to set the tags *team=beta* and *costcenter=3333*, the new list of tags would be *dept=IT*, *team=beta*, and *costcenter=3333*.
->
-> Any updates that you make to tags through Kubernetes will retain the value that's set through Kubernetes. For example, if your disk has tags *dept=IT* and *costcenter=5555* set by Kubernetes, and you use the portal to set the tags *team=beta* and *costcenter=3333*, the new list of tags would be *dept=IT*, *team=beta*, and *costcenter=5555*. If you then remove the disk through Kubernetes, the disk would have the tag *team=beta*.
+> Setting tags on files, disks, and public IPs using Kubernetes updates the set of tags. For example, if your disk has the tags *dept=IT* and *costcenter=5555*, and you use Kubernetes to set the tags *team=beta* and *costcenter=3333*, the new list of tags would be *dept=IT*, *team=beta*, and *costcenter=3333*.
+>
+> Any updates you make to tags through Kubernetes retain the value set through Kubernetes. For example, if your disk has tags *dept=IT* and *costcenter=5555* set by Kubernetes, and you use the portal to set the tags *team=beta* and *costcenter=3333*, the new list of tags would be *dept=IT*, *team=beta*, and *costcenter=5555*. If you then remove the disk through Kubernetes, the disk would have the tag *team=beta*.
+
+You can apply Azure tags to public IPs, disks, and files using a Kubernetes manifest.
+
+* For public IPs, use *service.beta.kubernetes.io/azure-pip-tags* under *annotations*. For example:
+
+ ```yml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ annotations:
+ service.beta.kubernetes.io/azure-pip-tags: costcenter=3333,team=beta
+ spec:
+ ...
+ ```
+
+* For files and disks, use *tags* under *parameters*. For example:
+
+ ```yml
+
+ apiVersion: storage.k8s.io/v1
+ ...
+ parameters:
+ ...
+ tags: costcenter=3333,team=beta
+ ...
+ ```
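For reference, fuller versions of both manifests are sketched below. The names (*tagged-svc*, *tagged-disk*), the selector, the SKU, and the `disk.csi.azure.com` provisioner are illustrative placeholders; only the tag annotation and the `tags` parameter come from the examples above.

```yml
# Service whose public IP receives Azure tags from the annotation (illustrative names)
apiVersion: v1
kind: Service
metadata:
  name: tagged-svc
  annotations:
    service.beta.kubernetes.io/azure-pip-tags: costcenter=3333,team=beta
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: my-app
---
# StorageClass whose provisioned disks receive the same tags (illustrative names)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tagged-disk
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
  tags: costcenter=3333,team=beta
reclaimPolicy: Delete
```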
+
+## Next steps
+
+Learn more about [using labels in an AKS cluster][use-labels-aks].
+<!-- LINKS - internal -->
[install-azure-cli]: /cli/azure/install-azure-cli
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[use-labels-aks]: ./use-labels.md
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
+[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update
+[az-aks-update]: /cli/azure/aks#az-aks-update
api-management Front Door Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/front-door-api-management.md
Azure Front Door is a modern application delivery network platform providing a secure, scalable content delivery network (CDN), dynamic site acceleration, and global HTTP(S) load balancing for your global web applications. When used in front of API Management, Front Door can provide TLS offloading, end-to-end TLS, load balancing, response caching of GET requests, and a web application firewall, among other capabilities. For a full list of supported features, see [What is Azure Front Door?](../frontdoor/front-door-overview.md)
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
This article shows how to:
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
This article shows how to defend your Azure API Management instance against distributed denial of service (DDoS) attacks by enabling [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md). Azure DDoS Protection provides enhanced DDoS mitigation features to defend against volumetric and protocol DDoS attacks.
-> [!NOTE]
-> For web workloads, we highly recommend utilizing Azure DDoS protection and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
[!INCLUDE [premium-dev.md](../../includes/api-management-availability-premium-dev.md)]
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
You must have at least the following Role-based access control permissions on th
| Microsoft.Web/sites/config/read | Get Web App configuration settings |
| Microsoft.Web/sites/config/write | Update Web App's configuration settings |
| Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action* | Joins resource such as storage account or SQL database to a subnet |
+| Microsoft.Web/sites/write** | Update Web App settings |
**only required when adding a virtual network (service endpoint) rule.*
+***only required if you're updating access restrictions through the Azure portal.*
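If no built-in role fits, one option is to capture these permissions in a custom role definition. The following JSON is a sketch with placeholder name, description, and scope; it includes the conditional permissions noted above and could be created with `az role definition create --role-definition @access-restriction-role.json`:

```json
{
  "Name": "Access Restriction Editor (sample)",
  "IsCustom": true,
  "Description": "Minimum permissions to manage App Service access restrictions.",
  "Actions": [
    "Microsoft.Web/sites/config/read",
    "Microsoft.Web/sites/config/write",
    "Microsoft.Web/sites/write",
    "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
```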
+ If you're adding a service endpoint-based rule and the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it's also registered automatically when you create the first web app in a subscription.

### Add an access restriction rule
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
Title: 'Tutorial: PHP app with MySQL'
-description: Learn how to get a PHP app working in Azure, with connection to a MySQL database in Azure. Laravel is used in the tutorial.
+ Title: 'Tutorial: PHP app with MySQL and Redis'
+description: Learn how to get a PHP app working in Azure, with connection to a MySQL database and a Redis cache in Azure. Laravel is used in the tutorial.
ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Previously updated : 01/31/2023 Last updated : 06/30/2023
-# Tutorial: Build a PHP and MySQL app in Azure App Service
+# Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure PHP app in Azure App Service that's connected to a MySQL database (using Azure Database for MySQL flexible server). When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
+This tutorial shows how to create a secure PHP app in Azure App Service that's connected to a MySQL database (using Azure Database for MySQL flexible server). You'll also deploy an Azure Cache for Redis to enable the caching code in your application. Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Laravel app running on Azure App Service on Linux.
:::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png" alt-text="Screenshot of the Azure app example titled Task List showing new tasks added.":::
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a secure-by-default PHP and MySQL app in Azure
-> * Configure connection secrets to MySQL using app settings
-> * Deploy application code using GitHub Actions
-> * Update and redeploy the app
-> * Run database migrations securely
-> * Stream diagnostic logs from Azure
-> * Manage the app in the Azure portal
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]

## Sample application
-To follow along with this tutorial, clone or download the sample application from the repository:
+To follow along with this tutorial, clone or download the sample [Laravel](https://laravel.com/) application from the repository:
```terminal
git clone https://github.com/Azure-Samples/laravel-tasks.git
```
In this step, you create the Azure resources. The steps used in this tutorial cr
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create app service step 1](./includes/tutorial-php-mysql-app/azure-portal-create-app-mysql-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-1.png"::: |
-| [!INCLUDE [Create app service step 2](./includes/tutorial-php-mysql-app/azure-portal-create-app-mysql-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-2-240px.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-2.png"::: |
-| [!INCLUDE [Create app service step 3](./includes/tutorial-php-mysql-app/azure-portal-create-app-mysql-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-3-240px.png" alt-text="A screenshot showing the form to fill out to create a web app in Azure." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-3.png"::: |
+ :::column span="2":::
+ **Step 1.** In the Azure portal:
+ 1. Enter "web app database" in the search bar at the top of the Azure portal.
+ 1. Select the item labeled **Web App + Database** under the **Marketplace** heading.
+ You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ 1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-laravel-mysql-tutorial**.
+ 1. *Region* &rarr; Any Azure region near you.
+ 1. *Name* &rarr; **msdocs-laravel-mysql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
+ 1. *Runtime stack* &rarr; **PHP 8.2**.
+ 1. *Add Azure Cache for Redis?* &rarr; **Yes**.
+ 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
+ 1. **MySQL - Flexible Server** is selected for you by default as the database engine. Azure Database for MySQL is a fully managed MySQL database as a service on Azure, compatible with the latest community editions.
+ 1. Select **Review + create**.
+ 1. After validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Private endpoints** &rarr; Access endpoints for the database server and the Redis cache in the virtual network.
+ - **Network interfaces** &rarr; Represents private IP addresses, one for each of the private endpoints.
+ - **Azure Database for MySQL flexible server** &rarr; Accessible only from behind its private endpoint. A database and a user are created for you on the server.
+ - **Azure Cache for Redis** &rarr; Accessible only from behind its private endpoint.
+ - **Private DNS zones** &rarr; Enable DNS resolution of the database server and the Redis cache in the virtual network.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-3.png":::
+ :::column-end:::
## 2 - Set up database connectivity
-The creation wizard generated a connection string to the database for you, but not in a format that's useable for your code yet. In this step, you create [app settings](configure-common.md#configure-app-settings) with the format that your app needs.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Get connection string step 1](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1-240px.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png"::: |
-| [!INCLUDE [Get connection string step 2](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-2-240px.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-2.png"::: |
-| [!INCLUDE [Get connection string step 3](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-3-240px.png" alt-text="A screenshot showing how to create an app setting." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-3.png"::: |
-| [!INCLUDE [Get connection string step 4](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-4.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-4-240px.png" alt-text="A screenshot showing all the required app settings in the configuration page." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-4.png"::: |
+ :::column span="2":::
+ **Step 1.** In the App Service page, in the left menu, select **Configuration**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.**
+ 1. Find app settings that begin with **AZURE_MYSQL_**. They were generated from the new MySQL database by the creation wizard.
+ 1. Also, find app settings that begin with **AZURE_REDIS_**. They were generated from the new Redis cache by the creation wizard. To set up your application, this name is all you need.
+ 1. If you want, you can select the **Edit** button to the right of each setting and see or copy its value.
+ Later, you'll change your application code to use these settings.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to create an app setting." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** In the **Application settings** tab of the **Configuration** page, create a `CACHE_DRIVER` setting:
+ 1. Select **New application setting**.
+ 1. In the **Name** field, enter *CACHE_DRIVER*.
+ 1. In the **Value** field, enter *redis*.
+ 1. Select **OK**.
+ `CACHE_DRIVER` is already used in the Laravel application code. This setting tells Laravel to use Redis as its cache.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-3.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.** Using the same steps in **Step 3**, create the following app settings:
+ - **MYSQL_ATTR_SSL_CA**: Use */home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem* as the value. This app setting points to the path of the [TLS/SSL certificate you need to access the MySQL server](../mysql/flexible-server/how-to-connect-tls-ssl.md#download-the-public-ssl-certificate). It's included in the sample repository for convenience.
+ - **LOG_CHANNEL**: Use *stderr* as the value. This setting tells Laravel to pipe logs to stderr, which makes it available to the App Service logs.
+ - **APP_DEBUG**: Use *true* as the value. It's a [Laravel debugging variable](https://laravel.com/docs/10.x/errors#configuration) that enables debug mode pages.
+ - **APP_KEY**: Use *base64:Dsz40HWwbCqnq0oxMsjq7fItmKIeBfCBGORfspaI1Kw=* as the value. It's a [Laravel encryption variable](https://laravel.com/docs/10.x/encryption#configuration).
+ 1. In the menu bar at the top, select **Save**.
+ 1. When prompted, select **Continue**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-4.png" alt-text="A screenshot showing how to save settings in the configuration page." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-4.png":::
+ :::column-end:::
+
+> [!IMPORTANT]
+> The `APP_KEY` value is used here for convenience. For production scenarios, it should be generated specifically for your deployment using `php artisan key:generate --show` in the command line.
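If you prefer to script this configuration instead of using the portal, a rough Azure CLI equivalent is sketched below. The resource group and app names are the ones chosen earlier in this tutorial (adjust them to match your deployment), and `<your-app-key>` is a placeholder for a key you generate yourself:

```azurecli-interactive
# Create the same app settings from the command line (names from earlier steps; adjust as needed)
az webapp config appsettings set \
    --resource-group msdocs-laravel-mysql-tutorial \
    --name msdocs-laravel-mysql-XYZ \
    --settings CACHE_DRIVER=redis \
        MYSQL_ATTR_SSL_CA=/home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem \
        LOG_CHANNEL=stderr \
        APP_DEBUG=true \
        APP_KEY=<your-app-key>
```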
## 3 - Deploy sample code
-In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action. You'll make some changes to your codebase with Visual Studio Code directly in the browser, then let GitHub Actions deploy automatically for you.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Deploy sample code step 1](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1-240px.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1.png"::: |
-| [!INCLUDE [Deploy sample code step 2](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2-240px.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png"::: |
-| [!INCLUDE [Deploy sample code step 3](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3-240px.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3.png"::: |
-| [!INCLUDE [Deploy sample code step 4](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4-240px.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4.png"::: |
-| [!INCLUDE [Deploy sample code step 5](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5-240px.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5.png"::: |
-| [!INCLUDE [Deploy sample code step 6](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6-240px.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png"::: |
-| [!INCLUDE [Deploy sample code step 7](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7-240px.png" alt-text="A screenshot showing how to commit your changes in the Visual Studio Code browser experience." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7.png"::: |
+In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository kicks off the build and deploy action. You'll make some changes to your codebase with Visual Studio Code directly in the browser, then let GitHub Actions deploy automatically for you.
+
+ :::column span="2":::
+ **Step 1.** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/laravel-tasks](https://github.com/Azure-Samples/laravel-tasks).
+ 1. Select **Fork**.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** In Visual Studio Code in the browser, open *config/database.php* in the explorer. Find the `mysql` section and make the following changes:
+ 1. Replace `DB_HOST` with `AZURE_MYSQL_HOST`.
+ 1. Replace `DB_DATABASE` with `AZURE_MYSQL_DBNAME`.
+ 1. Replace `DB_USERNAME` with `AZURE_MYSQL_USERNAME`.
+ 1. Replace `DB_PASSWORD` with `AZURE_MYSQL_PASSWORD`.
+ 1. Replace `DB_PORT` with `AZURE_MYSQL_PORT`.
+ Remember that these `AZURE_MYSQL_` settings were created for you by the create wizard.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file with modified MySQL variables." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.** In *config/database.php* scroll to the Redis `cache` section and make the following changes:
+ 1. Replace `REDIS_HOST` with `AZURE_REDIS_HOST`.
+ 1. Replace `REDIS_PASSWORD` with `AZURE_REDIS_PASSWORD`.
+ 1. Replace `REDIS_PORT` with `AZURE_REDIS_PORT`.
+ 1. Replace `REDIS_CACHE_DB` with `AZURE_REDIS_DATABASE`.
+ 1. In the same section, add a line with `'scheme' => 'tls',`. This configuration tells Laravel to use encryption to connect to Redis.
+ Remember that these `AZURE_REDIS_` settings were created for you by the create wizard.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file with modified Redis variables." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5.**
+ 1. Select the **Source Control** extension.
+ 1. In the textbox, type a commit message like `Configure DB & Redis variables`.
+ 1. Select **Commit and Push**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing the changes being committed and pushed to GitHub." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 7.** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+    1. In **Repository**, select **laravel-tasks**.
+ 1. In **Branch**, select **main**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 8.** In the Deployment Center page:
+ 1. Select **Logs**. A deployment run is already started.
+ 1. In the log item for the deployment run, select **Build/Deploy Logs**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-8.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-8.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 9.** You're taken to your GitHub repository, where you can see that the GitHub Actions workflow is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-9.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-9.png":::
+ :::column-end:::
## 4 - Generate database schema

The creation wizard puts the MySQL database server behind a private endpoint, so it's accessible only from the virtual network. Because the App Service app is already integrated with the virtual network, the easiest way to run database migrations with your database is directly from within the App Service container.
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Generate database schema step 1](./includes/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1-240px.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png"::: |
-| [!INCLUDE [Generate database schema step 2](./includes/tutorial-php-mysql-app/azure-portal-generate-db-schema-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-2-240px.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-2.png"::: |
+ :::column span="2":::
+ **Step 1.** Back in the App Service page, in the left menu, select **SSH**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the SSH terminal:
+ 1. Run `cd /home/site/wwwroot`. Here are all your deployed files.
+ 1. Run `php artisan migrate --force`. If it succeeds, App Service is connecting successfully to the MySQL database.
+ Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-2.png":::
+ :::column-end:::
## 5 - Change site root
-[Laravel application lifecycle](https://laravel.com/docs/8.x/lifecycle#lifecycle-overview) begins in the **/public** directory instead. The default PHP 8.0 container for App Service uses Nginx, which starts in the application's root directory. To change the site root, you need to change the Nginx configuration file in the PHP 8.0 container (*/etc/nginx/sites-available/default*). For your convenience, the sample repository contains a custom configuration file called *default*. As noted previously, you don't want to replace this file using the SSH shell, because your changes will be lost after an app restart.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Change site root step 1](./includes/tutorial-php-mysql-app/azure-portal-change-site-root-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-change-site-root-1-240px.png" alt-text="A screenshot showing how to open the general settings tab in the configuration page of App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-change-site-root-1.png"::: |
-| [!INCLUDE [Change site root step 2](./includes/tutorial-php-mysql-app/azure-portal-change-site-root-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-change-site-root-2-240px.png" alt-text="A screenshot showing how to configure a startup command in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-change-site-root-2.png"::: |
+[Laravel application lifecycle](https://laravel.com/docs/10.x/lifecycle#lifecycle-overview) begins in the **/public** directory instead. The default PHP container for App Service uses Nginx, which starts in the application's root directory. To change the site root, you need to change the Nginx configuration file in the PHP container (*/etc/nginx/sites-available/default*). For your convenience, the sample repository contains a custom configuration file called *default*. As noted previously, you don't want to replace this file using the SSH shell, because the change is outside of `/home` and will be lost after an app restart.
+
+ :::column span="2":::
+ **Step 1.**
+ 1. From the left menu, select **Configuration**.
+ 1. Select the **General settings** tab.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-change-site-root-1.png" alt-text="A screenshot showing how to open the general settings tab in the configuration page of App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-change-site-root-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the General settings tab:
+ 1. In the **Startup Command** box, enter the following command: *cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload*.
+ 1. Select **Save**.
+ The command replaces the Nginx configuration file in the PHP container and restarts Nginx. This configuration ensures that the same change is made to the container each time it starts.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-change-site-root-2.png" alt-text="A screenshot showing how to configure a startup command in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-change-site-root-2.png":::
+ :::column-end:::
## 6 - Browse to the app
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Browse to app step 1](./includes/tutorial-php-mysql-app/azure-portal-browse-app-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-1-240px.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-browse-app-1.png"::: |
-| [!INCLUDE [Browse to app step 2](./includes/tutorial-php-mysql-app/azure-portal-browse-app-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-2-240px.png" alt-text="A screenshot of the Laravel app running in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png"::: |
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **Overview**.
+ 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-browse-app-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** Add a few tasks to the list.
+ Congratulations, you're running a secure data-driven PHP app in Azure App Service.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Laravel app running in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png":::
+ :::column-end:::
+
+> [!TIP]
+> The sample application implements the [cache-aside](/azure/architecture/patterns/cache-aside) pattern. When you reload the page after making data changes, **Response time** in the webpage shows a much faster time because it's loading the data from the cache instead of the database.
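
If you want to see the pattern itself, here's a minimal C# sketch of the cache-aside idea using StackExchange.Redis. The sample app implements the equivalent logic in PHP/Laravel; the connection string, the `task:` key prefix, and the `loadFromDatabase` delegate below are illustrative placeholders, not part of the sample.

```csharp
using System;
using StackExchange.Redis;

public static class TaskCache
{
    // Placeholder connection string; in a real app this comes from configuration.
    private static readonly ConnectionMultiplexer Redis = ConnectionMultiplexer.Connect(
        "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

    public static string GetTask(string id, Func<string, string> loadFromDatabase)
    {
        IDatabase cache = Redis.GetDatabase();
        string cacheKey = $"task:{id}";

        // 1. Try the cache first.
        RedisValue cached = cache.StringGet(cacheKey);
        if (cached.HasValue)
        {
            return cached; // Cache hit: the database isn't touched at all.
        }

        // 2. Cache miss: load from the database, then populate the cache with a TTL.
        string value = loadFromDatabase(id);
        cache.StringSet(cacheKey, value, TimeSpan.FromMinutes(5));
        return value;
    }
}
```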
## 7 - Stream diagnostic logs
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Stream diagnostic logs step 1](./includes/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1-240px.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1.png"::: |
-| [!INCLUDE [Stream diagnostic logs step 2](./includes/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2-240px.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png"::: |
+Azure App Service captures all messages logged to the console to assist you in diagnosing issues with your application. The sample app outputs console log messages in each of its endpoints to demonstrate this capability. By default, Laravel's logging functionality (for example, `Log::info()`) outputs to a local file. Your `LOG_CHANNEL` app setting from earlier makes log entries accessible from the App Service log stream.
+
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **App Service logs**.
+ 1. Under **Application logging**, select **File System**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png":::
+ :::column-end:::
## Clean up resources

When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Remove resource group Azure portal 1](./includes/tutorial-php-mysql-app/azure-portal-clean-up-resources-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-1-240px.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-1.png"::: |
-| [!INCLUDE [Remove resource group Azure portal 2](./includes/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2-240px.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png"::: |
-| [!INCLUDE [Remove resource group Azure portal 3](./includes/tutorial-php-mysql-app/azure-portal-clean-up-resources-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-3.png"::: |
+ :::column span="2":::
+ **Step 1.** In the search bar at the top of the Azure portal:
+ 1. Enter the resource group name.
+ 1. Select the resource group.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the resource group page, select **Delete resource group**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.**
+ 1. Enter the resource group name to confirm your deletion.
+ 1. Select **Delete**.
+ :::column-end:::
+ :::column:::
+    :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-3.png":::
+ :::column-end:::
## Frequently asked questions
When you're finished, you can delete all of the resources from your Azure subscr
Pricing for the created resources is as follows: -- The App Service plan is created in **Premium V2** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
- The MySQL flexible server is created in **B1ms** tier and can be scaled up or down. With an Azure free account, **B1ms** tier is free for 12 months, up to the monthly limits. See [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+- The Azure Cache for Redis is created in **Basic** tier with the minimum cache size. There's a small cost associated with this tier. You can scale it up to higher performance tiers for higher availability, clustering, and other features. See [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/).
- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). - The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
Most of the time taken by the two-job process is spent uploading and download ar
## Next steps
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Create a secure-by-default PHP and MySQL app in Azure
-> * Configure connection secrets to MySQL using app settings
-> * Deploy application code using GitHub Actions
-> * Update and redeploy the app
-> * Run database migrations securely
-> * Stream diagnostic logs from Azure
-> * Manage the app in the Azure portal
- Advance to the next tutorial to learn how to secure your app with a custom domain and certificate. > [!div class="nextstepaction"]
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
Previously updated : 11/14/2019 Last updated : 06/06/2023
Learn how to configure TLS/SSL policy versions and cipher suites on Application
The `Get-AzApplicationGatewayAvailableSslOptions` cmdlet provides a listing of available pre-defined policies, available cipher suites, and protocol versions that can be configured. The following example shows an example output from running the cmdlet.
+> [!IMPORTANT]
+> The default TLS policy is set to AppGwSslPolicy20220101 for API versions 2023-02-01 or higher. See [TLS policy overview](./application-gateway-ssl-policy-overview.md#default-tls-policy) to learn more.
+ ``` DefaultPolicy: AppGwSslPolicy20150501 PredefinedPolicies:
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
Previously updated : 12/17/2020 Last updated : 06/06/2023
The TLS policy includes control of the TLS protocol version as well as the ciphe
- SSL 2.0 and 3.0 are disabled for all application gateways and are not configurable. - A custom TLS policy allows you to select any TLS protocol as the minimum protocol version for your gateway: TLSv1_0, TLSv1_1, TLSv1_2, or TLSv1_3.-- If no TLS policy is defined, the minimum protocol version is set to TLSv1_0, and protocol versions v1.0, v1.1, and v1.2 are supported.
+- If no TLS policy is chosen, a [default TLS policy](application-gateway-ssl-policy-overview.md#default-tls-policy) gets applied based on the API version used to create that resource.
- The [**2022 Predefined**](#predefined-tls-policy) and [**Customv2 policies**](#custom-tls-policy) that support **TLS v1.3** are available only with Application Gateway V2 SKUs (Standard_v2 or WAF_v2). - Using a 2022 Predefined or Customv2 policy enhances SSL security and performance posture of the entire gateway (for SSL Policy and [SSL Profile](application-gateway-configure-listener-specific-ssl-policy.md#set-up-a-listener-specific-ssl-policy)). Hence, both old and new policies cannot co-exist on a gateway. You must use any of the older predefined or custom policies across the gateway if clients require older TLS versions or ciphers (for example, TLS v1.0). - TLS cipher suites used for the connection are also based on the type of the certificate being used. The cipher suites used in "client to application gateway connections" are based on the type of listener certificates on the application gateway. Whereas the cipher suites used in establishing "application gateway to backend pool connections" are based on the type of server certificates presented by the backend servers.
The following table shows the list of cipher suites and minimum protocol version
| - | - | - | - | - | - | | **Minimum Protocol Version** | 1.0 | 1.1 | 1.2 | 1.2 | 1.2 | | **Enabled protocol versions** | 1.0<br/>1.1<br/>1.2 | 1.1<br/>1.2 | 1.2 | 1.2<br/>1.3 | 1.2<br/>1.3 |
-| **Default** | True | False | False | False | False |
+| **Default** | True<br/>(for API version < 2023-02-01) | False | False | True<br/>(for API version >= 2023-02-01) | False |
| TLS_AES_128_GCM_SHA256 | &cross; | &cross; | &cross; | &check; | &check; | | TLS_AES_256_GCM_SHA384 | &cross; | &cross; | &cross; | &check; | &check; | | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 | &check; | &check; | &check; | &check; | &check; |
The following table shows the list of cipher suites and minimum protocol version
| TLS_RSA_WITH_3DES_EDE_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; | | TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA | &check; | &cross; | &cross; | &cross; | &cross; |
+### Default TLS policy
+
+When no specific SSL Policy is specified in the application gateway resource configuration, a default TLS policy gets applied. The selection of this default policy is based on the API version used to create that gateway.
+
+- **For API versions 2023-02-01 or higher**, the minimum protocol version is set to 1.2 (version up to 1.3 is supported). The gateways created with these API versions will see a read-only property **defaultPredefinedSslPolicy:[AppGwSslPolicy20220101](application-gateway-ssl-policy-overview.md#predefined-tls-policy)** in the resource configuration. This property defines the default TLS policy to use.
+- **For older API versions < 2023-02-01**, the minimum protocol version is set to 1.0 (versions up to 1.2 are supported) as they use the predefined policy [AppGwSslPolicy20150501](application-gateway-ssl-policy-overview.md#predefined-tls-policy) as default.
+
+If the default TLS policy doesn't fit your requirements, choose a different Predefined policy or use a Custom one.
+
+> [!NOTE]
+> Azure PowerShell and CLI support for the updated default TLS policy is coming soon.
++ ## Custom TLS policy If a TLS policy needs to be configured for your requirements, you can use a Custom TLS policy. With a custom TLS policy, you have complete control over the minimum TLS protocol version to support, as well as the supported cipher suites and their priority order.
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Previously updated : 03/03/2023 Last updated : 06/20/2023
Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 availabl
The virtual network resource supports [DNS server](../virtual-network/manage-virtual-network.md#view-virtual-networks-and-settings-using-the-azure-portal) configuration, allowing you to choose between Azure-provided default or Custom DNS servers. The instances of your application gateway also honor this DNS configuration for any name resolution. Thus, after you change this setting, you must restart ([Stop](/powershell/module/az.network/Stop-AzApplicationGateway) and [Start](/powershell/module/az.network/start-azapplicationgateway)) your application gateway for these changes to take effect on the instances. ### Virtual network permission
-Since the application gateway resource is deployed inside a virtual network, we also perform a check to verify the permission on the provided virtual network resource. This validation is performed during both creation and management operations. You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify the users or service principals that operate application gateways also have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission on the Virtual Network or Subnet.
+Since the application gateway resource is deployed inside a virtual network, we also perform a check to verify the permission on the provided virtual network resource. This validation is performed during both creation and management operations. You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify the users (and service principals) that operate application gateways also have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission on the Virtual Network or Subnet. This also applies to the [Managed Identities for Application Gateway Ingress Controller](./tutorial-ingress-controller-add-on-new.md#deploy-an-aks-cluster-with-the-add-on-enabled).
You may use the built-in roles, such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), which already support this permission. If a built-in role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md). Learn more about [managing subnet permissions](../virtual-network/virtual-network-manage-subnet.md#permissions).
As a temporary extension, we have introduced a subscription-level [Azure Feature
**EnrollmentType**: AutoApprove </br> > [!NOTE]
-> The provision to circumvent this virtual network permission check by using this feature control (AFEC) is available only for a limited period, **until 30th June 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions.
+> We suggest using this feature control (AFEC) provision only as an interim mitigation until you assign the correct permission. Prioritize fixing the permissions for all the applicable users (and service principals), and then unregister this AFEC flag to reintroduce permission verification on the virtual network resource. Don't depend on this AFEC method permanently, because it will be removed in the future.
## Network security groups
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md
![Application Gateway conceptual](media/overview/figure1-720.png)
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
Application Gateway includes the following features:
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
Previously updated : 07/15/2022 Last updated : 06/22/2023
Deploying a new AKS cluster with the AGIC add-on enabled without specifying an e
az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys ```
+If the virtual network that the Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, ensure that the identity used by AGIC has the Network Contributor role assigned to the subnet that the Application Gateway is deployed into.
+
+```azurecli-interactive
+# Get application gateway id from AKS addon profile
+appGatewayId=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "addonProfiles.ingressApplicationGateway.config.effectiveApplicationGatewayId")
+
+# Get Application Gateway subnet id
+appGatewaySubnetId=$(az network application-gateway show --ids $appGatewayId -o tsv --query "gatewayIPConfigurations[0].subnet.id")
+
+# Get AGIC addon identity
+agicAddonIdentity=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "addonProfiles.ingressApplicationGateway.identity.clientId")
+
+# Assign network contributor role to AGIC addon identity to subnet that contains the Application Gateway
+az role assignment create --assignee $agicAddonIdentity --scope $appGatewaySubnetId --role "Network Contributor"
+```
+ To configure more parameters for the above command, see [az aks create](/cli/azure/aks#az-aks-create). > [!NOTE]
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Previously updated : 06/13/2023 Last updated : 06/19/2023
- apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000 ports:
- - "5000:5050"
+ - "5000:5000"
azure-cognitive-service-layout: container_name: azure-cognitive-service-layout image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
- billing={FORM_RECOGNIZER_ENDPOINT_URI} - apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
- ports:
- - "5000:5050"
+ ports:
+ - "5000:5000"
azure-cognitive-service-layout: container_name: azure-cognitive-service-layout image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
- apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports:
- - "5000:5050"
+ - "5000:5000"
azure-cognitive-service-read: container_name: azure-cognitive-service-read image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
The following code sample is a self-contained `docker compose` example to run t
```yml version: "3.9"
- azure-cognitive-service-receipt:
+ azure-cognitive-service-id-document:
container_name: azure-cognitive-service-id-document image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0 environment:
- apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports:
- - "5000:5050"
+ - "5000:5000"
azure-cognitive-service-read: container_name: azure-cognitive-service-read image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
- apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000 ports:
- - "5000:5050"
+ - "5000:5000"
networks: - ocrvnet azure-cognitive-service-layout:
- apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports:
- - "5000:5050"
+ - "5000:5000"
networks: - ocrvnet azure-cognitive-service-read:
- apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports:
- - "5000:5050"
+ - "5000:5000"
networks: - ocrvnet azure-cognitive-service-read:
- apiKey={FORM_RECOGNIZER_KEY} - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000 ports:
- - "5000:5050"
+ - "5000:5000"
networks: - ocrvnet azure-cognitive-service-read:
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Connected Machine agent description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. Previously updated : 05/08/2023 Last updated : 06/20/2023
This page is updated monthly, so revisit it regularly. If you're looking for ite
## Version 1.31 - June 2023
-Download for [Windows](https://download.microsoft.com/download/e/b/2/eb2f2d87-6382-463e-9d01-45b40c93c05b/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+Download for [Windows](https://download.microsoft.com/download/2/6/e/26e2b001-1364-41ed-90b0-1340a44ba409/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
### Known issue
-You may encounter error `AZCM0026: Network Error` accompanied by a message about "no IP addresses found" when connecting a server to Azure Arc using a proxy server. At this time, Microsoft recommends using [agent version 1.30](#version-131june-2023) in networks that require a proxy server. Microsoft has also reverted the agent download URL [aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) to agent version 1.30 to allow existing installation scripts to succeed.
+The first release of agent version 1.31 had a known issue affecting customers using proxy servers. The issue is indicated by the `AZCM0026: Network Error` and a message about "no IP addresses found" when connecting a server to Azure Arc using a proxy server. A newer version of agent 1.31 was released on June 14, 2023 that addresses this issue.
-If you've already installed agent version 1.31 and are seeing the error message above, [uninstall the agent](manage-agent.md#uninstall-from-control-panel) and run your installation script again. You do not need to downgrade to agent 1.30 if your agent is connected to Azure.
+To check if you're running the latest version of the Azure connected machine agent, navigate to the server in the Azure portal or run `azcmagent show` from a terminal on the server itself and look for the "Agent version." The table below shows the version numbers for the first and patched releases of agent 1.31.
-Microsoft will update the release notes when this issue is resolved.
+| Package type | Version number with proxy issue | Version number of patched agent |
+| --- | --- | --- |
+| Windows | 1.31.02347.1069 | 1.31.02356.1083 |
+| RPM-based Linux | 1.31.02347.957 | 1.31.02356.970 |
+| DEB-based Linux | 1.31.02347.939 | 1.31.02356.952 |
### New features
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect servers to Azure Arc description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. Previously updated : 07/26/2022 Last updated : 06/20/2023 # Use Azure Private Link to securely connect servers to Azure Arc
The Azure Arc-enabled servers Private Link Scope object has a number of limits y
- The Azure Arc-enabled server and Azure Arc Private Link Scope must be in the same Azure region. The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled server. - Network traffic to Azure Active Directory and Azure Resource Manager does not traverse the Azure Arc Private Link Scope and will continue to use your default network route to the internet. You can optionally [configure a resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) to send Azure Resource Manager traffic to a private endpoint. - Other Azure services that you will use, for example Azure Monitor, requires their own private endpoints in your virtual network.-- Private link for Azure Arc-enabled servers is not currently available in Azure China
+- Remote access to the server using Windows Admin Center or SSH is not supported over private link at this time.
## Planning your Private Link setup
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-performance.md
description: Learn how to test the performance of Azure Cache for Redis.
Previously updated : 04/06/2022 Last updated : 06/19/2023
The Enterprise and Enterprise Flash tiers offer a choice of cluster policy: _Ent
| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) | |::| | :|:| :| :|
-| E10 | 12 GB | 4 | 4,000 | 300,000 | 200,000 |
-| E20 | 25 GB | 4 | 4,000 | 550,000 | 390,000 |
-| E50 | 50 GB | 8 | 8,000 | 950,000 | 530,000 |
-| E100 | 100 GB | 16 | 10,000 | 1,300,000 | 580,000 |
-| F300 | 384 GB | 8 | 3,200 | 650,000 | 310,000 |
-| F700 | 715 GB | 16 | 6,400 | 650,000 | 350,000 |
-| F1500 | 1455 GB | 32 | 12,800 | 650,000 | 360,000 |
+| E10 | 12 GB | 4 | 4,000 | 300,000 | 207,000 |
+| E20 | 25 GB | 4 | 4,000 | 680,000 | 480,000 |
+| E50 | 50 GB | 8 | 8,000 | 1,200,000 | 900,000 |
+| E100 | 100 GB | 16 | 10,000 | 1,700,000 | 1,650,000 |
+| F300 | 384 GB | 8 | 3,200 | 500,000 | 390,000 |
+| F700 | 715 GB | 16 | 6,400 | 500,000 | 370,000 |
+| F1500 | 1455 GB | 32 | 12,800 | 530,000 | 390,000 |
#### OSS Cluster Policy | Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) | |::| | :|:| :| :|
-| E10 | 12 GB | 4 | 4,000 | 1,300,000 | 800,000 |
-| E20 | 25 GB | 4 | 4,000 | 1,000,000 | 710,000 |
-| E50 | 50 GB | 8 | 8,000 | 2,000,000 | 950,000 |
-| E100 | 100 GB | 16 | 10,000 | 2,000,000 | 960,000 |
-| F300 | 384 GB | 8 | 3,200 | 1,300,000 | 610,000 |
-| F700 | 715 GB | 16 | 6,400 | 1,300,000 | 680,000 |
-| F1500 | 1455 GB | 32 | 12,800 | 1,300,000 | 620,000 |
+| E10 | 12 GB | 4 | 4,000 | 1,400,000 | 1,000,000 |
+| E20 | 25 GB | 4 | 4,000 | 1,200,000 | 900,000 |
+| E50 | 50 GB | 8 | 8,000 | 2,300,000 | 1,700,000 |
+| E100 | 100 GB | 16 | 10,000 | 3,000,000 | 2,500,000 |
+| F300 | 384 GB | 8 | 3,200 | 1,500,000 | 1,200,000 |
+| F700 | 715 GB | 16 | 6,400 | 1,600,000 | 1,200,000 |
+| F1500 | 1455 GB | 32 | 12,800 | 1,600,000 | 1,110,000 |
### Enterprise & Enterprise Flash Tiers - Scaled Out
The following tables show the GET requests per second at different capacities, u
| Instance | Capacity 2 | Capacity 4 | Capacity 6 | |::| :| :| :|
-| E10 | 200,000 | 530,000 | 570,000 |
-| E20 | 390,000 | 520,000 | 580,000 |
-| E50 | 530,000 | 580,000 | 580,000 |
-| E100 | 580,000 | 580,000 | 580,000 |
+| E10 | 200,000 | 830,000 | 930,000 |
+| E20 | 480,000 | 710,000 | 950,000 |
+| E50 | 900,000 | 1,110,000 | 1,200,000 |
+| E100 | 1,600,000 | 1,120,000 | 1,200,000 |
| Instance | Capacity 3 | Capacity 9 | |::| :| :|
-| F300 | 310,000 | 530,000 |
-| F700 | 350,000 | 550,000 |
-| F1500 | 360,000 | 550,000 |
+| F300 | 390,000 | 640,000 |
+| F700 | 370,000 | 610,000 |
+| F1500 | 390,000 | 670,000 |
#### Scaling out - OSS cluster policy | Instance | Capacity 2 | Capacity 4 | Capacity 6 | |::| :| :| :|
-| E10 | 800,000 | 720,000 | 1,280,000 |
-| E20 | 710,000 | 950,000 | 1,250,000 |
-| E50 | 950,000 | 1,260,000 | 1,300,000 |
-| E100 | 960,000 | 1,840,000 | 1,930,000|
+| E10 | 1,000,000 | 1,900,000 | 2,500,000 |
+| E20 | 900,000 | 1,700,000 | 2,300,000 |
+| E50 | 1,700,000 | 3,000,000 | 3,900,000 |
+| E100 | 2,500,000 | 4,400,000 | 4,900,000|
| Instance | Capacity 3 | Capacity 9 |
-|::||::| :| :|
- F300 | 610,000 | 970,000 |
-| F700 | 680,000 | 1,280,000 |
-| F1500 | 620,000 | 1,850,000 |
+|::| :| :|
+| F300 | 1,200,000 | 2,600,000 |
+| F700 | 1,200,000 | 2,600,000 |
+| F1500 | 1,100,000 | 2,800,000 |
## Next steps
azure-cache-for-redis Cache How To Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-functions.md
zone_pivot_groups: cache-redis-zone-pivot-group
Previously updated : 05/22/2023 Last updated : 05/24/2023
class RedisMessageModel:
## Next steps - [Introduction to Azure Functions](/azure/azure-functions/functions-overview)
+- [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md)
+- [Using Azure Functions and Azure Cache for Redis to create a write-behind cache](cache-tutorial-write-behind.md)
azure-cache-for-redis Cache Tutorial Functions Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-functions-getting-started.md
+
+ Title: 'Tutorial: Function - Azure Cache for Redis and Azure Functions'
+description: Learn how to use Azure functions with Azure Cache for Redis.
+++++ Last updated : 06/19/2023+++
+# Get started with Azure Functions triggers in Azure Cache for Redis
+
+The following tutorial shows how to implement basic triggers with Azure Cache for Redis and Azure Functions. This tutorial uses VS Code to write and deploy the Azure Function in C#.
+
+## Requirements
+
+- Azure subscription
+- [Visual Studio Code](https://code.visualstudio.com/)
+
+## Instructions
+
+### Set up an Azure Cache for Redis instance
+
+Create a new **Azure Cache for Redis** instance using the Azure portal or your preferred CLI tool. We use a _Standard C1_ instance, which is a good starting point. Use the [quickstart guide](quickstart-create-redis.md) to get started.
+
+<!-- ![Image](Media/CreateCache.png) -->
+
+The default settings should suffice. We use a public endpoint for this demo, but we recommend you use a private endpoint for anything in production.
+
+Creating the cache can take a few minutes, so feel free to move on to the next section while the cache is being created.
+
+### Set up Visual Studio Code
+
+If you haven't installed the functions extension for VS Code, search for _Azure Functions_ in the extensions menu, and select **Install**. If you don't have the C# extension installed, install it, too.
+
+<!-- ![Image](Media/InstallExtensions.png) -->
+
+Next, go to the **Azure** tab and sign in to your existing Azure account, or create a new one.
+
+Create a new local folder on your computer to hold the project that you're building. In our example, we use "AzureRedisFunctionDemo".
+
+In the **Azure** tab, create a new function app by selecting the lightning bolt icon in the top right of the **Workspace** box in the lower left of the screen.
+
+<!-- ![Image](Media/CreateFunctionProject.png) -->
+
+Select the new folder that you've created to start creating a new Azure Functions project. You get several on-screen prompts. Select:
+
+- **C#** as the language
+- **.NET 6.0 LTS** as the .NET runtime
+- **Skip for now** as the project template
+
+> [!NOTE]
+> If you don't have the .NET Core SDK installed, you'll be prompted to install it.
+
+The new project is created:
+
+<!-- ![Image](Media/VSCodeWorkspace.png) -->
+
+### Install necessary NuGet packages
+
+You need to install two NuGet packages:
+
+1. [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/), which is the primary .NET client for Redis.
+
+1. `Microsoft.Azure.WebJobs.Extensions.Redis`, which is the extension that allows Redis keyspace notifications to be used as triggers in Azure Functions.
+
+Install these packages by going to the **Terminal** tab in VS Code and entering the following commands:
+
+```terminal
+dotnet add package StackExchange.Redis
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis
+dotnet restore
+```
+
+### Configure cache
+
+Go to your newly created Azure Cache for Redis instance. Two steps are needed here.
+
+First, enable **keyspace notifications** on the cache so that it triggers on keys and commands. Go to your cache in the Azure portal and select **Advanced settings** from the Resource menu. Scroll down to the field labeled _notify-keyspace-events_ and enter "KEA". Then select **Save** at the top of the window. "KEA" is a configuration string that enables keyspace notifications for all keys and events. For more information on keyspace configuration strings, see the [Redis keyspace notifications documentation](https://redis.io/docs/manual/keyspace-notifications/).
+
+<!-- ![Image](Media/KeyspaceNotifications.png) -->
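
If you want to confirm that notifications are flowing before you wire up the function, one option is a small console sketch that subscribes directly with StackExchange.Redis. This isn't part of the tutorial's function code; the connection string is a placeholder, and the channel name assumes the `keyspaceTest` key used later in this tutorial.

```csharp
using System;
using StackExchange.Redis;

class KeyspaceListener
{
    static void Main()
    {
        // Placeholder: use the primary connection string from the Access keys blade.
        var muxer = ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
        ISubscriber sub = muxer.GetSubscriber();

        // Keyspace notifications arrive on channels named __keyspace@<db>__:<key>.
        sub.Subscribe("__keyspace@0__:keyspaceTest", (channel, message) =>
        {
            // The message is the command that touched the key, for example "set" or "del".
            Console.WriteLine($"{channel}: {message}");
        });

        Console.WriteLine("Listening for keyspace notifications. Press Enter to exit.");
        Console.ReadLine();
    }
}
```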
+
+Second, select **Access keys** from the Resource menu and copy the **Primary connection string** value. You use this connection string to connect to the cache.
+
+<!-- ![Image](Media/AccessKeys.png) -->
+
+### Set up the example code
+
+Go back to VS Code and add a file called `RedisFunctions.cs` to the project. Copy and paste in the following code sample:
+
+```csharp
+using Microsoft.Extensions.Logging;
+using System.Text.Json;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples
+{
+ public static class RedisSamples
+ {
+ public const string localhostSetting = "redisLocalhost";
+
+ [FunctionName(nameof(PubSubTrigger))]
+ public static void PubSubTrigger(
+ [RedisPubSubTrigger(localhostSetting, "pubsubTest")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+
+ [FunctionName(nameof(PubSubTriggerResolvedChannel))]
+ public static void PubSubTriggerResolvedChannel(
+ [RedisPubSubTrigger(localhostSetting, "%pubsubChannel%")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+
+ [FunctionName(nameof(KeyspaceTrigger))]
+ public static void KeyspaceTrigger(
+ [RedisPubSubTrigger(localhostSetting, "__keyspace@0__:keyspaceTest")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+
+ [FunctionName(nameof(KeyeventTrigger))]
+ public static void KeyeventTrigger(
+ [RedisPubSubTrigger(localhostSetting, "__keyevent@0__:del")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+
+ [FunctionName(nameof(ListsTrigger))]
+ public static void ListsTrigger(
+ [RedisListTrigger(localhostSetting, "listTest")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+
+ [FunctionName(nameof(ListsMultipleTrigger))]
+ public static void ListsMultipleTrigger(
+ [RedisListTrigger(localhostSetting, "listTest1 listTest2")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+
+ [FunctionName(nameof(StreamsTrigger))]
+ public static void StreamsTrigger(
+ [RedisStreamTrigger(localhostSetting, "streamTest")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+
+ [FunctionName(nameof(StreamsMultipleTriggers))]
+ public static void StreamsMultipleTriggers(
+ [RedisStreamTrigger(localhostSetting, "streamTest1 streamTest2")] RedisMessageModel model,
+ ILogger logger)
+ {
+ logger.LogInformation(JsonSerializer.Serialize(model));
+ }
+ }
+}
+```
+
+This tutorial shows multiple different triggers:
+
+1. _PubSubTrigger_, which is triggered when activity is published to the pub/sub channel named `pubsubTest`.
+
+1. _KeyspaceTrigger_, which is built on the Pub/Sub trigger. Use it to look for changes to the key `keyspaceTest`.
+
+1. _KeyeventTrigger_, which is also built on the Pub/Sub trigger. Use it to look for any use of the `DEL` command.
+
+1. _ListsTrigger_, which looks for changes to the list `listTest`.
+
+1. _ListsMultipleTrigger_, which looks for changes to the lists `listTest1` and `listTest2`.
+
+1. _StreamsTrigger_, which looks for changes to the stream `streamTest`.
+
+1. _StreamsMultipleTriggers_, which looks for changes to the streams `streamTest1` and `streamTest2`.
+
+To connect to your cache, take the connection string that you copied earlier and paste it in to replace the value of `localhost` at the top of the file, which is set to `127.0.0.1:6379` by default.
+
+<!-- ![Image](Media/ConnectionString.png) -->
+
+### Build and run the code locally
+
+Switch to the **Run and debug** tab in VS Code and select the green arrow to debug the code locally. If you don't have Azure Functions Core Tools installed, you're prompted to install them. In that case, you'll need to restart VS Code after installing.
+
+The code should build successfully, which you can track in the Terminal output.
+
+To test the trigger functionality, try creating and deleting the _keyspaceTest_ key. You can use any way you prefer to connect to the cache. An easy way is to use the built-in Console tool in the Azure Cache for Redis portal. Bring up the cache instance in the Azure portal, and select **Console** to open it.
+
+<!-- ![Image](Media/Console.png) -->
+
+After it's open, try the following commands:
+
+- `SET keyspaceTest 1`
+- `SET keyspaceTest 2`
+- `DEL keyspaceTest`
+- `PUBLISH pubsubTest testMessage`
+- `LPUSH listTest test`
+- `XADD streamTest * name Clippy`
+
+<!-- ![Image](Media/Console2.png) -->
+
+You should see the triggers activating in the terminal:
+
+<!-- ![Image](Media/TriggersWorking.png) -->
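
If you'd rather drive these tests from code instead of the portal console, the following sketch issues equivalent commands with StackExchange.Redis. The connection string is a placeholder; the key, channel, list, and stream names match the ones the sample functions listen on.

```csharp
using StackExchange.Redis;

class TriggerSmokeTest
{
    static void Main()
    {
        // Placeholder: use the primary connection string from the Access keys blade.
        var muxer = ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
        IDatabase db = muxer.GetDatabase();

        db.StringSet("keyspaceTest", "1");                            // fires KeyspaceTrigger
        db.KeyDelete("keyspaceTest");                                 // fires KeyeventTrigger (del)
        muxer.GetSubscriber().Publish("pubsubTest", "testMessage");   // fires PubSubTrigger
        db.ListLeftPush("listTest", "test");                          // fires ListsTrigger
        db.StreamAdd("streamTest", "name", "Clippy");                 // fires StreamsTrigger
    }
}
```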
+
+### Deploy code to an Azure function
+
+Create a new function app by going back to the **Azure** tab, expanding your subscription, and right-clicking **Function App**. Select **Create a Function App in Azure…(Advanced)**.
+
+<!-- ![Image](Media/CreateFunctionApp.png) -->
+
+You see several prompts to configure the new function app:
+
+- Enter a unique name
+- Choose **.NET 6** as the runtime stack
+- Choose either **Linux** or **Windows** (either works)
+- Select an existing or new resource group to hold the Function App
+- Choose the same region as your cache instance
+- Select **Premium** as the hosting plan
+- Create a new App Service plan
+- Choose the **EP1** pricing tier.
+- Choose an existing storage account or create a new one
+- Create a new Application Insights resource. We use the resource to confirm the trigger is working.
+
+> [!IMPORTANT]
+> Redis triggers are not currently supported on consumption functions.
+>
+
+Wait a few minutes for the new function app to be created. It appears in the dropdown under **Function App** in your subscription. Right-click the new function app and select **Deploy to Function App…**.
+
+<!-- ![Image](Media/DeployToFunction.png) -->
+
+The app builds and starts deploying. You can track progress in the **Output Window**.
+
+Once deployment is complete, open your Function App in the Azure portal and select **Log Stream** from the Resource menu. Wait for log analytics to connect, and then use the Redis console to activate any of the triggers. You should see the triggers being logged here.
+
+<!-- ![Image](Media/LogStream.png) -->
+
+## Next steps
+
+- [Serverless event-based architectures with Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md)
+- [Build a write-behind cache using Azure Functions](cache-tutorial-write-behind.md)
azure-cache-for-redis Cache Tutorial Write Behind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-write-behind.md
+
+ Title: 'Tutorial: Create a write-behind cache using Azure Cache for Redis and Azure Functions'
+description: Learn how to use Azure Functions and Azure Cache for Redis to create a write-behind cache.
+++++ Last updated : 04/20/2023+++
+# Using Azure Functions and Azure Cache for Redis to create a write-behind cache
+
+The objective of this tutorial is to use an Azure Cache for Redis instance as a [write-behind cache](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-caching/#types-of-caching). The _write-behind_ pattern in this tutorial shows how writes to the cache trigger corresponding writes to an Azure SQL database.
+
+We use the [Redis trigger for Azure Functions](cache-how-to-functions.md) to implement this functionality. In this scenario, you see how to use Azure Cache for Redis to store inventory and pricing information, while backing up that information in an Azure SQL Database.
+
+Every new item or new price written to the cache is then reflected in a SQL table in the database.
+
+## Requirements
+
+- Azure account
+- Completion of the previous tutorial, [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) with the following resources provisioned:
+ - Azure Cache for Redis instance
+ - Azure Function instance
+ - VS Code environment set up with NuGet packages installed
+
+## Instructions
+
+### Create and configure a new Azure SQL Database instance
+
+The SQL database is the backing database for this example. You can create an Azure SQL database instance through the Azure portal or through your preferred method of automation.
+
+This example uses the portal.
+
+First, enter a database name and select **Create new** to create a new SQL server to hold the database.
+
+Select **Use SQL authentication** and enter an admin sign-in and password. Make sure to remember these or write them down. When deploying a SQL server in production, use Azure Active Directory (Azure AD) authentication instead.
+
+Go to the **Networking** tab, and choose **Public endpoint** as a connection method. Select **Yes** for both firewall rules that appear. This endpoint allows access from your Azure Functions app.
+
+Select **Review + create** and then **Create** after validation finishes. The SQL database starts to deploy.
+
+Once deployment completes, go to the resource in the Azure portal, and select the **Query editor** tab. Create a new table called "inventory" to hold the data you'll be writing to it. Use the following SQL command to make a new table with two fields:
+
+- `ItemName`, lists the name of each item
+- `Price`, stores the price of the item
+
+```sql
+CREATE TABLE inventory (
+ ItemName varchar(255),
+ Price decimal(18,2)
+ );
+```
+
+Once that command has completed, expand the **Tables** folder and verify that the new table was created.
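
Optionally, you can also confirm the table from code. The following sketch uses the same `System.Data.SqlClient` client that the function code later in this tutorial uses; the connection string is a placeholder you can copy from the database's **Connection strings** blade.

```csharp
using System;
using System.Data.SqlClient;

class VerifyTable
{
    static void Main()
    {
        // Placeholder: paste the ADO.NET connection string for your Azure SQL database.
        string connectionString = "<YourSQLConnectionString>";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.inventory", connection))
            {
                // A freshly created table returns 0 rows.
                int rowCount = (int)command.ExecuteScalar();
                Console.WriteLine($"inventory table found with {rowCount} rows.");
            }
        }
    }
}
```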
+
+### Configure the Redis trigger
+
+First, make a copy of the VS Code project used in the previous tutorial. Copy the folder from the previous tutorial under a new name, such as "RedisWriteBehindTrigger", and open it in VS Code.
+
+In this example, we're going to use the [pub/sub trigger](cache-how-to-functions.md#redispubsubtrigger) to trigger on `keyevent` notifications. The following list shows our goals:
+
+1. Trigger every time a SET event occurs. A SET event happens when a new key is written to the cache instance or the value of an existing key is changed.
+1. Once a SET event is triggered, access the cache instance to find the value of the new key.
+1. Determine if the key already exists in the "inventory" table in the Azure SQL database.
+ 1. If so, update the value of that key.
+ 1. If not, write a new row with the key and its value.
+
+To do this, copy and paste the following code into `redisfunction.cs`, replacing the existing code.
+
+```csharp
+using System.Text.Json;
+using Microsoft.Extensions.Logging;
+using StackExchange.Redis;
+using System;
+using System.Data.SqlClient;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples
+{
+ public static class RedisSamples
+ {
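+        // Replace these placeholder values with the connection strings for your cache and your SQL database (see the notes after this code block).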
+ public const string cacheAddress = "<YourRedisConnectionString>";
+ public const string SQLAddress = "<YourSQLConnectionString>";
+
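+        // Note: the cache must have keyspace notifications enabled (notify-keyspace-events) so that SET events are published to this channel.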
+ [FunctionName("KeyeventTrigger")]
+ public static void KeyeventTrigger(
+ [RedisPubSubTrigger(ConnectionString = cacheAddress, Channel = "__keyevent@0__:set")] RedisMessageModel model,
+ ILogger logger)
+ {
+ // connect to a Redis cache instance
+ var redisConnection = ConnectionMultiplexer.Connect(cacheAddress);
+ var cache = redisConnection.GetDatabase();
+
+ // get the key that was set and its value
+ var key = model.Message;
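+            // The cast below assumes the value is numeric (a price); a non-numeric value causes the cast to throw.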
+ var value = (double)cache.StringGet(key);
+ logger.LogInformation($"Key {key} was set to {value}");
+
+ // connect to a SQL database
+ String SQLConnectionString = SQLAddress;
+
+ // Define the name of the table you created and the column names
+ String tableName = "dbo.inventory";
+ String column1Value = "ItemName";
+ String column2Value = "Price";
+
+ // Connect to the database. Check if the key exists in the database, if it does, update the value, if it doesn't, add it to the database
+ using (SqlConnection connection = new SqlConnection(SQLConnectionString))
+ {
+ connection.Open();
+ using (SqlCommand command = new SqlCommand())
+ {
+ command.Connection = connection;
+
+ //Form the SQL query to update the database. In practice, you would want to use a parameterized query to prevent SQL injection attacks.
+ //An example query would be something like "UPDATE dbo.inventory SET Price = 1.75 WHERE ItemName = 'Apple'"
+ command.CommandText = "UPDATE " + tableName + " SET " + column2Value + " = " + value + " WHERE " + column1Value + " = '" + key + "'";
+ int rowsAffected = command.ExecuteNonQuery(); //The query execution returns the number of rows affected by the query. If the key doesn't exist, it will return 0.
+
+ if (rowsAffected == 0) //If key doesn't exist, add it to the database
+ {
+ //Form the SQL query to update the database. In practice, you would want to use a parameterized query to prevent SQL injection attacks.
+ //An example query would be something like "INSERT INTO dbo.inventory (ItemName, Price) VALUES ('Bread', '2.55')"
+ command.CommandText = "INSERT INTO " + tableName + " (" + column1Value + ", " + column2Value + ") VALUES ('" + key + "', '" + value + "')";
+ command.ExecuteNonQuery();
+
+                logger.LogInformation($"Item {key} has been added to the database with price {value}");
+ }
+
+ else {
+                logger.LogInformation($"Item {key} has been updated to price {value}");
+ }
+ }
+ connection.Close();
+ }
+
+ //Log the time the function was executed
+ logger.LogInformation($"C# Redis trigger function executed at: {DateTime.Now}");
+ }
++
+ }
+}
+```
+
+> [!IMPORTANT]
+> This example is simplified for the tutorial. For production use, we recommend that you use parameterized SQL queries to prevent SQL injection attacks.
+>
+
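+For reference, here's a minimal sketch of how the update in the sample could be written as a parameterized query. It reuses the `command`, `value`, and `key` variables from the sample; the parameter names `@Price` and `@ItemName` are illustrative.
+
+```csharp
+// Parameterized version of the UPDATE from the sample (sketch only).
+command.CommandText = "UPDATE dbo.inventory SET Price = @Price WHERE ItemName = @ItemName";
+command.Parameters.AddWithValue("@Price", value);
+command.Parameters.AddWithValue("@ItemName", key);
+int rowsAffected = command.ExecuteNonQuery();
+```
+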
+You need to update the `cacheAddress` and `SQLAddress` variables with the connection strings for your cache instance and your SQL database. You need to enter the password for your SQL database connection string manually, because the connection string you copy from the portal doesn't include the password. You can find the Redis connection string under **Access keys** on the Resource menu of the Azure Cache for Redis resource. You can find the SQL database connection string under the **ADO.NET** tab in **Connection strings** on the Resource menu of the SQL database resource.
+
+You'll see errors on some of the SQL classes until you import the `System.Data.SqlClient` NuGet package. Go to the VS Code terminal and use the following command:
+
+```dos
+dotnet add package System.Data.SqlClient
+```
+
+### Build and run the project
+
+Go to the **Run and debug** tab in VS Code and run the project. Navigate back to your Azure Cache for Redis instance in the Azure portal and select the **Console** button to open the Redis console. Try a few `SET` commands, for example `SET Apple 1.75` and `SET Bread 2.55`.
+
+Back in VS Code, you should see the triggers being registered.
+
+To validate that the triggers are working, go to the SQL database instance in the Azure portal. Then, select **Query editor** from the Resource menu. Create a **New Query** with the following SQL to view the top 100 items in the inventory table:
+
+```sql
+SELECT TOP (100) * FROM [dbo].[inventory]
+```
+
+You should see the items written to your Azure Cache for Redis instance show up here!
+
+### Deploy to your Azure Functions App
+
+The only thing left is to deploy the code to the actual Azure Function app. As before, go to the Azure tab in VS Code, find your subscription, expand it, find the Function App section, and expand that. Select and hold (or right-click) your Azure Function app. Then, select **Deploy to Function App…**
+
+Once the deployment has finished, go back to your Azure Cache for Redis instance and use SET commands to write more values. You should see these show up in your Azure SQL database as well.
+
+If you'd like to confirm that your Azure Functions app is working properly, go to the app in the portal and select **Log stream** from the Resource menu. You should see the triggers executing there, and the corresponding updates being made to your SQL database.
+
+If you'd like to clear the SQL table without deleting it, you can use the following query:
+
+```sql
+TRUNCATE TABLE [dbo].[inventory]
+```
+
+## Summary
+
+This tutorial and [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) show how to use Azure Cache for Redis to trigger Azure Functions apps, and how to build on that functionality to use Azure Cache for Redis as a write-behind cache with Azure SQL Database. Using Azure Cache for Redis with Azure Functions is a powerful combination that can solve many integration and performance problems.
+
+## Next steps
+
+- [Serverless event-based architectures with Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md)
+- [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md)
azure-functions Functions Bindings Azure Data Explorer Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-input.md
The [C# library](functions-dotnet-class-library.md) uses the [KustoAttribute](ht
| Attribute property |Description| ||| | **Database** | Required. The database against which the query has to be executed. |
-| **Connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to look up on the variable _**KustoConnectionString**_, at runtime this variable is looked up against the environment. Documentation on connection string can be found at [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example:`"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_` |
+| **Connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to looking up the variable _**KustoConnectionString**_; at runtime, this variable is looked up against the environment. For documentation on connection strings, see [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto). For example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"` |
| **KqlCommand** | Required. The KqlQuery that has to be executed. Can be a KQL query or a KQL Function call| | **KqlParameters** | Optional. Parameters that act as predicate variables for the KqlCommand. For example "@name={name},@Id={id}" where the parameters {name} and {id} is substituted at runtime with actual values acting as predicates. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). | | **ManagedServiceIdentity** | Optional. A managed identity can be used to connect to Azure Data Explorer. To use a System managed identity, use "system", any other identity names are interpreted as user managed identity |
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
||| | **name** | Required. The name of the variable that represents the query results in function code. | | **database** | Required. The database against which the query has to be executed. |
-| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to look up on the variable _**KustoConnectionString**_, at runtime this variable is looked up against the environment. Documentation on connection string can be found at [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example:`"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_` |
+| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to looking up the variable _**KustoConnectionString**_; at runtime, this variable is looked up against the environment. For documentation on connection strings, see [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto). For example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"` |
| **kqlCommand** | Required. The KqlQuery that has to be executed. Can be a KQL query or a KQL Function call| |**kqlParameters** | Optional. Parameters that act as \predicate variables for the KqlCommand. For example "@name={name},@Id={id}" where the parameters {name} and {id} is substituted at runtime with actual values acting as predicates. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). | | **managedServiceIdentity** | A managed identity can be used to connect to Azure Data Explorer. To use a System managed identity, use "system", any other identity names are interpreted as user managed identity|
The following table explains the binding configuration properties that you set i
|**direction** | Required. Must be set to `in`. | |**name** | Required. The name of the variable that represents the query results in function code. | | **database** | Required. The database against which the query has to be executed. |
-| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to look up on the variable _**KustoConnectionString**_, at runtime this variable is looked up against the environment. Documentation on connection string can be found at [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example:`"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_` |
+| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to looking up the variable _**KustoConnectionString**_; at runtime, this variable is looked up against the environment. For documentation on connection strings, see [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto). For example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"` |
| **kqlCommand** | Required. The KqlQuery that has to be executed. Can be a KQL query or a KQL Function call| |**kqlParameters** | Optional. Parameters that act as predicate variables for the KqlCommand. For example "@name={name},@Id={id}" where the parameters {name} and {id} is substituted at runtime with actual values acting as predicates. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). | | **managedServiceIdentity** | A managed identity can be used to connect to Azure Data Explorer. To use a System managed identity, use "system", any other identity names are interpreted as user managed identity|
The following table explains the binding configuration properties that you set i
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-python,programming-language-java"
-The attribute's constructor takes the **Database** and the attributes **KQLCommand**, KQLParameters, and the Connection setting name. The **KQLCommand** can be a KQL statement or a KQL function. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example: `"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_`. Queries executed by the input binding are parameterized and the values provided in the KQLParameters are used at runtime.
+The attribute's constructor takes the **Database**, the **KQLCommand** and **KQLParameters** attributes, and the **Connection** setting name. The **KQLCommand** can be a KQL statement or a KQL function. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [Kusto connection string](/azure/data-explorer/kusto/api/connection-strings/kusto), for example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"`. Queries executed by the input binding are parameterized, and the values provided in **KQLParameters** are used at runtime.
::: zone-end
azure-functions Functions Bindings Azure Data Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-output.md
The [C# library](functions-dotnet-class-library.md) uses the [KustoAttribute](ht
| Attribute property |Description| ||| | **Database** | Required. The database against which the query has to be executed. |
-| **Connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to look up on the variable _**KustoConnectionString**_, at runtime this variable is looked up against the environment. Documentation on connection string can be found at [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example:`"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_` .|
+| **Connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to looking up the variable _**KustoConnectionString**_; at runtime, this variable is looked up against the environment. For documentation on connection strings, see [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto). For example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"`. |
| **TableName** | Required. The table to ingest the data into.| | **MappingRef** | Optional. attribute to pass a [mapping ref](/azure/data-explorer/kusto/management/create-ingestion-mapping-command) that is already defined in the cluster. | | **ManagedServiceIdentity** | Optional. A managed identity can be used to connect to Azure Data Explorer. To use a System managed identity, use "system", any other identity names are interpreted as user managed identity |
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
||| | **name** | Required. The name of the variable that represents the query results in function code. | | **database** | Required. The database against which the query has to be executed. |
-| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to look up on the variable _**KustoConnectionString**_, at runtime this variable is looked up against the environment. Documentation on connection string can be found at [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example:`"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_` |
+| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to looking up the variable _**KustoConnectionString**_; at runtime, this variable is looked up against the environment. For documentation on connection strings, see [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto). For example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"` |
| **tableName** | Required. The table to ingest the data into.| | **mappingRef** | Optional. attribute to pass a [mapping ref](/azure/data-explorer/kusto/management/create-ingestion-mapping-command) that is already defined in the cluster. | | **dataFormat** | Optional. The default data format is `multijson/json`. This can be set to _**text**_ formats supported in the datasource format [enumeration](/azure/data-explorer/kusto/api/netfx/kusto-ingest-client-reference#enum-datasourceformat). Samples are validated and provided for csv and JSON formats. |
The following table explains the binding configuration properties that you set i
|**direction** | Required. Must be set to `out`. | |**name** | Required. The name of the variable that represents the query results in function code. | | **database** | Required. The database against which the query has to be executed. |
-| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to look up on the variable _**KustoConnectionString**_, at runtime this variable is looked up against the environment. Documentation on connection string can be found at [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example: `"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_` |
+| **connection** | Required. The _**name**_ of the variable that holds the connection string, resolved through environment variables or through function app settings. Defaults to looking up the variable _**KustoConnectionString**_; at runtime, this variable is looked up against the environment. For documentation on connection strings, see [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto). For example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"` |
| **tableName** | Required. The table to ingest the data into.| | **mappingRef** | Optional. attribute to pass a [mapping ref](/azure/data-explorer/kusto/management/create-ingestion-mapping-command) that is already defined in the cluster. | | **dataFormat** | Optional. The default data format is `multijson/json`. This can be set to _**text**_ formats supported in the datasource format [enumeration](/azure/data-explorer/kusto/api/netfx/kusto-ingest-client-reference#enum-datasourceformat). Samples are validated and provided for csv and JSON formats. |
The following table explains the binding configuration properties that you set i
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-python,programming-language-java"
-The attribute's constructor takes the Database and the attributes TableName, MappingRef, DataFormat and the Connection setting name. The **KQLCommand** can be a KQL statement or a KQL function. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [Kusto connection strings](/azure/data-explorer/kusto/api/connection-strings/kusto) for example:`"KustoConnectionString": "Data Source=https://_**cluster**_.kusto.windows.net;Database=_**Database**_;Fed=True;AppClientId=_**AppId**_;AppKey=_**AppKey**_;Authority Id=_**TenantId**_`. Queries executed by the input binding are parameterized and the values provided in the KQLParameters are used at runtime.
+The attribute's constructor takes the **Database**, the **TableName**, **MappingRef**, and **DataFormat** attributes, and the **Connection** setting name. The **KQLCommand** can be a KQL statement or a KQL function. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [Kusto connection string](/azure/data-explorer/kusto/api/connection-strings/kusto), for example: `"KustoConnectionString": "Data Source=https://your_cluster.kusto.windows.net;Database=your_Database;Fed=True;AppClientId=your_AppId;AppKey=your_AppKey;Authority Id=your_TenantId"`. Queries executed by the input binding are parameterized, and the values provided in **KQLParameters** are used at runtime.
::: zone-end
azure-functions Functions How To Use Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-nat-gateway.md
Last updated 2/26/2021
# Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway
-Virtual network address translation (NAT) simplifies outbound-only internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. An NAT can be useful for apps that need to consume a third-party service that uses an allowlist of IP address as a security measure. To learn more, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+Virtual network address translation (NAT) simplifies outbound-only internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. A NAT gateway can be useful for apps that need to consume a third-party service that uses an allowlist of IP addresses as a security measure. To learn more, see [What is Azure NAT Gateway?](../virtual-network/nat-gateway/nat-overview.md).
-This tutorial shows you how to use virtual network NATs to route outbound traffic from an HTTP triggered function. This function lets you check its own outbound IP address. During this tutorial, you'll:
+This tutorial shows you how to use NAT gateways to route outbound traffic from an HTTP triggered function. This function lets you check its own outbound IP address. During this tutorial, you'll:
> [!div class="checklist"] > * Create a virtual network
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md
Title: Connect to a Web Feature Service (WFS) service | Microsoft Azure Maps description: Learn how to connect to a WFS service, then query the WFS service using the Azure Maps web SDK and the Spatial IO module.-- Previously updated : 03/03/2020-++ Last updated : 06/20/2023+ - # Connect to a WFS service
The following features are supported by the `WfsClient` class:
The `atlas.io.ogc.WfsClient` class in the spatial IO module makes it easy to query a WFS service and convert the responses into GeoJSON objects. This GeoJSON object can then be used for other mapping purposes.
-The following code queries a WFS service and renders the returned features on the map.
+The [Simple WFS example] sample shows how to easily query a Web Feature Service (WFS) and render the returned features on the map.
-<br/>
-<iframe height='700' scrolling='no' title='Simple WFS example' src='//codepen.io/azuremaps/embed/MWwvVYY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/MWwvVYY/'>Simple WFS example</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='700' scrolling='no' title='Simple WFS example' src='//codepen.io/azuremaps/embed/MWwvVYY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/MWwvVYY/'>Simple WFS example</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Supported filters
The specification for the WFS standard makes use of OGC filters. The filters bel
- `PropertyIsNil` - `PropertyIsBetween`
-The following code demonstrates the use of different filters with the WFS client.
+The [WFS filter example] sample demonstrates the use of different filters with the WFS client.
-<br/>
-<iframe height='500' scrolling='no' title= 'WFS filter examples' src='//codepen.io/azuremaps/embed/NWqvYrV/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/NWqvYrV/'>WFS filter examples</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='500' scrolling='no' title= 'WFS filter examples' src='//codepen.io/azuremaps/embed/NWqvYrV/?height=500&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/NWqvYrV/'>WFS filter examples</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## WFS service explorer
-The following code uses the WFS client to explore WFS services. Select a property type layer within the service and see the associated legend.
+The [WFS service explorer] sample is a simple tool for exploring WFS services on Azure Maps.
-<br/>
+<!--
<iframe height='700' scrolling='no' title= 'WFS service explorer' src='//codepen.io/azuremaps/embed/bGdrvmG/?height=700&theme-id=0&default-tab=result&embed-version=2&editable=true' frameborder='no' allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/bGdrvmG/'>WFS service explorer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
-To access WFS services hosted on non-CORS enabled endpoints, a CORS enabled proxy service can be passed into the `proxyService` option of the WFS client as shown below.
+To access WFS services hosted on non-CORS enabled endpoints, a CORS enabled proxy service can be passed into the `proxyService` option of the WFS client as shown below.
```JavaScript //Create the WFS client to access the service and use the proxy service settings
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Supported data format details](spatial-io-supported-data-format-details.md)+
+[Simple WFS example]: https://samples.azuremaps.com/spatial-io-module/simple-wfs-example
+[WFS filter example]: https://samples.azuremaps.com/spatial-io-module/wfs-filter-examples
+[WFS service explorer]: https://samples.azuremaps.com/spatial-io-module/wfs-service-explorer
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The following features and services now use Azure Monitor Agent in preview. This
| [VM insights](../vm/vminsights-overview.md) | Public preview with Azure Monitor Agent | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights](../vm/vminsights-enable-overview.md) | | [Container insights](../containers/container-insights-overview.md) | Public preview with Azure Monitor Agent | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) | | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview with Azure Monitor Agent | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview with Azure Monitor Agent](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview with Azure Monitor Agent](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview with Azure Monitor Agent](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | See [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [GA](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview with Azure Monitor Agent](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview with Azure Monitor Agent](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | See [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. |
| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Public preview with Azure Monitor Agent | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview with Azure Monitor Agent | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | | Azure Stack HCI Insights | Private preview | No other extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
azure-monitor Azure Monitor Agent Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-transformation.md
To complete this procedure, you need:
1. Run a basic query the custom logs table to view table data. 1. Use the query window to write and test a query that transforms the raw data in your table.
- For information about the KQL operators that transformations support, see [Structure of transformation in Azure Monitor](../essentials/data-collection-transformations-structure.md#kql-limitations).
+ For information about the KQL operators that transformations support, see [Structure of transformation in Azure Monitor](../essentials/data-collection-transformations-structure.md#kql-limitations).
+
+   > [!NOTE]
+   > The only columns that are available to apply transforms against are `TimeGenerated` and `RawData`. Other columns are added to the table automatically after the transformation and aren't available at the time of transformation.
+   > The `_ResourceId` column can't be used in the transformation.
**Example**
Learn more about:
- [Data collection rules](../essentials/data-collection-rule-overview.md). - [Data collection endpoints](../essentials/data-collection-endpoint-overview.md).
-
+
azure-monitor Data Collection Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-firewall.md
-# Collect Firewall logs with Azure Monitor Agent (Preview)
+# Collect Firewall logs with Azure Monitor Agent (Private Preview; [sign up here](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR5HgP8BLvCpLhecdvdpZy8VUQ0VCRVg2STY0UkYyOU9RNkU3Qk80VkFOMS4u))
Windows Firewall is a Microsoft Windows application that filters information coming to your system from the Internet and blocks potentially harmful programs. It is also known as Microsoft Defender Firewall in Windows 10 version 2004 and later. You can turn it on or off by following these steps: - Select Start, then open Settings - Under Update & Security, select Windows Security, Firewall & network protection.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The column names used here are for example only. The column names for your log w
## Troubleshoot Use the following steps to troubleshoot collection of text logs.
+### Troubleshooting tool
+Use the [Azure Monitor Agent Troubleshooter](use-azure-monitor-agent-troubleshooter.md) to look for common issues and to share results with Microsoft.
### Check if any custom logs have been received Start by checking if any records have been collected for your custom log table by running the following query in Log Analytics. If records aren't returned, check the other sections for possible causes. This query looks for entries in the last two days, but you can modify it for another time range. It can take 5-7 minutes for new data from your tables to be uploaded. Only new data is uploaded; any log file last written to before the DCR was created won't be uploaded.
while ($true)
```
-### Share logs with Microsoft
-If everything is configured properly, but you're still not collecting log data, use the following procedure to collect diagnostics logs for Azure Monitor agent to share with the Azure Monitor group.
-
-1. Open an elevated PowerShell window.
-2. Change to directory `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\[version]\`.
-3. Execute the script: `.\CollectAMALogs.ps1`.
-4. Share the `AMAFiles.zip` file generated on the desktop.
## Next steps
azure-monitor Alerts Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-automatic-migration.md
Title: Understand how the automatic migration process for your Azure Monitor classic alerts works description: Learn how the automatic migration process works. Previously updated : 2/23/2022 Last updated : 06/20/2023 # Understand the automatic migration process for your classic alert rules
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic-portal.md
Title: Create and manage classic metric alerts using Azure Monitor
description: Learn how to use Azure portal or PowerShell to create, view and manage classic metric alert rules. Previously updated : 2/23/2022 Last updated : 06/20/2023 # Create, view, and manage classic metric alerts using Azure Monitor
azure-monitor Alerts Classic.Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic.overview.md
Title: Overview of classic alerts in Azure Monitor description: Classic alerts will be deprecated. Alerts enable you to monitor Azure resource metrics, events, or logs, and they notify you when a condition you specify is met. Previously updated : 2/23/2022 Last updated : 06/20/2023 # What are classic alerts in Azure?
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
Title: View and manage log alert rules created in previous versions| Microsoft D
description: Use the Azure Monitor portal to manage log alert rules created in earlier versions. Previously updated : 2/23/2022 Last updated : 06/20/2023
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
Title: Update logic apps & runbooks for alerts migration description: Learn how to modify your webhooks, logic apps, and runbooks to prepare for voluntary migration. Previously updated : 2/23/2022 Last updated : 06/20/2023 # Prepare your logic apps and runbooks for migration of classic alert rules
azure-monitor Alerts Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md
Alert rules and alert processing rules reference other Azure resources. Examples
There are two main reasons why your rules might stop working after moving the target resources: -- The scope of your rule is explicitly referring the old resource.
+- The scope of your rule is explicitly referring to the old resource.
- Your alert rule is based on metrics. ## Rule scope explicitly refers to the old resource
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
Title: Troubleshoot log alerts in Azure Monitor | Microsoft Docs
description: Common issues, errors, and resolutions for log alert rules in Azure. Previously updated : 2/23/2022 Last updated : 06/20/2023 # Troubleshoot log alerts in Azure Monitor
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md
Title: Understand migration for Azure Monitor alerts
description: Understand how the alerts migration works and troubleshoot problems. Previously updated : 2/23/2022 Last updated : 06/20/2023 # Understand migration options to newer alerts
azure-monitor Api Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/api-alerts.md
Title: Legacy Log Analytics Alert REST API description: The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and examples for performing different operations. Previously updated : 2/23/2022 Last updated : 06/20/2023
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Application maps represent the logical structure of a distributed application. I
Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies and has health KPI and alerts status. You can select any component to get more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also select Azure diagnostics, such as SQL Database Advisor recommendations.
-Application Map also features [Intelligent view](#application-map-intelligent-view-public-preview) to assist with fast service health investigations.
+Application Map also features [Intelligent view](#application-map-intelligent-view) to assist with fast service health investigations.
## What is a component?
There are many filter combinations. Here are some suggestions that apply to most
-## Application Map Intelligent view (public preview)
+## Application Map Intelligent view
The following sections discuss Intelligent view.
If an edge is highlighted, the explanation from the model should point you to th
#### Why doesn't Intelligent view load?
-If **Intelligent view** doesn't load:
-
-1. Set the configured time frame to six days or less.
-1. The **Try preview** button must be selected to opt in.
-
- :::image type="content" source="media/app-map/intelligent-view-try-preview.png" alt-text="Screenshot that shows the Try preview button in the Application Map user interface." lightbox="media/app-map/intelligent-view-try-preview.png":::
+If **Intelligent view** doesn't load, set the configured time frame to six days or less.
#### Why does Intelligent view take a long time to load?
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
If you're using the following supported SDKs, you can configure the JavaScript SDK
| : | : | | ASP.NET Core | [Enable client-side telemetry for web applications](./asp-net-core.md?tabs=netcorenew%2Cnetcore6#enable-client-side-telemetry-for-web-applications) | | Node.js | [Automatic web Instrumentation](./nodejs.md#automatic-web-instrumentationpreview) |
+ | Java | [Browser SDK Loader](./java-standalone-config.md#browser-sdk-loader-preview) |
For other methods to instrument your application with the Application Insights JavaScript SDK, see [Get started with the JavaScript SDK](./javascript-sdk.md).
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 05/20/2023 Last updated : 06/19/2023 ms.devlang: java
For more information, see [Use Application Insights Java In-Process Agent in Azu
### Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.13.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.13.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.14.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.13.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.13.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.14.jar" -jar <myapp.jar>
```
FROM ...
COPY target/*.jar app.jar
-COPY agent/applicationinsights-agent-3.4.13.jar applicationinsights-agent-3.4.13.jar
+COPY agent/applicationinsights-agent-3.4.14.jar applicationinsights-agent-3.4.14.jar
COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
-ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.13.jar", "-jar", "app.jar"]
+ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.14.jar", "-jar", "app.jar"]
```
-In this example we have copied the `applicationinsights-agent-3.4.13.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
+In this example we have copied the `applicationinsights-agent-3.4.14.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
### Third-party container images
The following sections show how to set the Application Insights Java agent path
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.13.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.14.jar"
``` #### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.13.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.13.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.14.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to `CATALINA_OPTS`.
### Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.13.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.14.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.13.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.14.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to `CATALINA_OPTS`.
#### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the `Java Options` under the `Java` tab.
### JBoss EAP 7 #### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.13.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.14.jar -Xms1303m -Xmx1303m ..."
... ``` #### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.13.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.14.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.13.jar
+-javaagent:path/to/applicationinsights-agent-3.4.14.jar
``` ### Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.14.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.13.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.14.jar>
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to the existing `j
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.13.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.14.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.13.jar` to the existing `j
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.13.jar
+-javaagent:path/to/applicationinsights-agent-3.4.14.jar
``` ### Others
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 05/20/2023 Last updated : 06/19/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.13.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.13.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.14.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.13.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.13.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.14.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.13.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.13.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.14.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.13</version>
+ <version>3.4.14</version>
</dependency> ```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 05/20/2023 Last updated : 06/19/2023 ms.devlang: java
More information and configuration options are provided in the following section
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.13.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.14.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.13.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.14.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.13.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.14.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.13</version>
+ <version>3.4.14</version>
</dependency> ```
Starting from version 3.2.0, if you want to capture controller "InProc" dependen
} ```
+## Browser SDK Loader (preview)
+
+This feature automatically injects the [Browser SDK Loader](https://github.com/microsoft/ApplicationInsights-JS#snippet-setup-ignore-if-using-npm-setup) into your application's HTML pages, including configuring the appropriate Connection String.
+
+For example, when your Java application returns a response like:
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <title>Title</title>
+ </head>
+ <body>
+ </body>
+</html>
+```
+
+Then it will be automatically modified to return:
+```html
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <script type="text/javascript">
+ !function(v,y,T){var S=v.location,k="script"
+ <!-- Removed for brevity -->
+ connectionString: "YOUR_CONNECTION_STRING"
+ <!-- Removed for brevity --> }});
+ </script>
+ <title>Title</title>
+ </head>
+ <body>
+ </body>
+</html>
+```
+
+The script helps customers track web user data and sends the collected telemetry back to the Azure portal. For details, see [ApplicationInsights-JS](https://github.com/microsoft/ApplicationInsights-JS).
+
+If you want to enable this feature, add the following configuration option:
+
+```json
+"preview": {
+ "browserSdkLoader": {
+ "enabled": true
+ }
+}
+```
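For reference, the injected loader is roughly what you would get by initializing the Application Insights JavaScript SDK in your page yourself. The following is only a minimal sketch using the `@microsoft/applicationinsights-web` npm package; `YOUR_CONNECTION_STRING` is a placeholder for your own connection string.

```javascript
// A minimal sketch of manual Application Insights JavaScript SDK setup, shown only
// for comparison with what the browser SDK loader injects automatically.
// "YOUR_CONNECTION_STRING" is a placeholder for your own connection string.
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: {
    connectionString: "YOUR_CONNECTION_STRING"
  }
});

appInsights.loadAppInsights(); // start the SDK
appInsights.trackPageView();   // send an initial page view
```

With the browser SDK loader enabled, the Java agent handles this injection and the connection string for you, so no application code change is needed.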
+ ## Telemetry processors (preview) You can use telemetry processors to configure rules that are applied to request, dependency, and trace telemetry. For example, you can:
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.13.jar` is located.
+`applicationinsights-agent-3.4.14.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 05/20/2023 Last updated : 06/19/2023 ms.devlang: java
There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.13.jar
+-javaagent:path/to/applicationinsights-agent-3.4.14.jar
``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 05/20/2023 Last updated : 06/19/2023 ms.devlang: csharp, javascript, typescript, python
dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.13.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.13/applicationinsights-agent-3.4.13.jar) file.
+Download the [applicationinsights-agent-3.4.14.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.14/applicationinsights-agent-3.4.14.jar) file.
> [!WARNING] >
var loggerFactory = LoggerFactory.Create(builder =>
Java autoinstrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.13.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.14.jar"` to your application's JVM args.
> [!TIP] > For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
To paste your Connection String, select from the options below:
B. Set via Configuration File - Java Only (Recommended)
- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.13.jar` with the following content:
+ Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.14.jar` with the following content:
```json {
This isn't available in .NET.
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.13</version>
+ <version>3.4.14</version>
</dependency> ```
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
If you have a setting that has minimum=2, maximum=2, and the current instance co
If you manually update the instance count to a value above or below the maximum, the autoscale engine automatically scales back to the minimum (if below) or the maximum (if above). For example, you set the range between 3 and 6. If you have one running instance, the autoscale engine scales to three instances on its next run. Likewise, if you manually set the scale to eight instances, autoscale scales it back to six instances on its next run. Manual scaling is temporary unless you also reset the autoscale rules. ### Always use a scale-out and scale-in rule combination that performs an increase and decrease
-If you use only one part of the combination, autoscale only takes action in a single direction (scale out or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. This situation isn't optimal. Ideally, you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage, you want your resource to scale down so that you can realize cost savings.
+If you use only one part of the combination, autoscale only takes action in a single direction (scale out or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. This situation isn't optimal. Ideally, you want your resource to scale out at times of high usage to ensure availability. Similarly, at times of low usage, you want your resource to scale in so that you can realize cost savings.
When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, it's possible that the scale-in and scale-out conditions could be met at the same time and result in some level of flapping. For example, we don't recommend the following rule combination because there's no scale-in rule for memory usage:
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
Predictive autoscale adheres to the scaling boundaries you've set for your virtu
> [!NOTE] > Before you can enable predictive autoscale or forecast-only mode, you must set up the standard reactive autoscale conditions.
-1. To enable forecast-only mode, select it from the dropdown. Define a scale-up trigger based on *Percentage CPU*. Then select **Save**. The same process applies to enable predictive autoscale. To disable predictive autoscale or forecast-only mode, select **Disable** from the dropdown.
+1. To enable forecast-only mode, select it from the dropdown. Define a scale-out trigger based on *Percentage CPU*. Then select **Save**. The same process applies to enable predictive autoscale. To disable predictive autoscale or forecast-only mode, select **Disable** from the dropdown.
:::image type="content" source="media/autoscale-predictive/enable-forecast-only-mode-3.png" alt-text="Screenshot that shows enabling forecast-only mode.":::
This section addresses common errors and warnings.
You receive the following error message:
- *Predictive autoscale is based on the metric percentage CPU of the current resource. Choose this metric in the scale up trigger rules*.
+ *To enable predictive autoscale, create a scale out rule based on 'Percentage CPU' metric. Click here to go to the 'Configure' tab to set an autoscale rule.*
:::image type="content" source="media/autoscale-predictive/error-not-enabled.png" alt-text="Screenshot that shows error message predictive autoscale is based on the metric percentage CPU of the current resource.":::
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
+> [!TIP]
+> Get notified when this page is updated by copying and pasting the following URL into your feed reader:
+>
+> !["An rss icon"](./media/whats-new/rss.png) https://aka.ms/azmon/rss
## May 2023 |Subservice| Article | Description |
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview
description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 06/15/2023 Last updated : 06/20/2023 # Bicep CLI commands
The `decompile-params` command decompile a JSON parameters file to a _.biceppara
bicep decompile-params azuredeploy.parameters.json --bicep-file ./dir/main.bicep ```
-This command decompiles a _azuredeploy.parameters.json_ parameters file into a _azuredeploy.parameters.bicepparam_ file. `-bicep-file` specifies the path to the Bicep file (relative to the .bicepparam file) that will be referenced in the `using` declaration.
+This command decompiles an _azuredeploy.parameters.json_ parameters file into an _azuredeploy.parameters.bicepparam_ file. `--bicep-file` specifies the path to the Bicep file (relative to the .bicepparam file) that is referenced in the `using` declaration.
## generate-params
-The `generate-params` command builds a parameters file from the given Bicep file, updates if there is an existing parameters file.
+The `generate-params` command builds a parameters file from the given Bicep file, or updates the parameters file if one already exists.
```azurecli bicep generate-params main.bicep --output-format bicepparam --include-params all
The command returns an array of available versions.
```azurecli [
- "v0.4.1",
- "v0.3.539",
- "v0.3.255",
- "v0.3.126",
- "v0.3.1",
- "v0.2.328",
- "v0.2.317",
- "v0.2.212",
- "v0.2.59",
- "v0.2.14",
- "v0.2.3",
- "v0.1.226-alpha",
- "v0.1.223-alpha",
- "v0.1.37-alpha",
- "v0.1.1-alpha"
+ "v0.18.4",
+ "v0.17.1",
+ "v0.16.2",
+ "v0.16.1",
+ "v0.15.31",
+ "v0.14.85",
+ "v0.14.46",
+ "v0.14.6",
+ "v0.13.1",
+ "v0.12.40",
+ "v0.12.1",
+ "v0.11.1",
+ "v0.10.61",
+ "v0.10.13",
+ "v0.9.1",
+ "v0.8.9",
+ "v0.8.2",
+ "v0.7.4",
+ "v0.6.18",
+ "v0.6.11",
+ "v0.6.1",
+ "v0.5.6",
+ "v0.4.1318",
+ "v0.4.1272",
+ "v0.4.1124",
+ "v0.4.1008",
+ "v0.4.613",
+ "v0.4.451",
+ "v0.4.412",
+ "v0.4.63"
] ```
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file
description: Describes the configuration file for your Bicep deployments Previously updated : 05/24/2023 Last updated : 06/20/2023 # Configure your Bicep environment
You can enable preview features by adding:
The preceding sample enables 'userDefineTypes' and 'extensibility`. The available experimental features include: - **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md).-- **paramsFiles**: Allows for the use of a Bicep-style parameters file with a terser syntax than the JSON equivalent parameters file. Currently, you also need a special build of Bicep to enable this feature, so is it inaccessible to most users. See [Parameters - first release](https://github.com/Azure/bicep/issues/9567). - **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file. - **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245). - **symbolicNameCodegen**: Allows the ARM template layer to use a new schema to represent resources as an object dictionary rather than an array of objects. This feature improves the semantic equivalent of the Bicep and ARM templates, resulting in more reliable code generation. Enabling this feature has no effect on the Bicep layer's functionality.
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Title: Create parameters files for Bicep deployment
description: Create parameters file for passing in values during deployment of a Bicep file Previously updated : 06/05/2023 Last updated : 06/20/2023 # Create parameters files for Bicep deployment Rather than passing parameters as inline values in your script, you can use a Bicep parameters file with the `.bicepparam` file extension or a JSON parameters file that contains the parameter values. This article shows how to create parameters files.
+> [!NOTE]
+> The Bicep parameters file is only supported in Bicep CLI version 0.18.4 or newer.
+ A single Bicep file can have multiple Bicep parameters files associated with it. However, each Bicep parameters file is intended for one particular Bicep file. This relationship is established using the `using` statement within the Bicep parameters file. For more information, see [Bicep parameters file](#parameters-file).
-jgao: list the versions for supporting Bicep parameters file. You can compile Bicep parameters files into JSON parameters files to deploy with a Bicep file.
+You can compile Bicep parameters files into JSON parameters files to deploy with a Bicep file. See [build-params](./bicep-cli.md#build-params).
## Parameters file
Use Bicep syntax to declare [objects](./data-types.md#objects) and [arrays](./da
Bicep parameters file has the file extension of `.bicepparam`.
-To deploy to different environments, you create more than one parameters file. When you name the parameters files, identify their use such as development and production. For example, use _main.dev.biceparam_ and _main.prod.json_ to deploy resources.
+To deploy to different environments, you create more than one parameters file. When you name the parameters files, identify their use such as development and production. For example, use _main.dev.bicepparam_ and _main.prod.bicepparam_ to deploy resources.
# [JSON parameters file](#tab/JSON)
For more information, see [Deploy resources with Bicep and Azure PowerShell](./d
## Parameter precedence
-You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence.
+You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence. This feature hasn't been implemented for Bicep parameters files.
It's possible to use an external parameters file, by providing the URI to the file. When you use an external parameters file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Azure VMware Solution SLA guarantees that Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time. Previously updated : 4/11/2023 Last updated : 6/20/2023
Monitoring patterns inside the Azure VMware Solution are similar to Azure VMs wi
Azure VMware Solution implements a shared responsibility model that defines distinct roles and responsibilities of the two parties involved in the offering: Customer and Microsoft. The shared role responsibilities are illustrated in more detail in the following two tables.
-The shared responsibility matrix table shows the high-level responsibilities between a customer and Microsoft for different aspects of the deployment/management of the private cloud and the customer application workloads.
+The shared responsibility matrix table shows the high-level responsibilities between a customer and Microsoft for different aspects of the deployment and management of the private cloud and the customer application workloads.
:::image type="content" source="media/azure-introduction-shared-responsibility-matrix.png" alt-text="screenshot shows the high-level shared responsibility matrix." lightbox="media/azure-introduction-shared-responsibility-matrix.png":::
azure-vmware Migrate Sql Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-failover-cluster.md
Title: Migrate SQL Server failover cluster to Azure VMware Solution
description: Learn how to migrate SQL Server failover cluster to Azure VMware Solution Previously updated : 3/20/2023 Last updated : 6/20/2023
In this article, you'll learn how to migrate a Microsoft SQL Server Failover clu
VMware HCX doesn't support migrating virtual machines with SCSI controllers in physical sharing mode attached to a virtual machine. However, you can overcome this limitation by performing the steps shown in this procedure and by using VMware HCX Cold Migration to move the different virtual machines that make up the cluster. > [!NOTE] > This procedure requires a full shutdown of the cluster. Since the Microsoft SQL Server service will be unavailable during the migration, plan accordingly for the downtime period.
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
description: Learn how to request host quota/capacity for Azure VMware Solution.
Previously updated : 09/27/2021 Last updated : 06/20/2023 #Customer intent: As an Azure service admin, I want to request hosts for either a new private cloud deployment or I want to have more hosts allocated in an existing private cloud.
You'll need an Azure account in an Azure subscription that adheres to one of the
- Region Name - Number of hosts
- - Any other details
+ - Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage)
>[!NOTE] >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
- Region Name - Number of hosts
- - Any other details
+ - Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage)
- Is intended to host multiple customers? >[!NOTE]
azure-web-pubsub Howto Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-client-certificate.md
+
+ Title: Enable client certificate authentication for Azure Web PubSub Service (Preview)
+
+description: How to enable client certificate authentication for Azure Web PubSub Service (Preview)
+++ Last updated : 06/20/2023+++
+# Enable client certificate authentication for Azure Web PubSub Service (Preview)
+
+You can restrict access to your Azure Web PubSub Service by enabling different types of authentication for it. One way to do it is to request a client certificate and validate the certificate in event handlers. This mechanism is called TLS mutual authentication or client certificate authentication. This article shows how to set up your Azure Web PubSub Service to use client certificate authentication.
+
+> [!Note]
+> Enabling client certificate authentication in browser scenarios is generally not recommended. Different browsers behave differently when handling client certificate requests, and you have little control over that behavior in JavaScript applications. If you want to enable client certificate authentication, we recommend using it only in scenarios where you have strong control over TLS settings, for example, in native applications.
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure account, you can [create an account for free](https://azure.microsoft.com/free/).
+* An Azure Web PubSub service (must be Standard tier or above).
+* An Azure Function used to handle connect events.
+* A client certificate. You need to know its SHA-1 thumbprint.
+
+## Deploy Azure Web PubSub Service
+
+Suppose you're going to use a function called `func-client-cert` as an event handler to process `connect` events. Clients connect to a hub called `echo`. Here are the Bicep/ARM templates to deploy an Azure Web PubSub service with client certificate authentication enabled and event handlers configured.
+
+We enable client certificate authentication via the property `tls.clientCertEnabled`.
+
+We configure an event handler for the `connect` event so we can validate the client thumbprint. Also note that `anonymousConnectPolicy` needs to be set to `allow` so clients don't need to send access tokens.
+
+### Bicep
+
+```bicep
+param name string
+param hubName string = 'echo'
+param eventHandlerUrl string = 'https://func-client-cert.azurewebsites.net/api/echo'
+param location string = resourceGroup().location
+
+resource awps 'Microsoft.SignalRService/WebPubSub@2023-03-01-preview' = {
+ name: name
+ location: location
+ sku: {
+ name: 'Standard_S1'
+ tier: 'Standard'
+ size: 'S1'
+ capacity: 1
+ }
+ properties: {
+ tls: {
+ clientCertEnabled: true
+ }
+ }
+}
+
+resource hub 'Microsoft.SignalRService/WebPubSub/hubs@2023-03-01-preview' = {
+ parent: awps
+ name: '${hubName}'
+ properties: {
+ eventHandlers: [
+ {
+ urlTemplate: eventHandlerUrl
+ userEventPattern: '*'
+ systemEvents: [
+ 'connect'
+ ]
+ }
+ ]
+ anonymousConnectPolicy: 'allow'
+ }
+}
+```
+
+### ARM
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "name": {
+ "type": "String"
+ },
+ "hubName": {
+ "defaultValue": "echo",
+ "type": "String"
+ },
+ "eventHandlerUrl": {
+ "defaultValue": "https://func-client-cert.azurewebsites.net/api/echo",
+ "type": "String"
+ },
+ "location": {
+ "defaultValue": "[resourceGroup().location]",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/WebPubSub",
+ "apiVersion": "2023-03-01-preview",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_S1",
+ "tier": "Standard",
+ "size": "S1",
+ "capacity": 1
+ },
+ "properties": {
+ "tls": {
+ "clientCertEnabled": true
+ }
+ }
+ },
+ {
+ "type": "Microsoft.SignalRService/WebPubSub/hubs",
+ "apiVersion": "2023-03-01-preview",
+ "name": "[concat(parameters('name'), '/', parameters('hubName'))]",
+ "dependsOn": [
+ "[resourceId('Microsoft.SignalRService/WebPubSub', parameters('name'))]"
+ ],
+ "properties": {
+ "eventHandlers": [
+ {
+ "urlTemplate": "[parameters('eventHandlerUrl')]",
+ "userEventPattern": "*",
+ "systemEvents": [
+ "connect"
+ ]
+ }
+ ],
+ "anonymousConnectPolicy": "allow"
+ }
+ }
+ ]
+}
+```
+
+## Validate client certificate in event handler
+
+You can validate the incoming client certificate via its SHA-1 thumbprint in the `connect` event. The value is available in the `clientCertificates` field. See [CloudEvents HTTP extension for event handler](reference-cloud-events.md#connect).
+
+Here's a sample function that implements the validation logic.
+
+### JavaScript
+
+```javascript
+module.exports = async function (context, req) {
+ // For client connect event
+ if (req.headers && req.headers['ce-type'] == 'azure.webpubsub.sys.connect') {
+ // CLIENT_CERT_THUMBPRINT should be configured as an environment variable of valid client certificate SHA-1 thumbprint
+ var validCertThumbprint = process.env['CLIENT_CERT_THUMBPRINT'];
+ var certThumbprint = null;
+ if (req.body.clientCertificates) {
+ certThumbprint = req.body.clientCertificates[0].thumbprint;
+ }
+ if (certThumbprint != validCertThumbprint) {
+ context.log('Expect client cert:', validCertThumbprint, 'but got:', certThumbprint);
+ context.res = {
+ status: 403
+ };
+ return;
+ }
+ }
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ headers: {
+ 'WebHook-Allowed-Origin': '*'
+ },
+ };
+}
+```
+
+## Certificate rotation
+
+If you want to rotate the certificate, update your event handler code to accept multiple valid thumbprints, as shown in the sketch below.
+
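For example, here's a minimal sketch of that check. It assumes the valid thumbprints are supplied as a comma-separated list in a hypothetical `CLIENT_CERT_THUMBPRINTS` environment variable:

```javascript
// A minimal sketch: accept any thumbprint from a comma-separated allowlist so that
// both the old and the new certificate are valid during rotation.
// CLIENT_CERT_THUMBPRINTS is a hypothetical environment variable, for example
// "OLD_THUMBPRINT,NEW_THUMBPRINT".
module.exports = async function (context, req) {
  if (req.headers && req.headers['ce-type'] == 'azure.webpubsub.sys.connect') {
    const validThumbprints = (process.env['CLIENT_CERT_THUMBPRINTS'] || '')
      .split(',')
      .map(t => t.trim().toLowerCase())
      .filter(t => t.length > 0);

    let certThumbprint = null;
    if (req.body.clientCertificates) {
      certThumbprint = req.body.clientCertificates[0].thumbprint.toLowerCase();
    }

    if (!certThumbprint || !validThumbprints.includes(certThumbprint)) {
      context.log('Rejected client cert thumbprint:', certThumbprint);
      context.res = {
        status: 403
      };
      return;
    }
  }

  context.res = {
    headers: {
      'WebHook-Allowed-Origin': '*'
    }
  };
}
```

Once every client presents the new certificate, remove the old thumbprint from the allowlist.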
+## Missing client certificate
+
+Azure Web PubSub Service doesn't abort the TLS handshake when clients don't provide a client certificate. It's up to the event handler to decide whether to accept or reject a connection without a client certificate.
+
+## Next steps
+
+* [How to configure event handler](howto-develop-eventhandler.md)
+* [Golang sample](https://github.com/Azure/azure-webpubsub/blob/main/samples/golang/clientWithCert/Readme.md)
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md
Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 05/24/2023 Last updated : 06/20/2023
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **Scenario** | **Supported configurations** | **Unsupported configurations** | | -- | | | | **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) |
-| **Regions** | **Americas** ΓÇô Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** ΓÇô Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** ΓÇô West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West, Sweden Central, Sweden Sputh <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |
-| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, and SP4 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, and 8.6 | |
-| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 59, SPS 06 (validated for encryption enabled scenarios as well) | |
+| **Regions** | **Americas** ΓÇô Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** ΓÇô Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** ΓÇô West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West, Sweden Central, Sweden South <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |
+| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, and SP4 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, 8.6, and 9.0. | |
+| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS 04, SPS 05 Rev <= 59, SPS 06 (validated for encryption enabled scenarios as well), and SPS 07. | |
| **Encryption** | SSLEnforce, HANA data encryption | |
-| **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesnΓÇÖt failover to the secondary node automatically. Configuring backup should be done separately for each node. |
+| **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesnΓÇÖt fail over to the secondary node automatically. Configuring backup should be done separately for each node. |
| **HANA Instances** | A single SAP HANA instance on a single Azure VM ΓÇô scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. | | **HANA database types** | Single Database Container (SDC) ON 1.x, Multi-Database Container (MDC) on 2.x | MDC in HANA 1.x | | **HANA database size** | HANA databases of size <= 8 TB (this isn't the memory size of the HANA system) | |
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md
The following extensions can currently be installed when creating a Batch pool:
- [HPC GPU driver extension for Windows on NVIDIA](../virtual-machines/extensions/hpccompute-gpu-windows.md) - [HPC GPU driver extension for Linux on NVIDIA](../virtual-machines/extensions/hpccompute-gpu-linux.md) - [Microsoft Antimalware extension for Windows](../virtual-machines/extensions/iaas-antimalware-windows.md)
+- [Azure Monitor agent for Linux](../azure-monitor/agents/azure-monitor-agent-manage.md)
You can request support for additional publishers and/or extension types by opening a support request.
chaos-studio Chaos Studio Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-bicep.md
+
+ Title: Use Bicep to create an experiment in Azure Chaos Studio Preview
+description: Sample Bicep templates to create Azure Chaos Studio Preview experiments.
+++ Last updated : 06/09/2023+++++
+# Use Bicep to create an experiment in Azure Chaos Studio Preview
+
+This article includes a sample Bicep file to get started in Azure Chaos Studio Preview, including:
+
+* Onboarding a resource as a target (for example, a Virtual Machine)
+* Enabling capabilities on the target (for example, Virtual Machine Shutdown)
+* Creating a Chaos Studio experiment
+* Assigning the necessary permissions for the experiment to execute
++
+## Build a Virtual Machine Shutdown experiment
+
+In this sample, we create a chaos experiment with a single target resource and a single virtual machine shutdown fault. You can modify the experiment by referencing the [fault library](chaos-studio-fault-library.md) and [recommended role assignments](chaos-studio-fault-providers.md).
+
+### Prerequisites
+
+This sample assumes:
+* The Chaos Studio resource provider is already registered with your Azure subscription
+* You already have a resource group in a region supported by Chaos Studio
+* A virtual machine is deployed in the resource group
+
+### Review the Bicep file
+
+```bicep
+@description('The existing virtual machine resource you want to target in this experiment')
+param targetName string
+
+@description('Desired name for your Chaos Experiment')
+param experimentName string
+
+@description('Desired region for the experiment, targets, and capabilities')
+param location string = resourceGroup().location
+
+// Define Chaos Studio experiment steps for a basic Virtual Machine Shutdown experiment
+param experimentSteps array = [
+ {
+ name: 'Step1'
+ branches: [
+ {
+ name: 'Branch1'
+ actions: [
+ {
+ name: 'urn:csci:microsoft:virtualMachine:shutdown/1.0'
+ type: 'continuous'
+ duration: 'PT10M'
+ parameters: [
+ {
+ key: 'abruptShutdown'
+ value: 'true'
+ }
+ ]
+ selectorId: 'Selector1'
+ }
+ ]
+ }
+ ]
+ }
+]
+
+// Reference the existing Virtual Machine resource
+resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' existing = {
+ name: targetName
+}
+
+// Deploy the Chaos Studio target resource to the Virtual Machine
+resource chaosTarget 'Microsoft.Chaos/targets@2022-10-01-preview' = {
+ name: 'Microsoft-VirtualMachine'
+ location: location
+ scope: vm
+ properties: {}
+
+ // Define the capability -- in this case, VM Shutdown
+ resource chaosCapability 'capabilities' = {
+ name: 'Shutdown-1.0'
+ }
+}
+
+// Define the role definition for the Chaos experiment
+resource chaosRoleDefinition 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
+ scope: vm
+ // In this case, Virtual Machine Contributor role -- see https://learn.microsoft.com/azure/role-based-access-control/built-in-roles
+ name: '9980e02c-c2be-4d73-94e8-173b1dc7cf3c'
+}
+
+// Define the role assignment for the Chaos experiment
+resource chaosRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
+ name: guid(vm.id, chaosExperiment.id, chaosRoleDefinition.id)
+ scope: vm
+ properties: {
+ roleDefinitionId: chaosRoleDefinition.id
+ principalId: chaosExperiment.identity.principalId
+ principalType: 'ServicePrincipal'
+ }
+}
+
+// Deploy the Chaos Studio experiment resource
+resource chaosExperiment 'Microsoft.Chaos/experiments@2022-10-01-preview' = {
+ name: experimentName
+ location: location // Doesn't need to be the same as the Targets & Capabilities location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ selectors: [
+ {
+ id: 'Selector1'
+ type: 'List'
+ targets: [
+ {
+ id: chaosTarget.id
+ type: 'ChaosTarget'
+ }
+ ]
+ }
+ ]
+ startOnCreation: false // Change this to true if you want to start the experiment on creation
+ steps: experimentSteps
+ }
+}
+```
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `chaos-vm-shutdown.bicep` to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell, replacing `exampleRG` with the existing resource group that includes the virtual machine you want to target.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az deployment group create --resource-group exampleRG --template-file chaos-vm-shutdown.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./chaos-vm-shutdown.bicep
+ ```
+
+1. When prompted, enter the following values:
+ * **targetName**: the name of an existing Virtual Machine within your resource group that you want to target
+ * **experimentName**: the desired name for your Chaos Experiment
+ * **location**: the desired region for the experiment, targets, and capabilities
+1. The template should deploy within a few minutes. Once the deployment is complete, navigate to Chaos Studio in the Azure portal, select **Experiments**, and find the experiment created by the template. Select it, then **Start** the experiment.
+
+## Next steps
+
+* [Learn more about Chaos Studio](chaos-studio-overview.md)
+* [Learn more about chaos experiments](chaos-studio-chaos-experiments.md)
cognitive-services Document Translation Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/document-translation-sdk.md
To learn more, *see* [**Create SAS tokens**](../how-to-guides/create-sas-tokens.
### Next step > [!div class="nextstepaction"]
-> [**Learn more about Document Translation operations**](../../reference/rest-api-guide.md)
+> [**Learn more about Document Translation operations**](../reference/rest-api-guide.md)
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Use the following table to find which API versions are supported by each feature
| Feature | Supported versions | Latest Generally Available version | Latest preview version | |--||||
-| Custom text classification | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
-| Conversational language understanding | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
-| Custom named entity recognition | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
-| Orchestration workflow | `2022-05-01`, `2022-10-01-preview` | `2022-05-01` | |
+| Custom text classification | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2022-05-01` | `2022-10-01-preview` |
+| Conversational language understanding | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
+| Custom named entity recognition | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
+| Orchestration workflow | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
## Next steps
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md
Previously updated : 10/14/2022 Last updated : 06/20/2023
If you're [importing a project](../how-to/create-project.md#import-project) into
|Key |Placeholder |Value | Example | |||-|--|
-| `api-version` | `{API-VERSION}` | The version of the API you're calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) released. | `2022-05-01` |
+| `api-version` | `{API-VERSION}` | The [version](../../concepts/model-lifecycle.md#api-versions) of the API you are calling. | `2023-04-01` |
|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md). Values are from `0` to `1`|`0.7`| | `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` | | `multilingual` | `true`| A boolean value that enables you to have utterances in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents. See [Language support](../language-support.md#multi-lingual-option) for more information about supported language codes. | `true`|
cognitive-services Migrate From Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md
The following table presents a side-by-side comparison between the features of L
|Role-Based Access Control (RBAC) for LUIS resources |Role-Based Access Control (RBAC) available for Language resources |Language resource RBAC must be [manually added after migration](../../concepts/role-based-access-control.md). | |Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu-how-is-standard-training-different-from-advanced-training) | Training will be required after application migration. | |Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the applicationΓÇÖs migration and training. |
-|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring). | For more information, see the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
-|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/2022-05-01/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
+|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](https://aka.ms/clu-authoring-apis). | For more information, see the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
+|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](https://aka.ms/clu-runtime-api). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
## Migrate your LUIS applications
Follow these steps to begin migration programmatically using the CLU Authoring R
|||| |`{ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` | |`{PROJECT-NAME}` | The name for your project. This value is case sensitive. | `myProject` |
- |`{API-VERSION}` | The version of the API you are calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) released. | `2022-05-01` |
+ |`{API-VERSION}` | The [version](../../concepts/model-lifecycle.md#api-versions) of the API you are calling. | `2023-04-01` |
### Headers
The API objects of CLU applications are different from LUIS and therefore code r
If you are using the LUIS [programmatic](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40) and [runtime](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs, you can replace them with their equivalent APIs.
-[CLU authoring APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/2022-05-01/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
+[CLU authoring APIs](https://aka.ms/clu-authoring-apis): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/2023-04-01/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
-[CLU runtime APIs](/rest/api/language/2022-05-01/conversation-analysis-runtime): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`. See the [reference documentation](/rest/api/language/2022-05-01/conversation-analysis-runtime) for more information on the API response structure.
+[CLU runtime APIs](https://aka.ms/clu-runtime-api): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`.
You can use the [.NET](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0-beta.3/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/) or [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md) CLU runtime SDK to replace the LUIS runtime SDK. There is currently no authoring SDK available for CLU.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
Title: Call Automation overview
-description: Learn about Azure Communication Services Call Automation.
+description: Learn about Azure Communication Services Call Automation API.
Previously updated : 09/06/2022 Last updated : 06/19/2023 - # Call Automation Overview -
-Azure Communication Services(ACS) Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and Public Switched Telephone Network(PSTN) channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
+Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and Public Switched Telephone Network (PSTN) channels. The SDKs, available in C#, Java, JavaScript, and Python, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
> [!NOTE]
-> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user or adding them to a call using Call Automation aren't supported.
> Call Automation currently doesn't support [Rooms](../rooms/room-concept.md) calls. ## Common use cases
Azure Communication Services Call Automation can be used to build calling workfl
The following list presents the set of features that are currently available in the Azure Communication Services Call Automation SDKs.
-| Feature Area | Capability | .NET | Java |
-| -| -- | | -- |
-| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ |
-| | Answer a group call | ✔️ | ✔️ |
-| | Place new outbound call to one or more endpoints | ✔️ | ✔️ |
-| | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ |
-| | Reject an incoming call | ✔️ | ✔️ |
-| Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ |
-| | Play Audio from an audio file | ✔️ | ✔️ |
-| | Recognize user input through DTMF | ✔️ | ✔️ |
-| | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
-| | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ |
-| | Hang up a call (remove the call leg) | ✔️ | ✔️ |
-| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ |
-| | Cancel media operations | ✔️ | ✔️ |
-| Query scenarios | Get the call state | ✔️ | ✔️ |
-| | Get a participant in a call | ✔️ | ✔️ |
-| | List all participants in a call | ✔️ | ✔️ |
-| Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ |
+| Feature Area | Capability | .NET | Java | JavaScript | Python |
+| -| -- | | -- | - | |
+| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Answer a group call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Place new outbound call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Reject an incoming call | ✔️ | ✔️ | ✔️ | ✔️ |
+| Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Recognize user input through DTMF | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ |
+| | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | ✔️ | ✔️ |
+| | Cancel media operations | ✔️ | ✔️ | ✔️ | ✔️ |
+| Query scenarios | Get the call state | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get a participant in a call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | List all participants in a call | ✔️ | ✔️ | ✔️ | ✔️ |
+| Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ | ✔️ | ✔️ |
*Transfer or redirect of a VoIP call to a phone number is currently not supported.
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md
Title: Incoming call concepts
description: Learn about Azure Communication Services IncomingCall notification - -++ Last updated 09/26/2022 - # Incoming call concepts - Azure Communication Services Call Automation provides developers the ability to build applications, which can make and receive calls. Azure Communication Services relies on Event Grid subscriptions to deliver each `IncomingCall` event, so setting up your environment to receive these notifications is critical to your application being able to redirect or answer a call. ## Calling scenarios
-First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number will trigger an `IncomingCall` event. The following are examples of these resources:
+First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number triggers an `IncomingCall` event. The following are examples of these resources:
1. An Azure Communication Services identity 2. A PSTN phone number owned by your Azure Communication Services resource
-Given the above examples, the following scenarios will trigger an `IncomingCall` event sent to Event Grid:
+Given these examples, the following scenarios trigger an `IncomingCall` event sent to Event Grid:
| Source | Destination | Scenario(s) | | | -- | -- |
This architecture has the following benefits:
- Using Event Grid subscription filters, you can route the `IncomingCall` notification to specific applications. - PSTN number assignment and routing logic can exist in your application versus being statically configured online.-- As identified in the above [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
+- As identified in the [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
To check out a sample payload for the event and to learn about other calling events published to Event Grid, check out this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
-Below is an example of an Event Grid Webhook subscription where the event type filter is listening only to the `IncomingCall` event.
+Here is an example of an Event Grid Webhook subscription where the event type filter is listening only to the `IncomingCall` event.
![Image showing IncomingCall subscription.](./media/subscribe-incoming-call-event-grid.png)
You can use [advanced filters](../../../event-grid/event-filtering.md) in your E
> [!NOTE] > In many cases you will want to configure filtering in Event Grid due to the scenarios described above generating an `IncomingCall` event so that your application only receives events it should be responding to. For example, if you want to redirect an inbound PSTN call to an ACS endpoint and you don't use a filter, your Event Grid subscription will receive two `IncomingCall` events; one for the PSTN call and one for the ACS user even though you had not intended to receive the second notification. Failure to handle these scenarios using filters or some other mechanism in your application can cause infinite loops and/or other undesired behavior.
-Below is an example of an advanced filter on an Event Grid subscription watching for the `data.to.PhoneNumber.Value` string starting with a PSTN phone number of `+18005551212.
+Here is an example of an advanced filter on an Event Grid subscription watching for the `data.to.PhoneNumber.Value` string starting with a PSTN phone number of `+18005551212`.
![Image showing Event Grid advanced filter.](./media/event-grid-advanced-filter.png) ## Number assignment
-Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you'll maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, you'll invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
+Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number with any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you can maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
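As an illustration only, here's a minimal sketch of such a mapping. The `redirectIncomingCall` helper is hypothetical (you'd implement it with the Call Automation redirect operation), and the property casing should follow the `IncomingCall` event payload you receive:

```javascript
// A minimal sketch of application-side number assignment (illustrative, not an SDK sample).
// numberToIdentity is maintained by your application; redirectIncomingCall is a
// hypothetical helper that wraps the Call Automation redirect operation.
const numberToIdentity = new Map([
  ['+14255551212', '375f0e2f-e8db-4449-9bf7-2054b02e42b4']
]);

async function redirectIncomingCall(incomingCallContext, targetUserId) {
  // Hypothetical: call the Call Automation redirect operation here.
}

async function handleIncomingCall(event) {
  // Adjust property casing to match the IncomingCall event payload you receive.
  const toPhoneNumber = event.data?.to?.phoneNumber?.value;
  const incomingCallContext = event.data?.incomingCallContext;

  const targetIdentity = numberToIdentity.get(toPhoneNumber);
  if (targetIdentity) {
    // Route the call to the Azure Communication Services identity mapped to this number.
    await redirectIncomingCall(incomingCallContext, targetIdentity);
  }
}
```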
## Best Practices
-1. Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. If you are facing issues with receiving events, ensure the webhook configured is verified by handling `SubscriptionValidationEvent`. For more information, see this [guide](../../../event-grid/webhook-event-delivery.md).
-2. Upon the receipt of an incoming call event, if your application does not respond back with 200Ok to Event Grid in time, Event Grid will use exponential backoff retry to send the again. However, an incoming call only rings for 30 seconds, and acting on a call after that will not work. To avoid retries for expired or stale calls, we recommend setting the retry policy as - Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. These settings can be found under Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
+1. Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. If you're facing issues with receiving events, ensure the configured webhook is verified by handling `SubscriptionValidationEvent`, as shown in the sketch after this list. For more information, see this [guide](../../../event-grid/webhook-event-delivery.md).
+2. Upon receipt of an incoming call event, if your application doesn't respond back with 200 OK to Event Grid in time, Event Grid uses exponential backoff retry to send the event again. However, an incoming call only rings for 30 seconds, and acting on a call after that won't work. To avoid retries for expired or stale calls, we recommend setting the retry policy as follows: Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. These settings can be found under the Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
3. We recommend that you enable logging for your Event Grid resource to monitor events that failed to deliver. Navigate to the system topic under the Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in the 'AegDeliveryFailureLogs' table.
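For reference, here's a minimal sketch of handling the Event Grid validation handshake at a webhook endpoint, using the `Azure.Messaging.EventGrid` package; the method name and surrounding plumbing are illustrative.

```csharp
using System;
using Azure.Messaging.EventGrid;
using Azure.Messaging.EventGrid.SystemEvents;

public static class WebhookValidation
{
    // Returns the validation response body to echo back to Event Grid,
    // or null when the request isn't a validation handshake.
    public static SubscriptionValidationResponse TryHandleValidation(string requestBody)
    {
        foreach (EventGridEvent egEvent in EventGridEvent.ParseMany(BinaryData.FromString(requestBody)))
        {
            if (egEvent.TryGetSystemEventData(out object systemEvent)
                && systemEvent is SubscriptionValidationEventData validation)
            {
                // Respond 200 OK with this payload so Event Grid starts delivering events.
                return new SubscriptionValidationResponse
                {
                    ValidationResponse = validation.ValidationCode
                };
            }
        }

        return null;
    }
}
```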
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
Last updated 09/06/2022 - # Playing audio in call - The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide ACS access to your pre-recorded audio files with support for authentication. > [!NOTE]
As part of compliance requirements in various industries, vendors are expected t
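As an illustration, here's a minimal sketch of invoking the play action with the .NET SDK; the connection string, call connection ID, and audio URL are placeholders, and overloads may differ slightly across SDK versions.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Communication.CallAutomation;

public static class PlayPromptExample
{
    public static async Task PlayWelcomePromptAsync(string callConnectionId)
    {
        var client = new CallAutomationClient("<ACS connection string>");
        CallMedia callMedia = client.GetCallConnection(callConnectionId).GetCallMedia();

        // Play a pre-recorded audio file to every participant in the call.
        await callMedia.PlayToAllAsync(new FileSource(new Uri("https://contoso.com/prompts/welcome.wav")));
    }
}
```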
## Next Steps - Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
+- Learn about [usage and operational logs](../analytics/logs/call-automation-logs.md) published by call automation.
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
Last updated 09/16/2022 - # Gathering user input - With the Recognize action, developers can enhance their IVR or contact center applications to gather user input. One of the most common scenarios of recognition is to play a message and request user input. This input is received in the form of DTMF (input via the digits on the calling device), which then allows the application to navigate the user to the next action. **DTMF**
The recognize action can be used for many reasons, below are a few examples of h
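For example, here's a minimal sketch of starting DTMF recognition with the .NET SDK; the target phone number, prompt URL, and option values are placeholders, and option names may vary slightly by SDK version.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Communication;
using Azure.Communication.CallAutomation;

public static class RecognizeDtmfExample
{
    public static async Task CollectDigitsAsync(CallMedia callMedia)
    {
        // Ask the participant for up to 4 DTMF tones, playing a menu prompt first.
        var recognizeOptions = new CallMediaRecognizeDtmfOptions(
            new PhoneNumberIdentifier("+14255551234"), 4) // target participant, max tones to collect
        {
            InterruptPrompt = true,
            InterToneTimeout = TimeSpan.FromSeconds(5),
            Prompt = new FileSource(new Uri("https://contoso.com/prompts/menu.wav"))
        };

        await callMedia.StartRecognizingAsync(recognizeOptions);
    }
}
```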
![Recognize Action](./media/recognize-flow.png) ## Next steps- - Check out our how-to guide to learn how you can [gather user input](../../how-tos/call-automation/recognize-action.md).
+- Learn about [usage and operational logs](../analytics/logs/call-automation-logs.md) published by call automation.
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
For each area, we have external pages to track and review our SDKs. You can cons
| - | | - | | | | -- | - | | Azure Resource Manager | - | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Communication) | [PyPi](https://pypi.org/project/azure-mgmt-communication/) | - | - | - | [Go via GitHub](https://github.com/Azure/azure-sdk-for-go/releases/tag/v46.3.0) | | Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Calling.WindowsClient) | - | - | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) ([docs](/objectivec/communication-services/calling/)) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
-| Call Automation | - | [NuGet](https://www.nuget.org/packages/Azure.Communication.CallAutomation) | - | [Maven](https://search.maven.org/search?q=a:azure-communication-callautomation) | - | - | - |
+| Call Automation | [npm](https://www.npmjs.com/package/@azure/communication-call-automation) | [NuGet](https://www.nuget.org/packages/Azure.Communication.CallAutomation) | [PyPi](https://pypi.org/project/azure-communication-callautomation/) | [Maven](https://search.maven.org/search?q=a:azure-communication-callautomation) | - | - | - |
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - | | Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - | | Email | [npm](https://www.npmjs.com/package/@azure/communication-email) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Email) | [PyPi](https://pypi.org/project/azure-communication-email/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-email) | - | - | - |
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Previously updated : 04/26/2023 Last updated : 06/19/2023 # Rooms overview - Azure Communication Services provides a concept of a room for developers who are building structured conversations such as virtual appointments or virtual events. Rooms currently allow voice and video calling. Here are the main scenarios where rooms are useful:
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
Development of Calling and Chat applications can be accelerated by the [Azure C
| Assembly | Protocols| Environment | Capabilities| |--|-||-|
-| Azure Resource Manager | [REST](/rest/api/communication/communicationservice)| Service| Provision and manage Communication Services resources|
+| Azure Resource Manager | [REST](/rest/api/communication/resourcemanager/communication-services)| Service| Provision and manage Communication Services resources|
| Common | N/A | Client & Service | Provides base types for other SDKs | | Identity | [REST](/rest/api/communication/communication-identity) | Service| Manage users, access tokens| | Phone numbers| [REST](/rest/api/communication/phonenumbers) | Service| Acquire and manage phone numbers |
Publishing locations for individual SDK packages are detailed below.
| SMS| [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Sms)| [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | -| -| -| | Email| [npm](https://www.npmjs.com/package/@azure/communication-email) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Email)| [PyPi](https://pypi.org/project/azure-communication-email/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-email) | -| -| -| | Calling| [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Calling.WindowsClient) | -| - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/)| -|
-|Call Automation||[NuGet](https://www.NuGet.org/packages/Azure.Communication.CallAutomation/)||[Maven](https://search.maven.org/artifact/com.azure/azure-communication-callautomation)
+|Call Automation|[npm](https://www.npmjs.com/package/@azure/communication-call-automation)|[NuGet](https://www.NuGet.org/packages/Azure.Communication.CallAutomation/)|[PyPi](https://pypi.org/project/azure-communication-callautomation/)|[Maven](https://search.maven.org/artifact/com.azure/azure-communication-callautomation)
|Network Traversal| [npm](https://www.npmjs.com/package/@azure/communication-network-traversal)|[NuGet](https://www.NuGet.org/packages/Azure.Communication.NetworkTraversal/) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | -|- | - | | UI Library| [npm](https://www.npmjs.com/package/@azure/communication-react) | - | - | - | [GitHub](https://github.com/Azure/communication-ui-library-ios) | [GitHub](https://github.com/Azure/communication-ui-library-android) | [GitHub](https://github.com/Azure/communication-ui-library), [Storybook](https://azure.github.io/communication-ui-library/?path=/story/overview--page) | | Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html)| -| [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/)| [docs](/java/api/com.azure.android.communication.calling)| -|
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
Title: Azure Communication Services Call Automation how-to for managing calls wi
description: Provides a how-to guide on using call actions to steer and manage a call with Call Automation. - Previously updated : 11/03/2022 Last updated : 06/19/2023 - # How to control and steer calls with Call Automation - Call Automation uses a REST API interface to receive requests for actions and provide responses to notify whether the request was successfully submitted or not. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available for steering calls, like CreateCall, Transfer, Redirect, and managing participants. Actions are accompanied by sample code showing how to invoke each action and sequence diagrams describing the events expected after invoking an action. These diagrams help you visualize how to program your service application with Call Automation. Call Automation supports various other actions to manage call media and recording that aren't included in this guide. > [!NOTE]
-> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making, redirecting a call to a Teams user or adding them to a call using Call Automation isn't supported.
+> Call Automation currently doesn't support [Rooms](../../concepts/rooms/room-concept.md) calls.
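For context, here's a minimal, illustrative sketch of one such action, creating an outbound call with the .NET SDK; the connection string, target identity, and callback URI are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Communication;
using Azure.Communication.CallAutomation;

public static class CreateCallExample
{
    public static async Task<CreateCallResult> PlaceCallAsync()
    {
        var client = new CallAutomationClient("<ACS connection string>");
        var callInvite = new CallInvite(new CommunicationUserIdentifier("<ACS user id>"));

        // Events for this call (CallConnected, and so on) are delivered to the callback URI.
        Response<CreateCallResult> response = await client.CreateCallAsync(
            callInvite, new Uri("https://contoso.com/api/callbacks"));

        return response.Value;
    }
}
```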
As a prerequisite, we recommend that you read these articles to make the most of this guide:
communication-services Handle Events With Event Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/handle-events-with-event-processor.md
EventProcessor features allow developers to easily build robust application that
Call Automation's EventProcessor first needs to consume events that were sent from the service. Once the event arrives at the callback endpoint, pass the event to the EventProcessor. > [!IMPORTANT]
-> Have you established webhook callback events endpoint? EventProcessor still needs to consume callback events through webhook callback. See **[this page](../../quickstarts/call-automation/callflows-for-customer-interactions.md)** for further assistance.
+> Have you established a webhook callback events endpoint? The EventProcessor still needs to consume callback events through the webhook callback. See the **[quickstart](../../quickstarts/call-automation/quickstart-make-an-outbound-call.md)** that describes establishing webhook endpoints.
```csharp using Azure.Communication.CallAutomation;
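// Sketch (assumption: the CallAutomationEventProcessor API shape in recent SDK versions).
// Deserialize the CloudEvents posted to your webhook callback endpoint and hand them to the
// EventProcessor, for example:
//
//   CloudEvent[] cloudEvents = CloudEvent.ParseMany(BinaryData.FromString(requestBody));
//   client.GetEventProcessor().ProcessEvents(cloudEvents);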
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-action.md
Last updated 09/06/2022 -+ zone_pivot_groups: acs-js-csharp-java-python # Customize voice prompts to users with Play action - This guide will help you get started with playing audio files to participants by using the play action provided through Azure Communication Services Call Automation SDK. ::: zone pivot="programming-language-csharp"
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md
Last updated 09/16/2022 -+ zone_pivot_groups: acs-js-csharp-java-python # Gather user input with Recognize action - This guide will help you get started with recognizing DTMF input provided by participants through Azure Communication Services Call Automation SDK. ::: zone pivot="programming-language-csharp"
communication-services Secure Webhook Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/secure-webhook-endpoint.md
Previously updated : 04/13/2023 Last updated : 06/19/2023 --+ zone_pivot_groups: acs-js-csharp-java-python # How to secure webhook endpoint - Securing the delivery of messages from end to end is crucial for ensuring the confidentiality, integrity, and trustworthiness of sensitive information transmitted between systems. Your ability and willingness to trust information received from a remote system relies on the sender providing their identity. Call Automation has two ways of communicating events that can be secured; the shared IncomingCall event sent by Azure Event Grid, and all other mid-call events sent by the Call Automation platform via webhook. ## Incoming Call Event
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
Learn more about the Azure Communication Services SDKs with the resources listed
| | | |**[SDK libraries and REST APIs](./concepts/sdk-options.md)**|Azure Communication Services capabilities are conceptually organized into six areas, each represented by an SDK. You can decide which SDK libraries to use based on your real-time communication needs.| |**[Calling SDK overview](./concepts/voice-video-calling/calling-sdk-features.md)**|Review the Communication Services Calling SDK overview.|
+|**[Call Automation overview](./concepts/call-automation/call-automation.md)**|Review the Communication Services Call Automation SDK overview.|
|**[Chat SDK overview](./concepts/chat/sdk-features.md)**|Review the Communication Services Chat SDK overview.| |**[SMS SDK overview](./concepts/sms/sdk-features.md)**|Review the Communication Services SMS SDK overview.| |**[Email SDK overview](./concepts/email/sdk-features.md)**|Review the Communication Services SMS SDK overview.|
communication-services Quickstart Make An Outbound Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/quickstart-make-an-outbound-call.md
Title: Quickstart - Make an outbound call using Call Automation
-description: In this quickstart, you'll learn how to make an outbound PSTN call using Azure Communication Services using Call Automation
+description: In this quickstart, you learn how to make an outbound PSTN call using Azure Communication Services using Call Automation
Previously updated : 05/26/2023- Last updated : 06/19/2023+ + zone_pivot_groups: acs-js-csharp-java-python # Quickstart: Make an outbound call using Call Automation
-Azure Communication Services (ACS) Call Automation APIs are a powerful way to create interactive calling experiences. In this quick start we'll cover a way to make an outbound call and recognize various events in the call.
+Azure Communication Services Call Automation APIs are a powerful way to create interactive calling experiences. In this quickstart, we cover a way to make an outbound call and recognize various events in the call.
::: zone pivot="programming-language-csharp" [!INCLUDE [Make an outbound call C#](./includes/quickstart-make-an-outbound-call-using-callautomation-csharp.md)]
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md
In this quickstart you learned how to:
> > [!div class="nextstepaction"] > [Toll-free verification](../../concepts/sms/sms-faq.md#toll-free-verification)-
+>
+> [!div class="nextstepaction"]
+> [Build workflow for outbound calls using the purchased phone numbers](../call-automation/quickstart-make-an-outbound-call.md)
+>
> [!div class="nextstepaction"]
-> [Get started with calling](../voice-video-calling/getting-started-with-calling.md)
+> [Get started with calling in applications](../voice-video-calling/getting-started-with-calling.md)
communication-services Voice Routing Sdk Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/voice-routing-sdk-config.md
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles:
+- Learn about [call automation](../../concepts/call-automation/call-automation.md) to build workflows that [route and manage calls](../../how-tos/call-automation/actions-for-call-control.md) to Communication Services.
- Learn about [Calling SDK capabilities](../voice-video-calling/getting-started-with-calling.md). - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md). - Call to a telephone number by [following a quickstart](./pstn-call.md).
communication-services Call Automation Appointment Reminder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/call-automation-appointment-reminder.md
- Title: Call Automation Appointment Reminder-
-description: Learn about creating a simple outbound call with Call Automation
----- Previously updated : 11/17/2022----
-zone_pivot_groups: acs-csharp-java
--
-# Call Automation - Appointment Reminder
--
-This Azure Communication Services Call Automation - Appointment Reminder sample demonstrates how your application can use the Call Automation SDK to build automated workflows that create outbound calls to proactively reach out to your customers.
-
-> This sample is available **on GitHub** for [C#](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/tree/main/CallAutomation_AppointmentReminder) and [Java](https://github.com/Azure-Samples/communication-services-java-quickstarts/tree/main/CallAutomation_AppointmentReminder)
-
-## Overview
-
-This sample application makes an outbound call to a phone number then performs dtmf recognition and then plays the next audio file based on the key pressed by the callee. This sample application accepts tone 1 (tone1) and 2 (tone2). If the callee presses any key other than what it's expecting, an invalid audio tone will be played and then the call will be disconnected.
-
-This sample application is also capable of making multiple concurrent outbound calls.
-
-## Design
-
-![Call flow](./media/call-automation/appointment-reminder.png)
---
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../quickstarts/create-communication-resource.md#clean-up-resources).
-
-## Next steps
--- Learn about [Call Automation](../concepts/call-automation/call-automation.md) and its features.-- Learn about [Play action](../concepts/call-automation/play-action.md) to play audio in call.-- Learn about [Recognize action](../concepts/call-automation/recognize-action.md) to gather user input.
confidential-computing Confidential Containers Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-enclaves.md
You can deploy SCONE on Azure confidential computing nodes with AKS following th
### Anjuna
-[Anjuna](https://www.anjuna.io/) provides SGX platform software to run unmodified containers on AKS. For more information, see Anjuna's [documentation about functionality and sample applications](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp).
+[Anjuna](https://www.anjuna.io/) provides SGX platform software to run unmodified containers on AKS. For more information, see Anjuna's [documentation about functionality and sample applications](https://www.anjuna.io/partners/microsoft-azure).
-Get started with a sample Redis Cache and Python Custom Application [here](https://www.anjuna.io/microsoft-azure-confidential-computing-aks-lp)
+Get started with a sample Redis Cache and Python Custom Application [here](https://www.anjuna.io/partners/microsoft-azure)
![Diagram of Anjuna's process, showing how containers are run on Azure confidential computing.](media/confidential-containers/anjuna-process-flow.png)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/introduction.md
The API for Cassandra has added benefits of being built on Azure Cosmos DB:
- **Event Sourcing**: The API for Cassandra provides access to a persistent change log, the [Change Feed](change-feed.md). The change feed can facilitate event sourcing directly from the database. In Apache Cassandra, change data capture (CDC) is the only equivalent feature. CDC is merely a mechanism to flag specific tables for archival and rejecting writes to those tables once a configurable size-on-disk for the CDC log is reached. These capabilities are redundant in Azure Cosmos DB as the relevant aspects are automatically governed. +
+## Azure Managed Instance for Apache Cassandra
+
+For some customers, adapting to the API for Cassandra can be a challenge due to differences in behavior and/or configuration, especially for lift-and-shift migrations. [Azure Managed Instance for Apache Cassandra](../../managed-instance-apache-cassandr) is a first-party Azure service for hosting and maintaining pure open-source Apache Cassandra clusters with 100% compatibility.
+ ## Next steps - Get started with [creating a API for Cassandra account, database, and a table](create-account-java.md) by using a Java application.
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/support.md
By using the Azure Cosmos DB for Apache Cassandra, you can enjoy the benefits of
The Azure Cosmos DB for Apache Cassandra is compatible with Cassandra Query Language (CQL) v3.11 API (backward-compatible with version 2.x). The supported CQL commands, tools, limitations, and exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB for Apache Cassandra.
+## Azure Managed Instance for Apache Cassandra
+
+For some customers, adapting to the API for Cassandra can be a challenge due to differences in behavior and/or configuration, especially for lift-and-shift migrations. If a feature that is critical for your application is listed as not supported below, consider using [Azure Managed Instance for Apache Cassandra](../../managed-instance-apache-cassandr). This is a first-party Azure service for hosting and maintaining pure open-source Apache Cassandra clusters with 100% compatibility.
+ ## Cassandra driver The following versions of Cassandra drivers are supported by Azure Cosmos DB for Apache Cassandra:
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
Previously updated : 04/11/2022 Last updated : 06/20/2023 ms.devlang: csharp, java
Following are implications of how the parallel queries would behave for differen
## Tune the page size
-When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large.
> [!NOTE] > The `MaxItemCount` property shouldn't be used just for pagination. Its main use is to improve the performance of queries by reducing the maximum number of items returned in a single page.
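Here's a minimal sketch of setting the page size with the .NET SDK v3; the container, query, and value of `MaxItemCount` are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class PageSizeExample
{
    public static async Task QueryWithPageSizeAsync(Container container)
    {
        var options = new QueryRequestOptions
        {
            MaxItemCount = 1000 // maximum number of items returned per page
        };

        using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
            "SELECT * FROM c", requestOptions: options);

        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> page = await iterator.ReadNextAsync();
            Console.WriteLine($"Received a page of {page.Count} items");
        }
    }
}
```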
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
Previously updated : 02/03/2023 Last updated : 06/20/2023
To disallow savings plan purchases on a billing profile, billing profile contrib
### Microsoft Partner Agreement partners -- Partners can use **Home** > **Savings plan** in the [Azure portal](https://portal.azure.com/) to purchase savings plans on behalf of their customers.
+Partners can use **Home** > **Savings plan** in the [Azure portal](https://portal.azure.com/) to purchase savings plans on behalf of their customers.
+
+As of June 2023, partners can purchase an Azure savings plan through Partner Center. Previously, Azure savings plans could only be purchased through the Azure portal. Partners can now purchase Azure savings plans through the Partner Center portal or APIs, or they can continue to use the Azure portal.
+
+To purchase Azure savings plan using the Partner Center APIs, see [Purchase Azure savings plans](/partner-center/developer/azure-purchase-savings-plan).
## Change agreement type to one supported by savings plan
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
Previously updated : 04/04/2023 Last updated : 06/20/2023
Savings plan purchases can't be canceled or refunded.
- Container Instances - Azure Premium Functions - Azure App Services - The Azure savings plan for compute can only be applied to the App Service upgraded Premium v3 plan and the upgraded Isolated v2 plan.
+- On-demand Capacity Reservation
Exclusions apply to the above services.
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 09/15/2022 Last updated : 04/20/2023 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
When your source data is not natively compatible with COPY statement, enable dat
To use this feature, create an [Azure Blob Storage linked service](connector-azure-blob-storage.md#linked-service-properties) or [Azure Data Lake Storage Gen2 linked service](connector-azure-data-lake-storage.md#linked-service-properties) with **account key or system-managed identity authentication** that refers to the Azure storage account as the interim storage. >[!IMPORTANT]
->- When you use managed identity authentication for your staging linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively.
+>- When you use managed identity authentication for your staging linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively. You also need to grant permissions to your Azure Synapse Analytics workspace managed identity in your staging Azure Blob Storage or Azure Data Lake Storage Gen2 account. To learn how to grant this permission, see [Grant permissions to workspace managed identity](/azure/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions).
>- If your staging Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on storage account, refer to [Impact of using VNet Service Endpoints with Azure storage](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#impact-of-using-virtual-network-service-endpoints-with-azure-storage). >[!IMPORTANT]
When your source data is not natively compatible with PolyBase, enable data copy
To use this feature, create an [Azure Blob Storage linked service](connector-azure-blob-storage.md#linked-service-properties) or [Azure Data Lake Storage Gen2 linked service](connector-azure-data-lake-storage.md#linked-service-properties) with **account key or managed identity authentication** that refers to the Azure storage account as the interim storage. >[!IMPORTANT]
->- When you use managed identity authentication for your staging linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively.
+>- When you use managed identity authentication for your staging linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively. You also need to grant permissions to your Azure Synapse Analytics workspace managed identity in your staging Azure Blob Storage or Azure Data Lake Storage Gen2 account. To learn how to grant this permission, see [Grant permissions to workspace managed identity](/azure/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions).
>- If your staging Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on storage account, refer to [Impact of using VNet Service Endpoints with Azure storage](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#impact-of-using-virtual-network-service-endpoints-with-azure-storage). >[!IMPORTANT]
data-factory Connector Troubleshoot Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-cosmos-db.md
Previously updated : 07/29/2022 Last updated : 04/20/2023
This article provides suggestions to troubleshoot common problems with the Azure
- **Symptoms**: When you copy data into Azure Cosmos DB with a default write batch size, you receive the following error: `Request size is too large.` -- **Cause**: Azure Cosmos DB limits the size of a single request to 2 MB. The formula is *request size = single document size \* write batch size*. If your document size is large, the default behavior will result in a request size that's too large. You can tune the write batch size.--- **Resolution**: In the copy activity sink, reduce the *write batch size* value (the default value is 10000).
+- **Cause**: Azure Cosmos DB limits the size of a single request to 2 MB. The formula is *request size = single document size \* write batch size*. If your document size is large, the default behavior will result in a request size that's too large.
+
+- **Resolution**: <br>
+You can tune the write batch size. In the copy activity sink, reduce the *write batch size* value (the default value is 10000). <br>
+If reducing the *write batch size* value to 1 still doesn't work, change your Azure Cosmos DB SQL API from V2 to V3. To complete this configuration, you have two options:
+
+ - **Option 1**: Change your authentication type to service principal or system-assigned managed identity or user-assigned managed identity.
+ - **Option 2**: If you still want to use account key authentication, follow these steps:
+ 1. Create an Azure Cosmos DB for NoSQL linked service.
+ 2. Update the linked service with the following template.
+
+ ```json
+ {
+ "name": "<CosmosDbV3>",
+ "type": "Microsoft.DataFactory/factories/linkedservices",
+ "properties": {
+ "annotations": [],
+ "type": "CosmosDb",
+ "typeProperties": {
+ "useV3": true,
+ "accountEndpoint": "<account endpoint>",
+ "database": "<database name>",
+ "accountKey": {
+ "type": "SecureString",
+ "value": "<account key>"
+ }
+ }
+ }
+ }
+ ```
## Error message: Unique index constraint violation
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
Unsupported resources include:
* PaaS services (multi-tenant) including Azure App Service Environment for Power Apps. * Protected resources that include public IPs created from public IP address prefix. -
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
## Virtual machine (Windows/Linux) workloads
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
## <a name="alerts-resourcemanager"></a>Alerts for Resource Manager
+> [!NOTE]
+> Alerts with a **delegated access** indication are triggered by activity of third-party service providers. Learn more about [service provider activity indications](/azure/defender-for-cloud/defender-for-resource-manager-usage).
+ [Further details and notes](defender-for-resource-manager-introduction.md) | Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
VM_VbScriptHttpObjectAllocation| VBScript HTTP object allocation detected | High
- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md) - [Continuously export Defender for Cloud data](continuous-export.md)++++
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
Title: Configure the Microsoft Security DevOps Azure DevOps extension description: Learn how to configure the Microsoft Security DevOps Azure DevOps extension. Previously updated : 05/01/2023 Last updated : 06/20/2023
If you don't have access to install the extension, you must request access from
pool: vmImage: 'windows-latest' steps:
- - task: UseDotNet@2
- displayName: 'Use dotnet'
- inputs:
- version: 3.1.x
- - task: UseDotNet@2
- displayName: 'Use dotnet'
- inputs:
- version: 5.0.x
- - task: UseDotNet@2
- displayName: 'Use dotnet'
- inputs:
- version: 6.0.x
- task: MicrosoftSecurityDevOps@1 displayName: 'Microsoft Security DevOps' ```
-> [!Note]
-> The MicrosoftSecurityDevOps build task depends on .NET 6. The CredScan analyzer depends on .NET 3.1. See more [here](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops).
- 9. To commit the pipeline, select **Save and run**. The pipeline will run for a few minutes and save the results.
-> [!Note]
+> [!NOTE]
> Install the SARIF SAST Scans Tab extension on the Azure DevOps organization in order to ensure that the generated analysis results will be displayed automatically under the Scans tab. ## Learn more
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 05/08/2023 Last updated : 06/20/2023 # Cloud Security Posture Management (CSPM)
The optional Defender CSPM plan, provides advanced posture management capabiliti
### Plan pricing > [!NOTE]
-> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1 2023. Billing will apply for compute, database, and storage resources. Billable workloads will be VMs, Storage Accounts, OSS DBs, and SQL PaaS & Servers on Machines. When billing starts, existing Microsoft Defender for Cloud customers will receive automatically applied discounts for Defender CSPM. ΓÇï
+> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1, 2023. Billing will apply for compute, database, and storage resources. Billable workloads will be VMs, Storage Accounts, OSS DBs, and SQL PaaS & Servers on Machines.
- Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Databases and Storage accounts at $15/billable resource/month. The underlying compute services for AKS are regarded as servers for billing purposes.
-
-Current Microsoft Defender for Cloud customers receive automatically applied discounts (5-25% discount per billed workload based on the highest applicable discount). If you have one of the following plans enabled, you will receive a discount. Refer to the following table:
-
-| Current Defender for Cloud Customer | Automatic Discount | Defender CSPM Price |
-|--|--|--|
-|Defender for Servers P2 | 25% | **$11.25/** Compute or Data workload / month
-|Defender for Containers | 10% | **$13.50/** Compute or Data workload / month
-|Defender for DBs / Defender for Storage | 5% | **$14.25/** Compute or Data workload / month
+ Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Databases and Storage accounts at $5/billable resource/month. The underlying compute services for AKS are regarded as servers for billing purposes.
## Plan availability
defender-for-cloud Defender For Resource Manager Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md
When you receive an alert from Microsoft Defender for Resource Manager, we recom
Security alerts from Microsoft Defender for Resource Manager are based on threats detected by monitoring Azure Resource Manager operations. Defender for Cloud uses internal log sources of Azure Resource Manager as well as Azure Activity log, a platform log in Azure that provides insight into subscription-level events.
+Microsoft Defender for Resource Manager provides visibility into activity that comes from third-party service providers that have delegated access, as part of the Resource Manager alerts. For example, `Azure Resource Manager operation from suspicious proxy IP address - delegated access`.
+
+`Delegated access` refers to access with [Azure Lighthouse](/azure/lighthouse/overview) or with [Delegated administration privileges](/partner-center/dap-faq).
+
+Alerts that show `Delegated access` also include a customized description and remediation steps.
+ Learn more about [Azure Activity log](../azure-monitor/essentials/activity-log.md). To investigate security alerts from Microsoft Defender for Resource
This page explained the process of responding to an alert from Microsoft Defende
- [Overview of Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md) - [Suppress security alerts](alerts-suppression-rules.md) - [Continuously export Defender for Cloud data](continuous-export.md)+
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Title: Configure the Microsoft Security DevOps GitHub action description: Learn how to configure the Microsoft Security DevOps GitHub action. Previously updated : 05/01/2023 Last updated : 06/18/2023
Security DevOps uses the following Open Source tools:
# Checkout your code repository to scan - uses: actions/checkout@v3
- # Install dotnet, used by MSDO
- - uses: actions/setup-dotnet@v3
- with:
- dotnet-version: |
- 5.0.x
- 6.0.x
- # Run analyzers - name: Run Microsoft Security DevOps Analysis uses: microsoft/security-devops-action@preview
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
This procedure describes how to update the HPE BIOS configuration for your OT de
1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
-1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+1. Change **Boot Mode** to **UEFI BIOS Mode**, and then select **F10: Save**.
1. Select **Esc** twice to close the **System Configuration** form.
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
This procedure describes how to update the HPE BIOS configuration for your OT de
1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
-1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+1. Change **Boot Mode** to **UEFI BIOS Mode**, and then select **F10: Save**.
1. Select **Esc** twice to close the **System Configuration** form.
devtest-labs Devtest Lab Create Custom Image From Vm Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal.md
The custom image is created and stored in the lab's storage account. The image i
## Next steps - [Add a VM to your lab](devtest-lab-add-vm.md)-- [Create a custom image from a VHD file](devtest-lab-create-template.md)
+- [Create a custom image from a VHD file for DevTest Labs](devtest-lab-create-template.md)
- [Compare custom images and formulas in DevTest Labs](devtest-lab-comparing-vm-base-image-types.md) - [Create a custom image factory in Azure DevTest Labs](image-factory-create.md)
devtest-labs Devtest Lab Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-template.md
Title: Create an Azure DevTest Labs virtual machine custom image from a VHD file
-description: Learn how to use a VHD file to create an Azure DevTest Labs virtual machine custom image in the Azure portal.
+ Title: Create custom image for Azure DevTest Labs VMs from VHD files
+description: Use a VHD file to create an Azure DevTest Labs virtual machine custom image in the Azure portal.
Last updated 01/04/2022
-# Create a custom image from a VHD file
+# Create a custom image for Azure DevTest Labs virtual machines from VHD files
[!INCLUDE [devtest-lab-create-custom-image-from-vhd-selector](../../includes/devtest-lab-create-custom-image-from-vhd-selector.md)]
-You can create a virtual machine (VM) custom image for Azure DevTest Labs by using a virtual hard drive (VHD) file.
+In this article, you learn how to create a virtual machine (VM) custom image for Azure DevTest Labs by using a virtual hard drive (VHD) file.
[!INCLUDE [devtest-lab-custom-image-definition](../../includes/devtest-lab-custom-image-definition.md)]
This article describes how to create a custom image in the Azure portal. You can
[!INCLUDE [devtest-lab-upload-vhd-options](../../includes/devtest-lab-upload-vhd-options.md)]
-## Azure portal instructions
+## Create custom images for Azure DevTest Labs in Azure portal
To create a custom image from a VHD file in DevTest Labs in the Azure portal, follow these steps:
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 03/02/2023 Last updated : 06/20/2023
This quickstart walks you through the steps to create an Azure DNS Private Resol
Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM based DNS servers. You no longer need to provision IaaS based solutions on your virtual networks to resolve names registered on Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multicloud and public DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+## In this article:
+
+- Two VNets are created: myvnet and myvnet2.
+- An Azure DNS Private Resolver is created in the first VNet with an inbound endpoint at 10.10.0.4.
+- A DNS forwarding ruleset is created to be used with the private resolver.
+- The DNS forwarding ruleset is linked to the second VNet.
+- Example rules are added to the DNS forwarding ruleset.
+
+This article does not demonstrate DNS forwarding to an on-premises network. For more information, see [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md).
+
+The following figure summarizes the setup used in this article:
+
+![Conceptual figure displaying components of the private resolver](./media/dns-resolver-getstarted-portal/resolver-components.png)
+ ## Prerequisites An Azure subscription is required.
dns Private Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-portal.md
description: In this quickstart, you create and test a private DNS zone and reco
Previously updated : 09/27/2022 Last updated : 06/19/2023
This quickstart walks you through the steps to create your first private DNS zon
A DNS zone is used to host the DNS records for a particular domain. To start hosting your domain in Azure DNS, you need to create a DNS zone for that domain name. Each DNS record for your domain is then created inside this DNS zone. To publish a private DNS zone to your virtual network, you specify the list of virtual networks that are allowed to resolve records within the zone. These are called *linked* virtual networks. When autoregistration is enabled, Azure DNS also updates the zone records whenever a virtual machine is created, changes its IP address, or is deleted.
+> [!IMPORTANT]
+> When you create a private DNS zone, Azure stores the zone data as a global resource. This means that the private zone is not dependent on a single VNet or region. You can link the same private zone to multiple VNets in different regions. If service is interrupted in one VNet, your private zone is still available. For more information, see [Azure Private DNS zone resiliency](private-dns-resiliency.md).
+
+In this article, two VMs are used in a single VNet linked to your private DNS zone with autoregistration enabled. The setup is summarized in the following figure.
+
+![Summary diagram of the quickstart setup](media/private-dns-portal/private-dns-quickstart-summary.png)
+ ## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
Notice that after the uploaded image disappears, a copy of the uploaded image is
![Screenshot that shows a published web app titled "ImageResizer" in a browser for the \.NET v12 SDK.](./media/resize-images-on-storage-blob-upload-event/tutorial-completed.png)
+[previous-tutorial]: storage-upload-process-images.md
## Next steps See other tutorials in the Tutorials section of the table of content (TOC).
event-hubs Use Log Compaction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/use-log-compaction.md
Title: Use log compaction
description: Learn how to use log compaction. Previously updated : 10/7/2022 Last updated : 06/19/2023 # Use log compaction
In this article, you'll follow these key steps:
- Consume events from a compacted event hub. > [!NOTE]
-> - This feature is currently in Preview.
-> - Log compaction feature is available only in **premium** and **dedicated** tiers.
+> The log compaction feature isn't supported in the **Basic** tier.
> [!WARNING] > Use of the Log Compaction feature is **not eligible for product support through Microsoft Azure**.
external-attack-surface-management Understanding Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-dashboards.md
# Understanding dashboards
-## Overview
- Microsoft Defender External Attack Surface Management (Defender EASM) offers a series of four dashboards designed to help users quickly surface valuable insights derived from their Approved inventory. These dashboards help organizations prioritize the vulnerabilities, risks and compliance issues that pose the greatest threat to their Attack Surface, making it easy to quickly mitigate key issues. Defender EASM provides four dashboards:
Defender EASM provides four dashboards:
To access your Defender EASM dashboards, first navigate to your Defender EASM instance. In the left-hand navigation column, select the dashboard you'd like to view. You can access these dashboards from many pages in your Defender EASM instance from this navigation pane.
-![Screenshot of dashboard screen with dashboard navigation section highlighted](media/Dashboards-1.png)
+![Screenshot of dashboard screen with dashboard navigation section highlighted.](media/Dashboards-1.png)
++
+## Downloading chart data
+
+The data underlying any dashboard chart can be exported to a CSV file. This is useful for those who wish to import Defender EASM data into third-party tools, or work off a CSV file when remediating any issues. To download chart data, first select the specific chart segment that contains the data you wish to download. Note that chart exports currently support individual chart segments; to download multiple segments from the same chart, you will need to export each individual segment.
+
+Selecting an individual chart segment will open a drilldown view of the data, listing any assets that comprise the segment count. At the top of this page, select **Download CSV report** to begin your export. If you are exporting a small number of assets, this action will directly download the CSV file to your machine. If you are exporting a large number of assets, this action will create a task manager notification where you can track the status of your export.
+
+![Screenshot of dashboard chart drilldown view with export button visible.](media/export-1.png)
+ ## Attack surface summary
For instance, the SSL configuration chart displays any detected configuration is
The SSL organization chart provides insight on the registration of your SSL certificates, indicating the organization and business units associated with each certificate. This can help users understand the designated ownership of these certificates; it is recommended that companies consolidate their organization and unit list when possible to help ensure proper management moving forward. + ## GDPR compliance dashboard The GDPR compliance dashboard presents an analysis of assets in your Confirmed Inventory as they relate to the requirements outlined in General Data Protection Regulation (GDPR). GDPR is a regulation in European Union (EU) law that enforces data protection and privacy standards for any online entities accessible to the EU. These regulations have become a model for similar laws outside of the EU, so it serves as an excellent guide on how to handle data privacy worldwide. This dashboard analyzes an organizationΓÇÖs public-facing web properties to surface any assets that are potentially non-compliant with GDPR.
-## Websites by status
+
+### Websites by status
This chart organizes your website assets by HTTP response status code. These codes indicate whether a specific HTTP request has been successfully completed or provides context as to why the site is inaccessible. HTTP codes can also alert you of redirects, server error responses, and client errors. The HTTP response ΓÇ£451ΓÇ¥ indicates that a website is unavailable for legal reasons. This may indicate that a site has been blocked for people in the EU because it does not comply with GDPR.
This section analysis the signature algorithms that power an SSL certificate. SS
Users can click any segment of the pie chart to view a list of assets that comprise the selected value. SHA256 is considered secure, whereas organizations should update any certificates using the SHA1 algorithm.
-## Personal identifiable information (PII) posture
+### Personal identifiable information (PII) posture
The protection of personal identifiable information (PII) is a critical component to the General Data Protection Regulation. PII is defined as any data that can identify an individual, including names, addresses, birthdays, or email addresses. Any website that accepts this data through a form must be thoroughly secured according to GDPR guidelines. By analyzing the Document Object Model (DOM) of your pages, Microsoft identifies forms and login pages that may accept PII and should therefore be assessed according to European Union law.
A login page is a page on a website where a user has the option to enter a usern
A cookie is information in the form of a very small text file that is placed on the hard drive of the computer running a web browser when browsing a site. Each time a website is visited, the browser sends the cookie back to the server to notify the website of your previous activity. GDPR has specific requirements for obtaining consent to issue a cookie, and different storage regulations for first- versus third-party cookies. ++ ## OWASP top 10 dashboard The OWASP Top 10 dashboard is designed to provide insight on the most critical security recommendations as designated by OWASP, a reputable open-source foundation for web application security. This list is globally recognized as a critical resource for developers who want to ensure their code is secure. OWASP provides key information about their top 10 security risks, as well as guidance on how to avoid or remediate the issue. This Defender EASM dashboard looks for evidence of these security risks within your Attack Surface and surfaces them, listing any applicable assets and how to remediate the risk.
The current OWASP Top 10 Critical Securities list includes:
This dashboard provides a description of each critical risk, information on why it matters, and remediation guidance alongside a list of any assets that are potentially impacted. For more information, see the [OWASP website](https://owasp.org/www-project-top-ten/). ++ ## Next Steps - [Understanding asset details](understanding-asset-details.md)
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
This quickstart describes how to use Bicep to create an Azure Front Door Standard/Premium with a Web App as origin.
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure CLI. You'll create this profile using two Web Apps as your origin, and add a WAF security policy. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md
In this quickstart, you'll learn how to create an Azure Front Door profile using
With *Custom create*, you deploy two App services. Then, you create the Azure Front Door profile using the two App services as your origin. Lastly, you'll verify connectivity to your App services using the Azure Front Door frontend hostname.
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
## Prerequisites
frontdoor Create Front Door Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-powershell.md
In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure PowerShell. You'll create this profile using two Web Apps as your origin. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
This quickstart describes how to use Terraform to create a Front Door profile to set up high availability for a web endpoint.
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
The steps in this article were tested with the following Terraform and Terraform provider versions:
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Azure Front Door is Microsoft's modern cloud Content Delivery Network (CDN) th
:::image type="content" source="./media/overview/front-door-overview.png" alt-text="Diagram of Azure Front Door routing user traffic to endpoints." lightbox="./media/overview/front-door-overview-expanded.png":::
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
## Why use Azure Front Door?
governance Built In Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-packages.md
Title: List of built-in built-in packages for guest configuration
+ Title: List of built-in packages for guest configuration
description: List of all built-in packages for guest configuration mapped to each policy definition and the PowerShell modules that are used by each package. Last updated 08/04/2021
hdinsight Apache Hbase Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-replication.md
description: Learn how to set up HBase replication from one HDInsight version to
Previously updated : 09/15/2022 Last updated : 06/14/2023 # Set up Apache HBase cluster replication in Azure virtual networks Learn how to set up [Apache HBase](https://hbase.apache.org/) replication within a virtual network, or between two virtual networks in Azure.
-Cluster replication uses a source-push methodology. An HBase cluster can be a source or a destination, or it can fulfill both roles at once. Replication is asynchronous. The goal of replication is eventual consistency. When the source receives an edit to a column family when replication is enabled, the edit is propagated to all destination clusters. When data is replicated from one cluster to another, the source cluster and all clusters that have already consumed the data are tracked, to prevent replication loops.
+Cluster replication uses a source-push methodology. An HBase cluster can be a source or a destination, or it can fulfill both roles at once. Replication is asynchronous. The goal of replication is eventual consistency. When the source receives an edit to a column family with replication enabled, the edit is propagated to all destination clusters. When data is replicated from one cluster to another, the source cluster and all clusters that have already consumed the data are tracked, to prevent replication loops.
In this article, you set up a source-destination replication. For other cluster topologies, see the [Apache HBase reference guide](https://hbase.apache.org/book.html#_cluster_replication).
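To make the source-push model concrete, the following is a conceptual HBase shell sketch of what an enable-replication setup does under the hood: it registers the destination cluster as a replication peer and turns on replication for a table. The ZooKeeper hostnames, the `/hbase-unsecure` znode parent, and the table name are placeholder assumptions; on HDInsight, the script action described later in this article performs these steps for you.

```bash
# Conceptual sketch only: register the destination cluster as a replication peer
# and enable replication for one table. Hostnames and table name are placeholders.
hbase shell <<'EOF'
add_peer '1', CLUSTER_KEY => "zk0-dest,zk1-dest,zk2-dest:2181:/hbase-unsecure"
enable_table_replication 'table1'
EOF
```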
To install Bind, use the following procedure:
Replace `sshuser` with the SSH user account you specified when creating the DNS virtual machine. > [!NOTE]
- > There are a variety of ways to obtain the `ssh` utility. On Linux, Unix, and macOS, it is provided as part of the operating system. If you are using Windows, consider one of the following options:
+ > There are a variety of ways to obtain the `ssh` utility. On Linux, Unix, and macOS, it's provided as part of the operating system. If you are using Windows, consider one of the following options:
> > * [Azure Cloud Shell](../../cloud-shell/quickstart.md) > * [Bash on Ubuntu on Windows 10](/windows/wsl/about)
To install Bind, use the following procedure:
vnet1DNS.icb0d0thtw0ebifqt0g1jycdxd.ex.internal.cloudapp.net ```
- The `icb0d0thtw0ebifqt0g1jycdxd.ex.internal.cloudapp.net` text is the __DNS suffix__ for this virtual network. Save this value, as it is used later.
+ The `icb0d0thtw0ebifqt0g1jycdxd.ex.internal.cloudapp.net` text is the __DNS suffix__ for this virtual network. Save this value, as it's used later.
You must also find out the DNS suffix from the other DNS server. You need it in the next step.
To install Bind, use the following procedure:
Address: 10.2.0.4 ```
- Until now, you cannot look up the IP address from the other network without specified DNS server IP address.
+ Until now, you couldn't look up the IP address from the other network without specifying the DNS server IP address.
### Configure the virtual network to use the custom DNS server
To ensure the environment is configured correctly, you must be able to ping the
## Load test data
-When you replicate a cluster, you must specify the tables that you want to replicate. In this section, you load some data into the source cluster. In the next section, you will enable replication between the two clusters.
+When you replicate a cluster, you must specify the tables that you want to replicate. In this section, you load some data into the source cluster. In the next section, you'll enable replication between the two clusters.
To create a **Contacts** table and insert some data in the table, follow the instructions at [Apache HBase tutorial: Get started using Apache HBase in HDInsight](apache-hbase-tutorial-get-started-linux.md).
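If you only need a small amount of test data and don't want to walk through the full tutorial, a minimal HBase shell sketch is shown below; the column family names and row values are illustrative only.

```bash
# Create a small Contacts table with two column families and insert one test row.
hbase shell <<'EOF'
create 'Contacts', 'Personal', 'Office'
put 'Contacts', '1000', 'Personal:Name', 'John Dole'
put 'Contacts', '1000', 'Office:Phone', '425-555-0100'
EOF
```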
The following steps describe how to call the script action script from the Azure
1. **Name**: Enter **Enable replication**. 2. **Bash Script URL**: Enter **https://raw.githubusercontent.com/Azure/hbase-utils/master/replication/hdi_enable_replication.sh**.
- 3. **Head**: Ensure this is selected. Clear the other node types.
+ 3. **Head**: Ensure this node type is selected. Clear the other node types.
4. **Parameters**: The following sample parameters enable replication for all existing tables, and then copy all data from the source cluster to the destination cluster: `-m hn1 -s <source hbase cluster name> -d <destination hbase cluster name> -sp <source cluster Ambari password> -dp <destination cluster Ambari password> -copydata`
The following steps describe how to call the script action script from the Azure
> [!NOTE] > Use hostname instead of FQDN for both the source and destination cluster DNS name. >
- > This walkthrough assumes hn1 as active headnode. Please check your cluster to identify the active head node.
+ > This walkthrough assumes hn1 as the active head node. Check your cluster to identify the active head node.
6. Select **Create**. The script can take a while to run, especially when you use the **-copydata** argument.
After the script action is successfully deployed, you can use SSH to connect to
The following list shows you some general usage cases and their parameter settings: -- **Enable replication on all tables between the two clusters**. This scenario does not require copying or migrating existing data in the tables, and it does not use Phoenix tables. Use the following parameters:
+- **Enable replication on all tables between the two clusters**. This scenario doesn't require copying or migrating existing data in the tables, and it doesn't use Phoenix tables. Use the following parameters:
`-m hn1 -s <source hbase cluster name> -d <destination hbase cluster name> -sp <source cluster Ambari password> -dp <destination cluster Ambari password>`
The following list shows you some general usage cases and their parameter settin
`-m hn1 -s <source hbase cluster name> -d <destination hbase cluster name> -sp <source cluster Ambari password> -dp <destination cluster Ambari password> -t "table1;table2;table3" -copydata` -- **Enable replication on all tables, and replicate Phoenix metadata from source to destination**. Phoenix metadata replication is not perfect. Use it with caution. Use the following parameters:
+- **Enable replication on all tables, and replicate Phoenix metadata from source to destination**. Phoenix metadata replication isn't perfect. Use it with caution. Use the following parameters:
`-m hn1 -s <source hbase cluster name> -d <destination hbase cluster name> -sp <source cluster Ambari password> -dp <destination cluster Ambari password> -t "table1;table2;table3" -replicate-phoenix-meta`
+### Set up replication between ESP clusters
+
+**Prerequisites**
+1. Both ESP clusters must be in the same realm (domain). To confirm, check the default realm property in the `/etc/krb5.conf` file.
+1. A common user with read and write access to both clusters is required.
+ 1. For example, if both clusters have the same cluster admin user (for example, `admin@abc.example.com`), that user can be used to run the replication script.
+ 1. If both clusters use the same user group, you can add a new user or use an existing user from the group.
+ 1. If both clusters use different user groups, you can add a new user to both groups or use an existing user from the groups.
+
+**Steps to execute the replication script**
+
+> [!NOTE]
+> Perform the following steps only if DNS can't correctly resolve the hostname of the destination cluster.
+> 1. Copy the sink cluster hosts' IP and hostname mappings to the /etc/hosts file of the source cluster nodes.
+> 1. Copy the head node, worker node, and ZooKeeper node host and IP mappings from the /etc/hosts file of the destination (sink) cluster.
+> 1. Add the copied entries to the source cluster's /etc/hosts file on the head nodes, worker nodes, and ZooKeeper nodes.
+
+**Step 1**
+Create a keytab file for the user by using `ktutil`.
+`$ ktutil`
+1. `addent -password -p admin@ABC.EXAMPLE.COM -k 1 -e RC4-HMAC`
+1. When prompted for a password, enter the user's password to authenticate.
+1. `wkt /etc/security/keytabs/admin.keytab`
+
+> [!NOTE]
+> Make sure the keytab file is stored in the `/etc/security/keytabs/` folder and is named in the `<username>.keytab` format.
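For reference, the whole keytab step looks like the following shell session. The `admin@ABC.EXAMPLE.COM` principal is the placeholder user from the steps above, and the `klist` check at the end is an optional verification.

```bash
# Run ktutil interactively; it prompts for the user's password at the addent step.
sudo ktutil
#   ktutil:  addent -password -p admin@ABC.EXAMPLE.COM -k 1 -e RC4-HMAC
#   Password for admin@ABC.EXAMPLE.COM:  ********
#   ktutil:  wkt /etc/security/keytabs/admin.keytab
#   ktutil:  quit

# Optionally verify the principal stored in the keytab before running the script action.
sudo klist -kt /etc/security/keytabs/admin.keytab
```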
+
+**Step 2**
+Run the script action with the `-ku` option.
+1. Provide `-ku <username>` when you run the script action on ESP clusters.
+
+|Name|Description|
+|-|--|
+|`-ku, --krb-user` | For ESP clusters, the common Kerberos user who can authenticate to both the source and destination clusters.|
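Putting it together, an ESP replication run uses the same parameters as the earlier examples with `-ku` appended. The following is a sketch of the **Parameters** value, assuming the keytab was created for the `admin` user; the cluster names and passwords are placeholders.

```
-m hn1 -s <source hbase cluster name> -d <destination hbase cluster name> -sp <source cluster Ambari password> -dp <destination cluster Ambari password> -copydata -ku admin
```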
+ ## Copy and migrate data There are two separate script action scripts available for copying or migrating data after replication is enabled:
hdinsight Hdinsight Restrict Public Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-public-connectivity.md
By default, the HDInsight resource provider uses an *inbound* connection to the
In this configuration, without an inbound connection, there's no need to configure inbound service tags in the network security group. There's also no need to bypass the firewall or network virtual appliance via user-defined routes.
+> [!NOTE]
+> Implementations in Microsoft Azure Government may still require the inbound service tags in the network security group and user defined routes.
+ After you create your cluster, set up proper DNS resolution by adding DNS records that are needed for your restricted HDInsight cluster. The following canonical name DNS record (CNAME) is created in the Azure-managed public DNS zone: `azurehdinsight.net`. ```dns
Configuring `resourceProviderConnection` to *outbound* also allows you to access
- SQL metastores: Apache Ranger, Ambari, Oozie, and Hive - Azure Key Vault
-It isn't mandatory to use private endpoints for these resources. But if you plan to use private endpoints for these resources, you must create the resources and configure the private endpoints and DNS entries before you create the HDInsight cluster. All these resources should be accessible from inside the cluster subnet, either through a private endpoint or otherwise.
+It isn't mandatory to use private endpoints for these resources. But if you plan to use private endpoints for these resources, you must create the resources and configure the private endpoints and DNS entries before you create the HDInsight cluster. All these resources should be accessible from inside the cluster subnet, either through a private endpoint or otherwise. If you plan to use a private endpoint, we recommend that you use the cluster subnet.
When you connect to Azure Data Lake Storage Gen2 over a private endpoint, make sure that the Gen2 storage account has an endpoint set for both `blob` and `dfs`. For more information, see [Create a private endpoint](../private-link/create-private-endpoint-portal.md).
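For illustration, here's a hedged Azure CLI sketch that creates private endpoints for both the `blob` and `dfs` sub-resources of a Data Lake Storage Gen2 account. The resource group, virtual network, subnet, and storage account names are placeholders, and DNS configuration for the endpoints isn't shown.

```bash
# Look up the storage account resource ID, then create one private endpoint
# per sub-resource (blob and dfs) in the cluster subnet.
STORAGE_ID=$(az storage account show --name <storageaccount> --resource-group <rg> --query id -o tsv)

az network private-endpoint create --resource-group <rg> --name pe-storage-blob \
  --vnet-name <vnet> --subnet <cluster-subnet> \
  --private-connection-resource-id "$STORAGE_ID" --group-id blob \
  --connection-name pe-storage-blob-connection

az network private-endpoint create --resource-group <rg> --name pe-storage-dfs \
  --vnet-name <vnet> --subnet <cluster-subnet> \
  --private-connection-resource-id "$STORAGE_ID" --group-id dfs \
  --connection-name pe-storage-dfs-connection
```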
hdinsight Apache Spark Troubleshoot Job Slowness Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-job-slowness-container.md
Title: Apache Spark slow when Azure HDInsight storage has many files
description: Apache Spark job runs slowly when the Azure storage container contains many files in Azure HDInsight Previously updated : 05/26/2022 Last updated : 06/14/2023 # Apache Spark job runs slowly when the Azure storage container contains many files in Azure HDInsight
This article describes troubleshooting steps and possible resolutions for issues
## Issue
-When running an HDInsight cluster, the Apache Spark job that writes to Azure storage container becomes slow when there are many files/sub-folders. For example, it takes 20 seconds when writing to a new container, but about 2 minutes when writing to a container that has 200k files.
+When you run an HDInsight cluster, the Apache Spark job that writes to an Azure storage container becomes slow when there are many files/sub-folders. For example, it takes 20 seconds when writing to a new container, but about 2 minutes when writing to a container that has 200k files.
## Cause
-This is a known Spark issue. The slowness comes from the `ListBlob` and `GetBlobProperties` operations during Spark job execution.
+This behavior is a known Spark issue. The slowness comes from the `ListBlob` and `GetBlobProperties` operations during Spark job execution.
-To track partitions, Spark has to maintain a `FileStatusCache` which contains info about directory structure. Using this cache, Spark can parse the paths and be aware of available partitions. The benefit of tracking partitions is that Spark only touches the necessary files when you read data. To keep this information up-to-date, when you write new data, Spark has to list all files under the directory and update this cache.
+To track partitions, Spark has to maintain a `FileStatusCache` that contains information about the directory structure. Using this cache, Spark can parse the paths and be aware of available partitions. The benefit of tracking partitions is that Spark only touches the necessary files when you read data. To keep this information up-to-date, when you write new data, Spark has to list all files under the directory and update this cache.
-In Spark 2.1, while we do not need to update the cache after every write, Spark will check whether an existing partition column matches with the proposed one in the current write request, so it will also lead to listing operations at the beginning of every write.
+In Spark 2.1, although the cache doesn't need to be updated after every write, Spark checks whether an existing partition column matches the proposed one in the current write request. So a listing operation still occurs at the beginning of every write.
In Spark 2.2, when writing data with append mode, this performance problem should be fixed.
In Spark 2.3, the same behavior as Spark 2.2 is expected.
## Resolution
-When you create a partitioned data set, it is important to use a partitioning scheme that will limit the number of files that Spark has to list to update the `FileStatusCache`.
+When you create a partitioned data set, it's important to use a partitioning scheme that limits the number of files that Spark has to list to update the `FileStatusCache`.
-For every Nth micro batch where N % 100 == 0 (100 is just an example), move existing data to another directory, which can be loaded by Spark.
+For every Nth micro batch where N % 100 == 0 (100 is just an example), move existing data to another directory that Spark can still load.
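A minimal shell sketch of that workaround is shown below, assuming a hypothetical hot output directory at `/data/output` and an archive directory that downstream Spark jobs can still read.

```bash
# Every Nth micro batch (for example, every 100th), move already-processed files
# out of the hot output directory so Spark has fewer files to list on each write.
hdfs dfs -mkdir -p /data/output-archive
hdfs dfs -mv '/data/output/part-*' /data/output-archive/
```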
## Next steps
iot-develop Quickstart Devkit Microchip Atsame54 Xpro Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro-iot-hub.md
ms.devlang: c Previously updated : 05/11/2022-
-#- id: iot-develop-toolset
-## Owner: timlt
-# Title: IoT Devices
-# prompt: Choose a build environment
-# - id: iot-toolset-mplab
-# Title: MPLAB
Last updated : 06/20/2023+ #Customer intent: As a device builder, I want to see a working IoT device sample connecting to IoT Hub and sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
Keep Termite open to monitor device output in the following steps.
## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the Microchip E54. These capabilities rely on the device model published for the Microchip E54 in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the Microchip E54. These capabilities rely on the device model published for the Microchip E54 in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart.
To access IoT Plug and Play components for the device in IoT Explorer:
To delete a resource group by name:
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the Microchip E54 device. You connected the Microchip E54 to Azure IoT Hub, and carried out tasks such as viewing telemetry and calling a method on the device.
-As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices and embedded devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
> [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot Iot Mqtt Connect To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-connect-to-iot-hub.md
For more information, see [Send device-to-cloud and cloud-to-device messages wit
## Receiving cloud-to-device messages
-To receive messages from IoT Hub, a device should subscribe using `devices/{device-id}/messages/devicebound/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive more properties in the topic name. IoT Hub doesn't allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since IoT Hub isn't a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters.
+To receive messages from IoT Hub, a device should subscribe using `devices/{device-id}/messages/devicebound/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive more properties in the topic name. IoT Hub doesn't allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since IoT Hub isn't a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters. A device can only subscribe to five topics at a time.
The device doesn't receive any messages from IoT Hub until it has successfully subscribed to its device-specific endpoint, represented by the `devices/{device-id}/messages/devicebound/#` topic filter. After a subscription has been established, the device receives cloud-to-device messages that were sent to it after the time of the subscription. If the device connects with **CleanSession** flag set to **0**, the subscription is persisted across different sessions. In this case, the next time the device connects with **CleanSession 0** it receives any outstanding messages sent to it while disconnected. If the device uses **CleanSession** flag set to **1** though, it doesn't receive any messages from IoT Hub until it subscribes to its device-endpoint.
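As an illustration, the following is a hedged sketch that subscribes to the cloud-to-device topic filter with the Mosquitto command-line client. The hub name, device ID, API version, and SAS token are placeholders, and you must generate a valid SAS token for the device separately.

```bash
# Subscribe to cloud-to-device messages over MQTT (TLS on port 8883).
# The client ID must be the device ID; the username follows the IoT Hub MQTT convention.
mosquitto_sub \
  -h <your-hub>.azure-devices.net -p 8883 \
  -i <device-id> \
  -u '<your-hub>.azure-devices.net/<device-id>/?api-version=2021-04-12' \
  -P '<device SAS token>' \
  -t 'devices/<device-id>/messages/devicebound/#' \
  --cafile /etc/ssl/certs/ca-certificates.crt \
  -V mqttv311 -d
```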
lab-services Account Setup Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/account-setup-guide.md
To create a lab account, you need access to an Azure subscription that's already
It's important to know how many [virtual machines (VMs) and which VM sizes](./administrator-guide-1.md#vm-sizing) your school lab requires.
-For guidance on structuring your labs and images, see the blog post [Moving from a physical lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931).
+For guidance on structuring your labs and images, see [Moving from a physical lab to Azure Lab Services](./concept-migrating-physical-labs.md).
For more information on how to structure labs, see the "Lab" section of [Azure Lab Services - Administrator guide](./administrator-guide-1.md#lab).
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
There are a few key points to highlight as part of this solution:
- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you'll need to unblock several Azure host names with the 3rd party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, the auto-shutdown settings will fail to enable for the lab. - You may also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. Adding a non-admin account must be done when creating the lab.
+Learn more about the [supported networking scenarios in Azure Lab Services](./concept-lab-services-supported-networking-scenarios.md), such as content filtering.
+ If your school needs to do content filtering, contact us via the [Azure Lab Services' Q&A](https://aka.ms/azlabs/questions) for more information. ## Endpoint management
lab-services Class Type Arcgis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-arcgis.md
One type of licensing that ArcGIS Desktop offers is [concurrent use licenses](ht
The license server is either located in your on-premises network or hosted on an Azure virtual machine within an Azure virtual network. After your license server is set up, you'll need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) with your lab plan. > [!IMPORTANT]
-> [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later.
+> [Advanced networking](how-to-connect-vnet-injection.md) must be enabled during the creation of your lab plan. It can't be added later.
For more information, see [Set up a license server as a shared resource](how-to-create-a-lab-with-shared-resource.md). ## Lab configuration
-When you get an Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./quick-create-resources.md). If you're using a ArcGIS License Manager on a license server, enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plan. You can also use an existing lab plan.
+When you get an Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./quick-create-resources.md). If you're using an ArcGIS License Manager on a license server, enable [advanced networking](how-to-connect-vnet-injection.md) when creating your lab plan. You can also use an existing lab plan.
### Lab plan settings
lab-services Class Type Autodesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-autodesk.md
You need to access a license server if you plan to use the Autodesk network lice
To use network licensing with Autodesk software, [AutoDesk provides detailed steps](https://knowledge.autodesk.com/customer-service/network-license-administration/install-and-configure-network-license) to install Autodesk Network License Manager on your license server. You can host the license server in your on-premises network, or on an Azure virtual machine (VM) within an Azure virtual network.
-After setting up your license server, you need to enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when you create the lab plan.
+After setting up your license server, you need to enable [advanced networking](how-to-connect-vnet-injection.md) when you create the lab plan.
Autodesk-generated license files embed the MAC address of the license server. If you decide to host your license server by using an Azure VM, it's important to make sure that your license server's MAC address doesn't change. If the MAC address changes, you need to regenerate your licensing files. To prevent your MAC address from changing:
Autodesk-generated license files embed the MAC address of the license server. I
For more information, see [Set up a license server as a shared resource](./how-to-create-a-lab-with-shared-resource.md). > [!IMPORTANT]
-> You must enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plan. You can't enable advanced networking for an existing lab plan.
+> You must enable [advanced networking](how-to-connect-vnet-injection.md) when creating your lab plan. You can't enable advanced networking for an existing lab plan.
## Lab configuration
lab-services Class Type Matlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-matlab.md
For detailed instructions on how to install a licensing server, see [Install Lic
Assuming the license server is located in an on-premises network or a private network within Azure, you'll need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) when creating your [lab plan](./quick-create-resources.md). > [!IMPORTANT]
-> [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later.
+> [Advanced networking](how-to-connect-vnet-injection.md) must be enabled during the creation of your lab plan. It can't be added later.
## Lab configuration
-Once you have an Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./quick-create-resources.md). If you're using a [Network License Manager](https://www.mathworks.com/help/install/administer-network-licenses.html) on a license server, enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plan. You can also use an existing lab plan.
+Once you have an Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./quick-create-resources.md). If you're using a [Network License Manager](https://www.mathworks.com/help/install/administer-network-licenses.html) on a license server, enable [advanced networking](how-to-connect-vnet-injection.md) when creating your lab plan. You can also use an existing lab plan.
### Lab settings
lab-services Class Type Pltw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-pltw.md
To use network licensing with Autodesk software, [PLTW provides detailed steps](
After your license server is set up, you need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) when creating your [lab plan](./quick-create-resources.md). > [!IMPORTANT]
-> [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later.
+> [Advanced networking](how-to-connect-vnet-injection.md) must be enabled during the creation of your lab plan. It can't be added later.
Autodesk-generated license files embed the MAC address of the license server. If you decide to host your license server by using an Azure VM, it's important to ensure that your license server's MAC address doesn't change. If the MAC address changes, you'll need to regenerate your licensing files. Following are the steps to prevent your MAC address from changing:
For more information, see [Set up a license server as a shared resource](./how-t
Once you have an Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./quick-create-resources.md).
-After you set up a lab plan, create a separate lab for each PLTW class session that your school offers. We also recommend that you create separate images for each type of PLTW class. For more information about how to structure your labs and images, see the blog post [Moving from a Physical Lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931).
+After you set up a lab plan, create a separate lab for each PLTW class session that your school offers. We also recommend that you create separate images for each type of PLTW class. For more information about how to structure your labs and images, see [Moving from a Physical Lab to Azure Lab Services](./concept-migrating-physical-labs.md).
### Lab plan settings
lab-services Class Type Rstudio Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-linux.md
If you choose to have a shared R Server for the students, the server should be s
If you choose to use any external resources, you need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) with your lab plan. > [!IMPORTANT]
-> [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later.
+> [Advanced networking](how-to-connect-vnet-injection.md) must be enabled during the creation of your lab plan. It can't be added later.
### Lab plan settings
lab-services Class Type Rstudio Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md
If you choose to have a shared R Server for the students, the server should be s
If you choose to use any external resources, you'll need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) with your lab plan. > [!IMPORTANT]
-> [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later.
+> [Advanced networking](how-to-connect-vnet-injection.md) must be enabled during the creation of your lab plan. It can't be added later.
### Lab plan settings
lab-services Class Type Solidworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-solidworks.md
SOLIDWORKS Network Licensing requires that you have installed and activated Soli
1. After you set up the license server, follow these steps to [connect your lab plan to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md). > [!IMPORTANT]
- > You need to enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) during the creation of your lab plan. You can't configure the lab plan's virtual network at a later stage.
+ > You need to enable [advanced networking](how-to-connect-vnet-injection.md) during the creation of your lab plan. You can't configure the lab plan's virtual network at a later stage.
1. Verify that the appropriate ports are opened on your firewalls to allow communication between the lab virtual machines and the license server.
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md
To use a shared resource, such as an Azure SQL Database, in Azure Lab Services,
To use any external resources, you need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) with your lab plan. > [!IMPORTANT]
- > [Advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) must be enabled during the creation of your lab plan. It can't be added later.
+ > [Advanced networking](how-to-connect-vnet-injection.md) must be enabled during the creation of your lab plan. It can't be added later.
1. Create a [single database](/azure/azure-sql/database/single-database-create-quickstart?tabs=azure-portal) in Azure SQL:
lab-services Concept Lab Services Supported Networking Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-supported-networking-scenarios.md
+
+ Title: Supported networking scenarios
+
+description: Learn about the supported networking scenarios and architectures for lab plans in Azure Lab Services.
+++++ Last updated : 06/20/2023++
+# Supported networking scenarios for lab plans in Azure Lab Services
+
+With Azure Lab Services advanced networking for lab plans, you can implement various network architectures and topologies. This article lists different networking scenarios and their support in Azure Lab Services.
+
+## Networking scenarios
+
+The following table lists common networking scenarios and topologies and their support in Azure Lab Services.
+
+| Scenario | Enabled | Details |
+| -- | - | - |
+| Lab-to-lab communication | Yes | Learn more about [setting up lab-to-lab communication](./tutorial-create-lab-with-advanced-networking.md). If lab users need multiple virtual machines, you can [configure nested virtualization](./concept-nested-virtualization-template-vm.md). |
+| Open additional ports to the lab VM | No | You can't open additional ports on the lab VM, even with advanced networking.<br/><br/>A workaround solution is to use PowerShell or the Azure SDK to manually add the NAT rules for every VM in the lab (every private IP address). This solution isn't recommended because of the limit on the number of allowed rules for load balancers, and because it's unclear to lab users what the port number for their lab VM is. |
+| Enable distant license server, such as on-premises, cross-region | Yes | Add a [user defined route (UDR)](/azure/virtual-network/virtual-networks-udr-overview) that points to the license server (see the route table sketch after this table).<br/><br/>If the lab software requires connecting to the license server by its name instead of the IP address, you need to [configure a customer-provided DNS server](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances?tabs=redhat#name-resolution-that-uses-your-own-dns-server) or add an entry to the `hosts` file in the lab template.<br/><br/>If multiple services need access to the license server, if you use it from multiple regions, or if the license server is part of other infrastructure, you can use the [hub-and-spoke Azure networking best practice](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology). |
+| Access to on-premises resources, such as a license server | Yes | You can access on-premises resources with these options: <br/>- Configure [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) or create a [site-to-site VPN connection](/azure/vpn-gateway/tutorial-site-to-site-portal) (bridge the networks).<br/>- Add a public IP to your on-premises server with a firewall that only allows incoming connections from Azure Lab Services.<br/><br/>In addition, to reach the on-premises resources from the lab VMs, add a [user defined route (UDR)](/azure/virtual-network/virtual-networks-udr-overview). |
+| Use a [hub-and-spoke networking model](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology) | Yes | This scenario works as expected with lab plans and advanced networking. <br/><br/>A number of configuration changes aren't supported with Azure Lab Services, such as adding a default route on a route table. Learn about the [unsupported virtual network configuration changes](./how-to-connect-vnet-injection.md#4-optional-update-the-networking-configuration-settings). |
+| Access lab VMs by private IP address (private-only labs) | Not recommended | This scenario is functional, but makes it difficult for lab users to connect to their lab VM. In the Azure Lab Services website, lab users can't identify the private IP address of their lab VM. In addition, the connect button points to the public endpoint of the lab VM. The lab creator needs to provide lab users with the private IP address of their lab VMs. After a VM reset, this private IP address might change.<br/><br/>If you implement this scenario, don't delete the public IP address or load balancer associated with the lab. If those resources are deleted, the lab fails to scale or publish. |
+| Protect on-premises resources with a firewall | Yes | Putting a firewall between the lab VMs and a specific resource is supported. |
+| Put lab VMs behind a firewall. For example for content filtering, security, and more. | No | The typical firewall setup doesn't work with Azure Lab Services, unless when connecting to lab VMs by private IP address (see previous scenario).<br/><br/>When you set up the firewall, a default route is added on the route table for the subnet. This default route introduces an asymmetric routing problem, which breaks the RDP/SSH connections to the lab. |
+| Use third party over-the-shoulder monitoring software | Yes | This scenario is supported with advanced networking for lab plans. |
+| Use a custom domain name for labs, for example `lab1.labs.myuniversity.edu.au` | No | This scenario doesn't work because the FQDN is defined upon creation of the lab, based on the public IP address of the lab. Changes to the public IP address aren't propagated to the connect button for the template VM or the lab VMs. |
+| Enable forced-tunneling for labs, where all communication to lab VMs doesn't go over the public internet. This is also known as *fully isolated labs*. | No | This scenario doesn't work out of the box. As soon as you associate a route table with the subnet that contains a default route, lab users lose connectivity to the lab.<br/>To enable this scenario, follow the steps for accessing lab VMs by private IP address. |
+| Enable content filtering | Depends | Supported content filtering scenarios:<br/>- Third-party content filtering software on the lab VM:<br/>&nbsp;&nbsp;&nbsp;&nbsp;1. Lab users should run as nonadmin to avoid uninstalling or disabling the software.<br/>&nbsp;&nbsp;&nbsp;&nbsp;2. Ensure that outbound calls to Azure aren't blocked.<br/><br/>- DNS-based content filtering: filtering works with advanced networking and specifying the DNS server on the lab's subnet. You can use a DNS server that supports content filtering to do DNS-based filtering.<br/><br/>- Proxy-based content filtering: filtering works with advanced networking if the lab VMs can use a customer-provided proxy server that supports content filtering. It works similarly to the DNS-based solution.<br/><br/>Unsupported content filtering:<br/>- Network appliance (firewall): for more information, see the scenario for putting lab VMs behind a firewall.<br/><br/>When planning a content filtering solution, implement a proof of concept to ensure that everything works as expected end to end. |
+| Use a connection broker, such as Parsec, for high-framerate gaming scenarios | Not recommended | This scenario isn't directly supported with Azure Lab Services and would run into the same challenges as accessing lab VMs by private IP address. |
+| *Cyber field* scenario, consisting of a set of vulnerable VMs on the network for lab users to discover and hack into (ethical hacking) | Yes | This scenario works with advanced networking for lab plans. Learn about the [ethical hacking class type](./class-type-ethical-hacking.md). |
+| Enable using Azure Bastion for lab VMs | No | Azure Bastion isn't supported in Azure Lab Services. |
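For the license server and on-premises access scenarios above, a hedged Azure CLI sketch of adding a user defined route is shown here. The resource names and the license server IP address are placeholders, and the route deliberately targets only the license server's address; adding a default route (0.0.0.0/0) to this subnet breaks RDP/SSH connectivity, as noted in the table.

```bash
# Create a route table with a single route that sends license server traffic
# through the virtual network gateway, then attach it to the lab subnet.
az network route-table create --resource-group <rg> --name rt-labs
az network route-table route create --resource-group <rg> --route-table-name rt-labs \
  --name to-license-server --address-prefix 10.100.0.5/32 \
  --next-hop-type VirtualNetworkGateway
az network vnet subnet update --resource-group <rg> --vnet-name <vnet> --name <labs-subnet> \
  --route-table rt-labs
```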
+
+## Next steps
+
+- [Connect a lab plan to virtual network with advanced networking](./how-to-connect-vnet-injection.md)
+- [Tutorial: Set up lab-to-lab communication](./tutorial-create-lab-with-advanced-networking.md)
+- Provide feedback or request new features on the [Azure Lab Services community site](https://feedback.azure.com/d365community/forum/502dba10-7726-ec11-b6e6-000d3a4f032c)
lab-services How To Connect Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md
Title: Connect to a virtual network
+ Title: Connect a lab plan to a virtual network
-description: Learn how to connect a lab plan in Azure Lab Services to a virtual network with advanced networking. Advanced networking uses VNET injection.
+description: Learn how to connect a lab plan to a virtual network with Azure Lab Services advanced networking. Advanced networking uses VNET injection to add lab virtual machines in your virtual network.
Previously updated : 04/25/2023 Last updated : 06/20/2023
-# Connect a virtual network to a lab plan with advanced networking in Azure Lab Services
+# Connect a lab plan to a virtual network with advanced networking
+This article describes how to connect a lab plan to a virtual network with Azure Lab Services advanced networking. With advanced networking, you have more control over the virtual network configuration of your labs. For example, you can connect to on-premises resources such as licensing servers, or use user-defined routes (UDRs). Learn more about the [supported networking scenarios and topologies for advanced networking](./concept-lab-services-supported-networking-scenarios.md).
-This article describes how to connect a lab plan to a virtual network in Azure Lab Services. With lab plans, you have more control over the virtual network for labs by using advanced networking. You can connect to on premise resources such as licensing servers and use user defined routes (UDRs).
+Advanced networking for lab plans replaces [Azure Lab Services virtual network peering](how-to-connect-peer-virtual-network.md) that is used with lab accounts.
-Some organizations have advanced network requirements and configurations that they want to apply to labs. For example, network requirements can include a network traffic control, ports management, access to resources in an internal network, and more. Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, network security group, and the lab plan all must be in the same region.
+Follow these steps to configure advanced networking for your lab plan:
-Azure Lab Services advanced networking uses virtual network (VNET) injection to connect a lab plan to your virtual network. VNET injection replaces the [Azure Lab Services virtual network peering](how-to-connect-peer-virtual-network.md) that was used with lab accounts.
+1. Delegate the virtual network subnet to Azure Lab Services lab plans. Delegation allows Azure Lab Services to create the lab template and lab virtual machines in the virtual network.
+1. Configure the network security group to allow inbound RDP or SSH traffic to the lab template virtual machine and lab virtual machines.
+1. Create a lab plan with advanced networking to associate it with the virtual network subnet.
+1. (Optional) Configure your virtual network.
-> [!IMPORTANT]
-> You must configure advanced networking when you create a lab plan. You can't enable advanced networking at a later stage.
+Advanced networking can only be enabled when creating a lab plan. Advanced networking is not a setting that can be updated later.
-> [!NOTE]
-> If your organization needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+The following diagram shows an overview of the Azure Lab Services advanced networking configuration. The lab template and lab virtual machines are assigned an IP address in your subnet, and the network security group allows lab users to connect to the lab VMs by using RDP or SSH.
-## Prerequisites
-- An Azure virtual network and subnet. If you don't have this resource, learn how to create a [virtual network](/azure/virtual-network/manage-virtual-network) and [subnet](/azure/virtual-network/virtual-network-manage-subnet).
+> [!NOTE]
+> If your organization needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering in the supported networking scenarios](./concept-lab-services-supported-networking-scenarios.md).
- > [!IMPORTANT]
- > The virtual network and the lab plan must be in the same Azure region.
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Your Azure account has the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network.
+- An Azure virtual network and subnet in the same Azure region as where you create the lab plan. Learn how to create a [virtual network](/azure/virtual-network/manage-virtual-network) and [subnet](/azure/virtual-network/virtual-network-manage-subnet).
+- The subnet has enough free IP addresses for the template VMs and lab VMs for all labs (each lab uses 512 IP addresses) in the lab plan.
-## Delegate the virtual network subnet to lab plans
+## 1. Delegate the virtual network subnet
-To use your virtual network subnet for advanced networking in Azure Lab Services, you need to [delegate the subnet](../virtual-network/subnet-delegation-overview.md) to Azure Lab Services lab plans.
+To use your virtual network subnet for advanced networking in Azure Lab Services, you need to [delegate the subnet](/azure/virtual-network/subnet-delegation-overview) to Azure Lab Services lab plans. Subnet delegation gives explicit permissions to Azure Lab Services to create service-specific resources, such as lab virtual machines, in the subnet.
You can delegate only one lab plan at a time for use with one subnet.
Follow these steps to delegate your subnet for use with a lab plan:
1. Go to your virtual network, and select **Subnets**.
-1. Select the subnet you wish to delegate to Azure Lab Services.
+1. Select a dedicated subnet you wish to delegate to Azure Lab Services.
> [!IMPORTANT]
- > You can't use a VNET Gateway subnet with Azure Lab Services.
+ > The subnet you use for Azure Lab Services should not already be used for a VNET gateway or Azure Bastion.
-1. In **Delegate subnet to a service**, select **Microsoft.LabServices/labplans**, and then select **Save**.
+1. In **Delegate subnet to a service**, select *Microsoft.LabServices/labplans*, and then select **Save**.
:::image type="content" source="./media/how-to-connect-vnet-injection/delegate-subnet-for-azure-lab-services.png" alt-text="Screenshot of the subnet properties page in the Azure portal, highlighting the Delegate subnet to a service setting.":::
-1. Verify the lab plan service appears in the **Delegated to** column.
+1. Verify that *Microsoft.LabServices/labplans* appears in the **Delegated to** column for your subnet.
:::image type="content" source="./media/how-to-connect-vnet-injection/delegated-subnet.png" alt-text="Screenshot of list of subnets for a virtual network in the Azure portal, highlighting the Delegated to columns." lightbox="./media/how-to-connect-vnet-injection/delegated-subnet.png":::
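If you prefer scripting, a minimal Azure CLI sketch of the same delegation follows; the resource group, virtual network, and subnet names are placeholders.

```bash
# Delegate the subnet to Azure Lab Services lab plans, then confirm the delegation.
az network vnet subnet update --resource-group <rg> --vnet-name <vnet> --name <labs-subnet> \
  --delegations Microsoft.LabServices/labplans

az network vnet subnet show --resource-group <rg> --vnet-name <vnet> --name <labs-subnet> \
  --query "delegations[].serviceName" -o tsv
```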
-## Configure a network security group
+## 2. Configure a network security group
+When you connect your lab plan to a virtual network, you need to configure a network security group (NSG) to allow inbound RDP/SSH traffic from the user's computer to the template virtual machine and the lab virtual machines. An NSG contains access control rules that allow or deny traffic based on traffic direction, protocol, source address and port, and destination address and port.
+
+The rules of an NSG can be changed at any time, and changes are applied to all associated instances. It might take up to 10 minutes for the NSG changes to be effective.
-An NSG is required when you connect your virtual network to Azure Lab Services. Specifically, configure the NSG to allow:
+> [!IMPORTANT]
+> If you don't configure a network security group, you won't be able to access the lab template VM and lab VMs via RDP or SSH.
+
+The network security group configuration for advanced networking consists of two steps:
-- inbound RDP/SSH traffic from lab users' computer to the lab virtual machines-- inbound RDP/SSH traffic to the template virtual machine
+1. Create a network security group that allows RDP/SSH traffic
+1. Associate the network security group with the virtual network subnet
-After creating the NSG, you associate the NSG with the virtual network subnet.
+
+You can use an NSG to control traffic to one or more virtual machines (VMs), role instances, network adapters (NICs), or subnets in your virtual network.
+
+For more information about NSGs, visit [what is an NSG](/azure/virtual-network/network-security-groups-overview).
### Create a network security group to allow traffic Follow these steps to create an NSG and allow inbound RDP or SSH traffic:
-1. If you don't have a network security group already, follow these steps to [create a network security group (NSG)](/azure/virtual-network/manage-network-security-group).
+1. If you don't have a network security group yet, follow these steps to [create a network security group (NSG)](/azure/virtual-network/manage-network-security-group).
Make sure to create the network security group in the same Azure region as the virtual network and lab plan.
Follow these steps to create an NSG and allow inbound RDP or SSH traffic:
| **Priority** | Enter *1000*. The priority must be higher than other *Deny* rules for RDP or SSH. |
| **Name** | Enter *AllowRdpSshForLabs*. |
- :::image type="content" source="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" lightbox="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" alt-text="Screenshot of Add inbound rule window for network security group in the Azure portal.":::
- 1. Select **Add** to add the inbound security rule to the NSG.
- 1. Select **Refresh**. The new rule should show in the list of rules.
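You can also create the NSG and the inbound rule with the Azure CLI. The following is a sketch under assumptions: the resource names are placeholders, the source address prefix of `*` is an example you should tighten for your environment, and the destination ports cover SSH (22) and RDP (3389):

```azurecli
# Create the network security group in the same region as the virtual network and lab plan
az network nsg create \
    --resource-group <resource-group-name> \
    --name <nsg-name> \
    --location <region>

# Add an inbound rule that allows SSH (22) and RDP (3389) traffic
az network nsg rule create \
    --resource-group <resource-group-name> \
    --nsg-name <nsg-name> \
    --name AllowRdpSshForLabs \
    --priority 1000 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes "*" \
    --destination-port-ranges 22 3389
```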
- ### Associate the subnet with the network security group
-To apply the network security group rules to traffic in the virtual network subnet, associate the NSG with the subnet.
+Next, associate the NSG with the virtual network subnet to apply its traffic rules to that subnet.
1. Go to your network security group, and select **Subnets**.
To apply the network security group rules to traffic in the virtual network subn
1. Select **OK** to associate the virtual network subnet with the network security group.
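The association can also be scripted with the Azure CLI; a sketch, using the same placeholder names as before:

```azurecli
# Associate the NSG with the subnet that is delegated to Azure Lab Services
az network vnet subnet update \
    --resource-group <resource-group-name> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --network-security-group <nsg-name>
```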
-Lab users and lab managers can now connect to their lab virtual machines or lab template by using RDP or SSH.
-
-## Connect the virtual network during lab plan creation
+## 3. Create a lab plan with advanced networking
-You can now create the lab plan and connect it to the virtual network. As a result, the template VM and lab VMs are injected in your virtual network.
+Now that you've configured the subnet and network security group, you can create the lab plan with advanced networking. When you create a new lab on the lab plan, Azure Lab Services creates the lab template and lab virtual machines in the virtual network subnet.
> [!IMPORTANT] > You must configure advanced networking when you create a lab plan. You can't enable advanced networking at a later stage.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+To create a lab plan with advanced networking in the Azure portal:
-1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for **lab plan**. (**Lab plan** can also be found under the **DevOps** category.)
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal, and search for **lab plan**.
1. Enter the information on the **Basics** tab of the **Create a lab plan** page. For more information, see [Create a lab plan with Azure Lab Services](quick-create-resources.md).
-1. Select the **Networking** tab, and then select **Enable advanced networking**.
+1. In the **Networking** tab, select **Enable advanced networking** to configure the virtual network subnet.
1. For **Virtual network**, select your virtual network. For **Subnet**, select your virtual network subnet.
- If your virtual network doesn't appear in the list, verify that the lab plan is in the same Azure region as the virtual network.
+ If your virtual network doesn't appear in the list, verify that the lab plan is in the same Azure region as the virtual network, that you've [delegated the subnet to Azure Lab Services](#1-delegate-the-virtual-network-subnet), and that your [Azure account has the necessary permissions](#prerequisites).
:::image type="content" source="./media/how-to-connect-vnet-injection/create-lab-plan-advanced-networking.png" alt-text="Screenshot of the Networking tab of the Create a lab plan wizard."::: 1. Select **Review + Create** to create the lab plan with advanced networking.
-All labs you create for this lab plan can now use the specified subnet.
+ Lab users and lab managers can now connect to their lab virtual machines or lab template virtual machine by using RDP or SSH.
+
+ When you create a new lab, all virtual machines are created in the virtual network and assigned an IP address within the subnet range.
+
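If the virtual network doesn't appear when you enable advanced networking, one quick check is to confirm the subnet delegation from the Azure CLI. A sketch, assuming the same placeholder names used earlier:

```azurecli
# List the service delegations on the subnet; expect Microsoft.LabServices/labplans
az network vnet subnet show \
    --resource-group <resource-group-name> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --query "delegations[].serviceName" \
    --output tsv
```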
+## 4. (Optional) Update the networking configuration settings
+
+It's recommended that you use the default configuration settings for the virtual network and subnet when you use advanced networking in Azure Lab Services.
+
+For specific networking scenarios, you might need to update the networking configuration. Learn more about the [supported networking architectures and topologies in Azure Lab Services](./concept-lab-services-supported-networking-scenarios.md) and the corresponding network configuration.
-## Known issues
+You can modify the virtual network settings after you create the lab plan with advanced networking. However, when you change the [DNS settings on the virtual network](/azure/virtual-network/manage-virtual-network#change-dns-servers), you need to restart any running lab virtual machines. If the lab VMs are stopped, they'll automatically receive the updated DNS settings when they start.
-- Deleting your virtual network or subnet causes the lab to stop working-- Changing the DNS label on the public IP causes the **Connect** button for lab VMs to stop working.-- Azure Firewall isn't currently supported.
+> [!CAUTION]
+> The following networking configuration changes are not supported after you've configured advanced networking:
+>
+> - Delete the virtual network or subnet associated with the lab plan. This causes the labs to stop working.
+> - Change the subnet address range when there are virtual machines created (template VM or lab VMs).
+> - Change the DNS label on the public IP address. This causes the **Connect** button for lab VMs to stop working.
+> - Change the [frontend IP configuration](/azure/load-balancer/manage#add-frontend-ip-configuration) on the Azure load balancer. This causes the **Connect** button for lab VMs to stop working.
+> - Change the FQDN on the public IP address.
+> - Use a route table with a default route for the subnet (forced-tunneling). This causes users to lose connectivity to their lab.
+> - Use Azure Firewall or Azure Bastion. These services aren't supported with Azure Lab Services advanced networking.
## Next steps -- As an admin, [attach a compute gallery to a lab plan](how-to-attach-detach-shared-image-gallery.md).-- As an admin, [configure automatic shutdown settings for a lab plan](how-to-configure-auto-shutdown-lab-plans.md).-- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
+- [Attach a compute gallery to a lab plan](how-to-attach-detach-shared-image-gallery.md).
+- [Configure automatic shutdown settings for a lab plan](how-to-configure-auto-shutdown-lab-plans.md).
+- [Add lab creators to a lab plan](add-lab-creator.md).
lab-services Lab Plan Setup Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-plan-setup-guide.md
To create a lab plan, you need access to an Azure subscription that's already se
It's important to know how many [virtual machines (VMs) and VM sizes](./administrator-guide.md#vm-sizing) your school lab requires.
-For guidance on structuring your labs and images, see the blog post [Moving from a physical lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931).
+For guidance on structuring your labs and images, see [Moving from a physical lab to Azure Lab Services](./concept-migrating-physical-labs.md).
For additional guidance on how to structure labs, see the "Lab" section of [Azure Lab Services - Administrator guide](./administrator-guide.md#lab).
lab-services Lab Services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md
With all the new enhancements, it's a good time to revisit your overall lab stru
Let's cover each step to get started with the August 2022 Update in more detail.
-1. **Configure shared resources**. Optionally, [configure licensing servers](how-to-create-a-lab-with-shared-resource.md). For VMs that require access to a licensing server, create a lab using a lab plan with [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation). You can reuse the same Azure Compute Gallery and the licensing servers that you use with your lab accounts.
+1. **Configure shared resources**. Optionally, [configure licensing servers](how-to-create-a-lab-with-shared-resource.md). For VMs that require access to a licensing server, create a lab using a lab plan with [advanced networking](how-to-connect-vnet-injection.md). You can reuse the same Azure Compute Gallery and the licensing servers that you use with your lab accounts.
1. **Create Lab plans.**
- 1. [Create](quick-create-resources.md) and [configure lab plans](#configure-a-lab-plan). If you plan to use a license server, don't forget to enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plans.
+ 1. [Create](quick-create-resources.md) and [configure lab plans](#configure-a-lab-plan). If you plan to use a license server, don't forget to enable [advanced networking](how-to-connect-vnet-injection.md) when creating your lab plans.
1. [Assign permissions](quick-create-resources.md#add-a-user-to-the-lab-creator-role) to educators that will create labs. 1. Enable [Azure Marketplace images](specify-marketplace-images.md). 1. Optionally, [attach an Azure Compute Gallery](how-to-attach-detach-shared-image-gallery.md).
lab-services Migrate To 2022 Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/migrate-to-2022-update.md
For example, when you move from lab accounts to lab plans, you should first requ
## 3. Configure shared resources
-You can reuse the same Azure Compute Gallery and licensing servers that you use with your lab accounts. Optionally, you can also [configure more licensing servers](./how-to-create-a-lab-with-shared-resource.md) and galleries based on your needs. For VMs that require access to a licensing server, you'll create lab plans with [advanced networking](./how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) enabled as shown in the next step.
+You can reuse the same Azure Compute Gallery and licensing servers that you use with your lab accounts. Optionally, you can also [configure more licensing servers](./how-to-create-a-lab-with-shared-resource.md) and galleries based on your needs. For VMs that require access to a licensing server, you'll create lab plans with [advanced networking](./how-to-connect-vnet-injection.md) enabled as shown in the next step.
## 4. Create additional lab plans While you're waiting for capacity to be assigned, you can continue creating lab plans that will be used for setting up your labs. 1. [Create and configure lab plans](./tutorial-setup-lab-plan.md).
- - If you plan to use a license server, don't forget to enable [advanced networking](./how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plans.
+ - If you plan to use a license server, don't forget to enable [advanced networking](./how-to-connect-vnet-injection.md) when creating your lab plans.
- The lab plan's resource group name is significant because educators will select the resource group to [create a lab](./tutorial-setup-lab.md#create-a-lab). - Likewise, the lab plan name is important. If more than one lab plan is in the resource group, educators will see a dropdown to choose a lab plan when they create a lab. 1. [Assign permissions](./tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role) to educators that will create labs.
If you're moving from lab accounts, the following table provides guidance on how
|Lab account configuration|Lab plan configuration| |||
-|[Virtual network peering](./how-to-connect-peer-virtual-network.md#configure-at-the-time-of-lab-account-creation)|Lab plans can reuse the same virtual network as lab accounts. </br> - [Setup advanced networking](./how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when you create the lab plan.|
+|[Virtual network peering](./how-to-connect-peer-virtual-network.md#configure-at-the-time-of-lab-account-creation)|Lab plans can reuse the same virtual network as lab accounts. </br> - [Set up advanced networking](./how-to-connect-vnet-injection.md) when you create the lab plan.|
|[Role assignments](./concept-lab-services-role-based-access-control.md) </br> - Lab account owner\contributor. </br> - Lab creator\owner\contributor.|Lab plans include new specialized roles. </br>1. [Review roles](./concept-lab-services-role-based-access-control.md). </br>2. [Assign permissions](./tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role).| |Enabled Marketplace images. </br> - Lab accounts only support Gen1 images from the Marketplace.|Lab plans include settings to enable [Azure Marketplace images](./specify-marketplace-images.md). </br> - Lab plans support Gen1 and Gen2 Marketplace images, so the list of images will be different than what you would see if using lab accounts.| |[Location](./how-to-manage-lab-accounts.md#create-a-lab-account) </br> - Labs are automatically created within the same geolocation as the lab account. </br> - You can't specify the exact region where a lab is created. |Lab plans enable specific control over which regions labs are created. </br> - [Configure regions for labs](./create-and-configure-labs-admin.md).|
lab-services Tutorial Create Lab With Advanced Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md
[!INCLUDE [update focused article](includes/lab-services-new-update-focused-article.md)]
-Azure Lab Services provides a feature called advanced networking. Advanced networking enables you to control the network for labs created using lab plans. Advanced networking is used to enable various scenarios including [connecting to licensing servers](how-to-create-a-lab-with-shared-resource.md), using [hub-spoke model for Azure Networking](/azure/architecture/reference-architectures/hybrid-networking/), lab to lab communication, etc.
+Azure Lab Services provides a feature called advanced networking. Advanced networking enables you to control the network for labs created using lab plans. You can use advanced networking to implement various scenarios, including [connecting to licensing servers](how-to-create-a-lab-with-shared-resource.md), using a [hub-spoke model for Azure Networking](/azure/architecture/reference-architectures/hybrid-networking/), or enabling lab-to-lab communication. Learn more about the [supported networking scenarios in Azure Lab Services](./concept-lab-services-supported-networking-scenarios.md).
Let's focus on the lab to lab communication scenario. For our example, we'll create labs for a web development class. Each student will need access to both a server VM and a client VM. The server and client VMs must be able to communicate with each other. We'll test communication by configuring Internet Control Message Protocol (ICMP) for each VM and allowing the VMs to ping each other.
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Previously updated : 05/25/2023 Last updated : 06/19/2023 # Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-In this article, learn how to connect to data sources located outside of Azure, to make that data available to Azure Machine Learning services. For this data availability, Azure supports connections to these external sources:
+In this article, you'll learn how to connect to data sources located outside of Azure, to make that data available to Azure Machine Learning services. For this data availability, Azure supports connections to these external sources:
- Snowflake DB - Amazon S3 - Azure SQL DB
In this article, learn how to connect to data sources located outside of Azure,
> An Azure Machine Learning connection securely stores the credentials passed during connection creation in the Workspace Azure Key Vault. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they are stored in the key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you **avoid** credential storage in a YAML file, because a security breach could lead to a credential leak. > [!NOTE]
-> For a successful data import, please verify that you have installed the latest azure-ai-ml package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
->
-> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI below:
+> For a successful data import, please verify that you have installed the latest azure-ai-ml package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
+>
+> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI as shown here:
### Code versions
-# [SDK](#tab/SDK)
+# [Azure CLI](#tab/cli)
+
+```cli
+az extension remove -n ml
+az extension add -n ml --yes
+az extension show -n ml #(the version value needs to be 2.15.1 or later)
+```
+
+# [Python SDK](#tab/python)
```python pip uninstall azure-ai-ml
pip install azure-ai-ml
pip show azure-ai-ml #(the version value needs to be 1.5.0 or later) ```
-# [CLI](#tab/CLI)
+# [Studio](#tab/azure-studio)
-```cli
-az extension remove -n ml
-az extension add -n ml --yes
-az extension show -n ml #(the version value needs to be 2.15.1 or later)
-```
+Not available.
## Create a Snowflake DB connection
-# [CLI: Username/password](#tab/cli-username-password)
+# [Azure CLI](#tab/cli)
This YAML file creates a Snowflake DB connection. Be sure to update the appropriate values: ```yaml # my_snowflakedb_connection.yaml $schema: http://azureml/sdk-2-0/Connection.json type: snowflake
-name: my_snowflakedb_connection # add your datastore name here
+name: my-sf-db-connection # add your datastore name here
target: jdbc:snowflake://<myaccount>.snowflakecomputing.com/?db=<mydb>&warehouse=<mywarehouse>&role=<myrole> # add the Snowflake account, database, warehouse name, and role name here. If no role name is provided, it defaults to PUBLIC
az ml connection create --file my_snowflakedb_connection.yaml
### Option 2: Override the username and password at the command line ```azurecli
-az ml connection create --file my_snowflakedb_connection.yaml --set credentials.username="XXXXX" credentials.password="XXXXX"
+az ml connection create --file my_snowflakedb_connection.yaml --set credentials.username="XXXXX" credentials.password="XXXXX"
```
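After the create command finishes, you can optionally confirm that the connection is registered in the workspace. This is a sketch that assumes your CLI defaults (subscription, resource group, and workspace) are already configured and that the `show` subcommand is available in your version of the ml extension:

```azurecli
# Confirm the connection exists; stored credentials are not returned
az ml connection show --name my-sf-db-connection
```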
-# [Python SDK: username/ password](#tab/sdk-username-password)
+# [Python SDK](#tab/python)
### Option 1: Load connection from YAML file
ml_client.connections.create_or_update(workspace_connection=wps_connection)
```
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Under **Assets** in the left navigation, select **Data**. Next, select the **Data Connection** tab. Then select Create as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-connection/create-new-data-connection.png" lightbox="media/how-to-connection/create-new-data-connection.png" alt-text="Screenshot showing the start of a new data connection in Azure Machine Learning studio UI.":::
+
+1. In the **Create connection** pane, fill in the values as shown in the screenshot. Choose Snowflake for the category, and **Username password** for the Authentication type. Be sure to specify the **Target** textbox value in this format, filling in your specific values between the < > characters:
+
+ **jdbc:snowflake://\<myaccount\>.snowflakecomputing.com/?db=\<mydb\>&warehouse=\<mywarehouse\>&role=\<myrole\>**
+
+ :::image type="content" source="media/how-to-connection/create-snowflake-connection.png" lightbox="media/how-to-connection/create-snowflake-connection.png" alt-text="Screenshot showing creation of a new Snowflake connection in Azure Machine Learning studio UI.":::
+
+1. Select Save to securely store the credentials in the key vault associated with the relevant workspace. This connection is used when running a data import job.
+ ## Create an Azure SQL DB connection
-# [CLI: Username/password](#tab/cli-sql-username-password)
+# [Azure CLI](#tab/cli)
This YAML script creates an Azure SQL DB connection. Be sure to update the appropriate values:
This YAML script creates an Azure SQL DB connection. Be sure to update the appro
$schema: http://azureml/sdk-2-0/Connection.json type: azure_sql_db
-name: my_sqldb_connection
+name: my-sqldb-connection
target: Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30 # add the SQL server name, port, and database
az ml connection create --file my_sqldb_connection.yaml
### Option 2: Override the username and password in YAML file ```azurecli
-az ml connection create --file my_sqldb_connection.yaml --set credentials.username="XXXXX" credentials.password="XXXXX"
+az ml connection create --file my_sqldb_connection.yaml --set credentials.username="XXXXX" credentials.password="XXXXX"
```
-# [Python SDK: username/ password](#tab/sdk-sql-username-password)
+# [Python SDK](#tab/python)
### Option 1: Load connection from YAML file
from azure.ai.ml.entities import WorkspaceConnection
from azure.ai.ml.entities import UsernamePasswordConfiguration target= "Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"
-# add the sql servername, port addresss and database
+# add the SQL server name, port, and database
name= <my_sql_connection> # name of the connection wps_connection = WorkspaceConnection(name= name,
ml_client.connections.create_or_update(workspace_connection=wps_connection)
```
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Under **Assets** in the left navigation, select **Data**. Next, select the **Data Connection** tab. Then select Create as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-connection/create-new-data-connection.png" lightbox="media/how-to-connection/create-new-data-connection.png" alt-text="Screenshot showing the start of a new data connection in Azure Machine Learning studio UI.":::
+
+1. In the **Create connection** pane, fill in the values as shown in the screenshot. Choose AzureSqlDb for the category, and **Username password** for the Authentication type. Be sure to specify the **Target** textbox value in this format, filling in your specific values between the < > characters:
+
+ **Server=tcp:\<myservername\>,\<port\>;Database=\<mydatabase\>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30**
+
+ :::image type="content" source="media/how-to-connection/how-to-create-azuredb-connection.png" lightbox="media/how-to-connection/how-to-create-azuredb-connection.png" alt-text="Screenshot showing creation of a new Azure DB connection in Azure Machine Learning studio UI.":::
+ ## Create Amazon S3 connection
-# [CLI: Access key](#tab/cli-s3-access-key)
+# [Azure CLI](#tab/cli)
Create an Amazon S3 connection with the following YAML file. Be sure to update the appropriate values:
Create the Azure Machine Learning connection in the CLI:
az ml connection create --file my_s3_connection.yaml ```
-# [Python SDK: Access key](#tab/sdk-s3-access-key)
+# [Python SDK](#tab/python)
### Option 1: Load connection from YAML file
credentials= AccessKeyConfiguration(access_key_id="XXXXXX",secret_access_key="
ml_client.connections.create_or_update(workspace_connection=wps_connection) ```+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Under **Assets** in the left navigation, select **Data**. Next, select the **Data Connection** tab. Then select Create as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-connection/create-new-data-connection.png" lightbox="media/how-to-connection/create-new-data-connection.png" alt-text="Screenshot showing the start of a new data connection in Azure Machine Learning studio UI.":::
+
+1. In the **Create connection** pane, fill in the values as shown in the screenshot. Choose S3 for the category, and the access key option for the Authentication type. Be sure to specify the **Target** textbox value in this format, filling in your specific values between the < > characters:
+
+ **\<target\>**
+
+ :::image type="content" source="media/how-to-connection/how-to-create-amazon-s3-connection.png" lightbox="media/how-to-connection/how-to-create-amazon-s3-connection.png" alt-text="Screenshot showing creation of a new Amazon S3 connection in Azure Machine Learning studio UI.":::
+ ## Next steps - [Import data assets](how-to-import-data-assets.md)-- [Import data assets on a schedule](reference-yaml-schedule-data-import.md)
+- [Import data assets on a schedule](reference-yaml-schedule-data-import.md)
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Previously updated : 06/02/2023 Last updated : 06/20/2023 # Create and manage data assets
Next, execute the following command in the CLI (update the `<filename>` placehol
az ml data create -f <filename>.yml ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
To create a data asset that is a File type, use the following code and update the `<>` placeholders with your information.
my_data = Data(
# Create the data asset in the workspace ml_client.data.create_or_update(my_data) ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
These steps explain how to create a File typed data asset in the Azure Machine Learning studio:
Next, execute the following command in the CLI (update the `<filename>` placehol
az ml data create -f <filename>.yml ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
To create a data asset that is a Folder type use the following code and update the `<>` placeholders with your information.
my_data = Data(
ml_client.data.create_or_update(my_data) ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
Use these steps to create a Folder typed data asset in the Azure Machine Learning studio:
az ml data create --path ./data --name <DATA ASSET NAME> --version <VERSION> --t
> [!IMPORTANT] > The `path` should be a *folder* that contains a valid `MLTable` file.
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
Use the following code to create a data asset that is a Table (`mltable`) type, and update the `<>` placeholders with your information.
my_data = Data(
ml_client.data.create_or_update(my_data) ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
> [!IMPORTANT] > Currently, the Studio UI has limited functionality for the creation of Table (`MLTable`) typed assets. We recommend that you use the Python SDK to author and create Table (`MLTable`) typed data assets.
+### Creating data assets from job outputs
+
+You can create a data asset from an Azure Machine Learning job by setting the `name` parameter in the output. In this example, you submit a job that copies data from a public blob store to your default Azure Machine Learning Datastore and creates a data asset called `job_output_titanic_asset`.
+
+# [Azure CLI](#tab/cli)
+
+Create a job specification YAML file (`<file-name>.yml`):
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+
+# path: Set the URI path for the data. Supported paths include
+# local: `./<path>
+# Blob: wasbs://<container_name>@<account_name>.blob.core.windows.net/<path>
+# ADLS: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
+# Data Asset: azureml:<my_data>:<version>
+
+# type: What type of data are you pointing to?
+# uri_file (a specific file)
+# uri_folder (a folder)
+# mltable (a table)
+
+# mode: Set INPUT mode:
+# ro_mount (read-only mount)
+# download (download from storage to node)
+# mode: Set the OUTPUT mode
+# rw_mount (read-write mount)
+# upload (upload data from node to storage)
+
+type: command
+command: cp ${{inputs.input_data}} ${{outputs.output_data}}
+compute: azureml:cpu-cluster
+environment: azureml://registries/azureml/environments/sklearn-1.1/versions/4
+inputs:
+ input_data:
+ mode: ro_mount
+ path: azureml:wasbs://data@azuremlexampledata.blob.core.windows.net/titanic.csv
+ type: uri_file
+outputs:
+ output_data:
+ mode: rw_mount
+ path: azureml://datastores/workspaceblobstore/paths/quickstart-output/titanic.csv
+ type: uri_file
+ name: job_output_titanic_asset
+
+```
+
+Next, submit the job using the CLI:
+
+```azurecli
+az ml job create --file <file-name>.yml
+```
+
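After the job completes, you can optionally confirm that the data asset was registered. A quick check, assuming your CLI defaults for resource group and workspace are already configured:

```azurecli
# List the versions registered under the new data asset name
az ml data list --name job_output_titanic_asset --output table
```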
+# [Python SDK](#tab/python)
+
+```python
+from azure.ai.ml import command, Input, Output, MLClient
+from azure.ai.ml.constants import AssetTypes, InputOutputModes
+from azure.identity import DefaultAzureCredential
+
+# Set your subscription, resource group and workspace name:
+subscription_id = "<SUBSCRIPTION_ID>"
+resource_group = "<RESOURCE_GROUP>"
+workspace = "<AML_WORKSPACE_NAME>"
+
+# connect to the AzureML workspace
+ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+)
+
+# ==============================================================
+# Set the input and output URI paths for the data. Supported paths include:
+# local: `./<path>
+# Blob: wasbs://<container_name>@<account_name>.blob.core.windows.net/<path>
+# ADLS: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
+# Data Asset: azureml:<my_data>:<version>
+# As an example, we set the input path to a file on a public blob container
+# As an example, we set the output path to a folder in the default datastore
+# ==============================================================
+input_path = "wasbs://data@azuremlexampledata.blob.core.windows.net/titanic.csv"
+output_path = "azureml://datastores/workspaceblobstore/paths/quickstart-output/titanic.csv"
+
+# ==============================================================
+# What type of data are you pointing to?
+# AssetTypes.URI_FILE (a specific file)
+# AssetTypes.URI_FOLDER (a folder)
+# AssetTypes.MLTABLE (a table)
+# The path we set above is a specific file
+# ==============================================================
+data_type = AssetTypes.URI_FILE
+
+# ==============================================================
+# Set the input mode. The most commonly-used modes:
+# InputOutputModes.RO_MOUNT
+# InputOutputModes.DOWNLOAD
+# Set the mode to Read Only (RO) to mount the data
+# ==============================================================
+input_mode = InputOutputModes.RO_MOUNT
+
+# ==============================================================
+# Set the output mode. The most commonly-used modes:
+# InputOutputModes.RW_MOUNT
+# InputOutputModes.UPLOAD
+# Set the mode to Read Write (RW) to mount the data
+# ==============================================================
+output_mode = InputOutputModes.RW_MOUNT
+
+# ==============================================================
+# Set a data asset name for the output
+# ==============================================================
+data_asset_name = "job_output_titanic_asset"
+
+# Set the input and output for the job:
+inputs = {
+ "input_data": Input(type=data_type, path=input_path, mode=input_mode)
+}
+
+outputs = {
+ "output_data": Output(type=data_type, path=output_path, mode=output_mode, name = data_asset_name)
+}
+
+# This command job copies the data to your default Datastore
+job = command(
+ command="cp ${{inputs.input_data}} ${{outputs.output_data}}",
+ inputs=inputs,
+ outputs=outputs,
+ environment="azureml://registries/azureml/environments/sklearn-1.1/versions/4",
+ compute="cpu-cluster",
+)
+
+# Submit the command
+ml_client.jobs.create_or_update(job)
+```
+
+# [Studio](#tab/azure-studio)
+
+Not available.
+++ ## Manage data assets ### Delete a data asset
Execute the following command (update the `<>` placeholder with the name of your
az ml data archive --name <NAME OF DATA ASSET> ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml import MLClient
ml_client = MLClient(
ml_client.data.archive(name="<DATA ASSET NAME>") ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
1. In the Studio UI, select **Data** from the left-hand menu. 1. On the **Data assets** tab, select the data asset you want to archive.
Execute the following command (update the `<>` placeholders with the name of you
az ml data archive --name <NAME OF DATA ASSET> --version <VERSION TO ARCHIVE> ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml import MLClient
ml_client = MLClient(
ml_client.data.archive(name="<DATA ASSET NAME>", version="<VERSION TO ARCHIVE>") ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
> [!IMPORTANT] > Currently, archiving a specific data asset version is not supported in the Studio UI.
Execute the following command (update the `<>` placeholder with the name of your
az ml data restore --name <NAME OF DATA ASSET> ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml import MLClient
ml_client = MLClient(
ml_client.data.restore(name="<DATA ASSET NAME>") ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
1. In the Studio UI, select **Data** from the left-hand menu. 1. On the **Data assets** tab, enable **Include Archived**.
Execute the following command (update the `<>` placeholders with the name of you
az ml data restore --name <NAME OF DATA ASSET> --version <VERSION TO ARCHIVE> ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml import MLClient
ml_client = MLClient(
ml_client.data.restore(name="<DATA ASSET NAME>", version="<VERSION TO ARCHIVE>") ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
> [!IMPORTANT] > Currently, restoring a specific data asset version is not supported in the Studio UI.
Next, execute the following command in the CLI (update the `<filename>` placehol
az ml data create -f <filename>.yml ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
To create a File type data asset, use the following code and update the `<>` placeholders with your information.
my_data = Data(
# Create the data asset in the workspace ml_client.data.create_or_update(my_data) ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
> [!IMPORTANT] > Currently, the Studio UI does not support adding tags as part of the data asset creation flow. You may add tags in the Studio UI after the data asset creation.
Execute the following command in the Azure CLI, and update the `<>` placeholders
az ml data update --name <DATA ASSET NAME> --version <VERSION> --set tags.<KEY>=<VALUE> ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml import MLClient
data.tags = tags
ml_client.data.create_or_update(data) ```
-# [Studio](#tab/Studio)
+# [Studio](#tab/azure-studio)
1. Select **Data** on the left-hand menu in the Studio UI. 1. Select the **Data Assets** tab.
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
Previously updated : 06/02/2023 Last updated : 06/19/2023 # Import data assets (preview) [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-In this article, learn how to import data into the Azure Machine Learning platform from external sources. A successful import automatically creates and registers an Azure Machine Learning data asset with the name provided during the import. An Azure Machine Learning data asset resembles a web browser bookmark (favorites). You don't need to remember long storage paths (URIs) that point to your most-frequently used data. Instead, you can create a data asset, and then access that asset with a friendly name.
+In this article, you'll learn how to import data into the Azure Machine Learning platform from external sources. A successful import automatically creates and registers an Azure Machine Learning data asset with the name provided during the import. An Azure Machine Learning data asset resembles a web browser bookmark (favorites). You don't need to remember long storage paths (URIs) that point to your most-frequently used data. Instead, you can create a data asset, and then access that asset with a friendly name.
-A data import creates a cache of the source data, along with metadata, for faster and reliable data access in Azure Machine Learning training jobs. The data cache avoids network and connection constraints. The cached data is versioned to support reproducibility. This provides versioning capabilities for data imported from SQL Server sources. Additionally, the cached data provides data lineage for auditability. A data import uses ADF (Azure Data Factory pipelines) behind the scenes, which means that users can avoid complex interactions with ADF. Behind the scenes, Azure Machine Learning also handles management of ADF compute resource pool size, compute resource provisioning, and tear-down, to optimize data transfer by determining proper parallelization.
+A data import creates a cache of the source data, along with metadata, for faster and reliable data access in Azure Machine Learning training jobs. The data cache avoids network and connection constraints. The cached data is versioned to support reproducibility. This provides versioning capabilities for data imported from SQL Server sources. Additionally, the cached data provides data lineage for auditing tasks. A data import uses ADF (Azure Data Factory pipelines) behind the scenes, which means that users can avoid complex interactions with ADF. Behind the scenes, Azure Machine Learning also handles management of ADF compute resource pool size, compute resource provisioning, and tear-down, to optimize data transfer by determining proper parallelization.
The transferred data is partitioned and securely stored in Azure storage, as parquet files. This enables faster processing during training. ADF compute costs only involve the time used for data transfers. Storage costs only involve the time needed to cache the data, because cached data is a copy of the data imported from an external source. Azure storage hosts that external source. The caching feature involves upfront compute and storage costs. However, it pays for itself, and can save money, because it reduces recurring training compute costs, compared to direct connections to external source data during training. It caches data as parquet files, which makes job training faster and more reliable against connection timeouts for larger data sets. This leads to fewer reruns, and fewer training failures.
-You can now import data from Snowflake, Amazon S3 and Azure SQL.
+You can import data from Amazon S3, Azure SQL, and Snowflake.
[!INCLUDE [machine-learning-preview-generic-disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
To create and work with data assets, you need:
* [Workspace connections created](how-to-connection.md) > [!NOTE]
-> For a successful data import, please verify that you installed the latest azure-ai-ml package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
->
-> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI below:
+> For a successful data import, please verify that you installed the latest azure-ai-ml package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
+>
+> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI as shown here:
### Code versions
-# [SDK](#tab/SDK)
+# [Azure CLI](#tab/cli)
+
+```cli
+az extension remove -n ml
+az extension add -n ml --yes
+az extension show -n ml #(the version value needs to be 2.15.1 or later)
+```
+
+# [Python SDK](#tab/python)
```python pip uninstall azure-ai-ml
-pip install azure-ai-ml
pip show azure-ai-ml #(the version value needs to be 1.5.0 or later) ```
-# [CLI](#tab/CLI)
+# [Studio](#tab/azure-studio)
-```cli
-az extension remove -n ml
-az extension add -n ml --yes
-az extension show -n ml #(the version value needs to be 2.15.1 or later)
-```
+Not available.
-## Importing from an external database source as a table data asset
+## Import from an external database as a table data asset
> [!NOTE] > External databases can be in Snowflake, Azure SQL, and other formats.
Next, run the following command in the CLI:
> az ml data import -f <file-name>.yml ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml.entities import DataImport
from azure.ai.ml import MLClient
ml_client = MLClient.from_config() data_import = DataImport(
- name="<name>",
- source=Database(connection="<connection>", query="<query>"),
+ name="<name>",
+ source=Database(connection="<connection>", query="<query>"),
path="<path>" ) ml_client.data.import_data(data_import=data_import) ```
+# [Studio](#tab/azure-studio)
+
+> [!NOTE]
+> The example here describes the process for a Snowflake database. However, this process covers other external database formats, such as Azure SQL.
+
+1. Navigate to the [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Under **Assets** in the left navigation, select **Data**. Next, select the **Data Import** tab. Then select Create as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-new-data-import.png" lightbox="media/how-to-import-data-assets/create-new-data-import.png" alt-text="Screenshot showing creation of a new data import in Azure Machine Learning studio UI.":::
+
+1. At the Data Source screen, select Snowflake, and then select Next, as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/select-source-for-snowflake-data-asset.png" lightbox="media/how-to-import-data-assets/select-source-for-snowflake-data-asset.png" alt-text="Screenshot showing selection of a Snowflake data asset.":::
+
+1. At the Data Type screen, fill in the values. The **Type** value defaults to **Table (mltable)**. Then select Next, as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/select-snowflake-data-asset-type.png" lightbox="media/how-to-import-data-assets/select-snowflake-data-asset-type.png" alt-text="Screenshot that shows selection of a Snowflake data asset type.":::
+
+1. At the Create data import screen, fill in the values, and select Next, as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-snowflake-data-import.png" lightbox="media/how-to-import-data-assets/create-snowflake-data-import.png" alt-text="Screenshot that shows details of the data source selection.":::
+
+1. Fill in the values at the Choose a datastore to output screen, and select Next, as shown in this screenshot. **Workspace managed datastore** is selected by default; the path is automatically assigned by the system when you choose the managed datastore. If you select **Workspace managed datastore**, the **Auto delete setting** dropdown appears. It offers a data deletion time window of 30 days by default, and [how to manage imported data assets](./how-to-manage-imported-data-assets.md) explains how to change this value.
+
+ :::image type="content" source="media/how-to-import-data-assets/choose-snowflake-datastore-to-output.png" lightbox="media/how-to-import-data-assets/choose-snowflake-datastore-to-output.png" alt-text="Screenshot that shows details of the data source to output.":::
+
+ > [!NOTE]
+ > To choose your own datastore, select **Other datastores**. In this case, you must select the path for the location of the data cache.
+
+1. You can add a schedule. Select **Add schedule** as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-data-import-add-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-add-schedule.png" alt-text="Screenshot that shows the selection of the Add schedule button.":::
+
+ A new panel opens, where you can define a **Recurrence** schedule, or a **Cron** schedule. This screenshot shows the panel for a **Recurrence** schedule:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" alt-text="A screenshot that shows selection of the Add schedule button.":::
+
+
+ - **Name**: the unique identifier of the schedule within the workspace.
+ - **Description**: the schedule description.
+ - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default.
+ - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months.
+ - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule.
+ - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will always be active until you manually disable it.
+ - **Tags**: the selected schedule tags.
+
+ This screenshot shows the panel for a **Cron** schedule:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" alt-text="Screenshot that shows selection of the Add schedule button.":::
+
-## Import data from an external file system source as a folder data asset
+ - **Name**: the unique identifier of the schedule within the workspace.
+ - **Description**: the schedule description.
+ - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default.
+ - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. **Cron expression** allows you to specify more flexible and customized recurrence pattern.
+ - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule.
+ - **End**: the schedule will become inactive after this date. By default, it's NONE, meaning that the schedule will remain active until you manually disable it.
+ - **Tags**: the selected schedule tags.
+
+- **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:
+
+ `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`
+
+ - A single wildcard (`*`), which covers all values for the field. A `*`, in days, means all days of a month (which varies with month and year).
+  - For example, `expression: "15 16 * * 1"` means 16:15 (4:15 PM) every Monday.
+ - The next table lists the valid values for each field:
+
+ | Field | Range | Comment |
+ |-|-|--|
+ | `MINUTES` | 0-59 | - |
+ | `HOURS` | 0-23 | - |
+ | `DAYS` | - | Not supported. The value is ignored and treated as `*`. |
+ | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. |
+ | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. |
+
+ - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression).
+
+ > [!IMPORTANT]
+ > `DAYS` and `MONTH` are not supported. If you pass one of these values, it will be ignored and treated as `*`.
+
+- (Optional) `start_time` specifies the start date and time with the timezone of the schedule. For example, `start_time: "2022-05-10T10:15:00-04:00"` means the schedule starts from 10:15:00AM on 2022-05-10 in the UTC-4 timezone. If `start_time` is omitted, the `start_time` equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time.
+
+The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values.
+
+ :::image type="content" source="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-snowflake-data-import-review-values-and-create.png" alt-text="Screenshot that shows all parameters of the data import.":::
+++
+## Import data from an external file system as a folder data asset
> [!NOTE] > An Amazon S3 data resource can serve as an external file system resource.
Next, execute this command in the CLI:
> az ml data import -f <file-name>.yml ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml.entities import DataImport
from azure.ai.ml import MLClient
ml_client = MLClient.from_config() data_import = DataImport(
- name="<name>",
- source=FileSystem(connection="<connection>", path="<path_on_source>"),
+ name="<name>",
+ source=FileSystem(connection="<connection>", path="<path_on_source>"),
path="<path>" ) ml_client.data.import_data(data_import=data_import) ```
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Under **Assets** in the left navigation, select **Data**. Next, select the Data Import tab. Then select Create as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-new-data-import.png" lightbox="media/how-to-import-data-assets/create-new-data-import.png" alt-text="Screenshot showing creation of a data import in Azure Machine Learning studio UI.":::
+
+1. At the Data Source screen, select S3, and then select Next, as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/select-source-for-s3-data-asset.png" lightbox="media/how-to-import-data-assets/select-source-for-s3-data-asset.png" alt-text="Screenshot showing selection of an S3 data asset.":::
+
+1. At the Data Type screen, fill in the values. The **Type** value defaults to **Folder (uri_folder)**. Then select Next, as shown in this screenshot:
+
+    :::image type="content" source="media/how-to-import-data-assets/select-s3-data-asset-type.png" lightbox="media/how-to-import-data-assets/select-s3-data-asset-type.png" alt-text="Screenshot showing selection of an S3 data asset type.":::
+
+1. At the Create data import screen, fill in the values, and select Next, as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-s3-data-import.png" lightbox="media/how-to-import-data-assets/create-s3-data-import.png" alt-text="Screenshot showing details of the data source selection.":::
+
+1. Fill in the values at the Choose a datastore to output screen, and select Next, as shown in this screenshot. **Workspace managed datastore** is selected by default; the path is automatically assigned by the system when you choose managed datastore. If you select **Workspace managed datastore**, the **Auto delete setting** dropdown appears. It offers a data deletion time window of 30 days by default, and [how to manage imported data assets](./how-to-manage-imported-data-assets.md) explains how to change this value.
+
+ :::image type="content" source="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" lightbox="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" alt-text="Screenshot showing details of the data source to output.":::
+
+1. You can add a schedule. Select **Add schedule** as shown in this screenshot:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-data-import-add-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-add-schedule.png" alt-text="Screenshot showing selection of the Add schedule button.":::
+
+ A new panel opens, where you can define a **Recurrence** schedule, or a **Cron** schedule. This screenshot shows the panel for a **Recurrence** schedule:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-recurrence-schedule.png" alt-text="A screenshot showing selection of the Add schedule button.":::
+
+
+ - **Name**: the unique identifier of the schedule within the workspace.
+ - **Description**: the schedule description.
+ - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default.
+ - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months.
+ - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule.
+ - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule will remain active until you manually disable it.
+ - **Tags**: the selected schedule tags.
+
+ This screenshot shows the panel for a **Cron** schedule:
+
+ :::image type="content" source="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" lightbox="media/how-to-import-data-assets/create-data-import-cron-expression-schedule.png" alt-text="Screenshot showing the selection of the Add schedule button.":::
+++
+ - **Name**: the unique identifier of the schedule within the workspace.
+ - **Description**: the schedule description.
+ - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default.
+ - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. **Cron expression** allows you to specify more flexible and customized recurrence pattern.
+ - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule.
+ - **End**: the schedule will become inactive after this date. By default, it's NONE, meaning that the schedule will remain active until you manually disable it.
+ - **Tags**: the selected schedule tags.
+
+- **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:
+
+ `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`
+
+ - A single wildcard (`*`), which covers all values for the field. A `*`, in days, means all days of a month (which varies with month and year).
+  - For example, `expression: "15 16 * * 1"` means 16:15 (4:15 PM) every Monday.
+ - The next table lists the valid values for each field:
+
+ | Field | Range | Comment |
+ |-|-|--|
+ | `MINUTES` | 0-59 | - |
+ | `HOURS` | 0-23 | - |
+ | `DAYS` | - | Not supported. The value is ignored and treated as `*`. |
+ | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. |
+ | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. |
+
+ - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression).
+
+ > [!IMPORTANT]
+ > `DAYS` and `MONTH` are not supported. If you pass one of these values, it will be ignored and treated as `*`.
+
+- (Optional) `start_time` specifies the start date and time with the timezone of the schedule. For example, `start_time: "2022-05-10T10:15:00-04:00"` means the schedule starts from 10:15:00AM on 2022-05-10 in the UTC-4 timezone. If `start_time` is omitted, the `start_time` equals the schedule creation time. For a start time in the past, the first job runs at the next calculated run time.
+
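+The following minimal sketch shows how a few crontab expressions map to these five fields, using the `CronTrigger` class from the Azure Machine Learning Python SDK v2. The expression values here are illustrative only:
+
+```python
+from azure.ai.ml.entities import CronTrigger
+
+# "15 16 * * 1"  -> minute 15, hour 16, any day, any month, Monday: 16:15 every Monday
+# "0 * * * *"    -> minute 0 of every hour
+# "30 9 * * 0,6" -> 09:30 on Sunday (0) and Saturday (6)
+weekly_trigger = CronTrigger(expression="15 16 * * 1")
+hourly_trigger = CronTrigger(expression="0 * * * *")
+weekend_trigger = CronTrigger(expression="30 9 * * 0,6")
+```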
+1. As shown in the next screenshot, choose the datastore destination - the output location for the imported data. At this screen, and the other screens in this process, select Back to move to earlier screens if you'd like to change your choices of values.
+
+ :::image type="content" source="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" lightbox="media/how-to-import-data-assets/choose-s3-datastore-to-output.png" alt-text="Screenshot showing details of the data source to output.":::
+
+1. The next screenshot shows the last screen of this process. Review your choices, and select Create. At this screen, and the other screens in this process, select Back to move to earlier screens to change your choices of values.
+
+ :::image type="content" source="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" lightbox="media/how-to-import-data-assets/create-s3-data-import-review-values-and-create.png" alt-text="Screenshot showing all parameters of the data import.":::
+ ## Check the import status of external data sources
-The data import action is an asynchronous action. It can take a long time. After submission of an import data action via the CLI or SDK, the Azure Machine Learning service might need several minutes to connect to the external data source. Then the service would start the data import and handle data caching and registration. The time needed for a data import also depends on the size of the source data set.
+The data import action is asynchronous, and it can take a long time. After you submit an import data action through the CLI or SDK, the Azure Machine Learning service might need several minutes to connect to the external data source. Then, the service starts the data import, and handles data caching and registration. The time needed for a data import also depends on the size of the source data set.
The next example returns the status of the submitted data import activity. The command or method uses the "data asset" name as the input to determine the status of the data materialization. # [Azure CLI](#tab/cli) - ```cli > az ml data list-materialization-status --name <name> ```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml.entities import DataImport
ml_client.data.show_materialization_status(name="<name>")
```
+# [Studio](#tab/azure-studio)
+
+Not available.
+ ## Next steps
machine-learning How To Manage Imported Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-imported-data-assets.md
Title: Manage imported data assets (preview)
-description: Learn how to manage imported data assets also known as edit auto-deletion.
+description: Learn how to manage imported data assets, including how to edit their auto delete settings.
Previously updated : 06/02/2023 Last updated : 06/19/2023 # Manage imported data assets (preview) [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-In this article, learn how to manage imported data assets from a life-cycle perspective. We learn how to modify or update auto-delete settings on the data assets imported on to a managed datastore (`workspacemanagedstore`) that Microsoft manages for the customer.
+In this article, you'll learn how to manage imported data assets from a life-cycle perspective: how to modify or update the auto delete settings on data assets imported into a managed datastore (`workspacemanagedstore`) that Microsoft manages for the customer.
> [!NOTE]
-> Auto-delete settings capability or lifecycle management is currently offered only on the imported data assets in managed datastore aka `workspacemanagedstore`.
+> The auto delete settings capability, or lifecycle management, is currently offered only for imported data assets in the managed datastore, also known as `workspacemanagedstore`.
[!INCLUDE [machine-learning-preview-generic-disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] ## Modifying auto delete settings
-You can change the auto-delete setting value or condition
+You can change the auto delete setting value or condition as shown in these code samples:
+ # [Azure CLI](#tab/cli) ```cli
You can change the auto-delete setting value or condition
```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml.entities import Data from azure.ai.ml.constants import AssetTypes
ml_client.data.create_or_update(my_data)
```
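+As a fuller illustration, this sketch sets a new auto delete condition and value on an existing data asset version. It assumes the `AutoDeleteSetting` entity (with `condition` and `value` parameters) is available from `azure.ai.ml.entities`, and it uses placeholder asset name and version values; verify the class name and import path against your installed SDK version.
+
+```python
+from azure.ai.ml.entities import Data, AutoDeleteSetting  # AutoDeleteSetting import path: assumed, verify for your SDK version
+from azure.ai.ml.constants import AssetTypes
+
+# Placeholder name and version of an imported data asset in workspacemanagedstore
+my_data = Data(
+    name="<name_of_data_asset>",
+    version="<version>",
+    type=AssetTypes.MLTABLE,
+    auto_delete_setting=AutoDeleteSetting(
+        condition="created_greater_than",  # the condition shown elsewhere in this article
+        value="30d",                       # supported values range from 1 day to 3 years
+    ),
+)
+ml_client.data.create_or_update(my_data)  # ml_client is an authenticated MLClient
+```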
+# [Studio](#tab/azure-studio)
+
+These steps describe how to modify the auto delete settings of an imported data asset in `workspacemanageddatastore` in the Azure Machine Learning studio:
+
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+
+1. As shown in the next screenshot, under **Assets** in the left navigation, select **Data**. At the **Data assets** tab, select an imported data asset located in the **workspacemanageddatastore**
+
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-list.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-list.png" alt-text="Screenshot highlighting the imported data asset name in workspace managed datastore in the Data assets tab.":::
+
+1. As shown in the next screenshot, the details page of the data asset has an **Auto delete setting** property. This property is currently active on the data asset. Verify that you have the correct **Version:** of the data asset selected in the drop-down, and select the pencil icon to edit the property.
+
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-details.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-details.png" alt-text="Screenshot showing the edit of the auto delete setting.":::
+
+1. To change the auto delete **Condition** setting, select **Created greater than**, and change **Value** to any numerical value. Then, select **Save** as shown in the next screenshot:
+
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/edit-managed-data-asset-details.png" lightbox="./media/how-to-manage-imported-data-assets/edit-managed-data-asset-details.png" alt-text="Screenshot that shows the managed data asset auto delete settings choices.":::
+
+ > [!NOTE]
+ > At this time, the supported values range from 1 day to 3 years.
+
+1. After a successful edit, you'll return to the data asset detail page. This page shows the updated values in **Auto delete settings** property box, as shown in the next screenshot:
+
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/new-managed-data-asset-details.png" lightbox="./media/how-to-manage-imported-data-assets/new-managed-data-asset-details.png" alt-text="Screenshot showing the managed data asset auto delete settings.":::
+
+ > [!NOTE]
+    > The auto delete setting is available only on imported data assets in a workspace managed datastore, as shown in the above screenshot.
+ ## Deleting/removing auto delete settings
-You can remove a previously configured auto-delete setting.
+If you don't want a specific data asset version to become part of life-cycle management, you can remove a previously configured auto delete setting.
# [Azure CLI](#tab/cli)
You can remove a previously configured auto-delete setting.
```
-# [Python SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml.entities import Data from azure.ai.ml.constants import AssetTypes
my_data=Data(name=name,version=version,type=type, auto_delete_setting=None)
ml_client.data.create_or_update(my_data) ```
+# [Studio](#tab/azure-studio)
+
+These steps describe how to delete or clear the auto delete settings of an imported data asset in `workspacemanageddatastore` in the Azure Machine Learning studio:
+
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+
+1. As shown in this screenshot, under **Assets** in the left navigation, select **Data**. On the **Data assets** tab, select an imported data asset located in the **workspacemanageddatastore**:
+
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-list.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-list.png" alt-text="Screenshot highlighting the imported data asset name in workspace managed datastore in the Data assets tab.":::
+
+1. As shown in the next screenshot, the details page of the data asset has an **Auto delete setting** property. This property is currently active on the data asset. Verify that you have the correct **Version:** of the data asset selected in the drop-down, and select the pencil icon to edit the property.
+
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-details.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-details.png" alt-text="Screenshot showing the edit of the auto delete setting.":::
+
+1. To delete or clear the auto delete setting, select the **Clear auto delete setting** trash can icon at the bottom of the page, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/clear-managed-data-asset-details.png" lightbox="./media/how-to-manage-imported-data-assets/clear-managed-data-asset-details.png" alt-text="Screenshot showing the managed data asset auto delete settings choices.":::
+
+1. After a successful deletion, you'll return to the data asset detail page. This page shows the **Auto delete settings** property box, which displays **None**, as shown in this screenshot:
+ :::image type="content" source="./media/how-to-manage-imported-data-assets/cleared-managed-data-asset-details.png" lightbox="./media/how-to-manage-imported-data-assets/cleared-managed-data-asset-details.png" alt-text="This screenshot shows the managed data asset auto delete settings.":::
## Query on the configured auto delete settings
-You can view and list the data assets with certain conditions or with values configured in the "auto-delete" settings, as shown in this Azure CLI code sample:
+This Azure CLI code sample lists the data assets that have specific conditions or values configured in their **auto delete** settings:
```cli > az ml data list --query '[?auto_delete_setting.\"condition\"==''created_greater_than'']'
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Previously updated : 06/09/2023 Last updated : 06/20/2023 #Customer intent: As an experienced Python developer, I need to read my data, to make it available to a remote compute resource, to train my machine learning models.
In this article you learn:
## Quickstart
-Before delving into the detailed options available to you when accessing data, we show you the relevant code snippets to access data so you can get started quickly.
+Before you explore the detailed options available to you when accessing data, we show you the relevant code snippets to access data so you can get started quickly.
### Read data from Azure storage in an Azure Machine Learning job
-In this example, you submit an Azure Machine Learning job that accesses data from a *public* blob storage account. However, you can adapt the snippet to access your own data in a private Azure Storage account by updating the path (for details on how to specify paths, read [Paths](#paths)). Azure Machine Learning seamlessly handles authentication to cloud storage using Azure Active Directory passthrough. When you submit a job, you can choose:
+In this example, you submit an Azure Machine Learning job that accesses data from a *public* blob storage account. However, you can adapt the snippet to access your own data in a private Azure Storage account, by updating the path (for details on how to specify paths, read [Paths](#paths)). Azure Machine Learning seamlessly handles authentication to cloud storage using Azure Active Directory passthrough. When you submit a job, you can choose:
- **User identity:** Passthrough your Azure Active Directory identity to access the data. - **Managed identity:** Use the managed identity of the compute target to access data. - **None:** Don't specify an identity to access the data. Use None when using credential-based (key/SAS token) datastores or when accessing public data. > [!TIP]
-> If you are using keys or SAS tokens to authenticate then we recommend you [create an Azure Machine Learning datastore](how-to-datastore.md) as the runtime will automatically connect to storage without exposing the key/token.
+> If you use keys or SAS tokens to authenticate, we recommend that you [create an Azure Machine Learning datastore](how-to-datastore.md), because the runtime will automatically connect to storage without exposing the key or token.
# [Python SDK](#tab/python)
az ml job create -f <file-name>.yml
### Write data from your Azure Machine Learning job to Azure Storage
-In this example, you submit an Azure Machine Learning job that writes data to your default Azure Machine Learning Datastore.
+In this example, you submit an Azure Machine Learning job that writes data to your default Azure Machine Learning Datastore. You can optionally set the `name` value of your data asset to create a data asset in the output.
# [Python SDK](#tab/python)
inputs = {
} outputs = {
- "output_data": Output(type=data_type, path=output_path, mode=output_mode)
+ "output_data": Output(type=data_type,
+ path=output_path,
+ mode=output_mode,
+ # optional: if you want to create a data asset from the output,
+ # then uncomment name (name can be set without setting version)
+ # name = "<name_of_data_asset>",
+ # version = "<version>",
+ )
} # This command job copies the data to your default Datastore
outputs:
mode: rw_mount path: azureml://datastores/workspaceblobstore/paths/quickstart-output/titanic.csv type: uri_file
+ # optional: if you want to create a data asset from the output,
+ # then uncomment name (name can be set without setting version)
+ # name: <name_of_data_asset>
+ # version: <version>
+
``` Next, submit the job using the CLI:
az ml job create --file <file-name>.yml
## The Azure Machine Learning data runtime
-When you submit a job, the Azure Machine Learning data runtime controls the data load from the storage location to the compute target. The Azure Machine Learning data runtime has been built to be fast and efficient for machine learning tasks. The key benefits include:
+When you submit a job, the Azure Machine Learning data runtime controls the data load from the storage location to the compute target. The Azure Machine Learning data runtime has been optimized for speed and efficiency in machine learning tasks. The key benefits include:
-- Data loading is written in the [Rust language](https://www.rust-lang.org/), which is known for high speed and high memory efficiency. Avoids issues with Python GIL when doing concurrent data downloading.-- Light weight; there are *no* dependencies on other technologies - such as JVM - making the runtime fast to install, and doesn't drain extra resources (CPU, Memory) on the compute target.
+- Data loads are written in the [Rust language](https://www.rust-lang.org/), a language known for high speed and high memory efficiency. For concurrent data downloads, Rust avoids Python [Global Interpreter Lock (GIL)](https://wikipedia.org/wiki/Global_interpreter_lock) issues.
+- Lightweight; the runtime has *no* dependencies on other technologies - for example, the JVM. As a result, the runtime installs quickly, and it doesn't drain extra resources (CPU, memory) on the compute target.
- Multi-process (parallel) data loading. - Prefetches data as a background task on the CPU(s), to enable better utilization of the GPU(s) when doing deep-learning. - Seamlessly handles authentication to cloud storage. - Provides options to mount data (stream) or download all the data. For more information, read [Mount (streaming)](#mount-streaming) and [Download](#download) sections.-- Seamless integration with [fsspec](https://filesystem-spec.readthedocs.io/en/latest/) - a unified pythonic interface to local, remote and embedded file systems and bytes storage.
+- Seamless integration with [fsspec](https://filesystem-spec.readthedocs.io/en/latest/) - a unified pythonic interface to local, remote and embedded file systems and byte storage.
> [!TIP]
-> We recommend that you leverage the Azure Machine Learning data runtime, rather than creating your own mounting/downloading capability in your training (client) code. In particular, we have seen storage throughput constrained when the client code uses Python to download data from storage due to [Global Interpreter Lock (GIL)](https://wikipedia.org/wiki/Global_interpreter_lock) issues.
+> We suggest that you leverage the Azure Machine Learning data runtime, instead of creating your own mounting/downloading capability in your training (client) code. In particular, we have seen storage throughput constrained when the client code uses Python to download data from storage due to [Global Interpreter Lock (GIL)](https://wikipedia.org/wiki/Global_interpreter_lock) issues.
## Paths
-When you provide a data input/output to a job, you must specify a `path` parameter that points to the data location. This table shows both the different data locations that Azure Machine Learning supports, and examples for the `path` parameter:
+When you provide a data input/output to a job, you must specify a `path` parameter that points to the data location. This table shows the different data locations that Azure Machine Learning supports, and also shows `path` parameter examples:
|Location | Examples | |||
When you provide a data input/output to a job, you must specify a `path` paramet
When you run a job with data inputs/outputs, you can select from various *modes*: -- **`ro_mount`:** Mount storage location as read-only on the compute target's local disk (SSD).-- **`rw_mount`:** Mount storage location as read-write on the compute target's local disk (SSD).-- **`download`:** Download the data from the storage location to the compute target's local disk (SSD).
+- **`ro_mount`:** Mount the storage location as read-only on the compute target's local disk (SSD); see the sketch after this list.
+- **`rw_mount`:** Mount the storage location as read-write on the compute target's local disk (SSD).
+- **`download`:** Download the data from the storage location to the compute target's local disk (SSD).
- **`upload`:** Upload data from the compute target to the storage location.-- **`eval_mount`/`eval_download`:** *These modes are unique to MLTable.* In some scenarios, an MLTable can yield files that aren't necessarily located in the same storage account as the MLTable file. Or, an MLTable can subset or shuffle the data located in the storage resource. That view of the subset/shuffle becomes visible only if the Azure Machine Learning data runtime actually evaluates the MLTable file. For example, this diagram shows how an MLTable used with `eval_mount` or `eval_download` can take images from two different storage containers and an annotations file located in a different storage account and then mount/download to the remote compute target's filesystem.
+- **`eval_mount`/`eval_download`:** *These modes are unique to MLTable.* In some scenarios, an MLTable can yield files that might be located in a different storage account than the storage account that hosts the MLTable file. Or, an MLTable can subset or shuffle the data located in the storage resource. That view of the subset/shuffle becomes visible only if the Azure Machine Learning data runtime actually evaluates the MLTable file. For example, this diagram shows how an MLTable used with `eval_mount` or `eval_download` can take images from two different storage containers, and an annotations file located in a different storage account, and then mount/download to the filesystem of the remote compute target.
:::image type="content" source="media/how-to-read-write-data-v2/eval-mount.png" alt-text="Screenshot showing evaluation of mount." lightbox="media/how-to-read-write-data-v2/eval-mount.png":::
When you run a job with data inputs/outputs, you can select from various *modes*
└── container1 └── annotations.csv ```-- **`direct`:** You may want to read data directly from a URI through other APIs rather than go through the Azure Machine Learning data runtime. For example, you may want to access data on an s3 bucket (with a virtual-hosted-style or path-style `https` URL) using the boto s3 client. You can get URI of the input as a *string* by choosing `direct` mode. You see the direct mode used in Spark Jobs because the `spark.read_*()` methods know how to process the URIs. For **non-Spark** jobs, it is *your* responsibility to manage access credentials. For example, you need to explicitly make use compute MSI or broker access otherwise.
+- **`direct`:** You might want to read data directly from a URI through other APIs, rather than go through the Azure Machine Learning data runtime. For example, you may want to access data on an s3 bucket (with a virtual-hosted-style or path-style `https` URL) using the boto s3 client. You can obtain the URI of the input as a *string* with the `direct` mode. You see use of the direct mode in Spark Jobs, because the `spark.read_*()` methods know how to process the URIs. For **non-Spark** jobs, it is *your* responsibility to manage access credentials. For example, you must explicitly make use of compute MSI, or otherwise broker access.
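+A minimal sketch of how a mode is attached to a job input and output with the Python SDK v2 (the paths, environment, and compute names here are placeholders):
+
+```python
+from azure.ai.ml import Input, Output, command
+from azure.ai.ml.constants import AssetTypes, InputOutputModes
+
+# Placeholder paths - substitute your own storage locations.
+inputs = {
+    "input_data": Input(
+        type=AssetTypes.URI_FOLDER,
+        path="azureml://datastores/workspaceblobstore/paths/my-data/",
+        mode=InputOutputModes.RO_MOUNT,  # stream files on demand, read-only
+    )
+}
+outputs = {
+    "output_data": Output(
+        type=AssetTypes.URI_FOLDER,
+        path="azureml://datastores/workspaceblobstore/paths/my-results/",
+        mode=InputOutputModes.RW_MOUNT,  # writes go back to storage
+    )
+}
+
+job = command(
+    command="python process.py --in ${{inputs.input_data}} --out ${{outputs.output_data}}",
+    inputs=inputs,
+    outputs=outputs,
+    environment="<environment-name>@latest",  # placeholder environment
+    compute="<compute-cluster-name>",         # placeholder compute target
+)
+# Submit with an authenticated MLClient: ml_client.jobs.create_or_update(job)
+```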
This table shows the possible modes for different type/mode/input/output combinations:
Type | Input/Output | `upload` | `download` | `ro_mount` | `rw_mount` | `direct`
`mltable` | Output | Γ£ô | | | Γ£ô | Γ£ô | | ### Download
-In download mode, all the input data is copied to the local disk (SSD) of the compute target. The Azure Machine Learning data runtime starts the user training script once all the data is copied. When user script starts, it reads data from the local disk just like any other files. When the job finishes, the data is removed from the disk of the compute target.
+In download mode, all the input data is copied to the local disk (SSD) of the compute target. The Azure Machine Learning data runtime starts the user training script, once all the data is copied. When the user script starts, it reads data from the local disk, just like any other files. When the job finishes, the data is removed from the disk of the compute target.
| Advantages | Disadvantages | || --|
-| When training starts, all the data is available on the local disk (SSD) of the compute target for the training script. No Azure storage / network interaction is required. | Dataset must completely fit on a compute target disk.|
-|After user script starts, there are no dependencies on storage / network reliability. |Entire dataset is downloaded (if training needs to randomly select only a small portion of a data, then much of the download is wasted).|
-|Azure Machine Learning data runtime can parallelize download (significant difference on many small files) and max network / storage throughput.|The job waits until all data is downloaded to the compute target's local disk. If you submit a deep-learning job, the GPUs idle until data is ready.|
+| When training starts, all the data is available on the local disk (SSD) of the compute target, for the training script. No Azure storage / network interaction is required. | The dataset must completely fit on a compute target disk.|
+|After the user script starts, there are no dependencies on storage / network reliability. |The entire dataset is downloaded (if training needs to randomly select only a small portion of a data, then much of the download is wasted).|
+|Azure Machine Learning data runtime can parallelize the download (significant difference on many small files) and max network / storage throughput.|The job waits until all data downloads to the local disk of the compute target. If you submit a deep-learning job, the GPUs idle until data is ready.|
|No unavoidable overhead added by the FUSE layer (roundtrip: user space call in user script → kernel → user space fuse daemon → kernel → response to user script in user space) | Storage changes aren't reflected on the data after download is done. | #### When to use download -- The data is small enough to fit on the compute target's disk without interfering with other training.
+- The data is small enough to fit on the compute target's disk without interference with other training.
- The training uses most or all of the dataset. - The training reads files from a dataset more than once.-- The training needs to jump to random positions of a large file.
+- The training must jump to random positions of a large file.
- It's OK to wait until all the data downloads before training starts. #### Available download settings
You can tune the download settings with the following environment variables in y
| `RSLEX_DOWNLOADER_THREADS` | u64 | `NUMBER_OF_CPU_CORES * 4` | Number of concurrent threads download can use | | `AZUREML_DATASET_HTTP_RETRY_COUNT` | u64 | 7 | Number of retry attempts of individual storage / `http` request to recover from transient errors. |
-In your job, you can change the above defaults by setting the environment variables, for example:
+In your job, you can change the above defaults by setting the environment variables - for example:
# [Python SDK](#tab/python)
environment_variables:
#### Download performance metrics The VM size of your compute target has an effect on the download time of your data. Specifically: -- *The number of cores*. The more cores available, the more concurrency and therefore faster download.
+- *The number of cores*. The more cores available, the more concurrency and therefore faster download speed.
- *The expected network bandwidth*. Each VM in Azure has a maximum throughput from the Network Interface Card (NIC). > [!NOTE]
-> For A100 GPU VMs, the Azure Machine Learning data runtime can saturating the NIC (Network Interface Card) when downloading data to the compute target (~24 Gbit/s): **The theoretical maximum throughput possible**.
+> For A100 GPU VMs, the Azure Machine Learning data runtime can saturate the NIC (Network Interface Card) when downloading data to the compute target (~24 Gbit/s): **The theoretical maximum throughput possible**.
This table shows the download performance the Azure Machine Learning data runtime can handle for a 100-GB file on a `Standard_D15_v2` VM (20 cores, 25 Gbit/s network throughput):
This table shows the download performance the Azure Machine Learning data runtim
|100 x 1 GB Files | 58.09 | 259.47| 13.77 Gbit/s |1 x 100 GB File | 96.13 | 300.61 | 8.32 Gbit/s
-We can see that a larger file, broken up into smaller files, can improve download performance due to parallelism. We recommend that files don't become too small (not less than 4 MB) as the time spent on storage request submissions increases, relative to time spent downloading the payload. For more information, read [Many small files problem](#many-small-files-problem).
+We can see that a larger file, broken up into smaller files, can improve download performance due to parallelism. We recommend that you avoid files that become too small (less than 4 MB) because the time needed for storage request submissions increases, relative to time spent downloading the payload. For more information, read [Many small files problem](#many-small-files-problem).
### Mount (streaming) In mount mode, the Azure Machine Learning data capability uses the [FUSE (filesystem in user space)](https://www.kernel.org/doc/html/latest/filesystems/fuse.html) Linux feature, to create an emulated filesystem. Instead of downloading all the data to the local disk (SSD) of the compute target, the runtime can react to the user's script actions *in real-time*. For example, *"open file"*, *"read 2-KB chunk from position X"*, *"list directory content"*.
In mount mode, the Azure Machine Learning data capability uses the [FUSE (filesy
| Advantages | Disadvantages | |--|-| |Data that exceeds the compute target local disk capacity can be used (not limited by compute hardware)|Added overhead of the Linux FUSE module.|
-|No delay at the beginning of training (unlike download mode).|Dependency on userΓÇÖs code behavior (if the training code that sequentially reads small files in a single thread mount also requests data from storage, it may not maximize the network or storage throughput).|
+|No delay at the start of training (unlike download mode).|Dependency on user's code behavior (if the training code that sequentially reads small files in a single thread mount also requests data from storage, it may not maximize the network or storage throughput).|
|More available settings to tune for a usage scenario.| No Windows support.| |Only data needed for training is read from storage.| | #### When to use Mount -- The data is large, and it can't fit on the compute target local disk.
+- The data is large, and it won't fit on the compute target's local disk.
- Each individual compute node in a cluster doesn't need to read the entire dataset (random file or rows in csv file selection, etc.). - Delays waiting for all data to download before training starts can become a problem (idle GPU time).
environment_variables:
#### Block-based open mode
-In block-based open mode, each file is split into blocks of predefined size (except for the last block). A read request from a specified position requests a corresponding block from storage, and returns the requested data immediately. A read also triggers background prefetching of *N* next blocks, using multiple threads (optimized for sequential read). Downloaded blocks are cached in two layer cache (RAM and local disk).
+In block-based open mode, each file is split into blocks of a predefined size (except for the last block). A read request from a specified position requests a corresponding block from storage, and returns the requested data immediately. A read also triggers background prefetching of *N* next blocks, using multiple threads (optimized for sequential read). Downloaded blocks are cached in two layer cache (RAM and local disk).
| Advantages | Disadvantages | ||-- |
Recommended for most scenarios *except* when you need fast reads from random fil
#### Whole file cache open mode |Advantages | Disadvantages | |--||
environment_variables:
#### Mount: Listing files
-When working with *millions* of files, avoid a *recursive listing* - for example `ls -R /mnt/dataset/folder/`. A recursive listing triggers many calls to list the directory contents of the parent directory. It then requires a separate recursive call for each directory inside, at all child levels. Typically, Azure Storage allows only 5000 elements to be returned per single list request. This means a recursive listing of 1M folders containing 10 files each requires `1,000,000 / 5000 + 1,000,000 = 1,000,200` requests to storage. In comparison, 1,000 folders with 10,000 files would only need 1001 requests to storage for a recursive listing.
+When working with *millions* of files, avoid a *recursive listing* - for example `ls -R /mnt/dataset/folder/`. A recursive listing triggers many calls to list the directory contents of the parent directory. It then requires a separate recursive call for each directory inside, at all child levels. Typically, Azure Storage allows only 5000 elements to be returned per single list request. As a result, a recursive listing of 1M folders containing 10 files each requires `1,000,000 / 5000 + 1,000,000 = 1,000,200` requests to storage. In comparison, 1,000 folders with 10,000 files would only need 1001 requests to storage for a recursive listing.
Azure Machine Learning mount handles listing in a lazy manner. Therefore, to list many small files, it's better to use an iterative client library call (for example, `os.scandir()` in Python) instead of a client library call that returns the full list (for example, `os.listdir()` in Python). An iterative client library call returns a generator, meaning that it doesn't need to wait until the entire list loads. It can then proceed faster.
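+For example, a minimal sketch of lazily iterating a mounted folder (the mount path and the per-file processing are placeholders):
+
+```python
+import os
+
+mount_path = "/mnt/dataset/folder"  # placeholder mount location
+
+# os.scandir returns an iterator, so entries stream back while the mount lists the folder lazily.
+with os.scandir(mount_path) as entries:
+    for entry in entries:
+        if entry.is_file():
+            print(entry.path)  # replace with your own per-file processing
+
+# In contrast, os.listdir(mount_path) waits for the complete listing before it returns,
+# which is slower for folders that contain many files.
+```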
environment_variables:
## Diagnosing and solving data loading bottlenecks
-When an Azure Machine Learning job executes with data, the `mode` of an input determines how bytes are read from storage and cached on the compute target local SSD disk. For download mode, all the data caches on disk before the user code starts execution. Therefore, factors such as
+When an Azure Machine Learning job executes with data, the `mode` of an input determines how bytes are read from storage and cached on the compute target local SSD disk. For download mode, all the data caches on disk, before the user code starts its execution. Therefore, factors such as
- number of parallel threads - the number of files
When an Azure Machine Learning job executes with data, the `mode` of an input de
have an effect on maximum download speeds. For mount, no data caches until the user code starts to open files. Different mount settings result in different reading and caching behavior. Various factors have an effect on the speed that data loads from storage: -- **Data locality to compute**: Your storage and compute target locations should be the same. If your storage and compute target are in different regions, performance degrades because data must transfer across regions. To learn more about ensuring that your data colocates with compute, read [Colocate data with compute](#colocate-data-with-compute).
+- **Data locality to compute**: Your storage and compute target locations should be the same. If your storage and compute target are located in different regions, performance degrades because data must transfer across regions. To learn more about ensuring that your data colocates with compute, read [Colocate data with compute](#colocate-data-with-compute).
- **The compute target size**: Small computes have lower core counts (less parallelism) and smaller expected network bandwidth compared to larger compute sizes - both factors affect data loading performance. - For example, if you use a small VM size, such as `Standard_D2_v2` (2 cores, 1500 Mbps NIC), and you try to load 50,000 MB (50 GB) of data, the best achievable data loading time would be ~270 secs (assuming you saturate the NIC at 187.5-MB/s throughput). In contrast, a `Standard_D5_v2` (16 cores, 12,000 Mbps) would load the same data in ~33 secs (assuming you saturate the NIC at 1500-MB/s throughput). - **Storage tier**: For most scenarios - including Large Language Models (LLM) - standard storage provides the best cost/performance profile. However, if you have [many small files](#many-small-files-problem), *premium* storage offers a better cost/performance profile. For more information, read [Azure Storage options](#azure-storage-options).
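+A quick back-of-the-envelope calculation, based on the `Standard_D2_v2` example above, shows how the NIC bandwidth bounds the best-case load time:
+
+```python
+# Best-case load time = data size / usable NIC throughput (illustrative only).
+data_size_mb = 50_000               # 50 GB of data
+nic_mbps = 1_500                    # Standard_D2_v2 expected network bandwidth, in Mbps
+throughput_mb_per_s = nic_mbps / 8  # ~187.5 MB/s if the NIC is saturated
+print(f"~{data_size_mb / throughput_mb_per_s:.0f} seconds")  # ~267 seconds (roughly 270 secs)
+```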
If the download throughput is a fraction of the expected network bandwidth for t
2023-05-18T14:08:25.388762Z INFO copy_uri:copy_uri:copy_dataset:write_streams_to_files:collect:reduce:reduce_and_combine:combine: rslex::dataset_crossbeam: close time.busy=1.22ms time.idle=9.50µs sessionId=012ea46a-341c-4258-8aba-90bde4fdfb51 source=Dataset[Partitions: 1, Sources: 1] file_name_column=None break_on_first_error=true skip_existing_files=false parallelization_degree=4 self=Dataset[Partitions: 1, Sources: 1] parallelization_degree=4 self=Dataset[Partitions: 1, Sources: 1] parallelization_degree=4 self=Dataset[Partitions: 1, Sources: 1] parallelization_degree=4 ```
-The **rslex.log** file details all the file copying, whether or not you chose the mount or download modes. It also describes the Settings (environment variables) used. To start debugging, to check whether you have set the [Optimum mount settings for common scenarios](#optimum-mount-settings-for-common-scenarios).
+The **rslex.log** file provides details about all the file copying, whether or not you chose the mount or download modes. It also describes the Settings (environment variables) used. To start debugging, check whether you have set the [Optimum mount settings for common scenarios](#optimum-mount-settings-for-common-scenarios).
#### Monitor Azure storage
-In the Azure portal, you can select your Storage account and then **Metrics** to see the storage metrics:
+In the Azure portal, you can select your Storage account, and then **Metrics**, to see the storage metrics:
:::image type="content" source="media/how-to-read-write-data-v2/blob-metrics.png" alt-text="Screenshot showing blob metrics." lightbox="media/how-to-read-write-data-v2/blob-metrics.png":::
-You then plot the **SuccessE2ELatency** with **SuccessServerLatency**. If the **metrics show high SuccessE2ELatency and low SuccessServerLatency**, you have limited available threads, or you're low on resources such as CPU, memory, or network bandwidth. You should:
+You then plot the **SuccessE2ELatency** with **SuccessServerLatency**. If the **metrics show high SuccessE2ELatency and low SuccessServerLatency**, you have limited available threads, or you're running low on resources such as CPU, memory, or network bandwidth. You should:
- Use [monitoring view in the Azure Machine Learning studio](#monitor-disk-usage-during-a-job) to check the CPU and memory utilization of your job. If you're low on CPU and memory, consider increasing the compute target VM size. - Consider increasing `RSLEX_DOWNLOADER_THREADS` if you're downloading and you aren't utilizing the CPU and memory. If you use mount, you should increase `DATASET_MOUNT_READ_BUFFER_BLOCK_COUNT` to do more prefetching, and increase `DATASET_MOUNT_READ_THREADS` for more read threads.
From the Azure Machine Learning studio, you can also monitor the compute target
:::image type="content" source="media/how-to-read-write-data-v2/disk-usage.png" alt-text="Screenshot showing disk usage during job execution." lightbox="media/how-to-read-write-data-v2/disk-usage.png"::: > [!NOTE]
-> Job monitoring supports only compute that Azure Machine Learning manages. Jobs with a runtime of less than 5 minutes will not have enough data to populate this view.
+> Job monitoring supports only compute resources that Azure Machine Learning manages. Jobs with a runtime of less than 5 minutes will not have enough data to populate this view.
-Azure Machine Learning data runtime doesn't use the last `RESERVED_FREE_DISK_SPACE` bytes of disk space to keep the compute healthy (the default value is `150MB`). If your disk is full, your code is writing files to disk without declaring the files as an output. Therefore, check your code to make sure that data isn't being written erroneously to temporary disk. If you must write files to temporary disk, and that resource is becoming full, consider:
+Azure Machine Learning data runtime doesn't use the last `RESERVED_FREE_DISK_SPACE` bytes of disk space, to keep the compute healthy (the default value is `150MB`). If your disk is full, your code is writing files to disk without declaring the files as an output. Therefore, check your code to make sure that data isn't being written erroneously to temporary disk. If you must write files to temporary disk, and that resource is becoming full, consider:
- Increasing the VM Size to one that has a larger temporary disk. - Setting a TTL on the cached data (`DATASET_MOUNT_ATTRIBUTE_CACHE_TTL`), to purge your data from disk.
data_path_for_this_rank = args.data[rank]
Reading files from storage involves making requests for each file. The request count per file varies, based on file sizes and the settings of the software that handles the file reads.
-Files are usually read in *blocks* of 1-4 MB size. Files smaller than a block are read with a single request (GET file.jpg 0-4MB) and files larger than a block have one request made per block (GET file.jpg 0-4MB, GET file.jpg 4-8 MB). The following table shows that files smaller than a 4-MB block result in more storage requests compared to larger files:
+Files are generally read in *blocks* of 1-4 MB size. Files smaller than a block are read with a single request (GET file.jpg 0-4MB), and files larger than a block have one request made per block (GET file.jpg 0-4MB, GET file.jpg 4-8 MB). The following table shows that files smaller than a 4-MB block result in more storage requests compared to larger files:
|# Files |File Size | Total data size | Block size |# Storage requests | |||||--|
machine-learning How To Schedule Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-data-import.md
+
+ Title: Schedule data import (preview)
+
+description: Learn how to schedule an automated data import that brings in data from external sources.
+++++++ Last updated : 06/19/2023+++
+# Schedule data import jobs (preview)
++
+In this article, you'll learn how to programmatically schedule data imports, and how to use the schedule UI to do the same. You can create a schedule based on elapsed time. Time-based schedules can handle routine tasks, such as importing data regularly to keep it up to date. After learning how to create schedules, you'll learn how to retrieve, update, and deactivate them via the CLI, SDK, and studio UI.
+
+## Prerequisites
+
+- You must have an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+# [Azure CLI](#tab/cli)
+
+- Install the Azure CLI and the `ml` extension. Follow the installation steps in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+- Create an Azure Machine Learning workspace if you don't have one. For workspace creation, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+# [Python SDK](#tab/python)
+
+- Create an Azure Machine Learning workspace if you don't have one.
+- The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ai-ml-readme).
+
+# [Studio](#tab/azure-studio)
+
+- An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
+- Understanding of Azure Machine Learning pipelines. See [what are machine learning pipelines](concept-ml-pipelines.md), and how to create pipeline job in [CLI v2](how-to-create-component-pipelines-cli.md) or [SDK v2](how-to-create-component-pipeline-python.md).
+++
+## Schedule data import
+
+To import data on a recurring basis, you must create a schedule. A `Schedule` associates a data import action with a trigger. The trigger can either be `cron`, which uses a cron expression to describe the interval between runs, or `recurrence`, which specifies the frequency at which to trigger a job. In each case, you must first define an import data definition. An existing data import, or a data import that is defined inline, works for this. Refer to [Create a data import in CLI, SDK and UI](how-to-import-data-assets.md).
+
+## Create a schedule
+
+### Create a time-based schedule with recurrence pattern
+
+# [Azure CLI](#tab/cli)
++
+#### YAML: Schedule for data import with recurrence pattern
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: simple_recurrence_import_schedule
+display_name: Simple recurrence import schedule
+description: a simple hourly recurrence import schedule
+
+trigger:
+ type: recurrence
+ frequency: day #can be minute, hour, day, week, month
+ interval: 1 #every day
+ schedule:
+ hours: [4,5,10,11,12]
+ minutes: [0,30]
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data: ./my-snowflake-import-data.yaml
+
+```
+#### YAML: Schedule for data import definition inline with recurrence pattern on managed datastore
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: inline_recurrence_import_schedule
+display_name: Inline recurrence import schedule
+description: an inline hourly recurrence import schedule
+
+trigger:
+ type: recurrence
+ frequency: day #can be minute, hour, day, week, month
+ interval: 1 #every day
+ schedule:
+ hours: [4,5,10,11,12]
+ minutes: [0,30]
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data:
+ type: mltable
+ name: my_snowflake_ds
+ path: azureml://datastores/workspacemanagedstore
+ source:
+ type: database
+ query: select * from TPCH_SF1.REGION
+ connection: azureml:my_snowflake_connection
+
+```
+
+`trigger` contains the following properties:
+
+- **(Required)** `type` specifies the schedule type, either `recurrence` or `cron`. See the following section for more details.
+
+Next, run this command in the CLI:
+
+```cli
+> az ml schedule create -f <file-name>.yml
+```
+
+# [Python SDK](#tab/python)
++
+```python
+from azure.ai.ml.data_transfer import Database
+from azure.ai.ml.constants import TimeZone
+from azure.ai.ml.entities import (
+    DataImport,
+ ImportDataSchedule,
+ RecurrenceTrigger,
+ RecurrencePattern,
+)
+from datetime import datetime
+
+source = Database(connection="azureml:my_sf_connection", query="select * from my_table")
+
+path = "azureml://datastores/workspaceblobstore/paths/snowflake/schedule/${{name}}"
++
+my_data = DataImport(
+ type="mltable", source=source, path=path, name="my_schedule_sfds_test"
+)
+
+schedule_name = "my_simple_sdk_create_schedule_recurrence"
+
+schedule_start_time = datetime.utcnow()
+
+recurrence_trigger = RecurrenceTrigger(
+ frequency="day",
+ interval=1,
+ schedule=RecurrencePattern(hours=1, minutes=[0, 1]),
+ start_time=schedule_start_time,
+ time_zone=TimeZone.UTC,
+)
+
+import_schedule = ImportDataSchedule(
+ name=schedule_name, trigger=recurrence_trigger, import_data=my_data
+)
+
+ml_client.schedules.begin_create_or_update(import_schedule).result()
+
+```
+`RecurrenceTrigger` contains the following properties:
+
+- **(Required)** To provide a better coding experience, we use the `RecurrenceTrigger` class for a recurrence schedule.
+
+# [Studio](#tab/azure-studio)
+
+When you have a data import with satisfactory performance and outputs, you can set up a schedule to automatically trigger this import.
+
+ 1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+
+ 1. Under **Assets** in the left navigation, select **Data**. On the **Data import** tab, select the imported data asset to which you want to attach a schedule. The **Import jobs history** page should appear, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/data-import-list.png" lightbox="./media/how-to-schedule-data-import/data-import-list.png" alt-text="Screenshot highlighting the imported data asset name in the Data imports tab.":::
+
+ 1. At the **Import jobs history** page, select the latest **Import job name** link, to open the pipelines job details page as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/data-import-history.png" lightbox="./media/how-to-schedule-data-import/data-import-history.png" alt-text="Screenshot highlighting the imported data asset guid in the Import jobs history tab.":::
+
+ 1. At the pipeline job details page of any data import, select **Schedule** -> **Create new schedule** to open the schedule creation wizard, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/schedule-entry-button.png" lightbox="./media/how-to-schedule-data-import/schedule-entry-button.png" alt-text="Screenshot of the jobs tab, with the create new schedule button.":::
+
+ 1. The *Basic settings* of the schedule creation wizard have the properties shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/create-schedule-basic-settings.png" lightbox="./media/how-to-schedule-data-import/create-schedule-basic-settings.png" alt-text="Screenshot of schedule creation wizard showing the basic settings.":::
+
+ - **Name**: the unique identifier of the schedule within the workspace.
+ - **Description**: the schedule description.
+ - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default.
+ - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months.
+ - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule.
+ - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule remains active until you manually disable it.
+ - **Tags**: the selected schedule tags.
+
+ After you configure the basic settings, you can select **Review + Create**, and the schedule will automatically submit the data import based on the recurrence pattern you specified. You can also select **Next**, and navigate through the wizard to select or update the data import parameters.
++
+> [!NOTE]
+> These properties apply to CLI and SDK:
+
+- **(Required)** `frequency` specifies the unit of time that describes how often the schedule fires. Can have values of `minute`, `hour`, `day`, `week`, or `month`.
+
+- **(Required)** `interval` specifies how often the schedule fires based on the frequency, which is the number of time units to wait until the schedule fires again.
+
+- (Optional) `schedule` defines the recurrence pattern, containing `hours`, `minutes`, and `weekdays`.
+ - When `frequency` equals `day`, a pattern can specify `hours` and `minutes`.
+ - When `frequency` equals `week` and `month`, a pattern can specify `hours`, `minutes` and `weekdays`.
+ - `hours` should be an integer or a list, ranging between 0 and 23.
+ - `minutes` should be an integer or a list, ranging between 0 and 59.
+ - `weekdays` a string or list ranging from `monday` to `sunday`.
+ - If `schedule` is omitted, the job(s) triggers according to the logic of `start_time`, `frequency` and `interval`.
+
+- (Optional) `start_time` describes the start date and time, with a timezone. If `start_time` is omitted, start_time equals the job creation time. For a start time in the past, the first job runs at the next calculated run time.
+
+- (Optional) `end_time` describes the end date and time with a timezone. If `end_time` is omitted, the schedule continues to trigger jobs until the schedule is manually disabled.
+
+- (Optional) `time_zone` specifies the time zone of the recurrence. If omitted, the default timezone is UTC. To learn more about timezone values, see [appendix for timezone values](reference-yaml-schedule.md#appendix).
+
+### Create a time-based schedule with cron expression
+
+# [Azure CLI](#tab/cli)
+
++
+#### YAML: Schedule for data import with cron expression (preview)
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: simple_cron_import_schedule
+display_name: Simple cron import schedule
+description: a simple hourly cron import schedule
+
+trigger:
+ type: cron
+ expression: "0 * * * *"
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data: ./my-snowflake-import-data.yaml
+```
+
+#### YAML: Schedule for data import definition inline with cron expression (preview)
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: inline_cron_import_schedule
+display_name: Inline cron import schedule
+description: an inline hourly cron import schedule
+
+trigger:
+ type: cron
+ expression: "0 * * * *"
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data:
+ type: mltable
+ name: my_snowflake_ds
+ path: azureml://datastores/workspaceblobstore/paths/snowflake/${{name}}
+ source:
+ type: database
+ query: select * from TPCH_SF1.REGION
+ connection: azureml:my_snowflake_connection
+```
+
+The `trigger` section defines the schedule details and contains the following properties:
+
+- **(Required)** `type` specifies that the schedule type is `cron`.
+
+```cli
+> az ml schedule create -f <file-name>.yml
+```
+
+The trigger property list continues below the tabbed examples:
+
+# [Python SDK](#tab/python)
++
+```python
+from azure.ai.ml.data_transfer import Database
+from azure.ai.ml.constants import TimeZone
+from azure.ai.ml.entities import CronTrigger, DataImport, ImportDataSchedule
+from datetime import datetime
+
+source = Database(connection="azureml:my_sf_connection", query="select * from my_table")
+
+path = "azureml://datastores/workspaceblobstore/paths/snowflake/schedule/${{name}}"
++
+my_data = DataImport(
+ type="mltable", source=source, path=path, name="my_schedule_sfds_test"
+)
+
+schedule_name = "my_simple_sdk_create_schedule_cron"
+
+cron_trigger = CronTrigger(
+ expression="15 10 * * 1",
+ start_time=datetime.utcnow(),
+ end_time="2023-12-03T18:40:00",
+)
+import_schedule = ImportDataSchedule(
+ name=schedule_name, trigger=cron_trigger, import_data=my_data
+)
+ml_client.schedules.begin_create_or_update(import_schedule).result()
+
+```
+
+The `CronTrigger` section defines the schedule details and contains the following properties:
+
+- **(Required)** To provide a better coding experience, we use the `CronTrigger` class for the cron schedule.
+
+The trigger property list continues below the tabbed examples:
+
+# [Studio](#tab/azure-studio)
+
+When you have a data import with satisfactory performance and outputs, you can set up a schedule to automatically trigger this import.
+
+ 1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+
+ 1. Under **Assets** in the left navigation, select **Data**. On the **Data import** tab, select the imported data asset to which you want to attach a schedule. The **Import jobs history** page should appear, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/data-import-list.png" lightbox="./media/how-to-schedule-data-import/data-import-list.png" alt-text="Screenshot highlighting the imported data asset name in the Data imports tab.":::
+
+ 1. At the **Import jobs history** page, select the latest **Import job name** link, to open the pipelines job details page as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/data-import-history.png" lightbox="./media/how-to-schedule-data-import/data-import-history.png" alt-text="Screenshot highlighting the imported data asset guid in the Import jobs history tab.":::
+
+ 1. At the pipeline job details page of any data import, select **Schedule** -> **Create new schedule** to open the schedule creation wizard, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/schedule-entry-button.png" lightbox="./media/how-to-schedule-data-import/schedule-entry-button.png" alt-text="Screenshot of the jobs tab, with the create new schedule button.":::
+
+ 1. The *Basic settings* of the schedule creation wizard have the properties shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/create-schedule-basic-settings.png" lightbox="./media/how-to-schedule-data-import/create-schedule-basic-settings.png" alt-text="Screenshot of schedule creation wizard showing the basic settings.":::
+
+ - **Name**: the unique identifier of the schedule within the workspace.
+ - **Description**: the schedule description.
+ - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default.
+  - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. **Cron expression** allows you to specify a more flexible and customized recurrence pattern.
+ - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule.
+ - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule remains active until you manually disable it.
+ - **Tags**: the selected schedule tags.
+
+ After you configure the basic settings, you can select **Review + Create**, and the schedule will automatically submit the data import based on the recurrence pattern you specified. You can also select **Next**, and navigate through the wizard to select or update the data import parameters.
+++
+- **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:
+
+ `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`
+
+    - A single wildcard (`*`) covers all values for a field. A `*` in days means all days of a month (which varies with month and year).
+    - The `expression: "15 16 * * 1"` in the sample above means 4:15 PM (16:15) every Monday.
+ - The next table lists the valid values for each field:
+
+ | Field | Range | Comment |
+ |-|-|--|
+ | `MINUTES` | 0-59 | - |
+ | `HOURS` | 0-23 | - |
+ | `DAYS` | - | Not supported. The value is ignored and treated as `*`. |
+ | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. |
+ | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. |
+
+ - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression).
+
+ > [!IMPORTANT]
+    > `DAYS` and `MONTHS` are not supported. If you pass either value, it's ignored and treated as `*`.
+
+- (Optional) `start_time` specifies the start date and time with the timezone of the schedule. For example, `start_time: "2022-05-10T10:15:00-04:00"` means the schedule starts at 10:15 AM on 2022-05-10 in the UTC-4 timezone. If `start_time` is omitted, it defaults to the schedule creation time. For a start time in the past, the first job runs at the next calculated run time.
+
+- (Optional) `end_time` describes the end date and time, with a timezone. If `end_time` is omitted, the schedule continues to trigger jobs until the schedule is manually disabled.
+
+- (Optional) `time_zone` specifies the time zone of the expression. If omitted, the timezone is UTC by default. See the [appendix for timezone values](reference-yaml-schedule.md#appendix). A combined sketch of these properties follows this list.
+
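+The following sketch combines these properties in a `CronTrigger`, assuming the Azure Machine Learning Python SDK v2 (`azure-ai-ml`); the expression, start date, and time zone values shown are placeholders:
+
+```python
+from azure.ai.ml.entities import CronTrigger
+
+# Sketch only: trigger at 16:15 every Monday, with a placeholder start date and
+# time zone. end_time is omitted, so the schedule stays active until disabled.
+cron_trigger = CronTrigger(
+    expression="15 16 * * 1",
+    start_time="2023-07-10T00:00:00",
+    time_zone="Eastern Standard Time",
+)
+```
+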
+Limitations:
+
+- Currently, Azure Machine Learning v2 scheduling doesn't support event-based triggers.
+- Use the Azure Machine Learning SDK/CLI v2 to specify a complex recurrence pattern that contains multiple trigger timestamps. The UI only displays the complex pattern and doesn't support editing.
+- If you set the recurrence as the 31st day of every month, the schedule won't trigger jobs in months with fewer than 31 days.
+
+### List schedules in a workspace
+
+# [Azure CLI](#tab/cli)
+++
+# [Python SDK](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/python/schedules/job-schedule.ipynb?name=list_schedule)]
+
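+If you prefer an inline example, here's a minimal sketch that lists schedules with the SDK v2 (assuming `ml_client` is an authenticated `MLClient`):
+
+```python
+# Sketch: list all schedules in the workspace and print their names
+for schedule in ml_client.schedules.list():
+    print(schedule.name)
+```
+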
+# [Studio](#tab/azure-studio)
+
+In the studio portal, under the **Jobs** extension, select the **All schedules** tab. That tab shows all your job schedules created by the SDK/CLI/UI, in a single list. In the schedule list, you have an overview of all schedules in this workspace, as shown in this screenshot:
++++
+### Check schedule detail
+
+# [Azure CLI](#tab/cli)
++
+```cli
+az ml schedule show -n simple_cron_data_import_schedule
+```
+
+# [Python SDK](#tab/python)
++
+```python
+created_schedule = ml_client.schedules.get(name=schedule_name)
+print(created_schedule.name)
+
+```
+
+# [Studio](#tab/azure-studio)
+
+You can select a schedule name to show the schedule details page. The schedule details page contains the following tabs, as shown in this screenshot:
+
+- **Overview**: basic information for the specified schedule.
+
+ :::image type="content" source="./media/how-to-schedule-data-import/schedule-detail-overview.png" alt-text="Screenshot of the overview tab in the schedule details page." :::
+
+- **Job definition**: defines the job that the specified schedule triggers, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/schedule-detail-job-definition.png" alt-text="Screenshot of the job definition tab in the schedule details page.":::
+++
+### Update a schedule
+
+# [Azure CLI](#tab/cli)
++
+```cli
+az ml schedule update -n simple_cron_data_import_schedule --set description="new description" --no-wait
+```
+
+> [!NOTE]
+> To update more than just tags/description, it is recommended to use `az ml schedule create --file update_schedule.yml`
+
+# [Python SDK](#tab/python)
++
+```python
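+# job_schedule is the schedule object created or retrieved earlier; modify its
+# properties (for example, description or trigger) before submitting the update.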
+job_schedule = ml_client.schedules.begin_create_or_update(
+ schedule=job_schedule
+).result()
+print(job_schedule)
+
+```
+
+# [Studio](#tab/azure-studio)
+
+#### Update a data import definition to existing schedule
+
+To change the import frequency, or to create a new association for the data import job, you can update the import definition of an existing schedule.
+
+> [!NOTE]
+> When you update an existing schedule, its association with the old import definition is removed. A schedule can have only one import job definition; however, multiple schedules can call the same data import definition.
+
+ 1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
+
+ 1. Under **Assets** in the left navigation, select **Data**. On the **Data import** tab, select the imported data asset to which you want to attach a schedule. Then, the **Import jobs history** page opens, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/data-import-list.png" alt-text="Screenshot highlighting the imported data asset name in the Data imports tab.":::
+
+   1. At the **Import jobs history** page, select the latest **Import job name** link to open the pipeline job details page, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/data-import-history.png" alt-text="Screenshot highlighting the imported data asset guid in the Import jobs history tab.":::
+
+   1. At the pipeline job details page of any data import, select **Schedule** -> **Update to existing schedule**, to open the Select schedule wizard, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/schedule-update-button.png" alt-text="Screenshot of the jobs tab with the schedule button selected, showing the create update to existing schedule button.":::
+
+ 1. Select an existing schedule from the list, as shown in this screenshot:
+
+ :::image type="content" source="./media/how-to-schedule-data-import/update-select-schedule.png" alt-text="Screenshot of update select schedule showing the select schedule tab." :::
+
+ > [!IMPORTANT]
+ > Make sure to select the correct schedule to update. Once you finish the update, the schedule will trigger different data imports.
+
+ 1. You can also modify the source and query, and change the destination path, for future data imports that the schedule triggers.
+
+ 1. Select **Review + Update** to finish the update process. You'll receive a notification when the update completes.
+
+ 1. You can view the new data import definition in the schedule details page when the update is completed.
+
+#### Update in schedule detail page
+
+In the schedule details page, you can select **Update settings** to update both the basic settings and advanced settings, including the job input/output and runtime settings of the schedule, as shown in this screenshot:
++++
+### Disable a schedule
+
+# [Azure CLI](#tab/cli)
++
+```cli
+az ml schedule disable -n simple_cron_data_import_schedule --no-wait
+```
+
+# [Python SDK](#tab/python)
+
+```python
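+# Disable the schedule; the returned schedule object's is_enabled property reflects the new state.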
+job_schedule = ml_client.schedules.begin_disable(name=schedule_name).result()
+job_schedule.is_enabled
+
+```
+
+# [Studio](#tab/azure-studio)
+
+You can disable the current schedule at the schedule details page. You can also disable schedules at the **All schedules** tab.
+++
+### Enable a schedule
+
+# [Azure CLI](#tab/cli)
++
+```cli
+az ml schedule enable -n simple_cron_data_import_schedule --no-wait
+```
+
+# [Python SDK](#tab/python)
++
+```python
+# Enable the schedule; the returned schedule object reflects the new state
+job_schedule = ml_client.schedules.begin_enable(name=schedule_name).result()
+job_schedule.is_enabled
+
+```
+
+# [Studio](#tab/azure-studio)
+
+On the schedule details page, you can enable the current schedule. You can also enable schedules at the **All schedules** tab.
+++
+## Delete a schedule
+
+> [!IMPORTANT]
+> A schedule must be disabled before deletion. Deletion is an unrecoverable action. After a schedule is deleted, you can never access or recover it.
+
+# [Azure CLI](#tab/cli)
++
+```cli
+az ml schedule delete -n simple_cron_data_import_schedule
+```
+
+# [Python SDK](#tab/python)
++
+```python
+# Only disabled schedules can be deleted
+ml_client.schedules.begin_disable(name=schedule_name).result()
+ml_client.schedules.begin_delete(name=schedule_name).result()
+
+```
+
+# [Studio](#tab/azure-studio)
+
+You can delete a schedule from the schedule details page or the **All schedules** tab.
++
+## Role-based access control (RBAC) support
+
+Schedules are generally used for production. To prevent problems, workspace admins may want to restrict schedule creation and management permissions within a workspace.
+
+There are currently three action rules related to schedules, and you can configure them in the Azure portal. To learn more, see [how to manage access to an Azure Machine Learning workspace](how-to-assign-roles.md#create-custom-role).
+
+| Action | Description | Rule |
+|--|--|--|
+| Read | Get and list schedules in Machine Learning workspace | Microsoft.MachineLearningServices/workspaces/schedules/read |
+| Write | Create, update, disable and enable schedules in Machine Learning workspace | Microsoft.MachineLearningServices/workspaces/schedules/write |
+| Delete | Delete a schedule in Machine Learning workspace | Microsoft.MachineLearningServices/workspaces/schedules/delete |
+
+## Next steps
+
+* Learn more about the [CLI (v2) data import schedule YAML schema](./reference-yaml-schedule-data-import.md).
+* Learn how to [manage imported data assets](how-to-manage-imported-data-assets.md).
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
# Query & compare experiments and runs with MLflow
-Experiments and runs tracking information in Azure Machine Learning can be queried using MLflow. You don't need to install any specific SDK to manage what happens inside of a training job, creating a more seamless transition between local runs and the cloud by removing cloud-specific dependencies. In this article, you'll learn how to query and compare experiments and runs in your workspace using Azure Machine Learning and MLflow SDK in Python.
+Experiments and jobs (or runs) in Azure Machine Learning can be queried using MLflow. You don't need to install any specific SDK to manage what happens inside of a training job, creating a more seamless transition between local runs and the cloud by removing cloud-specific dependencies. In this article, you'll learn how to query and compare experiments and runs in your workspace using Azure Machine Learning and MLflow SDK in Python.
MLflow allows you to:
See [Support matrix for querying runs and experiments in Azure Machine Learning]
> [!NOTE] > The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just for logging but also for querying the metrics logged. Instead, use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure Machine Learning.
+### REST API
+
+Querying and searching experiments and runs is also available through the MLflow REST API. See [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb) for an example of how to consume it.
+ ### Prerequisites [!INCLUDE [mlflow-prereqs](../../includes/machine-learning-mlflow-prereqs.md)]
-## Query experiments
+## Query and search experiments
-Use MLflow to search for experiments inside of your workspace.
+Use MLflow to search for experiments inside of your workspace. See the following examples:
-## Getting all the experiments
+* Get all active experiments:
-You can get all the active experiments in the workspace using MLflow:
+ ```python
+ mlflow.search_experiments()
+ ```
+
+ > [!NOTE]
+ > In legacy versions of MLflow (<2.0) use method `mlflow.list_experiments()` instead.
-```python
-experiments = mlflow.search_experiments()
-for exp in experiments:
- print(exp.name)
-```
+* Get all the experiments, including archived:
-> [!NOTE]
-> In legacy versions of MLflow (<2.0) use method `mlflow.list_experiments()` instead.
+ ```python
+ from mlflow.entities import ViewType
+
+ mlflow.search_experiments(view_type=ViewType.ALL)
+ ```
-If you want to retrieve archived experiments too, then include the parameter `view_type=ViewType.ALL`. The following sample shows how:
+* Get a specific experiment by name:
-```python
-from mlflow.entities import ViewType
+ ```python
+ mlflow.get_experiment_by_name(experiment_name)
+ ```
-experiments = mlflow.search_experiments(view_type=ViewType.ALL)
-for exp in experiments:
- print(exp.name)
-```
+* Get a specific experiment by ID:
-### Getting a specific experiment
+ ```python
+ mlflow.get_experiment('1234-5678-90AB-CDEFG')
+ ```
-Details about a specific experiment can be retrieved using the `get_experiment_by_name` method:
+### Searching experiments
-```python
-exp = mlflow.get_experiment_by_name(experiment_name)
-print(exp)
-```
+The `search_experiments()` method, available since MLflow 2.0, allows searching for experiments that match criteria using `filter_string`.
-### Searching experiments
+* Retrieve multiple experiments based on their IDs:
-The `search_experiments()` method available since Mlflow 2.0 allows searching experiment matching a criteria using `filter_string`. The following query retrieves three experiments with different IDs.
+ ```python
+ mlflow.search_experiments(filter_string="experiment_id IN ("
+ "'CDEFG-1234-5678-90AB', '1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')"
+ )
+ ```
-```python
-mlflow.search_experiments(filter_string="experiment_id IN (
- 'CDEFG-1234-5678-90AB', '1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')"
-)
-```
+* Retrieve all experiments created after a given time:
-## Query and search runs inside an experiment
+ ```python
+ import datetime
-MLflow allows searching runs inside of any experiment, including multiple experiments at the same time. By default, MLflow returns the data in Pandas `Dataframe` format, which makes it handy when doing further processing our analysis of the runs. Returned data includes columns with:
+ dt = datetime.datetime(2022, 6, 20, 5, 32, 48)
+ mlflow.search_experiments(filter_string=f"creation_time > {int(dt.timestamp())}")
+ ```
-- Basic information about the run.-- Parameters with column's name `params.<parameter-name>`.-- Metrics (last logged value of each) with column's name `metrics.<metric-name>`.
+* Retrieve all experiments with a given tag:
-### Getting all the runs from an experiment
+ ```python
+    mlflow.search_experiments(filter_string="tags.framework = 'torch'")
+ ```
-By experiment name:
+## Query and search runs
-```python
-mlflow.search_runs(experiment_names=[ "my_experiment" ])
-```
+MLflow allows searching runs inside of any experiment, including multiple experiments at the same time. The method `mlflow.search_runs()` accepts the arguments `experiment_ids` and `experiment_names` to indicate which experiments you want to search. You can also specify `search_all_experiments=True` if you want to search across all the experiments in the workspace:
-By experiment ID:
+* By experiment name:
-```python
-mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
-```
+ ```python
+ mlflow.search_runs(experiment_names=[ "my_experiment" ])
+ ```
-> [!TIP]
-> Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments if required. This may be useful in case you want to compare runs of the same model when it is being logged in different experiments (by different people, different project iterations, etc). You can also use `search_all_experiments=True` if you want to search across all the experiments in the workspace.
+* By experiment ID:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
+ ```
+
+* Search across all experiments in the workspace:
+
+ ```python
+ mlflow.search_runs(filter_string="params.num_boost_round='100'", search_all_experiments=True)
+ ```
+
+Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments if required. This may be useful if you want to compare runs of the same model when it's logged in different experiments (by different people, different project iterations, and so on).
-Another important point to notice is that get returning runs, all metrics are parameters are also returned for them. However, for metrics containing multiple values (for instance, a loss curve, or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, uses `mlflow.get_metric_history` method.
+> [!IMPORTANT]
+> If `experiment_ids`, `experiment_names`, or `search_all_experiments` aren't specified, then MLflow searches the current active experiment by default. You can set the active experiment by using `mlflow.set_experiment()`.
+
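+For example, here's a minimal sketch that relies on the active experiment (the experiment name is a placeholder):
+
+```python
+import mlflow
+
+# Set the active experiment, then search its runs without passing IDs or names
+mlflow.set_experiment("my_experiment")
+runs = mlflow.search_runs()
+```
+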
+By default, MLflow returns the data in Pandas `DataFrame` format, which makes it handy for further processing or analysis of the runs. Returned data includes columns with:
+
+- Basic information about the run.
+- Parameters with column's name `params.<parameter-name>`.
+- Metrics (last logged value of each) with column's name `metrics.<metric-name>`.
+
+All metrics and parameters are also returned when querying runs. However, for metrics containing multiple values (for instance, a loss curve, or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. See [Getting params and metrics from a run](#getting-params-and-metrics-from-a-run) for an example.
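+
+For instance, here's a minimal sketch that uses the MLflow client to read a full metric history (the run ID and metric name are placeholders):
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Retrieve every logged value of the "loss" metric for a given run,
+# instead of only the last value that search_runs() returns
+client = MlflowClient()
+for measurement in client.get_metric_history(run_id="1234-5678-90AB-CDEFG", key="loss"):
+    print(measurement.step, measurement.value)
+```
+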
### Ordering runs By default, runs are ordered descending by `start_time`, which is the time the run was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
-```python
-mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], order_by=["start_time DESC"])
-```
+* Order runs by the attribute `start_time`:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ order_by=["attributes.start_time DESC"])
+ ```
-Use the argument `max_results` from `search_runs` to limit the number of runs returned. For instance, the following example returns the last run of the experiment:
+* Order and retrieve the last run:
-```python
-mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], max_results=1, order_by=["start_time DESC"])
-```
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ max_results=1, order_by=["attributes.start_time DESC"])
+ ```
-> [!WARNING]
-> Using `order_by` with expressions containing `metrics.*` in the parameter `order_by` is not supported by the moment. Please use `order_values` method from Pandas as shown in the next example.
+* Order runs by metric's values:
-You can also order by metrics to know which run generated the best results:
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ]).sort_values("metrics.accuracy", ascending=False)
+ ```
+
+ > [!WARNING]
+    > Using `order_by` with expressions containing `metrics.*` in the parameter `order_by` isn't supported at the moment. Instead, use the `sort_values` method from Pandas, as shown in the previous example.
-```python
-mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ]).sort_values("metrics.accuracy", ascending=False)
-```
### Filtering runs
-You can also look for a run with a specific combination in the hyperparameters using the parameter `filter_string`. Use `params` to access run's parameters and `metrics` to access metrics logged in the run. MLflow supports expressions joined by the AND keyword (the syntax does not support OR):
+You can also look for runs with a specific combination of hyperparameters using the parameter `filter_string`. Use `params` to access a run's parameters, `metrics` to access metrics logged in the run, and `attributes` to access run information details. MLflow supports expressions joined by the AND keyword (the syntax doesn't support OR):
-```python
-mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
- filter_string="params.num_boost_round='100'")
-```
+* Search runs based on a parameter's value:
-You can also use the qualifier `attributes` to query for specific attributes of the run like `creation_time` or `run_id`. The following example conditions the query to return only specific runs:
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="params.num_boost_round='100'")
+ ```
-```python
-mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
- filter_string="attributes.run_id IN ('1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')")
-```
+ > [!WARNING]
+ > Only operators `=`, `like`, and `!=` are supported for filtering `parameters`.
+
+* Search runs based on a metric's value:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="metrics.auc>0.8")
+ ```
+
+* Search runs with a given tag:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="tags.framework='torch'")
+ ```
+
+* Search runs created by a given user:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="attributes.user_id = 'John Smith'")
+ ```
+
+* Search runs that have failed. See [Filter runs by status](#filter-runs-by-status) for possible values:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="attributes.status = 'Failed'")
+ ```
+
+* Search runs created after a given time:
+
+ ```python
+ import datetime
+
+ dt = datetime.datetime(2022, 6, 20, 5, 32, 48)
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string=f"attributes.creation_time > '{int(dt.timestamp())}'")
+ ```
+
+ > [!TIP]
+    > Notice that for the key `attributes`, values should always be strings and hence enclosed in quotes.
+
+* Search runs having the ID in a given set:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="attributes.run_id IN ('1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')")
+ ```
### Filter runs by status
-You can also filter experiment by status. It becomes useful to find runs that are running, completed, canceled or failed. In MLflow, `status` is an `attribute`, so we can access this value using the expression `attributes.status`. The following table shows the possible values:
+When filtering runs by status, notice that MLflow uses a different convention than Azure Machine Learning to name the possible statuses of a run. The following table shows the possible values:
| Azure Machine Learning Job status | MLFlow's `attributes.status` | Meaning | | :-: | :-: | :- |
You can also filter experiment by status. It becomes useful to find runs that ar
| Failed | `FAILED` | The job/run has completed with errors. | | Canceled | `KILLED` | The job/run has been canceled or killed by the user/system. |
-> [!WARNING]
-> Expressions containing `attributes.status` in the parameter `filter_string` are not support at the moment. Please use Pandas filtering expressions as shown in the next example.
-
-The following example shows all the completed runs:
+Example:
```python
-runs = mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
-runs[runs.status == "FINISHED"]
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="attributes.status = 'Failed'")
```-
-### Searching runs across experiments
-
-The method `search_runs` require you to indicate the experiment name or ID you want to search runs in. However, if you want to query runs across multiple experiments, you can indicate the argument `search_all_experiments=True` to expand the search.
-
-```python
-mlflow.search_runs(filter_string="params.num_boost_round='100'", search_all_experiments=True)
-```
-
-Notice that if `search_all_experiments` is not indicated and no experiment ID or name is indicated, the search is performed in the current active experiment (the one indicated in `mlflow.set_experiment()` method).
## Getting metrics, parameters, artifacts and models
child_runs = mlflow.search_runs(
## Compare jobs and models in Azure Machine Learning studio (preview)
-To compare and evaluate the quality of your jobs and models in Azure Machine Learning Studio, use the [preview panel](./how-to-enable-preview-features.md) to enable the feature. Once enabled, you can compare the parameters, metrics, and tags between the jobs and/or models you selected.
+To compare and evaluate the quality of your jobs and models in Azure Machine Learning studio, use the [preview panel](./how-to-enable-preview-features.md) to enable the feature. Once enabled, you can compare the parameters, metrics, and tags between the jobs and/or models you selected.
> [!IMPORTANT] > Items marked (preview) in this article are currently in public preview.
The MLflow SDK exposes several methods to retrieve runs, including options to co
| Feature | Supported by MLflow | Supported by Azure Machine Learning | | :- | :-: | :-: |
-| Ordering runs by run fields (like `start_time`, `end_time`, etc) | **&check;** | **&check;** |
-| Ordering runs by attributes | **&check;** | <sup>1</sup> |
+| Ordering runs by attributes | **&check;** | **&check;** |
| Ordering runs by metrics | **&check;** | <sup>1</sup> | | Ordering runs by parameters | **&check;** | <sup>1</sup> | | Ordering runs by tags | **&check;** | <sup>1</sup> |
-| Filtering runs by run fields (like `start_time`, `end_time`, etc) | | <sup>1</sup> |
-| Filtering runs by attributes | **&check;** | <sup>1</sup> |
+| Filtering runs by attributes | **&check;** | **&check;** |
| Filtering runs by metrics | **&check;** | **&check;** | | Filtering runs by metrics with special characters (escaped) | **&check;** | | | Filtering runs by parameters | **&check;** | **&check;** | | Filtering runs by tags | **&check;** | **&check;** | | Filtering runs with numeric comparators (metrics) including `=`, `!=`, `>`, `>=`, `<`, and `<=` | **&check;** | **&check;** | | Filtering runs with string comparators (params, tags, and attributes): `=` and `!=` | **&check;** | **&check;**<sup>2</sup> |
-| Filtering runs with string comparators (params, tags, and attributes): `LIKE`/`ILIKE` | **&check;** | |
+| Filtering runs with string comparators (params, tags, and attributes): `LIKE`/`ILIKE` | **&check;** | **&check;** |
| Filtering runs with comparators `AND` | **&check;** | **&check;** | | Filtering runs with comparators `OR` | | | | Renaming experiments | **&check;** | | > [!NOTE]
-> - <sup>1</sup> Check the section [Query and search runs inside an experiment](#query-and-search-runs-inside-an-experiment) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
+> - <sup>1</sup> Check the section [Ordering runs](#ordering-runs) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
> - <sup>2</sup> `!=` for tags not supported. ## Next steps
machine-learning How To Use Parallel Job In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-parallel-job-in-pipeline.md
Sample code to set two attributes:
> If you want to parse arguments in Init() or Run(mini_batch) function, use "parse_known_args" instead of "parse_args" for avoiding exceptions. See the [iris_score](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/parallel-run/Code/iris_score.py) example for entry script with argument parser. > [!IMPORTANT]
-> If you use `mltable` as your major input data, you need to install 'mltable' library into your environment. See the line 9 of this [conda file](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/parallel/1a_oj_sales_prediction/src/parallel_train/conda.yml) example.
+> If you use `mltable` as your major input data, you need to install 'mltable' library into your environment. See the line 9 of this [conda file](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/parallel/1a_oj_sales_prediction/src/parallel_train/conda.yaml) example.
### Consider automation settings
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
Title: Interactive data wrangling with Apache Spark in Azure Machine Learning (preview)
+ Title: Interactive data wrangling with Apache Spark in Azure Machine Learning
description: Learn how to use Apache Spark to wrangle data with Azure Machine Learning
Previously updated : 12/01/2022 Last updated : 05/22/2023
-# Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)
+# Interactive Data Wrangling with Apache Spark in Azure Machine Learning
-
-Data wrangling becomes one of the most important steps in machine learning projects. The Azure Machine Learning integration, with Azure Synapse Analytics (preview), provides access to an Apache Spark pool - backed by Azure Synapse - for interactive data wrangling using Azure Machine Learning Notebooks.
+Data wrangling becomes one of the most important steps in machine learning projects. The Azure Machine Learning integration, with Azure Synapse Analytics, provides access to an Apache Spark pool - backed by Azure Synapse - for interactive data wrangling using Azure Machine Learning Notebooks.
In this article, you'll learn how to perform data wrangling using -- Managed (Automatic) Synapse Spark compute
+- Serverless Spark compute
- Attached Synapse Spark pool ## Prerequisites - An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin. - An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md). - An Azure Data Lake Storage (ADLS) Gen 2 storage account. See [Create an Azure Data Lake Storage (ADLS) Gen 2 storage account](../storage/blobs/create-data-lake-storage-account.md).-- To enable this feature:
- 1. Navigate to Azure Machine Learning studio UI.
- 2. Select **Manage preview features** (megaphone icon) among the icons on the top right side of the screen.
- 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/how_to_enable_managed_spark_preview.png" alt-text="Screenshot showing option for enabling Managed Spark preview.":::
- - (Optional): An Azure Key Vault. See [Create an Azure Key Vault](../key-vault/general/quick-create-portal.md). - (Optional): A Service Principal. See [Create a Service Principal](../active-directory/develop/howto-create-service-principal-portal.md). - [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
-Before starting data wrangling tasks, you'll need familiarity with the process of storing secrets
+Before starting data wrangling tasks, you need familiarity with the process of storing secrets
- Azure Blob storage account access key - Shared Access Signature (SAS) token - Azure Data Lake Storage (ADLS) Gen 2 service principal information
-in the Azure Key Vault. You'll also need to know how to handle role assignments in the Azure storage accounts. The following sections review these concepts. Then, we'll explore the details of interactive data wrangling using the Spark pools in Azure Machine Learning Notebooks.
+in the Azure Key Vault. You also need to know how to handle role assignments in the Azure storage accounts. The following sections review these concepts. Then, we'll explore the details of interactive data wrangling using the Spark pools in Azure Machine Learning Notebooks.
> [!TIP] > To learn about Azure storage account role assignment configuration, or if you access data in your storage accounts using user identity passthrough, see [Add role assignments in Azure storage accounts](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts). ## Interactive Data Wrangling with Apache Spark
-Azure Machine Learning offers serverless Spark compute (preview), and [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md), for interactive data wrangling with Apache Spark, in Azure Machine Learning Notebooks. The serverless Spark compute doesn't require creation of resources in the Azure Synapse workspace. Instead, a fully managed automatic Spark compute becomes directly available in the Azure Machine Learning Notebooks. Using a serverless Spark compute is the easiest approach to access a Spark cluster in Azure Machine Learning.
-
-### serverless Spark compute in Azure Machine Learning Notebooks
+Azure Machine Learning offers serverless Spark compute, and [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md), for interactive data wrangling with Apache Spark in Azure Machine Learning Notebooks. The serverless Spark compute doesn't require creation of resources in the Azure Synapse workspace. Instead, a fully managed serverless Spark compute becomes directly available in the Azure Machine Learning Notebooks. Using a serverless Spark compute is the easiest approach to access a Spark cluster in Azure Machine Learning.
-A serverless Spark compute is available in Azure Machine Learning Notebooks by default. To access it in a notebook, select **Azure Machine Learning Spark Compute** under **Azure Machine Learning Spark** from the **Compute** selection menu.
+### Serverless Spark compute in Azure Machine Learning Notebooks
+A serverless Spark compute is available in Azure Machine Learning Notebooks by default. To access it in a notebook, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu.
The Notebooks UI also provides options for Spark session configuration, for the serverless Spark compute. To configure a Spark session: 1. Select **Configure session** at the top of the screen.
-1. Select a version of **Apache Spark** from the dropdown menu.
+2. Select **Apache Spark version** from the dropdown menu.
> [!IMPORTANT] > > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. In accordance, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
-1. Select **Instance type** from the dropdown menu. The following instance types are currently supported:
+3. Select **Instance type** from the dropdown menu. The following instance types are currently supported:
- `Standard_E4s_v3` - `Standard_E8s_v3` - `Standard_E16s_v3` - `Standard_E32s_v3` - `Standard_E64s_v3`
-1. Input a Spark **Session timeout** value, in minutes.
-1. Select the number of **Executors** for the Spark session.
-1. Select **Executor size** from the dropdown menu.
-1. Select **Driver size** from the dropdown menu.
-1. To use a conda file to configure a Spark session, check the **Upload conda file** checkbox. Then, select **Browse**, and choose the conda file with the Spark session configuration you want.
-1. Add **Configuration settings** properties, input values in the **Property** and **Value** textboxes, and select **Add**.
-1. Select **Apply**.
-
- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/azure-ml-session-configuration.png" alt-text="Screenshot showing the Spark session configuration options.":::
-
-1. Select **Stop now** in the **Stop current session** pop-up.
-
- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/stop-current-session.png" alt-text="Screenshot showing the stop current session dialog box.":::
-
-The session configuration changes persist and becomes available to another notebook session that is started using the serverless Spark compute.
+4. Input a Spark **Session timeout** value, in minutes.
+5. Select whether to **Dynamically allocate executors**.
+6. Select the number of **Executors** for the Spark session.
+7. Select **Executor size** from the dropdown menu.
+8. Select **Driver size** from the dropdown menu.
+9. To use a conda file to configure a Spark session, check the **Upload conda file** checkbox. Then, select **Browse**, and choose the conda file with the Spark session configuration you want.
+10. Add **Configuration settings** properties, input values in the **Property** and **Value** textboxes, and select **Add**.
+11. Select **Apply**.
+12. Select **Stop session** in the **Configure new session?** pop-up.
+
+The session configuration changes persist and become available to another notebook session that is started using the serverless Spark compute.
### Import and wrangle data from Azure Data Lake Storage (ADLS) Gen 2
To start interactive data wrangling with the user identity passthrough:
- Verify that the user identity has **Contributor** and **Storage Blob Data Contributor** [role assignments](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts) in the Azure Data Lake Storage (ADLS) Gen 2 storage account. -- To use the serverless Spark compute, select **Azure Machine Learning Spark Compute**, under **Azure Machine Learning Spark**, from the **Compute** selection menu.-
- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/select-azure-machine-learning-spark.png" alt-text="Screenshot showing use of a serverless Spark compute.":::
--- To use an attached Synapse Spark pool, select an attached Synapse Spark pool under **Synapse Spark pool (Preview)** from the **Compute** selection menu.
+- To use the serverless Spark compute, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**, from the **Compute** selection menu.
- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/select-synapse-spark-pools-preview.png" alt-text="Screenshot showing use of an attached spark pool.":::
+- To use an attached Synapse Spark pool, select an attached Synapse Spark pool under **Synapse Spark pools**, from the **Compute** selection menu.
- This Titanic data wrangling code sample shows use of a data URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` with `pyspark.pandas` and `pyspark.ml.feature.Imputer`.
To start interactive data wrangling with the user identity passthrough:
To wrangle data by access through a service principal: 1. Verify that the service principal has **Contributor** and **Storage Blob Data Contributor** [role assignments](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts) in the Azure Data Lake Storage (ADLS) Gen 2 storage account.
-1. [Create Azure Key Vault secrets](./apache-spark-environment-configuration.md#store-azure-storage-account-credentials-as-secrets-in-azure-key-vault) for the service principal tenant ID, client ID and client secret values.
-1. Select serverless Spark compute **Azure Machine Learning Spark Compute** under **Azure Machine Learning Spark** from the **Compute** selection menu, or select an attached Synapse Spark pool under **Synapse Spark pool (Preview)** from the **Compute** selection menu
-1. To set the service principal tenant ID, client ID and client secret in the configuration, execute the following code sample.
+2. [Create Azure Key Vault secrets](./apache-spark-environment-configuration.md#store-azure-storage-account-credentials-as-secrets-in-azure-key-vault) for the service principal tenant ID, client ID and client secret values.
+3. Select **Serverless Spark compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu, or select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
+4. To set the service principal tenant ID, client ID, and client secret in the configuration, execute the following code sample.
- The `get_secret()` call in the code depends on name of the Azure Key Vault, and the names of the Azure Key Vault secrets created for the service principal tenant ID, client ID and client secret. The corresponding property name/values to set in the configuration are as follows: - Client ID property: `fs.azure.account.oauth2.client.id.<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net` - Client secret property: `fs.azure.account.oauth2.client.secret.<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net`
To wrangle data by access through a service principal:
) ```
-1. Import and wrangle data using data URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` as shown in the code sample using the Titanic data.
+5. Import and wrangle data using data URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` as shown in the code sample, using the Titanic data.
### Import and wrangle data from Azure Blob storage
You can access Azure Blob storage data with either the storage account access ke
To start interactive data wrangling: 1. At the Azure Machine Learning studio left panel, select **Notebooks**.
-1. At the **Compute** selection menu, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**, or select an attached Synapse Spark pool under **Synapse Spark pool (Preview)** from the **Compute** selection menu.
+1. Select **Serverless Spark compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu, or select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
1. To configure the storage account access key or a shared access signature (SAS) token for data access in Azure Machine Learning Notebooks: - For the access key, set property `fs.azure.account.key.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net` as shown in this code snippet:
To start interactive data wrangling:
### Import and wrangle data from Azure Machine Learning Datastore
-To access data from [Azure Machine Learning Datastore](how-to-datastore.md), define a path to data on the datastore with [URI format](how-to-create-data-assets.md#create-data-assets) `azureml://datastores/<DATASTORE_NAME>/paths/<PATH_TO_DATA>`. To wrangle data from an Azure Machine Learning Datastore in a Notebooks session interactively:
+To access data from [Azure Machine Learning Datastore](how-to-datastore.md), define a path to data on the datastore with [URI format](how-to-create-data-assets.md?tabs=cli#create-data-assets) `azureml://datastores/<DATASTORE_NAME>/paths/<PATH_TO_DATA>`. To wrangle data from an Azure Machine Learning Datastore in a Notebooks session interactively:
-1. Select the serverless Spark compute **Azure Machine Learning Spark Compute** under **Azure Machine Learning Spark** from the **Compute** selection menu, or select an attached Synapse Spark pool under **Synapse Spark pool (Preview)** from the **Compute** selection menu.
+1. Select **Serverless Spark compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu, or select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
2. This code sample shows how to read and wrangle Titanic data from an Azure Machine Learning Datastore, using `azureml://` datastore URI, `pyspark.pandas` and `pyspark.ml.feature.Imputer`. ```python
df.to_csv(output_path, index_col="PassengerId")
- [Code samples for interactive data wrangling with Apache Spark in Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/sdk/python/data-wrangling) - [Optimize Apache Spark jobs in Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-performance.md) - [What are Azure Machine Learning pipelines?](./concept-ml-pipelines.md)-- [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
+- [Submit Spark jobs in Azure Machine Learning](./how-to-submit-spark-jobs.md)
managed-instance-apache-cassandra Compare Cosmosdb Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/compare-cosmosdb-managed-instance.md
The following table shows the common scenarios, workload requirements, and aspir
| **Analytics**| You want full control over the provisioning of analytical pipelines regardless of the overhead to build and maintain them. | You want to use cloud-based analytical services like Azure Databricks. | You want near real-time hybrid transactional analytics built into the platform with [Azure Synapse Link for Azure Cosmos DB](../cosmos-db/synapse-link.md). | | **Workload pattern**| Your workload is fairly steady-state and you don't require scaling nodes in the cluster frequently. | Your workload is volatile and you need to be able to scale up or scale down nodes in a data center or add/remove data centers easily. | Your workload is often volatile and you need to be able to scale up or scale down quickly and at a significant volume. | | **SLAs**| You are happy with your processes for maintaining SLAs on consistency, throughput, availability, and disaster recovery. | You are happy with your processes for maintaining SLAs on consistency and throughput, but want an [SLA for availability](https://azure.microsoft.com/support/legal/sl#backup-and-restore). | You want [fully comprehensive SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_4/) on consistency, throughput, availability, and disaster recovery. |
+| **Replication and consistency**| You need to be able to configure the full array of [tunable consistency settings](https://cassandra.apache.org/doc/latest/cassandr)) |
+| **Data model**| You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are migrating workloads which have a mixture of uniform distribution of data, and skewed data (with respect to both storage and throughput across partition keys) requiring flexibility on vertical scale of nodes. | You are building a new application, or your existing application has a relatively uniform distribution of data with respect to both storage and throughput across partition keys. |
## Next steps
migrate Tutorial Migrate Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware.md
ms. Previously updated : 05/31/2023 Last updated : 06/19/2023
Enable replication as follows:
14. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+ > [!NOTE]
+ > If there is a connectivity issue with Azure or if the appliance services are down for more than 90 minutes, the active replication cycles for replicating servers are reset to 0% and the respective cycle runs from the beginning.
+ > [!NOTE] > You can update replication settings any time before replication starts (**Manage** > **Replicating machines**). You can't change settings after replication starts.
mysql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-server-portal.md
Last updated 9/21/2020
-# Manage an Azure Database for MySQL - Flexible server using Azure portal
+# Manage an Azure Database for MySQL - Flexible Server using Azure portal
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
network-watcher Connection Monitor Create Using Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-template.md
Title: Create connection monitor - ARM template
+ Title: Create connection monitor - ARMClient
description: Learn how to create Azure Network Watcher connection monitor using the ARMClient. - Last updated 02/08/2021 #Customer intent: I need to create a connection monitor to monitor communication between one VM and another.
-# Create an Azure Network Watcher connection monitor using ARM template
+# Create a connection monitor using the ARMClient
> [!IMPORTANT] > Starting 1 July 2021, you'll not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You'll also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
Connection Monitor includes the following entities:
![Diagram showing a connection monitor, defining the relationship between test groups and tests](./media/connection-monitor-2-preview/cm-tg-2.png)
-## Steps to create with sample ARM Template
+## Steps to create a connection monitor using ARMClient
Use the following code to create a connection monitor by using ARMClient.
armclient PUT $ARM/$SUB/$NW/connectionMonitors/$connectionMonitorName/?api-versi
* Test Groups * name - Name your test group. * testConfigurations - Test Configurations based on which source endpoints connect to destination endpoints
- * sources - Choose from endpoints created above. Azure based source endpoints need to have Azure Network Watcher extension installed and nonAzure based source endpoints need to haveAzure Log Analytics agent installed. To install an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
+    * sources - Choose from endpoints created above. Azure based source endpoints need to have the Azure Network Watcher extension installed, and non-Azure based source endpoints need to have the Azure Log Analytics agent installed. To install an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
* destinations - Choose from endpoints created above. You can monitor connectivity to Azure VMs or any endpoint (a public IP, URL, or FQDN) by specifying them as destinations. In a single test group, you can add Azure VMs, Office 365 URLs, Dynamics 365 URLs, and custom endpoints. * disable - Use this field to disable monitoring for all sources and destinations that the test group specifies.
network-watcher Traffic Analytics Schema Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema-update.md
 Title: Traffic analytics schema update - March 2020
-description: Sample queries with new fields in the Traffic Analytics schema. Use these three examples to replace the deprecated fields with the new ones.
+description: Learn how to use queries to replace the deprecated fields in the Traffic Analytics schema with the new ones.
+ - Previously updated : 06/20/2022- Last updated : 06/20/2023+
-# Sample queries with new fields in the Traffic Analytics schema (March 2020 schema update)
+# Use sample queries to replace deprecated fields in traffic analytics schema (March 2020 schema update)
-The [Traffic Analytics log schema](./traffic-analytics-schema.md) includes the following new fields:
+The [Traffic analytics log schema](./traffic-analytics-schema.md) includes the following new fields:
- `SrcPublicIPs_s` - `DestPublicIPs_s` - `NSGRule_s`
-The new fields provide information about source and destination IPs, and they simplify queries.
+The new fields provide information about source and destination IP addresses, and they simplify queries.
The following older fields will be deprecated in future:
The following older fields will be deprecated in future:
- `PublicIPs_s` - `FlowCount_d`
-The following three examples show how to replace the old fields with the new ones.
+The following three examples show you how to replace the old fields with the new ones.
-## Example 1: VMIP_s, Subscription_g, Region_s, Subnet_s, VM_s, NIC_s, and PublicIPs_s fields
+## Example 1: `VMIP_s`, `Subscription_g`, `Region_s`, `Subnet_s`, `VM_s`, `NIC_s`, and `PublicIPs_s` fields
The schema doesn't have to infer source and destination cases from the `FlowDirection_s` field for AzurePublic and ExternalPublic flows. It can also be inappropriate to use the `FlowDirection_s` field for a network virtual appliance.
SourcePublicIPsAggregated = iif(isnotempty(SrcPublicIPs_s), SrcPublicIPs_s, "N/A
DestPublicIPsAggregated = iif(isnotempty(DestPublicIPs_s), DestPublicIPs_s, "N/A") ```
-## Example 2: NSGRules_s field
+## Example 2: `NSGRules_s` field
The old field used the following format:
The old field used the following format:
<Index value 0)>|<NSG_ RuleName>|<Flow Direction>|<Flow Status>|<FlowCount ProcessedByRule> ```
-The schema no longer aggregates data across a network security group (NSG). In the updated schema, `NSGList_s` contains only one NSG. Also, `NSGRules` contains only one rule. The complicated formatting has been removed here and in other fields, as shown in the following example.
+The schema no longer aggregates data across a network security group (NSG). In the updated schema, `NSGList_s` contains only one network security group. Also, `NSGRules` contains only one rule. The complicated formatting has been removed here and in other fields, as shown in the following example.
Previous Kusto query:
FlowStatus = FlowStatus_s,
FlowCountProcessedByRule = AllowedInFlows_d + DeniedInFlows_d + AllowedOutFlows_d + DeniedOutFlows_d ```
-## Example 3: FlowCount_d field
+## Example 3: `FlowCount_d` field
-Because the schema doesn't club data across the NSG, the `FlowCount_d` is simply:
+Because the schema doesn't aggregate data across the network security group, the `FlowCount_d` field is simply:
`AllowedInFlows_d` + `DeniedInFlows_d` + `AllowedOutFlows_d` + `DeniedOutFlows_d`
Depending on the conditions, it's clear which of the four fields is populated.
## Next steps -- To get answers to frequently asked questions, see [Traffic Analytics FAQ](traffic-analytics-faq.yml).-- To see details about functionality, see [Traffic Analytics documentation](traffic-analytics.md).
+- To get answers to frequently asked questions, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
+- To learn more about functionality, see [Traffic analytics overview](traffic-analytics.md).
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
When you create support ticket from **Help + support** or **Support + troublesho
* **Service Help** The **Service Health** page in the Azure portal contains information about Azure data center status globally. Search for "service health" in the search bar in the Azure portal, then view Service issues in the Active events category. You can also view the health of individual resources in the **Resource health** page of any resource under the Help menu. A sample screenshot of the Service Health page follows, with information about an active service issue in Southeast Asia. :::image type="content" source="./media/business-continuity/service-health-service-issues-example-map.png" alt-text=" Screenshot showing service outage in Service Health portal.":::++
+> [!IMPORTANT]
+> As the name implies, temporary tablespaces in PostgreSQL are used for temporary objects, and for other internal database operations such as sorting. Therefore, we don't recommend creating user schema objects in a temporary tablespace, because we don't guarantee the durability of such objects after server restarts, HA failovers, and so on.
++ ### Unplanned downtime: failure scenarios and service recovery Below are some unplanned failure scenarios and the recovery process.
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
Do the following for each site you want to add to your private mobile network. D
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md) | | 7. | Configure certificates and configure encryption-at-rest for your Azure Stack Edge Pro 2 device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates?pivots=single-node) | | 8. | Activate your Azure Stack Edge Pro 2 device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-activate.md) |
-| 9. | Configure compute on your Azure Stack Edge Pro 2 device. | [Tutorial: Configure compute on Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md) |
-| 10. | Enable VM management from the Azure portal. </br></br>Enabling this immediately after activating the Azure Stack Edge Pro 2 device occasionally causes an error. Wait one minute and retry. | Navigate to the ASE resource in the Azure portal, go to **Edge services**, select **Virtual machines** and select **Enable**. |
-| 11. | Run the diagnostics tests for the Azure Stack Edge Pro 2 device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 2 - management</br>- Port 3 - access network</br>- Port 4 - data networks</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
+| 9. | Enable VM management from the Azure portal. </br></br>Enabling this immediately after activating the Azure Stack Edge Pro 2 device occasionally causes an error. Wait one minute and retry. | Navigate to the ASE resource in the Azure portal, go to **Edge services**, select **Virtual machines** and select **Enable**. |
+| 10. | Run the diagnostics tests for the Azure Stack Edge Pro 2 device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 2 - management</br>- Port 3 - access network</br>- Port 4 - data networks</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
> [!IMPORTANT] > You must ensure your Azure Stack Edge Pro 2 device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro 2 device, see [Update your Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
sap Provider Ha Pacemaker Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ha-pacemaker-cluster.md
Title: Create a High Availability Pacemaker cluster provider for Azure Monitor for SAP solutions
-description: Learn how to configure High Availability (HA) Pacemaker cluster providers for Azure Monitor for SAP solutions.
+description: Learn how to configure high-availability (HA) Pacemaker cluster providers for Azure Monitor for SAP solutions.
Last updated 01/05/2023
-#Customer intent: As a developer, I want to create a High Availability Pacemaker cluster so I can use the resource with Azure Monitor for SAP solutions.
+#Customer intent: As a developer, I want to create a high-availability Pacemaker cluster so that I can use the resource with Azure Monitor for SAP solutions.
-# Create High Availability cluster provider for Azure Monitor for SAP solutions
+# Create high-availability cluster provider for Azure Monitor for SAP solutions
-In this how-to guide, you learn to create a High Availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions. You install the HA agent, then create the provider for Azure Monitor for SAP solutions.
+In this how-to guide, you learn how to create a high-availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions. You install the HA agent and then create the provider for Azure Monitor for SAP solutions.
## Prerequisites - An Azure subscription. - An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md).
-## Install HA agent
+## Install an HA agent
-Before adding providers for HA (Pacemaker) clusters, install the appropriate agent for your environment in each cluster node.
+Before you add providers for HA (Pacemaker) clusters, install the appropriate agent for your environment in each cluster node.
-For SUSE-based clusters, install **ha_cluster_provider** in each node. For more information, see [the HA cluster exporter installation guide](https://github.com/ClusterLabs/ha_cluster_exporter#installation). Supported SUSE versions include SLES for SAP 12 SP3 and later versions.
+For SUSE-based clusters, install **ha_cluster_provider** in each node. For more information, see the [HA cluster exporter installation guide](https://github.com/ClusterLabs/ha_cluster_exporter#installation). Supported SUSE versions include SLES for SAP 12 SP3 and later versions.
-For RHEL-based clusters, install **performance co-pilot (PCP)** and the **pcp-pmda-hacluster** sub package in each node.For more information, see the [PCP HACLUSTER agent installation guide](https://access.redhat.com/articles/6139852). Supported RHEL versions include 8.2, 8.4 and later versions.
+For RHEL-based clusters, install **performance co-pilot (PCP)** and the **pcp-pmda-hacluster** subpackage in each node. For more information, see the [PCP HACLUSTER agent installation guide](https://access.redhat.com/articles/6139852). Supported RHEL versions include 8.2, 8.4, and later versions.
-For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.com/articles/6139852) in each node.
+For RHEL-based Pacemaker clusters, also install [PMProxy](https://access.redhat.com/articles/6139852) in each node.
+
+### Install an HA cluster exporter on RHEL
-### Install HA Cluster Exporter on RHEL
1. Install the required packages on the system. ```bash
For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.
systemctl start pmcd ```
-1. Install and enable the HA Cluster PMDA. Replace `$PCP_PMDAS_DIR` with the path where `hacluster` is installed. Use the `find` command in Linux to find the path.
+1. Install and enable the HA cluster PMDA. Replace `$PCP_PMDAS_DIR` with the path where `hacluster` is installed. Use the `find` command in Linux to find the path.
```bash cd $PCP_PMDAS_DIR/hacluster
For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.
systemctl start pmproxy ```
-1. Data will then be collected in the system by PCP. You can export the data using `pmproxy` at `http://<SERVER-NAME-OR-IP-ADDRESS>:44322/metrics?names=ha_cluster`. Replace `<SERVER-NAME-OR-IP-ADDRESS>` with your server name or IP address.
+1. Data is then collected in the system by PCP. You can export the data by using `pmproxy` at `http://<SERVER-NAME-OR-IP-ADDRESS>:44322/metrics?names=ha_cluster`. Replace `<SERVER-NAME-OR-IP-ADDRESS>` with your server name or IP address.
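   To quickly confirm that the endpoint responds, you can fetch it from the node; a minimal sketch, assuming the default pmproxy port of 44322:

   ```bash
   # Minimal check: fetch the ha_cluster metrics that pmproxy exposes.
   # Replace <SERVER-NAME-OR-IP-ADDRESS> with your cluster node's name or IP address.
   curl "http://<SERVER-NAME-OR-IP-ADDRESS>:44322/metrics?names=ha_cluster"
   ```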
## Prerequisites to enable secure communication
-To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps [mentioned here](https://github.com/ClusterLabs/ha_cluster_exporter#tls-and-basic-authentication)
+To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps in [this article](https://github.com/ClusterLabs/ha_cluster_exporter#tls-and-basic-authentication).
-## Create provider for Azure Monitor for SAP solutions
+## Create a provider for Azure Monitor for SAP solutions
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to the Azure Monitor for SAP solutions service. 1. Open your Azure Monitor for SAP solutions resource.
-1. In the resource's menu, under **Settings**, select **Providers**.
+1. On the resource's menu, under **Settings**, select **Providers**.
1. Select **Add** to add a new provider.
- ![Diagram of Azure Monitor for SAP solutions resource in the Azure portal, showing button to add a new provider.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-start.png)
+ ![Diagram that shows Azure Monitor for SAP solutions resource in the Azure portal, showing button to add a new provider.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-start.png)
1. For **Type**, select **High-availability cluster (Pacemaker)**.
-1. *Optional* Select **Enable secure communication**, choose a certificate type
+1. (Optional) Select **Enable secure communication** and choose a certificate type.
1. Configure providers for each node of the cluster by entering the endpoint URL for **HA Cluster Exporter Endpoint**. 1. For SUSE-based clusters, enter `http://<IP-address>:9664/metrics`.
- ![Diagram of the setup for an Azure Monitor for SAP solutions resource, showing the fields for SUSE-based clusters.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-suse.png)
+ ![Diagram that shows the setup for an Azure Monitor for SAP solutions resource, showing the fields for SUSE-based clusters.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-suse.png)
1. For RHEL-based clusters, enter `http://<'IP address'>:44322/metrics?names=ha_cluster`.
- ![Diagram of the setup for an Azure Monitor for SAP solutions resource, showing the fields for RHEL-based clusters.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-rhel.png)
+ ![Diagram that shows the setup for an Azure Monitor for SAP solutions resource, showing the fields for RHEL-based clusters.](./media/provider-ha-pacemaker-cluster/azure-monitor-providers-ha-cluster-rhel.png)
1. Enter the system identifiers, host names, and cluster names. For the system identifier, enter a unique SAP system identifier for each cluster. For the hostname, the value refers to an actual hostname in the VM. Use `hostname -s` for SUSE- and RHEL-based clusters.
Use the following troubleshooting steps for common errors.
### Unable to reach the Prometheus endpoint
-When the provider settings validation operation fails with the code ΓÇÿPrometheusURLConnectionFailureΓÇÖ:
+When the provider settings validation operation fails with the code `PrometheusURLConnectionFailure`:
1. Restart the HA cluster exporter agent.
When the provider settings validation operation fails with the code ΓÇÿPrometheu
systemctl enable pmproxy ```
-1. Verify that the Prometheus endpoint is reachable from the subnet that provided while creating the Azure Monitor for SAP solutions resource.
+1. Verify that the Prometheus endpoint is reachable from the subnet that you provided when you created the Azure Monitor for SAP solutions resource.
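   For a quick reachability check, you can probe the exporter port from a VM in that subnet; a minimal sketch, where the IP address is a placeholder and the port depends on your distribution (44322 for RHEL-based clusters, 9664 for SUSE-based clusters):

   ```bash
   # Run from a VM in the subnet that the Azure Monitor for SAP solutions resource uses.
   nc -zv <cluster-node-ip> 44322   # RHEL-based clusters (pmproxy)
   nc -zv <cluster-node-ip> 9664    # SUSE-based clusters (ha_cluster_exporter)
   ```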
## Next steps
sap Provider Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-hana.md
#Customer intent: As a developer, I want to create an SAP HANA provider so that I can use the resource with Azure Monitor for SAP solutions.
+# Configure SAP HANA provider for Azure Monitor for SAP solutions
-# Configure SAP HANA provider for Azure Monitor for SAP solutions
-
-In this how-to guide, you'll learn to configure an SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal.
+In this how-to guide, you learn how to configure an SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal.
## Prerequisite to enable secure communication
-To [enable TLS 1.2 higher](enable-tls-azure-monitor-sap-solutions.md) for SAP HANA provider, follow steps mentioned in this [SAP document](https://www.sap.com/documents/2018/11/b865eb91-287d-0010-87a3-c30de2ffd8ff.html)
+To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md) for the SAP HANA provider, follow the steps in [this SAP document](https://www.sap.com/documents/2018/11/b865eb91-287d-0010-87a3-c30de2ffd8ff.html).
## Prerequisites -- An Azure subscription.
+- An Azure subscription.
- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md). - ## Configure Azure Monitor for SAP solutions 1. Sign in to the [Azure portal](https://portal.azure.com).
To [enable TLS 1.2 higher](enable-tls-azure-monitor-sap-solutions.md) for SAP HA
1. On the **Providers** tab: 1. Select **Add provider**. 1. On the creation pane, for **Type**, select **SAP HANA**.
- ![Diagram of the Azure Monitor for SAP solutions resource creation page in the Azure portal, showing all required form fields.](./media/provider-hana/azure-monitor-providers-hana-setup.png)
- 1. Optionally, select **Enable secure communication**, choose the certificate type from the drop-down menu.
- 1. For **IP address**, enter the IP address or hostname of the server that runs the SAP HANA instance that you want to monitor. If you're using a hostname, make sure there is connectivity within the virtual network.
- 1. For **Database tenant**, enter the HANA database that you want to connect to. It's recommended to use **SYSTEMDB**, because tenant databases don't have all monitoring views. For legacy single-container HANA 1.0 instances, leave this field blank.
+
+ ![Diagram that shows the Azure Monitor for SAP solutions resource creation page in the Azure portal, showing all required form fields.](./media/provider-hana/azure-monitor-providers-hana-setup.png)
+ 1. Optionally, select **Enable secure communication** and choose the certificate type from the dropdown menu.
+ 1. For **IP address**, enter the IP address or hostname of the server that runs the SAP HANA instance that you want to monitor. If you're using a hostname, make sure there's connectivity within the virtual network.
+ 1. For **Database tenant**, enter the HANA database that you want to connect to. We recommend that you use **SYSTEMDB** because tenant databases don't have all monitoring views. For legacy single-container HANA 1.0 instances, leave this field blank.
1. For **Instance number**, enter the instance number of the database (0-99). The SQL port is automatically determined based on the instance number.
- 1. For **Database username**, enter the dedicated SAP HANA database user. This user needs the **MONITORING** or **BACKUP CATALOG READ** role assignment. For non-production SAP HANA instances, use **SYSTEM** instead.
- 1. For **Database password**, enter the password for the database username. You can either enter the password directly or use a secret inside Azure Key Vault.
+ 1. For **Database username**, enter the dedicated SAP HANA database user. This user needs the **MONITORING** or **BACKUP CATALOG READ** role assignment. For nonproduction SAP HANA instances, use **SYSTEM** instead.
+ 1. For **Database password**, enter the password for the database username. You can either enter the password directly or use a secret inside Azure Key Vault.
1. Save your changes to the Azure Monitor for SAP solutions resource. ## Next steps
sap Provider Ibm Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ibm-db2.md
Title: Create IBM Db2 provider for Azure Monitor for SAP solutions
-description: This article provides details to configure an IBM DB2 provider for Azure Monitor for SAP solutions.
+description: This article provides details to configure an IBM Db2 provider for Azure Monitor for SAP solutions.
#Customer intent: As a developer, I want to create an IBM Db2 provider so that I can monitor the resource through Azure Monitor for SAP solutions.
-# Create IBM Db2 provider for Azure Monitor for SAP solutions
+# Create IBM Db2 provider for Azure Monitor for SAP solutions
+
+In this how-to guide, you learn how to create an IBM Db2 provider for Azure Monitor for SAP solutions through the Azure portal.
-In this how-to guide, you'll learn how to create an IBM Db2 provider for Azure Monitor for SAP solutions through the Azure portal.
## Prerequisites -- An Azure subscription.
+- An Azure subscription.
- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md).
-## Create user for DB2 Server
+## Create a user for the Db2 server
-First, create a new user for your Db2 server, for use by Azure Monitor for SAP solutions. Then, run the following script to provide the new Db2 user with appropriate permissions.
-Make sure to replace `<username>` with the Db2 username.
+First, create a new user for your Db2 server for use by Azure Monitor for SAP solutions. Then run the following script to provide the new Db2 user with appropriate permissions. Make sure to replace `<username>` with the Db2 username.
```sql GRANT SECADM ON DATABASE TO USER <username>;
GRANT EXECUTE ON SPECIFIC PROCEDURE SYSPROC.WLM_COLLECT_STATS_WAIT TO ROLE SAPMO
GRANT EXECUTE ON SPECIFIC PROCEDURE SYSPROC.WLM_SET_CONN_ENV TO ROLE SAPMON; ```+ ## Prerequisites to enable secure communication
-To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps [mentioned here](https://assets.cdn.sap.com/sapcom/docs/2018/12/d2922a3b-307d-0010-87a3-c30de2ffd8ff.pdf).
+To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps in [this document](https://assets.cdn.sap.com/sapcom/docs/2018/12/d2922a3b-307d-0010-87a3-c30de2ffd8ff.pdf).
-## Create IBM Db2 provider
+## Create an IBM Db2 provider
To create the IBM Db2 provider for Azure Monitor for SAP solutions: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the Azure Monitor for SAP solutions service.
+1. Go to the Azure Monitor for SAP solutions service.
1. Open the Azure Monitor for SAP solutions resource you want to modify. 1. On the resource's menu, under **Settings**, select **Providers**. 1. Select **Add** to add a new provider. 1. For **Type**, select **IBM Db2**.
- 1. *Optional* Select **Enable secure communication** and choose a certificate type from the dropdown list.
+ 1. (Optional) Select **Enable secure communication** and choose a certificate type from the dropdown list.
1. Enter the IP address for the hostname. 1. Enter the database name. 1. Enter the database port. 1. Save your changes. 1. Configure more providers for each instance of the database.
-
+ ## Next steps > [!div class="nextstepaction"]
sap Provider Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-linux.md
Last updated 03/09/2023
#Customer intent: As a developer, I want to configure a Linux provider so that I can use Azure Monitor for SAP solutions for monitoring.
-# Configure Linux provider for Azure Monitor for SAP solutions
+# Configure Linux provider for Azure Monitor for SAP solutions
-In this how-to guide, you learn to create a Linux OS provider for *Azure Monitor for SAP solutions* resources.
+In this how-to guide, you learn how to create a Linux OS provider for Azure Monitor for SAP solutions resources.
## Prerequisites - An Azure subscription. - An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md).-- Install the [node exporter latest version](https://prometheus.io/download/#node_exporter) in each SAP host that you want to monitor, either BareMetal or Azure virtual machine (Azure VM). For more information, see [the node exporter GitHub repository](https://github.com/prometheus/node_exporter).
+- Install the [latest version of the node exporter](https://prometheus.io/download/#node_exporter) in each SAP host that you want to monitor, either BareMetal or Azure virtual machine (VM). For more information, see the [node exporter GitHub repository](https://github.com/prometheus/node_exporter).
To install the node exporter on Linux: 1. Run `wget https://github.com/prometheus/node_exporter/releases/download/v*/node_exporter-*.*-amd64.tar.gz`. Replace `*` with the version number.
-1. Run `tar xvfz node_exporter-*.*-amd64.tar.gz`
+1. Run `tar xvfz node_exporter-*.*-amd64.tar.gz`.
-1. Run `cd node_exporter-*.*-amd64`
+1. Run `cd node_exporter-*.*-amd64`.
-1. Run `./node_exporter`
+1. Run `./node_exporter`.
1. The node exporter now starts collecting data. You can export the data at `http://IP:9100/metrics`.
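   To verify that the exporter is serving data, you can fetch the metrics endpoint from the host; a minimal sketch, where `IP` is a placeholder for the host's private IP address:

   ```bash
   # Minimal check: confirm the node exporter answers and returns node_* metrics.
   curl -s "http://IP:9100/metrics" | grep "^node_" | head -n 10
   ```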
-## Script to set up Node Exporter
+## Script to set up the node exporter
```shell # To get the latest node exporter version from: https://prometheus.io/download/#node_exporter
cd node_exporter-*.*-amd64
nohup ./node_exporter --web.listen-address=":9100" & ```
-### Setting up cron job to start Node exporter on VM restart
+### Set up a cron job to start node exporter on a VM restart
-1. If the target virtual machine is restarted/stopped, node exporter is also stopped, and needs to be manually started again to continue monitoring.
-1. Run `sudo crontab -e` command to open cron file.
-2. Add the command `@reboot cd /path/to/node/exporter && nohup ./node_exporter &` at the end of cron file. This starts node exporter on VM reboot.
+1. If the target VM is restarted or stopped, node exporter is also stopped. It must be manually started again to continue monitoring.
+1. Run the `sudo crontab -e` command to open a cron file.
+1. Add the command `@reboot cd /path/to/node/exporter && nohup ./node_exporter &` at the end of the cron file. This starts node exporter on a VM reboot.
-```shell
-# if you do not have a crontab file already, create one by running the command: sudo crontab -e
-sudo crontab -l > crontab_new
-echo "@reboot cd /path/to/node/exporter && nohup ./node_exporter &" >> crontab_new
-sudo crontab crontab_new
-sudo rm crontab_new
-```
+ ```shell
+ # If you do not have a crontab file already, create one by running the command: sudo crontab -e
+ sudo crontab -l > crontab_new
+ echo "@reboot cd /path/to/node/exporter && nohup ./node_exporter &" >> crontab_new
+ sudo crontab crontab_new
+ sudo rm crontab_new
+ ```
## Prerequisites to enable secure communication
-To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps [mentioned here](https://prometheus.io/docs/guides/tls-encryption/)
+To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps in [this article](https://prometheus.io/docs/guides/tls-encryption/).
## Create Linux OS provider
To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow
1. Configure the following settings for the new provider: 1. For **Type**, select **OS (Linux)**. 1. For **Name**, enter a name that will be the identifier for the BareMetal instance.
- 1. *Optional* Select **Enable secure communication**, choose a certificate type
+ 1. (Optional) Select **Enable secure communication** and choose a certificate type.
1. For **Node Exporter Endpoint**, enter `http://IP:9100/metrics`. 1. For the IP address, use the private IP address of the Linux host. Make sure the host and Azure Monitor for SAP solutions resource are in the same virtual network. 1. Open firewall port 9100 on the Linux host.
- 1. If you're using `firewall-cmd`, run `_firewall-cmd_ _--permanent_ _--add-port=9100/tcp_ ` then `_firewall-cmd_ _--reload_`.
- 1. If you're using `ufw`, run `_ufw_ _allow_ _9100/tcp_` then `_ufw_ _reload_`.
-1. If the Linux host is an Azure virtual machine (VM), make sure that all applicable network security groups (NSGs) allow inbound traffic at port 9100 from **VirtualNetwork** as the source.
+ 1. If you're using `firewall-cmd`, run `firewall-cmd --permanent --add-port=9100/tcp` and then run `firewall-cmd --reload`.
+ 1. If you're using `ufw`, run `ufw allow 9100/tcp` and then run `ufw reload`.
+1. If the Linux host is an Azure VM, make sure that all applicable network security groups allow inbound traffic at port 9100 from **VirtualNetwork** as the source. A sketch of such a rule follows this list.
1. Select **Add provider** to save your changes. 1. Continue to add more providers as needed. 1. Select **Review + create** to review the settings.
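As referenced in the preceding list, the following Azure CLI sketch shows one way to allow inbound TCP 9100 from the virtual network; the resource group, NSG name, and priority are placeholders that you'd adjust for your environment.

```bash
# Minimal sketch: allow inbound TCP 9100 from VirtualNetwork on the network
# security group applied to the Linux host's subnet or NIC (names are placeholders).
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowNodeExporter9100 \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-port-ranges 9100
```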
Use these steps to resolve common errors.
### Unable to reach the Prometheus endpoint
-When the provider settings validation operation fails with the code ΓÇÿPrometheusURLConnectionFailureΓÇÖ:
+When the provider settings validation operation fails with the code `PrometheusURLConnectionFailure`:
1. Open firewall port 9100 on the Linux host.
- 1. If you're using `firewall-cmd`, run `_firewall-cmd_ _--permanent_ _--add-port=9100/tcp_ ` then `_firewall-cmd_ _--reload_`.
- 1. If you're using `ufw`, run `_ufw_ _allow_ _9100/tcp_` then `_ufw_ _reload_`.
+ 1. If you're using `firewall-cmd`, run `firewall-cmd --permanent --add-port=9100/tcp` and then run `firewall-cmd --reload`.
+ 1. If you're using `ufw`, run `ufw allow 9100/tcp` and then run `ufw reload`.
1. Try to restart the node exporter agent:
- 1. Go to the folder where you installed the node exporter (the file name resembles `node_exporter-*.*-amd64`).
+ 1. Go to the folder where you installed the node exporter. The file name resembles `node_exporter-*.*-amd64`.
1. Run `./node_exporter`.
- 1. Adding nohup and & to above command decouples the node_exporter from linux machine commandline. If not included node_exporter would stop when the commandline is closed.
-1. Verify that the Prometheus endpoint is reachable from the subnet that you provided while creating the Azure Monitor for SAP solutions resource.
+ 1. Adding `nohup` and `&` to the preceding command decouples `node_exporter` from the Linux machine command line. If they're not included, `node_exporter` stops when the command line is closed.
+1. Verify that the Prometheus endpoint is reachable from the subnet that you provided when you created the Azure Monitor for SAP solutions resource.
+
+## Suggestion
-## Suggestions
+Use this suggestion for troubleshooting.
-### Enabling Node Exporter
+### Enable the node exporter
-1. Run `nohup ./node_exporter &` command to enable node_exporter.
-1. Adding nohup and & to above command decouples the node_exporter from linux machine commandline. If not included node_exporter would stop when the commandline is closed.
+1. Run the `nohup ./node_exporter &` command to enable `node_exporter`.
+1. Adding `nohup` and `&` to the preceding command decouples `node_exporter` from the Linux machine command line. If they're not included, `node_exporter` stops when the command line is closed.
## Next steps
sap Provider Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-sql-server.md
Title: Configure Microsoft SQL Server provider for Azure Monitor for SAP solutions
-description: Learn how to configure a Microsoft SQL Server provider for use with Azure Monitor for SAP solutions.
+ Title: Configure SQL Server provider for Azure Monitor for SAP solutions
+description: Learn how to configure a SQL Server provider for use with Azure Monitor for SAP solutions.
Last updated 10/27/2022
-#Customer intent: As a developer, I want to configure a Microsoft SQL Server provider so that I can use Azure Monitor for SAP solutions for monitoring.
+#Customer intent: As a developer, I want to configure a SQL Server provider so that I can use Azure Monitor for SAP solutions for monitoring.
-# Configure SQL Server for Azure Monitor for SAP solutions
+# Configure SQL Server for Azure Monitor for SAP solutions
-In this how-to guide, you'll learn to configure a Microsoft SQL Server provider for Azure Monitor for SAP solutions through the Azure portal.
+In this how-to guide, you learn how to configure a SQL Server provider for Azure Monitor for SAP solutions through the Azure portal.
## Prerequisites -- An Azure subscription.
+- An Azure subscription.
- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md).
-## Open Windows port
+## Open a Windows port
-Open the Windows port in the local firewall of SQL Server and the network security group (NSG) where SQL Server and Azure Monitor for SAP solutions exist. The default port is 1433.
+Open the Windows port in the local firewall of SQL Server and the network security group where SQL Server and Azure Monitor for SAP solutions exist. The default port is 1433.
-## Configure SQL server
+## Configure SQL Server
-Configure SQL Server to accept logins from Windows and SQL Server:
+Configure SQL Server to accept sign-ins from Windows and SQL Server:
-1. Open SQL Server Management Studio (SSMS).
-1. Open **Server Properties** &gt; **Security** &gt; **Authentication**
+1. Open SQL Server Management Studio.
+1. Open **Server Properties** > **Security** > **Authentication**.
1. Select **SQL Server and Windows authentication mode**. 1. Select **OK** to save your changes. 1. Restart SQL Server to complete the changes. ## Create Azure Monitor for SAP solutions user for SQL Server
-Create a user for Azure Monitor for SAP solutions to log in to SQL Server using the following script. Make sure to replace:
+Create a user for Azure Monitor for SAP solutions to sign in to SQL Server by using the following script. Make sure to replace:
-- `<Database to monitor>` with your SAP database's name-- `<password>` with the password for your user
+- `<Database to monitor>` with your SAP database's name.
+- `<password>` with the password for your user.
You can replace the example information for the Azure Monitor for SAP solutions user with any other SQL username.
ALTER ROLE [db_datareader] ADD MEMBER [AMS]
ALTER ROLE [db_denydatawriter] ADD MEMBER [AMS] GO ```+ ## Prerequisites to enable secure communication
-To enable [TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps [mentioned here](/sql/database-engine/configure-windows/configure-sql-server-encryption?view=sql-server-ver15&preserve-view=true).
+To enable [TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps in [this article](/sql/database-engine/configure-windows/configure-sql-server-encryption?view=sql-server-ver15&preserve-view=true).
-## Install Azure Monitor for SAP solutions provider
+## Install an Azure Monitor for SAP solutions provider
To install the provider from Azure Monitor for SAP solutions: 1. Open the Azure Monitor for SAP solutions resource in the Azure portal.
-1. In the resource menu, under **Settings**, select **Providers**.
+1. On the resource menu, under **Settings**, select **Providers**.
1. On the provider page, select **Add** to add a new provider. 1. On the **Add provider** page, enter all required information: 1. For **Type**, select **Microsoft SQL Server**. 1. For **Name**, enter a name for the provider.
- 1. *Optional* Select **Enable secure communication** and choose a certificate type from the dropdown list.
+ 1. (Optional) Select **Enable secure communication** and choose a certificate type from the dropdown list.
1. For **Host name**, enter the IP address of the hostname. 1. For **Port**, enter the port on which SQL Server is listening. The default is 1433. 1. For **SQL username**, enter a username for the SQL Server account. 1. For **Password**, enter a password for the account.
- 1. For **SID**, enter the SAP system identifier (SID).
- 1. Select **Create** to create the provider
+ 1. For **SID**, enter the SAP system identifier.
+ 1. Select **Create** to create the provider.
1. Repeat the previous step as needed to create more providers. 1. Select **Review + create** to complete the deployment. - ## Next steps > [!div class="nextstepaction"]
sap Dbms Guide Sapiq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sapiq.md
documentationcenter: saponazure
-tags: azure-resource-manager
-keywords: ''
Previously updated : 06/11/2021 Last updated : 06/19/2023 # SAP BW NLS implementation guide with SAP IQ on Azure
-Over the years, customers running the SAP Business Warehouse (BW) system see an exponential growth in database size, which increases compute cost. To achieve the right balance of cost and performance, customers can use near-line storage (NLS) to migrate historical data.
+Over the years, customers running the SAP Business Warehouse (BW) system see an exponential growth in database size, which increases compute cost. To achieve the right balance of cost and performance, customers can use near-line storage (NLS) to migrate historical data.
-The NLS implementation based on SAP IQ is the standard method by SAP to move historical data from a primary database (SAP HANA or AnyDB). The integration of SAP IQ makes it possible to separate frequently accessed data from infrequently accessed data, which makes less resource demand in the SAP BW system.
+The NLS implementation based on SAP IQ is the standard method by SAP to move historical data from a primary database (SAP HANA or AnyDB). The integration of SAP IQ makes it possible to separate frequently accessed data from infrequently accessed data, which makes less resource demand in the SAP BW system.
-This guide provides guidelines for planning, deploying, and configuring SAP BW NLS with SAP IQ on Azure. This guide covers common Azure services and features that are relevant for SAP IQ NLS deployment and doesn't cover any NLS partner solutions.
+This guide provides guidelines for planning, deploying, and configuring SAP BW NLS with SAP IQ on Azure. This guide covers common Azure services and features that are relevant for SAP IQ NLS deployment and doesn't cover any NLS partner solutions.
-This guide doesn't replace SAP's standard documentation on NLS deployment with SAP IQ. Instead, it complements the official installation and administration documentation.
+This guide doesn't replace SAP's standard documentation on NLS deployment with SAP IQ. Instead, it complements the official installation and administration documentation.
## Solution overview
-In an operative SAP BW system, the volume of data increases constantly because of business and legal requirements. The large volume of data can affect the performance of the system and increase the administration effort, which results in the need to implement a data-aging strategy.
+In an operative SAP BW system, the volume of data increases constantly because of business and legal requirements. The large volume of data can affect the performance of the system and increase the administration effort, which results in the need to implement a data-aging strategy.
-If you want to keep the amount of data in your SAP BW system without deleting, you can use data archiving. The data is first moved to archive or near-line storage and then deleted from the SAP BW system. You can either access the data directly or load it back as required, depending on how the data has been archived.
+If you want to keep the amount of data in your SAP BW system without deleting, you can use data archiving. The data is first moved to archive or near-line storage and then deleted from the SAP BW system. You can either access the data directly or load it back as required, depending on how the data has been archived.
-SAP BW users can use SAP IQ as a near-line storage solution. The adapter for SAP IQ as a near-line solution is delivered with the SAP BW system. With NLS implemented, frequently used data is stored in an SAP BW online database (SAP HANA or AnyDB). Infrequently accessed data is stored in SAP IQ, which reduces the cost to manage data and improves the performance of the SAP BW system. To ensure consistency between online data and near-line data, the archived partitions are locked and are read-only.
+SAP BW users can use SAP IQ as a near-line storage solution. The adapter for SAP IQ as a near-line solution is delivered with the SAP BW system. With NLS implemented, frequently used data is stored in an SAP BW online database (SAP HANA or AnyDB). Infrequently accessed data is stored in SAP IQ, which reduces the cost to manage data and improves the performance of the SAP BW system. To ensure consistency between online data and near-line data, the archived partitions are locked and are read-only.
-SAP IQ supports two types of architecture: simplex and multiplex. In a simplex architecture, a single instance of an SAP IQ server runs on a single virtual machine. Files might be located on a host machine or on a network storage device.
+SAP IQ supports two types of architecture: simplex and multiplex. In a simplex architecture, a single instance of an SAP IQ server runs on a single virtual machine. Files might be located on a host machine or on a network storage device.
-> [!Important]
+> [!IMPORTANT]
> For the SAP NLS solution, only simplex architecture is available and evaluated by SAP. ![Diagram that shows an overview of the SAP I Q solution.](media/sap-iq-deployment-guide/sap-iq-solution-overview.png)
-In Azure, the SAP IQ server must be implemented on a separate virtual machine (VM). We don't recommend installing SAP IQ software on an existing server that already has other database instances running, because SAP IQ uses complete CPU and memory for its own usage. One SAP IQ server can be used for multiple SAP NLS implementations.
+In Azure, the SAP IQ server must be implemented on a separate virtual machine (VM). We don't recommend installing SAP IQ software on an existing server that already has other database instances running, because SAP IQ uses complete CPU and memory for its own usage. One SAP IQ server can be used for multiple SAP NLS implementations.
## Support matrix
The support matrix for an SAP IQ NLS solution includes:
- **SAP BW compatibility**: Near-line storage for SAP IQ is released only for SAP BW systems that already run under Unicode. [SAP note 1796393](https://launchpad.support.sap.com/#/notes/1796393) contains information about SAP BW. -- **Storage**: In Azure, SAP IQ supports premium managed disks (Windows and Linux), Azure shared disks (Windows only), and Azure NetApp Files (Linux only).
+- **Storage**: In Azure, SAP IQ supports premium managed disks (Windows and Linux), Azure shared disks (Windows only), and Azure NetApp Files (Linux only).
-For more up-to-date information based on your SAP IQ release, see the [Product Availability Matrix](https://userapps.support.sap.com/sap/support/pam).
+For more up-to-date information based on your SAP IQ release, see the [Product Availability Matrix](https://userapps.support.sap.com/sap/support/pam).
## Sizing
-Sizing of SAP IQ is confined to CPU, memory, and storage. You can find general sizing guidelines for SAP IQ on Azure in [SAP note 1951789](https://launchpad.support.sap.com/#/notes/1951789). The sizing recommendation that you get by following the guidelines needs to be mapped to certified Azure virtual machine types for SAP. [SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533) provides the list of supported SAP products and Azure VM types.
+Sizing of SAP IQ is confined to CPU, memory, and storage. You can find general sizing guidelines for SAP IQ on Azure in [SAP note 1951789](https://launchpad.support.sap.com/#/notes/1951789). The sizing recommendation that you get by following the guidelines needs to be mapped to certified Azure virtual machine types for SAP. [SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533) provides the list of supported SAP products and Azure VM types.
The SAP IQ sizing guide and sizing worksheet mentioned in [SAP note 1951789](https://launchpad.support.sap.com/#/notes/1951789) were developed for the native usage of an SAP IQ database. Because they don't reflect the resources for the planning of an SAP IQ database, you might end up with unused resources for SAP NLS.
The SAP IQ sizing guide and sizing worksheet mentioned in [SAP note 1951789](htt
### Regions
-If you're already running your SAP systems on Azure, you've probably identified your region. SAP IQ deployment must be in the same region as your SAP BW system for which you're implementing the NLS solution.
+If you're already running your SAP systems on Azure, you've probably identified your region. SAP IQ deployment must be in the same region as your SAP BW system for which you're implementing the NLS solution.
To determine the architecture of SAP IQ, you need to ensure that the services required by SAP IQ, like Azure NetApp Files (NFS for Linux only), are available in that region. To check the service availability in your region, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) webpage.
-### Availability sets
+### Deployment options
-To achieve redundancy of SAP systems in an Azure infrastructure, your application needs to be deployed in either availability sets or availability zones. Although you can achieve SAP IQ high availability by using the SAP IQ multiplex architecture, the multiplex architecture doesn't meet the requirements of the NLS solution.
+To achieve redundancy of SAP systems in an Azure infrastructure, your application needs to be deployed in either a flexible scale set, availability zones, or availability sets. Although you can achieve SAP IQ high availability by using the SAP IQ multiplex architecture, the multiplex architecture doesn't meet the requirements of the NLS solution.
-To achieve high availability for the SAP IQ simplex architecture, you need to configure a two-node cluster with a custom solution. The two-node SAP IQ cluster can be deployed in availability sets or availability zones, but the Azure storage type that gets attached to the nodes decides its deployment method. Currently, Azure shared premium disks and Azure NetApp Files don't support zonal deployment. That leaves only the option of SAP IQ deployment in availability sets.
+To achieve high availability for the SAP IQ simplex architecture, you need to configure a two-node cluster with a custom solution. The two-node SAP IQ cluster can be deployed in a flexible scale set with FD=1, in availability zones, or in availability sets. However, we recommend configuring zone-redundant storage when you set up a highly available solution across availability zones.
### Virtual machines
-Based on SAP IQ sizing, you need to map your requirements to Azure virtual machines. This approach is supported in Azure for SAP products. [SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533) is a good starting point that lists supported Azure VM types for SAP products on Windows and Linux.
+Based on SAP IQ sizing, you need to map your requirements to Azure virtual machines. This approach is supported in Azure for SAP products. [SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533) is a good starting point that lists supported Azure VM types for SAP products on Windows and Linux.
-Beyond the selection of only supported VM types, you also need to check whether those VM types are available in specific regions. You can check the availability of VM types on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) webpage. To choose the pricing model, see [Azure virtual machines for SAP workload](planning-guide.md#azure-virtual-machines-for-sap-workload).
+Beyond the selection of only supported VM types, you also need to check whether those VM types are available in specific regions. You can check the availability of VM types on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) webpage. To choose the pricing model, see [Azure virtual machines for SAP workload](planning-guide.md#azure-virtual-machines-for-sap-workload).
> [!TIP] > For production systems, we recommend that you use E-Series virtual machines because of their core-to-memory ratio. ### Storage
-Azure Storage has various storage types available for customers. You can find details about them in the article [What disk types are available in Azure?](../../virtual-machines/disks-types.md).
+Azure Storage has various storage types available for customers. You can find details about them in the article [What disk types are available in Azure?](../../virtual-machines/disks-types.md).
-Some of the storage types in Azure have limited use for SAP scenarios, but other types are well suited or optimized for specific SAP workload scenarios. For more information, see the [Azure Storage types for SAP workload](planning-guide-storage.md) guide. It highlights the storage options that are suited for SAP.
+Some of the storage types in Azure have limited use for SAP scenarios, but other types are well suited or optimized for specific SAP workload scenarios. For more information, see the [Azure Storage types for SAP workload](planning-guide-storage.md) guide. It highlights the storage options that are suited for SAP.
For SAP IQ on Azure, you can use the following Azure storage types. The choice depends on your operating system (Windows or Linux) and deployment method (standalone or highly available). - Azure managed disks
- A [managed disk](../../virtual-machines/managed-disks-overview.md) is a block-level storage volume that Azure manages. You can use managed disks for SAP IQ simplex deployment. Various types of managed disks are available, but we recommend that you use [premium SSDs](../../virtual-machines/disks-types.md#premium-ssds) for SAP IQ.
+ A [managed disk](../../virtual-machines/managed-disks-overview.md) is a block-level storage volume that Azure manages. You can use managed disks for SAP IQ simplex deployment. Various types of managed disks are available, but we recommend that you use [premium SSDs](../../virtual-machines/disks-types.md#premium-ssds) for SAP IQ.
- Azure shared disks
- [Shared disks](../../virtual-machines/disks-shared.md) are a new feature for Azure managed disks that allow you to attach a managed disk to multiple VMs simultaneously. Shared managed disks don't natively offer a fully managed file system that can be accessed through SMB or NFS. You need to use a cluster manager like a [Windows Server failover cluster](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/failover-clustering/failover-clustering-overview.md) (WSFC), which handles cluster node communication and write locking.
+ [Shared disks](../../virtual-machines/disks-shared.md) are a new feature for Azure managed disks that allow you to attach a managed disk to multiple VMs simultaneously. Shared managed disks don't natively offer a fully managed file system that can be accessed through SMB or NFS. You need to use a cluster manager like a [Windows Server failover cluster](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/failover-clustering/failover-clustering-overview.md) (WSFC), which handles cluster node communication and write locking.
To deploy a highly available solution for an SAP IQ simplex architecture on Windows, you can use Azure shared disks between two nodes that WSFC manages. An SAP IQ deployment architecture with Azure shared disks is discussed in the article [Deploy SAP IQ NLS HA solution using Azure shared disk on Windows Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-shared-disk-on-windows/ba-p/2433089).
For SAP IQ on Azure, you can use the following Azure storage types. The choice d
SAP IQ deployment on Linux can use [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) as a file system (NFS protocol) to install a standalone or a highly available solution. This storage offering isn't available in all regions. For up-to-date information, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) webpage. SAP IQ deployment architecture with Azure NetApp Files is discussed in the article [Deploy SAP IQ-NLS HA solution using Azure NetApp Files on SUSE Linux Enterprise Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-netapp-files-on-suse/ba-p/1651172).
-The following table lists the recommendations for each storage type based on the operating system:
+The following table lists the recommendations for each storage type based on the operating system:
| Storage type | Windows | Linux | | - | - | -- |
The following table lists the recommendations for each storage type based on the
Azure provides a network infrastructure that allows the mapping of all scenarios that can be realized for an SAP BW system that uses SAP IQ as near-line storage. These scenarios include connecting to on-premises systems, connecting to systems in different virtual networks, and others. For more information, see [Microsoft Azure networking for SAP workloads](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/sap/workloads/planning-guide.md#microsoft-azure-networking).
-## Deploy SAP IQ on Windows
+## [Deploy SAP IQ on Windows](#tab/sapiq-windows)
-### Server preparation and installation
+### Windows server preparation and installation
To prepare servers for NLS implementation with SAP IQ on Windows, you can get the most up-to-date information in [SAP note 2780668 - SAP First Guidance - BW NLS Implementation with SAP IQ](https://launchpad.support.sap.com/#/notes/0002780668). It has comprehensive information about prerequisites for SAP BW systems, SAP IQ file-system layout, installation, post-configuration tasks, and SAP BW NLS integration with SAP IQ.
-### High-availability deployment
+### High-availability deployment on Windows
-SAP IQ supports both a simplex and a multiplex architecture. For the NLS solution, only simplex server architecture is available and evaluated. Simplex is a single instance of an SAP IQ server running on a single virtual machine.
+SAP IQ supports both a simplex and a multiplex architecture. For the NLS solution, only simplex server architecture is available and evaluated. Simplex is a single instance of an SAP IQ server running on a single virtual machine.
-Technically, you can achieve SAP IQ high availability by using a multiplex server architecture, but the multiplex architecture doesn't meet the requirements of the NLS solution. For simplex server architecture, SAP doesn't provide any features or procedures to run SAP IQ in a high-availability configuration.
+Technically, you can achieve SAP IQ high availability by using a multiplex server architecture, but the multiplex architecture doesn't meet the requirements of the NLS solution. For simplex server architecture, SAP doesn't provide any features or procedures to run SAP IQ in a high-availability configuration.
To set up SAP IQ high availability on Windows for simplex server architecture, you need to set up a custom solution that requires extra configuration, like a Windows Server failover cluster and shared disks. One such custom solution for SAP IQ on Windows is described in detail in [Deploy SAP IQ NLS HA solution using Azure shared disk on Windows Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-shared-disk-on-windows/ba-p/2433089).
-### Backup and restore
+### Backup and restore for system deployed on Windows
In Azure, you can schedule SAP IQ database backup as described in [SAP IQ Administration: Backup, Restore, and Data Recovery](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/5b8309b37f4e46b089465e380c24df59.html). SAP IQ provides the following types of database backups. You can find details about each backup type in [Backup Scenarios](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880dc1f84f21015af84f1a6b629dd7a.html). -- **Full backup**: It makes a complete copy of the database. -- **Incremental backup**: It copies all transactions since the last backup of any type.
+- **Full backup**: It makes a complete copy of the database.
+- **Incremental backup**: It copies all transactions since the last backup of any type.
- **Incremental since full backup**: It backs up all changes to the database since the last full backup. - **Virtual backup**: It copies all of the database except the table data and metadata from the SAP IQ store.
-Depending on your SAP IQ database size, you can schedule your database backup from any of the backup scenarios. But if you're using SAP IQ with the NLS interface delivered by SAP, you might want to automate the backup process for an SAP IQ database. Automation ensures that the SAP IQ database can always be recovered to a consistent state without loss of data that's moved between the primary database and the SAP IQ database. For details on setting up automation for SAP IQ near-line storage, see [SAP note 2741824 - How to setup backup automation for SAP IQ Cold Store/Near-line Storage](https://launchpad.support.sap.com/#/notes/2741824).
+Depending on your SAP IQ database size, you can schedule your database backup from any of the backup scenarios. But if you're using SAP IQ with the NLS interface delivered by SAP, you might want to automate the backup process for an SAP IQ database. Automation ensures that the SAP IQ database can always be recovered to a consistent state without loss of data that's moved between the primary database and the SAP IQ database. For details on setting up automation for SAP IQ near-line storage, see [SAP note 2741824 - How to setup backup automation for SAP IQ Cold Store/Near-line Storage](https://launchpad.support.sap.com/#/notes/2741824).
For a large SAP IQ database, you can use virtual backups. For more information, see [Virtual Backups](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880672184f21015a08dceedc7d19776.html), [Introduction Virtual Backup in SAP Sybase IQ](https://wiki.scn.sap.com/wiki/display/SYBIQ/Introduction+Virtual+BackUp+(+general++back+up+method+)+in+SAP+Sybase+IQ). Also see [SAP note 2461985 - How to Backup Large SAP IQ Database](https://launchpad.support.sap.com/#/notes/0002461985).
If you're using a network drive (SMB protocol) to back up and restore an SAP IQ
BACKUP DATABASE FULL TO '\\\sapiq.internal.contoso.net\sapiq-backup\backup\data\<filename>' ```
-## Deploy SAP IQ on Linux
+## [Deploy SAP IQ on Linux](#tab/sapiq-linux)
-### Server preparation and installation
+### Linux server preparation and installation
To prepare servers for NLS implementation with SAP IQ on Linux, you can get the most up-to-date information in [SAP note 2780668 - SAP First Guidance - BW NLS Implementation with SAP IQ](https://launchpad.support.sap.com/#/notes/0002780668). It has comprehensive information about prerequisites for SAP BW systems, SAP IQ file-system layout, installation, post-configuration tasks, and SAP BW NLS integration with SAP IQ.
-### High-availability deployment
+### High-availability deployment on Linux
-SAP IQ supports both a simplex and a multiplex architecture. For the NLS solution, only simplex server architecture is available and evaluated. Simplex is a single instance of an SAP IQ server running on a single virtual machine.
+SAP IQ supports both a simplex and a multiplex architecture. For the NLS solution, only simplex server architecture is available and evaluated. Simplex is a single instance of an SAP IQ server running on a single virtual machine.
Technically, you can achieve SAP IQ high availability by using a multiplex server architecture, but the multiplex architecture doesn't meet the requirements of the NLS solution. For simplex server architecture, SAP doesn't provide any features or procedures to run SAP IQ in a high-availability configuration. To set up SAP IQ high availability on Linux for simplex server architecture, you need to set up a custom solution that requires extra configuration, like Pacemaker. One such custom solution for SAP IQ on Linux is described in detail in [Deploy SAP IQ-NLS HA solution using Azure NetApp Files on SUSE Linux Enterprise Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-netapp-files-on-suse/ba-p/1651172).
-### Backup and restore
+### Backup and restore for system deployed on Linux
In Azure, you can schedule SAP IQ database backup as described in [SAP IQ Administration: Backup, Restore, and Data Recovery](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/5b8309b37f4e46b089465e380c24df59.html). SAP IQ provides the following types of database backups. You can find details about each backup type in [Backup Scenarios](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880dc1f84f21015af84f1a6b629dd7a.html). -- **Full backup**: It makes a complete copy of the database. -- **Incremental backup**: It copies all transactions since the last backup of any type.
+- **Full backup**: It makes a complete copy of the database.
+- **Incremental backup**: It copies all transactions since the last backup of any type.
- **Incremental since full backup**: It backs up all changes to the database since the last full backup. - **Virtual backup**: It copies all of the database except the table data and metadata from the SAP IQ store.
-Depending on your SAP IQ database size, you can schedule your database backup from any of the backup scenarios. But if you're using SAP IQ with the NLS interface delivered by SAP, you might want to automate the backup process for an SAP IQ database. Automation ensures that the SAP IQ database can always be recovered to a consistent state without loss of data that's moved between the primary database and the SAP IQ database. For details on setting up automation for SAP IQ near-line storage, see [SAP note 2741824 - How to setup backup automation for SAP IQ Cold Store/Near-line Storage](https://launchpad.support.sap.com/#/notes/2741824).
+Depending on your SAP IQ database size, you can schedule your database backup from any of the backup scenarios. But if you're using SAP IQ with the NLS interface delivered by SAP, you might want to automate the backup process for an SAP IQ database. Automation ensures that the SAP IQ database can always be recovered to a consistent state without loss of data that's moved between the primary database and the SAP IQ database. For details on setting up automation for SAP IQ near-line storage, see [SAP note 2741824 - How to setup backup automation for SAP IQ Cold Store/Near-line Storage](https://launchpad.support.sap.com/#/notes/2741824).
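SAP note 2741824 ships SAP's own automation scripts, so treat the following only as a hedged sketch of the general idea: a wrapper script that runs a full or incremental backup through the `dbisql` client and that cron can schedule. The connection string, paths, and the assumption that `dbisql` accepts the SQL statement as its final argument are placeholders to validate against your own installation.

```
#!/bin/bash
# iq_backup.sh - illustrative SAP IQ backup wrapper (hypothetical; adapt before use).
# Assumes the SAP IQ dbisql client is on the PATH and the connection parameters below
# match your simplex server. This is not the automation delivered with SAP note 2741824.
set -euo pipefail

CONN="uid=DBA;pwd=<password>;eng=<iq_server>;dbn=<iq_db>"   # placeholder credentials
BACKUP_DIR=/backup/sapiq
STAMP=$(date +%Y%m%d_%H%M%S)

case "${1:-incremental}" in
  full)        SQL="BACKUP DATABASE FULL TO '${BACKUP_DIR}/full_${STAMP}'" ;;
  incremental) SQL="BACKUP DATABASE INCREMENTAL SINCE FULL TO '${BACKUP_DIR}/incr_${STAMP}'" ;;
  *)           echo "usage: $0 [full|incremental]" >&2; exit 1 ;;
esac

# -nogui runs Interactive SQL in batch mode; the statement is passed as the last argument.
dbisql -nogui -c "${CONN}" "${SQL}"
```

A root crontab could then call the script, for example `0 1 * * 0` for a weekly full backup and `0 1 * * 1-6` for daily incrementals, with the output redirected to a log file.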
For a large SAP IQ database, you can use virtual backups. For more information, see [Virtual Backups](https://help.sap.com/viewer/a893f37e84f210158511c41edb6a6367/16.1.4.7/a880672184f21015a08dceedc7d19776.html), [Introduction Virtual Backup in SAP Sybase IQ](https://wiki.scn.sap.com/wiki/display/SYBIQ/Introduction+Virtual+BackUp+(+general++back+up+method+)+in+SAP+Sybase+IQ). Also see [SAP note 2461985 - How to Backup Large SAP IQ Database](https://launchpad.support.sap.com/#/notes/0002461985). ++ ## Disaster recovery
-This section explains the strategy to provide disaster recovery (DR) protection for the SAP IQ NLS solution. It complements the [Set up disaster recovery for SAP](../../site-recovery/site-recovery-sap.md) article, which represents the primary resources for an overall SAP DR approach. The process described in that article is presented at an abstract level. You need to validate the exact steps and thoroughly test your DR strategy.
+This section explains the strategy to provide disaster recovery (DR) protection for the SAP IQ NLS solution. It complements the [Set up disaster recovery for SAP](../../site-recovery/site-recovery-sap.md) article, which represents the primary resources for an overall SAP DR approach. The process described in that article is presented at an abstract level. You need to validate the exact steps and thoroughly test your DR strategy.
-For SAP IQ, see [SAP note 2566083](https://launchpad.support.sap.com/#/notes/0002566083), which describes methods to implement a DR environment safely. In Azure, you can also use [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) for an SAP IQ DR strategy. The strategy for SAP IQ DR depends on the way it's deployed in Azure, and it should also be in line with your SAP BW system.
+For SAP IQ, see [SAP note 2566083](https://launchpad.support.sap.com/#/notes/0002566083), which describes methods to implement a DR environment safely. In Azure, you can also use [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) for an SAP IQ DR strategy. The strategy for SAP IQ DR depends on the way it's deployed in Azure, and it should also be in line with your SAP BW system.
### Standalone deployment of SAP IQ
-If you've installed SAP IQ as a standalone system that doesn't have any application-level redundancy or high availability, but the business requires a DR setup, all the disks (Azure-managed disks) attached to the virtual machine will be local.
+If you've installed SAP IQ as a standalone system that doesn't have any application-level redundancy or high availability, but the business requires a DR setup, all the disks (Azure-managed disks) attached to the virtual machine will be local.
-You can use [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) to replicate a standalone SAP IQ virtual machine in the secondary region. It replicates the servers and all the attached managed disks to the secondary region so that if a disaster or an outage occurs, you can easily fail over to your replicated environment and continue working. To start replicating the SAP IQ VMs to the Azure DR region, follow the guidance in [Replicate a virtual machine to Azure](../../site-recovery/azure-to-azure-tutorial-enable-replication.md).
+You can use [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) to replicate a standalone SAP IQ virtual machine in the secondary region. It replicates the servers and all the attached managed disks to the secondary region so that if a disaster or an outage occurs, you can easily fail over to your replicated environment and continue working. To start replicating the SAP IQ VMs to the Azure DR region, follow the guidance in [Replicate a virtual machine to Azure](../../site-recovery/azure-to-azure-tutorial-enable-replication.md).
### Highly available deployment of SAP IQ If you've installed SAP IQ as a highly available system where SAP IQ binaries and database files are on an Azure shared disk (Windows only) or on a network drive like Azure NetApp Files (Linux only), you need to identify: - Whether you need the same highly available SAP IQ system on the DR site.-- Whether a standalone SAP IQ instance will suffice your business requirements.
+- Whether a standalone SAP IQ instance will suffice for your business requirements.
-If you need a standalone SAP IQ instance on a DR site, you can use [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) to replicate a primary SAP IQ virtual machine in the secondary region. It replicates the servers and all the local attached managed disks to the secondary region, but it won't replicate an Azure shared disk or a network drive like Azure NetApp Files.
+If you need a standalone SAP IQ instance on a DR site, you can use [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) to replicate a primary SAP IQ virtual machine in the secondary region. It replicates the servers and all the local attached managed disks to the secondary region, but it won't replicate an Azure shared disk or a network drive like Azure NetApp Files.
To copy data from an Azure shared disk or a network drive, you can use any file-based copy tool to replicate data between Azure regions. For more information on how to copy an Azure NetApp Files volume to another region, see [FAQs about Azure NetApp Files](../../azure-netapp-files/faq-data-migration-protection.md#how-do-i-create-a-copy-of-an-azure-netapp-files-volume-in-another-azure-region).
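As one hedged illustration of such a file-based copy, `rsync` between an NFS volume mounted from the primary region and one mounted from the DR region could look like the sketch below. The mount points are hypothetical, and Azure NetApp Files cross-region replication (see the FAQ linked above) is usually preferable where it's available.

```
# Hypothetical mount points for the primary and DR volumes.
PRIMARY_MOUNT=/mnt/anf-primary/sapiq
DR_MOUNT=/mnt/anf-dr/sapiq

# -a preserves permissions and timestamps, -H keeps hard links, --delete keeps the DR copy in sync.
rsync -aH --delete "${PRIMARY_MOUNT}/" "${DR_MOUNT}/"
```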
sap Disaster Recovery Overview Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/disaster-recovery-overview-guide.md
Previously updated : 12/06/2022 Last updated : 06/19/2023 # Disaster recovery overview and infrastructure guidelines for SAP workload Many organizations running critical business applications on Azure set up both High Availability (HA) and Disaster Recovery (DR) strategy. The purpose of high availability is to increase the SLA of business systems by eliminating single points of failure in the underlying system infrastructure. High Availability technologies reduce the effect of unplanned infrastructure failure and help with planned maintenance. Disaster Recovery is defined as policies, tools and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a geographically widespread natural or human-induced disaster.
-To achieve [high availability for SAP workload on Azure](sap-high-availability-guide-start.md), virtual machines are typically deployed in an [availability set](planning-guide.md#availability-sets) or in [availability zones](planning-guide.md#availability-zones) to protect applications from infrastructure maintenance or failure within region. But the deployment doesn't protect applications from widespread disaster within region. So to protect applications from regional disaster, disaster recovery strategy for the applications should be in place. Disaster recovery is a documented and structured approach that is designed to assist an organization in executing the recovery processes in response to a disaster, and to protect or minimize IT services disruption and promote recovery.
+To achieve [high availability for SAP workload on Azure](sap-high-availability-guide-start.md), virtual machines are typically deployed in an [availability set](planning-guide.md#availability-sets), in [availability zones](planning-guide.md#availability-zones), or in a [flexible scale set](./virtual-machine-scale-set-sap-deployment-guide.md) to protect applications from infrastructure maintenance or failure within a region. But the deployment doesn't protect applications from a widespread disaster within the region. So to protect applications from a regional disaster, a disaster recovery strategy for the applications should be in place. Disaster recovery is a documented and structured approach that is designed to assist an organization in executing the recovery processes in response to a disaster, and to protect or minimize IT services disruption and promote recovery.
This document provides details on protecting SAP workloads from a large-scale catastrophe by implementing a structured DR approach. The details in this document are presented at an abstract level, based on different Azure services and SAP components. The exact DR strategy and the order of recovery for your SAP workload must be tested, documented, and fine-tuned regularly. Also, the document focuses on the Azure-to-Azure DR strategy for SAP workloads.
The following reference architecture shows typical SAP NetWeaver system running
Organizations should plan and design a DR strategy for their entire IT landscape. Usually, SAP systems running in a production environment are integrated with different services and interfaces like Active Directory, DNS, third-party applications, and so on. So you must include the non-SAP systems and other services in your disaster recovery planning as well. This document focuses on the recovery planning for SAP applications. But you can expand the size and scope of the DR planning for dependent components to fit your requirements.
-[![Disaster Recovery reference architecture for SAP workload](media/disaster-recovery/disaster-recovery-reference-architecture.png)](media/disaster-recovery/disaster-recovery-reference-architecture.png#lighbox)
+[![Disaster Recovery reference architecture for SAP workload](media/disaster-recovery/disaster-recovery-reference-architecture.png)](media/disaster-recovery/disaster-recovery-reference-architecture.png#lightbox)
## Infrastructure components of DR solution for SAP workload
An SAP workload running on Azure uses different infrastructure components to run
### Virtual machines - On Azure, different components of a single SAP system run on virtual machines with different SKU types. For DR, protection of an application (SAP NetWeaver and non-SAP) running on Azure VMs can be enabled by replicating components using [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) to another Azure region or zone. With Azure Site Recovery, Azure VMs are replicated continuously from the primary to the disaster recovery site. Depending on the selected Azure DR region, the VM SKU type may not be available on the DR site. You need to make sure that the required VM SKU types are available in the Azure DR region as well. Check [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/) to see whether the required VM family SKU type is available.
+
+ > [!IMPORTANT]
+ > If the SAP system is configured with a flexible scale set with FD=1, then you need to use [PowerShell](../../site-recovery/azure-to-azure-powershell.md) to set up Azure Site Recovery for disaster recovery. Currently, it's the only method available to configure disaster recovery for VMs deployed in a scale set.
- For databases running on Azure virtual machines, it's recommended to use native database replication technology to synchronize data to the disaster recovery site. The large VMs on which the databases are running may not be available in all regions. If you're using [availability zones for disaster recovery](../../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md), you should check that the respective VM SKUs are available in the zone of your disaster recovery site.
- > [!Note]
- >
+ > [!NOTE]
> Using Azure Site Recovery for databases isn't advised, as it doesn't guarantee DB consistency and has a [data churn limitation](../../site-recovery/azure-to-azure-support-matrix.md#limits-and-data-change-rates). - With production applications running on the primary region at all times, [reserved instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) are typically used to reduce Azure costs. If using reserved instances, you need to sign up for a 1-year or 3-year term commitment that may not be cost-effective for the DR site. Also, setting up Azure Site Recovery doesn't guarantee you the capacity of the required VM SKU during your failover. To make sure that the VM SKU capacity is available, you can consider enabling [on-demand capacity reservation](../../virtual-machines/capacity-reservation-overview.md). It reserves compute capacity in an Azure region or an Azure availability zone for any duration of time without commitment. Azure Site Recovery is [integrated](https://azure.microsoft.com/updates/ondemand-capacity-reservation-with-azure-site-recovery-safeguards-vms-failover/) with on-demand capacity reservation. With this integration, you can use the power of capacity reservation with Azure Site Recovery to reserve compute capacity in the DR site and guarantee your failovers. For more information, read the on-demand capacity reservation [limitations and restrictions](../../virtual-machines/capacity-reservation-overview.md#limitations-and-restrictions).
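One hedged way to check SKU and zone availability up front is an Azure CLI query like the following; the region and the `Standard_M` size filter are placeholders for the DR region and VM family you actually plan to use.

```
# List M-series VM SKUs available in the target DR region, including zone support.
az vm list-skus \
  --location westeurope \
  --resource-type virtualMachines \
  --size Standard_M \
  --output table
```

The `Restrictions` column in the output flags SKUs that your subscription can't currently deploy in that region or zone.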
sap Sap Hana Availability Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-across-regions.md
Previously updated : 09/12/2018 Last updated : 06/19/2023 - # SAP HANA availability across Azure regions
This article describes scenarios related to SAP HANA availability across different Azure regions.
## Why deploy across multiple Azure regions
-Azure regions often are separated by large distances. Depending on the geopolitical region, the distance between Azure regions might be hundreds of miles, or even several thousand miles, like in the United States. Because of the distance, network traffic between assets that are deployed in two different Azure regions experience significant network roundtrip latency. The latency is significant enough to exclude synchronous data exchange between two SAP HANA instances under typical SAP workloads.
+Azure regions often are separated by large distances. Depending on the geopolitical region, the distance between Azure regions might be hundreds of miles, or even several thousand miles, like in the United States. Because of the distance, network traffic between assets that are deployed in two different Azure regions experiences significant network roundtrip latency. The latency is significant enough to exclude synchronous data exchange between two SAP HANA instances under typical SAP workloads.
On the other hand, organizations often have a distance requirement between the location of the primary datacenter and a secondary datacenter. A distance requirement helps provide availability if a natural disaster occurs in a wider geographic location. Examples include the hurricanes that hit the Caribbean and Florida in September and October 2017. Your organization might have at least a minimum distance requirement. For most Azure customers, a minimum distance definition requires you to design for availability across [Azure regions](https://azure.microsoft.com/regions/). Because the distance between two Azure regions is too large to use the HANA synchronous replication mode, RTO and RPO requirements might force you to deploy availability configurations in one region, and then supplement with additional deployments in a second region. Another aspect to consider in this scenario is failover and client redirect. The assumption is that a failover between SAP HANA instances in two different Azure regions always is a manual failover. Because the replication mode of SAP HANA system replication is set to asynchronous, there's a potential that data committed in the primary HANA instance hasn't yet made it to the secondary HANA instance. Therefore, automatic failover isn't an option for configurations where the replication is asynchronous. Even with manually controlled failover, as in a failover exercise, you need to take measures to ensure that all the committed data on the primary side made it to the secondary instance before you manually move over to the other Azure region.
-
-Azure Virtual Network uses a different IP address range. The IP addresses are deployed in the second Azure region. So, you either need to change the SAP HANA client configuration, or preferably, you need to create steps to change the name resolution. This way, the clients are redirected to the new secondary site's server IP address. For more information, see the SAP article [Client connection recovery after takeover](https://help.sap.com/doc/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c93a723ceedc45da9a66ff47672513d3.html).
+
+Azure Virtual Network uses a different IP address range. The IP addresses are deployed in the second Azure region. So, you either need to change the SAP HANA client configuration, or preferably, you need to create steps to change the name resolution. This way, the clients are redirected to the new secondary site's server IP address. For more information, see the SAP article [Client connection recovery after takeover](https://help.sap.com/doc/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c93a723ceedc45da9a66ff47672513d3.html).
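If name resolution is handled through an Azure private DNS zone, the redirection step could be scripted roughly as shown below during a failover exercise. The resource group, zone, record name, and IP addresses are hypothetical, and the same idea applies to whichever DNS service you actually use.

```
# Hypothetical values: adjust resource group, private DNS zone, record name, and IPs.
RG=sap-dns-rg
ZONE=sap.contoso.internal
RECORD=hanadb            # virtual hostname that SAP HANA clients connect to
OLD_IP=10.1.0.10         # HANA server in the primary region
NEW_IP=10.2.0.10         # HANA server in the secondary region

az network private-dns record-set a remove-record \
  --resource-group "$RG" --zone-name "$ZONE" --record-set-name "$RECORD" --ipv4-address "$OLD_IP"

az network private-dns record-set a add-record \
  --resource-group "$RG" --zone-name "$ZONE" --record-set-name "$RECORD" --ipv4-address "$NEW_IP"
```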
## Simple availability between two Azure regions You might choose not to put any availability configuration in place within a single region, but still have the demand to have the workload served if a disaster occurs. Typical cases for such scenarios are nonproduction systems. Although having the system down for half a day or even a day is sustainable, you can't allow the system to be unavailable for 48 hours or more. To make the setup less costly, run another system that is even less important in the VM. The other system functions as a destination. You can also size the VM in the secondary region to be smaller, and choose not to preload the data. Because the failover is manual and entails many more steps to fail over the complete application stack, the additional time to shut down the VM, resize it, and then restart the VM is acceptable.
-If you are using the scenario of sharing the DR target with a QA system in one VM, you need to take these considerations into account:
+If you're using the scenario of sharing the DR target with a QA system in one VM, you need to take these considerations into account:
- There are two [operation modes](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/627bd11e86c84ec2b9fcdf585d24011c.html) with delta_datashipping and logreplay, which are available for such a scenario - Both operation modes have different memory requirements without preloading data - Delta_datashipping might require drastically less memory without the preload option than logreplay could require. See chapter 4.3 of the SAP document [How To Perform System Replication for SAP HANA](https://www.sap.com/documents/2017/07/606a676e-c97c-0010-82c7-eda71af511fa.html)-- The memory requirement of logreplay operation mode without preload is not deterministic and depends on the columnstore structures loaded. In extreme cases, you might require 50% of the memory of the primary instance. The memory for logreplay operation mode is independent on whether you chose to have the data preloaded set or not.-
+- The memory requirement of logreplay operation mode without preload isn't deterministic and depends on the columnstore structures loaded. In extreme cases, you might require 50% of the memory of the primary instance. The memory for logreplay operation mode is independent of whether you chose to have the data preload option set or not.
-![Diagram of two VMs over two regions](./media/sap-hana-availability-two-region/two_vm_HSR_async_2regions_nopreload.PNG)
+![Diagram of two VMs over two regions.](./media/sap-hana-availability-two-region/two_vm_HSR_async_2regions_nopreload.png)
> [!NOTE] > In this configuration, you can't provide an RPO=0 because your HANA system replication mode is asynchronous. If you need to provide an RPO=0, this configuration isn't the configuration of choice.
-A small change that you can make in the configuration might be to configure data as preloading. However, given the manual nature of failover and the fact that application layers also need to move to the second region, it might not make sense to preload data.
+A small change that you can make in the configuration might be to configure data as preloading. However, given the manual nature of failover and the fact that application layers also need to move to the second region, it might not make sense to preload data.
-## Combine availability within one region and across regions
+## Combine availability within one region and across regions
A combination of availability within and across regions might be driven by these factors:
A combination of availability within and across regions might be driven by these
- The organization isn't willing or able to have global operations affected by a major natural catastrophe that affects a larger region. This was the case for some hurricanes that hit the Caribbean over the past few years. - Regulations that demand distances between primary and secondary sites that are clearly beyond what Azure availability zones can provide.
-In these cases, you can set up what SAP calls an [SAP HANA multitier system replication configuration](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/ca6f4c62c45b4c85a109c7faf62881fc.html) by using HANA system replication. The architecture would look like:
+In these cases, you can set up what SAP calls an [SAP HANA multi-tier system replication configuration](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/ca6f4c62c45b4c85a109c7faf62881fc.html) by using HANA system replication. The architecture would look like:
-![Diagram of three VMs over two regions](./media/sap-hana-availability-two-region/three_vm_HSR_async_2regions_ha_and_dr.PNG)
+![Diagram of three VMs over two regions.](./media/sap-hana-availability-two-region/three_vm_HSR_async_2regions_ha_and_dr.png)
-SAP introduced [multi-target system replication](https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.03/en-US/0b2c70836865414a8c65463180d18fec.html) with HANA 2.0 SPS3. Multi-target system replication brings some advantages in update scenarios. For example, the DR site (Region 2) is not impacted when the secondary HA site is down for maintenance or updates.
+SAP introduced [multi-target system replication](https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.03/en-US/0b2c70836865414a8c65463180d18fec.html) with HANA 2.0 SPS3. Multi-target system replication brings some advantages in update scenarios. For example, the DR site (Region 2) isn't impacted when the secondary HA site is down for maintenance or updates.
You can find out more about HANA multi-target system replication at the [SAP Help Portal](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/ba457510958241889a459e606bbcf3d3.html). Possible architecture with multi-target replication would look like:
-![Diagram of three VMs over two regions milti-target](./media/sap-hana-availability-two-region/saphanaavailability_hana_system_2region_HA_and_DR_multitarget_3VMs.PNG)
+![Diagram of three VMs over two regions multi-target.](./media/sap-hana-availability-two-region/saphanaavailability_hana_system_2region_HA_and_DR_multitarget_3VMs.png)
If the organization has requirements for high availability readiness in the second (DR) Azure region, then the architecture would look like:
-![Diagram that shows an organization that has requirements for high availability readiness in the second (DR) Azure region.](./media/sap-hana-availability-two-region/saphanaavailability_hana_system_2region_HA_and_DR_multitarget_4VMs.PNG)
+![Diagram that shows an organization that has requirements for high availability readiness in the second (DR) Azure region.](./media/sap-hana-availability-two-region/saphanaavailability_hana_system_2region_HA_and_DR_multitarget_4VMs.png)
Using logreplay as operation mode, this configuration provides an RPO=0, with low RTO, within the primary region. The configuration also provides decent RPO if a move to the second region is involved. The RTO times in the second region are dependent on whether data is preloaded. Many customers use the VM in the secondary region to run a test system. In that use case, the data can't be preloaded. > [!IMPORTANT]
-> The operation modes between the different tiers need to be homogeneous. You **can't** use logreply as operation mode between tier 1 and tier 2 and delta_datashipping to supply tier 3. You can only choose the one or the other operation mode that needs to be consistent for all tiers. Since delta_datashipping is not suitable to give you an RPO=0, the only reasonable operation mode for such a multi-tier configuration remains logreplay. For details about operation modes and some restrictions, see the SAP article [Operation modes for SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/627bd11e86c84ec2b9fcdf585d24011c.html).
+> The operation modes between the different tiers need to be homogeneous. You **can't** use logreplay as the operation mode between tier 1 and tier 2 and delta_datashipping to supply tier 3. You can choose only one operation mode, and it must be consistent across all tiers. Because delta_datashipping isn't suitable for achieving an RPO=0, the only reasonable operation mode for such a multi-tier configuration remains logreplay. For details about operation modes and some restrictions, see the SAP article [Operation modes for SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/627bd11e86c84ec2b9fcdf585d24011c.html).
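To make the constraint concrete, a hedged sketch of registering such a multi-tier chain with one consistent logreplay operation mode could look like the following `hdbnsutil` calls, run as the `<sid>adm` user. Host names, the instance number, and site names are placeholders, and the secondary instances are assumed to be stopped while they're registered.

```
# Tier 1 (primary, Region 1): enable system replication.
hdbnsutil -sr_enable --name=SITE1

# Tier 2 (HA secondary, Region 1): synchronous replication, logreplay operation mode.
hdbnsutil -sr_register --remoteHost=hana-tier1 --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay --name=SITE2

# Tier 3 (DR secondary, Region 2): asynchronous replication, same logreplay operation mode.
hdbnsutil -sr_register --remoteHost=hana-tier2 --remoteInstance=00 \
  --replicationMode=async --operationMode=logreplay --name=SITE3
```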
## Next steps
sap Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-one-region.md
Previously updated : 07/27/2018 Last updated : 06/19/2023 - # SAP HANA availability within one Azure region
-This article describes several availability scenarios within one Azure region. Azure has many regions, spread throughout the world. For the list of Azure regions, see [Azure regions](https://azure.microsoft.com/regions/). For deploying SAP HANA on VMs within one Azure region, Microsoft offers deployment of a single VM with a HANA instance. For increased availability, you can deploy two VMs with two HANA instances within an [Azure availability set](../../virtual-machines/windows/tutorial-availability-sets.md) that uses HANA system replication for availability.
-Currently, Azure is offering [Azure Availability Zones](../../availability-zones/az-overview.md). This article does not describe Availability Zones in detail. But, it includes a general discussion about using Availability Sets versus Availability Zones.
+This article describes several availability scenarios for SAP HANA within one Azure region. Azure has many regions, spread throughout the world. For the list of Azure regions, see [Azure regions](https://azure.microsoft.com/regions/). For deploying SAP HANA on VMs within one Azure region, Microsoft offers deployment of a single VM with a HANA instance. For increased availability, you can deploy two VMs with two HANA instances using either a [flexible scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with FD=1, [availability zones](./high-availability-zones.md) or an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md) that uses HANA system replication for availability.
-Azure regions where Availability Zones are offered have multiple datacenters. The datacenters are independent in the supply of power source, cooling, and network. The reason for offering different zones within a single Azure region is to deploy applications across two or three Availability Zones that are offered. Deploying across zones, issues in power and networking affecting only one Azure Availability Zone infrastructure, your application deployment within an Azure region is still functional. Some reduced capacity might occur. For example, VMs in one zone might be lost, but VMs in the other two zones would still be up and running.
-
-An Azure Availability Set is a logical grouping capability that helps ensure that the VM resources that you place within the Availability Set are failure-isolated from each other when they are deployed within an Azure datacenter. Azure ensures that the VMs you place within an Availability Set run across multiple physical servers, compute racks, storage units, and network switches. In some Azure documentation, this configuration is referred to as placements in different [update and fault domains](../../virtual-machines/availability.md). These placements usually are within an Azure datacenter. Assuming that power source and network issues would affect the datacenter that you are deploying, all your capacity in one Azure region would be affected.
+Azure regions that provide Availability Zones consist of multiple data centers, each with its own power source, cooling, and network infrastructure. The purpose of offering different zones within a single Azure region is to enable the deployment of applications across two or three available Availability Zones. By distributing your application deployment across zones, any power or networking issues affecting a specific Azure Availability Zone infrastructure wouldn't fully disrupt your application's functionality within the Azure region. While there might be some reduced capacity, such as the potential loss of VMs in one zone, the VMs in the remaining zones would continue to operate without interruption. To set up two HANA instances in separate VMs spanning across different zones, you have the option to deploy VMs using either the [flexible scale set with FD=1](./virtual-machine-scale-set-sap-deployment-guide.md) or [availability zones](./high-availability-zones.md) deployment option.
+
+For increased availability within a region, it's advised to deploy two VMs with two HANA instances using an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md). An Azure Availability Set is a logical grouping capability that ensures that the VM resources configured within the Availability Set are failure-isolated from each other when they're deployed within an Azure datacenter. Azure ensures that the VMs you place within an Availability Set run across multiple physical servers, compute racks, storage units, and network switches. In some Azure documentation, this configuration is referred to as placements in different [update and fault domains](../../virtual-machines/availability.md). These placements usually are within an Azure datacenter. Assuming that power source and network issues would affect the datacenter that you're deploying into, all your capacity in one Azure region would be affected.
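As a hedged Azure CLI sketch of the deployment options named above, the following creates either an availability set or a zonal flexible scale set with FD=1 that the two HANA VMs can then be placed into. The names, region, and zone numbers are placeholders, the VM creation itself (image, size, disks) is omitted, and the exact parameters for a flexible scale set without a scaling profile can vary by CLI version.

```
RG=hana-rg
LOC=westeurope

# Option 1: availability set that keeps the two HANA VMs in separate fault/update domains.
az vm availability-set create \
  --resource-group "$RG" --name hana-avset \
  --platform-fault-domain-count 2 --platform-update-domain-count 5

# Option 2: flexible scale set with FD=1 spanning availability zones.
az vmss create \
  --resource-group "$RG" --name hana-flex --location "$LOC" \
  --orchestration-mode Flexible --platform-fault-domain-count 1 --zones 1 2

# The HANA VMs are then created with either --availability-set hana-avset or --vmss hana-flex.
```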
The placement of datacenters that represent Azure Availability Zones is a compromise between delivering acceptable network latency between services deployed in different zones, and a distance between datacenters. Natural catastrophes ideally wouldn't affect the power, network supply, and infrastructure for all Availability Zones in this region. However, as monumental natural catastrophes have shown, Availability Zones might not always provide the availability that you want within one region. Think about Hurricane Maria that hit the island of Puerto Rico on September 20, 2017. The hurricane basically caused a nearly 100 percent blackout on the 90-mile-wide island. ## Single-VM scenario
-In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use Azure Premium Storage to host the operating system disk and all your data disks. The Azure uptime SLA of 99.9 percent and the SLAs of other Azure components is sufficient for you to fulfill your availability SLAs for your customers. In this scenario, you have no need to leverage an Azure Availability Set for VMs that run the DBMS layer. In this scenario, you rely on two different features:
+In a single-VM scenario, you create an Azure VM for the SAP HANA instance. You use Azure Premium Storage to host the operating system disk and all your data disks. The Azure uptime SLA of 99.9 percent and the SLAs of other Azure components is sufficient for you to fulfill your availability SLAs for your customers. In this scenario, you have no need to use an Azure Availability Set for VMs that run the DBMS layer. In this scenario, you rely on two different features:
-- Azure VM auto-restart (also referred to as Azure service healing)-- SAP HANA auto-restart
+- Azure VM auto restart (also referred to as Azure service healing)
+- SAP HANA auto restart
Azure VM auto restart, or service healing, is a functionality in Azure that works on two levels:
Azure VM auto restart, or service healing, is a functionality in Azure that work
A health check functionality monitors the health of every VM that's hosted on an Azure server host. If a VM falls into a non-healthy state, a reboot of the VM can be initiated by the Azure host agent that checks the health of the VM. The fabric controller checks the health of the host by checking many different parameters that might indicate issues with the host hardware. It also checks on the accessibility of the host via the network. An indication of problems with the host can lead to the following events: - If the host signals a bad health state, a reboot of the host and a restart of the VMs that were running on the host is triggered.-- If the host is not in a healthy state after successful reboot, a redeployment of the VMs that were originally on the now unhealthy node onto an healthy host server is initiated. In this case, the original host is marked as not healthy. It won't be used for further deployments until it's cleared or replaced.-- If the unhealthy host has problems during the reboot process, an immediate restart of the VMs on an healthy host is triggered.
+- If the host isn't in a healthy state after successful reboot, a redeployment of the VMs that were originally on the now unhealthy node onto a healthy host server is initiated. In this case, the original host is marked as not healthy. It won't be used for further deployments until it's cleared or replaced.
+- If the unhealthy host has problems during the reboot process, an immediate restart of the VMs on a healthy host is triggered.
-With the host and VM monitoring provided by Azure, Azure VMs that experience host issues are automatically restarted on a healthy Azure host.
+With the host and VM monitoring provided by Azure, Azure VMs that experience host issues are automatically restarted on a healthy Azure host.
->[!IMPORTANT]
->Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. The default settings of the commonly used Linux releases, are not automatically restarting VMs or server where the Linux kernel is in panic state. Instead the default foresees to keep the OS in kernel panic state to be able to attach a kernel debugger to analyze. Azure is honoring that behavior by not automatically restarting a VM with the guest OS in a such a state. Assumption is that such occurrences are extremely rare. You could overwrite the default behavior to enable a restart of the VM. To change the default behavior enable the parameter 'kernel.panic' in /etc/sysctl.conf. The time you set for this parameter is in seconds. Common recommended values are to wait for 20-30 seconds before triggering the reboot through this parameter. For more information, see [sysctl.conf](https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf).
+> [!IMPORTANT]
+> Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. By default, the commonly used Linux releases don't automatically restart VMs or servers where the Linux kernel is in a panic state. Instead, the default is to keep the OS in the kernel panic state so that a kernel debugger can be attached for analysis. Azure honors that behavior by not automatically restarting a VM with the guest OS in such a state. The assumption is that such occurrences are extremely rare. You could override the default behavior to enable a restart of the VM. To change the default behavior, set the parameter 'kernel.panic' in /etc/sysctl.conf. The time you set for this parameter is in seconds. Common recommended values are to wait for 20-30 seconds before triggering the reboot through this parameter. For more information, see [sysctl.conf](https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf).
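For example, a minimal sketch of enabling the reboot-on-panic behavior described above could look like this; 20 seconds is just one of the commonly recommended values.

```
# Append the setting so the kernel reboots 20 seconds after a panic (run with root privileges).
echo "kernel.panic = 20" | sudo tee -a /etc/sysctl.conf

# Apply the change without a reboot and verify the active value.
sudo sysctl -p
sudo sysctl kernel.panic
```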
-The second feature that you rely on in this scenario is the fact that the HANA service that runs in a restarted VM starts automatically after the VM reboots. You can set up [HANA service auto-restart](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/cf10efba8bea4e81b1dc1907ecc652d3.html) through the watchdog services of the various HANA services.
+The second feature that you rely on in this scenario is the fact that the HANA service that runs in a restarted VM starts automatically after the VM reboots. You can set up [HANA service auto restart](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/cf10efba8bea4e81b1dc1907ecc652d3.html) through the watchdog services of the various HANA services.
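The watchdog-based restart of individual HANA services is configured inside SAP HANA itself, as described in the linked SAP documentation. A related setting that's often combined with it is the `Autostart` parameter in the instance profile, which lets `sapstartsrv` start the whole instance when the VM boots. A hedged sketch, with SID `HDB`, instance `00`, and the host name as placeholders:

```
# Instance profile path follows the pattern /usr/sap/<SID>/SYS/profile/<SID>_HDB<nn>_<hostname>.
PROFILE=/usr/sap/HDB/SYS/profile/HDB_HDB00_hanavm1

# Show the current value, then switch Autostart from 0 to 1.
grep -i '^Autostart' "$PROFILE"
sudo sed -i 's/^Autostart = 0/Autostart = 1/' "$PROFILE"
```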
-You might improve this single-VM scenario by adding a cold failover node to an SAP HANA configuration. In the SAP HANA documentation, this setup is called [host auto-failover](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/ae60cab98173431c97e8724856641207.html). This configuration might make sense in an on-premises deployment situation where the server hardware is limited, and you dedicate a single-server node as the host auto-failover node for a set of production hosts. But in Azure, where the underlying infrastructure of Azure provides a healthy target server for a successful VM restart, it doesn't make sense to deploy SAP HANA host auto-failover. Because of Azure service healing, there is no reference architecture that foresees a standby node for HANA host auto-failover.
+You might improve this single-VM scenario by adding a cold failover node to an SAP HANA configuration. In the SAP HANA documentation, this setup is called [host autofailover](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/ae60cab98173431c97e8724856641207.html). This configuration might make sense in an on-premises deployment situation where the server hardware is limited, and you dedicate a single-server node as the host autofailover node for a set of production hosts. But in Azure, where the underlying infrastructure of Azure provides a healthy target server for a successful VM restart, it doesn't make sense to deploy SAP HANA host autofailover. Because of Azure service healing, there's no reference architecture that foresees a standby node for HANA host autofailover.
### Special case of SAP HANA scale-out configurations in Azure
-High availability for SAP HANA scale-out configurations is relying on service healing of Azure VMs and the restart of the SAP HANA instance as the VM is up and running again. High availability architectures based on HANA System Replication are going to be introduced at a later time.
+High availability architectures based on a standby node or HANA System Replication can be found in the following documents. In cases where standby nodes or HANA system replication high availability isn't used in SAP HANA scale-out configurations, you can depend on Azure VMs' service healing capabilities and the automatic restart of the SAP HANA instance once the VM is operational again.
+
+- RedHat Enterprise Linux
+ - [High availability of SAP HANA scale-out system with HSR on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md).
+ - [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md).
+- SUSE Linux Enterprise Server
+ - [High availability of SAP HANA scale-out system with HSR on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md).
+ - [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md).
## Availability scenarios for two different VMs
-If you use two Azure VMs within an Azure Availability Set, you can increase the uptime between these two VMs if they're placed in an Azure Availability Set within one Azure region. The base setup in Azure would look like:
+To ensure the availability of the HANA system within a specific region, you have the option to configure two VMs across the availability zones of the region or within the region. To achieve this objective, you can configure the VMs by using the flexible scale set, availability zones, or availability set deployment option. The base setup in Azure would look like:
-![Diagram of two VMs with all layers](./media/sap-hana-availability-one-region/two_vm_all_shell.PNG)
+![Diagram of two VMs with all layers.](./media/sap-hana-availability-one-region/two_vm_all_shell.png)
-To illustrate the different availability scenarios, a few of the layers in the diagram are omitted. The diagram shows only layers that depict VMs, hosts, Availability Sets, and Azure regions. Azure Virtual Network instances, resource groups, and subscriptions don't play a role in the scenarios described in this section.
+To illustrate the different SAP HANA availability scenarios, a few of the layers in the diagram are omitted. The diagram shows only layers that depict VMs, hosts, Availability Sets, and Azure regions. Azure Virtual Network instances, resource groups, and subscriptions don't play a role in the scenarios described in this section.
### Replicate backups to a second virtual machine
-One of the most rudimentary setups is to use backups. In particular, you might have transaction log backups shipped from one VM to another Azure VM. You can choose the Azure Storage type. In this setup, you are responsible for scripting the copy of scheduled backups that are conducted on the first VM to the second VM. If you need to use the second VM instances, you must restore the full, incremental/differential, and transaction log backups to the point that you need.
+One of the most rudimentary setups is to use backups. In particular, you might have transaction log backups shipped from one VM to another Azure VM. You can choose the Azure Storage type. In this setup, you're responsible for scripting the copy of scheduled backups that are conducted on the first VM to the second VM. If you need to use the second VM instances, you must restore the full, incremental/differential, and transaction log backups to the point that you need.
The architecture looks like:
-![Diagram that shows the architecture of two VMs with storage replication.](./media/sap-hana-availability-one-region/two_vm_storage_replication.PNG)
+![Diagram that shows the architecture of two VMs with storage replication.](./media/sap-hana-availability-one-region/two_vm_storage_replication.png)
-This setup is not well suited to achieving great Recovery Point Objective (RPO) and Recovery Time Objective (RTO) times. RTO times especially would suffer due to the need to fully restore the complete database by using the copied backups. However, this setup is useful for recovering from unintended data deletion on the main instances. With this setup, at any time, you can restore to a certain point in time, extract the data, and import the deleted data into your main instance. Hence, it might make sense to use a backup copy method in combination with other high-availability functionality.
+This setup isn't well suited to achieving great Recovery Point Objective (RPO) and Recovery Time Objective (RTO) times. RTO times especially would suffer due to the need to fully restore the complete database by using the copied backups. However, this setup is useful for recovering from unintended data deletion on the main instances. With this setup, at any time, you can restore to a certain point in time, extract the data, and import the deleted data into your main instance. Hence, it might make sense to use a backup copy method in combination with other high-availability functionality.
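The copy of scheduled backups mentioned above could be scripted in many ways; one hedged sketch is a nightly cron entry that pushes new backup files to the second VM over SSH. The backup path, target host, and schedule are assumptions.

```
# /etc/cron.d/copy-hana-backups (hypothetical): push new backup files to the second VM at 02:00.
# rsync only transfers files that changed since the previous run.
0 2 * * * root rsync -aH --partial /hana/backup/ hanabackup-vm:/hana/backupcopy/
```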
While backups are being copied, you might be able to use a smaller VM than the main VM that the SAP HANA instance is running on. Keep in mind that you can attach a smaller number of VHDs to smaller VMs. For information about the limits of individual VM types, see [Sizes for Linux virtual machines in Azure](../../virtual-machines/sizes.md). ### SAP HANA system replication without automatic failover
-The scenarios described in this section use SAP HANA system replication. For the SAP documentation, see [System replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/b74e16a9e09541749a745f41246a065e.html). Scenarios without automatic failover are not common for configurations within one Azure region. A configuration without automatic failover, though avoiding a Pacemaker setup, obligates you to monitor and failover manually. Since this takes and efforts as well, most customers are relying on Azure service healing instead. There are some edge cases where this configuration might help in terms of failure scenarios. Or, in some cases, a customer might want to realize more efficiency.
+The scenarios described in this section use SAP HANA system replication. For the SAP documentation, see [System replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/b74e16a9e09541749a745f41246a065e.html). Scenarios without automatic failover aren't common for configurations within one Azure region. A configuration without automatic failover, though avoiding a Pacemaker setup, obligates you to monitor and fail over manually. Because this takes time and effort as well, most customers rely on Azure service healing instead. There are some edge cases where this configuration might help in terms of failure scenarios. Or, in some cases, a customer might want to realize more efficiency.
#### SAP HANA system replication without auto failover and without data preload In this scenario, you use SAP HANA system replication to move data in a synchronous manner to achieve an RPO of 0. On the other hand, you have a long enough RTO that you don't need either failover or data preloading into the HANA instance cache. In this case, it's possible to achieve further economy in your configuration by taking the following actions: - Run another SAP HANA instance in the second VM. The SAP HANA instance in the second VM takes most of the memory of the virtual machine. In case a failover to the second VM, you need to shut down the running SAP HANA instance that has the data fully loaded in the second VM, so that the replicated data can be loaded into the cache of the targeted HANA instance in the second VM.-- Use a smaller VM size on the second VM. If a failover occurs, you have an additional step before the manual failover. In this step, you resize the VM to the size of the source VM.
-
+- Use a smaller VM size on the second VM. If a failover occurs, you have an additional step before the manual failover. In this step, you resize the VM to the size of the source VM.
+ The scenario looks like:
-![Diagram of two VMs with storage replication](./media/sap-hana-availability-one-region/two_vm_HSR_sync_nopreload.PNG)
+![Diagram of two VMs with storage replication.](./media/sap-hana-availability-one-region/two_vm_HSR_sync_nopreload.png)
> [!NOTE] > Even if you don't use data preload in the HANA system replication target, you need at least 64 GB of memory. You also need enough memory in addition to 64 GB to keep the rowstore data in the memory of the target instance.
In this scenario, data that's replicated to the HANA instance in the second VM i
### SAP HANA system replication with automatic failover
-In the standard and most common availability configuration within one Azure region, two Azure VMs running Linux with HA packages have a failover cluster defined. The HA Linux cluster is based on the `Pacemaker` framework using [SLES](./high-availability-guide-suse-pacemaker.md) or [RHEL](./high-availability-guide-rhel-pacemaker.md) in conjunction with a `fencing device` [SLES](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) or [RHEL](./high-availability-guide-rhel-pacemaker.md#create-fencing-device) as an example.
+In the standard and most common availability configuration within one Azure region, two Azure VMs running Linux with HA packages have a failover cluster defined. The HA Linux cluster is based on the `Pacemaker` framework using [SLES](./high-availability-guide-suse-pacemaker.md) or [RHEL](./high-availability-guide-rhel-pacemaker.md) with a `fencing device` [SLES](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) or [RHEL](./high-availability-guide-rhel-pacemaker.md#create-fencing-device) as an example.
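The SLES and RHEL guides linked above cover the actual cluster setup; once it's in place, a quick hedged way to check the cluster and the SAP HANA resources is:

```
# SLES (crmsh): overall cluster, node, and resource state.
sudo crm status

# RHEL (pcs): equivalent view.
sudo pcs status

# Continuously refreshing view, useful while testing a failover (-r also lists inactive resources).
sudo crm_mon -r
```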
From an SAP HANA perspective, the replication mode that's used is synced and an automatic failover is configured. In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a synchronous stream of change records from the primary SAP HANA instance. As transactions are committed by the application at the HANA primary node, the primary HANA node waits to confirm the commit to the application until the secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two synchronous replication modes. For details and for a description of differences between these two synchronous replication modes, see the SAP article [Replication modes for SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c039a1a5b8824ecfa754b55e0caffc01.html). The overall configuration looks like:
-![Diagram of two VMs with storage replication and failover](./media/sap-hana-availability-one-region/two_vm_HSR_sync_auto_pre_preload.PNG)
+![Diagram of two VMs with storage replication and failover.](./media/sap-hana-availability-one-region/two_vm_HSR_sync_auto_pre_preload.png)
-You might choose this solution because it enables you to achieve an RPO=0 and an low RTO. Configure the SAP HANA client connectivity so that the SAP HANA clients use the virtual IP address to connect to the HANA system replication configuration. Such a configuration eliminates the need to reconfigure the application if a failover to the secondary node occurs. In this scenario, the Azure VM SKUs for the primary and secondary VMs must be the same.
+You might choose this solution because it enables you to achieve an RPO=0 and a low RTO. Configure the SAP HANA client connectivity so that the SAP HANA clients use the virtual IP address to connect to the HANA system replication configuration. Such a configuration eliminates the need to reconfigure the application if a failover to the secondary node occurs. In this scenario, the Azure VM SKUs for the primary and secondary VMs must be the same.
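For the client side of that setup, a hedged example of pointing an SAP HANA client at the cluster's virtual hostname with `hdbuserstore` could look like the following. The key name, virtual hostname, user, and port are placeholders; 30015 assumes instance 00, and tenant databases may use a different SQL port.

```
# Store the connection under a key that applications reference instead of a physical host name.
# -i prompts interactively for the password instead of putting it on the command line.
hdbuserstore -i SET HANAVIP "hana-vip.contoso.internal:30015" MONITORUSER

# Verify what is stored for the key.
hdbuserstore LIST HANAVIP
```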
## Next steps
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
This example collects events for:
1. To verify that the connector is installed correctly, run the troubleshooting script with this command: ```
- sudo wget -O cef_AMA_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_AMA_troubleshoot.py&&sudo python cef_AMA_troubleshoot.py
+ sudo wget -O Sentinel_AMA_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Sentinel_AMA_troubleshoot.py&&sudo python Sentinel_AMA_troubleshoot.py
``` ## Next steps
sentinel Normalization Schema Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-authentication.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="targetdomain"></a>**TargetDomain** | Recommended | String | The domain of the target device.<br><br>Example: `Contoso` | | <a name="targetdomaintype"></a>**TargetDomainType** | Conditional | Enumerated | The type of [TargetDomain](#targetdomain). For a list of allowed values and further information refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [TargetDomain](#targetdomain) is used. | | **TargetFQDN** | Optional | String | The target device hostname, including domain information when available. <br><br>Example: `Contoso\DESKTOP-1282V4D` <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [TargetDomainType](#targetdomaintype) reflects the format used. |
-| <a name = "dstdescription"></a>**DstDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. |
+| <a name = "targetdescription"></a>**TargetDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. |
| <a name="targetdvcid"></a>**TargetDvcId** | Optional | String | The ID of the target device. If multiple IDs are available, use the most important one, and store the others in the fields `TargetDvc<DvcIdType>`. <br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` | | <a name="targetdvcscopeid"></a>**TargetDvcScopeId** | Optional | String | The cloud platform scope ID the device belongs to. **TargetDvcScopeId** map to a subscription ID on Azure and to an account ID on AWS. | | <a name="targetdvcscope"></a>**TargerDvcScope** | Optional | String | The cloud platform scope the device belongs to. **TargetDvcScope** map to a subscription ID on Azure and to an account ID on AWS. |
These are the changes in version 0.1.2 of the schema:
- Added the fields `ActorScope`, `TargetUserScope`, `SrcDvcScopeId`, `SrcDvcScope`, `TargetDvcScopeId`, `TargetDvcScope`, `DvcScopeId`, and `DvcScope`. These are the changes in version 0.1.3 of the schema:-- Added the fields `SrcPortNumber`, `ActorOriginalUserType`, `ActorScopeId`, `TargetOriginalUserType`, `TargetUserScopeId`, `SrcDescription`, `SrcRiskLevel`, `SrcOriginalRiskLevel`, and `DstDescription`.
+- Added the fields `SrcPortNumber`, `ActorOriginalUserType`, `ActorScopeId`, `TargetOriginalUserType`, `TargetUserScopeId`, `SrcDescription`, `SrcRiskLevel`, `SrcOriginalRiskLevel`, and `TargetDescription`.
- Added inspection fields - Added target system geo-location fields.
service-bus-messaging Service Bus Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-ip-filtering.md
This feature is helpful in scenarios in which Azure Service Bus should be only a
The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any **supported protocol** (AMQP (5671) and HTTPS (443)). Any connection attempt from an IP address that doesn't match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action. ## Important points-- Firewalls and Virtual Networks are supported only in the **premium** tier of Service Bus. If upgrading to the **premier** tier isn't an option, we recommend that you keep the Shared Access Signature (SAS) token secure and share with only authorized users. For information about SAS authentication, see [Authentication and authorization](service-bus-authentication-and-authorization.md#shared-access-signature).
+- Virtual Networks are supported only in the **premium** tier of Service Bus. If upgrading to the **premium** tier isn't an option, it's possible to use IP firewall rules. We recommend that you keep the Shared Access Signature (SAS) token secure and share it with only authorized users. For information about SAS authentication, see [Authentication and authorization](service-bus-authentication-and-authorization.md#shared-access-signature).
- Specify **at least one IP firewall rule or virtual network rule** for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network. If there are no IP and virtual network rules, the namespace can be accessed over the public internet (using the access key). - Implementing firewall rules can prevent other Azure services from interacting with Service Bus. As an exception, you can allow access to Service Bus resources from certain **trusted services** even when IP filtering is enabled. For a list of trusted services, see [Trusted services](#trusted-microsoft-services).
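If you manage the namespace from the command line, the following is a minimal sketch that adds an allow rule for a single CIDR range. The resource group and namespace names are placeholders, and command availability varies by Azure CLI version, so check `az servicebus namespace network-rule --help` first.

```azurecli
# Allow traffic from a specific address range to the namespace.
az servicebus namespace network-rule add \
    --resource-group <resource-group-name> \
    --namespace-name <namespace-name> \
    --ip-address 10.1.1.0/24 \
    --action Allow
```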
The IP firewall rules are applied at the Service Bus namespace level. Therefore,
- Azure App Service - Azure Functions
+> [!NOTE]
+> You see the **Networking** tab only for **premium** namespaces. To set IP firewall rules for the other tiers, use [Azure Resource Manager templates](#use-resource-manager-template).
+ ## Use Azure portal When creating a namespace, you can either allow public only (from all networks) or private only (only via private endpoints) access to the namespace. Once the namespace is created, you can allow access from specific IP addresses or from specific virtual networks (using network service endpoints).
site-recovery Vmware Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-common-questions.md
This article answers common questions that might come up when you deploy disaste
### How do I use the classic experience in the Recovery Services vault rather than the modernized experience?
-Moving to the classic experience in a newly created Recovery Services is not possible as it will be [deprecated](vmware-physical-azure-classic-deprecation.md) in March 2026. All new Recovery Services vaults will be using the modernized experience.
+Moving to the classic experience in a newly created Recovery Services vault isn't possible because the classic experience will be [deprecated](vmware-physical-azure-classic-deprecation.md) in March 2026. All new Recovery Services vaults use the modernized experience.
### Can I migrate to the modernized experience?
-All VMware VMs or Physical servers which are being replicated using the classic experience can be migrated to the modernized experience. Check the details [here](move-from-classic-to-modernized-vmware-disaster-recovery.md) and follow the [tutorial](how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md).
+All VMware VMs or Physical servers that are being replicated using the classic experience can be migrated to the modernized experience. Check the details [here](move-from-classic-to-modernized-vmware-disaster-recovery.md) and follow the [tutorial](how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md).
### What do I need for VMware VM disaster recovery?
Managed disks are charged slightly differently from storage accounts. [Learn mor
### Is there any difference in cost when replicating to General Purpose v2 storage account?
-You will typically see an increase in the transactions cost incurred on GPv2 storage accounts since Azure Site Recovery is transactions heavy. [Read more](../storage/common/storage-account-upgrade.md#pricing-and-billing) to estimate the change.
+You'll typically see an increase in the transactions cost incurred on GPv2 storage accounts since Azure Site Recovery is transactions heavy. [Read more](../storage/common/storage-account-upgrade.md#pricing-and-billing) to estimate the change.
## Mobility service
Replication of new VMs to a storage account is available only by using PowerShel
### Can I change the managed-disk type after a machine is protected?
-Yes, you can easily [change the type of managed disk](../virtual-machines/windows/convert-disk-storage.md) for ongoing replications. Before changing the type, ensure that no shared access signature URL is generated on the managed disk:
+Yes, you can easily [change the type of managed disk](../virtual-machines/disks-convert-types.md) for ongoing replications. Before changing the type, ensure that no shared access signature URL is generated on the managed disk:
1. Go to the **Managed Disk** resource on the Azure portal and check whether you have a shared access signature URL banner on the **Overview** blade. 1. If the banner is present, select it to cancel the ongoing export.
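If you prefer the Azure CLI over the portal, the following is a minimal sketch of switching a managed disk to another type. It assumes the disk is unattached or its VM is deallocated; the disk and resource group names are placeholders.

```azurecli
# Change the disk type; allowed values include Standard_LRS, StandardSSD_LRS, and Premium_LRS.
az disk update \
    --resource-group <resource-group-name> \
    --name <managed-disk-name> \
    --sku Premium_LRS
```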
Site Recovery generates crash-consistent recovery points every 5 minutes.
### Can I change an already replicating machine from one to another Recovery Services vault?
-Switching Recovery Services vaults, when the replication is already ongoing, is not supported. To do so, replication will need to be disabled and enabled again. Additionally, the mobility service agent, installed on the source machine, will need to be unconfigured so that it can be configured to a new vault. Use the below commands to perform the unregistration -
+Switching Recovery Services vaults while replication is already ongoing isn't supported. To switch vaults, disable replication and then enable it again. Additionally, the mobility service agent installed on the source machine needs to be unconfigured so that it can be configured for the new vault. Use the following commands to unregister the agent:
For Windows machines -
For Linux machines -
### My version of the Mobility services agent or configuration server is old, and my upgrade failed. What do I do?
-Site Recovery follows the N-4 support model. [Learn more](./service-updates-how-to.md#support-statement-for-azure-site-recovery) about how to upgrade from very old versions.
+Site Recovery follows the N-4 support model. [Learn more](./service-updates-how-to.md#support-statement-for-azure-site-recovery) about how to upgrade from old versions.
### Where can I find the release notes and update rollups for Azure Site Recovery?
We recommend taking regular scheduled backups of the configuration server.
### When I'm setting up the configuration server, can I download and install MySQL manually?
-Yes. Download MySQL and place it in the C:\Temp\ASRSetup folder. Then, install it manually. When you set up the configuration server VM and accept the terms, MySQL will be listed as **Already installed** in **Download and install**.
+Yes. Download MySQL and place it in the C:\Temp\ASRSetup folder. Then, install it manually. When you set up the configuration server VM and accept the terms, MySQL is listed as **Already installed** in **Download and install**.
### Can I avoid downloading MySQL but let Site Recovery install it?
-Yes. Download the MySQL installer and place it in the C:\Temp\ASRSetup folder. When you set up the configuration server VM, accept the terms and select **Download and install**. The portal will use the installer that you added to install MySQL.
+Yes. Download the MySQL installer and place it in the C:\Temp\ASRSetup folder. When you set up the configuration server VM, accept the terms and select **Download and install**. The portal uses the installer that you added to install MySQL.
### Can I use the configuration server VM for anything else?
In the Recovery Services vault, select **Configuration Servers** in **Site Recov
### Can a single configuration server be used to protect multiple vCenter instances?
-Yes, a single configuration server can protect VMs across multiple vCenters. There is not limit on how many vCenter instances can be added to the configuration server, however the limits for how many VMs a single configuration server can protect do apply.
+Yes, a single configuration server can protect VMs across multiple vCenter instances. There's no limit on how many vCenter instances can be added to the configuration server; however, the limits on how many VMs a single configuration server can protect still apply.
### Can a single configuration server protect multiple clusters within vCenter?
Crash-consistent recovery points are generated in every five minutes. App-consis
### Do increases in recovery point retention increase storage costs?
-Yes. For example, if you increase retention from 1 day to three days, Site Recovery saves recovery points for an additional 2 days. The added time incurs storage changes. Earlier, it was saving recovery points per hour for one day. Now, it is saving recovery points per two hours for 3 days. Refer [pruning of recovery points](#how-does-the-pruning-of-recovery-points-happen). So additional 12 recovery points are saved. As an example only, if a single recovery point had delta changes of 10 GB, with a per-GB cost of $0.16 per month, then additional charges would be $1.60 × 12 per month.
+Yes. For example, if you increase retention from one day to three days, Site Recovery saves recovery points for an extra two days. The added time incurs storage charges. Earlier, it saved one recovery point per hour for one day. Now, it saves one recovery point every two hours for three days, as described in [pruning of recovery points](#how-does-the-pruning-of-recovery-points-happen). So 12 more recovery points are saved. As an example only, if a single recovery point had delta changes of 10 GB, with a per-GB cost of $0.16 per month, the additional charge would be $1.60 × 12 = $19.20 per month.
### How do I access Azure VMs after failover?
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
When you create a new Azure Spring Apps Enterprise service instance using the Az
The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size here after you've created the service instance. ## Build service on demand
spring-apps How To Enterprise Configure Apm Integration And Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-integration-and-ca-certificates.md
You can enable or disable Tanzu Build Service on an Azure Springs Apps Enterpris
## Prerequisites - An already provisioned Azure Spring Apps Enterprise plan instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md).-- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.49.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`
## Supported scenarios - APM and CA certificates integration
-Tanzu Build Service is enabled by default in Azure Spring Apps Enterprise. If you choose to disable the build service, you can deploy applications but only by using a custom container image. This section provides guidance for both enabled and disabled scenarios.
-
-### [Build service enabled](#tab/enable-build-service)
- Tanzu Build Service uses buildpack binding to integrate with [Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html) and other cloud native buildpacks such as the [ca-certificates](https://github.com/paketo-buildpacks/ca-certificates) buildpack on GitHub.
-Currently, Azure Spring Apps supports the following APM types and CA certificates:
+Currently, Azure Spring Apps supports the following APM types:
- ApplicationInsights - Dynatrace
Currently, Azure Spring Apps supports the following APM types and CA certificate
Azure Spring Apps supports CA certificates for all language family buildpacks, but not all supported APMs. The following table shows the binding types supported by Tanzu language family buildpacks.
-| Buildpack | ApplicationInsights | New Relic | AppDynamics | Dynatrace | ElasticAPM |
-|||--|-|--||
-| Java | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Dotnet | | | | ✔ | |
-| Go | | | | ✔ | |
-| Python | | | | | |
-| NodeJS | | ✔ | ✔ | ✔ | ✔ |
+| Buildpack | ApplicationInsights | New Relic | AppDynamics | Dynatrace | ElasticAPM |
+|-----------|---------------------|-----------|-------------|-----------|------------|
+| Java | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Dotnet | | | | ✔ | |
+| Go | | | | ✔ | |
+| Python | | | | | |
+| NodeJS | | ✔ | ✔ | ✔ | ✔ |
| Web servers | | | | ✔ | | For information about using Web servers, see [Deploy web static files](how-to-enterprise-deploy-static-file.md).
-When you enable the build service, the APM and CA Certificate are integrated with a builder, as described in the [Manage APM integration and CA certificates in Azure Spring Apps](#manage-apm-integration-and-ca-certificates-in-azure-spring-apps) section.
-
-When the build service uses the Azure Spring Apps managed container registry, you can build an application to an image and then deploy it, but only within the current Azure Spring Apps service instance.
+Tanzu Build Service is enabled by default in Azure Spring Apps Enterprise. If you choose to disable the build service, you can deploy applications but only by using a custom container image. This section provides guidance for both build service enabled and disabled scenarios.
-Use the following command to integrate APM and CA certificates into your deployments:
-
-```azurecli
-az spring app deploy \
- --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-instance-name> \
- --name <app-name> \
- --builder <builder-name> \
- --artifact-path <path-to-your-JAR-file>
-```
-
-If you provide your own container registry to use with the build service, you can build an application into a container image and deploy the image to the current or other Azure Spring Apps Enterprise service instances.
-
-Providing your own container registry separates building from deployment. You can use the build command to create or update a build with a builder, then use the deploy command to deploy the container image to the service. In this scenario, you need to specify the APM-required environment variables on deployment.
-
-Use the following command to build an image:
-
-```azurecli
-az spring build-service build <create|update> \
- --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-instance-name> \
- --name <app-name> \
- --builder <builder-name> \
- --artifact-path <path-to-your-JAR-file>
-```
-
-Use the following command to deploy with a container image, using the `--env` parameter to configure runtime environment:
-
-```azurecli
-az spring app deploy \
- --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-instance-name> \
- --name <app-name> \
- --container-image <your-container-image> \
- --container-registry <your-container-registry> \
- --registry-password <your-password> \
- --registry-username <your-username> \
- --env NEW_RELIC_APP_NAME=<app-name> \
- NEW_RELIC_LICENSE_KEY=<your-license-key>
-```
-
-#### Supported APM resources with the build service enabled
+#### Supported APM types
This section lists the supported languages and required environment variables for the APMs that you can use for your integrations.
This section lists the supported languages and required environment variables fo
Environment variables required for buildpack binding: - `connection-string`
- Environment variables required for deploying an app with a custom image:
- - `APPLICATIONINSIGHTS_CONNECTION_STRING`
-
- > [!NOTE]
- > Upper-case keys are allowed, and you can replace underscores (`_`) with hyphens (`-`).
- For other supported environment variables, see [Application Insights Overview](../azure-monitor/app/app-insights-overview.md?tabs=java). - **DynaTrace**
This section lists the supported languages and required environment variables fo
- `TENANTTOKEN` - `CONNECTION_POINT`
- Environment variables required for deploying an app with a custom image:
- - `DT_TENANT`
- - `DT_TENANTTOKEN`
- - `DT_CONNECTION_POINT`
- For other supported environment variables, see [Dynatrace](https://www.dynatrace.com/support/help/shortlink/azure-spring#envvar) - **New Relic**
This section lists the supported languages and required environment variables fo
- `license_key` - `app_name`
- Environment variables required for deploying an app with a custom image:
- - `NEW_RELIC_LICENSE_KEY`
- - `NEW_RELIC_APP_NAME`
- For other supported environment variables, see [New Relic](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables) - **Elastic**
This section lists the supported languages and required environment variables fo
- `application_packages` - `server_url`
- Environment variables required for deploying an app with a custom image:
- - `ELASTIC_APM_SERVICE_NAME`
- - `ELASTIC_APM_APPLICATION_PACKAGES`
- - `ELASTIC_APM_SERVER_URL`
- For other supported environment variables, see [Elastic](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html) - **AppDynamics**
This section lists the supported languages and required environment variables fo
- `controller_ssl_enabled` - `controller_port`
- Environment variables required for deploying an app with a custom image:
- - `APPDYNAMICS_AGENT_APPLICATION_NAME`
- - `APPDYNAMICS_AGENT_TIER_NAME`
- - `APPDYNAMICS_AGENT_NODE_NAME`
- - `APPDYNAMICS_AGENT_ACCOUNT_NAME`
- - `APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY`
- - `APPDYNAMICS_CONTROLLER_HOST_NAME`
- - `APPDYNAMICS_CONTROLLER_SSL_ENABLED`
- - `APPDYNAMICS_CONTROLLER_PORT`
- For other supported environment variables, see [AppDynamics](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties)
-## Use CA certificates
+## Configure APM integration for app builds and deployments
-CA certificates use the [ca-certificates](https://github.com/paketo-buildpacks/ca-certificates) buildpack to support providing CA certificates to the system trust store at build and runtime.
+You can configure APM in Azure Spring Apps in the following two ways:
-In the Azure Spring Apps Enterprise plan, the CA certificates use the **Public Key Certificates** tab on the **TLS/SSL settings** page in the Azure portal, as shown in the following screenshot:
+- Manage APM configurations on the service instance level and bind to app builds and deployments by referring to them. This approach is the recommended way to configure APM.
+- Manage APM configurations via bindings in the builder and bind to app builds and deployments by referring to the builder.
+ > [!NOTE]
+ > This approach is the old way to configure APM, and it's now deprecated. We recommend that you migrate the APM configured in bindings. For more information, see the [Migrate the APM configured in bindings](#migrate-the-apm-configured-in-bindings) section.
-You can configure the CA certificates on the **Edit binding** page. The `succeeded` certificates are shown in the **CA Certificates** list.
+The following sections provide guidance for both of these approaches.
+### Manage APMs on the service instance level (recommended)
-### [Build service disabled](#tab/disable-build-service)
+You can create an APM configuration and bind to app builds and deployments, as explained in the following sections.
-If you disable the build service, you can only deploy an application with a container image. For more information, see [Deploy an application with a custom container image](how-to-deploy-with-custom-container-image.md).
+#### Manage APM configuration in Azure Spring Apps
+
+You can manage APM integration by configuring properties or secrets in the APM configuration.
+
+> [!NOTE]
+> When configuring properties or secrets for APM, use key names without a prefix. For example, don't use a `DT_` prefix for a Dynatrace binding or `APPLICATIONINSIGHTS_` for Application Insights. Tanzu APM buildpacks will transform the key name to the original environment variable name with a prefix.
+
+The following list shows you the Azure CLI commands you can use to manage APM configuration:
+
+- Use the following command to list all the APM configurations:
+
+ ```azurecli
+ az spring apm list \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name>
+ ```
+
+- Use the following command to list all the supported APM types:
+
+ ```azurecli
+ az spring apm list-support-types \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --builder-name <your-builder-name>
+ ```
+
+- Use the following command to create an APM configuration:
+
+ ```azurecli
+ az spring apm create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name> \
+ --type <your-APM-type> \
+ --properties a=b c=d \
+ --secrets e=f g=h
+ ```
+
+- Use the following command to view the details of an APM configuration:
+
+ ```azurecli
+ az spring apm show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name>
+ ```
+
+- Use the following command to change an APM's properties:
+
+ ```azurecli
+ az spring apm update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name> \
+ --type <your-APM-type> \
+ --properties a=b c=d \
+ --secrets e=f2 g=h
+ ```
+
+- Use the following command to enable an APM configuration globally. When you enable an APM configuration globally, all the subsequent builds and deployments use it automatically.
+
+ ```azurecli
+ az spring apm enable-globally \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name>
+ ```
+
+- Use the following command to disable an APM configuration globally. When you disable an APM configuration globally, all the subsequent builds and deployments don't use it.
+
+ ```azurecli
+ az spring apm disable-globally \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name>
+ ```
+
+- Use the following command to list all the APM configurations enabled globally:
+
+ ```azurecli
+ az spring apm list-enabled-globally \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name>
+ ```
+
+- Use the following command to delete an APM configuration.
+
+ ```azurecli
+ az spring apm delete \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name>
+ ```
+
+For more information on the `properties` and `secrets` parameters for your buildpack, see the [Supported Scenarios - APM and CA Certificates Integration](#supported-scenariosapm-and-ca-certificates-integration) section.
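For example, the following sketch creates an Application Insights configuration by using the `connection-string` key listed earlier. The APM name is a placeholder, and the connection string is passed as a secret here.

```azurecli
az spring apm create \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <your-APM-name> \
    --type ApplicationInsights \
    --secrets connection-string=<your-connection-string>
```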
+
+#### Bind to app builds and deployments
+
+For a build service that uses a managed Azure Container Registry, use the following command to integrate APM into your deployments:
+
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --apms <APM-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+When you enable an APM configuration globally, all the subsequent builds and deployments use it automatically, and it's unnecessary to specify the `--apms` parameter. If you want to override the APM configuration enabled globally for a deployment, specify the APM configurations via the `--apms` parameter.
+
+For a build service that uses your own container registry, you can build an application into a container image and deploy the image to the current or other Azure Spring Apps Enterprise service instances.
+
+Providing your own container registry separates building from deployment. You can use the build command to create or update a build with a builder, then use the deploy command to deploy the container image to the service.
+
+Use the following command to build an image and configure APM:
+
+```azurecli
+az spring build-service build <create|update> \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --apms <APM-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+When you enable an APM configuration globally, all the subsequent builds and deployments use it automatically, and it's unnecessary to specify the `--apms` parameter. If you want to override the APM configuration enabled globally for a build, specify the APM configurations via the `--apms` parameter.
+
+Use the following command to deploy the application with the container image built previously and configure APM. You can use the APM configuration enabled globally or use the `--apms` parameter to specify APM configuration.
+
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --container-image <your-container-image> \
+ --container-registry <your-container-registry> \
+ --registry-password <your-password> \
+ --registry-username <your-username> \
+ --apms <your-APM>
+```
+
+When you disable the build service, you can only deploy an application with a container image. For more information, see [Deploy an application with a custom container image](how-to-deploy-with-custom-container-image.md).
You can use multiple instances of Azure Spring Apps Enterprise, where some instances build and deploy images and others only deploy images. Consider the following scenario: -- For one instance, you enable the build service with a user container registry. Then you build from an artifact-file or source-code with APM or CA certificate into a container image and deploy to the current Azure Spring Apps instance or other service instances. For more information, see, the [Build and deploy polyglot applications](how-to-enterprise-deploy-polyglot-apps.md#build-and-deploy-polyglot-applications), section of [How to deploy polyglot apps in Azure Spring Apps Enterprise](How-to-enterprise-deploy-polyglot-apps.md).
+For one instance, you enable the build service with a user container registry. Then, you build from an artifact file or source code with APM or a CA certificate into a container image. You can then deploy to the current Azure Spring Apps instance or other service instances. For more information, see the [Build and deploy polyglot applications](how-to-enterprise-deploy-polyglot-apps.md#build-and-deploy-polyglot-applications) section of [How to deploy polyglot apps in Azure Spring Apps Enterprise](How-to-enterprise-deploy-polyglot-apps.md).
-- In another instance with the build service disabled, you deploy an application with the container image in your registry and also make use of APM and CA certificates.
+In another instance with the build service disabled, you deploy an application with the container image in your registry and also make use of APM.
-Due to the deployment supporting only a custom container image, you must use the `--env` parameter to configure the runtime environment for deployment. The following command provides an example:
+In this scenario, you can use the APM configuration enabled globally or use the `--apms` parameter to specify the APM configuration, as shown in the following example:
```azurecli az spring app deploy \
az spring app deploy \
--container-registry <your-container-registry> \ --registry-password <your-password> \ --registry-username <your-username> \
- --env NEW_RELIC_APP_NAME=<app-name> NEW_RELIC_LICENSE_KEY=<your-license-key>
+ --apms <your-APM>
```
-### Supported APM resources with the build service disabled
+### Manage APMs via bindings in builder
-This section lists the supported languages and required environment variables for the APMs that you can use for your integrations.
+When the build service uses the Azure Spring Apps managed container registry, you can build an application to an image and then deploy it, but only within the current Azure Spring Apps service instance.
-- **Application Insights**
+#### Manage APM configurations via bindings in builder
- Supported languages:
- - Java
+You can manage APM configurations via bindings in builder. For more information, see [Manage bindings in builder in Azure Spring Apps](#manage-bindings-in-builder-in-azure-spring-apps).
- Required runtime environment variables:
- - `APPLICATIONINSIGHTS_CONNECTION_STRING`
+#### Bind to app builds and deployments
- For other supported environment variables, see [Application Insights Overview](../azure-monitor/app/app-insights-overview.md?tabs=java)
+Use the following command to integrate APM into your deployments. The APM is configured via bindings in the builder:
-- **Dynatrace**
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
- Supported languages:
- - Java
- - .NET
- - Go
- - Node.js
- - WebServers
+### Enable Application Insights when creating the service instance
- Required runtime environment variables:
+If you enable Application Insights when creating a service instance, the following conditions apply:
- - `DT_TENANT`
- - `DT_TENANTTOKEN`
- - `DT_CONNECTION_POINT`
+- If you use a managed Azure Container Registry for the build service, Application Insights is configured as a binding in the default builder.
+- If you use your own container registry for the build service or you disable the build service, a default APM configuration is created for Application Insights. The default APM is enabled globally and all the subsequent builds and deployments use it automatically.
- For other supported environment variables, see [Dynatrace](https://www.dynatrace.com/support/help/shortlink/azure-spring#envvar)
+## Configure CA certificates for app builds and deployments
-- **New Relic**
+You can configure CA certificates in Azure Spring Apps in the following two ways:
- Supported languages:
- - Java
- - Node.js
+- You can manage public certificates in the TLS/SSL settings and bind to app builds and deployments by referring to them. This approach is the recommended way to configure CA certificates.
+- You can manage public certificates in the TLS/SSL settings and bind CA certificates via bindings in the builder. For more information, see the [Manage bindings in builder in Azure Spring Apps](#manage-bindings-in-builder-in-azure-spring-apps) section.
- Required runtime environment variables:
- - `NEW_RELIC_LICENSE_KEY`
- - `NEW_RELIC_APP_NAME`
+ > [!NOTE]
+ > This approach is the old way to configure CA certificates and it's deprecated. We recommend that you migrate the CA certificate configured in bindings. For more information, see the [Migrate CA certificate configured in bindings](#migrate-ca-certificate-configured-in-bindings) section.
- For other supported environment variables, see [New Relic](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables)
+To manage public certificates on the service instance level, see the [Import a certificate](how-to-use-tls-certificate.md#import-a-certificate) section of [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md). Then, follow one of the approaches described in the following sections to bind CA certificates to app builds and deployments.
-- **ElasticAPM**
+### Bind CA certificates to app builds and deployments
- Supported languages:
- - Java
- - Node.js
+For information on how to bind CA certificates to deployments, see the [Load a certificate](how-to-use-tls-certificate.md#load-a-certificate) section of [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md). Then, use the following instructions to bind to app builds.
- Required runtime environment variables:
- - `ELASTIC_APM_SERVICE_NAME`
- - `ELASTIC_APM_APPLICATION_PACKAGES`
- - `ELASTIC_APM_SERVER_URL`
+When you enable the build service and use a managed Azure Container Registry, use the following command to integrate CA certificates into your deployment:
- For other supported environment variables, see [Elastic](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html)
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --build-certificates <CA certificate-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
-- **AppDynamics**
+When you use your own container registry for the build service or disable the build service, use the following command to integrate CA certificates into your build:
- Supported languages:
- - Java
- - Node.js
+```azurecli
+az spring build-service build <create|update> \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --certificates <CA certificate-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
- Required runtime environment variables:
- - `APPDYNAMICS_AGENT_APPLICATION_NAME`
- - `APPDYNAMICS_AGENT_TIER_NAME`
- - `APPDYNAMICS_AGENT_NODE_NAME`
- - `APPDYNAMICS_AGENT_ACCOUNT_NAME`
- - `APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY`
- - `APPDYNAMICS_CONTROLLER_HOST_NAME`
- - `APPDYNAMICS_CONTROLLER_SSL_ENABLED`
- - `APPDYNAMICS_CONTROLLER_PORT`
+### Bind CA certificates via bindings in builder
- For other supported environment variables, see [AppDynamics](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties)
+CA certificates use the [ca-certificates](https://github.com/paketo-buildpacks/ca-certificates) buildpack to support providing CA certificates to the system trust store at build and runtime.
-
+In the Azure Spring Apps Enterprise plan, the CA certificates use the **Public Key Certificates** tab on the **TLS/SSL settings** page in the Azure portal, as shown in the following screenshot:
-## Manage APM integration and CA certificates in Azure Spring Apps
+
+You can configure the CA certificates on the **Edit binding** page. The `succeeded` certificates are shown in the **CA Certificates** list.
++
+## Manage bindings in builder in Azure Spring Apps
This section applies only to an Azure Spring Apps Enterprise service instance with the build service enabled. With the build service enabled, one buildpack binding means either credential configuration against one APM type, or CA certificates configuration against the CA certificates type. For APM integration, follow the earlier instructions to configure the necessary environment variables or secrets for your APM.
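As a hedged sketch, you typically create a binding against the builder with a command along the following lines. The binding name, builder name, and connection string are placeholders; check `az spring build-service builder buildpack-binding create --help` for the exact parameters and supported `--type` values in your CLI version.

```azurecli
az spring build-service builder buildpack-binding create \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <your-APM-buildpack-binding-name> \
    --builder-name <your-builder-name> \
    --type ApplicationInsights \
    --secrets connection-string=<your-connection-string>
```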
az spring build-service builder buildpack-binding delete \
+## Migrate APM and CA certificates from bindings in builder
+
+The bindings feature in builder is deprecated and is being removed in the future. We recommend that you migrate bindings in builder.
+
+If you have APM and CA certificates configured in bindings, you can migrate them by using the guidance in the following sections.
+
+### Migrate the APM configured in bindings
+
+In most use cases, there's only one APM configured in bindings in the default builder. You can create a new APM configuration with the same settings as the binding and enable this APM configuration globally. All the subsequent builds and deployments use this configuration automatically. Use the following steps to migrate:
+
+1. Use the following command to create an APM configuration:
+
+ ```azurecli
+ az spring apm create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name> \
+ --type <your-APM-type> \
+ --properties a=b c=d \
+ --secrets e=f g=h
+ ```
+
+1. Use the following command to enable the APM configuration globally:
+
+ ```azurecli
+ az spring apm enable-globally \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-name>
+ ```
+
+1. Use the following command to redeploy all the applications to use the new APM configuration enabled globally:
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+ ```
+
+1. Verify that the new APM configuration works for all the applications. If everything works fine, use the following command to remove the APM bindings in builder:
+
+ ```azurecli
+ az spring build-service builder buildpack-binding delete \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-APM-buildpack-binding-name> \
+ --builder-name <your-builder-name>
+ ```
+
+If there are several APMs configured in bindings, you can create several APM configurations with the same settings as the bindings and enable them globally if applicable. Use the `--apms` parameter to specify an APM configuration for a deployment if you want to override the APM enabled globally, as shown in the following command:
+
+```azurecli
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --apms <APM-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+During the migration process, APM is configured in both bindings and APM configuration. In this case, the APM configuration takes effect and the binding is ignored.
+
+### Migrate CA certificate configured in bindings
+
+Use the following steps to migrate a CA certificate:
+
+1. If a CA certificate configured in bindings is used at runtime, load the certificate into your application. For more information, see the [Load a certificate](how-to-use-tls-certificate.md#load-a-certificate) section of [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md).
+
+1. Use the following command to redeploy all the applications using the CA certificate. If you use the certificate at build time, use the `--build-certificates` parameter to specify the CA certificate to use at build time for a deployment:
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <app-name> \
+ --builder <builder-name> \
+ --build-certificates <CA certificate-name> \
+ --artifact-path <path-to-your-JAR-file>
+ ```
+
+1. Verify that the CA certificate works for all the applications that use it. If everything works fine, use the following command to remove the CA certificate bindings in the builder:
+
+ ```azurecli
+ az spring build-service builder buildpack-binding delete \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <your-CA-certificate-buildpack-binding-name> \
+ --builder-name <your-builder-name>
+ ```
+
+## Bindings in builder is deprecated
+
+> [!NOTE]
+> Previously, you would manage APM integration and CA certificates via bindings in the builder. For more information, see the [Manage bindings in builder in Azure Spring Apps](#manage-bindings-in-builder-in-azure-spring-apps) section. The bindings in builder feature is deprecated and is being removed in the future. We recommend that you migrate the APM configured in bindings. For more information, see the [Migrate the APM configured in bindings](#migrate-the-apm-configured-in-bindings) section.
+>
+> When you use your own container registry for the build service or disable the build service, the bindings feature in builder isn't available.
+>
+> When you use a managed Azure Container Registry for the build service, the bindings feature in builder is still available for backward compatibility, but it's being removed in the future.
+
+When you use the Azure CLI to create a service instance, you might see the error message `Buildpack bindings feature is deprecated, it's not available when your own container registry is used for build service or build service is disabled`. This message indicates that you're using an old version of the Azure CLI. To fix this issue, upgrade the Azure CLI. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli).
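For example, the following commands check the installed versions, upgrade the Azure CLI, and update the Azure Spring Apps extension:

```azurecli
# Check the installed Azure CLI and extension versions.
az version

# Upgrade the Azure CLI, then update the Azure Spring Apps extension.
az upgrade
az extension update --name spring
```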
+ ## Next steps - [How to deploy polyglot apps in Azure Spring Apps Enterprise](how-to-enterprise-deploy-polyglot-apps.md)
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
The buildpack for deploying Java applications is [tanzu-buildpacks/java-azure](h
The following table lists the features supported in Azure Spring Apps:
-| Feature description | Comment | Environment variable | Usage |
-|--||--||
-| Provides the Microsoft OpenJDK. | Configures the JVM version. The default JDK version is 11. Currently supported: JDK 8, 11, and 17. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=11.*` |
-| | Runtime env. Configures whether Java Native Memory Tracking (NMT) is enabled. The default value is *true*. Not supported in JDK 8. | `BPL_JAVA_NMT_ENABLED` | `--env BPL_JAVA_NMT_ENABLED=true` |
-| | Configures the level of detail for Java Native Memory Tracking (NMT) output. The default value is *summary*. Set to *detail* for detailed NMT output. | `BPL_JAVA_NMT_LEVEL` | `--env BPL_JAVA_NMT_ENABLED=summary` |
-| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#use-ca-certificates) of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Integrate with Application Insights, Dynatrace, Elastic, New Relic, App Dynamic APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Deploy WAR package with Apache Tomcat or TomEE. | Set the application server to use. Set to *tomcat* to use Tomcat and *tomee* to use TomEE. The default value is *tomcat*. | `BP_JAVA_APP_SERVER` | `--build-env BP_JAVA_APP_SERVER=tomee` |
-| Support Spring Boot applications. | Indicates whether to contribute Spring Cloud Bindings support for the image at build time. The default value is *false*. | `BP_SPRING_CLOUD_BINDINGS_DISABLED` | `--build-env BP_SPRING_CLOUD_BINDINGS_DISABLED=false` |
-| | Indicates whether to autoconfigure Spring Boot environment properties from bindings at runtime. This feature requires Spring Cloud Bindings to have been installed at build time or it does nothing. The default value is *false*. | `BPL_SPRING_CLOUD_BINDINGS_DISABLED` | `--env BPL_SPRING_CLOUD_BINDINGS_DISABLED=false` |
-| Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` |
-| Support building Gradle-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_GRADLE_BUILT_MODULE` | `--build-env BP_GRADLE_BUILT_MODULE=./gateway` |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> see more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
-| Integrate JProfiler agent. | Indicates whether to integrate JProfiler support. The default value is *false*. | `BP_JPROFILER_ENABLED` | build phase: <br>`--build-env BP_JPROFILER_ENABLED=true` <br> runtime phase: <br> `--env BPL_JPROFILER_ENABLED=true` <br> `BPL_JPROFILER_PORT=<port>` (optional, defaults to *8849*) <br> `BPL_JPROFILER_NOWAIT=true` (optional. Indicates whether the JVM executes before JProfiler has attached. The default value is *true*.) |
-| | Indicates whether to enable JProfiler support at runtime. The default value is *false*. | `BPL_JPROFILER_ENABLED` | `--env BPL_JPROFILER_ENABLED=false` |
-| | Indicates which port the JProfiler agent listens on. The default value is *8849*. | `BPL_JPROFILER_PORT` | `--env BPL_JPROFILER_PORT=8849` |
-| | Indicates whether the JVM executes before JProfiler has attached. The default value is *true*. | `BPL_JPROFILER_NOWAIT` | `--env BPL_JPROFILER_NOWAIT=true` |
-| Integrate [JRebel](https://www.jrebel.com/) agent. | The application should contain a *rebel-remote.xml* file. | N/A | N/A |
-| AES encrypts an application at build time and then decrypts it at launch time. | The AES key to use at build time. | `BP_EAR_KEY` | `--build-env BP_EAR_KEY=<value>` |
-| | The AES key to use at run time. | `BPL_EAR_KEY` | `--env BPL_EAR_KEY=<value>` |
-| Integrate [AspectJ Weaver](https://www.eclipse.org/aspectj/) agent. | `<APPLICATION_ROOT>`/*aop.xml* exists and *aspectj-weaver.\*.jar* exists. | N/A | N/A |
+| Feature description | Comment | Environment variable | Usage |
+|--|--|--|-|
+| Provides the Microsoft OpenJDK. | Configures the JVM version. The default JDK version is 11. Currently supported: JDK 8, 11, and 17. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=11.*` |
+| | Runtime env. Configures whether Java Native Memory Tracking (NMT) is enabled. The default value is *true*. Not supported in JDK 8. | `BPL_JAVA_NMT_ENABLED` | `--env BPL_JAVA_NMT_ENABLED=true` |
+| | Configures the level of detail for Java Native Memory Tracking (NMT) output. The default value is *summary*. Set to *detail* for detailed NMT output. | `BPL_JAVA_NMT_LEVEL` | `--env BPL_JAVA_NMT_ENABLED=summary` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Application Insights, Dynatrace, Elastic, New Relic, App Dynamic APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Deploy WAR package with Apache Tomcat or TomEE. | Set the application server to use. Set to *tomcat* to use Tomcat and *tomee* to use TomEE. The default value is *tomcat*. | `BP_JAVA_APP_SERVER` | `--build-env BP_JAVA_APP_SERVER=tomee` |
+| Support Spring Boot applications. | Indicates whether to contribute Spring Cloud Bindings support for the image at build time. The default value is *false*. | `BP_SPRING_CLOUD_BINDINGS_DISABLED` | `--build-env BP_SPRING_CLOUD_BINDINGS_DISABLED=false` |
+| | Indicates whether to autoconfigure Spring Boot environment properties from bindings at runtime. This feature requires Spring Cloud Bindings to have been installed at build time or it does nothing. The default value is *false*. | `BPL_SPRING_CLOUD_BINDINGS_DISABLED` | `--env BPL_SPRING_CLOUD_BINDINGS_DISABLED=false` |
+| Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` |
+| Support building Gradle-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_GRADLE_BUILT_MODULE` | `--build-env BP_GRADLE_BUILT_MODULE=./gateway` |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> see more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Integrate JProfiler agent. | Indicates whether to integrate JProfiler support. The default value is *false*. | `BP_JPROFILER_ENABLED` | build phase: <br>`--build-env BP_JPROFILER_ENABLED=true` <br> runtime phase: <br> `--env BPL_JPROFILER_ENABLED=true` <br> `BPL_JPROFILER_PORT=<port>` (optional, defaults to *8849*) <br> `BPL_JPROFILER_NOWAIT=true` (optional. Indicates whether the JVM executes before JProfiler has attached. The default value is *true*.) |
+| | Indicates whether to enable JProfiler support at runtime. The default value is *false*. | `BPL_JPROFILER_ENABLED` | `--env BPL_JPROFILER_ENABLED=false` |
+| | Indicates which port the JProfiler agent listens on. The default value is *8849*. | `BPL_JPROFILER_PORT` | `--env BPL_JPROFILER_PORT=8849` |
+| | Indicates whether the JVM executes before JProfiler has attached. The default value is *true*. | `BPL_JPROFILER_NOWAIT` | `--env BPL_JPROFILER_NOWAIT=true` |
+| Integrate [JRebel](https://www.jrebel.com/) agent. | The application should contain a *rebel-remote.xml* file. | N/A | N/A |
+| AES encrypts an application at build time and then decrypts it at launch time. | The AES key to use at build time. | `BP_EAR_KEY` | `--build-env BP_EAR_KEY=<value>` |
+| | The AES key to use at run time. | `BPL_EAR_KEY` | `--env BPL_EAR_KEY=<value>` |
+| Integrate [AspectJ Weaver](https://www.eclipse.org/aspectj/) agent. | `<APPLICATION_ROOT>`/*aop.xml* exists and *aspectj-weaver.\*.jar* exists. | N/A | N/A |
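The **Usage** column shows only the relevant parameter. As a minimal sketch, a build-time variable and a runtime variable from the table can be combined in a single deployment as follows; the app name and JAR path are placeholders.

```azurecli
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --artifact-path <path-to-your-JAR-file> \
    --build-env BP_JVM_VERSION=17.* \
    --env BPL_JAVA_NMT_ENABLED=true
```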
### Deploy .NET applications
The buildpack for deploying .NET applications is [tanzu-buildpacks/dotnet-core](
The following table lists the features supported in Azure Spring Apps:
-| Feature description | Comment | Environment variable | Usage |
-|||--|--|
-| Configure the .NET Core runtime version. | Supports *Net6.0* and *Net7.0*. <br> You can configure through a *runtimeconfig.json* or MSBuild Project file. <br> The default runtime is *6.0.\**. | N/A | N/A |
-| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#use-ca-certificates) of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Feature description | Comment | Environment variable | Usage |
+|---|---|---|---|
+| Configure the .NET Core runtime version. | Supports *Net6.0* and *Net7.0*. <br> You can configure through a *runtimeconfig.json* or MSBuild Project file. <br> The default runtime is *6.0.\**. | N/A | N/A |
+| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
### Deploy Python applications
The buildpack for deploying Python applications is [tanzu-buildpacks/python](htt
The following table lists the features supported in Azure Spring Apps:
-| Feature description | Comment | Environment variable | Usage |
-|||--|-|
-| Specify a Python version. | Supports *3.7.\**, *3.8.\**, *3.9.\**, *3.10.\**, *3.11.\**. The default value is *3.10.\**<br> You can specify the version via the `BP_CPYTHON_VERSION` environment variable during build. | `BP_CPYTHON_VERSION` | `--build-env BP_CPYTHON_VERSION=3.8.*` |
-| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#use-ca-certificates) of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Feature description | Comment | Environment variable | Usage |
+|---|---|---|---|
+| Specify a Python version. | Supports *3.7.\**, *3.8.\**, *3.9.\**, *3.10.\**, *3.11.\**. The default value is *3.10.\**<br> You can specify the version via the `BP_CPYTHON_VERSION` environment variable during build. | `BP_CPYTHON_VERSION` | `--build-env BP_CPYTHON_VERSION=3.8.*` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
### Deploy Go applications
The buildpack for deploying Go applications is [tanzu-buildpacks/go](https://net
The following table lists the features supported in Azure Spring Apps:
-| Feature description | Comment | Environment variable | Usage |
-|||--|-|
-| Specify a Go version. | Supports *1.18.\**, *1.19.\**. The default value is *1.18.\**.<br> The Go version is automatically detected from the app's *go.mod* file. You can override this version by setting the `BP_GO_VERSION` environment variable at build time. | `BP_GO_VERSION` | `--build-env BP_GO_VERSION=1.19.*` |
-| Configure multiple targets. | Specifies multiple targets for a Go build. | `BP_GO_TARGETS` | `--build-env BP_GO_TARGETS=./some-target:./other-target` |
-| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#use-ca-certificates) of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Feature description | Comment | Environment variable | Usage |
+| --- | --- | --- | --- |
+| Specify a Go version. | Supports *1.18.\**, *1.19.\**. The default value is *1.18.\**.<br> The Go version is automatically detected from the app's *go.mod* file. You can override this version by setting the `BP_GO_VERSION` environment variable at build time. | `BP_GO_VERSION` | `--build-env BP_GO_VERSION=1.19.*` |
+| Configure multiple targets. | Specifies multiple targets for a Go build. | `BP_GO_TARGETS` | `--build-env BP_GO_TARGETS=./some-target:./other-target` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
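For example, building only selected targets from a multi-module Go repository could look like this sketch; the target paths and resource names are assumptions:

```azurecli
# Deploy a Go app and build two specific targets (placeholder names and paths)
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --source-path . \
    --build-env BP_GO_VERSION=1.19.* BP_GO_TARGETS=./cmd/api:./cmd/worker
```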
### Deploy Node.js applications
The buildpack for deploying Node.js applications is [tanzu-buildpacks/nodejs](ht
The following table lists the features supported in Azure Spring Apps:
-| Feature description | Comment | Environment variable | Usage |
-|-||--|--|
-| Specify a Node version. | Supports *12.\**, *14.\**, *16.\**, *18.\**, *19.\**. The default value is *18.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=18.*` |
-| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#use-ca-certificates) of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Integrate with Dynatrace, Elastic, New Relic, App Dynamic APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Feature description | Comment | Environment variable | Usage |
+|-|--|--|--|
+| Specify a Node version. | Supports *12.\**, *14.\**, *16.\**, *18.\**, *19.\**. The default value is *18.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=18.*` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Dynatrace, Elastic, New Relic, and AppDynamics APM agents. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
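A sketch that combines a Node version pin with an OCI author label set at build time; the names and the author value are placeholders:

```azurecli
# Deploy a Node.js app, pin the Node version, and stamp an OCI authors label (placeholder values)
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --source-path . \
    --build-env BP_NODE_VERSION=18.* BP_OCI_AUTHORS=dev-team@contoso.com
```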
### Deploy WebServer applications
spring-apps How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-api-portal.md
This section describes how to view and try out APIs with schema definitions in A
"Method=PUT" ], "filters": [
- "StripPrefix=0",
+ "StripPrefix=0"
] } ]
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-mysql.md
Title: "Quickstart - Integrate with Azure Database for MySQL" description: Explains how to provision and prepare an Azure Database for MySQL instance, and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command.--++ Last updated 08/28/2022
An Azure account with an active subscription. [Create an account for free](https
## Prepare an Azure Database for MySQL instance
-1. Create an Azure Database for MySQL flexible server using the [az mysql flexible-server create](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create) command. Replace the placeholders `<database-name>`, `<resource-group-name>`, `<MySQL-flexible-server-name>`, `<admin-username>`, and `<admin-password>` with a name for your new database, the name of your resource group, a name for your new server, and an admin username and password.
+1. Create an Azure Database for MySQL flexible server using the [az mysql flexible-server create](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create) command. Replace the placeholders `<database-name>`, `<resource-group-name>`, `<MySQL-flexible-server-name>`, `<admin-username>`, and `<admin-password>` with a name for your new database, the name of your resource group, a name for your new server, and an admin username and password. Use single quotes around the value for `admin-password`.
```azurecli-interactive az mysql flexible-server create \ --resource-group <resource-group-name> \ --name <MySQL-flexible-server-name> \ --database-name <database-name> \
+ --public-access 0.0.0.0 \
--admin-user <admin-username> \
- --admin-password <admin-password>
+ --admin-password '<admin-password>'
```
- > [!NOTE]
- > Standard_B1ms SKU is used by default. Refer to [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/) for pricing details.
+ > [!NOTE]
+ > The `Standard_B1ms` SKU is used by default. For pricing details, see [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
- > [!TIP]
- > Password should be at least eight characters long and contain at least one English uppercase letter, one English lowercase letter, one number, and one non-alphanumeric character (!, $, #, %, and so on.).
-
-1. A CLI prompt asks if you want to enable access to your IP. Enter `Y` to confirm.
+ > [!TIP]
+ > The password should be at least eight characters long and contain at least one English uppercase letter, one English lowercase letter, one number, and one non-alphanumeric character (!, $, #, %, and so on).
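If you'd rather choose the compute size explicitly instead of taking the default SKU mentioned in the note above, you can pass it on the same command. This is only a sketch; the SKU, tier, and placeholder values are assumptions to adapt to your scenario:

```azurecli
# Create the server with an explicitly chosen compute SKU (example values)
az mysql flexible-server create \
    --resource-group <resource-group-name> \
    --name <MySQL-flexible-server-name> \
    --database-name <database-name> \
    --public-access 0.0.0.0 \
    --sku-name Standard_B1ms \
    --tier Burstable \
    --admin-user <admin-username> \
    --admin-password '<admin-password>'
```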
## Connect your application to the MySQL database
Use [Service Connector](../service-connector/overview.md) to connect the app hos
1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector resource provider.
- ```azurecli-interactive
- az provider register --namespace Microsoft.ServiceLinker
- ```
-
-1. Run the `az spring connection create` command to create a service connection between Azure Spring Apps and the Azure MySQL database. Replace the placeholders below with your own information.
-
- | Setting | Description |
- ||-|
- | `--resource-group` | The name of the resource group that contains the app hosted by Azure Spring Apps. |
- | `--service` | The name of the Azure Spring Apps resource. |
- | `--app` | The name of the application hosted by Azure Spring Apps that connects to the target service. |
- | `--target-resource-group` | The name of the resource group with the storage account. |
- | `--server` | The MySQL server you want to connect to |
- | `--database` | The name of the database you created earlier. |
- | `--secret name` | The MySQL server username. |
- | `--secret` | The MySQL server password. |
-
- ```azurecli-interactive
- az spring connection create mysql-flexible \
- --resource-group <Azure-Spring-Apps-resource-group-name> \
- --service <Azure-Spring-Apps-resource-name> \
- --app <app-name> \
- --target-resource-group <mySQL-server-resource-group> \
- --server <server-name> \
- --database <database-name> \
- --secret name=<username> secret=<secret>
- ```
-
- > [!TIP]
- > If the `az spring` command isn't recognized by the system, check that you have installed the Azure Spring Apps extension by running `az extension add --name spring`.
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ServiceLinker
+ ```
+
+1. Run the `az spring connection create` command to create a service connection between Azure Spring Apps and the Azure MySQL database. Replace the placeholders below with your own information. Use single quotes around the value for MySQL server `secret`.
+
+ | Setting | Description |
+ | --- | --- |
+ | `--connection` | The name of the connection that identifies the connection between your app and target service. |
+ | `--resource-group` | The name of the resource group that contains the app hosted by Azure Spring Apps. |
+ | `--service` | The name of the Azure Spring Apps resource. |
+ | `--app` | The name of the application hosted by Azure Spring Apps that connects to the target service. |
+ | `--target-resource-group` | The name of the resource group with the storage account. |
+ | `--server` | The MySQL server you want to connect to. |
+ | `--database` | The name of the database you created earlier. |
+ | `--secret name= secret=` | The MySQL server username and password. |
+
+ ```azurecli-interactive
+ az spring connection create mysql-flexible \
+ --resource-group <Azure-Spring-Apps-resource-group-name> \
+ --service <Azure-Spring-Apps-resource-name> \
+ --app <app-name> \
+ --connection <mysql-connection-name-for-app> \
+ --target-resource-group <mySQL-server-resource-group> \
+ --server <server-name> \
+ --database <database-name> \
+ --secret name=<username> secret='<secret>'
+ ```
+
+ > [!TIP]
+ > If the `az spring` command isn't recognized by the system, check that you have installed the Azure Spring Apps extension by running `az extension add --name spring`.
### [Portal](#tab/azure-portal)
Use [Service Connector](../service-connector/overview.md) to connect the app hos
1. Select or enter the following settings in the table.
- | Setting | Example | Description |
- ||--|-|
- | **Service type** | *DB for MySQL flexible server* | Select DB for MySQL flexible server as your target service |
- | **Subscription** | *my-subscription* | The subscription that contains your target service. The default value is the subscription that contains the app deployed to Azure Spring Apps. |
- | **Connection name** | *mysql_rk29a* | The connection name that identifies the connection between your app and target service. Use the connection name provided by Service Connector or enter your own connection name. |
- | **MySQL flexible server** | *MySQL80* | Select the MySQL flexible server you want to connect to. |
- | **MySQL database** | *petclinic* | Select the database you created earlier. |
- | **Client type** | *.NET* | Select the application stack that works with the target service you selected. |
+ | Setting | Example | Description |
+ | --- | --- | --- |
+ | **Service type** | *DB for MySQL flexible server* | Select DB for MySQL flexible server as your target service. |
+ | **Connection name** | *mysql_9e8af* | The connection name that identifies the connection between your app and target service. Use the connection name provided by Service Connector or enter your own connection name. |
+ | **Subscription** | *My Subscription* | The subscription that contains your target service. The default value is the subscription that contains the app deployed to Azure Spring Apps. |
+ | **MySQL flexible server** | *MySQL80* | Select the MySQL flexible server you want to connect to. |
+ | **MySQL database** | *petclinic* | Select the database you created earlier. |
+ | **Client type** | *SpringBoot* | Select the application stack that works with the target service you selected. |
- :::image type="content" source="./media\quickstart-integrate-azure-database-mysql\basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the basics tab in Service Connector.":::
+ :::image type="content" source="./media\quickstart-integrate-azure-database-mysql\basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the basics tab in Service Connector.":::
1. Select **Next: Authentication** to select the authentication type. Then select **Connection string > Database credentials** and enter your database username and password.
Use [Service Connector](../service-connector/overview.md) to connect the app hos
Run the `az spring connection validate` command to show the status of the connection between Azure Spring Apps and the Azure MySQL database. Replace the placeholders below with your own information. ```azurecli-interactive
-az spring connection validate
- --resource-group <Azure-Spring-Apps-resource-group-name> \
+az spring connection validate \
+ --resource-group <Azure-Spring-Apps-resource-group-name> \
--service <Azure-Spring-Apps-resource-name> \ --app <app-name> \
- --connection <connection-name>
+ --connection <mysql-connection-name-for-app> \
+ --output table
``` The following output is displayed: ```Output
-Name Result
-- --
-The target existence is validated success
-The target service firewall is validated success
-The configured values (except username/password) is validated success
+Name                                  Result   Description
+------------------------------------  -------  -------------
+Target resource existence validated.  success
+Target service firewall validated.    success
+Username and password validated.      success
``` > [!TIP] > To get more details about the connection between your services, remove `--output table` from the above command.
-### [Portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
Azure Spring Apps connections are displayed under **Settings > Service Connector**. Select **Validate** to check your connection status, and select **Learn more** to review the connection validation details.
spring-apps Troubleshoot Build Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot-build-exit-code.md
The following list describes some common exit codes:
- The builder you're using doesn't support the language your project used.
- If you're using the default builder, check the language the default builder supports. For more information, see the [Supported APM resources with Build Service enabled](how-to-enterprise-configure-apm-integration-and-ca-certificates.md#supported-apm-resources-with-the-build-service-enabled) section of [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-integration-and-ca-certificates.md).
+ If you're using the default builder, check the language the default builder supports. For more information, see the [Supported APM types](how-to-enterprise-configure-apm-integration-and-ca-certificates.md#supported-apm-types) section of [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-integration-and-ca-certificates.md).
If you're using the custom builder, check whether your custom builder's buildpack supports the language your project used.
storage-mover Migration Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/migration-basics.md
Start by making a list of all the shares your workload depends on. Refer to your
If you need to migrate storage for multiple workloads at roughly the same time, you should split them into individual migration projects. > [!IMPORTANT]
-> Including multiple workloads in a single migration project is not recommended. Each workload should have its own migration project. Structuring the project in this way will significantly simply migration management and workload failover.
+> Including multiple workloads in a single migration project is not recommended. Each workload should have its own migration project. Structuring the project in this way will significantly simplify migration management and workload failover.
The result of the discovery phase is a list of file shares that you need to migrate to Azure. You should have distinct lists per workload.
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
description: Learn how to upload a blob to your Azure Storage account using the
Previously updated : 04/21/2023 Last updated : 06/20/2023
Each of these methods can be called using a [BlockBlobClient](/javascript/api/@a
## Upload a block blob from a file path
-The following example uploads a local file to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. The [options](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) object allows you to pass in your own metadata and [tags](storage-manage-find-blobs.md#blob-index-tags-and-data-management), used for indexing, at upload time:
+The following example uploads a block blob from a local file path:
## Upload a block blob from a stream
-The following example uploads a readable stream to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobUploadStream [options](/javascript/api/@azure/storage-blob/blockblobuploadstreamoptions) to affect the upload:
+The following example uploads a block blob by creating a readable stream and uploading the stream:
-Transform the stream during the upload for data clean up.
+## Upload a block blob from a buffer
+The following example uploads a block blob from a Node.js buffer:
-The following code demonstrates how to use the function.
-```javascript
-// fully qualified path to file
-const localFileWithPath = path.join(__dirname, `my-text-file.txt`);
+## Upload a block blob from a string
-// encoding: just to see the chunk as it goes by in the transform
-const streamOptions = { highWaterMark: 20, encoding: 'utf-8' }
+The following example uploads a block blob from a string:
-const readableStream = fs.createReadStream(localFileWithPath, streamOptions);
-// upload options
-const uploadOptions = {
+## Upload a block blob with configuration options
- // not indexed for searching
- metadata: {
- owner: 'PhillyProject'
- },
+You can define client library configuration options when uploading a blob. These options can be tuned to improve performance, enhance reliability, and optimize costs. The code examples in this section show how to set configuration options using the [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) interface, and how to pass those options as a parameter to an upload method call.
- // indexed for searching
- tags: {
- createdBy: 'YOUR-NAME',
- createdWith: `StorageSnippetsForDocs-${i}`,
- createdOn: (new Date()).toDateString()
- }
- }
+### Specify data transfer options on upload
-// upload stream
-await createBlobFromReadStream(containerClient, `my-text-file.txt`, readableStream, uploadOptions);
-```
+You can configure properties in [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) to improve performance for data transfer operations. The following table lists the properties you can configure, along with a description:
-## Upload a block blob from a buffer
+| Method | Description |
+| --- | --- |
+| [`blockSize`](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions#@azure-storage-blob-blockblobparalleluploadoptions-blocksize) | The maximum block size to transfer for each request as part of an upload operation. |
+| [`concurrency`](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions#@azure-storage-blob-blockblobparalleluploadoptions-concurrency) | The maximum number of parallel requests that are issued at any given time as a part of a single parallel transfer. |
+| [`maxSingleShotSize`](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions#@azure-storage-blob-blockblobparalleluploadoptions-maxsingleshotsize) | If the size of the data is less than or equal to this value, it's uploaded in a single put rather than broken up into chunks. If the data is uploaded in a single shot, the block size is ignored. Default value is 256 MiB. |
-The following example uploads a Node.js buffer to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobParallelUpload [options](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) to affect the upload:
+The following code example shows how to set values for [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) and include the options as part of an upload method call. The values provided in the samples aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
-The following code demonstrates how to use the function.
+### Upload a block blob with index tags
-```javascript
-// fully qualified path to file
-const localFileWithPath = path.join(__dirname, `daisies.jpg`);
+Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data.
-// read file into buffer
-const buffer = await fs.readFile(localFileWithPath);
+The following example uploads a block blob with index tags set using [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions):
-// upload options
-const uploadOptions = {
- // not indexed for searching
- metadata: {
- owner: 'PhillyProject'
- },
+### Set a blob's access tier on upload
- // indexed for searching
- tags: {
- createdBy: 'YOUR-NAME',
- createdWith: `StorageSnippetsForDocs-${i}`,
- createdOn: (new Date()).toDateString()
- }
- }
+You can set a blob's access tier on upload by using the [BlockBlobParallelUploadOptions](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) interface. The following code example shows how to set the access tier when uploading a blob:
-// upload buffer
-createBlobFromBuffer(containerClient, `daisies.jpg`, buffer, uploadOptions)
-```
-
-## Upload a block blob from a string
-The following example uploads a string to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobUploadOptions [options](/javascript/api/@azure/storage-blob/blockblobuploadoptions) to affect the upload:
+Setting the access tier is only allowed for block blobs. You can set the access tier for a block blob to `Hot`, `Cool`, `Cold`, or `Archive`.
+To learn more about access tiers, see [Access tiers overview](access-tiers-overview.md).
## Resources
View code samples from this article (GitHub):
- [Upload from buffer](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-buffer.js) - [Upload from stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-stream.js) - [Upload from string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string.js)
+- [Upload with transfer options](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-transfer-options.js)
+- [Upload with index tags](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-index-tags.js)
+- [Upload with access tier](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-with-access-tier.js)
[!INCLUDE [storage-dev-guide-resources-javascript](../../../includes/storage-dev-guides/storage-dev-guide-resources-javascript.md)]
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
description: Learn how to install and configure Azure Container Storage Preview
Previously updated : 06/01/2023 Last updated : 06/20/2023
Before you create your cluster, you should understand which back-end storage opt
To use Azure Container Storage, you'll need a node pool of at least three Linux VMs. Each VM should have a minimum of four virtual CPUs (vCPUs). Azure Container Storage will consume one core for I/O processing on every VM the extension is deployed to.
-If you intend to use Azure Elastic SAN Preview or Azure Disks with Azure Container Storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes.
+If you intend to use Azure Elastic SAN Preview or Azure Disks with Azure Container Storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes.
If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives.
+> [!IMPORTANT]
+> You must choose a VM type that supports [Azure premium storage](../../virtual-machines/premium-storage-performance.md).
+ ## Create AKS cluster Run the following command to create a Linux-based AKS cluster and enable a system-assigned managed identity. Replace `<resource-group>` with the name of the resource group you created, `<cluster-name>` with the name of the cluster you want to create, and `<vm-type>` with the VM type you selected in the previous step. For this Quickstart, we'll create a cluster with three nodes. Increase the `--node-count` if you want a larger cluster.
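The full command appears in the source article; for reference, a minimal sketch of such a call might look like the following, where the placeholder names, the three-node count, and the `--generate-ssh-keys` flag are assumptions you can adjust:

```azurecli
# Create a three-node Linux AKS cluster with a system-assigned managed identity (placeholder names)
az aks create \
    --resource-group <resource-group> \
    --name <cluster-name> \
    --node-count 3 \
    --node-vm-size <vm-type> \
    --enable-managed-identity \
    --generate-ssh-keys
```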
Azure Container Service is a separate service from AKS, so you'll need to grant
1. Under **Infrastructure resource group**, you should see a link to the resource group that AKS created when you created the cluster. Select it. 1. Select **Access control (IAM)** from the left pane. 1. Select **Add > Add role assignment**.
-1. Under **Assignment type**, select **Privileged administrator roles** and then **Contributor**. If you don't have an Owner role on the subscription, you won't be able to add the Contributor role.
+1. Under **Assignment type**, select **Privileged administrator roles** and then **Contributor**, then select **Next**. If you don't have an Owner role on the subscription, you won't be able to add the Contributor role.
++ 1. Under **Assign access to**, select **Managed identity**. 1. Under **Members**, click **+ Select members**. The **Select managed identities** menu will appear. 1. Under **Managed identity**, select **User-assigned managed identity**. 1. Under **Select**, search for and select the managed identity with your cluster name and `-agentpool` appended.
-1. Select **Review + assign**.
+1. Click **Select**, then **Review + assign**.
# [Azure CLI](#tab/cli)
stream-analytics Run Job In Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/run-job-in-virtual-network.md
Several ASA jobs may utilize the same subnet. The last job here refers to no ot
> [!IMPORTANT] > - To authenticate with connection string, you must disable the storage account firewall settings.
-> - To authenticate with Managed Identity, you must add your Stream Analytics job to the storage account's access control list with the Storage Blob Data Contributor role. If you do not give your job access, the job will not be able to perform any operations. For more information on how to grant access, see Use Azure RBAC to assign a managed identity access to another resource.
+> - To authenticate with Managed Identity, you must assign your Stream Analytics job the Storage Blob Data Contributor and
+Storage Table Data Contributor roles on the storage account. If you do not give your job access, the job will not be able to perform any operations. For more information on how to grant access, see Use Azure RBAC to assign a managed identity access to another resource.
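As a hedged example of granting those roles from the CLI, assuming you already know the job's managed identity principal ID and the storage account's resource ID (both placeholders here):

```azurecli
# Assign the two required data-plane roles to the job's managed identity (placeholder IDs)
az role assignment create \
    --assignee <job-managed-identity-principal-id> \
    --role "Storage Blob Data Contributor" \
    --scope <storage-account-resource-id>

az role assignment create \
    --assignee <job-managed-identity-principal-id> \
    --role "Storage Table Data Contributor" \
    --scope <storage-account-resource-id>
```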
## Permissions You must have at least the following Role-based access control permissions on the subnet or at a higher level to configure virtual network integration through Azure portal, CLI or when setting the virtualNetworkSubnetId site property directly:
synapse-analytics Workspaces Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspaces-encryption.md
Azure Key Vaults policies for automatic, periodic rotation of keys, or actions o
SQL Transparent Data Encryption (TDE) is available for dedicated SQL Pools in workspaces *not* enabled for double encryption. In this type of workspace, a service-managed key is used to provide double encryption for the data in the dedicated SQL pools. TDE with the service-managed key can be enabled or disabled for individual dedicated SQL pools.
+### Cmdlets for Azure SQL Database and Azure Synapse
+
+To configure TDE through PowerShell, you must be connected as the Azure Owner, Contributor, or SQL Security Manager.
+
+Use the following cmdlets for Azure Synapse workspace.
+
+| Cmdlet | Description |
+| --- | --- |
+| [Set-AzSynapseSqlPoolTransparentDataEncryption](/powershell/module/az.synapse/set-azsynapsesqlpooltransparentdataencryption) |Enables or disables transparent data encryption for a SQL pool.|
+| [Get-AzSynapseSqlPoolTransparentDataEncryption](/powershell/module/az.synapse/get-azsynapsesqlpooltransparentdataencryption) |Gets the transparent data encryption state for a SQL pool. |
+| [New-AzSynapseWorkspaceKey](/powershell/module/az.synapse/new-azsynapseworkspacekey) |Adds a Key Vault key to a workspace. |
+| [Get-AzSynapseWorkspaceKey](/powershell/module/az.synapse/get-azsynapseworkspacekey) |Gets the Key Vault keys for a workspace. |
+| [Update-AzSynapseWorkspace](/powershell/module/az.synapse/update-azsynapseworkspace) |Sets the transparent data encryption protector for a workspace. |
+| [Get-AzSynapseWorkspace](/powershell/module/az.synapse/get-azsynapseworkspace) |Gets the transparent data encryption protector. |
+| [Remove-AzSynapseWorkspaceKey](/powershell/module/az.synapse/remove-azsynapseworkspacekey) |Removes a Key Vault key from a workspace. |
+++ ## Next steps - [Use built-in Azure Policies to implement encryption protection for Synapse workspaces](../policy-reference.md)
virtual-desktop Proxy Server Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/proxy-server-support.md
Last updated 08/08/2022 -
In summary, we don't recommend using proxy servers on Azure Virtual Desktop beca
If your organization's network and security policies require proxy servers for web traffic, you can configure your environment to bypass Azure Virtual Desktop connections while still routing the traffic through the proxy server. However, each organization's policies are unique, so some methods may work better for your deployment than others. Here are some configuration methods you can try to prevent performance and reliability loss in your environment: -- Azure service tags on the Azure firewall-- Proxy server bypass using Proxy Auto Configuration (.PAC) files
+- Azure service tags with Azure Firewall
+- Proxy server bypass using Proxy Auto Configuration (`.PAC`) files
- Bypass list in the local proxy configuration - Using proxy servers for per-user configuration - Using RDP Shortpath for the RDP connection while keeping the service traffic over the proxy
Azure Virtual Desktop components on the session host run in the context of their
Proxy servers have capacity limits. Unlike regular HTTP traffic, RDP traffic has long running, chatty connections that are bi-directional and consume lots of bandwidth. Before you set up a proxy server, talk to your proxy server vendor about how much throughput your server has. Also make sure to ask them how many proxy sessions you can run at one time. After you deploy the proxy server, carefully monitor its resource use for bottlenecks in Azure Virtual Desktop traffic.
-### Proxy servers and Teams optimization
+### Proxy servers and Microsoft Teams media optimization
-Azure Virtual Desktop doesn't support proxy servers for Teams optimization.
+Azure Virtual Desktop doesn't support proxy servers with [media optimization for Microsoft Teams](teams-on-avd.md).
## Session host configuration recommendations
To configure your network to use DNS resolution for WPAD, follow the instruction
You can set a device-wide proxy or Proxy Auto Configuration (.PAC) file that applies to all interactive, Local System, and Network Service users with the [Network Proxy CSP](/windows/client-management/mdm/networkproxy-csp).
-In addition you will need to set a proxy for the Windows services *RDAgent* and *Remote Desktop Services*. RDAgent runs with the account *Local System* and Remote Desktop Services runs with the account *Network Service*. You can set a proxy for these accounts by running the following commands, changing the placeholder value for `<server>` with your own address:
+In addition, you'll need to set a proxy for the Windows services *RDAgent* and *Remote Desktop Services*. RDAgent runs with the account *Local System* and Remote Desktop Services runs with the account *Network Service*. You can set a proxy for these accounts by running the following commands from an elevated command prompt, replacing the placeholder `<server>` with your own address:
```console bitsadmin /util /setieproxy LOCALSYSTEM AUTOSCRIPT http://<server>/proxy.pac
virtual-desktop Required Url Check Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/required-url-check-tool.md
Title: Use the Required URL Check tool for Azure Virtual Desktop
description: The Required URL Check tool enables you to check your session host virtual machines can access the required URLs to ensure Azure Virtual Desktop works as intended. Previously updated : 06/20/2022 Last updated : 06/20/2023 - # Required URL Check tool
-In order to deploy and make Azure Virtual Desktop available to your users, you must allow specific URLs that your session host virtual machines (VMs) can access them anytime. You can find the list of URLs in [Required URL list](safe-url-list.md). The Required URL Check tool will validate these URLs and show whether your session host VMs can access them. If not, then the tool will list the inaccessible URLs so you can unblock them and then retest, if needed.
+In order to deploy and make Azure Virtual Desktop available to your users, you must allow specific URLs so that your session host virtual machines (VMs) can access them at any time. You can find the list of URLs in [Required URL list](safe-url-list.md).
+
+The Required URL Check tool will validate these URLs and show whether your session host VMs can access them. If not, then the tool will list the inaccessible URLs so you can unblock them and then retest, if needed.
> [!NOTE] > - You can only use the Required URL Check tool for deployments in the Azure public cloud, it does not check access for sovereign clouds.
In order to deploy and make Azure Virtual Desktop available to your users, you m
You need the following things to use the Required URL Check tool: -- Your session host VM must have a .NET 4.6.2 framework-- RDAgent version 1.0.2944.400 or higher-- The `WVDAgentUrlTool.exe` file must be in the same folder as the `WVDAgentUrlTool.config` file
+- A session host VM.
+
+- Your session host VM must have the .NET Framework 4.6.2 installed.
+
+- RDAgent version 1.0.2944.400 or higher on your session host VM. The Required URL Check tool (`WVDAgentUrlTool.exe`) is included in the same installation folder, for example `C:\Program Files\Microsoft RDInfra\RDAgent_1.0.2944.1200`.
+
+- The `WVDAgentUrlTool.exe` file must be in the same folder as the `WVDAgentUrlTool.config` file.
## Use the Required URL Check tool
To use the Required URL Check tool:
1. Run the following command to change the directory to the same folder as the current build agent (RDAgent_1.0.2944.1200 in this example):
- ```console
+ ```cmd
cd "C:\Program Files\Microsoft RDInfra\RDAgent_1.0.2944.1200" ```
-1. Run the following command:
+1. Run the following command to run the Required URL Check tool:
- ```console
+ ```cmd
WVDAgentUrlTool.exe ```
To use the Required URL Check tool:
> ![Screenshot of accessible URLs output.](media/access.png) 1. You can repeat these steps on your other session host VMs, particularly if they are in a different Azure region or use a different virtual network.+
+## Next steps
+
+For more information about network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 06/13/2023 Last updated : 06/20/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | ||-|-| | Public | 1.2.4337 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.4337 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Insider | 1.2.4419 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+
+## Updates for version 1.2.4419 (Insider)
+
+Date published: June 20, 2023
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+
+In this release, we've made the following changes:
+
+- General improvements to Narrator experience.
+- Fixed an issue that caused the text in the message for subscribing to workspaces to be cut off when the user increases the text size.
+- Fixed an issue that caused the client to sometimes stop responding when attempting to start new connections.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
## Updates for version 1.2.4337
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
# Using Application Health extension with Virtual Machine Scale Sets
-> [!IMPORTANT]
-> **Rich Health States** is currently in public preview. **Binary Health States** is generally available.
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Monitoring your application health is an important signal for managing and upgrading your deployment. Azure Virtual Machine Scale Sets provide support for [Rolling Upgrades](virtual-machine-scale-sets-upgrade-policy.md) including [Automatic OS-Image Upgrades](virtual-machine-scale-sets-automatic-upgrade.md) and [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md), which rely on health monitoring of the individual instances to upgrade your deployment. You can also use Application Health Extension to monitor the application health of each instance in your scale set and perform instance repairs using [Automatic Instance Repairs](virtual-machine-scale-sets-automatic-instance-repairs.md). This article describes how you can use the two types of Application Health extension, **Binary Health States** or **Rich Health States**, to monitor the health of your applications deployed on Virtual Machine Scale Sets.
virtual-machines Disks Convert Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-convert-types.md
+
+ Title: Convert managed disks storage between different disk types
+description: How to convert Azure managed disks between the different disks types by using Azure PowerShell, Azure CLI, or the Azure portal.
++++ Last updated : 06/15/2023+++
+# Change the disk type of an Azure managed disk
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows
+
+There are five types of Azure managed disks: Ultra Disks, Premium SSD v2, Premium SSD, Standard SSD, and Standard HDD. You can easily switch between Premium SSD, Standard SSD, and Standard HDD based on your performance needs. You can't yet switch from or to an Ultra Disk or a Premium SSD v2; instead, you must deploy a new disk.
+
+This functionality isn't supported for unmanaged disks. However, you can easily convert an unmanaged disk to a managed disk with the [CLI](linux/convert-unmanaged-to-managed-disks.md) or [PowerShell](windows/convert-unmanaged-to-managed-disks.md), and then switch between disk types.
++
+## Before you begin
+
+Because conversion requires a restart of the virtual machine (VM), schedule the migration of your disk during a pre-existing maintenance window.
+
+## Restrictions
+
+- You can only change disk type once per day.
+- You can only change the disk type of managed disks. If your disk is unmanaged, convert it to a managed disk with [CLI](linux/convert-unmanaged-to-managed-disks.md) or [PowerShell](windows/convert-unmanaged-to-managed-disks.md) to switch between disk types.
+
+## Switch all managed disks of a VM from one account type to another
+
+This example shows how to convert all of a VM's disks to premium storage. However, by changing the $storageType variable in this example, you can convert the VM's disk type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](sizes.md) that supports Premium storage. This example also switches to a size that supports premium storage:
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Name of the resource group that contains the VM
+$rgName = 'yourResourceGroup'
+
+# Name of your virtual machine
+$vmName = 'yourVM'
+
+# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+$storageType = 'Premium_LRS'
+
+# Premium capable size
+# Required only if converting storage from Standard to Premium
+$size = 'Standard_DS2_v2'
+
+# Stop and deallocate the VM before changing the size
+Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force
+
+$vm = Get-AzVM -Name $vmName -resourceGroupName $rgName
+
+# Change the VM size to a size that supports Premium storage
+# Skip this step if converting storage from Premium to Standard
+$vm.HardwareProfile.VmSize = $size
+Update-AzVM -VM $vm -ResourceGroupName $rgName
+
+# Get all disks in the resource group of the VM
+$vmDisks = Get-AzDisk -ResourceGroupName $rgName
+
+# For disks that belong to the selected VM, convert to Premium storage
+foreach ($disk in $vmDisks)
+{
+ if ($disk.ManagedBy -eq $vm.Id)
+ {
+ $disk.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new($storageType)
+ $disk | Update-AzDisk
+ }
+}
+
+Start-AzVM -ResourceGroupName $rgName -Name $vmName
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+
+#resource group that contains the virtual machine
+rgName='yourResourceGroup'
+
+#Name of the virtual machine
+vmName='yourVM'
+
+#Premium capable size
+#Required only if converting from Standard to Premium
+size='Standard_DS2_v2'
+
+#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+sku='Premium_LRS'
+
+#Deallocate the VM before changing the size of the VM
+az vm deallocate --name $vmName --resource-group $rgName
+
+#Change the VM size to a size that supports Premium storage
+#Skip this step if converting storage from Premium to Standard
+az vm resize --resource-group $rgName --name $vmName --size $size
+
+#Update the SKU of all the data disks
+az vm show -n $vmName -g $rgName --query storageProfile.dataDisks[*].managedDisk -o tsv \
+ | awk -v sku=$sku '{system("az disk update --sku "sku" --ids "$1)}'
+
+#Update the SKU of the OS disk
+az vm show -n $vmName -g $rgName --query storageProfile.osDisk.managedDisk -o tsv \
+| awk -v sku=$sku '{system("az disk update --sku "sku" --ids "$1)}'
+
+az vm start --name $vmName --resource-group $rgName
+```
+
+# [Portal](#tab/azure-portal)
+
+Use either PowerShell or CLI.
++
+## Change the type of an individual managed disk
+
+For your dev/test workload, you might want a mix of Standard and Premium disks to reduce your costs. You can choose to upgrade only those disks that need better performance. This example shows how to convert a single VM disk from Standard to Premium storage. However, by changing the $storageType variable in this example, you can convert the VM's disk type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](sizes.md) that supports Premium storage. You can also use these examples to change a disk from [Locally redundant storage (LRS)](disks-redundancy.md#locally-redundant-storage-for-managed-disks) to [Zone-redundant storage (ZRS)](disks-redundancy.md#zone-redundant-storage-for-managed-disks) or vice versa. This example also shows how to switch to a size that supports Premium storage:
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+
+$diskName = 'yourDiskName'
+# resource group that contains the managed disk
+$rgName = 'yourResourceGroupName'
+# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+$storageType = 'Premium_LRS'
+# Premium capable size
+$size = 'Standard_DS2_v2'
+
+$disk = Get-AzDisk -DiskName $diskName -ResourceGroupName $rgName
+
+# Get parent VM resource
+$vmResource = Get-AzResource -ResourceId $disk.ManagedBy
+
+# Stop and deallocate the VM before changing the storage type
+Stop-AzVM -ResourceGroupName $vmResource.ResourceGroupName -Name $vmResource.Name -Force
+
+$vm = Get-AzVM -ResourceGroupName $vmResource.ResourceGroupName -Name $vmResource.Name
+
+# Change the VM size to a size that supports Premium storage
+# Skip this step if converting storage from Premium to Standard
+$vm.HardwareProfile.VmSize = $size
+Update-AzVM -VM $vm -ResourceGroupName $vmResource.ResourceGroupName
+
+# Update the storage type
+$disk.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new($storageType)
+$disk | Update-AzDisk
+
+Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
+```
+
+# [Azure CLI](#tab/azure-cli)
++
+ ```azurecli
+
+#resource group that contains the managed disk
+rgName='yourResourceGroup'
+
+#Name of your managed disk
+diskName='yourManagedDiskName'
+
+#Premium capable size
+#Required only if converting from Standard to Premium
+size='Standard_DS2_v2'
+
+#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
+sku='Premium_LRS'
+
+#Get the parent VM Id
+vmId=$(az disk show --name $diskName --resource-group $rgName --query managedBy --output tsv)
+
+#Deallocate the VM before changing the size of the VM
+az vm deallocate --ids $vmId
+
+#Change the VM size to a size that supports Premium storage
+#Skip this step if converting storage from Premium to Standard
+az vm resize --ids $vmId --size $size
+
+# Update the SKU
+az disk update --sku $sku --name $diskName --resource-group $rgName
+
+az vm start --ids $vmId
+```
+
+# [Portal](#tab/azure-portal)
+
+Follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the VM from the list of **Virtual machines**.
+1. If the VM isn't stopped, select **Stop** at the top of the VM **Overview** pane, and wait for the VM to stop.
+1. In the pane for the VM, select **Disks** from the menu.
+1. Select the disk that you want to convert.
+1. Select **Size + performance** from the menu.
+1. Change the **Account type** from the original disk type to the desired disk type.
+1. Select **Save**, and close the disk pane.
+
+The disk type conversion is instantaneous. You can start your VM after the conversion.
+++
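Whichever method you use, you can confirm that the change took effect by checking the disk's SKU afterward. A quick check, with placeholder names, might look like this:

```azurecli
# Verify the disk's new storage type (placeholder names)
az disk show \
    --resource-group <resource-group-name> \
    --name <disk-name> \
    --query "sku.name" \
    --output tsv
```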
+## Migrate to Premium SSD v2 or Ultra Disk
+
+Currently, you can only migrate an existing disk to either an Ultra Disk or a Premium SSD v2 through snapshots. Both Premium SSD v2 disks and Ultra Disks have their own set of restrictions. For example, neither can be used as an OS disk, and also aren't available in all regions. See the [Premium SSD v2 limitations](disks-deploy-premium-v2.md#limitations) and [Ultra Disk GA scope and limitations](disks-enable-ultra-ssd.md#ga-scope-and-limitations) sections of their articles for more information.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+The following script migrates a snapshot of a Standard HDD, Standard SSD, or Premium SSD to either an Ultra Disk or a Premium SSD v2.
+
+```PowerShell
+$diskName = "yourDiskNameHere"
+$resourceGroupName = "yourResourceGroupNameHere"
+$snapshotName = "yourDesiredSnapshotNameHere"
+# Name for the new disk created from the snapshot
+$newDiskName = "yourNewDiskNameHere"
+
+# Valid values are 1, 2, or 3
+$zone = "yourZoneNumber"
+
+#Provide the size of the disks in GB. It should be greater than the VHD file size.
+$diskSize = '128'
+
+#Provide the storage type. Use PremiumV2_LRS or UltraSSD_LRS.
+$storageType = 'PremiumV2_LRS'
+
+#Provide the Azure region (e.g. westus) where Managed Disks will be located.
+#This location should be same as the snapshot location
+#Get all the Azure location using command below:
+#Get-AzLocation
+
+#Select the same location as the current disk
+#Note that Premium SSD v2 and Ultra Disks are only supported in a select number of regions
+$location = 'eastus'
+
+#When migrating a Standard HDD, Standard SSD, or Premium SSD to either an Ultra Disk or Premium SSD v2, the logical sector size must be 512
+$logicalSectorSize=512
+
+# Get the disk that you need to backup by creating an incremental snapshot
+$yourDisk = Get-AzDisk -DiskName $diskName -ResourceGroupName $resourceGroupName
+
+# Create an incremental snapshot by setting the SourceUri property with the value of the Id property of the disk
+$snapshotConfig=New-AzSnapshotConfig -SourceUri $yourDisk.Id -Location $yourDisk.Location -CreateOption Copy -Incremental
+$snapshot = New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName -Snapshot $snapshotConfig
+
+$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Copy -SourceResourceId $snapshot.Id -DiskSizeGB $diskSize -LogicalSectorSize $logicalSectorSize -Zone $zone
+
+New-AzDisk -Disk $diskConfig -ResourceGroupName $resourceGroupName -DiskName $newDiskName
+```
++
+# [Azure CLI](#tab/azure-cli)
+
+The following script migrates a snapshot of a Standard HDD, Standard SSD, or Premium SSD to either an Ultra Disk or a Premium SSD v2.
+
+```azurecli
+# Declare variables
+diskName="yourExistingDiskNameHere"
+newDiskName="newDiskNameHere"
+resourceGroupName="yourResourceGroupNameHere"
+snapshotName="desiredSnapshotNameHere"
+#Provide the storage type. Use PremiumV2_LRS or UltraSSD_LRS.
+storageType=PremiumV2_LRS
+#Select the same location as the current disk
+#Note that Premium SSD v2 and Ultra Disks are only supported in a select number of regions
+location=eastus
+#When migrating a Standard HDD, Standard SSD, or Premium SSD to either an Ultra Disk or Premium SSD v2, the logical sector size must be 512
+logicalSectorSize=512
+#Select an Availability Zone, acceptable values are 1,2, or 3
+zone=1
+
+# Get the disk you need to backup
+yourDiskID=$(az disk show -n $diskName -g $resourceGroupName --query "id" --output tsv)
+
+# Create the snapshot
+snapshotId=$(az snapshot create -g $resourceGroupName -n $snapshotName --source $yourDiskID --incremental true --query id --output tsv)
+
+az disk create -g $resourceGroupName -n $newDiskName --source $snapshotId --sku $storageType --logical-sector-size $logicalSectorSize --location $location --zone $zone
+
+```
+
+# [Portal](#tab/azure-portal)
+
+The following steps assume you already have a snapshot. To learn how to create one, see [Create a snapshot of a virtual hard disk](snapshot-copy-managed-disk.md).
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the search bar at the top. Search for and select Disks.
+1. Select **+Create** and fill in the details.
+1. Make sure the **Region** and **Availability** zone meet the requirements of either your Premium SSD v2 or Ultra Disk.
+1. For **Region** select the same region as the snapshot you took.
+1. For **Source Type** select **Snapshot**.
+1. Select the snapshot you created.
+1. Select **Change size** and select either **Premium SSD v2** or **Ultra Disk** for the **Storage Type**.
+1. Select the performance and capacity you'd like the disk to have.
+1. Continue to the **Advanced** tab.
+1. Select **512** for **Logical sector size (bytes)**.
+1. Select **Review+Create** and then **Create**.
++
+## Next steps
+
+Make a read-only copy of a VM by using a [snapshot](snapshot-copy-managed-disk.md).
virtual-machines Diagnostics Linux V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux-v3.md
These installation instructions and a [downloadable sample configuration](https:
The downloadable configuration is just an example. Modify it to suit your needs.
-### Supported Linux distributions
-
-LAD supports the following distributions and versions. The list of distributions and versions applies only to Azure-endorsed Linux vendor images. The extension generally doesn't support third-party BYOL and BYOS images, like appliances.
-
-A distribution that lists only major versions, like Debian 7, is also supported for all minor versions. If a minor version is specified, only that version is supported. If a plus sign (+) is appended, minor versions equal to or later than the specified version are supported.
-
-Supported distributions and versions:
--- Ubuntu 20.04, 18.04, 16.04, 14.04-- CentOS 7, 6.5+-- Oracle Linux 7, 6.4+-- OpenSUSE 13.1+-- SUSE Linux Enterprise Server 12-- Debian 9, 8, 7-- Red Hat Enterprise Linux (RHEL) 7, 6.7+- ### Prerequisites * **Azure Linux Agent version 2.2.0 or later**. Most Azure VM Linux gallery images include version 2.2.7 or later. Run `/usr/sbin/waagent -version` to confirm the version installed on the VM. If the VM is running an older version, [Update the guest agent](./update-linux-agent.md).
virtual-machines Image Version Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version-encryption.md
When you're using customer-managed keys for encrypting images in an Azure Comput
- VM image version source doesn't currently support customer-managed key encryption.
+- Some features, such as replicating an SSE+CMK image or creating an image from an SSE+CMK encrypted disk, aren't supported through the Azure portal.
+ ## PowerShell To specify a disk encryption set for an image version, use [New-AzGalleryImageVersion](/powershell/module/az.compute/new-azgalleryimageversion) with the `-TargetRegion` parameter:
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/convert-disk-storage.md
- Title: Convert managed disks storage between different disk types using Azure CLI
-description: How to convert Azure managed disks between the different disks types by using the Azure CLI.
---- Previously updated : 02/09/2023-----
-# Change the disk type of an Azure managed disk - CLI
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-There are four disk types of Azure managed disks: Azure ultra disks, premium SSD, standard SSD, and standard HDD. You can switch between premium SSD, standard SSD, and standard HDD based on your performance needs. You are not yet able to switch from or to an ultra disk, you must deploy a new one.
-
-This functionality is not supported for unmanaged disks. But you can easily [convert an unmanaged disk to a managed disk](convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
-
-This article shows how to convert managed disks from one disk type to another by using the Azure CLI. To install or upgrade the tool, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-## Before you begin
-
-Conversion requires a restart of the virtual machine (VM), so schedule the migration of your disk during a pre-existing maintenance window.
-
-## Restrictions
--- You can only change disk type once per day.-- You can only change the disk type of managed disks. If your disk is unmanaged, [convert it to a managed disk](convert-unmanaged-to-managed-disks.md) to switch between disk types.--
-## Switch all managed disks of a VM between from one account to another
-
-This example shows how to convert all of a VM's disks to premium storage. However, by changing the sku variable in this example, you can convert the VM's disks type to standard SSD or standard HDD. Please note that To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports Premium storage.
-
- ```azurecli
-
-#resource group that contains the virtual machine
-$rgName='yourResourceGroup'
-
-#Name of the virtual machine
-vmName='yourVM'
-
-#Premium capable size
-#Required only if converting from Standard to Premium
-size='Standard_DS2_v2'
-
-#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
-sku='Premium_LRS'
-
-#Deallocate the VM before changing the size of the VM
-az vm deallocate --name $vmName --resource-group $rgName
-
-#Change the VM size to a size that supports Premium storage
-#Skip this step if converting storage from Premium to Standard
-az vm resize --resource-group $rgName --name $vmName --size $size
-
-#Update the SKU of all the data disks
-az vm show -n $vmName -g $rgName --query storageProfile.dataDisks[*].managedDisk -o tsv \
- | awk -v sku=$sku '{system("az disk update --sku "sku" --ids "$1)}'
-
-#Update the SKU of the OS disk
-az vm show -n $vmName -g $rgName --query storageProfile.osDisk.managedDisk -o tsv \
-| awk -v sku=$sku '{system("az disk update --sku "sku" --ids "$1)}'
-
-az vm start --name $vmName --resource-group $rgName
-
-```
-## Switch individual managed disks from one disk type to another
-
-For your dev/test workload, you might want to have a mix of Standard and Premium disks to reduce your costs. You can choose to upgrade only those disks that need better performance. This example shows how to convert a single VM disk from Standard to Premium storage. However, by changing the sku variable in this example, you can convert the VM's disks type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports Premium storage.
-
- ```azurecli
-
-#resource group that contains the managed disk
-$rgName='yourResourceGroup'
-
-#Name of your managed disk
-diskName='yourManagedDiskName'
-
-#Premium capable size
-#Required only if converting from Standard to Premium
-size='Standard_DS2_v2'
-
-#Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
-sku='Premium_LRS'
-
-#Get the parent VM Id
-vmId=$(az disk show --name $diskName --resource-group $rgName --query managedBy --output tsv)
-
-#Deallocate the VM before changing the size of the VM
-az vm deallocate --ids $vmId
-
-#Change the VM size to a size that supports Premium storage
-#Skip this step if converting storage from Premium to Standard
-az vm resize --ids $vmId --size $size
-
-# Update the SKU
-az disk update --sku $sku --name $diskName --resource-group $rgName
-
-az vm start --ids $vmId
-```
-
-## Switch managed disks from one disk type to another
-
-Follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select the VM from the list of **Virtual machines**.
-3. If the VM isn't stopped, select **Stop** at the top of the VM **Overview** pane, and wait for the VM to stop.
-4. In the pane for the VM, select **Disks** from the menu.
-5. Select the disk that you want to convert.
-6. Select **Configuration** from the menu.
-7. Change the **Account type** from the original disk type to the desired disk type.
-8. Select **Save**, and close the disk pane.
-
-The update of the disk type is instantaneous. You can restart your VM after the conversion.
-
-## Next steps
-
-Make a read-only copy of a VM by using [snapshots](snapshot-copy-managed-disk.md).
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
description: Learn how to create a Bicep file or ARM template JSON template to u
Previously updated : 06/06/2023 Last updated : 06/12/2023
Write-Output '>>> Waiting for GA Service (WindowsAzureTelemetryService) to start
while ((Get-Service WindowsAzureTelemetryService) -and ((Get-Service WindowsAzureTelemetryService).Status -ne 'Running')) { Start-Sleep -s 5 } Write-Output '>>> Waiting for GA Service (WindowsAzureGuestAgent) to start ...' while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }
-Write-Output '>>> Sysprepping VM ...'
if( Test-Path $Env:SystemRoot\system32\Sysprep\unattend.xml ) {
+ Write-Output '>>> Removing Sysprep\unattend.xml ...'
Remove-Item $Env:SystemRoot\system32\Sysprep\unattend.xml -Force }
+if (Test-Path $Env:SystemRoot\Panther\unattend.xml) {
+ Write-Output '>>> Removing Panther\unattend.xml ...'
+ Remove-Item $Env:SystemRoot\Panther\unattend.xml -Force
+}
+Write-Output '>>> Sysprepping VM ...'
& $Env:SystemRoot\System32\Sysprep\Sysprep.exe /oobe /generalize /quiet /quit while($true) { $imageState = (Get-ItemProperty HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\State).ImageState
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
description: This article helps you troubleshoot common problems and errors you
Previously updated : 05/17/2023 Last updated : 06/07/2023
Increase the build VM size.
### The build finished but no artifacts were created
-#### Error
+#### Warning
```text [a170b40d-2d77-4ac3-8719-72cdc35cf889] PACKER OUT Build 'azure-arm' errored: Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
Increase the build VM size.
Done exporting Packer logs to Azure for Packer prefix: [a170b40d-2d77-4ac3-8719-72cdc35cf889] PACKER OUT ```
-#### Cause
-
-The build timed out while it was waiting for the required Azure resources to be created.
#### Solution
-Rerun the build to try again.
+This warning can safely be ignored.
### Resource not found
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/convert-disk-storage.md
- Title: Convert managed disks storage between different disk types by using Azure PowerShell
-description: How to convert Azure managed disks between the different disks types by using Azure PowerShell.
---- Previously updated : 02/09/2023----
-# Change the disk type of an Azure managed disk - PowerShell
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows
-
-There are four disk types of Azure managed disks: Azure ultra disks, premium SSD, standard SSD, and standard HDD. You can switch between premium SSD, standard SSD, and standard HDD based on your performance needs. You are not yet able to switch from or to an ultra disk, you must deploy a new one.
-
-This functionality is not supported for unmanaged disks. But you can easily [convert an unmanaged disk to a managed disk](convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
--
-## Before you begin
-
-Because conversion requires a restart of the virtual machine (VM), schedule the migration of your disk during a pre-existing maintenance window.
-
-## Restrictions
--- You can only change disk type once per day.-- You can only change the disk type of managed disks. If your disk is unmanaged, [convert it to a managed disk](convert-unmanaged-to-managed-disks.md) to switch between disk types.-
-## Switch all managed disks of a VM between from one account to another
-
-This example shows how to convert all of a VM's disks to premium storage. However, by changing the $storageType variable in this example, you can convert the VM's disks type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also switches to a size that supports premium storage:
-
-```azurepowershell-interactive
-# Name of the resource group that contains the VM
-$rgName = 'yourResourceGroup'
-
-# Name of the your virtual machine
-$vmName = 'yourVM'
-
-# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
-$storageType = 'Premium_LRS'
-
-# Premium capable size
-# Required only if converting storage from Standard to Premium
-$size = 'Standard_DS2_v2'
-
-# Stop and deallocate the VM before changing the size
-Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force
-
-$vm = Get-AzVM -Name $vmName -resourceGroupName $rgName
-
-# Change the VM size to a size that supports Premium storage
-# Skip this step if converting storage from Premium to Standard
-$vm.HardwareProfile.VmSize = $size
-Update-AzVM -VM $vm -ResourceGroupName $rgName
-
-# Get all disks in the resource group of the VM
-$vmDisks = Get-AzDisk -ResourceGroupName $rgName
-
-# For disks that belong to the selected VM, convert to Premium storage
-foreach ($disk in $vmDisks)
-{
- if ($disk.ManagedBy -eq $vm.Id)
- {
- $disk.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new($storageType)
- $disk | Update-AzDisk
- }
-}
-
-Start-AzVM -ResourceGroupName $rgName -Name $vmName
-```
-
-## Switch individual managed disks between Standard and Premium
-
-For your dev/test workload, you might want a mix of Standard and Premium disks to reduce your costs. You can choose to upgrade only those disks that need better performance. This example shows how to convert a single VM disk from Standard to Premium storage. However, by changing the $storageType variable in this example, you can convert the VM's disks type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](../sizes.md) that supports Premium storage. This example also shows how to switch to a size that supports Premium storage:
-
-```azurepowershell-interactive
-
-$diskName = 'yourDiskName'
-# resource group that contains the managed disk
-$rgName = 'yourResourceGroupName'
-# Choose between Standard_LRS, StandardSSD_LRS and Premium_LRS based on your scenario
-$storageType = 'Premium_LRS'
-# Premium capable size
-$size = 'Standard_DS2_v2'
-
-$disk = Get-AzDisk -DiskName $diskName -ResourceGroupName $rgName
-
-# Get parent VM resource
-$vmResource = Get-AzResource -ResourceId $disk.ManagedBy
-
-# Stop and deallocate the VM before changing the storage type
-Stop-AzVM -ResourceGroupName $vmResource.ResourceGroupName -Name $vmResource.Name -Force
-
-$vm = Get-AzVM -ResourceGroupName $vmResource.ResourceGroupName -Name $vmResource.Name
-
-# Change the VM size to a size that supports Premium storage
-# Skip this step if converting storage from Premium to Standard
-$vm.HardwareProfile.VmSize = $size
-Update-AzVM -VM $vm -ResourceGroupName $rgName
-
-# Update the storage type
-$disk.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new($storageType)
-$disk | Update-AzDisk
-
-Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
-```
-
-## Switch managed disks from one disk type to another
-
-Follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select the VM from the list of **Virtual machines**.
-3. If the VM isn't stopped, select **Stop** at the top of the VM **Overview** pane, and wait for the VM to stop.
-4. In the pane for the VM, select **Disks** from the menu.
-5. Select the disk that you want to convert.
-6. Select **Size + performance** from the menu.
-7. Change the **Account type** from the original disk type to the desired disk type.
-8. Select **Save**, and close the disk pane.
-
-The disk type conversion is instantaneous. You can start your VM after the conversion.
-
-## Next steps
-
-Make a read-only copy of a VM by using a [snapshot](snapshot-copy-managed-disk.md).
virtual-machines Convert Unmanaged To Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/convert-unmanaged-to-managed-disks.md
The VM will be stopped and restarted after migration is complete.
## Next steps
-[Convert standard managed disks to premium](convert-disk-storage.md)
+[Change the disk type of an Azure managed disk](../disks-convert-types.md).
Take a read-only copy of a VM by using [snapshots](snapshot-copy-managed-disk.md).
virtual-machines Image Builder Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-virtual-desktop.md
Title: Create an Azure Virtual Desktop image by using Azure VM Image Builder
description: Create an Azure VM image of Azure Virtual Desktop by using VM Image Builder and PowerShell. - Previously updated : 05/12/2021+ Last updated : 06/20/2023
Feel free to view the [template](https://raw.githubusercontent.com/azure/azvmima
Your template must be submitted to the service. Doing so downloads any dependent artifacts, such as scripts, and validates, checks permissions, and stores them in the staging resource group, which is prefixed with *IT_*. ```azurepowershell-interactive
-New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -TemplateParameterObject @{"api-Version" = "2020-02-14"} -imageTemplateName $imageTemplateName -svclocation $location
+New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -TemplateParameterObject @{"api-Version" = "2020-02-14"; "imageTemplateName" = $imageTemplateName; "svclocation" = $location}
# Optional - if you have any errors running the preceding command, run: $getStatus=$(Get-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName)
virtual-machines Migrate To Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/migrate-to-managed-disks.md
You can migrate to Managed Disks in following scenarios:
|Convert stand alone VMs and VMs in an availability set to managed disks |[Convert VMs to use managed disks](convert-unmanaged-to-managed-disks.md) | |Convert a single VM from classic to Resource Manager on managed disks |[Create a VM from a classic VHD](create-vm-specialized-portal.md) | |Convert all the VMs in a vNet from classic to Resource Manager on managed disks |[Migrate IaaS resources from classic to Resource Manager](../migration-classic-resource-manager-ps.md) and then [Convert a VM from unmanaged disks to managed disks](convert-unmanaged-to-managed-disks.md) |
-|Upgrade VMs with standard unmanaged disks to VMs with managed premium disks | First, [Convert a Windows virtual machine from unmanaged disks to managed disks](convert-unmanaged-to-managed-disks.md). Then [Update the storage type of a managed disk](convert-disk-storage.md). |
+|Upgrade VMs with standard unmanaged disks to VMs with managed premium disks | First, [Convert a Windows virtual machine from unmanaged disks to managed disks](convert-unmanaged-to-managed-disks.md). Then [Update the storage type of a managed disk](../disks-convert-types.md). |
[!INCLUDE [classic-vm-deprecation](../../../includes/classic-vm-deprecation.md)]
virtual-machines Oracle Azure Vms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-azure-vms-faq.md
+
+ Title: FAQs - Oracle on Azure VMs | Microsoft Docs
+description: FAQs - Oracle on Azure VMs
++++++ Last updated : 06/12/2023++
+# FAQs - Oracle on Azure VMs
+
+**What are the recommended Azure VM types for Oracle on Azure?**
+- **M series**: suits high memory requirements and CPU performance needs; lower I/O limits, low host-level caching, accelerated networking options.
+- **E series**: available in all regions, allows Premium SSD for the OS disk, and ephemeral storage can be used for swap.
+
+**What is the role of an Oracle Data Guard on Azure?**
+Data Guard is central to high availability and disaster recovery (DR) when you run an on-premises Oracle solution in Azure. It applies mainly to Fast-Start Failover and the Data Guard Broker and Observer, and it provides a high-availability-based architecture.
+
+**Is an Oracle Data Guard setup on Azure VMs across availability sets/zones or regions subject to ingress/egress cost?**
+Yes. There's a US$0.02/GB charge for the Data Guard redo transport to a remote standby database in another region. There's no cost for the Data Guard redo transport to a local standby database in another availability zone in the same region.
+
+**What are the different design options for Oracle migration to Azure?**
+- **Good & fast**: You can choose a solution involving Data Guard or GoldenGate, but it's not cost effective.
+- **Good & cost effective**: You can choose non-Oracle solutions such as Azure VM backup cross-region restore or Azure NetApp Files cross-region replication, but they aren't as fast, and you need some slack in your RPO/RTO requirements. Both options include cross-region transport of data in the product cost.
+
+**What is the simple bare minimal Oracle reference architecture on Azure?**
+A two-availability-zone architecture with Azure VMs.
virtual-machines Oracle Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-design.md
Last updated 10/15/2021-+
virtual-machines Oracle Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-migration.md
+
+ Title: Migrate Oracle workload to Azure VMs (IaaS)| Microsoft Docs
+description: Migrate Oracle workload to Azure VMs.
+++++ Last updated : 06/03/2023+
+# Migrate Oracle workload to Azure VMs (IaaS)
+
+This article describes how to move your on-premises Oracle workload to the Azure VM infrastructure as a service (IaaS). It's based on several considerations and recommendations defined in the Azure [cloud adoption framework](https://learn.microsoft.com/azure/cloud-adoption-framework/adopt/cloud-adoption).
+
+The migration journey starts with understanding the customer's Oracle setup, identifying the right size of Azure VMs, optimizing licensing, and deploying Oracle on Azure VMs. The key to migrating Oracle workloads to Azure IaaS is knowing how well you can prepare your VM-based architecture for Azure by following a clearly defined, sequential process. Getting a complex Oracle setup onto Azure requires a detailed understanding of each migration step and of the Azure infrastructure as a service offering. This article describes each of the nine migration steps.
+
+## Migration steps
+
+1. **Assess your Oracle workload using AWR reports**: To move your Oracle workload onto Azure, carefully [analyze the actual database workloads of the customer by using AWR reports](https://github.com/Azure/Oracle-Workloads-for-Azure/tree/main/az-oracle-sizing) and determine the best VM size on Azure that meets the workload performance requirements. Don't take the hardware specifications of the existing on-premises Oracle servers or appliances and map them one-to-one to Azure VM specifications, because most Oracle environments are heavily oversized from both a hardware and an Oracle licensing perspective.
+
+Take AWR reports from heavy-usage time periods of the databases, such as peak hours, nightly backup and batch processing, or end-of-month processing. The AWR-based right-sizing analysis takes all key performance indicators into account and provides a buffer for unexpected peaks when calculating the required VM specifications.
+
+2. **Collect the necessary AWR report data to calculate Azure VM sizing:** From the AWR report, fill in the key data required in the ['Oracle_AWR_Erstimates.xltx'](https://techcommunity.microsoft.com/t5/data-architecture-blog/estimate-tool-for-sizing-oracle-workloads-to-azure-iaas-vms/ba-p/1427183) file and determine a suitable Azure VM and the related workload (memory).
+
+3. **Arrive at the best Azure VM size for migration:** The output of the [AWR-based workload analysis](https://techcommunity.microsoft.com/t5/data-architecture-blog/using-oracle-awr-and-infra-info-to-give-customers-complete/ba-p/3361648) indicates the required amount of memory, the number of virtual cores, the number, size, and type of disks, and the number of network interfaces. However, it's still up to you to decide which Azure VM type to select among the [many that Azure offers](https://azure.microsoft.com/pricing/details/virtual-machines/series/), keeping future requirements in consideration.
+
+4. **Optimize Azure compute and choose a deployment architecture:** Finalize the VM configuration that meets the requirements by optimizing compute and licenses, and choose the right [deployment architecture](https://learn.microsoft.com/azure/virtual-machines/workloads/oracle/oracle-reference-architecture) (HA, backup, and so on).
+
+5. **Tune parameters of Oracle on Azure:** Ensure that the selected VM and deployment architecture meet the performance requirements. Two major factors are throughput and read/write IOPS; meet the requirements by choosing the right [storage](oracle-storage.md) and [backup options](oracle-database-backup-strategies.md).
+
+6. **Move your on-premises Oracle data to the Oracle on Azure VM:** Now that your required Oracle setup is done, the remaining task is to move data from on-premises to the cloud. There are many approaches. The best approaches are:
+
+- Azure Data Box: [Copy your on-premises](https://learn.microsoft.com/training/modules/move-data-with-azure-data-box/3-how-azure-data-box-family-works) data and ship it to the Azure cloud securely. This approach suits high-volume data scenarios. Data Box [provides multiple options](https://azure.microsoft.com/products/databox/data).
+- Azure Data Factory: Use a [data pipeline](https://learn.microsoft.com/azure/data-factory/connector-oracle?tabs=data-factory) to move data from on-premises to Oracle on Azure. This approach is heavily dependent on bandwidth.
+
+Depending on the size of your data, you can also select from the following available options.
+
+- **Azure Data Box Disk**:
+
+ Azure Data Box Disk is a powerful and flexible tool for businesses looking to transfer large amounts of data to Azure quickly and securely.
+
+  To learn more, see [Microsoft Azure Data Box Disk overview | Microsoft Learn](https://learn.microsoft.com/azure/databox/data-box-disk-overview).
+
+- **Azure Data Box Heavy**:
+
+ Azure Data Box Heavy is a powerful and flexible tool for businesses looking to transfer massive amounts of data to Azure quickly and securely.
+
+  To learn more about Data Box Heavy, see [Microsoft Azure Data Box Heavy overview | Microsoft Learn](https://learn.microsoft.com/azure/databox/data-box-heavy-overview).
+
+7. **Load the data received in the cloud into Oracle on the Azure VM:**
+
+Now that the data has arrived by Data Box, or Data Factory is pumping it to the file system, migrate it into the newly set up Oracle on Azure VM by using the following tools. A minimal Oracle Data Pump sketch follows the list.
+
+- RMAN - Recovery Manager
+- Oracle Data Guard
+- Goldengate with Data Guard
+- Oracle Data Pump
+
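+As an illustration of the last option, a minimal Oracle Data Pump import sketch (the directory object, dump file, log file, and schema names are placeholders; adjust them to your environment):
+
+```bash
+# Import a schema from dump files that were copied onto the VM's backup file system.
+# DATA_PUMP_DIR must point at the directory that holds the dump files.
+impdp system@ORCL \
+  directory=DATA_PUMP_DIR \
+  dumpfile=hr_schema%U.dmp \
+  logfile=hr_import.log \
+  schemas=HR \
+  parallel=4
+```
+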
+8. **Measure performance of your Oracle on Azure VM:** Demonstrate the performance of the Oracle on Azure VM using:
+
+- I/O benchmarking with VM tooling (monitoring, CPU cycles, and so on)
+
+Use the following handy tools and approaches. A minimal fio sketch follows the list.
+
+- FIO - CPU utilization/OS
+- SLOB - Oracle specific
+- Oracle Swingbench
+- AWR/statspack report (CPU, IO)
+
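+A minimal fio sketch for a mixed random read/write test against a mounted data disk (the target directory, block size, and runtime are assumptions; tune them to resemble your Oracle I/O profile):
+
+```bash
+# 8 KiB random read/write test with direct I/O, roughly mimicking Oracle datafile access
+fio --name=oracle-mixed-io \
+    --directory=/u02/oradata/fiotest \
+    --rw=randrw --rwmixread=70 \
+    --bs=8k --direct=1 \
+    --size=10G --numjobs=4 --iodepth=32 \
+    --runtime=300 --time_based --group_reporting
+```
+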
+9. **Cut over from your on-premises Oracle to the Oracle on Azure VM**: Finally, switch off your on-premises Oracle database and switch over to the Azure VM. Some checks to have in place are as follows:
+
+- If you have applications using the database, plan downtime.
+
+- Use a change control management tool and consider checking in data changes, not just code changes, into the system.
+
+## Next steps
+- [Storage options for Oracle on Azure VMs](oracle-storage.md)
virtual-machines Oracle Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md
Title: Oracle solutions on Microsoft Azure | Microsoft Docs
+ Title: Overview of Oracle Applications and solutions on Azure | Microsoft Docs
description: Learn about deploying Oracle Applications and solutions on Azure. Run entirely on Azure infrastructure or use cross-cloud connectivity with OCI. documentationcenter: ''
vm-linux Last updated 04/10/2023--+ # Overview of Oracle Applications and solutions on Azure **Applies to:** :heavy_check_mark: Linux VMs
-This article introduces capabilities to run Oracle solutions using Azure infrastructure. See also detailed introductions to available [WebLogic Server Azure Applications](oracle-weblogic.md), [Oracle VM images](oracle-vm-solutions.md) in the Azure Marketplace, and the capability to [interconnect Azure with Oracle Cloud Infrastructure (OCI)](oracle-oci-overview.md).
+In this article, you learn about running Oracle solutions using the Azure infrastructure.
## Oracle databases on Azure infrastructure-
-Run Oracle databases on Azure infrastructure using Oracle Database on Oracle Linux images available in the Azure Marketplace:
-
+Oracle supports running its Database 12.1 and higher Standard and Enterprise editions in Azure on VM images based on Oracle Linux. You can run Oracle databases on Azure infrastructure using Oracle Database on Oracle Linux images available in the Azure Marketplace.
- Oracle Database 12.2, and 18.3 Enterprise Edition - Oracle Database 12.2, and 18.3 Standard Edition-- Oracle Database 19.3-
-You can also take the following approaches:
-
+- Oracle Database 19.3
+You can also take one of the following approaches:
- Set up Oracle Database on a non-Oracle Linux image available in Azure.-- Base a solution on a custom image you create from scratch in Azure.
+- Build a solution on a custom image you create from scratch in Azure.
- Upload a custom image from your on-premises environment.
-Optionally configure your solution with multiple attached disks. You can improve database performance by installing Oracle Automated Storage Management (ASM).
-
-## WebLogic Server with Azure service integrations
+You can also choose to configure your solution with multiple attached disks, as sketched in the example that follows. You can improve database performance by installing Oracle Automated Storage Management (ASM).
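+
+A minimal sketch for attaching extra data disks to an existing VM (the resource group, VM name, disk names, sizes, and SKU are placeholders):
+
+```azurecli
+# Attach two new Premium SSD data disks to the VM for database files
+az vm disk attach --resource-group myResourceGroup --vm-name myOracleVM --name oradata01 --new --size-gb 512 --sku Premium_LRS
+az vm disk attach --resource-group myResourceGroup --vm-name myOracleVM --name oradata02 --new --size-gb 512 --sku Premium_LRS
+```
+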
+For the best performance for production workloads of Oracle Database on Azure, be sure to properly size the VM image and select the right storage options based on throughput, IOPS & latency. For instructions on how to quickly get an Oracle Database up and running in Azure using the Oracle published VM image, see [Create an Oracle Database](oracle-database-quick-create.md) in an Azure VM.
+## Deploy Oracle VM images on Microsoft Azure
+This section covers information about Oracle solutions based on virtual machine (VM) images published by Oracle in the Azure Marketplace.
+To get a list of currently available Oracle images, run the following command using the Azure CLI or Azure Cloud Shell:
-Choose from various WebLogic Server Azure Applications to accelerate your cloud journey. Several preconfigured Azure service integrations are available, including database, Azure App Gateway, and Azure Active Directory.
+``az vm image list --publisher oracle --output table --all``
-## Applications on Oracle Linux and WebLogic Server
+The images are bring-your-own-license. You're charged only for the costs of compute, storage, and networking incurred running a VM. You can also choose to build your solutions on a custom image that you create from scratch in Azure or upload a custom image from your on-premises environment.
+>[!IMPORTANT]
+>You require a proper license to use Oracle software and a current support agreement with Oracle. Oracle has guaranteed license mobility from on-premises to Azure. For more information about license mobility, see the [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/azure/interconnect/faq/).
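+
+As an illustration, you might then create a VM from one of these images (a sketch; the resource names and VM size are placeholders, and the image URN shown is the Oracle Database 19c offer at the time of writing, so confirm the current URN with the list command above):
+
+```azurecli
+# Create a VM from the Oracle Database 19c marketplace image
+az vm create \
+  --resource-group myResourceGroup \
+  --name myOracleVM \
+  --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
+  --size Standard_E4ds_v5 \
+  --admin-username azureuser \
+  --generate-ssh-keys
+```
+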
-Run enterprise applications in Azure on supported Oracle Linux images. The following virtual machine images are available in the Azure Marketplace:
+## Applications on Oracle Linux and WebLogic server
+Run enterprise applications on WebLogic Server in Azure on supported Oracle Linux images. For more information, see the WebLogic documentation, [Oracle WebLogic Server on Azure Solution Overview](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/oracle.oraclelinux-wls-cluster).
-- Oracle WebLogic Server 12.1.2-- Oracle Linux with the Unbreakable Enterprise Kernel (UEK) 6.8, 6.9, 6.10, 7.3 through 7.7, 8.0, 8.1.-
-## High availability and disaster recovery options
-
-For high availability in region, configure any of the following technologies on Azure infrastructure:
+## WebLogic Server with Azure service integrations
+Oracle and Microsoft are collaborating to bring WebLogic Server to the Azure Marketplace in the form of an Azure Application offering. For more information about these offers, see [What are solutions for running Oracle WebLogic Server](oracle-weblogic.md).
-- [Oracle Data Guard](https://docs.oracle.com/cd/B19306_01/server.102/b14239/concepts.htm#g1049956)-- [Active Data Guard with FSFO](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/https://docsupdatetracker.net/index.html)-- [Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/sharding-overview.html)-- [GoldenGate](https://www.oracle.com/middleware/technologies/goldengate.html)
+### Oracle WebLogic Server VM images
+**Clustering is supported on Enterprise Edition only**. You're licensed to use WebLogic clustering only when you use the Enterprise Edition of Oracle WebLogic Server. Don't use clustering with Oracle WebLogic Server Standard Edition.
+**UDP multicast isn't supported**. Azure supports UDP unicasting, but not multicasting or broadcasting. Oracle WebLogic Server can rely on Azure UDP unicast capabilities. For best results relying on UDP unicast, we recommend that the WebLogic cluster size be kept static, or be kept to no more than 10 managed servers.
+**Oracle WebLogic Server expects public and private ports to be the same for T3 access**, for example, when using Enterprise JavaBeans (EJB). Consider a multi-tier scenario where a service layer application is running on an Oracle WebLogic Server cluster consisting of two or more VMs, in a virtual network named SLWLS. The client tier is in a different subnet in the same virtual network, running a simple Java program trying to call EJB in the service layer. Because you must load balance the service layer, a public load-balanced endpoint needs to be created for the VMs in the Oracle WebLogic Server cluster. If the private port specified is different from the public port, an error occurs. For example, if you use ``7006:7008``, the following error occurs because for any remote T3 access, Oracle WebLogic Server expects the load balancer port and the WebLogic managed server port to be the same.
-You can also set up these configurations across multiple Azure regions for added availability and disaster recovery.
+``[java] javax.naming.CommunicationException [Root exception is java.net.ConnectException: t3://example.cloudapp.net:7006:``
-Use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) to orchestrate and manage disaster recovery for your Oracle Linux VMs in Azure. You can also use Site Recovery for your physical servers with Oracle Data Guard or Oracle consistent backup measures that meet the Recovery Point Objective and Recovery Time Objective (RPO/RTO). Site Recovery has a [block change limit](../../../site-recovery/azure-to-azure-support-matrix.md) for the storage used by Oracle database.
+``Bootstrap to: example.cloudapp.net/138.91.142.178:7006' over: 't3' got an error or timed out]``
-## Backup Oracle workloads
+ In the preceding case, the client is accessing port 7006, which is the load balancer port, and the managed server is listening on 7008, which is the private port. This restriction is applicable only for T3 access, not HTTP.
-- Back up your Oracle VMs using [Azure Backup](../../../backup/backup-overview.md).
+To avoid this issue, use one of the following workarounds:
-- Back up your Oracle Database using Oracle RMAN. Optionally use [Azure Blob Fuse](../../../storage/blobs/storage-how-to-mount-container-linux.md) to mount a [highly redundant Azure Blob Storage account](../../../storage/common/storage-redundancy.md) and write your RMAN backups to it for added resiliency.
+- Use the same private and public port numbers for load balanced endpoints dedicated to T3 access.
+- Include the following JVM parameter when starting Oracle WebLogic Server:
+``-Dweblogic.rjvm.enableprotocolswitch=true``
-## Integrate of Azure with Oracle Cloud Infrastructure
+- Dynamic clustering and load balancing limitations. Suppose you want to use a dynamic cluster in Oracle WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This approach can be done as long as you use a fixed port number for each of the managed servers, not dynamically assigned from a range, and don't start more managed servers than there are machines the administrator is tracking. There should be no more than one managed server per VM.
+ If your configuration results in more Oracle WebLogic Servers being started than there are VMs, it isn't possible for more than one of those instances of Oracle WebLogic Servers to bind to a given port number. That is, if multiple Oracle WebLogic Server instances share the same virtual machine, the others on that VM fail.
+ If you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing isn't possible because Azure doesn't support mapping from a single public port to multiple private ports, as would be required for this configuration.
+- Multiple instances of Oracle WebLogic Server on a VM. Depending on your deployment requirements, you might consider running multiple instances of Oracle WebLogic Server on the same VM, if the VM is large enough. For example, on a midsize VM, which contains two cores, you could choose to run two instances of Oracle WebLogic Server. However, we still recommend that you avoid introducing single points of failure into your architecture. Running multiple instances of Oracle WebLogic Server on just one VM would be such a single point.
-You can run Oracle Applications in Azure infrastructure, connected to backend databases in Oracle Cloud Infrastructure (OCI). This solution uses the following capabilities:
+Using at least two VMs could be a better approach. Each VM can run multiple instances of Oracle WebLogic Server. Each instance of Oracle WebLogic Server could still be part of the same cluster. However, it's currently not possible to use Azure to load-balance endpoints that are exposed by such Oracle WebLogic Server deployments within the same VM. Azure Load Balancer requires the load-balanced servers to be distributed among unique VMs.
+## High availability and disaster recovery options
+ When using Oracle solutions in Azure, you're responsible for implementing a high availability and disaster recovery solution to avoid any downtime.
+You can also implement high availability and disaster recovery for Oracle Database Enterprise Edition by using Data Guard, Active Data Guard, or Oracle GoldenGate. The approach requires two databases on two separate VMs, which should be in the same virtual network to ensure they can access each other over the private persistent IP address.
-- **Cross-cloud networking**. Use the direct interconnect available between Azure ExpressRoute and Oracle FastConnect to establish high-bandwidth, private, and low-latency connections between the application and the database layer.-- **Integrated identity**. Set up federated identity between Azure Active Directory (Azure AD) and Oracle Identity Cloud Service (IDCS) to create a single identity source for the solutions. Enable single sign-on to manage resources across OCI and Azure.
+We recommend placing the VMs in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. If you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway. To walk through the basic setup procedure on Azure, see Implement Oracle Data Guard on an Azure Linux virtual machine.
-### Deploy Oracle Applications on Azure
+With Oracle Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see Active Data Guard and GoldenGate. If you need read-write access to the copy of the database, you can use Oracle Active Data Guard.
+To walk through the basic setup procedure on Azure, see [Implement Oracle Golden Gate on an Azure Linux VM](configure-oracle-golden-gate.md).
-Use Terraform templates to set up Azure infrastructure and install Oracle Applications. For more information, see [Terraform on Azure](/azure/developer/terraform/).
+In addition to having a high availability and disaster recovery solution architected in Azure, you should have a backup strategy in place to restore your database.
+## Backup Oracle workloads
+Different [backup strategies](oracle-database-backup-strategies.md) are available for Oracle on Azure VMs. The following are among the options:
+- Using [Azure Files](oracle-database-backup-azure-storage.md)
+- Using [Azure Backup](oracle-database-backup-azure-backup.md)
+- Using [Oracle RMAN streaming](oracle-rman-streaming-backup.md) backup
+## Deploy Oracle applications on Azure
+Use Terraform templates to set up Azure infrastructure and install Oracle applications. For more information, see [Terraform on Azure](https://learn.microsoft.com/azure/developer/terraform/?branch=main&branchFallbackFrom=pr-en-us-234143).
Oracle has certified the following applications to run in Azure when connecting to an Oracle database by using the Azure with Oracle Cloud interconnect solution:- - E-Business Suite - JD Edwards EnterpriseOne - PeopleSoft - Oracle Retail applications - Oracle Hyperion Financial Management
-Also deploy custom applications in Azure that connect with OCI and other Azure services.
-
-### Set up Oracle databases in OCI
-
-Use Oracle Database Cloud Services with Oracle software running in Azure. These services include:
--- Oracle Autonomous Database-- Oracle Real Application Clusters (RAC)-- Oracle Exadata-- Oracle database as a service (DBaaS)-- Oracle Single Node.-
-Learn more about [OCI database options](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/databaseoverview.htm).
-
+You can deploy custom applications in Azure that connect with OCI and other Azure services.
+## Support for JD Edwards
+According to Oracle Support, JD Edwards EnterpriseOne versions 9.2 and above are supported on any public cloud offering that meets their specific Minimum Technical Requirements (MTR). You need to create custom images that meet their MTR specifications for operating system and software application compatibility. For more information, see [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html).
## Licensing-
-Deployment of Oracle Applications in Azure is based on a bring-your-own-license model. This model assumes that you have licenses to use Oracle software and that you have a current support agreement in place with Oracle. Oracle has guaranteed license mobility from on-premises to Azure. See the Oracle-Azure [FAQ](https://www.oracle.com/cloud/technologies/oracle-azure-faq.html).
-
+Deployment of Oracle solutions in Azure is based on a bring-your-own-license model. This model assumes that you have licenses to use Oracle software and that you have a current support agreement in place with Oracle.
+Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
+Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](https://learn.microsoft.com/azure/virtual-machines/sizes-memory) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](https://learn.microsoft.com/azure/virtual-machines/constrained-vcpu?branch=main) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count.
+When you migrate Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/azure/interconnect/faq/).
## Next steps--- Learn more about [WebLogic Server Azure Applications](oracle-weblogic.md) and the Azure service integrations they support.--- Learn more about deploying [Oracle VM images](oracle-vm-solutions.md) in Azure infrastructure.--- Learn more about how to [interconnect Azure with OCI](oracle-oci-overview.md).--- Check out the [Oracle on Azure overview session](https://www.pluralsight.com/courses/microsoft-ignite-session-57) from Ignite 2019.
+You now have an overview of current Oracle databases and solutions based on VM images in Microsoft Azure. Your next step is to deploy your first Oracle database on Azure.
+- [Create an Oracle database on Azure](oracle-database-quick-create.md)
virtual-machines Oracle Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md
Previously updated : 04/10/2023-
-
Last updated : 6/13/2023+ # Reference architectures for Oracle Database Enterprise Edition on Azure
virtual-machines Oracle Rman Streaming Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-rman-streaming-backup.md
+
+ Title: Stream database backups using Oracle Recovery Manager | Microsoft Docs
+description: Streaming database backups using Oracle Recovery Manager (RMAN).
++++++ Last updated : 06/5/2023++
+# Stream database backups using Oracle Recovery Manager
+
+In this article, you learn about how Azure VMs support streaming database backups with Oracle Recovery Manager (RMAN). The streaming process uses either the destination of a virtual tape library package, or writes those backups directly to a local or remote filesystem. This article describes how various virtual tape library packages are integrated with Oracle RMAN. For a few of the packages, you see links to the Azure Marketplace.
+
+The backup and restore utility Oracle RMAN (Recovery Manager) can be configured to capture backup images of Oracle databases and stream those backup images to two different types of destinations.
+
+## Device type SBT
+
+The serial backup tape (SBT) type of destination was originally designed for interacting with tape drives, though not directly. To simplify the interaction with multiple tape devices available when RMAN was created, Oracle developed an application programming interface (API) to interact with software packages to manage tape devices.
+
+The device type SBT sends commands to software packages through its defined API. The software package vendors create corresponding "plug-ins" that interact according to the specifications of the API to translate the RMAN commands for the software package. Oracle doesn't charge more for this functionality, but various software vendors may charge licensing and support fees for their "plug-ins" to connect to the API for RMAN published by Oracle.
+
+To use device type SBT, the corresponding media management vendor (MMV) software must first be installed on the OS platform on which the Oracle database runs. Backups to SBT aren't available "out of the box" following an Oracle database installation. There's no limit to the number of MMV packages that can be connected to an Oracle database instance, but it's exceedingly rare for more than one to be in use at any time.
+
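+A minimal sketch of pointing RMAN at an MMV plug-in (run as the oracle OS user; the library path and channel settings are placeholders supplied by the MMV package you install):
+
+```bash
+rman target / <<'EOF'
+# Register the vendor media management library and take a backup through it
+CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'SBT_LIBRARY=/opt/mmv/lib/libobk.so';
+CONFIGURE DEFAULT DEVICE TYPE TO sbt;
+BACKUP DATABASE PLUS ARCHIVELOG;
+EOF
+```
+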
+Many of these software packages, originally available for on-premises installation, are also available in the Azure Marketplace.
+- CommVault
+- Veritas NetBackup
+- Dell PowerProtect DD Virtual Edition (DDVE)
+- Veeam Backup & Replication
+
+Other software packages can be found by searching the Azure Marketplace.
+
+## Device type disk
+
+A more universal configuration option for Oracle RMAN is device type disk. For this option, streamed database backup images are written to OS filesystem directories directly addressable from the OS image on which the Oracle database runs. The storage used for backups is either directly mounted on the OS platform, or remotely mounted as a fileshare.
+
+There are no extra licensing or support charges for this option because the DISK adapter for Oracle RMAN is entirely contained within the Oracle RDBMS software.
+
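+A minimal sketch of configuring device type disk so that backup pieces land on a mounted file system (the backup path is a placeholder for whichever of the storage options below you mount; run as the oracle OS user):
+
+```bash
+rman target / <<'EOF'
+# Send backup pieces to a mounted Azure file system and run a full backup
+CONFIGURE DEFAULT DEVICE TYPE TO DISK;
+CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/mnt/orabackup/%d_%U.bkp';
+BACKUP DATABASE PLUS ARCHIVELOG;
+EOF
+```
+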
+There are six storage options for Oracle RMAN backups within an Azure VM, of which five are Azure fileshares.
+
+- Locally attached managed disk
+- Azure blob over NFS
+- Azure blobfuse 2.0
+- Azure Files standard over CIFS/SMB
+- Azure Files premium over NFS
+- Azure NetApp Files
+
+Each of these options has advantages or disadvantages in the areas of capacity, pricing, performance, durability. The following table is provided to allow easy comparison of features and prices.
++
+| **Type** | **Tier** | **Docs** | **Mount protocol for VM** | **Support model** | **Prices** | **Notes** |
+||||||||
+| **Managed disk** | Standard HDD | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | Standard SSD | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | Premium SSD | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | Premium SSD v2 | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Managed disk** | UltraDisk | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 |
+| **Azure blob** | Block blobs | [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](https://learn.microsoft.com/azure/storage/blobs/network-file-system-protocol-support-how-to?tabs=linux) | NFS v3.0 | Microsoft | [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) | 2 |
+| **Azure** **blobfuse** | v1 | [How to mount Azure Blob Storage as a file system with BlobFuse v1](https://learn.microsoft.com/azure/storage/blobs/storage-how-to-mount-container-linux?tabs=RHEL) | Fuse | Open source/Github | n/a | 3, 5, 6 |
+| **Azure** **blobfuse** | v2 | [What is BlobFuse? - BlobFuse2](https://learn.microsoft.com/azure/storage/blobs/blobfuse2-what-is) | Fuse | Open source/Github | n/a | 3, 5, 6 |
+| **Azure Files** | Standard | [What is Azure Files?](https://learn.microsoft.com/azure/storage/files/storage-files-introduction) | SMB/CIFS | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 6 |
+| **Azure Files** | Premium | [What is Azure Files?](https://learn.microsoft.com//azure/storage/files/storage-files-introduction) | SMB/CIFS, NFS v4.1 | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 7 |
+| **Azure NetApp Files** | Standard | [Azure NetApp Files ](https://docs.netapp.com/us-en/cloud-manager-azure-netapp-files/) | SMB/CIFS, NFS v3.0, NFS v4.1 | Microsoft/NetApp | [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | 4, 8, 11 |
+| **Azure NetApp Files** | Premium | [Azure NetApp Files ](https://docs.netapp.com/us-en/cloud-manager-azure-netapp-files/) | SMB/CIFS, NFS v3.0, NFS v4.1 | Microsoft/NetApp | [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | 4, 9, 11 |
+| **Azure NetApp Files** | Ultra | [Azure NetApp Files](https://docs.netapp.com/us-en/cloud-manager-azure-netapp-files/) | SMB/CIFS, NFS v3.0, NFS v4.1 | Microsoft/NetApp | [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | 4, 10, 11 |
+
+**Legend:**
+
+<sup>1</sup> Restricted by device-level and cumulative VM-level I/O limits on IOPS and I/O throughput.
+
+- device limits are specified in the pricing documentation.
+- cumulative limits for VM sizes are specified in the documentation [Sizes for virtual machines in Azure](https://learn.microsoft.com/azure/virtual-machines/sizes)
+
+<sup>2 </sup>Choose _hierarchical storage_ in 1<sup>st</sup> drop-down, then _blob only_ in the 2<sup>nd</sup> drop-down.
+
+<sup>3</sup> Choose _flat storage_ in 1<sup>st</sup> drop-down, then _blob only_ in the 2<sup>nd</sup> drop-down.
+
+<sup>4</sup> Uses CIFS protocol for which later versions of RHEL/OEL Linux are recommended.
+
+- don't use lower Linux versions (that is, RHEL7/OEL7 below 7.5) for CIFS
+- consider using mount option ``cache=none`` for Oracle archived redo log files use-case with CIFS mounts.
+
+<sup>5</sup> Supported by the Azure Storage product group within Microsoft as an open-source project on GitHub.
+
+<sup>6</sup> _hot_ usage tier recommended.
+
+<sup>7</sup> _premium_ usage tier recommended.
+
+<sup>8</sup> I/O throughput of 16 MiB/s per TiB allocated.
+
+<sup>9</sup> I/O throughput of 64 MiB/s per TiB allocated.
+
+<sup>10</sup> I/O throughput of 128 MiB/s per TiB allocated.
+
+<sup>11</sup> [ANF calculator](https://anftechteam.github.io/calc/) is useful for quick pricing calculations.
+
+## Next steps
+[Storage options for Oracle on Azure VMs](oracle-storage.md)
+++
virtual-machines Oracle Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-storage.md
+
+ Title: Storage options for Oracle on Azure VMs | Microsoft Docs
+description: Storage options for Oracle on Azure VMs
++++++ Last updated : 06/13/2023+
+# Storage options for Oracle on Azure VMs
+In this article, you learn about the storage choices available to you for Oracle on Azure VMs. The choices of database storage affect how well your Oracle tasks run, how reliable they are, and how much they cost. When exploring the upper limits of performance, it's important to recognize and reduce any constraints that could falsely skew results. Oracle database and applications set the bar high due to the intense demands on storage I/O with a mixed read and write workload driven by a single compute node. Understanding the choices of available storage options and their performance capabilities is the key to successfully migrating Oracle to Azure VMs. This article describes all the Azure native storage offerings with their capabilities.
+
+## Azure managed disks versus shared files
+Throughput and IOPS are limited by the SKU of the selected disk and of the virtual machine, whichever is lower. Managed disks are less expensive and simpler to manage than shared storage; however, managed disks may offer lower IOPS and throughput than a given virtual machine allows.
+
+For example, while Azure's Ultra Disks provide 160k IOPS and 2k MB/sec throughput per disk, that would become a bottleneck when attached to a Standard_L80s_v2 virtual machine, which allows reads of more than 3 million IOPS and 20k MB/sec throughput. When high IOPS are required, consider selecting an appropriate virtual machine together with shared storage choices like [Azure Elastic SAN](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-introduction) or [Azure NetApp Files](https://learn.microsoft.com/azure/azure-netapp-files/performance-oracle-multiple-volumes). The sketch that follows shows one way to compare the disk and VM limits.
+
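+A minimal sketch for comparing the two limits (the region, VM size, resource group, and disk name are placeholders; the capability names are assumed from the compute SKU API):
+
+```azurecli
+# Uncached disk limits advertised for a VM size in a region
+az vm list-skus --location eastus --size Standard_E32ds_v5 \
+  --query "[0].capabilities[?name=='UncachedDiskIOPS' || name=='UncachedDiskBytesPerSecond']" -o table
+
+# Provisioned limits of an existing managed disk
+az disk show -g myResourceGroup -n myDataDisk \
+  --query "{iops:diskIopsReadWrite, mbps:diskMBpsReadWrite}" -o table
+```
+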
+ ## Azure managed disks
+
+[Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) are block-level storage volumes managed by Azure for use with Azure Virtual Machines (VMs). They come in several performance tiers (Ultra Disk, Premium SSD, Standard SSD, and Standard HDD), offering different performance and cost options.
+
+- **Ultra Disk**: Azure [Ultra Disks](https://learn.microsoft.com/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal) are high-performing managed disks designed for I/O-intensive workloads, including Oracle databases. They deliver high throughput and low latency, offering unparalleled performance for your data applications. They can deliver 160,000 I/O operations per second (IOPS) and 2,000 MB/s per disk with dynamic scalability, and they're compatible with the ESv3, DSv3, FS, and M VM series, which are commonly used to host Oracle on Azure. A provisioning sketch follows this list.
+
+- **Premium SSD**: Azure [Premium SSDs](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance) are high-performance managed disks designed for production and performance-sensitive workloads. They offer a balance between cost and performance, making them a popular choice for many business applications, including Oracle databases. They can deliver 20,000 I/O operations per second (IOPS) per disk, are highly available (99.9%), and are compatible with the DS, GS, and FS VM series.
+
+- **Standard SSD**: Suitable for dev/test environments and noncritical workloads.
+
+- **Standard HDD**: Cost-effective storage for infrequently accessed data.
+
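+A minimal provisioning sketch for an Ultra Disk (the resource group, names, zone, size, and performance targets are placeholders; the VM size and region must support Ultra Disks):
+
+```azurecli
+# Create an Ultra Disk with explicit IOPS and throughput targets
+az disk create -g myResourceGroup -n oraUltraDisk \
+  --size-gb 1024 --sku UltraSSD_LRS --zone 1 \
+  --disk-iops-read-write 80000 --disk-mbps-read-write 1200
+
+# Attach it to a VM that has Ultra Disk compatibility enabled
+az vm disk attach -g myResourceGroup --vm-name myOracleVM --name oraUltraDisk
+```
+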
+## Azure Elastic SAN
+
+[Azure Elastic SAN](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-introduction) is a cloud-native service that offers a scalable, cost-effective, high-performance, and comprehensive storage solution for a range of compute options. Gain higher resiliency and minimize downtime with rapid provisioning. It can deliver up to 64,000 IOPS and supports volume groups.
+
+## ANF large volumes for Oracle
+
+[Azure NetApp Files](https://learn.microsoft.com/azure/azure-netapp-files/performance-oracle-multiple-volumes) is able to meet the needs of highly demanding Oracle workloads. It's a cloud-native service that offers a scalable and comprehensive storage choice, and it can deliver up to 850,000 I/O requests per second and 6,800 MiB/s of storage throughput. It also provides multihost capabilities that can drive I/O totaling over 30,000 MiB/s across three hosts running in parallel. Azure NetApp Files also provides comprehensive [performance benchmarking](https://learn.microsoft.com/azure/azure-netapp-files/performance-benchmarks-linux) for Oracle on VMs.
+
+## Lightbits on Azure
+
+The [Lightbits](https://www.lightbitslabs.com/azure/) Cloud Data Platform provides scalable, cost-efficient, high-performance storage that is easy to consume on Azure. It removes the bottlenecks associated with native storage on the public cloud, such as limits on scalable performance and consistently low latency, while offering the rich data services and resiliency that enterprises rely on. It can deliver up to 1 million IOPS per volume and up to 3 million IOPS per VM, and a Lightbits cluster can scale both vertically and horizontally. Lightbits supports different sizes of [Lsv3](https://learn.microsoft.com/azure/virtual-machines/lsv3-series) and [Lasv3](https://learn.microsoft.com/azure/virtual-machines/lasv3-series) VMs for its clusters: L32sv3/L32asv3 (7.68 TB), L48sv3/L48asv3 (11.52 TB), L64sv3/L64asv3 (15.36 TB), and L80sv3/L80asv3 (19.20 TB).
+
+## Next steps
+- [Deploy a Premium SSD to an Azure virtual machine](https://learn.microsoft.com/azure/virtual-machines/disks-deploy-premium-v2?tabs=azure-cli)
+- [Deploy an Elastic SAN](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal)
+- [Set up Azure NetApp Files and create an NFS volume](https://learn.microsoft.com/azure/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes?tabs=azure-portal)
+- [Create a Lightbits solution on an Azure VM](https://www.lightbitslabs.com/resources/lightbits-on-azure-solution-brief/)
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
description: Learn about VPN Gateway resources and configuration settings.
Previously updated : 04/07/2023 Last updated : 06/20/2023 ms.devlang: azurecli
Before you create a VPN gateway, you must create a gateway subnet. The gateway s
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
-When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29 (applicable to Basic SKU only), we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.). This accommodates most configurations.
+When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. While it's possible to create a gateway subnet as small as /29 (applicable to the Basic SKU only), all other SKUs require a gateway subnet of size /27 or larger (/27, /26, /25, and so on). You may want to create a gateway subnet larger than /27 so that the subnet has enough IP addresses to accommodate possible future configurations.
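
If you prefer the Azure CLI over PowerShell, a /27 gateway subnet can be added to an existing virtual network roughly as sketched below. The resource group, virtual network name, and address prefix are placeholders; the subnet name must be GatewaySubnet.

```azurecli
# Add a /27 gateway subnet to an existing virtual network.
# The subnet must be named GatewaySubnet; other values are placeholders.
az network vnet subnet create \
  --resource-group vpn-rg \
  --vnet-name hub-vnet \
  --name GatewaySubnet \
  --address-prefixes 10.1.255.0/27
```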
The following Resource Manager PowerShell example shows a gateway subnet named GatewaySubnet. You can see the CIDR notation specifies a /27, which allows for enough IP addresses for most configurations that currently exist.
web-application-firewall Afds Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md
WAF prevents malicious attacks close to the attack sources, before they enter yo
![Azure web application firewall](../media/overview/wafoverview.png)
-> [!NOTE]
-> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../../frontdoor/front-door-ddos.md).
Azure Front Door has [two tiers](../../frontdoor/standard-premium/overview.md): Front Door Standard and Front Door Premium. WAF is natively integrated with Front Door Premium with full capabilities. For Front Door Standard, only [custom rules](#custom-authored-rules) are supported.
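
To make the tier difference concrete, custom rules (supported on both Standard and Premium) can be authored on a Front Door WAF policy with the Azure CLI, roughly as sketched below. This assumes the `front-door` CLI extension; the policy name, resource group, and IP range are placeholders, and parameter details may differ from the current extension.

```azurecli
# Create a Front Door WAF policy (Premium shown; Standard_AzureFrontDoor supports custom rules only).
az network front-door waf-policy create \
  --resource-group waf-rg \
  --name contosoWafPolicy \
  --sku Premium_AzureFrontDoor \
  --mode Prevention

# Add a custom rule that blocks requests from a sample IP range.
# --defer holds the rule locally until a match condition is added.
az network front-door waf-policy rule create \
  --resource-group waf-rg \
  --policy-name contosoWafPolicy \
  --name blockSampleRange \
  --priority 100 \
  --rule-type MatchRule \
  --action Block \
  --defer

az network front-door waf-policy rule match-condition add \
  --resource-group waf-rg \
  --policy-name contosoWafPolicy \
  --name blockSampleRange \
  --match-variable RemoteAddr \
  --operator IPMatch \
  --values "203.0.113.0/24"
```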
web-application-firewall Waf Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-sentinel.md
description: This article shows you how to use Microsoft Sentinel with Azure Web
Previously updated : 08/16/2022 Last updated : 06/19/2023
To enable log analytics for each resource, go to your individual Azure Front Doo
1. Select an already active workspace or create a new workspace.
-1. On the left side panel under **Configuration** select **Data Connectors**.
+1. In Microsoft Sentinel, under **Content management**, select **Content hub**.
+
+1. Find and select the **Azure Web Application Firewall** solution.
-1. Search for **Azure web application firewall** and select **Azure web application firewall (WAF)**. Select **Open connector** page on the bottom right.
+1. On the toolbar at the top of the page, select **Install/Update**.
- :::image type="content" source="media//waf-sentinel/data-connectors.png" alt-text="Data connectors":::
+1. In Microsoft Sentinel, on the left-hand side under **Configuration**, select **Data Connectors**.
+
+1. Search for and select **Azure Web Application Firewall (WAF)**. Select **Open connector page** on the bottom right.
+
+ :::image type="content" source="media//waf-sentinel/web-application-firewall-data-connector.png" alt-text="Screenshot of the data connector in Microsoft Sentinel.":::
1. If you haven't already done so, follow the instructions under **Configuration** for each WAF resource that you want to collect log analytics data for.
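
That per-resource configuration boils down to creating a diagnostic setting that streams the WAF logs to the Log Analytics workspace backing Microsoft Sentinel. A minimal Azure CLI sketch for an Azure Front Door profile follows; both resource IDs are placeholders, and the log category name can differ by resource type and tier (for example, Application Gateway uses a different firewall log category).

```azurecli
# Stream Front Door WAF logs to the Log Analytics workspace used by Microsoft Sentinel.
# Both resource IDs below are placeholders.
frontDoorId="/subscriptions/<sub-id>/resourceGroups/waf-rg/providers/Microsoft.Cdn/profiles/contoso-afd"
workspaceId="/subscriptions/<sub-id>/resourceGroups/sentinel-rg/providers/Microsoft.OperationalInsights/workspaces/sentinel-ws"

az monitor diagnostic-settings create \
  --name waf-to-sentinel \
  --resource "$frontDoorId" \
  --workspace "$workspaceId" \
  --logs '[{"category": "FrontDoorWebApplicationFirewallLog", "enabled": true}]'
```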