Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Application Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md | To take advantage of this flow, your application can use an authentication libra ### Implicit grant flow -Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the [implicit grant flow](implicit-flow-single-page-application.md) or your application is implemented to use implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**. +Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), only support the [implicit grant flow](implicit-flow-single-page-application.md) or your application is implemented to use implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**. We **don't recommend** this approach. |
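The change above keeps steering readers away from implicit grant toward the authorization code flow with PKCE. As a hedged illustration (not code from the article; the tenant, policy, client ID, and redirect URI are placeholders), the shape of an MSAL.js 2.x–style configuration that opts into the recommended flow might look like:

```typescript
// Sketch of an msal-browser-style configuration for the authorization code
// flow with PKCE. All values below are placeholders, not taken from the docs.
interface AuthConfig {
  auth: {
    clientId: string;
    authority: string;          // B2C authority including the user-flow (policy) name
    knownAuthorities: string[]; // B2C domains MSAL should trust
    redirectUri: string;
  };
  cache: { cacheLocation: "localStorage" | "sessionStorage" };
}

const msalConfig: AuthConfig = {
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // placeholder app registration ID
    authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_signupsignin",
    knownAuthorities: ["contoso.b2clogin.com"],
    redirectUri: "http://localhost:4200",
  },
  cache: { cacheLocation: "sessionStorage" },
};
```

With msal-browser, listing the b2clogin.com domain in `knownAuthorities` is what allows MSAL to trust the B2C authority; a registration using this configuration gets tokens via the code exchange rather than directly from the authorize endpoint.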
active-directory-b2c | Enable Authentication Angular Spa App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app-options.md | Familiarize yourself with the article [Configure authentication in an Angular SP You can configure your single-page application to sign in users with MSAL.js in two ways: -- **Pop-up window**: The authentication happens in a pop-up window, and the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. However, there are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).+- **Pop-up window**: The authentication happens in a pop-up window, and the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. However, there are known issues with pop-up windows on Internet Explorer. - To sign in with pop-up windows, in the `src/app/app.component.ts` class, use the `loginPopup` method. - In the `src/app/app.module.ts` class, set the `interactionType` attribute to `InteractionType.Popup`. - To sign out with pop-up windows, in the `src/app/app.component.ts` class, use the `logoutPopup` method. You can also configure `logoutPopup` to redirect the main window to a different page, such as the home page or sign-in page, after sign-out is complete by passing `mainWindowRedirectUri` as part of the request. 
const msalConfig = { [!INCLUDE [active-directory-b2c-app-integration-logging](../../includes/active-directory-b2c-app-integration-logging.md)] -To configure Angular [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/logging.md), in *src/app/auth-config.ts*, configure the following keys: +To configure Angular [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/logging.md), in *src/app/auth-config.ts*, configure the following keys: - `loggerCallback` is the logger callback function. - `logLevel` lets you specify the level of logging. Possible values: `Error`, `Warning`, `Info`, and `Verbose`. |
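The logging keys the row above names (`loggerCallback`, `logLevel`) can be sketched as a plain object so the example runs without the MSAL packages installed. The numeric level values mirror `@azure/msal-browser`'s `LogLevel` enum and the level-filtering behavior is an assumption here, shown explicitly in the callback:

```typescript
// Hedged sketch of MSAL-style logger options. LogLevel values
// (Error=0 .. Verbose=3) mirror @azure/msal-browser's enum.
enum LogLevel { Error = 0, Warning = 1, Info = 2, Verbose = 3 }

const configuredLevel = LogLevel.Warning; // log only Warning and Error
const messages: string[] = [];

const loggerOptions = {
  logLevel: configuredLevel,
  piiLoggingEnabled: false, // keep personal data out of logs
  loggerCallback: (level: LogLevel, message: string): void => {
    // Honor the configured level: lower numeric value = more severe.
    if (level <= configuredLevel) messages.push(message);
  },
};

loggerOptions.loggerCallback(LogLevel.Error, "token renewal failed"); // kept
loggerOptions.loggerCallback(LogLevel.Verbose, "cache hit");          // filtered out
```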
active-directory-b2c | Enable Authentication Angular Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app.md | The sample code consists of the following components: |||| | auth-config.ts| Constants | This configuration file contains information about your Azure AD B2C identity provider and the web API service. The Angular app uses this information to establish a trust relationship with Azure AD B2C, sign in and sign out the user, acquire tokens, and validate the tokens. | | app.module.ts| [Angular module](https://angular.io/guide/architecture-modules)| This component describes how the application parts fit together. This is the root module that's used to bootstrap and open the application. In this walkthrough, you add some components to the *app.module.ts* module, and you start the MSAL library with the MSAL configuration object. |-| app-routing.module.ts | [Angular routing module](https://angular.io/tutorial/toh-pt5) | This component enables navigation by interpreting a browser URL and loading the corresponding component. In this walkthrough, you add some components to the routing module, and you protect components with [MSAL Guard](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-guard.md). Only authorized users can access the protected components. | +| app-routing.module.ts | [Angular routing module](https://angular.io/tutorial/toh-pt5) | This component enables navigation by interpreting a browser URL and loading the corresponding component. In this walkthrough, you add some components to the routing module, and you protect components with [MSAL Guard](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/msal-guard.md). Only authorized users can access the protected components. 
| | app.component.* | [Angular component](https://angular.io/guide/architecture-components) | The `ng new` command created an Angular project with a root component. In this walkthrough, you change the *app* component to host the top navigation bar. The navigation bar contains various buttons, including sign-in and sign-out buttons. The `app.component.ts` class handles the sign-in and sign-out events. | | home.component.* | [Angular component](https://angular.io/guide/architecture-components)|In this walkthrough, you add the *home* component to render the home page for anonymous access. This component demonstrates how to check whether a user has signed in. | | profile.component.* | [Angular component](https://angular.io/guide/architecture-components) | In this walkthrough, you add the *profile* component to learn how to read the ID token claims. | export class AppRoutingModule { } In this section, you add the sign-in and sign-out buttons to the *app* component. In the *src/app* folder, open the *app.component.ts* file and make the following changes: 1. Import the required components.-1. Change the class to implement the [OnInit method](https://angular.io/api/core/OnInit). The `OnInit` method subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. Use this event to know the status of user interactions, particularly to check that interactions are completed. +1. Change the class to implement the [OnInit method](https://angular.io/api/core/OnInit). The `OnInit` method subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/events.md) `inProgress$` observable event. Use this event to know the status of user interactions, particularly to check that interactions are completed. 
Before interactions with the MSAL account object, check that the `InteractionStatus` property returns `InteractionStatus.None`. The `subscribe` event calls the `setLoginDisplay` method to check if the user is authenticated. 1. Add class variables. Optionally, update the *app.component.css* file with the following CSS snippet: ## Handle the app redirects -When you're using redirects with MSAL, you must add the [app-redirect](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/redirects.md) directive to *https://docsupdatetracker.net/index.html*. In the *src* folder, edit *https://docsupdatetracker.net/index.html* as shown in the following code snippet: +When you're using redirects with MSAL, you must add the [app-redirect](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/redirects.md) directive to *https://docsupdatetracker.net/index.html*. In the *src* folder, edit *https://docsupdatetracker.net/index.html* as shown in the following code snippet: ```html <!doctype html> The *home.component* file demonstrates how to check if the user is authenticated The code: -1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `msalSubject$` and `inProgress$` observable events. +1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/events.md) `msalSubject$` and `inProgress$` observable events. 1. Ensures that the `msalSubject$` event writes the authentication result to the browser console. 1. Ensures that the `inProgress$` event checks if a user is authenticated. The `getAllAccounts()` method returns one or more objects. The *profile.component* file demonstrates how to access the user's ID token clai The code: 1. Imports the required components.-1. 
Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. The event loads the account and reads the ID token claims. +1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/events.md) `inProgress$` observable event. The event loads the account and reads the ID token claims. 1. Ensures that the `checkAndSetActiveAccount` method checks and sets the active account. This action is common when the app interacts with multiple Azure AD B2C user flows or custom policies. 1. Ensures that the `getClaims` method gets the ID token claims from the active MSAL account object. The method then adds the claims to the `dataSource` array. The array is rendered to the user with the component's template binding. In the *src/app/profile* folder, update *profile.component.html* with the follow ## Call a web API -To call a [token-based authorization web API](enable-authentication-web-api.md), the app needs to have a valid access token. The [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-interceptor.md) provider automatically acquires tokens for outgoing requests that use the Angular [HttpClient](https://angular.io/api/common/http/HttpClient) class to known protected resources. +To call a [token-based authorization web API](enable-authentication-web-api.md), the app needs to have a valid access token. The [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/msal-interceptor.md) provider automatically acquires tokens for outgoing requests that use the Angular [HttpClient](https://angular.io/api/common/http/HttpClient) class to known protected resources. 
> [!IMPORTANT] > The MSAL initialization method (in the `app.module.ts` class) maps protected resources, such as web APIs, with the required app scopes by using the `protectedResourceMap` object. If your code needs to call another web API, add the web API URI and the web API HTTP method, with the corresponding scopes, to the `protectedResourceMap` object. For more information, see [Protected Resource Map](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/master/lib/msal-angular/docs/v2-docs/msal-interceptor.md#protected-resource-map). -When the [HttpClient](https://angular.io/api/common/http/HttpClient) object calls a web API, the [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-interceptor.md) provider takes the following steps: +When the [HttpClient](https://angular.io/api/common/http/HttpClient) object calls a web API, the [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/msal-interceptor.md) provider takes the following steps: 1. Acquires an access token with the required permissions (scopes) for the web API endpoint. 1. Passes the access token as a bearer token in the authorization header of the HTTP request by using this format: |
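The `protectedResourceMap` described in the note above is, conceptually, a map from a web API URI to the scopes the interceptor must acquire for requests to that URI. A minimal sketch (placeholder URI and scope, with a simplified prefix lookup standing in for what `MsalInterceptor` does internally):

```typescript
// Hedged sketch: map protected web API URIs to their required scopes.
// The URI and scope below are placeholders, not values from the article.
const protectedResources: [string, string[]][] = [
  ["https://contoso.azurewebsites.net/api/todos",
   ["https://contoso.onmicrosoft.com/api/tasks.read"]],
];
const protectedResourceMap = new Map<string, string[]>(protectedResources);

// Simplified stand-in for the interceptor's lookup on an outgoing request.
function scopesFor(url: string): string[] | undefined {
  for (const [resource, scopes] of protectedResources) {
    if (url.startsWith(resource)) return scopes;
  }
  return undefined; // unprotected resource: no bearer token attached
}
```

When a lookup succeeds, the interceptor would acquire a token for those scopes and attach it as `Authorization: Bearer <token>`; when it fails, the request goes out without a token.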
active-directory-b2c | Enable Authentication In Node Web App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-options.md | return confidentialClientApplication.getAuthCodeUrl(authCodeRequest) [!INCLUDE [active-directory-b2c-app-integration-logging](../../includes/active-directory-b2c-app-integration-logging.md)] -To configure [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/logging.md), in *index.js*, configure the following keys: +To configure [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/logging.md), in *index.js*, configure the following keys: - `logLevel` lets you specify the level of logging. Possible values: `Error`, `Warning`, `Info`, and `Verbose`. - `piiLoggingEnabled` enables the input of personal data. Possible values: `true` or `false`. |
active-directory-b2c | Enable Authentication In Node Web App With Api Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-with-api-options.md | return confidentialClientApplication.getAuthCodeUrl(authCodeRequest) [!INCLUDE [active-directory-b2c-app-integration-logging](../../includes/active-directory-b2c-app-integration-logging.md)] -To configure [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/logging.md), in *index.js*, configure the following keys: +To configure [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/logging.md), in *index.js*, configure the following keys: - `logLevel` lets you specify the level of logging. Possible values: `Error`, `Warning`, `Info`, and `Verbose`. - `piiLoggingEnabled` enables the input of personal data. Possible values: `true` or `false`. |
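For the Node rows above, a hedged sketch of how `logLevel` and `piiLoggingEnabled` nest inside an MSAL Node confidential-client configuration (written as a plain literal so it runs without `msal-node` installed; the `system.loggerOptions` nesting follows msal-node's documented shape, and all values are placeholders):

```typescript
// Hedged sketch of an msal-node-style confidential client configuration.
// logLevel 2 corresponds to Info in msal-node's LogLevel enum (assumed mapping).
const confidentialClientConfig = {
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // placeholder
    authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_signupsignin",
    clientSecret: "<secret-from-a-key-vault>", // never hard-code real secrets
  },
  system: {
    loggerOptions: {
      logLevel: 2,               // Info
      piiLoggingEnabled: false,  // keep personal data out of logs
      loggerCallback: (_level: number, message: string): void => {
        console.log(message);
      },
    },
  },
};
```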
active-directory-b2c | Enable Authentication React Spa App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app-options.md | This article describes ways you can customize and enhance the Azure Active Direc You can configure your single-page application to sign in users with MSAL.js in two ways: -- **Pop-up window**: The authentication happens in a pop-up window, and the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. There are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).+- **Pop-up window**: The authentication happens in a pop-up window, and the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. There are known issues with pop-up windows on Internet Explorer. - To sign in with pop-up windows, use the `loginPopup` method. - To sign out with pop-up windows, use the `logoutPopup` method. - **Redirect**: The user is redirected to Azure AD B2C to complete the authentication flow. Use this approach if users have browser constraints or policies where pop-up windows are disabled. |
active-directory-b2c | Implicit Flow Single Page Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md | Many modern applications have a single-page app (SPA) front end that is written The recommended way of supporting SPAs is [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md). -Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow. In these cases, Azure Active Directory B2C (Azure AD B2C) supports the OAuth 2.0 authorization implicit grant flow. The flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure AD B2C authorize endpoint, without any server-to-server exchange. All authentication logic and session handling are done entirely in the JavaScript client with either a page redirect or a pop-up box. +Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), only support the implicit grant flow. In these cases, Azure Active Directory B2C (Azure AD B2C) supports the OAuth 2.0 authorization implicit grant flow. The flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure AD B2C authorize endpoint, without any server-to-server exchange. All authentication logic and session handling are done entirely in the JavaScript client with either a page redirect or a pop-up box. Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](user-flow-overview.md). 
With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign-up, sign-in, and profile management user flows. In the example HTTP requests in this article, we use **{tenant}.onmicrosoft.com** for illustration. Replace `{tenant}` with [the name of your tenant](tenant-management-read-tenant-name.md#get-your-tenant-name) if you have one. Also, you need to have [created a user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow). GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/ ## Next steps -See the code sample: [Sign-in with Azure AD B2C in a JavaScript SPA](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-core-samples/VanillaJSTestApp/app/b2c). +See the code sample: [Sign-in with Azure AD B2C in a JavaScript SPA](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/VanillaJSTestApp2.0/app/b2c). |
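The `GET .../oauth2/v2.0/` fragment in the row above targets the B2C authorize endpoint. A hedged sketch of assembling that URL for the implicit flow, with placeholder tenant, policy, and client values, and a fixed nonce that real code must replace with a per-request random value:

```typescript
// Hedged sketch: build a B2C authorize URL for the implicit flow.
// response_type asks for both an ID token and an access token, which are
// returned directly in the redirect fragment (no server-to-server exchange).
function buildAuthorizeUrl(tenant: string, policy: string, clientId: string, redirectUri: string): string {
  const params: [string, string][] = [
    ["client_id", clientId],
    ["response_type", "id_token token"],
    ["redirect_uri", redirectUri],
    ["response_mode", "fragment"], // tokens come back in the URL fragment
    ["scope", `openid ${clientId}`],
    ["nonce", "defaultNonce"],     // use a fresh random value per request
  ];
  const query = params.map(([k, v]) => `${k}=${encodeURIComponent(v)}`).join("&");
  return `https://${tenant}.b2clogin.com/${tenant}.onmicrosoft.com/${policy}/oauth2/v2.0/authorize?${query}`;
}
```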
active-directory-b2c | Openid Connect Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect-technical-profile.md | -Azure Active Directory B2C (Azure AD B2C) provides support for the [OpenID Connect](https://openid.net/2015/04/17/openid-connect-certification-program/) protocol identity provider. OpenID Connect 1.0 defines an identity layer on top of OAuth 2.0 and represents the state of the art in modern authentication protocols. With an OpenID Connect technical profile, you can federate with an OpenID Connect based identity provider, such as Azure AD. Federating with an identity provider allows users to sign in with their existing social or enterprise identities. +Azure Active Directory B2C (Azure AD B2C) provides support for the [OpenID Connect](https://openid.net/certification/) protocol identity provider. OpenID Connect 1.0 defines an identity layer on top of OAuth 2.0 and represents the state of the art in modern authentication protocols. With an OpenID Connect technical profile, you can federate with an OpenID Connect based identity provider, such as Azure AD. Federating with an identity provider allows users to sign in with their existing social or enterprise identities. ## Protocol |
active-directory-b2c | Partner Idology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idology.md | The following architecture diagram shows the implementation. 1. IDology provides a variety of solutions, which you can find [here](https://www.idology.com/solutions/). For this sample, we use ExpectID. -2. To create an IDology account, contact [IDology](https://www.idology.com/request-a-demo/microsoft-integration-signup/). +2. To create an IDology account, contact [IDology](https://www.idology.com/talk-to-a-trust-expert/). 3. Once an account is created, you'll receive the information you need for API configuration. The following sections describe the process. |
active-directory-b2c | Tutorial Register Spa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-spa.md | To take advantage of this flow, your application can use an authentication libra ### Implicit grant flow -Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow or your applications is implemented to use implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**. +Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), only support the implicit grant flow or your application is implemented to use implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**. ![Single-page applications-implicit](./media/tutorial-single-page-app/spa-app.svg) |
active-directory | Concept Authentication Phone Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md | -Users can also verify themselves using a mobile phone or office phone as secondary form of authentication used during Azure AD Multi-Factor Authentication or self-service password reset (SSPR). +Users can also verify themselves using a mobile phone or office phone as a secondary form of authentication used during Azure AD Multi-Factor Authentication or self-service password reset (SSPR). Azure AD Multi-Factor Authentication and SSPR support phone extensions only for office phones. -> [!NOTE] -> Phone call verification is not available for Azure AD tenants with trial subscriptions. For example, signing up for a trial EMS licenses, will not provide the capability for phone call verification. --To work properly, phone numbers must be in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*. --> [!NOTE] -> There needs to be a space between the country/region code and the phone number. -> -> Password reset and Azure AD Multi-Factor Authentication support phone extensions only in office phone. +>[!NOTE] +>Phone call verification isn't available for Azure AD tenants with trial subscriptions. For example, if you sign up for a trial license of Microsoft Enterprise Mobility and Security (EMS), phone call verification isn't available. Phone numbers must be provided in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*. There must be a space between the country/region code and the phone number. ## Mobile phone verification If users don't want their mobile phone number to be visible in the directory but Microsoft doesn't guarantee consistent SMS or voice-based Azure AD Multi-Factor Authentication prompt delivery by the same number. 
In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve SMS deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada. > [!NOTE]-> Starting July 2023, we apply delivery method optimization such that tenants with a free or trial subscription may receive an SMS message or voice call. +> Starting July 2023, we will apply delivery method optimizations such that tenants with a free or trial subscription may receive an SMS message or voice call. ### SMS message verification Android users can enable Rich Communication Services (RCS) on their devices. RCS With phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad. +The number from which a user receives the voice call differs for each country. See [phone call settings](howto-mfa-mfasettings.md#phone-call-settings) to view all possible voice call numbers. + ## Office phone verification With office phone call verification during SSPR or Azure AD Multi-Factor Authentication, an automated voice call is made to the phone number registered by the user. To complete the sign-in process, the user is prompted to press # on their keypad. |
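The *+CountryCode PhoneNumber* format described in the rows above (a leading `+`, the country code, one space, then the number, e.g. *+1 4251234567*) can be checked with a simple pattern. The 1–3 digit country-code length and 4–14 digit subscriber length here are assumptions for illustration, not limits stated in the article:

```typescript
// Hedged sketch: validate the "+CountryCode PhoneNumber" shape the docs
// require, including the mandatory single space after the country code.
const phonePattern = /^\+\d{1,3} \d{4,14}$/;

function isValidMfaPhone(phone: string): boolean {
  return phonePattern.test(phone);
}
```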
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | Users can have a combination of up to five OATH hardware tokens or authenticator If users receive phone calls for MFA prompts, you can configure their experience, such as caller ID or the voice greeting they hear. -In the United States, if you haven't configured MFA caller ID, voice calls from Microsoft come from the following number. Users with spam filters should exclude this number. +In the United States, if you haven't configured MFA caller ID, voice calls from Microsoft come from the following numbers. Users with spam filters should exclude these numbers. ++Default number: *+1 (855) 330-8653* ++The following table lists more numbers for different countries. ++| Country | Number | +|:--|:-| +| Croatia | +385 15507766 | +| Ghana | +233 308250245 | +| Sri Lanka | +94 117750440 | +| Ukraine | +380 443332393 | + -* *+1 (855) 330-8653* > [!NOTE] > When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-). To configure your own caller ID number, complete the following steps: 1. Set the **MFA caller ID number** to the number you want users to see on their phones. Only US-based numbers are allowed. 1. Select **Save**. 
+> [!NOTE] +> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-). + ### Custom voice messages You can use your own recordings or greetings for Azure AD Multi-Factor Authentication. These messages can be used in addition to the default Microsoft recordings or to replace them. |
active-directory | Active Directory Authentication Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md | The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio > [!WARNING]-> Support for Active Directory Authentication Library (ADAL) [will end](https://aka.ms/adal-eos) in June 2023. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](..\develop\msal-migration.md). +> Azure Active Directory Authentication Library (ADAL) has been deprecated. Please use the [Microsoft Authentication Library (MSAL)](/entr). ## Microsoft-supported Client Libraries |
active-directory | Product Rule Based Anomalies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md | Rule-based anomalies identify recent activity in Permissions Management that is 1. Select one of the following conditions: - **Any Resource Accessed for the First Time**: The identity accesses a resource for the first time during the specified time interval. - **Identity Performs a Particular Task for the First Time**: The identity does a specific task for the first time during the specified time interval.- - **Identity Performs a Task for the First Time**: The identity performs any task for the first time during the specified time interval + - **Identity Performs a Task for the First Time**: The identity performs any task for the first time during the specified time interval. 1. Select **Next**. 1. On the **Authorization Systems** tab, select the available authorization systems and folders, or select **All**. |
active-directory | Authentication Flows App Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md | Though we don't recommend that you use it, the [username/password flow](scenario Using the username/password flow constrains your applications. For instance, applications can't sign in a user who needs to use multifactor authentication or the Conditional Access tool in Azure AD. Your applications also don't benefit from single sign-on. Authentication with the username/password flow goes against the principles of modern authentication and is provided only for legacy reasons. -In desktop apps, if you want the token cache to persist, you can customize the [token cache serialization](msal-net-token-cache-serialization.md). By implementing dual token cache serialization, you can use backward-compatible and forward-compatible token caches. These tokens support previous generations of authentication libraries. Specific libraries include Azure AD Authentication Library for .NET (ADAL.NET) version 3 and version 4. +In desktop apps, if you want the token cache to persist, you can customize the [token cache serialization](msal-net-token-cache-serialization.md). By implementing dual token cache serialization, you can use backward-compatible and forward-compatible token caches. For more information, see [Desktop app that calls web APIs](scenario-desktop-overview.md). |
active-directory | Howto Get List Of All Auth Library Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-get-list-of-all-auth-library-apps.md | -Support for Active Directory Authentication Library (ADAL) will end in December, 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](msal-migration.md). This article provides guidance on how to use Azure Monitor workbooks to obtain a list of all apps that use ADAL in your tenant. +Azure Active Directory Authentication Library (ADAL) has been deprecated. While existing apps that use ADAL continue to work, Microsoft will no longer release security fixes on ADAL. Use the [Microsoft Authentication Library (MSAL)](/entr). This article provides guidance on how to use Azure Monitor workbooks to obtain a list of all apps that use ADAL in your tenant. ## Sign-ins workbook |
active-directory | Identity Platform Integration Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-platform-integration-checklist.md | Use the following checklist to ensure that your application is effectively integ ![checkbox](./medi). If you must hand-code for the authentication protocols, you should follow the [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx) or similar development methodology. Pay close attention to the security considerations in the standards specifications for each protocol. -![checkbox](./medi) apps. +![checkbox](./medi) apps. ![checkbox](./media/integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a "broker redirect URI" configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible. |
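As a sketch of the broker redirect URI mentioned above: on Android, MSAL's broker redirect URI typically takes the form `msauth://<package name>/<url-encoded signature hash>`. The helper below only illustrates that shape — the package name and signature hash are hypothetical placeholder values, not real app credentials:

```python
from urllib.parse import quote

def broker_redirect_uri(package_name: str, signature_hash: str) -> str:
    """Build an Android-style broker redirect URI:
    msauth://<package>/<url-encoded base64 signature hash>."""
    return f"msauth://{package_name}/{quote(signature_hash, safe='')}"

# Hypothetical package name and signature hash, for illustration only.
uri = broker_redirect_uri("com.contoso.app", "1wIqXSqBj7w+h11ZifsnqwgyKrY=")
print(uri)  # msauth://com.contoso.app/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D
```

In practice the app registration experience generates this value for you; the point is that the signature hash must be URL-encoded when it becomes part of the URI.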
active-directory | Mobile Sso Support Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-sso-support-overview.md | The best choice for implementing single sign-on in your application is to use [t > [!NOTE] > It is possible to configure MSAL to use an embedded web view. This will prevent single sign-on. Use the default behavior (that is, the system web browser) to ensure that SSO will work. -If you're currently using the ADAL library in your application, you need to [migrate it to MSAL](msal-migration.md), as [ADAL is being deprecated](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/update-your-applications-to-use-microsoft-authentication-library/ba-p/1257363). +Azure Active Directory Authentication Library (ADAL) has been deprecated. Please use the [Microsoft Authentication Library (MSAL)](/entr). For iOS applications, we have a [quickstart](quickstart-v2-ios.md) that shows you how to set up sign-ins using MSAL, as well as [guidance for configuring MSAL for various SSO scenarios](single-sign-on-macos-ios.md). |
active-directory | Msal Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md | -If any of your applications use the Azure Active Directory Authentication Library (ADAL) for authentication and authorization functionality, it's time to migrate them to the [Microsoft Authentication Library (MSAL)](msal-overview.md#languages-and-frameworks). +If any of your applications use the Azure Active Directory Authentication Library (ADAL) for authentication and authorization capabilities, it's time to migrate them to the [Microsoft Authentication Library (MSAL)](/entra/msal). -- All Microsoft support and development for ADAL, including security fixes, ends in June 2023.-- There are no ADAL feature releases or new platform version releases planned prior to June 2023.+- All Microsoft support and development for ADAL, including security fixes, ended on June 30, 2023. +- There were no ADAL feature releases or new platform version releases planned prior to the deprecation date. - No new features have been added to ADAL since June 30, 2020. > [!WARNING]-> If you choose not to migrate to MSAL before ADAL support ends in June 2023, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date but Microsoft will no longer release security fixes on ADAL. Learn more in [the official announcement](https://aka.ms/adal-eos). +> Azure Active Directory Authentication Library (ADAL) has been deprecated. While existing apps that use ADAL will continue to work, Microsoft will no longer release security fixes on ADAL. Use the [Microsoft Authentication Library (MSAL)](/entra/msal/) to avoid putting your app's security at risk. ## Why switch to MSAL? -If you've developed apps against Azure Active Directory (v1.0) endpoint in the past, you're likely using ADAL. 
Since Microsoft identity platform (v2.0) endpoint has changed significantly enough, the new library (MSAL) was built for the new endpoint entirely. +If you've developed apps against Azure Active Directory (v1.0) endpoint in the past, you're likely using ADAL. Since Microsoft identity platform (v2.0) endpoint has changed significantly, the new library (MSAL) was entirely built for the new endpoint. The following diagram shows the v2.0 vs v1.0 endpoint experience at a high level, including the app registration experience, SDKs, endpoints, and supported identities. MSAL provides multiple benefits over ADAL, including the following features: | Microsoft account (MSA) |![Microsoft account (MSA) - MSAL provides the feature][y]|![Microsoft account (MSA) - ADAL doesn't provide the feature][n]| | Azure AD B2C accounts |![Azure AD B2C accounts - MSAL provides the feature][y]|![Azure AD B2C accounts - ADAL doesn't provide the feature][n]| | Best single sign-on experience |![Best single sign-on experience - MSAL provides the feature][y]|![Best single sign-on experience - ADAL doesn't provide the feature][n]|-|**Resilience**||| -| Proactive token renewal |![Proactive token renewal - MSAL provides the feature][y]|![Proactive token renewal - ADAL doesn't provide the feature][n]| +|**Authentication experiences**||| +| Continuous access evaluation through proactive token refresh |![Proactive token renewal - MSAL provides the feature][y]|![Proactive token renewal - ADAL doesn't provide the feature][n]| | Throttling |![Throttling - MSAL provides the feature][y]|![Throttling - ADAL doesn't provide the feature][n]|+|Auth broker support |![Device-based conditional access policy - MSAL has the feature built-in][y]|![Device-based conditional access policy - ADAL doesn't provide the feature][n]| +| Token protection|![Token protection - MSAL provides the feature][y]|![Token protection - ADAL doesn't provide the feature][n]| +++## Additional capabilities of MSAL over ADAL -## Additional 
Capabilities of MSAL over ADAL -- Auth broker support – Device-based Conditional Access policy - Proof of possession tokens - Azure AD certificate-based authentication (CBA) on mobile - System browsers on mobile devices If you need to continue using AD FS, you should upgrade to AD FS 2019 or later b Before you start the migration, you need to identify which of your apps are using ADAL for authentication. Follow the steps in this article to get a list by using the Azure portal: - [How to: Get a complete list of apps using ADAL in your tenant](howto-get-list-of-all-active-directory-auth-library-apps.md) -After identifying your apps that use ADAL, migrate them to MSAL depending on your application type as illustrated below. +After identifying applications that use ADAL, migrate them to MSAL depending on your app type: [!INCLUDE [application type](includes/adal-msal-migration.md)] -MSAL supports a wide range of application types and scenarios. Please refer to [Microsoft Authentication Library support for several application types](reference-v2-libraries.md#single-page-application-spa). +MSAL supports a wide range of application types and scenarios. Refer to [Microsoft Authentication Library support for several application types](reference-v2-libraries.md#single-page-application-spa). ++ADAL to MSAL migration guides for different platforms are available at the following links: -ADAL to MSAL Migration Guide for different platforms are available in the following link. -- [Migrate to MSAL iOS and MacOS](migrate-objc-adal-msal.md)+- [Migrate to MSAL iOS and macOS](migrate-objc-adal-msal.md) - [Migrate to MSAL Java](migrate-adal-msal-java.md) - [Migrate to MSAL.js](msal-compare-msal-js-and-adal-js.md) - [Migrate to MSAL .NET](msal-net-migration.md) |
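One of the MSAL capabilities in the comparison above, proactive token renewal, can be illustrated with a small sketch. This is not MSAL's actual implementation — only the general idea of refreshing a cached token once half of its lifetime has elapsed, instead of waiting for it to expire:

```python
import time

def should_refresh(issued_at, expires_on, now=None):
    """Return True once half of the token's lifetime has elapsed,
    so a fresh token can be fetched before the old one expires."""
    now = time.time() if now is None else now
    lifetime = expires_on - issued_at
    return (now - issued_at) >= lifetime / 2

# A token issued at t=0 with a one-hour lifetime is refreshed from t=1800 onward.
print(should_refresh(0, 3600, now=1700))  # False
print(should_refresh(0, 3600, now=1800))  # True
```

Refreshing early like this keeps sign-in responsive and tolerates transient token-service outages, which is why it is listed as a resilience feature.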
active-directory | Msal Net Migration Ios Broker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-ios-broker.md | -You've been using the Azure Active Directory Authentication Library for .NET (ADAL.NET) and the iOS broker. Now it's time to migrate to the [Microsoft Authentication Library](msal-overview.md) for .NET (MSAL.NET), which supports the broker on iOS from release 4.3 onward. +You've been using the Azure Active Directory Authentication Library for .NET (ADAL.NET) and the iOS broker. Now it's time to migrate to the [Microsoft Authentication Library](/entra/msal) for .NET (MSAL.NET), which supports the broker on iOS from release 4.3 onward. Where should you start? This article helps you migrate your Xamarin iOS app from ADAL to MSAL. ## Prerequisites+ This article assumes that you already have a Xamarin iOS app that's integrated with the iOS broker. If you don't, move directly to MSAL.NET and begin the broker implementation there. For information on how to invoke the iOS broker in MSAL.NET with a new application, see [this documentation](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Leveraging-the-broker-on-iOS#why-use-brokers-on-xamarinios-and-xamarinandroid-applications). ## Background |
active-directory | Msal Net Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration.md | For details about the decision tree below, read [MSAL.NET or Microsoft.Identity. [See examples](https://identitydivision.visualstudio.com/DevEx/_wiki/wikis/DevEx.wiki/20413/1P-ADAL.NET-to-MSAL.NET-migration-examples) of other 1P teams who have already, or are currently, migrating from ADAL to one of the MSAL+ solutions above. See their code, and in some cases read about their migration story. -->-### Deprecated ADAL.Net Nuget packages and their MSAL.Net equivalents +### Deprecated ADAL.Net NuGet packages and their MSAL.Net equivalents + You might unknowingly consume ADAL dependencies from other Azure SDKs. Below are a few of the deprecated packages and their MSAL alternatives. | ADAL.NET Package (Deprecated) | MSAL.NET Package (Current) | |
active-directory | Msal Net Token Cache Serialization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md | Examples of token cache serializers are provided in [Microsoft.Identity.Web/Toke ### Custom token cache for a desktop or mobile app (public client application) -MSAL.NET v2.x and later versions provide several options for serializing the token cache of a public client. You can serialize the cache only to the MSAL.NET format. (The unified format cache is common across MSAL and the platforms.) You can also support the [legacy](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Token-cache-serialization) token cache serialization of ADAL v3. -+MSAL.NET v2.x and later versions provide several options for serializing the token cache of a public client. You can serialize the cache only to the MSAL.NET format (the unified format cache is common across MSAL and the platforms). Customizing the token cache serialization to share the single sign-on state between ADAL.NET 3.x, ADAL.NET 5.x, and MSAL.NET is explained in part of the following sample: [active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2). > [!Note] |
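The token cache serialization described above boils down to persisting the cache contents between runs and loading them back on startup. The following is a generic sketch of that serialize/deserialize idea — it is not MSAL's cache API, and the stored token data is a placeholder:

```python
import json
import pathlib
import tempfile

class FileTokenCache:
    """Generic sketch: persist a token cache to disk and reload it.
    Not MSAL's actual cache API; illustrates the serialization callbacks idea."""

    def __init__(self, path):
        self.path = pathlib.Path(path)
        self.tokens = {}

    def load(self):
        # Deserialize on startup, if a previous run left a cache behind.
        if self.path.exists():
            self.tokens = json.loads(self.path.read_text())

    def save(self):
        # Serialize after the cache changes (e.g. after token acquisition).
        self.path.write_text(json.dumps(self.tokens))

cache_file = pathlib.Path(tempfile.mkdtemp()) / "cache.json"
cache = FileTokenCache(cache_file)
cache.tokens["account1"] = {"access_token": "placeholder"}  # illustrative data only
cache.save()

restored = FileTokenCache(cache_file)
restored.load()
print(restored.tokens == cache.tokens)  # True
```

A real implementation would also encrypt the serialized bytes at rest; plain JSON on disk is shown here only to keep the sketch short.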
active-directory | Msal Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-overview.md | |
active-directory | Msal Python Adfs Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-python-adfs-support.md | When you acquire a token using `acquire_token_by_username_password`, MSAL Python When you connect directly to AD FS, the authority you'll want to use to build your application will be something like `https://somesite.contoso.com/adfs/` -MSAL Python supports ADFS 2019. --It does not support a direct connection to ADFS 2016 or ADFS v2. To support scenarios requiring a direct connection to ADFS 2016, use the latest version of ADAL Python. Once you have upgraded your on-premises system to ADFS 2019, you can use MSAL Python. +MSAL Python supports ADFS 2019, but does not support a direct connection to ADFS 2016 or ADFS v2. Once you have upgraded your on-premises system to ADFS 2019, you can use MSAL Python. ## Next steps |
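The AD FS authority format mentioned above can be built and sanity-checked as below. The host name is a placeholder; with the `msal` package installed, the result would typically be passed as the `authority` argument to `msal.PublicClientApplication`:

```python
from urllib.parse import urlparse

def adfs_authority(host: str) -> str:
    """Build an AD FS authority URL, e.g. https://somesite.contoso.com/adfs."""
    authority = f"https://{host}/adfs"
    parsed = urlparse(authority)
    if parsed.scheme != "https" or parsed.path != "/adfs":
        raise ValueError(f"unexpected authority: {authority}")
    return authority

# Placeholder AD FS host, for illustration only.
print(adfs_authority("somesite.contoso.com"))  # https://somesite.contoso.com/adfs
```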
active-directory | Scenario Token Exchange Saml Oauth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-token-exchange-saml-oauth.md | -SAML and OpenID Connect (OIDC) / OAuth are popular protocols used to implement Single Sign-On (SSO). Some apps might only implement SAML and others might only implement OIDC/OAuth. Both protocols use tokens to communicate secrets. To learn more about SAML, see [Single Sign-On SAML protocol](single-sign-on-saml-protocol.md). To learn more about OIDC/OAuth, see [OAuth 2.0 and OpenID Connect protocols on Microsoft identity platform](active-directory-v2-protocols.md). +SAML and OpenID Connect (OIDC) / OAuth are popular protocols used to implement single sign-on (SSO). Some apps might only implement SAML and others might only implement OIDC/OAuth. Both protocols use tokens to communicate secrets. To learn more about SAML, see [single sign-on SAML protocol](single-sign-on-saml-protocol.md). To learn more about OIDC/OAuth, see [OAuth 2.0 and OpenID Connect protocols on Microsoft identity platform](active-directory-v2-protocols.md). This article outlines a common scenario where an app implements SAML but calls the Graph API, which uses OIDC/OAuth. Basic guidance is provided for people working with this scenario. Many apps are implemented with SAML. However, the Graph API uses the OIDC/OAuth The general strategy is to add the OIDC/OAuth stack to your app. With your app that implements both standards you can use a session cookie. You aren't exchanging a token explicitly. You're logging a user in with SAML, which generates a session cookie. When the Graph API invokes an OAuth flow, you use the session cookie to authenticate. This strategy assumes the Conditional Access checks pass and the user is authorized. > [!NOTE]-> The recommended library for adding OIDC/OAuth behavior is the Microsoft Authentication Library (MSAL). 
To learn more about MSAL, see [Overview of the Microsoft Authentication Library (MSAL)](msal-overview.md). The previous library was called Active Directory Authentication Library (ADAL), however it is not recommended as MSAL is replacing it. +> The recommended library for adding OIDC/OAuth behavior to your applications is the [Microsoft Authentication Library (MSAL)](/entra/msal). ## Next steps - [Authentication flows and application scenarios](authentication-flows-app-scenarios.md) |
active-directory | Scenario Web Api Call Api Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-call-api.md | In this scenario, you've added the **Microsoft.Identity.Web.GraphServiceClient** #### Option 2: Call a downstream web API with the helper class -In this scenario, you've added `.AddDownstreamWebApi()` in *Startup.cs* as specified in [Code configuration](scenario-web-api-call-api-app-configuration.md#option-2-call-a-downstream-web-api-other-than-microsoft-graph), and you can directly inject an `IDownstreamWebApi` service in your controller or page constructor and use it in the actions: +In this scenario, you've added `.AddDownstreamApi()` in *Startup.cs* as specified in [Code configuration](scenario-web-api-call-api-app-configuration.md#option-2-call-a-downstream-web-api-other-than-microsoft-graph), and you can directly inject an `IDownstreamWebApi` service in your controller or page constructor and use it in the actions: ```csharp [Authorize] |
active-directory | Scenario Web App Sign User Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md | In Java, sign-out is handled by calling the Microsoft identity platform `logout` # [Node.js](#tab/nodejs) -When the user selects the **Sign out** button, the app triggers the `/signout` route, which destroys the session and redirects the browser to Microsoft identity platform sign-out endpoint. +When the user selects the **Sign out** button, the app triggers the `/auth/signout` route, which destroys the session and redirects the browser to Microsoft identity platform sign-out endpoint. :::code language="js" source="~/ms-identity-node/App/auth/AuthProvider.js" range="157-175"::: |
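The core of such a sign-out handler — clearing the session, then redirecting the browser to the Microsoft identity platform sign-out endpoint — can be sketched outside any particular web framework. The tenant and redirect target below are placeholder values:

```python
from urllib.parse import urlencode

def build_signout_url(tenant: str, post_logout_redirect_uri: str) -> str:
    """Build the Microsoft identity platform sign-out URL; after sign-out,
    the browser is sent back to post_logout_redirect_uri."""
    base = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/logout"
    return f"{base}?{urlencode({'post_logout_redirect_uri': post_logout_redirect_uri})}"

# Placeholder tenant and redirect target, for illustration only.
url = build_signout_url("common", "http://localhost:3000")
print(url)
```

In a real route handler, the app would first destroy its own session (e.g. the session cookie) and then issue an HTTP redirect to this URL.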
active-directory | Single Multi Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-multi-account.md | -The Azure Active Directory Authentication Library (ADAL) models the server. The Microsoft Authentication Library (MSAL) instead models your client application. The majority of Android apps are considered public clients. A public client is an app that can't securely keep a secret. +The Microsoft Authentication Library (MSAL) models your client application. The majority of Android apps are considered public clients. A public client is an app that can't securely keep a secret. MSAL specializes the API surface of `PublicClientApplication` to simplify and clarify the development experience for apps that allow only one account to be used at a time. `PublicClientApplication` is subclassed by `SingleAccountPublicClientApplication` and `MultipleAccountPublicClientApplication`. The following diagram shows the relationship between these classes. |
active-directory | Licensing Admin Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-admin-center.md | + + Title: Assign licenses to a group using the Microsoft 365 admin center +description: How to assign licenses to groups using the Microsoft 365 admin center ++keywords: Azure AD licensing +documentationcenter: '' +++++++ Last updated : 07/17/2023++++# Assign licenses to users by group membership using the Microsoft 365 admin center ++This article shows you how to use the Microsoft 365 admin center to assign licenses to a group. ++> [!NOTE] +> Some Microsoft services are not available in all locations. Before a license can be assigned to a user, the administrator has to specify the Usage location property on the user. +> +> For group license assignment, any users without a usage location specified inherit the location of the directory. If you have users in multiple locations, we recommend that you always set usage location as part of your user creation flow in Azure AD. For example, configure Azure AD Connect to set usage location. This recommendation makes sure the result of license assignment is always correct and users do not receive services in locations that are not allowed. ++## Assign a license ++1. Sign in to the [Microsoft 365 admin center](https://admin.microsoft.com/) with a license administrator account. To manage licenses, the account must be a License Administrator, User Administrator, or Global Administrator. + + ![Screenshot of the Microsoft admin Center landing page](./media/licensing-admin-center/admin-center.png) ++1. Browse to **Billing** > **Licenses** to open a page where you can see all licenses available in your organization. ++ ![screenshot of portal section allowing user to select products to assign licenses](./media/licensing-admin-center/choose-licenses.png) ++1. Under **Licenses**, select the license that you would like to assign. +1. 
In the License details section, choose **Groups** at the top of the page. +1. Choose **+ Assign licenses** +1. From the **+ Assign licenses** page, search for the group that you would like to use for license assignment. ++ ![Screenshot of portal allowing users to choose the group to use for license assignment](./media/licensing-admin-center/assign-license-group.png) + + >[!NOTE] + >When assigning licenses to a group with service plans that have dependencies on other service plans, they must both be assigned together in the same group, otherwise the service plan with the dependency will be disabled. + +1. To complete the assignment, on the **Assign license** page, click **Assign** at the bottom of the page. ++ ![Screenshot of the portal section that allows you to choose assign after selecting the group](./media/licensing-admin-center/choose-assign.png) ++When assigning licenses to a group, Azure AD processes all existing members of that group. This process might take some time depending on the size of the group. ++ ![Screenshot of message telling the administrator that they have assigned a license to a group](./media/licensing-admin-center/licenses-assignment-message.png) ++## Verify that the initial assignment has finished ++1. From the Admin Center, go to **Billing** > **Licenses**. Select the license that you assigned. ++1. On the **License details** page, you can view the status of the license assignment operation. For example, in the image shown below, you can see that **Contoso marketing** shows a status of **All licenses assigned** while **Contoso human resources** shows a status of **In progress**. ++ ![Screenshot showing you the license assignment progress](./media/licensing-admin-center/progress.png) ++ [Read this section](licensing-group-advanced.md#use-audit-logs-to-monitor-group-based-licensing-activity) to learn more about how audit logs can be used to analyze changes made by group-based licensing. 
+++## Next steps ++To learn more about the feature set for license assignment using groups, see the following articles: ++- [What is group-based licensing in Azure Active Directory?](../fundamentals/active-directory-licensing-whatis-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) +- [Identifying and resolving license problems for a group in Azure Active Directory](licensing-groups-resolve-problems.md) |
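Group license assignment can also be automated: Microsoft Graph exposes an `assignLicense` action on groups (`POST /groups/{id}/assignLicense`). The sketch below only builds the request body — the SKU ID is a placeholder, and actually sending the request would additionally require an access token and group ID:

```python
import json

def assign_license_payload(sku_id, disabled_plans=()):
    """Request body for Microsoft Graph's group assignLicense action."""
    return {
        "addLicenses": [{"skuId": sku_id, "disabledPlans": list(disabled_plans)}],
        "removeLicenses": [],
    }

# Placeholder SKU ID, for illustration only.
payload = assign_license_payload("184efa21-98c3-4e5b-95ab-d07053a96e67")
print(json.dumps(payload, indent=2))
```

As with the portal flow above, service plans with dependencies would go in the same `addLicenses` entry rather than being split across requests.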
active-directory | Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/customize-branding.md | - Title: Add branding to your organization's sign-in page -description: Instructions about how to add your organization's branding to the Azure Active Directory sign-in page. -------- Previously updated : 03/01/2023-------# Configure your company branding --When users authenticate into your corporate intranet or web-based applications, Azure Active Directory (Azure AD) provides the identity and access management (IAM) service. You can add company branding that applies to all these sign-in experiences to create a consistent experience for your users. --This article covers how to customize the company branding for sign-in experiences for your users. --An updated experience for adding company branding is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. Check out the updated documentation on [how to customize branding](how-to-customize-branding.md). --## Role and license requirements --Adding custom branding requires one of the following licenses: --- Azure AD Premium 1-- Azure AD Premium 2-- Office 365 (for Office apps)--At least one of the previously listed licenses is sufficient to add and manage the company branding in your tenant. --Azure AD Premium editions are available for customers in China using the worldwide instance of Azure AD. Azure AD Premium editions aren't currently supported in the Azure service operated by 21Vianet in China. For more information about licensing and editions, see [Sign up for Azure AD Premium](active-directory-get-started-premium.md). --The **Global Administrator** role is required to customize company branding. 
--## Before you begin --You can customize the sign-in experience when users sign in to your organization by passing a domain variable: -Microsoft 365 Portal: `https://login.microsoftonline.com/?whr=contoso.com` -Outlook: `https://outlook.com/contoso.com` -Teams: `https://teams.microsoft.com/?tenantId=contoso.com` -MyApps: `http://myapps.microsoft.com/?whr=contoso.com` -Azure AD Self-service Password Reset: `https://passwordreset.microsoftonline.com/?whr=contoso.com` --Custom branding appears after users sign in. Users that start the sign-in process at a site like www\.office.com won't see the branding. After users sign in, the branding may take at least 15 minutes to appear. --**All branding elements are optional. Default settings will remain, if left unchanged.** For example, if you specify a banner logo but no background image, the sign-in page shows your logo with a default background image from the destination site such as Microsoft 365. Additionally, sign-in page branding doesn't carry over to personal Microsoft accounts. If your users or guests sign in using a personal Microsoft account, the sign-in page won't reflect the branding of your organization. --**Images have different image and file size requirements.** Take note of the requirements for each option. You may need to use a photo editor to create the right-sized images. The preferred image type for all images is PNG, but JPG is accepted. --**Use Microsoft Graph with Azure AD company branding.** Company branding can be viewed and managed using Microsoft Graph on the `/beta` endpoint and the `organizationalBranding` resource type. For more information, see the [organizational branding API documentation](/graph/api/resources/organizationalbranding?view=graph-rest-beta&preserve-view=true). --## How to configure company branding --1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. --2. 
Go to **Azure Active Directory** > **Company branding** > **Configure**. --3. On the **Configure company branding** page, provide any or all of the following information. -- ![Configure company branding page, with general settings completed](media/customize-branding/legacy-customize-branding-configure-basics.png) -- - **Language** The language for your first customized branding configuration is based on your default locale can't be changed. Once a default sign-in experience is created, you can add language-specific customized branding. - - - **Sign-in page background image** Select a PNG or JPG image file to appear as the background for your sign-in pages. The image is anchored to the center of the browser, and scales to the size of the viewable space. - - We recommended using images without a strong subject focus. An opaque white box appears in the center of the screen, which could cover any part of the image depending on the dimensions of the viewable space. -- - **Banner logo** Select a PNG or JPG version of your logo to appear on the sign-in page after the user enters a username and on the **My Apps** portal page. - - We recommend using a transparent image because the background might not match your logo background. We also recommend not adding padding around the image or it might make your logo look small. -- - **Username hint** Type the hint text that appears to users if they forget their username. This text must be Unicode, without links or code, and can't exceed 64 characters. If guests sign in to your app, we suggest not adding this hint. -- - **Sign-in page text** Type the text that appears on the bottom of the sign-in page. You can use this text to communicate additional information, such as the phone number to your help desk or a legal statement. This text must be Unicode and can't exceed 1024 characters. -- To begin a new paragraph, use the enter key twice. You can also change text formatting to include bold, italics, an underline or clickable link. 
Use the following syntax to add formatting to text: -- > Hyperlink: `[text](link)` - - > Bold: `**text**` or `__text__` - - > Italics: `*text*` or `_text_` - - > Underline: `++text++` - - > [!IMPORTANT] - > Hyperlinks that are added to the sign-in page text render as text in native environments, such as desktop and mobile applications. -- - **Advanced settings** - - ![Configure company branding page, with advanced settings completed](media/customize-branding/legacy-customize-branding-configure-advanced.png) -- - **Sign-in page background color** Specify the hexadecimal color (#FFFFFF) that appears in place of your background image in low-bandwidth connection situations. We recommend using the primary color of your banner logo or your organization color. -- - **Square logo image** Select a PNG or JPG image of your organization's logo to appear during the setup process for new Windows 10 Enterprise devices. This image is only used for Windows authentication and only appears on tenants that are using [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) for deployment or password entry pages in other Windows 10 experiences. In some cases, it may also appear in the consent dialog. - - We recommend using a transparent image since the background might not match your logo background. We also recommend not adding padding around the image or it might make your logo look small. - - - **Square logo image, dark theme** Same as the square logo image. This logo image takes the place of the square logo image when used with a dark background, such as with Windows 10 Azure AD joined screens during the out-of-box experience (OOBE). If your logo looks good on white, dark blue, and black backgrounds, you don't need to add this image. - - >[!IMPORTANT] - > Transparent logos are supported with the square logo image. 
The color palette used in the transparent logo could conflict with backgrounds (such as, white, light grey, dark grey, and black backgrounds) used within Microsoft 365 apps and services that consume the square logo image. Solid color backgrounds may need to be used to ensure the square image logo is rendered correctly in all situations. - - - **Show option to remain signed in** You can choose to let your users remain signed in to Azure AD until explicitly signing out. If you uncheck this option, users must sign in each time the browser is closed and reopened. For more information, see the [Add or update a user's profile](active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt) article. --3. After you've finished adding your branding, select **Save** in the upper-left corner of the configuration panel. -- This process creates your first custom branding configuration, and it becomes the default for your tenant. The default custom branding configuration serves as a fallback option for all language-specific branding configurations. The configuration can't be removed after you create it. - - >[!IMPORTANT] - >To add more corporate branding configurations to your tenant, you must choose **New language** on the **Contoso - Company branding** page. This opens the **Configure company branding** page, where you can follow the previous steps. --## Customize the sign-in experience by browser language --To create an inclusive experience for all of your users, you can customize the sign-in experience based on browser language. You must create a default custom branding experience before you can add a new language. --1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. --2. Select **Azure Active Directory** > **Company branding** > **+ New language**. 
--The process for customizing the experience is the same as the main [configure company branding](#configure-your-company-branding) process, except you select a **Language** from the dropdown list. --We recommend adding **Sign-in page text** in the selected language. --## Edit custom branding --If custom branding has been added to your tenant, you can edit the details already provided. Refer to the details and descriptions of each setting in the [configure your company branding](#configure-your-company-branding) section of this article. --1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory. --1. Go to **Azure Active Directory** > **Company branding**. --1. Select a custom branding item from the list. --1. On the **Edit company branding** page, edit any necessary details. --1. Select **Save**. -- It can take up to an hour for any changes you made to the sign-in page branding to appear. --## Next steps --- [Add your organization's privacy info on Azure AD](./active-directory-properties-area.md)-- [Learn more about Conditional Access](../conditional-access/overview.md) |
active-directory | How To Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md | Title: Add company branding to your organization's sign-in page (preview) + Title: Add company branding to your organization's sign-in page description: Instructions about how to add your organization's branding to the sign-in experience. -# Configure your company branding (preview) +# Configure your company branding When users authenticate into your corporate intranet or web-based applications, Azure Active Directory (Azure AD) provides the identity and access management (IAM) service. You can add company branding that applies to all these sign-in experiences to create a consistent experience for your users. The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding appears in your sign-in pages. You can customize this default experience with a custom background image and/or color, favicon, layout, header, and footer. You can also upload a custom CSS. -The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - > [!NOTE]-> Instructions for the legacy company branding customization process can be found in the **[Customize branding](customize-branding.md)** article. Instructions for how to manage the **'Stay signed in prompt?'** can be found in the **[Manage the 'Stay signed in?' prompt](how-to-manage-stay-signed-in-prompt.md)** article. 
+> Instructions for how to manage the **'Stay signed in prompt?'** can be found in the **[Manage the 'Stay signed in?' prompt](how-to-manage-stay-signed-in-prompt.md)** article. + ## License requirements |
active-directory | How To Manage Stay Signed In Prompt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-stay-signed-in-prompt.md | |
active-directory | Licensing Whatis Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/licensing-whatis-azure-portal.md | Microsoft paid cloud services, such as Microsoft 365, Enterprise Mobility + Secu Azure AD includes group-based licensing, which allows you to assign one or more product licenses to a group. Azure AD ensures that the licenses are assigned to all members of the group. Any new members who join the group are assigned the appropriate licenses. When they leave the group, those licenses are removed. This licensing management eliminates the need for automating license management via PowerShell to reflect changes in the organization and departmental structure on a per-user basis. ## Licensing requirements+ You must have one of the following licenses **for every user who benefits from** group-based licensing: - Paid or trial subscription for Azure AD Premium P1 and above You must have one of the following licenses **for every user who benefits from** - Paid or trial edition of Microsoft 365 Business Premium or Office 365 Enterprise E3 or Office 365 A3 or Office 365 GCC G3 or Office 365 E3 for GCCH or Office 365 E3 for DOD and above ### Required number of licenses+ For any groups assigned a license, you must also have a license for each unique member. While you don't have to assign each member of the group a license, you must have at least enough licenses to include all of the members. For example, if you have 1,000 unique members who are part of licensed groups in your tenant, you must have at least 1,000 licenses to meet the licensing agreement. ## Features Here are the main features of group-based licensing: - All Microsoft cloud services that require user-level licensing are supported. This support includes all Microsoft 365 products, Enterprise Mobility + Security, and Dynamics 365. -- Group-based licensing is currently available only through the [Azure portal](https://portal.azure.com). 
If you primarily use other management portals for user and group management, such as the [Microsoft 365 admin center](https://admin.microsoft.com), you can continue to do so. But you should use the Azure portal to manage licenses at group level.+- Group-based licensing is currently available through the [Azure portal](https://portal.azure.com) and through the [Microsoft 365 admin center](https://admin.microsoft.com/). - Azure AD automatically manages license modifications that result from group membership changes. Typically, license modifications are effective within minutes of a membership change. |
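Group-based license assignment can also be driven programmatically. A minimal sketch of the request body for Microsoft Graph's `assignLicense` action on a group; the SKU GUID below is only an example value:

```python
# Hedged sketch: building the request body for Microsoft Graph's assignLicense
# action on a group (POST /groups/{group-id}/assignLicense). The SKU GUID below
# is only an example value.
import json

def assign_license_body(sku_id: str, disabled_plans=()) -> str:
    """JSON body that adds one SKU to the group and removes none."""
    return json.dumps({
        "addLicenses": [{"skuId": sku_id, "disabledPlans": list(disabled_plans)}],
        "removeLicenses": [],  # SKU IDs to remove from the group, if any
    })

body = assign_license_body("6fd2c87f-b296-42f0-b197-1e91e994b900")
print(body)
```

Azure AD then propagates the license to every member of the group, as described above.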
active-directory | Entitlement Management Custom Teams Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-custom-teams-extension.md | + + Title: Integrating Azure AD Entitlement Management with Microsoft Teams using Custom Extensibility and Logic Apps +description: This tutorial walks you through integrating Microsoft Teams with entitlement management using custom extensions and Logic Apps. ++++++ Last updated : 07/05/2023++++# Tutorial: Integrating Azure AD Entitlement Management with Microsoft Teams using Custom Extensibility and Logic Apps +++Scenario: Use custom extensibility and an Azure Logic App to automatically send notifications to end users on Microsoft Teams when they receive or are denied access to an access package. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Adding a Logic App Workflow to an existing catalog. +> * Adding a custom extension to a policy within an existing access package. +> * Register an application in Azure AD for resuming Entitlement Management workflow +> * Configuring ServiceNow for Automation Authentication. +> * Requesting access to an access package as an end-user. +> * Receiving access to the requested access package as an end-user. +++## Prerequisites ++- An Azure AD user account with an active Azure subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +++## Create a Logic App and custom extension in a catalog ++Prerequisite roles: Global administrator, Identity Governance administrator, or Catalog owner and Resource Group Owner. ++To create a Logic App and custom extension in a catalog, you'd follow these steps: ++1. 
Navigate to the [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement). ++1. In the left menu, select **Catalogs**. ++1. Select the catalog for which you want to add a custom extension, and then in the left menu, select **Custom Extensions (Preview)**. ++1. In the header navigation bar, select **Add a Custom Extension**. ++1. In the **Basics** tab, enter the name of the custom extension and a description of the workflow. These fields show up in the **Custom Extensions** tab of the catalog. ++1. Set the **Extension Type** to "**Request workflow**" to correspond with the policy stages: when the access package request is created, when the request is approved, when an assignment is granted, and when an assignment is removed. + > [!NOTE] + > Another custom extension can be created for the **Pre-Expiration workflow**. + + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-create-custom-extension.png" alt-text="Screenshot of creating a custom extension for entitlement management." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-create-custom-extension.png"::: +1. Under Extension Configuration, select "**Launch and continue**", which ensures that entitlement management continues after this workflow is triggered. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-custom-extension-behavior.png" alt-text="Screenshot of entitlement management custom extension behavior actions tab." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-custom-extension-behavior.png"::: +1. In the **Details** tab, choose Yes in the "*Create new logic app*" field and provide the Azure subscription and resource group details, along with the Logic App name. Select "*Create a logic app*". 
+ :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-custom-extension-details-expanded.png" alt-text="Screenshot of expanded custom extension details selection." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-custom-extension-details-expanded.png"::: +1. The status shows as "*Deploying*"; once deployment completes, a success message appears: + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-successful-deploy.png" alt-text="Screenshot of a successful deploy of a new Logic App."::: +1. In **Review and Create**, review the summary of your custom extension and make sure the details for your Logic App call-out are correct. Then select **Create**. ++This custom extension, linked to the Logic App, now appears in your Custom Extensions tab under Catalogs. You can call on it in your access package policies. ++## Configuring the Logic App ++1. The custom extension you created shows under the **Custom Extensions (Preview)** tab. Select "*Logic app*" in the custom extension to go to the page where you configure the logic app. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-configure-logic-app.png" alt-text="Screenshot of the configure logic apps screen." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-configure-logic-app.png"::: +1. On the left menu, select **Logic app designer**. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer.png" alt-text="Screenshot of the logic apps designer screen." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer.png"::: +1. Delete the **Condition** by selecting the three dots on the right side, then select "*Delete*" and "*OK*". 
Once deleted, the page should have an option to add a new step. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer-condition.png" alt-text="Screenshot of setting the logic app designer condition." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer-condition.png"::: +1. Select "*New Step*", which opens a dialog box; select **All** to expand the list of connectors. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer-connectors.png" alt-text="Screenshot of the list of connectors for the Logic App." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer-connectors.png"::: +1. In the list that appears, search for and select Microsoft Teams. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer-connectors-teams.png" alt-text="Screenshot of Microsoft Teams app in the Logic App connectors list." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-logic-app-designer-connectors-teams.png"::: +1. In the list of actions, select "*Post message in a chat or channel*". + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-teams-post-message.png" alt-text="Screenshot of the teams actions in logic app designer."::: +1. For **Post as** select "*Flow Bot*", and for **Post In** select "*Chat with Flow bot*". + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-teams-post-message-parameters.png" alt-text="Screenshot of setting the teams post message parameters."::: +1. Selecting **Recipient** opens a pop-up where you can select dynamic content. Select "*ObjectID -Requestor-Objectid*". 
+ :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-teams-post-message-recipient.png" alt-text="Screenshot of setting the recipient ID for the teams post message."::: +1. Add the message content. You can format plain text or add dynamic content. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-teams-post-message-dynamic-content.png" alt-text="Screenshot of the dynamic content setting in the teams post message settings."::: +1. Select inside "*Add new Parameter*" and check the "*IsAlert*" box to have the message show up in the Microsoft Teams activity feed. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-teams-post-message-alert.png" alt-text="Screenshot of setting isAlert in the teams post message settings."::: +1. Select **Save** to ensure your changes are stored. The Logic App is now ready to send Teams notifications when updates are made to an access package linked to it. +++## Add Custom Extension to a policy in an existing Access Package ++After setting up custom extensibility in the catalog, administrators can create an access package with a policy to trigger the custom extension when the request has been approved. This enables them to define specific access requirements and tailor the access review process to meet their organization's needs. ++**Prerequisite roles**: Global administrator, Identity Governance administrator, Catalog owner, or Access package manager ++1. In the Identity Governance portal, select **Access packages**. ++1. Select the access package you want to add a custom extension (Logic App) to from the list of already created access packages. ++1. 
Select **Edit** and under **Properties** change the catalog to the one previously used in the section [Create a Logic App and custom extension in a catalog](entitlement-management-custom-teams-extension.md#create-a-logic-app-and-custom-extension-in-a-catalog), then select **Save**. ++1. Switch to the **Policies** tab, select the policy, and select **Edit**. ++1. In the policy settings, go to the **Custom Extensions (Preview)** tab. ++1. In the menu below Stage, select the access package event you wish to use as the trigger for this custom extension (Logic App). For our scenario, to trigger the custom extension Logic App workflow when an access package is requested, approved, granted, or removed, select **Request is created**, **Request is approved**, **Assignment is Granted**, and **Assignment is removed**. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-custom-extension-policy.png" alt-text="Screenshot of custom extension policies for an access package."::: +1. Select **Update** to add it to an existing access package's policy. ++## Add Custom Extension to a new Access Package ++1. In the Identity Governance portal, select **Access packages** and create a new access package. ++1. Under the Basics tab, add the name of the policy, the description, and the catalog used in the section [Create a Logic App and custom extension in a catalog](entitlement-management-custom-teams-extension.md#create-a-logic-app-and-custom-extension-in-a-catalog). + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-create-access-package.png" alt-text="Screenshot of creating an access package."::: +1. Add the required **Resource roles**. ++1. Add the required **Requests**. ++1. Provide **Requestor Information** if needed. ++1. Add **Lifecycle** details. ++1. 
Under the Custom Extensions (Preview) tab, in the menu below Stage, select the access package event you wish to use as the trigger for this custom extension (Logic App). For our scenario, to trigger the custom extension Logic App workflow when an access package is requested, approved, granted, or removed, select **Request is created**, **Request is approved**, **Assignment is Granted**, and **Assignment is removed**. + :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-access-package-policy.png" alt-text="Screenshot of access package policy selection."::: +1. In **Review and Create**, review the summary of your access package and make sure the details are correct, then select **Create**. ++> [!NOTE] +> Select **New access package** if you want to create a new access package. For more information about how to create an access package, see: [Create a new access package in entitlement management](entitlement-management-access-package-create.md). For more information about how to edit an existing access package, see: [Change request settings for an access package in Azure AD entitlement management](entitlement-management-access-package-request-policy.md#open-and-edit-an-existing-policys-request-settings). +++## Validation ++To validate successful integration with Microsoft Teams, add a user to, or remove a user from, the access package created in the section [Add Custom Extension to a new Access Package](entitlement-management-custom-teams-extension.md#add-custom-extension-to-a-new-access-package). The user receives a notification on Microsoft Teams from **Power Automate**. ++## Next step ++> [!div class="nextstepaction"] +> [Configure verified ID settings for an access package in entitlement management](entitlement-management-verified-id-settings.md) ++++ |
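The Teams notification flow above is driven by the JSON payload that Entitlement Management posts to the Logic App's HTTP trigger. A minimal sketch of extracting the requestor's object ID (the value bound to the Recipient field); the payload shape below is a hypothetical illustration only — inspect the Logic App's run history for the exact schema your tenant receives:

```python
# Hedged sketch: pulling the requestor's object ID out of the callout payload
# posted to the Logic App's HTTP trigger. The payload shape below is a
# hypothetical example, not the documented schema -- check a real run's history.
import json

sample_payload = json.dumps({  # hypothetical example payload
    "AccessPackageAssignmentRequest": {
        "Requestor": {"ObjectId": "22222222-2222-2222-2222-222222222222"}
    }
})

def requestor_object_id(raw: str) -> str:
    """Return the object ID used as the Teams message recipient."""
    data = json.loads(raw)
    return data["AccessPackageAssignmentRequest"]["Requestor"]["ObjectId"]

print(requestor_object_id(sample_payload))
```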
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | This article lists the Azure AD built-in roles you can assign to allow managemen > | [Fabric Administrator](#fabric-administrator) | Can manage all aspects of the Fabric and Power BI products. | a9ea8996-122f-4c74-9520-8edcd192826c | > | [Global Administrator](#global-administrator) | Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities. | 62e90394-69f5-4237-9190-012177145e10 | > | [Global Reader](#global-reader) | Can read everything that a Global Administrator can, but not update anything. | f2ef992c-3afb-46b9-b7cf-a126ee74c451 |+> | [Global Secure Access Administrator](#global-secure-access-administrator) | Create and manage all aspects of Microsoft Entra Internet Access and Microsoft Entra Private Access, including managing access to public and private endpoints. | ac434307-12b9-4fa1-a708-88bf58caabc1 | > | [Groups Administrator](#groups-administrator) | Members of this role can create/manage groups, create/manage groups settings like naming and expiration policies, and view groups activity and audit reports. | fdd7a751-b60b-444a-984c-02652fe8fa1c | > | [Guest Inviter](#guest-inviter) | Can invite guest users independent of the 'members can invite guests' setting. | 95e79109-95c0-4d8e-aee3-d01accf2d47b | > | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators. 
| 729827e3-9c14-49f7-bb1b-9608f156bbb8 | Users with this role **cannot** do the following: > | microsoft.virtualVisits/allEntities/allProperties/read | Read all aspects of Virtual Visits | > | microsoft.windows.updatesDeployments/allEntities/allProperties/read | Read all aspects of Windows Update Service | +## Global Secure Access Administrator ++Assign the Global Secure Access Administrator role to users who need to do the following: ++- Create and manage all aspects of Microsoft Entra Internet Access and Microsoft Entra Private Access +- Manage access to public and private endpoints ++Users with this role **cannot** do the following: ++- Manage enterprise applications, application registrations, Conditional Access, or application proxy settings ++[Learn more](../../global-secure-access/overview-what-is-global-secure-access.md) ++> [!div class="mx-tableFixed"] +> | Actions | Description | +> | | | +> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | +> | microsoft.directory/applicationPolicies/standard/read | Read standard properties of application policies | +> | microsoft.directory/applications/applicationProxy/read | Read all application proxy properties | +> | microsoft.directory/applications/owners/read | Read owners of applications | +> | microsoft.directory/applications/policies/read | Read policies of applications | +> | microsoft.directory/applications/standard/read | Read standard properties of applications | +> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs | +> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies | +> | microsoft.directory/connectorGroups/allProperties/read | Read all properties of application proxy connector groups | +> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors | +> | 
microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy | +> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners | +> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy | +> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations | +> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | +> | microsoft.networkAccess/allEntities/allProperties/allTasks | Manage all aspects of Entra Network Access | +> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | +> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | +> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | +> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | + ## Groups Administrator Users in this role can create/manage groups and its settings like naming and expiration policies. It is important to understand that assigning a user to this role gives them the ability to manage all groups in the organization across various workloads like Teams, SharePoint, Yammer in addition to Outlook. Also the user will be able to manage the various groups settings across various admin portals like Microsoft admin center, Azure portal, as well as workload specific ones like Teams and SharePoint admin centers. |
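Built-in roles like the Global Secure Access Administrator above can also be assigned programmatically. A minimal sketch of the request body, assuming Microsoft Graph's role assignment API and a placeholder principal ID; the role template ID comes from the roles table in this article:

```python
# Hedged sketch: request body for assigning the Global Secure Access
# Administrator role via Microsoft Graph's role assignment API
# (POST /roleManagement/directory/roleAssignments). Principal ID is a placeholder.
import json

GSA_ROLE_TEMPLATE_ID = "ac434307-12b9-4fa1-a708-88bf58caabc1"  # from the roles table

def role_assignment_body(principal_id: str) -> str:
    """JSON body for a tenant-wide assignment of the role to one principal."""
    return json.dumps({
        "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
        "roleDefinitionId": GSA_ROLE_TEMPLATE_ID,
        "principalId": principal_id,
        "directoryScopeId": "/",  # "/" scopes the assignment to the whole tenant
    })

print(role_assignment_body("33333333-3333-3333-3333-333333333333"))
```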
active-directory | Airtable Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airtable-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Airtable for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Airtable. +++writer: twimmers ++ms.assetid: 48a929c5-6cfa-44ca-8471-641fa5a35ee0 ++++ Last updated : 07/17/2023++++# Tutorial: Configure Airtable for automatic user provisioning ++This tutorial describes the steps you need to perform in both Airtable and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Airtable](https://www.airtable.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Airtable. +> * Remove users in Airtable when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Airtable. +> * Provision groups and group memberships in Airtable. +> * [Single sign-on](airtable-tutorial.md) to Airtable (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An Airtable tenant. +* A user account in Airtable with Admin permissions. ++## Step 1. 
Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Airtable](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Airtable to support provisioning with Azure AD +Contact Airtable support to configure Airtable to support provisioning with Azure AD. ++## Step 3. Add Airtable from the Azure AD application gallery ++Add Airtable from the Azure AD application gallery to start managing provisioning to Airtable. If you have previously set up Airtable for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. 
When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Airtable ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Airtable based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Airtable in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Airtable**. ++ ![Screenshot of the Airtable link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Airtable Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Airtable. If the connection fails, ensure your Airtable account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. 
Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Airtable**. ++1. Review the user attributes that are synchronized from Azure AD to Airtable in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Airtable for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Airtable API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Airtable| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |displayName|String|| + |title|String|| + |emails[type eq "work"].value|String|| + |preferredLanguage|String|| + |name.givenName|String||✓ + |name.familyName|String||✓ + |name.formatted|String|| + |addresses[type eq "work"].formatted|String|| + |addresses[type eq "work"].streetAddress|String|| + |addresses[type eq "work"].locality|String|| + |addresses[type eq "work"].region|String|| + |addresses[type eq "work"].postalCode|String|| + |addresses[type eq "work"].country|String|| + |phoneNumbers[type eq "work"].value|String|| + |phoneNumbers[type eq "mobile"].value|String|| + |phoneNumbers[type eq "fax"].value|String|| + |externalId|String|| + |nickName|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager.value|String|| + |addresses[type eq "home"].formatted|String|| + |addresses[type eq "home"].streetAddress|String|| + |addresses[type eq "home"].locality|String|| + |addresses[type eq "home"].region|String|| + |addresses[type eq "home"].postalCode|String|| + |addresses[type eq "home"].country|String|| + |addresses[type eq "other"].formatted|String|| + |addresses[type eq 
"other"].streetAddress|String|| + |addresses[type eq "other"].locality|String|| + |addresses[type eq "other"].region|String|| + |addresses[type eq "other"].postalCode|String|| + |addresses[type eq "other"].country|String|| + |emails[type eq "home"].value|String|| + |emails[type eq "other"].value|String|| + |locale|String|| + |name.honorificPrefix|String|| + |name.honorificSuffix|String|| + |name.middleName|String|| + |name.familyName|String|| + |phoneNumbers[type eq "home"].value|String|| + |phoneNumbers[type eq "other"].value|String|| + |phoneNumbers[type eq "pager"].value|String|| + |timezone|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|| + |userType|String|| +++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Airtable**. ++1. Review the group attributes that are synchronized from Azure AD to Airtable in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Airtable for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Airtable| + ||||| + |displayName|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Airtable, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users and/or groups that you would like to provision to Airtable by choosing the desired values in **Scope** in the **Settings** section. 
++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
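The mapping tables in the Airtable entry above flag `userName` (and `displayName` for groups) as **Supported for filtering**, which is what lets the provisioning service match existing Airtable records during update cycles: it issues a SCIM filter query against the matching attribute. A rough sketch of building such a lookup request (the endpoint below is a placeholder, not Airtable's actual SCIM base URL):

```python
from urllib.parse import urlencode

def scim_filter_url(tenant_url: str, attribute: str, value: str) -> str:
    """Build the SCIM /Users lookup a provisioning service issues to find
    an existing user by its Matching attribute (RFC 7644 filter grammar)."""
    filter_expr = f'{attribute} eq "{value}"'
    return f'{tenant_url}/Users?{urlencode({"filter": filter_expr})}'

# Placeholder endpoint and user, for illustration only.
url = scim_filter_url("https://example.scim.endpoint/v2", "userName", "alice@contoso.com")
print(url)
```

This is why changing the matching target attribute requires checking that the target API supports filtering on it: the lookup above only works for attributes the SCIM endpoint can filter by.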
active-directory | Cleanmail Swiss Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cleanmail-swiss-provisioning-tutorial.md | -This tutorial describes the steps you need to do in both Cleanmail Swiss and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Cleanmail](https://www.alinto.com/fr) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +This tutorial describes the steps you need to do in both Cleanmail Swiss and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Cleanmail](https://www.alinto.com/fr) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Capabilities supported The scenario outlined in this tutorial assumes that you already have the followi ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).-1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 1. Determine what data to [map between Azure AD and Cleanmail](../app-provisioning/customize-application-attributes.md). ## Step 2. 
Configure Cleanmail Swiss to support provisioning with Azure AD Contact [Cleanmail Swiss Support](https://www.alinto.com/contact-email-provider/ Add Cleanmail Swiss from the Azure AD application gallery to start managing provisioning to Cleanmail. If you have previously set up Cleanmail Swiss for SSO, you can use the same application. However, it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). -## Step 4. Define who will be in scope for provisioning +## Step 4. Define who is in scope for provisioning -The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who is provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -* Start small. Test with a small set of users and groups before rolling out to everyone.
When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +* Start small. Test with a small set of users before rolling out to everyone. When the scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app instance. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). * If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. ## Step 5. Configure automatic user provisioning to Cleanmail Swiss -This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Cleanmail Swiss based on user and group assignments in Azure AD. +This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Cleanmail Swiss based on user assignments in Azure AD. ### To configure automatic user provisioning for Cleanmail Swiss in Azure AD: This section guides you through the steps to configure the Azure AD provisioning ![Screenshot of provisioning status toggled on.](common/provisioning-toggle-on.png) -1. Define the users and groups that you would like to provision to Cleanmail Swiss by choosing the desired values in **Scope** in the **Settings** section. +1. Define the users that you would like to provision to Cleanmail Swiss by choosing the desired values in **Scope** in the **Settings** section. 
![Screenshot of provisioning scope.](common/provisioning-scope.png) This section guides you through the steps to configure the Azure AD provisioning ![Screenshot of saving provisioning configuration.](common/provisioning-configuration-save.png) -This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than next cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. +This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than next cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment: |
active-directory | Dagster Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dagster-cloud-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).-* A user account in Dagster Cloud with Admin permissions. +* A user account in Dagster Cloud with Org Admin permissions. ## Step 1. Plan your provisioning deployment The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Dagster Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Dagster Cloud to support provisioning with Azure AD-Contact Dagster Cloud support to configure Dagster Cloud to support provisioning with Azure AD. +1. Sign in to your Dagster Cloud account. +1. Click the **user menu (your icon) > Cloud Settings**. +1. Click the **Provisioning** tab. +1. If SCIM provisioning isn't enabled, click the **Enable SCIM provisioning** button to enable it. +1. Click **Create SCIM token** to create an API token. This token will be used to authenticate requests from Azure AD to Dagster Cloud. +Keep the API token handy - you'll need it later in step 5. ## Step 3. Add Dagster Cloud from the Azure AD application gallery This section guides you through the steps to configure the Azure AD provisioning ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) -1. Under the **Admin Credentials** section, input your Dagster Cloud Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Dagster Cloud. +1. 
Under the **Admin Credentials** section, input your Dagster Cloud Tenant URL and Secret Token. The Tenant URL is `https://*your-org-name*.dagster.cloud/scim/v2` and the Secret Token is the SCIM token you created in step 2 above. Click **Test Connection** to ensure Azure AD can connect to Dagster Cloud. ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) |
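Step 5 of the Dagster Cloud entry gives the Tenant URL pattern `https://<your-org-name>.dagster.cloud/scim/v2` and reuses the SCIM token from step 2 as the Secret Token. A minimal sketch of assembling those two values the way a SCIM client consumes them (the org name and token below are placeholders, and the Bearer scheme is the conventional SCIM authorization header, assumed rather than stated by the tutorial):

```python
def dagster_scim_config(org_name: str, scim_token: str) -> dict:
    """Assemble the Tenant URL and credential header described in step 5.
    The URL pattern comes from the tutorial; org name and token are placeholders."""
    return {
        "tenant_url": f"https://{org_name}.dagster.cloud/scim/v2",
        # Bearer is the conventional SCIM authorization scheme (an assumption here).
        "headers": {"Authorization": f"Bearer {scim_token}"},
    }

cfg = dagster_scim_config("my-org", "example-scim-token")
print(cfg["tenant_url"])
```

Keeping the token out of source control and pasting it only into the **Secret Token** field in the Azure portal is the intended workflow; the dict above just shows how the two fields relate.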
active-directory | Infor Cloudsuite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infor-cloudsuite-provisioning-tutorial.md | Before configuring Infor CloudSuite for automatic user provisioning with Azure A This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Infor CloudSuite based on user and/or group assignments in Azure AD. > [!TIP]-> You may also choose to enable SAML-based single sign-on for Infor CloudSuite, following the instructions provided in the [Infor CloudSuite Single sign-on tutorial](./infor-cloud-suite-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features compliment each other. +> You may also choose to enable SAML-based single sign-on for Infor CloudSuite, following the instructions provided in the [Infor CloudSuite Single sign-on tutorial](./infor-cloud-suite-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other. ### To configure automatic user provisioning for Infor CloudSuite in Azure AD: This section guides you through the steps to configure the Azure AD provisioning |urn:ietf:params:scim:schemas:extension:infor:2.0:User:actorId|String|| |urn:ietf:params:scim:schemas:extension:infor:2.0:User:federationId|String|| |urn:ietf:params:scim:schemas:extension:infor:2.0:User:ifsPersonId|String||- |urn:ietf:params:scim:schemas:extension:infor:2.0:User:inUser|String|| + |urn:ietf:params:scim:schemas:extension:infor:2.0:User:lnUser|String|| |urn:ietf:params:scim:schemas:extension:infor:2.0:User:userAlias|String|| |
active-directory | Informacast Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/informacast-provisioning-tutorial.md | + + Title: 'Tutorial: Configure InformaCast for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to InformaCast. +++writer: twimmers ++ms.assetid: eeae199e-09a2-457f-b879-5af76aa3d23b ++++ Last updated : 07/17/2023++++# Tutorial: Configure InformaCast for automatic user provisioning ++This tutorial describes the steps you need to perform in both InformaCast and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [InformaCast](https://www.singlewire.com/informacast) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in InformaCast. +> * Remove users in InformaCast when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and InformaCast. +> * Provision groups and group memberships in InformaCast. +> * [Single sign-on](informacast-tutorial.md) to InformaCast (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An InformaCast tenant.
+* A user account in InformaCast with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and InformaCast](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure InformaCast to support provisioning with Azure AD +Contact InformaCast support to configure InformaCast to support provisioning with Azure AD. ++## Step 3. Add InformaCast from the Azure AD application gallery ++Add InformaCast from the Azure AD application gallery to start managing provisioning to InformaCast. If you have previously set up InformaCast for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user/group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app.
When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to InformaCast ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in InformaCast based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for InformaCast in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **InformaCast**. ++ ![Screenshot of the InformaCast link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your InformaCast Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to InformaCast. If the connection fails, ensure your InformaCast account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1.
Under the **Mappings** section, select **Synchronize Azure Active Directory Users to InformaCast**. ++1. Review the user attributes that are synchronized from Azure AD to InformaCast in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in InformaCast for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the InformaCast API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by InformaCast| + ||||| + |userName|String|✓|✓ + |active|Boolean|✓| + |displayName|String|| + |title|String|| + |emails[type eq "work"].value|String|| + |preferredLanguage|String|| + |name.givenName|String|| + |name.familyName|String|| + |name.formatted|String||✓ + |addresses[type eq "work"].formatted|String|| + |addresses[type eq "work"].streetAddress|String|| + |addresses[type eq "work"].locality|String|| + |addresses[type eq "work"].region|String|| + |addresses[type eq "work"].postalCode|String|| + |addresses[type eq "work"].country|String|| + |phoneNumbers[type eq "work"].value|String|| + |phoneNumbers[type eq "mobile"].value|String|| + |externalId|String|| + |emails[type eq "home"].value|String|| + |emails[type eq "other"].value|String|| + |phoneNumbers[type eq "home"].value|String|| ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to InformaCast**. ++1. Review the group attributes that are synchronized from Azure AD to InformaCast in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in InformaCast for update operations. Select the **Save** button to commit any changes. 
++ |Attribute|Type|Supported for filtering|Required by InformaCast| + ||||| + |displayName|String|✓|✓ + |externalId|String||✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for InformaCast, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users and/or groups that you would like to provision to InformaCast by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). 
++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Netskope Cloud Exchange Administration Console Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-cloud-exchange-administration-console-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port 1. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier** textbox, type a URL using the following pattern:- `https://<Cloud_Exchange_FQDN>.com/api/metadata` + `https://<Cloud_Exchange_FQDN>/api/metadata` b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<Cloud_Exchange_FQDN>/api/ssoauth?acs=true` |
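The corrected Identifier and Reply URL patterns in the Netskope entry above differ only in path and query, so both derive from the Cloud Exchange FQDN. A small sketch of that derivation (the FQDN below is a placeholder):

```python
def saml_urls(cloud_exchange_fqdn: str) -> dict:
    """Derive the Basic SAML Configuration values from the Cloud Exchange FQDN,
    following the corrected patterns in the tutorial."""
    base = f"https://{cloud_exchange_fqdn}"
    return {
        "identifier": f"{base}/api/metadata",         # Entity ID
        "reply_url": f"{base}/api/ssoauth?acs=true",  # Assertion Consumer Service URL
    }

urls = saml_urls("ce.example.com")  # placeholder FQDN
print(urls["identifier"])
```

The fix in the diff removes the stray `.com` that was appended after the FQDN placeholder in the Identifier pattern, which the sketch reflects.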
aks | App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md | The application routing add-on creates an ingress class on the cluster called *w 2. Copy the following YAML into a new file named **ingress.yaml** and save the file to your local computer. > [!NOTE]- > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that will be generated to store the certificate. This certificate will be presented in the browser. + > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. + > The *`secretName`* key in the `tls` section defines the name of the secret that contains the certificate for this Ingress resource. This certificate will be presented in the browser when a client browses to the URL defined in the `<Hostname>` key. Make sure that the value of `secretName` is equal to `keyvault-` followed by the value of the Ingress resource name (from `metadata.name`). In the example YAML, secretName will need to be equal to `keyvault-aks-helloworld`. ```yaml apiVersion: networking.k8s.io/v1 The application routing add-on creates an ingress class on the cluster called *w tls: - hosts: - <Hostname>- secretName: keyvault-aks-helloworld + secretName: keyvault-<Ingress resource name> ``` ### Create the resources on the cluster OSM issues a certificate that Nginx uses as the client certificate to proxy HTTP 2. Copy the following YAML into a new file named **ingress.yaml** and save the file to your local computer. > [!NOTE]- > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that will be generated to store the certificate. This certificate will be presented in the browser. 
+ > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. + > The *`secretName`* key in the `tls` section defines the name of the secret that contains the certificate for this Ingress resource. This certificate will be presented in the browser when a client browses to the URL defined in the `<Hostname>` key. Make sure that the value of `secretName` is equal to `keyvault-` followed by the value of the Ingress resource name (from `metadata.name`). In the example YAML, secretName will need to be equal to `keyvault-aks-helloworld`. ```yaml apiVersion: networking.k8s.io/v1 OSM issues a certificate that Nginx uses as the client certificate to proxy HTTP tls: - hosts: - <Hostname>- secretName: keyvault-aks-helloworld + secretName: keyvault-<Ingress resource name> ``` ### Create the resources on the cluster |
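The updated note in the app-routing entry ties the TLS `secretName` to the Ingress resource name: it must be `keyvault-` followed by `metadata.name`. A quick sketch of validating that convention before applying a manifest (the helper names are illustrative, not part of the add-on):

```python
def expected_secret_name(ingress_name: str) -> str:
    """Per the note, the certificate secret is named 'keyvault-' plus the
    Ingress resource name (metadata.name)."""
    return f"keyvault-{ingress_name}"

def follows_convention(metadata_name: str, tls_secret_name: str) -> bool:
    # True when the manifest's tls.secretName matches the required pattern.
    return tls_secret_name == expected_secret_name(metadata_name)

# For the article's example manifest named 'aks-helloworld':
print(follows_convention("aks-helloworld", "keyvault-aks-helloworld"))
```

A mismatched `secretName` is an easy mistake when copying the example YAML under a new Ingress name, which is presumably why the note was expanded.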
aks | Azure Csi Files Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md | To mount the Azure Files file share into your pod, you configure the volume in t 1. Create a new file named `azure-files-pod.yaml` and copy in the following contents. If you changed the name of the file share or secret name, update the `shareName` and `secretName`. You can also update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a `mountPath` using the Windows path convention, such as *'D:'*. - ```yaml - apiVersion: v1 - kind: Pod - metadata: +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + nodeSelector: + kubernetes.io/os: linux + containers: + - image: 'mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine' name: mypod- spec: - nodeSelector: - kubernetes.io/os: linux - containers: - - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine - name: mypod - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - volumeMounts: - - name: azure - mountPath: /mnt/azure - volumes: - - name: azure - csi: - driver: file.csi.azure.com - readOnly: false - volumeAttributes: - secretName: azure-secret # required - shareName: aksshare # required - mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional - ``` + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + volumeMounts: + - name: azure + mountPath: /mnt/azure + volumes: + - name: azure + csi: + driver: file.csi.azure.com + readOnly: false + volumeAttributes: + secretName: azure-secret # required + shareName: aksshare # required + mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock' # optional +``` 2. Create the pod using the [`kubectl apply`][kubectl-apply] command. |
aks | Enable Host Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md | Title: Enable host-based encryption on Azure Kubernetes Service (AKS) description: Learn how to configure host-based encryption in an Azure Kubernetes Service (AKS) cluster. Previously updated : 06/22/2023 Last updated : 07/17/2023 ms.devlang: azurecli Before you begin, review the following prerequisites and limitations. az aks nodepool add --name hostencrypt --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 --enable-encryption-at-host ``` -> [!NOTE] -> After you enable host-based encryption on your cluster, make sure you provide the proper access to your Azure Key Vault to enable encryption at rest. For more information, see [Control access][control-keys] and [Azure built-in roles for Key Vault data plane operations][akv-built-in-roles]. - ## Next steps - Review [best practices for AKS cluster security][best-practices-security]. |
aks | Quick Kubernetes Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md | The following example creates a resource group named *myResourceGroup* in the *e The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity. -* Create an AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-addons monitoring` and `--enable-msi-auth-for-monitoring` parameters to enable [Azure Monitor Container insights][azure-monitor-containers] with managed identity authentication (preview). +* Create an AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-addons monitoring` parameter to enable [Azure Monitor Container insights][azure-monitor-containers] with managed identity authentication (requires Azure CLI version 2.49.0 or later). ```azurecli-interactive- az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --generate-ssh-keys + az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --generate-ssh-keys ``` After a few minutes, the command completes and returns JSON-formatted information about the cluster. |
aks | Load Balancer Standard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md | az aks create \ By default, AKS sets *AllocatedOutboundPorts* on its load balancer to `0`, which enables [automatic outbound port assignment based on backend pool size][azure-lb-outbound-preallocatedports] when creating a cluster. For example, if a cluster has 50 or fewer nodes, 1024 ports are allocated to each node. As the number of nodes in the cluster increases, fewer ports are available per node. > [!IMPORTANT]-> There is a hard limit of 1024 ports regardless of whether front-end IPs are added when the node size is less than 50. +> There is a hard limit of 1024 ports regardless of whether front-end IPs are added when the node size is less than or equal to 50 (1-50). To show the *AllocatedOutboundPorts* value for the AKS cluster load balancer, use `az network lb outbound-rule list`. |
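The load-balancer entry above describes automatic outbound port assignment: 1,024 ports per node at 50 or fewer nodes, shrinking as the cluster grows. As a sketch, the tier boundaries below follow the default SNAT port preallocation documented for Azure Standard Load Balancer outbound rules; verify the exact numbers against current Azure documentation before relying on them:

```python
def default_ports_per_node(node_count: int) -> int:
    """Default SNAT port preallocation per backend instance when
    AllocatedOutboundPorts is 0 (automatic assignment)."""
    tiers = [(50, 1024), (100, 512), (200, 256), (400, 128), (800, 64), (1000, 32)]
    for max_nodes, ports in tiers:
        if node_count <= max_nodes:
            return ports
    raise ValueError("Standard Load Balancer backend pools max out at 1,000 instances")

print(default_ports_per_node(50))  # the 1-50 node tier described in the article
```

This makes the corrected wording concrete: the 1,024-port figure applies through exactly 50 nodes, and the 51st node drops every node to the next tier.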
aks | Operator Best Practices Cluster Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md | For more information about Azure AD integration, Kubernetes RBAC, and Azure RBAC > > Add a network policy in all user namespaces to block pod egress to the metadata endpoint. +> [!NOTE] +> To implement Network Policy, include the attribute `--network-policy azure` when creating the AKS cluster. Use the following command to create the cluster: +> `az aks create -g myResourceGroup -n myManagedCluster --enable-managed-identity --network-plugin azure --network-policy azure` + ```yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy |
aks | Use Byo Cni | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-byo-cni.md | At this point, the cluster is ready for installation of a CNI plugin. Learn more about networking in AKS in the following articles: * [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)-* [Use an internal load balancer with Azure Container Service (AKS)](internal-lb.md) +* [Use an internal load balancer with Azure Kubernetes Service (AKS)](internal-lb.md) * [Create a basic ingress controller with external network connectivity][aks-ingress-basic] * [Enable the HTTP application routing add-on][aks-http-app-routing] * [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal] |
api-management | Configure Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md | There are several API Management endpoints to which you can assign a custom doma ### Considerations * You can update any of the endpoints supported in your service tier. Typically, customers update **Gateway** (this URL is used to call the APIs exposed through API Management) and **Developer portal** (the developer portal URL).-* The default **Gateway** endpoint also is available after you configure a custom Gateway domain name. For other API Management endpoints (such as **Developer portal**) that you configure with a custom domain name, the default endpoint is no longer available. +* The default **Gateway** endpoint remains available after you configure a custom Gateway domain name and cannot be deleted. For other API Management endpoints (such as **Developer portal**) that you configure with a custom domain name, the default endpoint is no longer available. * Only API Management instance owners can use **Management** and **SCM** endpoints internally. These endpoints are less frequently assigned a custom domain name. * The **Premium** and **Developer** tiers support setting multiple hostnames for the **Gateway** endpoint. * Wildcard domain names, like `*.contoso.com`, are supported in all tiers except the Consumption tier. A specific subdomain certificate (for example, api.contoso.com) would take precedence over a wildcard certificate (*.contoso.com) for requests to api.contoso.com. |
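The certificate-precedence rule above (a specific subdomain certificate beats a wildcard certificate for a matching request) can be sketched as a small matching function. This illustrates the selection order only; it is not API Management's implementation:

```shell
# Pick a certificate for a hostname: an exact-match certificate wins
# over a wildcard certificate. Illustration of the precedence rule only.
pick_cert() {
  host=$1; shift
  # first pass: exact matches take precedence
  for cert in "$@"; do
    [ "$cert" = "$host" ] && { echo "$cert"; return; }
  done
  # second pass: wildcard certificates like *.contoso.com
  for cert in "$@"; do
    case "$cert" in
      \*.*)
        case "$host" in
          *."${cert#\*.}") echo "$cert"; return ;;
        esac
        ;;
    esac
  done
  echo "no match"
}

pick_cert api.contoso.com '*.contoso.com' api.contoso.com   # -> api.contoso.com
pick_cert www.contoso.com '*.contoso.com' api.contoso.com   # -> *.contoso.com
```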
api-management | Forward Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md | The `forward-request` policy forwards the incoming request to the backend servic ## Policy statement ```xml-<forward-request http-version="1 | 2or1 | 2" timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/> +<forward-request http-version="1 | 2or1 | 2" timeout="time in seconds" continue-timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/> ``` ## Attributes The `forward-request` policy forwards the incoming request to the backend servic | Attribute | Description | Required | Default | | | -- | -- | - | | timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. Policy expressions are allowed. | No | 300 |-| http-version | The HTTP spec version to use when sending the HTTP request to the backend service. When using `2or1`, the gateway will favor HTTP/2 over HTTP/1, but fall back to HTTP/1 if HTTP/2 does not work. | No | 1 | +| continue-timeout | The amount of time in seconds to wait for a `100 Continue` status code to be returned by the backend service before a timeout error is raised. Policy expressions are allowed. | No | N/A | +| http-version | The HTTP spec version to use when sending the HTTP request to the backend service. When using `2or1`, the gateway will favor HTTP/2 over HTTP/1, but fall back to HTTP/1 if HTTP/2 doesn't work. 
| No | 1 | | follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. Policy expressions are allowed. | No | `false` | | buffer-request-body | When set to `true`, the request is buffered and will be reused on [retry](retry-policy.md). | No | `false` | | buffer-response | Affects processing of chunked responses. When set to `false`, each chunk received from the backend is immediately returned to the caller. When set to `true`, chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to `false` with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. Policy expressions aren't allowed. | No | `true` | This operation level policy uses the `base` element to inherit the backend polic ### Do not inherit policy from parent scope -This operation level policy explicitly forwards all requests to the backend service with a timeout of 120 seconds and does not inherit the parent API level backend policy. If the backend service responds with an error status code from 400 to 599 inclusive, the [on-error](api-management-error-handling-policies.md) section will be triggered. +This operation level policy explicitly forwards all requests to the backend service with a timeout of 120 seconds and doesn't inherit the parent API level backend policy. If the backend service responds with an error status code from 400 to 599 inclusive, the [on-error](api-management-error-handling-policies.md) section will be triggered. ```xml <!-- operation level --> This operation level policy explicitly forwards all requests to the backend serv ### Do not forward requests to backend -This operation level policy does not forward requests to the backend service. +This operation level policy doesn't forward requests to the backend service. ```xml <!-- operation level --> |
api-management | Websocket Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md | For example, the following screenshot shows recent WebSocket API responses with Below are the current restrictions of WebSocket support in API Management: * WebSocket APIs are not supported yet in the Consumption tier.-* WebSocket APIs are not supported yet in the [self-hosted gateway](./self-hosted-gateway-overview.md). -* 200 active connections limit per unit. * WebSocket APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message. * Currently, the [set-header](set-header-policy.md) policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests. * During the TLS handshake with a WebSocket backend, API Management validates that the server certificate is trusted and that its subject name matches the hostname. With HTTP APIs, API Management validates that the certificate is trusted but doesn't validate that hostname and subject match. +For WebSocket connection limits, see [API Management limits](../azure-resource-manager/management/azure-subscription-service-limits.md#api-management-limits). + ### Unsupported policies The following policies are not supported by and cannot be applied to the onHandshake operation: |
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | Title: Migrate to App Service Environment v3 by using the migration feature description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 07/06/2023 Last updated : 07/17/2023 If your App Service Environment doesn't pass the validation checks or you try to |App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. | |Migrate is not available for this subscription.|Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.| |Your InternalLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |+|Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade will be started. Your App Service Environment won't be impacted, but you won't be able to scale or make changes to your App Service Environment while the upgrade is in progress. You won't be able to migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. | ## Overview of the migration process using the migration feature |
app-service | Upgrade To Asev3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md | description: Take the first steps toward upgrading to App Service Environment v3 Previously updated : 06/26/2023 Last updated : 07/17/2023 # Upgrade to App Service Environment v3 This page is your one-stop shop for guidance and resources to help you upgrade s |**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)| |**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).| |**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|-|**5**|**Learn more**|Need more help? 
[Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)| +|**5**|**Learn more**|Join the [free live webinar](https://developer.microsoft.com/en-us/reactor/events/20417) with FastTrack Architects.<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)| ## Additional information |
app-service | Monitor Instances Health Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md | This article uses Health check in the Azure portal to monitor App Service instan ![Health check failure][1] -Please note that _/api/health_ is just an example added for illustration purposes. You should make sure that the path you are selecting is a valid path and we do not create a healthcheck path by default but it needs to exist for your application. +Please note that _/api/health_ is just an example added for illustration purposes. We do not create a Health check path by default. You should make sure that the path you are selecting is a valid path that exists within your application. ## What App Service does with Health checks - When given a path on your app, Health check pings this path on all instances of your App Service app at 1-minute intervals.-- If an instance doesn't respond with a status code between 200-299 (inclusive) after 10 requests, App Service determines it's unhealthy and removes it. (The required number of failed requests for an instance to be deemed unhealthy is configurable to a minimum of two requests.)+- If an instance doesn't respond with a status code between 200-299 (inclusive) after 10 requests, App Service determines it's unhealthy and removes it from the load balancer for this Web App. The required number of failed requests for an instance to be deemed unhealthy is configurable to a minimum of two requests. - After removal, Health check continues to ping the unhealthy instance. If the instance begins to respond with a healthy status code (200-299) then the instance is returned to the load balancer.-- If an instance remains unhealthy for one hour, it will be replaced with new instance.+- If an instance remains unhealthy for one hour, it will be replaced with a new instance. - When scaling out, App Service pings the Health check path to ensure new instances are ready. 
> [!NOTE] Please note that _/api/health_ is just an example added for illustration purpose 4. Select **Save**. > [!NOTE]-> - Your [App Service plan](./overview-hosting-plans.md) should be scaled to two or more instances to fully utilize Health check. The Health check path should check critical components of your application. For example, if your application depends on a database and a messaging system, the Health check endpoint should connect to those components. If the application can't connect to a critical component, then the path should return a 500-level response code to indicate the app is unhealthy. Also, if the path does not return a response within 1 minute, the health check ping is considered unhealthy. -> - When selecting the Health check path, make sure you're selecting a path that returns 200 status code only when the app is fully warmed up. +> - Your [App Service plan](./overview-hosting-plans.md) should be scaled to two or more instances to fully utilize Health check. +> - The Health check path should check critical components of your application. For example, if your application depends on a database and a messaging system, the Health check endpoint should connect to those components. If the application can't connect to a critical component, then the path should return a 500-level response code to indicate the app is unhealthy. Also, if the path does not return a response within 1 minute, the health check ping is considered unhealthy. +> - When selecting the Health check path, make sure you're selecting a path that returns a 200 status code only when the app is fully warmed up. > [!CAUTION] > Health check configuration changes restart your app. To minimize impact to production apps, we recommend [configuring staging slots](deploy-staging-slots.md) and swapping to production. 
After providing your application's Health check path, you can monitor the health - Health check can be enabled for **Free** and **Shared** App Service Plans so you can have metrics on the site's health and set up alerts, but because **Free** and **Shared** sites can't scale out, any unhealthy instances won't be replaced. You should scale up to the **Basic** tier or higher so you can scale out to 2 or more instances and utilize the full benefit of Health check. This is recommended for production-facing applications as it will increase your app's availability and performance. - The App Service plan can have a maximum of one unhealthy instance replaced per hour and, at most, three instances per day.-- There's a limit of replaced instances we have per scale unit, and its value is reset once at 12h.+- There's a non-configurable limit on the total number of instances replaced by Health check per scale unit. If this limit is reached, no unhealthy instances will be replaced. This value gets reset every 12 hours. ## Frequently Asked Questions Imagine you have two applications (or one app with a slot) with Health check ena ### What if all my instances are unhealthy? -In the scenario where all instances of your application are unhealthy, App Service will remove instances from the load balancer up to the percentage specified in `WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT`. In this scenario, taking all unhealthy app instances out of the load balancer rotation would effectively cause an outage for your application. +In the scenario where all instances of your application are unhealthy, App Service will not remove instances from the load balancer. In this scenario, taking all unhealthy app instances out of the load balancer rotation would effectively cause an outage for your application; however, instance replacement will still be honored. ### Does Health check work on App Service Environments? |
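The removal rule described in that row (an instance is pulled from the load balancer after consecutive failed pings, 10 by default and configurable down to 2) can be modeled with a short sketch. This is a simplified illustration of the counting behavior, not App Service's actual logic:

```shell
# Simplified model: an instance counts as "unhealthy" once it returns
# $threshold consecutive non-2xx health-check responses.
is_unhealthy() {
  threshold=$1; shift
  fails=0
  for code in "$@"; do
    if [ "$code" -ge 200 ] && [ "$code" -le 299 ]; then
      fails=0               # any healthy response resets the count
    else
      fails=$((fails + 1))
    fi
    [ "$fails" -ge "$threshold" ] && return 0
  done
  return 1
}
```

Ten straight 500s trip the default threshold of 10, while a single 200 in the middle resets the count; lowering the threshold to 2 makes removal correspondingly faster.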
app-service | Troubleshoot Dotnet Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-dotnet-visual-studio.md | For more information about troubleshooting apps in Azure App Service, see the fo * [How to monitor apps](web-sites-monitor.md) * [Investigating Memory Leaks in Azure App Service with Visual Studio 2013](https://devblogs.microsoft.com/devops/investigating-memory-leaks-in-azure-web-sites-with-visual-studio-2013/). Microsoft ALM blog post about Visual Studio features for analyzing managed memory issues.-* [Azure App Service online tools you should know about](https://azure.microsoft.com/blog/2014/03/28/windows-azure-websites-online-tools-you-should-know-about-2/). Blog post by Amit Apple. +* [Azure App Service online tools you should know about](https://azure.microsoft.com/blog/windows-azure-websites-online-tools-you-should-know-about). Blog post by Amit Apple. For help with a specific troubleshooting question, start a thread in one of the following forums: For more information about how to use debug mode in Visual Studio, see [Debuggin ### Remote debugging in Azure For more information about remote debugging for App Service apps and WebJobs, see the following resources: -* [Introduction to Remote Debugging Azure App Service](https://azure.microsoft.com/blog/2014/05/06/introduction-to-remote-debugging-on-azure-web-sites/). 
-* [Introduction to Remote Debugging Azure App Service part 2 - Inside Remote debugging](https://azure.microsoft.com/blog/2014/05/07/introduction-to-remote-debugging-azure-web-sites-part-2-inside-remote-debugging/) -* [Introduction to Remote Debugging on Azure App Service part 3 - Multi-Instance environment and GIT](https://azure.microsoft.com/blog/2014/05/08/introduction-to-remote-debugging-on-azure-web-sites-part-3-multi-instance-environment-and-git/) +* [Introduction to Remote Debugging Azure App Service](https://azure.microsoft.com/blog/introduction-to-remote-debugging-on-azure-web-sites/). +* [Introduction to Remote Debugging Azure App Service part 2 - Inside Remote debugging](https://azure.microsoft.com/blog/introduction-to-remote-debugging-on-azure-web-sites/) +* [Introduction to Remote Debugging on Azure App Service part 3 - Multi-Instance environment and GIT](https://azure.microsoft.com/blog/introduction-to-remote-debugging-on-azure-web-sites/) * [WebJobs Debugging (video)](https://www.youtube.com/watch?v=ncQm9q5ZFZs&list=UU_SjTh-ZltPmTYzAybypB-g&index=1) If your app uses an Azure Web API or Mobile Services back-end and you need to debug that, see [Debugging .NET Backend in Visual Studio](/archive/blogs/azuremobile/debugging-net-backend-in-visual-studio). |
azure-app-configuration | Howto Targetingfilter Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter-aspnet-core.md | The entire *ConfigureServices* method will look like this: 1. In the **Edit** screen, select the **Enable feature flag** checkbox if it isn't already selected. Then select the **Use feature filter** checkbox. -1. Select the **Targeting** radio button. +1. Select the **Create** button. ++1. Select **Targeting filter** in the filter type dropdown. ++1. Select the **Override by Groups** and **Override by Users** checkboxes. 1. Select the following options: - **Default percentage**: 0- - **Groups**: Enter a **Name** of _contoso.com_ and a **Percentage** of _50_ - - **Users**: `test@contoso.com` + - **Include Groups**: Enter a **Name** of _contoso.com_ and a **Percentage** of _50_ + - **Exclude Groups**: `contoso-xyz.com` + - **Include Users**: `test@contoso.com` + - **Exclude Users**: `testuser@contoso.com` The feature filter screen will look like this: The entire *ConfigureServices* method will look like this: These settings result in the following behavior: - - The feature flag is always enabled for user `test@contoso.com`, because `test@contoso.com` is listed in the _Users_ section. - - The feature flag is enabled for 50% of other users in the _contoso.com_ group, because _contoso.com_ is listed in the _Groups_ section with a _Percentage_ of _50_. + - The feature flag is always disabled for user `testuser@contoso.com`, because `testuser@contoso.com` is listed in the _Exclude Users_ section. + - The feature flag is always disabled for users in the `contoso-xyz.com` group, because `contoso-xyz.com` is listed in the _Exclude Groups_ section. + - The feature flag is always enabled for user `test@contoso.com`, because `test@contoso.com` is listed in the _Include Users_ section. 
+ - The feature flag is enabled for 50% of users in the _contoso.com_ group, because _contoso.com_ is listed in the _Include Groups_ section with a _Percentage_ of _50_. - The feature is always disabled for all other users, because the _Default percentage_ is set to _0_. +1. Select **Add** to save the targeting filter. + 1. Select **Apply** to save these settings and return to the **Feature manager** screen. 1. The **Feature filter** for the feature flag now appears as *Targeting*. This state indicates that the feature flag will be enabled or disabled on a per-request basis, based on the criteria enforced by the *Targeting* feature filter. To see the effects of this feature flag, build and run the application. Initiall Now sign in as `test@contoso.com`, using the password you set when registering. The *Beta* item now appears on the toolbar, because `test@contoso.com` is specified as a targeted user. +Now sign in as `testuser@contoso.com`, using the password you set when registering. The *Beta* item doesn't appear on the toolbar, because `testuser@contoso.com` is specified as an excluded user. + The following video shows this behavior in action. > [!div class="mx-imgBorder"] > ![TargetingFilter in action](./media/feature-flags-targetingfilter.gif) -You can create additional users with `@contoso.com` email addresses to see the behavior of the group settings. 50% of these users will see the *Beta* item. The other 50% won't see the *Beta* item. +You can create additional users with `@contoso.com` and `@contoso-xyz.com` email addresses to see the behavior of the group settings. ++Users with `@contoso-xyz.com` email addresses will not see the *Beta* item. While 50% of users with `@contoso.com` email addresses will see the *Beta* item, the other 50% won't see the *Beta* item. ## Next steps |
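The behaviors listed in that row follow a fixed evaluation order: exclusions win, then included users, then included groups. The sketch below illustrates that order only; in the real `TargetingFilter`, the 50% group rollout is decided by a deterministic hash of the user ID, which is simplified here to a label:

```shell
# Illustration of the targeting evaluation order implied by the settings
# above (exclusions first, then included users, then included groups).
# The percentage rollout is NOT implemented; it is reduced to a label.
evaluate_flag() {
  user=$1; group=$2
  case "$user"  in testuser@contoso.com) echo disabled; return ;; esac
  case "$group" in contoso-xyz.com)      echo disabled; return ;; esac
  case "$user"  in test@contoso.com)     echo enabled;  return ;; esac
  case "$group" in contoso.com)          echo "enabled for ~50%"; return ;; esac
  echo disabled   # default percentage is 0
}
```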
azure-functions | Functions Create First Quarkus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md | Title: Deploy serverless Java apps with Quarkus on Azure Functions description: Learn how to develop, build, and deploy a serverless Java app by using Quarkus on Azure Functions.+ Use the following command to clone the sample Java project for this article. The ```bash git clone https://github.com/Azure-Samples/quarkus-azure+cd quarkus-azure +git checkout 2023-01-10 +cd functions-quarkus ``` +If you see a message about being in **detached HEAD** state, this message is safe to ignore. Because this article does not require any commits, detached HEAD state is appropriate. + Explore the sample function. Open the *functions-quarkus/src/main/java/io/quarkus/GreetingFunction.java* file. Run the following command. The `@Funq` annotation makes your method (in this case, `funqyHello`) a serverless function. |
azure-monitor | Availability Test Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md | + + Title: Migrate from Azure Monitor Application Insights classic URL ping tests to standard tests +description: How to migrate from Azure Monitor Application Insights classic availability URL ping tests to standard tests. + Last updated : 07/19/2023++++# Migrate availability tests ++In this article, we guide you through the process of migrating from [classic URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to the modern and efficient [standard tests](availability-standard-tests.md). ++We simplify this process by providing clear step-by-step instructions to ensure a seamless transition and equip your applications with the most up-to-date monitoring capabilities. ++## Migrate classic URL ping tests to standard tests ++The following steps walk you through the process of creating [standard tests](availability-standard-tests.md) that replicate the functionality of your [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). It allows you to more easily start using the advanced features of [standard tests](availability-standard-tests.md) using your previously created [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). ++> [!NOTE] +> A cost is associated with running [standard tests](availability-standard-tests.md). Once you create a [standard test](availability-standard-tests.md), you will be charged for test executions. +> Refer to [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing) before starting this process. ++### Prerequisites ++- Any [URL ping test](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) within Application Insights +- [Azure PowerShell](/powershell/azure/get-started-azureps) access ++### Steps ++1. 
Connect to your subscription with Azure PowerShell (Connect-AzAccount + Set-AzContext). +2. List all URL ping tests in a resource group: ++```azurepowershell +$resourceGroup = "myResourceGroup"; +Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup | ` + Where-Object { $_.WebTestKind -eq "ping" }; +``` +3. Find the URL ping test you want to migrate and record its name. +4. The following commands create a standard test with the same logic as the URL ping test: ++```azurepowershell +$appInsightsComponent = "componentName"; +$pingTestName = "pingTestName"; +$newStandardTestName = "newStandardTestName"; ++$componentId = (Get-AzApplicationInsights -ResourceGroupName $resourceGroup -Name $appInsightsComponent).Id; +$pingTest = Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName; +$pingTestRequest = ([xml]$pingTest.ConfigurationWebTest).WebTest.Items.Request; +$pingTestValidationRule = ([xml]$pingTest.ConfigurationWebTest).WebTest.ValidationRules.ValidationRule; ++$dynamicParameters = @{}; ++if ($pingTestRequest.IgnoreHttpStatusCode -eq [bool]::FalseString) { ++$dynamicParameters["RuleExpectedHttpStatusCode"] = [convert]::ToInt32($pingTestRequest.ExpectedHttpStatusCode, 10); ++} ++if ($pingTestValidationRule -and $pingTestValidationRule.DisplayName -eq "Find Text" ` ++-and $pingTestValidationRule.RuleParameters.RuleParameter[0].Name -eq "FindText" ` +-and $pingTestValidationRule.RuleParameters.RuleParameter[0].Value) { +$dynamicParameters["ContentMatch"] = $pingTestValidationRule.RuleParameters.RuleParameter[0].Value; +$dynamicParameters["ContentPassIfTextFound"] = $true; +} ++New-AzApplicationInsightsWebTest @dynamicParameters -ResourceGroupName $resourceGroup -Name $newStandardTestName ` +-Location $pingTest.Location -Kind 'standard' -Tag @{ "hidden-link:$componentId" = "Resource" } -TestName $newStandardTestName ` +-RequestUrl $pingTestRequest.Url -RequestHttpVerb "GET" -GeoLocation $pingTest.PropertiesLocations 
-Frequency $pingTest.Frequency ` +-Timeout $pingTest.Timeout -RetryEnabled:$pingTest.RetryEnabled -Enabled:$pingTest.Enabled ` +-RequestParseDependent:($pingTestRequest.ParseDependentRequests -eq [bool]::TrueString); ++``` ++5. The new standard test doesn't have alert rules by default, so it doesn't create noisy alerts. No changes are made to your URL ping test, so you can continue to rely on it for alerts. +6. Once you have validated the functionality of the new standard test, [update your alert rules](/azure/azure-monitor/alerts/alerts-manage-alert-rules) that reference the URL ping test to reference the standard test instead. Then you can disable or delete the URL ping test. +7. To delete a URL ping test with Azure PowerShell, you can use this command: ++```azurepowershell +Remove-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName; +``` ++## FAQ ++#### When should I use these commands? ++We recommend using these commands to migrate a URL ping test to a standard test and take advantage of the available capabilities. Remember, this migration is optional. +++#### Do these steps work for both HTTP and HTTPS endpoints? ++Yes, these commands work for both HTTP and HTTPS endpoints, which are used in your URL ping tests. ++## More Information ++* [Standard tests](availability-standard-tests.md) +* [Availability alerts](availability-alerts.md) +* [Troubleshooting](troubleshoot-availability.md) +* [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json) +* [Web test REST API](/rest/api/application-insights/web-tests) |
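The PowerShell above pulls fields out of the ping test's stored `ConfigurationWebTest` XML. The snippet below shows a hypothetical, trimmed-down payload and extracts the one attribute the script converts into `RuleExpectedHttpStatusCode`; the real XML carries more elements (such as `ValidationRules`), so treat this shape as illustrative only:

```shell
# Hypothetical, trimmed-down WebTest XML; attribute names match what the
# migration script reads via $pingTestRequest, values are made up.
cat > pingtest-config.xml <<'EOF'
<WebTest>
  <Items>
    <Request Url="https://contoso.example/health"
             ExpectedHttpStatusCode="200"
             IgnoreHttpStatusCode="False"
             ParseDependentRequests="False" />
  </Items>
</WebTest>
EOF

# Extract the expected status code, i.e. the value the script feeds into
# RuleExpectedHttpStatusCode when IgnoreHttpStatusCode is False.
sed -n 's/.*ExpectedHttpStatusCode="\([0-9]*\)".*/\1/p' pingtest-config.xml
```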
azure-monitor | Container Insights Cost Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost-config.md | Create a file and provide values for _interval_, _namespaceFilteringMode_, and _ ## Onboarding to a new AKS cluster +> [!NOTE] +> Azure CLI version 2.49.0 or higher is required. + Use the following command to enable monitoring of your AKS cluster: ```azcli-az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --data-collection-settings dataCollectionSettings.json --generate-ssh-keys +az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --data-collection-settings dataCollectionSettings.json --generate-ssh-keys ``` ## Onboarding to an existing AKS Cluster ## [Azure CLI](#tab/create-CLI) +> [!NOTE] +> Azure CLI version 2.49.0 or higher is required. + ### Onboard to a cluster without the monitoring addon ```azcli-az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <clusterResourceGroup> -n <clusterName> --data-collection-settings dataCollectionSettings.json +az aks enable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName> --data-collection-settings dataCollectionSettings.json ``` ### Onboard to a cluster with an existing monitoring addon az aks show -g <clusterResourceGroup> -n <clusterName> | grep -i "logAnalyticsWo az aks disable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName> # enable monitoring with data collection settings-az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <clusterResourceGroup> -n <clusterName> --workspace-resource-id <logAnalyticsWorkspaceResourceId> --data-collection-settings dataCollectionSettings.json +az aks enable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName> --workspace-resource-id <logAnalyticsWorkspaceResourceId> 
--data-collection-settings dataCollectionSettings.json ``` ## [Azure portal](#tab/create-portal) |
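The row above references a `dataCollectionSettings.json` file built from the three settings the article names (`interval`, `namespaceFilteringMode`, `namespaces`). A sample file follows; the values are illustrative, so check them against the schema in the article:

```shell
# Illustrative dataCollectionSettings.json for --data-collection-settings;
# the values (1m, Include, the namespace list) are example choices.
cat > dataCollectionSettings.json <<'EOF'
{
  "interval": "1m",
  "namespaceFilteringMode": "Include",
  "namespaces": ["kube-system", "default"]
}
EOF

# Sanity-check that the file is valid JSON before passing it to az:
python3 -c 'import json; json.load(open("dataCollectionSettings.json"))' && echo "valid JSON"
```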
azure-monitor | Container Insights Enable Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md | Use any of the following methods to enable monitoring for an existing AKS cluste ## [CLI](#tab/azure-cli) > [!NOTE]-> Azure CLI version 2.39.0 or higher is required for managed identity authentication. +> Managed identity authentication is the default in CLI version 2.49.0 or higher. If you need to use legacy/non-managed identity authentication, use CLI version < 2.49.0. ### Use a default Log Analytics workspace This section explains two methods for migrating to managed identity authenticati AKS clusters with a service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration. +> [!NOTE] +> Azure CLI version 2.49.0 or higher is required. + 1. Get the configured Log Analytics workspace resource ID: ```cli AKS clusters with a service principal must first disable monitoring and then upg 1. Enable the monitoring add-on with the managed identity authentication option by using the Log Analytics workspace resource ID obtained in step 1: ```cli- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id> + az aks enable-addons -a monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id> ``` ### Existing clusters with system or user-assigned identity AKS clusters with system-assigned identity must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with system identity. For clusters with user-assigned identity, only Azure public cloud is supported. +> [!NOTE] +> Azure CLI version 2.49.0 or higher is required. + 1. 
Get the configured Log Analytics workspace resource ID: ```cli AKS clusters with system-assigned identity must first disable monitoring and the 1. Enable the monitoring add-on with the managed identity authentication option by using the Log Analytics workspace resource ID obtained in step 1: ```cli- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id> + az aks enable-addons -a monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id> ``` ## Private link |
azure-monitor | Prometheus Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-workbooks.md | Title: Query Prometheus metrics using Azure workbooks description: Query Prometheus metrics in the portal using Azure Workbooks. Previously updated : 01/18/2023 Last updated : 07/17/2023 This article introduces workbooks for Azure Monitor workspaces and shows you how To query Prometheus metrics from an Azure Monitor workspace you need the following: - An Azure Monitor workspace. To create an Azure Monitor workspace see [Create an Azure Monitor Workspace](./azure-monitor-workspace-overview.md?tabs=azure-portal.md). - Your Azure Monitor workspace must be [collecting Prometheus metrics](./prometheus-metrics-enable.md) from an AKS cluster.-- The user must be assigned the **Monitoring Data Reader** role for the Azure Monitor workspace.--> [!NOTE] -> Querying data from an Azure Monitor workspace is a data plane operation. Even if you are an owner or have elevated control plane access, you still need to assign the **Monitoring Data Reader** role. For more information, see [Azure control and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). +- The user must be assigned a role that can perform the **microsoft.monitor/accounts/read** operation on the Azure Monitor workspace. ## Prometheus Explorer workbook Azure Monitor workspaces include an exploration workbook to query your Prometheus metrics. -1. From the Azure Monitor workspace overview page, select **Workbooks** -1. In the Azure Monitor workspace gallery, select the **Prometheus Explorer** workbook tile. +1. From the Azure Monitor workspace overview page, select **Prometheus explorer** +![Screenshot that shows Azure Monitor workspace menu selection.](./media/prometheus-workbooks/prometheus-explorer-menu.png) + +2.
Or the **Workbooks** menu item, and in the Azure Monitor workspace gallery, select the **Prometheus Explorer** workbook tile. ![Screenshot that shows Azure Monitor workspace gallery.](./media/prometheus-workbooks/prometheus-gallery.png) Workbooks supports many visualizations and Azure integrations. For more informat If your workbook query does not return data: -- Check that you have **Monitoring Data Reader** role permissions assigned through Access Control (IAM) in your Azure Monitor workspace+- Check that you have sufficient permissions, assigned through Access Control (IAM) in your Azure Monitor workspace, to perform the **microsoft.monitor/accounts/read** operation - Verify that you have turned on metrics collection in the Monitored clusters blade of your Azure Monitor workspace. |
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | -Azure Monitor collects and aggregates the data from every layer and component of your system across multiple Azure and non-Azure subscrtiptions and tenants. It stores it in a common data platform for consumption by a common set of tools which can correlate, analyze, visualize, and/or respond to the data. You can also integrate additional Microsoft and non-Microsoft tools. +Azure Monitor collects and aggregates the data from every layer and component of your system across multiple Azure and non-Azure subscriptions and tenants. It stores it in a common data platform for consumption by a common set of tools which can correlate, analyze, visualize, and/or respond to the data. You can also integrate additional Microsoft and non-Microsoft tools. :::image type="content" source="media/overview/azure-monitor-high-level-abstraction-opt.svg" alt-text="Diagram that shows an abstracted view of what Azure monitor does as described in the previous paragraph." border="false" lightbox="media/overview/azure-monitor-high-level-abstraction-opt.svg"::: The diagram depicts the Azure Monitor system components: - The data is **collected and routed** to the data platform. Clicking on the diagram shows these options, which are also called out in detail later in this article. - The **[data platform](data-platform.md)** stores the collected monitoring data. Azure Monitor's core data platform has stores for metrics, logs, traces, and changes. SCOM MI uses its own database hosted in SQL Server Managed Instance. - The **consumption** section shows the components that use data from the data platform. - - Azure Monitor's core consumption methods include tools to provide **insights**, **visualize**, and **analyize** data. The visualization tools build on the analysis tools and the insights build on top of both the visualization and analysis tools.
+ - Azure Monitor's core consumption methods include tools to provide **insights**, **visualize**, and **analyze** data. The visualization tools build on the analysis tools and the insights build on top of both the visualization and analysis tools. - There are additional mechanisms to help you **respond** to incoming monitoring data. - The **SCOM MI** path uses the traditional Operations Manager console that SCOM customers are already familiar with. You may need to integrate Azure Monitor with other systems or to build custom so |[Azure Storage](../storage/common/storage-introduction.md)| Export data to Azure storage for less expensive, long-term archival of monitoring data for auditing or compliance purposes. |Hosted and Managed Partners | Many [external partners](partners.md) integrate with Azure Monitor. Azure Monitor has partnered with other monitoring providers to provide an [Azure-hosted version of their products](/azure/partner-solutions/) to make interoperability easier. Examples include Elastic, Datadog, Logz.io, and Dynatrace. |[API](/rest/api/monitor/)|Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor.|-|[Azure Logic Apps](../logic-apps/logic-apps-overview.md)|Azure Logic Apps is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services with little or no code. Activities are available that read and write metrics and logs in Azure Monitor. You can use Logic Apps to [customize responses and perform other actions in response to to Azure Monitor alerts](alerts/alerts-logic-apps.md). 
You can also perform other [more complex actions](logs/logicapp-flow-connector.md) when the Azure Monitor infrastructure doesn't already supply a built-it method.| +|[Azure Logic Apps](../logic-apps/logic-apps-overview.md)|Azure Logic Apps is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services with little or no code. Activities are available that read and write metrics and logs in Azure Monitor. You can use Logic Apps to [customize responses and perform other actions in response to Azure Monitor alerts](alerts/alerts-logic-apps.md). You can also perform other [more complex actions](logs/logicapp-flow-connector.md) when the Azure Monitor infrastructure doesn't already supply a built-in method.| |[Azure Functions](../azure-functions/functions-overview.md)| Similar to Azure Logic Apps, Azure Functions gives you the ability to preprocess and postprocess monitoring data as well as perform complex actions beyond the scope of typical Azure Monitor alerts. Azure Functions uses code, however, providing additional flexibility over Logic Apps. |Azure DevOps and GitHub | Azure Monitor Application Insights gives you the ability to create [Work Item Integration](app/work-item-integration.md) with monitoring data embedded in it. Additional options include [release annotations](app/annotations.md) and [continuous monitoring](app/continuous-monitoring.md). | |
azure-netapp-files | Azacsnap Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md | This is a list of technical articles where AzAcSnap has been used as part of a d * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161) * [Manual Recovery Guide for SAP HANA on Azure Large Instance from storage snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-large-instance-from/ba-p/3242347) * [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)+* [Manual Recovery Guide for SAP Db2 on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-db2-on-azure-vms-from-azure-netapp/ba-p/3865379) * [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172) * [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) * [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html) The command options are as follows with the commands as the main bullets and the ## Next steps - [Get started with Azure Application 
Consistent Snapshot tool](azacsnap-get-started.md)+ |
azure-resource-manager | Preview Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/preview-features.md | The portal only shows a preview feature when the service that owns the feature h 1. Sign in to the [Azure portal](https://portal.azure.com/). 1. In the search box, enter _subscriptions_ and select **Subscriptions**. - :::image type="content" source="./media/preview-features/search.png" alt-text="Azure portal search."::: + :::image type="content" source="./media/preview-features/search.png" alt-text="Screenshot of Azure portal search box with 'subscriptions' entered."::: 1. Select the link for your subscription's name. - :::image type="content" source="./media/preview-features/subscriptions.png" alt-text="Select Azure subscription."::: + :::image type="content" source="./media/preview-features/subscriptions.png" alt-text="Screenshot of Azure portal with subscription selection highlighted."::: 1. From the left menu, under **Settings** select **Preview features**. - :::image type="content" source="./media/preview-features/preview-features-menu.png" alt-text="Azure preview features menu."::: + :::image type="content" source="./media/preview-features/preview-features-menu.png" alt-text="Screenshot of Azure portal with Preview features menu option highlighted."::: 1. You see a list of available preview features and your current registration status. - :::image type="content" source="./media/preview-features/preview-features-list.png" alt-text="Azure portal list of preview features."::: + :::image type="content" source="./media/preview-features/preview-features-list.png" alt-text="Screenshot of Azure portal displaying a list of preview features."::: 1. From **Preview features** you can filter the list by **name**, **State**, or **Type**: The portal only shows a preview feature when the service that owns the feature h - **State**: Select the drop-down menu and choose a state. 
The portal doesn't filter by **Unregistered**. - **Type**: Select the drop-down menu and choose a type. - :::image type="content" source="./media/preview-features/filter.png" alt-text="Azure portal filter preview features."::: + :::image type="content" source="./media/preview-features/filter.png" alt-text="Screenshot of Azure portal with filter options for preview features."::: # [Azure CLI](#tab/azure-cli) Some services require other methods, such as email, to get approval for pending 1. Select the link for the preview feature you want to register. 1. Select **Register**. - :::image type="content" source="./media/preview-features/register.png" alt-text="Azure portal register preview feature."::: + :::image type="content" source="./media/preview-features/register.png" alt-text="Screenshot of Azure portal with Register button for a preview feature."::: 1. Select **OK**. You can unregister preview features from **Preview features**. The **State** cha 1. Select the link for the preview feature you want to unregister. 1. Select **Unregister**. - :::image type="content" source="./media/preview-features/unregister.png" alt-text="Azure portal unregister preview feature."::: + :::image type="content" source="./media/preview-features/unregister.png" alt-text="Screenshot of Azure portal with Unregister button for a preview feature."::: 1. Select **OK**. |
azure-resource-manager | Request Limits And Throttling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/request-limits-and-throttling.md | Throttling happens at two levels. Azure Resource Manager throttles requests for The following image shows how throttling is applied as a request goes from the user to Azure Resource Manager and the resource provider. The image shows that requests are initially throttled per principal ID and per Azure Resource Manager instance in the region of the user sending the request. The requests are throttled per hour. When the request is forwarded to the resource provider, requests are throttled per region of the resource rather than per Azure Resource Manager instance in the region of the user. The resource provider requests are also throttled per principal user ID and per hour. -![Request throttling](./media/request-limits-and-throttling/request-throttling.svg) ## Subscription and tenant limits |
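The per-principal, per-region, per-hour limits described above can be pictured with a short sketch: a counter keyed by principal ID, region, and clock hour. This is an illustrative model only, not Azure Resource Manager's actual implementation; the class name and the limit value are invented for the example.

```python
import time
from collections import defaultdict

class HourlyThrottle:
    """Illustrative per-(principal, region) hourly request counter.
    The class name and default limit are invented for this sketch."""

    def __init__(self, limit_per_hour=1200):
        self.limit = limit_per_hour
        self.counts = defaultdict(int)  # (principal_id, region, hour) -> count

    def allow(self, principal_id, region, now=None):
        # Bucket requests by wall-clock hour, since the limits apply per hour.
        hour = int((time.time() if now is None else now) // 3600)
        key = (principal_id, region, hour)
        if self.counts[key] >= self.limit:
            return False  # the real service responds with HTTP 429 instead
        self.counts[key] += 1
        return True

throttle = HourlyThrottle(limit_per_hour=2)
print(throttle.allow("alice", "westus", now=0))     # True
print(throttle.allow("alice", "westus", now=0))     # True
print(throttle.allow("alice", "westus", now=0))     # False: hourly limit hit
print(throttle.allow("alice", "westus", now=3600))  # True: a new hour begins
```

A different principal or a new hour gets its own counter, mirroring how the documented limits apply per principal ID and reset each hour.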
azure-resource-manager | Resource Providers And Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-providers-and-types.md | To see information for a particular resource provider: 2. On the Azure portal menu, select **All services**. 3. In the **All services** box, enter **resource explorer**, and then select **Resource Explorer**. - ![select All services](./media/resource-providers-and-types/select-resource-explorer.png) + :::image type="content" source="./media/resource-providers-and-types/select-resource-explorer.png" alt-text="Screenshot of selecting All services in the Azure portal."::: 4. Expand **Providers** by selecting the right arrow. - ![Select providers](./media/resource-providers-and-types/select-providers.png) + :::image type="content" source="./media/resource-providers-and-types/select-providers.png" alt-text="Screenshot of selecting providers in the Azure Resource Explorer."::: 5. Expand a resource provider and resource type that you want to view. - ![Select resource type](./media/resource-providers-and-types/select-resource-type.png) + :::image type="content" source="./media/resource-providers-and-types/select-resource-type.png" alt-text="Screenshot of selecting a resource type in the Azure Resource Explorer."::: 6. Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. Also, there may be limitations on your subscription that prevent you from using some regions that support the resource. The resource explorer displays valid locations for the resource type. - ![Show locations](./media/resource-providers-and-types/show-locations.png) + :::image type="content" source="./media/resource-providers-and-types/show-locations.png" alt-text="Screenshot of showing locations for a resource type in the Azure Resource Explorer."::: 7. The API version corresponds to a version of REST API operations that are released by the resource provider. 
As a resource provider enables new features, it releases a new version of the REST API. The resource explorer displays valid API versions for the resource type. - ![Show API versions](./media/resource-providers-and-types/show-api-versions.png) + :::image type="content" source="./media/resource-providers-and-types/show-api-versions.png" alt-text="Screenshot of showing API versions for a resource type in the Azure Resource Explorer."::: ## Azure PowerShell |
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | Guidelines to customers: Make the necessary adjustments to your video encoding a ### New ARM experience without AMS -The deprecation of the AMS dependency has led to a breaking change in the account creation form and the create API for new ARM-based accounts, starting December 2023. As part of the updated workflow, the option to associate an AMS account during account creation will be removed, and will be replaced by adding storage entity. +The deprecation of the AMS dependency requires a change in the account creation form and the create API for new ARM-based accounts, starting December 2023. As part of the updated workflow, the option to associate an AMS account during account creation will be removed, and will be replaced by adding a storage entity. Guidelines to customers: We're working on a new implementation without AMS and will provide more details in our documentation. Once available, review the updated documentation and modify your account creation process accordingly to avoid any disruptions. ## May 2023 -### API breaking change +### API updates -We're introducing a change in behavior that may break your existing query logic. The change is in the **List** and **Search** APIs, find a detailed change between the current and the new behavior in a table that follows. You may need to update your code to utilize the [new APIs](https://api-portal.videoindexer.ai/). +We're introducing a change in behavior that may require a change to your existing query logic. The change is in the **List** and **Search** APIs; you can find a detailed comparison of the current and new behavior in the table that follows. You may need to update your code to utilize the [new APIs](https://api-portal.videoindexer.ai/).
-|API |Current|New|The breaking change| +|API |Current|New|The update| ||||| |List Videos|• List all videos/projects according to 'IsBase' boolean parameter. If 'IsBase' isn't defined, list both.<br/>• Returns videos in all states (In progress/Processed/Failed). |• List Videos API will return only videos (with paging) in all states.<br/>• List Projects API returns only projects (with paging).|• List Videos API was divided into two new APIs, **List Videos** and **List Projects**<br/>• The 'IsBase' parameter no longer has a meaning. | |Search Videos|• Search all videos/projects according to 'IsBase' boolean parameter. If 'IsBase' isn't defined, search both. <br/>• Search videos in all states (In progress/Processed/Failed). |Search only processed videos.|• Search Videos API will only search videos and not projects.<br/>• The 'IsBase' parameter no longer has a meaning.<br/>• Search Videos API will only search Processed videos (and not Failed/InProgress ones).| |
cognitive-services | Responsible Use Of Ai Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/responsible-use-of-ai-overview.md | Azure Cognitive Services provides information and guidelines on how to responsib * [Integration and responsible use](/legal/cognitive-services/luis/guidance-integration-responsible-use?context=/azure/cognitive-services/LUIS/context/context) * [Data, privacy, and security](/legal/cognitive-services/luis/data-privacy-security?context=/azure/cognitive-services/LUIS/context/context) -## OpenAI +## Azure OpenAI Service * [Transparency note](/legal/cognitive-services/openai/transparency-note?context=/azure/cognitive-services/openai/context/context) * [Limited access](/legal/cognitive-services/openai/limited-access?context=/azure/cognitive-services/openai/context/context) Azure Cognitive Services provides information and guidelines on how to responsib * [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context) * [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context) * [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context) |
container-apps | Workload Profiles Manage Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-portal.md | -Learn to manage Container Apps environments with workload profile support. --## Supported regions --The following regions support workload profiles during preview: --- North Central US-- North Europe-- West Europe-- East US--<a id="create"></a> +Learn to manage Container Apps environments with [workload profile](./workload-profiles-overview.md) support. ## Create a container app in a workload profile |
container-registry | Container Registry Api Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-api-deprecation.md | + + Title: Removed and deprecated features for Azure Container Registry +description: This article lists the features that are deprecated or removed from support for Azure Container Registry. + Last updated : 06/27/2022++++# API Deprecations in Azure Container Registry ++This article describes how to use the information about APIs that are removed from support for Azure Container Registry (ACR). This article provides early notice about future changes that might affect the APIs of ACR available to customers in preview or GA states. ++This information helps you identify the deprecated API versions. The information is subject to change with future releases, and might not include each deprecated feature or product. ++## How to use this information ++When an API is first listed as deprecated, support for using it with ACR is scheduled to be removed in a future update. This information is provided to help you plan for alternatives and a replacement version for using that API. When an API version is removed, this article is updated to indicate that specific version. ++Unless noted otherwise, a feature, product, SKU, SDK, utility, or tool that supports the deprecated API typically continues to be fully supported, available, and usable. ++When support is removed for an API version, you can move to a newer API version, as long as that version remains in support. ++To avoid errors due to using a deprecated API, we recommend moving to a newer version of the ACR API. You can find a list of [supported versions here.](/azure/templates/microsoft.containerregistry/allversions) ++You may be consuming this API via one or more SDKs. Use a newer API version by updating to a newer version of the SDK.
You can find a [list of SDKs and their latest versions here.](https://azure.github.io/azure-sdk/releases/latest/index.html?search=containerregistry) ++## Removed and Deprecated APIs ++The following API versions are ready for retirement/deprecation. In some cases, they're no longer in the product. ++| API version | Deprecation first announcement | Plan to end support | +| | | - | +| 2016-06-27-preview | July 17, 2023 | October 16, 2023 | +| 2017-06-01-preview | July 17, 2023 | October 16, 2023 | +| 2018-02-01-preview | July 17, 2023 | October 16, 2023 | +| 2017-03-01-GA | September 2023 | September 2026 | ++## See also ++For more information, see the following articles: ++>* [Supported API versions](/azure/templates/microsoft.containerregistry/allversions) +>* [SDKs and their latest versions](https://azure.github.io/azure-sdk/releases/latest/index.html?search=containerregistry) |
cosmos-db | High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md | When you configure an Azure Cosmos DB account for multiple write regions, strong #### Conflict-resolution region -When an Azure Cosmos DB account is configured with multiple-region writes, one of the regions will act as an arbiter in write conflicts. When such conflicts happen, they're routed to this region for consistent resolution. +When an Azure Cosmos DB account is configured with multiple-region writes, one of the regions will act as an arbiter in write conflicts. #### Best practices for multi-region writes Next, you can read the following articles: * [Diagnose and troubleshoot the availability of Azure Cosmos DB SDKs in multiregional environments](troubleshoot-sdk-availability.md) + |
cosmos-db | Connect Using Robomongo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-robomongo.md | To connect to Azure Cosmos DB account using Robo 3T, you must: * Download and install [Robo 3T](https://robomongo.org/) * Have your Azure Cosmos DB [connection string](connect-account.md) information -> [!NOTE] -> Currently, Robo 3T v1.2 and lower versions are supported with Azure Cosmos DB's API for MongoDB. - ## Connect using Robo 3T To add your Azure Cosmos DB account to the Robo 3T connection manager, perform the following steps: |
cosmos-db | Connect Using Robomongo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/connect-using-robomongo.md | + + Title: Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore +description: Learn how to connect to Azure Cosmos DB for MongoDB vCore using Studio 3T +++ Last updated : 07/07/2023++++# Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore ++Studio 3T (also known as Robomongo or Robo 3T) is a professional GUI that offers IDE & client tools for MongoDB. It's a great tool to speed up MongoDB development with a friendly user interface. In order to connect to your Azure Cosmos DB for MongoDB vCore cluster using Studio 3T, you must: ++* Download and install [Studio 3T](https://robomongo.org/) +* Have your Azure Cosmos DB for MongoDB vCore [connection string](quickstart-portal.md#get-cluster-credentials) information ++## Connect using Studio 3T ++To add your Azure Cosmos DB cluster to the Studio 3T connection manager, perform the following steps: ++1. Retrieve the connection information for your Azure Cosmos DB for MongoDB vCore using the instructions [here](quickstart-portal.md#get-cluster-credentials). ++ :::image type="content" source="./media/connect-using-robomongo/connection-string.png" alt-text="Screenshot of the connection string page."::: +2. Run the **Studio 3T** application. ++3. Click the connection button under **File** to manage your connections. Then, click **New Connection** in the **Connection Manager** window, which will open up another window where you can paste the connection credentials. ++4. In the connection credentials window, choose the first option and paste your connection string. Click **Next** to move forward. ++ :::image type="content" source="./media/connect-using-robomongo/new-connection.png" alt-text="Screenshot of the Studio 3T connection credentials window."::: +5. Choose a **Connection name** and double check your connection credentials. 
++ :::image type="content" source="./media/connect-using-robomongo/connection-configuration.png" alt-text="Screenshot of the Studio 3T connection details window."::: +6. On the **SSL** tab, check **Use SSL protocol to connect**. ++ :::image type="content" source="./media/connect-using-robomongo/connection-ssl.png" alt-text="Screenshot of the Studio 3T new connection SSL Tab."::: +7. Finally, click **Test Connection** in the bottom left to verify that you are able to connect, then click **Save**. ++## Next steps ++- Learn [how to use Bicep templates](quickstart-bicep.md) to deploy your Azure Cosmos DB for MongoDB vCore cluster. +- Learn [how to connect your Nodejs web application](tutorial-nodejs-web-app.md) to a MongoDB vCore cluster. +- Check the [migration options](migration-options.md) to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore. |
cosmos-db | Abs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/abs.md | Returns a numeric expression. ## Examples The following example shows the results of using this function on three different numbers. - -```sql -SELECT VALUE { - absoluteNegativeOne: ABS(-1), - absoluteZero: ABS(0), - absoluteOne: ABS(1) -} -``` - -```json -[ - { - "absoluteNegativeOne": 1, - "absoluteZero": 0, - "absoluteOne": 1 - } -] -``` ++ ## Remarks |
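The removed NoSQL snippet recorded above can be mirrored in plain Python to show the same results; the field names are carried over from that example, and Python's built-in `abs` stands in for the ABS system function.

```python
# Mirrors the removed ABS example: absolute values of -1, 0, and 1.
result = {
    "absoluteNegativeOne": abs(-1),
    "absoluteZero": abs(0),
    "absoluteOne": abs(1),
}
print(result)  # {'absoluteNegativeOne': 1, 'absoluteZero': 0, 'absoluteOne': 1}
```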
cosmos-db | Array Concat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-concat.md | ARRAY_CONCAT (<arr_expr1>, <arr_expr2> [, <arr_exprN>]) ## Examples The following example shows how to concatenate two arrays. - -```sql -SELECT ARRAY_CONCAT(["apples", "strawberries"], ["bananas"]) AS arrayConcat -``` - - Here is the result set. - -```json -[{"arrayConcat": ["apples", "strawberries", "bananas"]}] -``` ++ ## Remarks |
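As a rough analogy, the concatenation recorded in the removed example can be sketched in Python; `array_concat` here is a stand-in written for this illustration, not part of any SDK, and the list contents come from that example.

```python
def array_concat(*arrays):
    """Concatenate any number of arrays in order, analogous to ARRAY_CONCAT."""
    result = []
    for arr in arrays:
        result.extend(arr)
    return result

print(array_concat(["apples", "strawberries"], ["bananas"]))
# ['apples', 'strawberries', 'bananas']
```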
cosmos-db | Objecttoarray | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/objecttoarray.md | SELECT VALUE This final example uses an item within an existing container that stores data using fields within a JSON object. -```json -[ - { - "name": "Witalica helmet", - "quantities": { - "small": 15, - "medium": 24, - "large": 2, - "xlarge": 0 - } - } -] -``` In this example, the function is used to break up the object into an array item for each field/value pair. -```sql -SELECT - p.name, - ObjectToArray(p.quantities, "size", "quantity") AS quantitiesBySize -FROM - products p -``` -```json -[ - { - "name": "Witalica helmet", - "quantitiesBySize": [ - { - "size": "small", - "quantity": 15 - }, - { - "size": "medium", - "quantity": 24 - }, - { - "size": "large", - "quantity": 2 - }, - { - "size": "xlarge", - "quantity": 0 - } - ] - } -] -``` ## Remarks |
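The field/value expansion recorded in the diff above can be sketched in Python; `object_to_array` is a stand-in for the ObjectToArray system function, and the item mirrors the "Witalica helmet" example.

```python
def object_to_array(obj, key_name, value_name):
    """Break an object's field/value pairs into an array of small objects,
    analogous to the ObjectToArray system function."""
    return [{key_name: key, value_name: value} for key, value in obj.items()]

item = {
    "name": "Witalica helmet",
    "quantities": {"small": 15, "medium": 24, "large": 2, "xlarge": 0},
}
quantities_by_size = object_to_array(item["quantities"], "size", "quantity")
print(quantities_by_size[0])   # {'size': 'small', 'quantity': 15}
print(len(quantities_by_size)) # 4
```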
cost-management-billing | View Amortized Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-amortized-costs.md | + + Title: View amortized benefit costs ++description: This article helps you understand what amortized reservation and saving plan benefit costs are and how to view them in cost analysis. +++++ Last updated : 06/30/2023++++# View amortized benefit costs ++This article helps you understand what amortized costs are and how to view them in cost analysis. For simplicity, this article refers to a reservation or savings plan as a *benefit*. When you buy a benefit, you're normally committing to a one-year or three-year plan to save money compared to pay-as-you-go costs. You can choose to pay for the benefit up front or with monthly payments. If you pay up front, the one-time payment is charged to your subscription. If your organization needs to charge back or show back partial costs of the benefit to users or departments that use it, then you might need to determine what the monthly or daily cost of the benefit is. _Amortization_ is the process of breaking the one-time cost into periodic costs. ++However, if your organization doesn't charge back or show back benefit use to the users or departments that use them, then you might not need to worry about amortized costs. ++The examples in the article use a reservation. However, the same logic applies to a savings plan. ++## How Azure calculates amortized costs ++To understand how amortized costs are shown in cost analysis for reservations, let's look at some examples. ++First, let's look at a one-year virtual machine reservation that was purchased on January 1. Depending on your view, instead of seeing a $365 purchase on January 1, 2022, you'll see a $1.00 purchase every day from January 1, 2022 to December 31, 2022. In addition to basic amortization, the costs are also reallocated and associated to the specific resources that used the reservation. 
For example, if the $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of _UnusedReservation_. Unused reservation costs can be seen only when viewing amortized cost. ++Now, let's look at a one-year reservation purchased at some other point in a month. For example, if you buy a reservation on May 26, 2022 with a monthly or upfront payment, the amortized cost is divided by 365 (assuming it's not a leap year) and spread from May 26, 2022 through May 25, 2023. In this example, the daily cost would be the same for every day. However, the monthly cost will vary because of the varying number of days in a month. Also, if the reservation period includes a leap year, costs for the leap year are divided evenly by 366. ++Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. Depending on your view in Cost analysis, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases currently. ++Although the preceding example shows how to calculate amortized costs for a reservation, the same logic applies to a savings plan. The only difference is that you use the charge type of _UnusedSavingsPlan_ instead of _UnusedReservation_. ++## Metrics affect how costs are shown ++In Cost analysis, you view costs with a metric. They include Actual cost and Amortized cost. Each metric affects how data is shown for your benefit charges. ++**Actual cost** - Shows the purchase as it appears on your bill. 
For example, if you bought a one-year reservation for $1200 in January 2022, cost analysis shows a $1200 cost in the month of January for the reservation. It doesn't show a reservation cost for other months of the year. If you group your actual costs by VM, then a VM that received the reservation benefit for a given month would have zero cost for the month. ++**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. Using the same example, cost analysis shows a different amount for each month depending on the number of days in the month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. However, _unused reservation_ costs are not attributed to the subscription used to buy the reservation because the unused portion isn't attributable to any specific resource or subscription. Similarly, unused savings plan costs are not attributed to the subscription used to buy the savings plan. ++## View amortized costs ++By default, cost analysis shows charges as they appear on your bill. The charges are shown as actual costs or amortized over the course of your benefit period. ++> [!NOTE] +> You can buy a reservation with a pay-as-you-go (MS-AZR-0003P) subscription. However, Cost Analysis doesn't support viewing amortized reservation costs for a pay-as-you-go subscription. ++Amortized costs are available only for reservations and savings plans. They aren't available for Azure Marketplace purchases. However, virtual machine software usage reservations available in the Azure Marketplace are supported. ++Depending on the view you use in cost analysis, you'll see different benefit costs. For example: ++When you use the **DailyCosts** view with a date filter applied, you'll easily see when a benefit was purchased with an increase in actual daily costs.
If you try to view costs with the **Amortized cost** metric, you'll see the same results as **Actual Cost**. ++Let's look at an example: a one-year benefit purchased for $12,016.00 on October 23, 2019. The term ends on October 23, 2020, and a leap day is included in the term, so the term's duration is 366 days. ++In the Azure portal, navigate to cost analysis for your scope. For example, **Cost Management** > **Cost analysis**. ++1. Select a date range that includes a period of the benefit term. +2. Add a filter for **Pricing Model: Reservation** to see only reservation costs. +3. Then, set **Granularity** to **Daily**. Here's an example showing the purchase with the date range set from October through November 2019. + :::image type="content" source="./media/view-amortized-costs/reservation-purchase.png" alt-text="Screenshot showing a reservation purchase in Cost analysis." lightbox="./media/view-amortized-costs/reservation-purchase.png" ::: +4. Under **Scope** and next to the cost shown, select the down arrow symbol and then select the **Amortized cost** metric. Here's an example showing the daily cost of all reservations for the selected date range. For the highlighted day, the daily cost is about $37.90. Azure accounts for costs to further decimal places, but only shows costs to two decimal places. + :::image type="content" source="./media/view-amortized-costs/daily-cost-all-reservations-amortized.png" alt-text="Screenshot showing daily amortized cost for all reservations in Cost analysis." lightbox="./media/view-amortized-costs/daily-cost-all-reservations-amortized.png" ::: +5. If you have multiple reservations (the example above does), then use the **Group by** list to group the results by **Reservation name**. Here's an example showing the daily amortized cost of the reservation named `VM_WUS_DS3_Upfront` for $32.83. In this example, Azure determined the cost by: $12,016 / 366 = $32.83 per day.
Because the reservation term includes a leap year (2020), 366 is used to divide the total cost, not 365. + :::image type="content" source="./media/view-amortized-costs/daily-cost-amortized-specific-reservation.png" alt-text="Screenshot showing the daily amortized cost for a specific reservation in Cost analysis." lightbox="./media/view-amortized-costs/daily-cost-amortized-specific-reservation.png" ::: +6. Next, change the **Granularity** to **Monthly** and expand the date range. The following example shows varying monthly costs for reservations. The cost varies because the number of days in each month differs. November has 30 days, so the daily cost of $32.83 \* 30 = ~$984.90. + :::image type="content" source="./media/view-amortized-costs/monthly-cost-amortized-reservations.png" alt-text="Screenshot showing the monthly amortized cost for a specific reservation in Cost analysis." lightbox="./media/view-amortized-costs/monthly-cost-amortized-reservations.png" ::: ++## View benefit resource amortized costs ++To charge back or show back costs for a benefit, you need to know which resources used the benefit. Use the following steps to see amortized costs for individual resources. In this example, we'll examine November 2019, which was the first month of full reservation use. ++1. Select a date range in the benefit term where you want to view resources that used the benefit. +2. Add a filter for **Pricing Model: Reservation** to see only reservation costs. +3. Set **Granularity** to **Monthly**. +4. Under **Scope** and next to the cost shown, select the down arrow symbol and then select the **Amortized** cost metric. +5. If you have multiple reservations, use the **Group by** list to group the results by **Reservation name**. +6. In the chart, select a reservation. A filter is added for the reservation name. +7. In the **Group by** list, select **Resource**. The chart shows the resources that used the reservation. 
In the following example image, November 2019 had eight resources that used the reservation. There's one unused item, which is the subscription that was used to buy the reservation. + :::image type="content" source="./media/view-amortized-costs/reservation-cost-resource-chart.png" alt-text="Screenshot showing the amortized cost of all the reservations for a specific month." lightbox="./media/view-amortized-costs/reservation-cost-resource-chart.png" ::: +8. To see the cost more easily for individual resources, select **Table** in the chart list. Expand items as needed. Here's an example for November 2019 showing the amortized reservation costs for the eight resources that used the reservation. The highlighted cost is the unused portion of the reservation. + :::image type="content" source="./media/view-amortized-costs/reservation-cost-resource-table.png" alt-text="Screenshot showing the amortized cost of all resources that used a reservation for a specific month." lightbox="./media/view-amortized-costs/reservation-cost-resource-table.png" ::: ++Another easy way to view reservation amortized cost is to use the **Reservations** preview view. To easily navigate to it, in Cost analysis in the top menu under **Cost by resource**, select the **Reservations (preview)** view. +++## Next steps ++- Read [Charge back Azure Reservation costs](charge-back-usage.md) to learn more about charge back processes for reservations. +- Read [Charge back Azure savings plan costs](../savings-plan/charge-back-costs.md) to learn more about charge back processes for savings plans. |
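The amortization arithmetic this article walks through can be sketched in a few lines of Python (an illustration of the calculation only; the helper name is ours and the figures come from the examples above, not from any Azure API):

```python
from datetime import date

def term_days(start: date) -> int:
    """Days in a one-year benefit term starting on `start` (assumes the
    start isn't February 29): 366 when the term spans a leap day, else 365."""
    end = date(start.year + 1, start.month, start.day)
    return (end - start).days

# A $365 one-year reservation bought January 1, 2022 amortizes to $1.00/day.
daily_2022 = 365.00 / term_days(date(2022, 1, 1))  # 1.0

# The article's example: $12,016.00 purchased October 23, 2019. The term
# includes the 2020 leap day, so the cost is divided by 366, not 365.
daily = round(12016.00 / term_days(date(2019, 10, 23)), 2)  # 32.83
november_2019 = round(daily * 30, 2)  # 984.90 for November's 30 days

# If half of a day's $1.00 charge went unused, cost analysis would show a
# $0.50 charge on the VM plus a $0.50 charge of type UnusedReservation.
```

Azure tracks costs to more decimal places than it displays, so treat the rounded figures as approximations of what cost analysis shows.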
data-factory | Data Flow Flowlet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-flowlet.md | |
data-factory | Data Flow Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-join.md | |
data-factory | Data Flow Lookup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-lookup.md | |
data-factory | Data Flow Map Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-map-functions.md | |
data-factory | Data Flow Metafunctions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-metafunctions.md | |
data-factory | Data Flow New Branch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-new-branch.md | |
data-factory | Data Flow Parse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md | |
data-factory | Data Flow Pivot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-pivot.md | |
data-factory | Data Flow Rank | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-rank.md | |
data-factory | Data Flow Reserved Capacity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-reserved-capacity-overview.md | |
data-factory | Data Flow Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-script.md | |
data-factory | Data Flow Select | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-select.md | |
data-factory | Data Flow Sort | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sort.md | |
data-factory | Data Flow Stringify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-stringify.md | |
data-factory | Data Flow Surrogate Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-surrogate-key.md | |
data-factory | Data Flow Transformation Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-transformation-overview.md | |
data-factory | Data Flow Troubleshoot Connector Format | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-connector-format.md | |
data-factory | Data Flow Tutorials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-tutorials.md | |
data-factory | Data Flow Understand Reservation Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-understand-reservation-charges.md | |
data-factory | Data Flow Union | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-union.md | |
data-factory | Data Flow Unpivot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-unpivot.md | |
data-factory | Data Flow Window Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-window-functions.md | |
data-factory | Data Flow Window | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-window.md | |
data-factory | Data Migration Guidance Hdfs Azure Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-hdfs-azure-storage.md | |
data-factory | Data Migration Guidance Netezza Azure Sqldw | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-netezza-azure-sqldw.md | |
data-factory | Data Migration Guidance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-overview.md | |
data-factory | Data Migration Guidance S3 Azure Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-s3-azure-storage.md | |
data-factory | Data Transformation Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-transformation-functions.md | |
data-factory | Delete Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/delete-activity.md | |
data-factory | Deploy Linked Arm Templates With Vsts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deploy-linked-arm-templates-with-vsts.md | |
data-factory | Enable Aad Authentication Azure Ssis Ir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md | |
data-factory | Encrypt Credentials Self Hosted Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md | |
data-factory | Format Avro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-avro.md | |
data-factory | Format Binary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-binary.md | |
data-factory | Format Common Data Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-common-data-model.md | |
data-factory | Format Excel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-excel.md | |
data-factory | Format Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-json.md | |
data-factory | Format Orc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-orc.md | |
data-factory | Format Parquet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-parquet.md | |
data-factory | Format Xml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-xml.md | |
data-factory | How To Access Secured Purview Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-access-secured-purview-account.md | |
data-factory | How To Clean Up Ssisdb Logs With Elastic Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md | description: This article describes how to clean up SSIS project deployment and Previously updated : 08/09/2022 Last updated : 07/17/2023 |
data-factory | How To Configure Azure Ssis Ir Custom Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md | |
data-factory | How To Configure Azure Ssis Ir Enterprise Edition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md | description: "This article describes the features of Enterprise Edition for the Previously updated : 08/09/2022 Last updated : 07/17/2023 |
data-factory | How To Configure Shir For Log Analytics Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-shir-for-log-analytics-collection.md | |
data-factory | How To Create Custom Event Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-custom-event-trigger.md | |
data-factory | How To Create Event Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md | |
data-factory | How To Create Schedule Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md | |
data-factory | How To Create Tumbling Window Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md | |
data-factory | How To Data Flow Dedupe Nulls Snippets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-dedupe-nulls-snippets.md | |
data-factory | How To Data Flow Error Rows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-error-rows.md | |
data-factory | How To Develop Azure Ssis Ir Licensed Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-develop-azure-ssis-ir-licensed-components.md | |
data-factory | How To Discover Explore Purview Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-discover-explore-purview-data.md | |
data-factory | How To Expression Language Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-expression-language-functions.md | |
data-factory | How To Fixed Width | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-fixed-width.md | |
data-factory | How To Invoke Ssis Package Azure Enabled Dtexec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-azure-enabled-dtexec.md | description: Learn how to execute SQL Server Integration Services (SSIS) package Previously updated : 08/09/2022 Last updated : 07/17/2023 |
data-factory | How To Invoke Ssis Package Managed Instance Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md | |
data-factory | How To Invoke Ssis Package Ssdt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssdt.md | |
data-factory | How To Invoke Ssis Package Ssis Activity Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity-powershell.md | |
data-factory | How To Invoke Ssis Package Ssis Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md | |
data-factory | Parameterize Linked Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md | All the linked service types are supported for parameterization. - Azure SQL Managed Instance - Azure Synapse Analytics - Azure Table Storage+- Dataverse - DB2+- Dynamics 365 +- Dynamics AX +- Dynamics CRM - File System - FTP - Generic HTTP Refer to the [JSON sample](#json) to add ` parameters` section to define paramet } } ```++ |
data-factory | Quickstart Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-get-started.md | |
defender-for-cloud | Onboard Machines With Defender For Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-machines-with-defender-for-endpoint.md | Deploying the Defender for Endpoint agent on your on-premises Windows and Linux | Location | Deployment use case | | | |- | All | <u>Windows Server (all versions)</u> <br />Azure VMs or Azure Arc machines already onboarded and billed by Defender for Servers via an Azure subscription or Log Analytics workspace, running the Defender for Endpoint agent without the MDE.Windows Azure extension. For such machines, you can enable Defender for Cloud integration with Defender for Endpoint to deploy the extension. | - | On-premises (not running Azure Arc) | <u>Windows Server 2019</u>:<br />Servers already onboarded and billed by Defender for Servers P2 via the Log Analytics workspace<br /><br /><u>Windows Server 2012, 2016</u>: <br />Servers running the Defender for Endpoint modern unified agent, and already billed by Defender for Servers P2 via the Log Analytics workspace | - | AWS, GCP (not running Azure Arc) | <u>Windows Server 2019</u>:<br />Servers already onboarded and billed by Defender for Servers via multicloud connectors, Log Analytics workspace, or both. <br /><br /><u>Windows Server 2012, 2016</u>: <br />AWS or GCP VMs using the modern unified Defender for Endpoint solution, already onboarded and billed by Defender for Servers via multicloud connectors, Log Analytics workspace, or both. | + | All | <u>Windows 2012, 2016:</u> <br />Azure VMs or Azure Arc machines already onboarded and billed by Defender for Servers via an Azure subscription or Log Analytics workspace, running the Defender for Endpoint modern unified agent without the MDE.Windows Azure extension. For such machines, you can enable Defender for Cloud integration with Defender for Endpoint to deploy the extension. 
| + | On-premises (not running Azure Arc) | <u>Windows Server 2012, 2016</u>: <br />Servers running the Defender for Endpoint modern unified agent, and already billed by Defender for Servers P2 via the Log Analytics workspace | + | AWS, GCP (not running Azure Arc) | <u>Windows Server 2012, 2016</u>: <br />AWS or GCP VMs using the modern unified Defender for Endpoint solution, already onboarded and billed by Defender for Servers via multicloud connectors, Log Analytics workspace, or both. | ++ Note: For Windows Server 2019 and later, and for Linux, agent version updates have already been released to support simultaneous onboarding without limitations. For Windows, use agent version 10.8555.X and above; for Linux, use agent version 30.101.23052.009 and above. ## Next steps |
dev-box | How To Configure Dev Box Hibernation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-hibernation.md | Title: Configure hibernation for Microsoft Dev Box -description: Learn how to enable, disable and troubleshoot hibernation for your dev boxes. +description: Learn how to enable, disable and troubleshoot hibernation for your dev boxes. Configure hibernation settings for your image and dev box definition. -# How to configure Dev Box Hibernation (preview) +# Configure Dev Box Hibernation (preview) for a dev box definition Hibernating dev boxes at the end of the workday can help you save a substantial portion of your VM costs. It eliminates the need for developers to shut down their dev box and lose their open windows and applications. There are two steps in enabling hibernation; you must enable hibernation on your > Dev Box Hibernation is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Key concepts for hibernation-enabled images - The following SKUs support hibernation: 8, 16 vCPU SKUs. 32 vCPU SKUs do not support hibernation. -- You can enable hibernation only on new dev boxes created with hibernation-enabled dev box definitions. You cannot enable hibernation on existing dev boxes.+- You can enable hibernation only on new dev boxes created with hibernation-enabled dev box definitions. You can't enable hibernation on existing dev boxes. -- You can hibernate a dev box only using the dev Portal, CLI, PowerShell, SDKs, and API. Hibernating from within the dev box in Windows is not supported.+- You can hibernate a dev box only using the dev Portal, CLI, PowerShell, SDKs, and API. Hibernating from within the dev box in Windows isn't supported. 
- If you use a marketplace image, we recommend using the Visual Studio for dev box images. -- The Windows 11 Enterprise CloudPC + OS Optimizations image contains optimized power settings, and they cannot be used with hibernation.+- The Windows 11 Enterprise CloudPC + OS Optimizations image contains optimized power settings that can't be used with hibernation. -- Once enabled, you cannot disable hibernation on a dev box. However, you can disable hibernation support on the dev box definition so that future dev boxes do not have hibernation.+- Once enabled, you can't disable hibernation on a dev box. However, you can disable hibernation support on the dev box definition so that future dev boxes don't have hibernation. -- To enable hibernation, you need to enable nested virtualization in your Windows OS. If the "Virtual Machine Platform" feature is not enabled in your DevBox image, DevBox will automatically enable nested virtualization for you if you choose to enable hibernation.+- To enable hibernation, you need to enable nested virtualization in your Windows OS. If the "Virtual Machine Platform" feature isn't enabled in your DevBox image, DevBox automatically enables nested virtualization for you if you choose to enable hibernation. - Hibernation doesn't support hypervisor-protected code integrity (HVCI)/ Memory Integrity features. Dev box disables this feature automatically. These settings are known to be incompatible with hibernation, and aren't supported. The Visual Studio and Microsoft 365 images that Dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images; they're ready to use. -If you plan to use a custom image from an Azure Compute Gallery, you need to enable hibernation capabilities as you create the new image. To enable hibernation capabilities, set the IsHibernateSupported flag to true.
You must set the IsHibernateSupported flag when you create the image, existing images cannot be modified. +If you plan to use a custom image from an Azure Compute Gallery, you need to enable hibernation capabilities as you create the new image. To enable hibernation capabilities, set the IsHibernateSupported flag to true. You must set the IsHibernateSupported flag when you create the image; existing images can't be modified. To enable hibernation capabilities, set the `IsHibernateSupported` flag to true: All new dev boxes created in dev box pools that use a dev box definition with hi Dev Box validates your image for hibernate support. Your dev box definition may fail validation if hibernation couldn't be successfully enabled using your image. +You can enable hibernation on a dev box definition by using the Azure portal or the CLI. + ### Enable hibernation on an existing dev box definition by using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com). Dev Box validates your image for hibernate support. Your dev box definition may 1. On the Editing \<dev box definition\> page, select **Enable hibernation**. - :::image type="content" source="./media/how-to-configure-dev-box-hibernation/dev-box-pool-enable-hibernation.png" alt-text="Screenshot of the page for editing a dev box definition, with Enable hibernation selected.."::: + :::image type="content" source="./media/how-to-configure-dev-box-hibernation/dev-box-pool-enable-hibernation.png" alt-text="Screenshot of the page for editing a dev box definition, with Enable hibernation selected."::: + 1. Select **Save**. az devcenter admin devbox-definition update --dev-box-definition-name <DevBoxDef If you have issues provisioning new VMs after enabling hibernation on a pool or you want to revert to shut down only dev boxes, you can disable hibernation on the dev box definition. +You can disable hibernation on a dev box definition by using the Azure portal or the CLI.
+ ### Disable hibernation on an existing dev box definition by using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com). |
dev-box | How To Generate Visual Studio Caches | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-generate-visual-studio-caches.md | + + Title: Configure Visual Studio caches for your dev box image ++description: Learn how to generate Visual Studio caches for your customized Dev Box image. ++++ Last updated : 07/17/2023++++# Optimize the Visual Studio experience on Microsoft Dev Box ++> [!IMPORTANT] +> Visual Studio precaching for Microsoft Dev Box is currently in public preview. This information relates to a feature that may be substantially modified before it's released. Microsoft makes no warranties, expressed or implied, with respect to the information provided here. ++With [Visual Studio 17.7 Preview 3](https://visualstudio.microsoft.com/vs/preview/), you can try precaching of Visual Studio solutions for Microsoft Dev Box. When loading projects, Visual Studio indexes files and generates metadata to enable the full suite of [IDE](/visualstudio/get-started/visual-studio-ide) capabilities. As a result, Visual Studio can sometimes take a considerable amount of time when loading large projects for the first time. With Visual Studio caches on dev box, you can now pregenerate this startup data and make it available to Visual Studio as part of your customized dev box image. This means that when you create a dev box from a custom image including Visual Studio caches, you can log onto a Microsoft Dev Box and start working on your project immediately. ++Benefits of precaching your Visual Studio solution on a dev box image include: +- You can reduce the time it takes to load your solution for the first time. +- You can quickly access and use key IDE features like Find In Files and IntelliSense in Visual Studio. +- You can improve the Git performance on large repositories. ++> [!NOTE] +> Performance gains in startup time from precaching of your Visual Studio solution will vary depending on the complexity of your solution. 
++## Prerequisites ++To leverage precaching of your source code and Visual Studio IDE customizations on Microsoft Dev Box, you need to meet the following requirements: ++- Create a dev center and configure the Microsoft Dev Box service. If you don't have one available, follow the steps in [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md) to create a dev center and configure a dev box. +- [Create a custom VM image for dev box](how-to-customize-devbox-azure-image-builder.md) that includes your source code and pregenerated caches. ++ This article guides you through the creation of an Azure Resource Manager template. In the following sections, you'll modify that template to include processes to [generate the Visual Studio solution cache](#enable-caches-in-dev-box-images) and further improve Visual Studio performance by [preparing the git commit graph](#enable-git-commit-graph-optimizations) for your project. ++You can then use the resulting image to [create new dev boxes](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition) for your team. ++## Enable caches in dev box images ++You can generate caches for your Visual Studio solution as part of an automated pipeline that builds custom dev box images. To do so, you must meet the following requirements: ++* Within the [Azure Resource Manager template](../azure-resource-manager/templates/overview.md), add a customized step to clone the source repository of your project into a nonuser specific location on the VM. +* With the project source located on disk you can now run the `PopulateSolutionCache` feature to generate the project caches. 
To do this, add the following PowerShell command to your template's customized steps: ++ ```shell + # Add a command line flag to the Visual Studio devenv + devenv SolutionName /PopulateSolutionCache /LocalCache /Build [SolnConfigName [/Project ProjName [/ProjectConfig ProjConfigName]] [/Out OutputFilename]] + ``` + + This command will open your solution, execute a build, and generate the caches for the specified solution. The generated caches will then be included in the [custom image](how-to-customize-devbox-azure-image-builder.md) and available to dev box users once [posted to a connected Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). You can then [create a new dev box](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition) based on this image. + + The `/Build` flag is optional, but without it some caches that require a completed build won't be available. For more information on the `build` command, see [Build command-line reference](/visualstudio/ide/reference/build-devenv-exe). ++When a dev box user opens the solution on a dev box based on the customized image, Visual Studio reads the already generated caches and skips the cache generation altogether. ++## Enable Git commit-graph optimizations ++Beyond the [standalone commit-graph feature that was made available with Visual Studio 17.2 Preview 3](https://devblogs.microsoft.com/visualstudio/supercharge-your-git-experience-in-vs/), you can also enable commit-graph optimizations as part of an automated pipeline that generates custom dev box images. To do so, you must meet the following requirements: ++* You're using a [Microsoft Dev Box](overview-what-is-microsoft-dev-box.md) as your development workstation. +* The source code for your project is saved in a non-user-specific location to be included in the image. 
+* You can [create a custom dev box image](how-to-customize-devbox-azure-image-builder.md) that includes the Git source code repository for your project. +* You're using [Visual Studio 17.7 Preview 3 or higher](https://visualstudio.microsoft.com/vs/preview/). + +To enable this optimization, execute the following `git` commands from your Git repository's location as part of your image build process: ++```bash +# Enables the Git repo to use the commit-graph file, if the file is present +git config --local core.commitGraph true ++# Update the Git repository's commit-graph file to contain all reachable commits +git commit-graph write --reachable +``` ++The generated caches will then be included in the [custom image](how-to-customize-devbox-azure-image-builder.md) and available to dev box users once [posted to a connected Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). ++## Next steps ++Get started with Visual Studio precaching in Microsoft Dev Box: ++- [Download and install Visual Studio 17.7 Preview 3 or later](https://visualstudio.microsoft.com/vs/preview/). ++We'd love to hear your feedback, input, and suggestions on Visual Studio precaching in Microsoft Dev Box via the [Developer Community](https://visualstudio.microsoft.com/vs/preview/). |
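The two `git` commands above can be sanity-checked in a throwaway repository before you bake them into an image build. A minimal sketch, assuming a Linux shell with `git` installed; the temp directory and the empty commit are illustrative, and in a real image build you'd run the two commands from your project repository's location instead:

```shell
# Create a scratch repository with a single (empty) commit to exercise
# the commit-graph commands against.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=test -c user.email=test@example.com commit -q --allow-empty -m "initial"

# The same two commands the image build process runs:
git config --local core.commitGraph true
git commit-graph write --reachable

# Git writes the commit-graph file under .git/objects/info/
ls .git/objects/info/commit-graph
```

Running `git commit-graph verify` afterward confirms the generated file is well formed.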
firewall | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md | Azure Firewall Standard has the following known issues: |Moving a firewall to a different resource group or subscription isn't supported|Moving a firewall to a different resource group or subscription isn't supported.|Supporting this functionality is on our road map. To move a firewall to a different resource group or subscription, you must delete the current instance and recreate it in the new resource group or subscription.| |Threat intelligence alerts may get masked|Network rules with destination 80/443 for outbound filtering mask threat intelligence alerts when configured to alert only mode.|Create outbound filtering for 80/443 using application rules. Or, change the threat intelligence mode to **Alert and Deny**.| |Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|A fix is being investigated.|-|Availability zones can only be configured during deployment.|Availability zones can only be configured during deployment. You can't configure Availability Zones after a firewall has been deployed.|This is by design.| +|With secured virtual hubs, availability zones can only be configured during deployment.| You can't configure Availability Zones after a firewall with secured virtual hubs has been deployed.|This is by design.| |SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This is required today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. 
For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall. |SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules. |Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 can be blocked by the Azure platform. This is the default platform behavior in Azure; Azure Firewall doesn't introduce any additional restriction. |Use authenticated SMTP relay services, which typically connect through TCP port 587 but also support other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). Currently, Azure Firewall may be able to communicate with public IPs by using outbound TCP 25, but it's not guaranteed to work, and it's not supported for all subscription types. For private IPs like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection of TCP port 25. |
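As an illustration of the recommended relay workaround, an authenticated submission over TCP port 587 (rather than direct delivery on port 25) can be driven with `curl`'s SMTP support. A sketch only: the relay host, credentials, addresses, and `message.txt` are placeholder assumptions, not values from this article:

```shell
# Hypothetical authenticated SMTP relay submission on port 587.
# smtp.example.com, the user/password, both addresses, and message.txt
# are placeholders; substitute your relay provider's values.
curl --url 'smtp://smtp.example.com:587' \
     --ssl-reqd \
     --user 'relayuser:relaypassword' \
     --mail-from 'sender@contoso.com' \
     --mail-rcpt 'recipient@fabrikam.com' \
     --upload-file message.txt
```

Because the submission goes out on port 587, it isn't affected by the platform's port 25 restriction described above.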
healthcare-apis | Import Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md | -> Incremental import mode is currently in public preview +> Incremental import mode is currently in public preview and offered free of charge. With General Availability, use of Incremental import will incur charges. > Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. > > For more information, review [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). |
healthcare-apis | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md | Refer to the table below to find details about resolution dates or possible work |Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |-|Import functionality isn't working as expected when NDJSON file size is greater than 2 GB. Customer sees the import job stuck in retry mode.| June 2023| Suggested workaround is to reduce the file size to less than 2 GB.|--| +|FHIR resources are not queryable by custom search parameters even after reindex is successful.| July 2023| Suggested workaround is to create a support ticket to update the status of custom search parameters after reindex is successful.|--| |Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |- | Resolved, customers impacted by the 128-character issue are notified on resolution. | |The SQL provider causes the `RawResource` column in the database to save incorrectly. This occurs in a few cases when a transient exception occurs that causes the provider to use its retry logic. |April 2022 |-|May 2022 Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571) | | Queries not providing consistent result counts when appended with the `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | -|August 2022 Resolved [#2680](https://github.com/microsoft/fhir-server/pull/2680) | |
iot-central | Howto Create Custom Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md | To send emails with SendGrid, you need to configure the bindings for your functi 1. Select **+ Add output**. 1. Select **SendGrid** as the binding type. 1. For the **SendGrid API Key App Setting**, select **New**.-1. Enter the *Name* and *Value* of your SendGrid API key. If you followed the instructions above, the name of your SendGrid API key is **AzureFunctionAccess**. +1. Enter the *Name* and *Value* of your SendGrid API key. If you followed the previous instructions, the name of your SendGrid API key is **AzureFunctionAccess**. 1. Add the following information: | Setting | Value | This solution uses a Stream Analytics query to detect when a device stops sendin ## Configure export in IoT Central -On the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, navigate to the IoT Central application the script created. The name of the app is **In-store analytics - Condition Monitoring (custom rules scenario)**. +On the [Azure IoT Central My apps](https://apps.azureiotcentral.com/myapps) page, locate the IoT Central application the script created. The name of the app is **In-store analytics - Condition Monitoring (custom rules scenario)**. To enable the data export to Event Hubs, navigate to the **Data Export** page and enable the **All telemetry** export. Wait until the export status is **Running** before you continue. |
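The SendGrid output binding settings described earlier in this section correspond to entries in the function's `function.json`. A minimal sketch, assuming the API key app setting is named `AzureFunctionAccess` as in this article; the binding name and email addresses are illustrative:

```json
{
  "bindings": [
    {
      "type": "sendGrid",
      "direction": "out",
      "name": "message",
      "apiKey": "AzureFunctionAccess",
      "from": "alerts@contoso.com",
      "to": "operator@contoso.com"
    }
  ]
}
```

Here `apiKey` names the app setting that holds the SendGrid API key (not the key itself), and `from`/`to` can also be supplied per message from the function code instead of the binding.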
iot-central | Howto Create Iot Central Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md | The following table summarizes the differences between the three standard plans: ### Application name -The _application name_ you choose appears in the title bar on every page in your IoT Central application. It also appears on your application's tile on the **My apps** page on the [Azure IoT Central](https://aka.ms/iotcentral) site. +The _application name_ you choose appears in the title bar on every page in your IoT Central application. The _subdomain_ you choose uniquely identifies your application. The subdomain is part of the URL you use to access the application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. If you choose one of the standard plans, you need to provide billing information - The directory that contains the subscription you're using. - The location to host your application. IoT Central uses Azure regions as locations: Australia East, Canada Central, Central US, East US, East US 2, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US. -## Azure IoT Central site +## Azure portal -The easiest way to get started creating IoT Central applications is on the [Azure IoT Central](https://aka.ms/iotcentral) site. +The easiest way to get started creating IoT Central applications is in the [Azure portal](https://portal.azure.com/#create/Microsoft.IoTCentral). -The [Build](https://apps.azureiotcentral.com/build) lets you select the application template you want to use: ++Enter the following information: ++| Field | Description | +| -- | -- | +| Subscription | The Azure subscription you want to use. | +| Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | +| Resource name | A valid Azure resource name. 
| +| Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | +| Template | The application template you want to use. For a blank application template, select **Custom application**.| +| Region | The Azure region you want to use. | +| Pricing plan | The pricing plan you want to use. | -If you select **Create app**, you can provide the necessary information to create an application from the template: +Select **Review + create**, then select **Create** to create the application. -The **My apps** page lists all the IoT Central applications you have access to. The list includes applications you created and applications that you've been granted access to. +When the app is ready, you can navigate to it from the Azure portal: -> [!TIP] -> All the applications you create using a standard pricing plan on the Azure IoT Central site use the **IOTC** resource group in your subscription. The approaches described in the following section let you choose a resource group to use. ++To list all the IoT Central apps you've created, navigate to [IoT Central Applications](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.IoTCentral%2FIoTApps). ++To list all the IoT Central applications you have access to, navigate to [IoT Central Applications](https://apps.azureiotcentral.com/myapps). ## Copy an application You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for. -Navigate to **Application > Management** and select **Copy**. 
In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Options](#options). :::image type="content" source="media/howto-create-iot-central-application/app-copy.png" alt-text="Screenshot that shows the copy application settings page." lightbox="media/howto-create-iot-central-application/app-copy.png"::: To create an application template from an existing IoT Central application: ### Use an application template -To use an application template to create a new IoT Central application, you need a previously created **Shareable Link**. Paste the **Shareable Link** into your browser's address bar. The **Create an application** page displays with your custom application template selected: -+To use an application template to create a new IoT Central application, you need a previously created **Shareable Link**. Paste the **Shareable Link** into your browser's address bar. The **Create an application** page displays with your custom application template selected. Select your pricing plan and fill out the other fields on the form. Then select **Create** to create a new IoT Central application from the application template. To update your application template, change the template name or description on You can also use the following approaches to create an IoT Central application: -- [Create an IoT Central application from the Azure portal](howto-manage-iot-central-from-portal.md#create-iot-central-applications) - [Create an IoT Central application using the command line](howto-manage-iot-central-from-cli.md#create-an-application) - [Create an IoT Central application programmatically](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) |
iot-central | Howto Manage Iot Central From Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md | -Instead of creating and managing IoT Central applications on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, you can use [Azure CLI](/cli/azure/) or [Azure PowerShell](/powershell/azure/) to manage your applications. +Instead of creating and managing IoT Central applications in the [Azure portal](https://portal.azure.com/#create/Microsoft.IoTCentral), you can use [Azure CLI](/cli/azure/) or [Azure PowerShell](/powershell/azure/) to manage your applications. If you prefer to use a language such as JavaScript, Python, C#, Ruby, or Go to create, update, list, and delete Azure IoT Central applications, see the [Azure IoT Central ARM SDK samples](/samples/azure-samples/azure-iot-central-arm-sdk-samples/azure-iot-central-arm-sdk-samples/) repository. |
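For example, creating an application from the command line might look like the following sketch. The resource group, app name, subdomain, and region are illustrative, and the example assumes the `az iot central app create` command and its parameters as they appear in current Azure CLI releases; check `az iot central app create --help` for the authoritative syntax:

```shell
# Hypothetical names throughout; substitute your own resource group,
# app name, subdomain, and region.
az group create --name my-iotc-rg --location eastus

# Create a custom (blank-template) IoT Central app on the ST2 plan.
az iot central app create \
  --resource-group my-iotc-rg \
  --name my-contoso-app \
  --subdomain my-contoso-app \
  --sku ST2 \
  --template "iotc-pnp-preview" \
  --display-name "Contoso quickstart app" \
  --location eastus
```

The resulting application is reachable at `https://my-contoso-app.azureiotcentral.com`.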
iot-central | Howto Manage Iot Central From Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md | -## Create IoT Central applications ---To create an application, navigate to the [IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal: ---* **Resource name** is a unique name you can choose for your IoT Central application in your Azure resource group. --* **Application URL** is the URL you can use to access your application. --* **Template** is the type of IoT Central application you want to create. You can create a new application either from the list of industry-relevant templates to help you get started quickly, or start from scratch using the **Custom application** template. --* **Location** is the Azure region where you'd like to create your application. Typically, you should choose the location that's physically closest to your devices to get optimal performance. For a list of the regions where Azure IoT Central is currently available, see [Availability by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=iot-central). -- Once you choose a location, you can't later move your application to a different location. --After filling out all fields, select **Create**. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md). +To learn how to create an IoT Central application, see [Create an IoT Central application](howto-create-iot-central-application.md). ## Manage existing IoT Central applications Use the **Workbooks** page to analyze logs and create visual reports. To learn m ### Metrics and invoices -Metrics may differ from the numbers shown on your Azure IoT Central invoice. This situation occurs for a number of reasons such as: +Metrics may differ from the numbers shown on your Azure IoT Central invoice. 
This situation occurs for reasons such as: * IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics. To learn how to remotely monitor your IoT Edge fleet using Azure Monitor and bui ## Next steps -Now that you've learned how to manage and monitor Azure IoT Central applications from the Azure portal, here is the suggested next step: +Now that you've learned how to manage and monitor Azure IoT Central applications from the Azure portal, here's the suggested next step: > [!div class="nextstepaction"] > [Administer your application](howto-administer.md) |
iot-central | Overview Iot Central Tour | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md | This article introduces you to Azure IoT Central UI. You can use the UI to creat ## IoT Central homepage -The [IoT Central homepage](https://apps.azureiotcentral.com/) page is the place to learn more about the latest news and features available on IoT Central, create new applications, and see and launch your existing applications. +The [IoT Central homepage](https://apps.azureiotcentral.com/) page is the place to learn more about the latest news and features available on IoT Central and see and launch your existing applications. --### Create an application --In the **Build** section you can browse the list of industry-relevant IoT Central templates, or start from scratch using a Custom application template. ---To learn more, see the [Create an Azure IoT Central application](quick-deploy-iot-central.md) quickstart. ### Launch your application Once you're inside your IoT application, use the left pane to access various fea :::row::: :::column span=""::: - :::image type="content" source="media/overview-iot-central-tour/navigation-bar.png" alt-text="left pane"::: + :::image type="content" source="media/overview-iot-central-tour/navigation-bar.png" alt-text="Screenshot that shows the IoT Central left navigation pane."::: :::column-end::: :::column span="2"::: |
iot-central | Quick Deploy Iot Central | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md | In this quickstart, you: ## Create an application -Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with the Microsoft personal, work, or school account associated with your Azure subscription. - IoT Central provides various industry-focused application templates to help you get started. This quickstart uses the **Custom application** template to create an application from scratch: -1. Navigate to the **Build** page and select **Create app** in the **Custom app** tile: -- :::image type="content" source="media/quick-deploy-iot-central/iot-central-create-new-application.png" alt-text="Build your IoT application page" lightbox="media/quick-deploy-iot-central/iot-central-create-new-application.png"::: -- If you're prompted to sign in, use the Microsoft account associated with your Azure subscription. --1. On the **New application** page, make sure that **Custom application** is selected under the **Application template**. --1. Azure IoT Central automatically suggests an **Application name** based on the application template you've selected. Enter your own application name such as *Contoso quickstart app*. --1. Azure IoT Central also generates a unique **URL** prefix for you, based on the application name. You use this URL to access your application. Change this URL prefix to something more memorable if you'd like. This URL must be unique. -- :::image type="content" source="media/quick-deploy-iot-central/iot-central-create-custom.png" alt-text="Azure IoT Central Create an application page" lightbox="media/quick-deploy-iot-central/iot-central-create-custom.png"::: --1. For this quickstart, leave the pricing plan set to **Standard 2**. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. 
If prompted, sign in with your Azure account. -1. Select your subscription in the **Azure subscription** drop-down. +1. Enter the following information: -1. Select your closest location in the **Location** drop-down. + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name such as *my-contoso-app*. | + | Application URL | A URL subdomain for your application such as *my-contoso-app*. The URL for an IoT Central application looks like `https://my-contoso-app.azureiotcentral.com`. | + | Template | **Custom application** | + | Region | The Azure region you want to use. | + | Pricing plan | **Standard 2**| -1. Review the Terms and Conditions, and select **Create** at the bottom of the page. After a few seconds, your IoT Central application is ready to use: +1. Select **Review + create**. Then select **Create**. - :::image type="content" source="media/quick-deploy-iot-central/iot-central-application.png" alt-text="Azure IoT Central application" lightbox="media/quick-deploy-iot-central/iot-central-application.png"::: ## Register a device |
iot-central | Tutorial Smart Meter App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md | To complete this tutorial, you need an active Azure subscription. If you don't h ## Create an application for monitoring smart meters -1. Go to the [Azure IoT Central build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. +To create your IoT Central application: -1. Select **Build** from the left menu, and then select the **Energy** tab. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. - :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-build.png" alt-text="Screenshot that shows the Azure IoT Central build site with energy app templates."::: +1. Enter the following information: -1. Under **Smart meter monitoring**, select **Create app**. + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Smart Meter Analytics** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | -To learn more, see [Create an Azure IoT Central application](../core/howto-create-iot-central-application.md). +1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Solar Panel App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-solar-panel-app.md | An active Azure subscription. If you don't have an Azure subscription, create a ## Create a solar panel monitoring application -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Energy** tab: +To create your IoT Central application: - :::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-build.png" alt-text="Screenshot showing the Azure IoT Central build site with the energy app templates."::: +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. Select **Create app** under **Solar panel monitoring**. +1. Enter the following information: -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Smart Power Monitoring** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | ++1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Connected Waste Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md | An active Azure subscription. If you don't have an Azure subscription, create a ## Create connected waste management application -To create your application: +To create your IoT Central application: -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab: +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. - :::image type="content" source="media/tutorial-connected-waste-management/iot-central-government-tab-overview.png" alt-text="Screenshot showing the Azure IoT Central build site with the government app templates."::: +1. Enter the following information: -1. Select **Create app** under **Connected waste management**. + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Connected Waste Management** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). +1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Water Consumption Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-consumption-monitoring.md | An active Azure subscription. If you don't have an Azure subscription, create a ## Create water consumption monitoring application -Create the application using following steps: +To create your IoT Central application: -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. Select **Create app** under **Water consumption monitoring**. +1. Enter the following information: -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Water Consumption Monitoring** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | ++1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Water Quality Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-quality-monitoring.md | An active Azure subscription. If you don't have an Azure subscription, create a ## Create water quality monitoring application -Create the application using following steps: +To create your IoT Central application: -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. Select **Create app** under **Water quality monitoring**. +1. Enter the following information: -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Water Quality Monitoring** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | ++1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Continuous Patient Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md | Azure IoT Central is HIPAA-compliant and HITRUST® certified. You can send pa ### Machine learning (4) -Use machine learning models with your FHIR data to generate insights and support decision making by your care team. To learn more, see the [Azure machine learning documentation](../../machine-learning/index.yml). +Use machine learning models with your FHIR data to generate insights and support decision making by your care team. To learn more, see the [Azure Machine Learning documentation](../../machine-learning/index.yml). ### Provider dashboard (5) An active Azure subscription. If you don't have an Azure subscription, create a ## Create application -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Healthcare** tab. +To create your IoT Central application: -1. Select **Create app** under **Continuous patient monitoring**. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). +1. Enter the following information: ++ | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. 
| + | Template | **Continuous Patient Monitoring** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | ++1. Select **Review + create**. Then select **Create**. + ## Walk through the application The **Commands** tab lets you run commands on the device. A suggested next step is to learn more about integrating IoT Central with other > [!div class="nextstepaction"]-> [IoT Central data integration](../core/overview-iot-central-solution-builder.md) +> [IoT Central data integration](../core/overview-iot-central-solution-builder.md) |
iot-central | Tutorial In Store Analytics Create App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md | An active Azure subscription. If you don't have an Azure subscription, [create a ## Create an in-store analytics application -Create the application by completing the following steps: +To create your IoT Central application: -1. Sign in to the [Azure IoT Central](https://aka.ms/iotcentral) build site with a Microsoft personal, work, or school account. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. On the left pane, select **Build**, and then select the **Retail** tab. +1. Enter the following information: -1. Under **In-store analytics - checkout**, select **Create app**. + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **In-store Analytics - Checkout** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). +1. Select **Review + create**. Then select **Create**. + ## Walk through the application To create a custom theme: To update the application image: -1. Select **Application** > **Management**. +1. Select **Application** > **Management**. -1. Select **Change**, and then select an image to upload as the application image. +1. Select **Change**, and then select an image to upload as the application image. 1. 
Select **Save**. - The image appears on the application tile on the **My Apps** page of the [Azure IoT Central application manager](https://aka.ms/iotcentral) site. + The image appears on the application tile on the **My Apps** page of the [Azure IoT Central My apps](https://apps.azureiotcentral.com/myapps) site. ### Create the device templates |
iot-central | Tutorial In Store Analytics Customize Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md | After you've created your condition-monitoring application, you can edit its def The first step in customizing the application dashboard is to change the name: -1. Go to the [Azure IoT Central application manager](https://aka.ms/iotcentral) website. +1. Go to the [Azure IoT Central My apps](https://apps.azureiotcentral.com/myapps) page. 1. Open the condition-monitoring application that you created. |
iot-central | Tutorial Iot Central Connected Logistics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md | An active Azure subscription. If you don't have an Azure subscription, create a ## Create connected logistics application -Create the application using following steps: +To create your IoT Central application: -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. Select **Create app** under **Connected Logistics**. +1. Enter the following information: -1. **Create app** opens the **New application** form. Enter the following details: + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Connected Logistics** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | - - **Application name**: you can use default suggested name or enter your friendly application name. - - **URL**: you can use suggested default URL or enter your friendly unique memorable URL. - - **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources. - - **Create**: Select create at the bottom of the page to deploy your application. +1. Select **Review + create**. 
Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Iot Central Digital Distribution Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md | An active Azure subscription. If you don't have an Azure subscription, create a ## Create digital distribution center application template -Create the application using following steps: +To create your IoT Central application: -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. Select **Create app** under **Digital distribution center**. +1. Enter the following information: -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Digital Distribution Center** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | ++1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Iot Central Smart Inventory Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md | An active Azure subscription. If you don't have an Azure subscription, [create a ## Create a smart inventory-management application -Create the application by completing the following steps: +To create your IoT Central application: -1. Sign in to the [Azure IoT Central Build](https://aka.ms/iotcentral) site with a Microsoft personal, work, or school account. On the left pane, select **Build**, and then select the **Retail** tab. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. Select **Create app** under **smart inventory management**. +1. Enter the following information: -To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). + | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Smart Inventory Management** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | ++1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-central | Tutorial Micro Fulfillment Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md | An active Azure subscription. If you don't have an Azure subscription, create a ## Create micro-fulfillment application -Create the application using following steps: +To create your IoT Central application: -1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the navigation bar and then select the **Retail** tab. +1. Navigate to the [Create IoT Central Application](https://portal.azure.com/#create/Microsoft.IoTCentral) page in the Azure portal. If prompted, sign in with your Azure account. -1. Select **Create app** under **micro-fulfillment center**. +1. Enter the following information: ++ | Field | Description | + | -- | -- | + | Subscription | The Azure subscription you want to use. | + | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. | + | Resource name | A valid Azure resource name. | + | Application URL | The URL subdomain for your application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`. | + | Template | **Micro-fulfillment Center** | + | Region | The Azure region you want to use. | + | Pricing plan | The pricing plan you want to use. | ++1. Select **Review + create**. Then select **Create**. + ## Walk through the application |
iot-edge | How To Connect Downstream Iot Edge Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md | description: How to create a trusted connection between an IoT Edge gateway and Previously updated : 01/17/2023 Last updated : 07/17/2023 All the steps in this article build on [Configure an IoT Edge device to act as a * A free or standard IoT hub. * At least two **IoT Edge devices**, one to be the top layer device and one or more lower layer devices. If you don't have IoT Edge devices available, you can [Run Azure IoT Edge on Ubuntu virtual machines](how-to-install-iot-edge-ubuntuvm.md).-* If you use the Azure CLI to create and manage devices, have Azure CLI v2.3.1 with the Azure IoT extension v0.10.6 or higher installed. +* If you use the [Azure CLI](/cli/azure/install-azure-cli) to create and manage devices, install the [Azure IoT extension](https://github.com/Azure/azure-iot-cli-extension). -This article provides detailed steps and options to help you create the right gateway hierarchy for your scenario. For a guided tutorial, see [Create a hierarchy of IoT Edge devices using gateways](tutorial-nested-iot-edge.md). +> [!TIP] +> This article provides detailed steps and options to help you create the right gateway hierarchy for your scenario. For a guided tutorial, see [Create a hierarchy of IoT Edge devices using gateways](tutorial-nested-iot-edge.md). ## Create a gateway hierarchy For example, the following commands create a root CA certificate, a parent devic ``` > [!WARNING]- > Do not use certificates created by the test scripts for production. They contain hard-coded passwords and expire by default after 30 days. The test CA certificates are provided for demonstration purposes to help you quickly understand CA Certificates. Use your own security best practices for certification creation and lifetime management in production.
+ > Don't use certificates created by the test scripts for production. They contain hard-coded passwords and expire by default after 30 days. The test CA certificates are provided for demonstration purposes to help you understand CA Certificates. Use your own security best practices for certification creation and lifetime management in production. For more information about creating test certificates, see [create demo certificates to test IoT Edge device features](how-to-create-test-certificates.md). You should already have IoT Edge installed on your device. If not, follow the st parent. In a hierarchical scenario where a single IoT Edge device is both a parent and a child device, it needs both parameters. - The *hostname*, *local_gateway_hostname*, and *trust_bundle_cert* parameters, must be at the beginning of the configuration file before any sections. Adding the parameter before defined sections, ensures it's applied correctly. + The *hostname* and *trust_bundle_cert* parameters must be at the beginning of the configuration file, before any sections. Adding the parameters before any defined sections ensures they're applied correctly. Use a hostname shorter than 64 characters, which is the character limit for a server certificate common name. |
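Once the Azure IoT extension is installed, the parent/child relationship described above can be set up from the CLI. This is a sketch with placeholder hub and device names, shown as a dry run (the commands are echoed rather than executed):

```shell
# Sketch: register a gateway (parent) and a downstream (child) IoT Edge
# device, then link them. Hub and device names are placeholders.
HUB="my-iot-hub"
PARENT="gateway-device"
CHILD="downstream-device"

# Register both identities as IoT Edge devices.
for DEV in "$PARENT" "$CHILD"; do
  echo az iot hub device-identity create \
    --hub-name "$HUB" --device-id "$DEV" --edge-enabled
done

# Make the lower-layer device a child of the gateway.
echo az iot hub device-identity parent set \
  --hub-name "$HUB" --device-id "$CHILD" --parent-device-id "$PARENT"
```

Remove the `echo`s to run the commands against your own IoT hub.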
iot-edge | Tutorial Nested Iot Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge.md | To create a hierarchy of IoT Edge devices, you need: * A computer (Windows or Linux) with internet connectivity. * An Azure account with a valid subscription. If you don't have an [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/) before you begin. * A free or standard tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.-* A Bash shell in Azure Cloud Shell using Azure CLI v2.3.1 with the Azure IoT extension v0.10.6 or higher installed. This tutorial uses the [Azure Cloud Shell](../cloud-shell/overview.md). To see your current versions of the Azure CLI modules and extensions, run [az version](/cli/azure/reference-index?#az-version). +* A Bash shell in Azure Cloud Shell using [Azure CLI](/cli/azure/install-azure-cli) with the [Azure IoT extension](https://github.com/Azure/azure-iot-cli-extension) installed. This tutorial uses the [Azure Cloud Shell](../cloud-shell/overview.md). To see your current versions of the Azure CLI modules and extensions, run [az version](/cli/azure/reference-index?#az-version). * Two Linux devices to configure your hierarchy. If you don't have devices available, you can create Azure virtual machines for each device in your hierarchy using the [IoT Edge Azure Resource Manager template](https://github.com/Azure/iotedge-vm-deploy). IoT Edge version 1.4 is preinstalled with this Resource Manager template. If you're installing IoT Edge on your own devices, see [Install Azure IoT Edge for Linux](how-to-provision-single-device-linux-symmetric.md) or [Update IoT Edge](how-to-update-iot-edge.md). * To simplify network communication between devices, the virtual machines should be on the same virtual network or use virtual network peering. 
* Make sure that the following ports are open inbound for all devices except the lowest layer device: 443, 5671, 8883: |
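To check from a lower-layer machine that the gateway ports listed above (443, 5671, 8883) are actually reachable, a small TCP probe can help. This is a generic sketch — the gateway hostname is a placeholder you'd replace with your own:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the required gateway ports (hostname is a placeholder).
for port in (443, 5671, 8883):
    reachable = port_open("my-gateway.example.com", port)
    print(f"port {port}: {'open' if reachable else 'blocked or unreachable'}")
```

A "blocked or unreachable" result points at a firewall or network-security-group rule rather than an IoT Edge configuration problem.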
machine-learning | How To Monitor Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md | There are three logs that can be enabled for online endpoints: * **AMLOnlineEndpointTrafficLog** (preview): You could choose to enable traffic logs if you want to check the information of your request. Below are some cases: - * If the response isn't 200, check the value of the column ΓÇ£ResponseCodeReasonΓÇ¥ to see what happened. Also check the reason in the "HTTPS status codes" section of the [Troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md#http-status-codes) article. + * If the response isn't 200, check the value of the column "ResponseCodeReason" to see what happened. Also check the reason in the "HTTP status codes" section of the [Troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md#http-status-codes) article. - * You could check the response code and response reason of your model from the column ΓÇ£ModelStatusCodeΓÇ¥ and ΓÇ£ModelStatusReasonΓÇ¥. + * You could check the response code and response reason of your model from the columns "ModelStatusCode" and "ModelStatusReason". * You want to check the duration of the request like total duration, the request/response duration, and the delay caused by the network throttling. You could check it from the logs to see the breakdown latency. There are three logs that can be enabled for online endpoints: * You may also use this log for performance analysis in determining the time required by the model to process each request. -* **AMLOnlineEndpointEventLog** (preview): Contains event information regarding the containerΓÇÖs life cycle. +* **AMLOnlineEndpointEventLog** (preview): Contains event information regarding the container's life cycle.
Currently, we provide information on the following types of events: | Name | Message | | -- | -- | The following tables provide details on the data stored in each log: **AMLOnlineEndpointTrafficLog** (preview) -| Field name | Description | -| - | - | -| Method | The requested method from client. -| Path | The requested path from client. -| SubscriptionId | The machine learning subscription ID of the online endpoint. -| AzureMLWorkspaceId | The machine learning workspace ID of the online endpoint. -| AzureMLWorkspaceName | The machine learning workspace name of the online endpoint. -| EndpointName | The name of the online endpoint. -| DeploymentName | The name of the online deployment. -| Protocol | The protocol of the request. -| ResponseCode | The final response code returned to the client. -| ResponseCodeReason | The final response code reason returned to the client. -| ModelStatusCode | The response status code from model. -| ModelStatusReason | The response status reason from model. -| RequestPayloadSize | The total bytes received from the client. -| ResponsePayloadSize | The total bytes sent back to the client. -| UserAgent | The user-agent header of the request, including comments but truncated to a max of 70 characters. -| XRequestId | The request ID generated by Azure Machine Learning for internal tracing. -| XMSClientRequestId | The tracking ID generated by the client. -| TotalDurationMs | Duration in milliseconds from the request start time to the last response byte sent back to the client. If the client disconnected, it measures from the start time to client disconnect time. -| RequestDurationMs | Duration in milliseconds from the request start time to the last byte of the request received from the client. -| ResponseDurationMs | Duration in milliseconds from the request start time to the first response byte read from the model. -| RequestThrottlingDelayMs | Delay in milliseconds in request data transfer due to network throttling. 
-| ResponseThrottlingDelayMs | Delay in milliseconds in response data transfer due to network throttling. **AMLOnlineEndpointConsoleLog** -| Field Name | Description | -| -- | -- | -| TimeGenerated | The timestamp (UTC) of when the log was generated. -| OperationName | The operation associated with log record. -| InstanceId | The ID of the instance that generated this log record. -| DeploymentName | The name of the deployment associated with the log record. -| ContainerName | The name of the container where the log was generated. -| Message | The content of the log. **AMLOnlineEndpointEventLog** (preview) --| Field Name | Description | -| -- | -- | -| TimeGenerated | The timestamp (UTC) of when the log was generated. -| OperationName | The operation associated with log record. -| InstanceId | The ID of the instance that generated this log record. -| DeploymentName | The name of the deployment associated with the log record. -| Name | The name of the event. -| Message | The content of the event. -- ## Next steps |
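As a quick illustration of how the traffic-log duration fields relate, the following sketch (with made-up numbers, not real log data) derives where the time for one request was spent:

```python
# Hypothetical AMLOnlineEndpointTrafficLog record (values are made up).
record = {
    "TotalDurationMs": 530.0,        # start -> last response byte to client
    "RequestDurationMs": 20.0,       # start -> last request byte from client
    "ResponseDurationMs": 500.0,     # start -> first response byte from model
    "RequestThrottlingDelayMs": 5.0,
    "ResponseThrottlingDelayMs": 2.0,
}

# Time the model spent before producing its first response byte.
model_time_ms = record["ResponseDurationMs"] - record["RequestDurationMs"]
# Time spent streaming the response back to the client.
send_back_ms = record["TotalDurationMs"] - record["ResponseDurationMs"]
# Combined delay attributed to network throttling.
throttling_ms = (record["RequestThrottlingDelayMs"]
                 + record["ResponseThrottlingDelayMs"])

print(f"model processing ~{model_time_ms} ms")
print(f"returning response ~{send_back_ms} ms")
print(f"network throttling {throttling_ms} ms")
```

Here the model dominates the latency, so tuning the scoring script (rather than the network) would be the place to start.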
machine-learning | How To Submit Spark Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md | These prerequisites cover the submission of a Spark job from Azure Machine Learn > [!NOTE] > - To ensure successful execution of the Spark job, assign the **Contributor** and **Storage Blob Data Contributor** roles, on the Azure storage account used for data input and output, to the identity that the Spark job uses+> - Public Network Access should be enabled in the Azure Synapse workspace to ensure successful execution of the Spark job using an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md). > - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access. > - Serverless Spark compute supports a managed virtual network (preview). If a [managed network is provisioned for the serverless Spark compute, the corresponding private endpoints for the storage account should also be provisioned](./how-to-managed-network.md#configure-for-serverless-spark-jobs) to ensure data access. |
machine-learning | Monitor Azure Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md | You can configure the following logs for Azure Machine Learning: | AmlComputeClusterNodeEvent (deprecated) | Events from nodes within an Azure Machine Learning compute cluster. | | AmlComputeJobEvent | Events from jobs running on Azure Machine Learning compute. | | AmlComputeCpuGpuUtilization | ML services compute CPU and GPU utilization logs. |+| AmlOnlineEndpointTrafficLog | Logs for traffic to online endpoints. | +| AmlOnlineEndpointConsoleLog | Logs that the containers for online endpoints write to the console. | +| AmlOnlineEndpointEventLog | Logs for events regarding the life cycle of online endpoints. | | AmlRunStatusChangedEvent | ML run status changes. | | ModelsChangeEvent | Events when ML model is accessed created or deleted. | | ModelsReadEvent | Events when ML model is read. | Data in Azure Monitor Logs is stored in tables, with each table having its own s | AmlPipelineEvent | Events when ML pipeline draft or endpoint or module are accessed (read, created, or deleted).Category includes:PipelineReadEvent,PipelineChangeEvent. | | AmlRunEvent | Events when ML experiments are accessed (read, created, or deleted). Category includes:RunReadEvent,RunEvent. | | AmlEnvironmentEvent | Events when ML environment configurations (read, created, or deleted). Category includes:EnvironmentReadEvent (very chatty),EnvironmentChangeEvent. |+| AmlOnlineEndpointTrafficLog | Logs for traffic to online endpoints. | +| AmlOnlineEndpointConsoleLog | Logs that the containers for online endpoints write to the console. | +| AmlOnlineEndpointEventLog | Logs for events regarding the life cycle of online endpoints. | > [!NOTE] > Effective February 2022, the AmlComputeClusterNodeEvent table will be deprecated. We recommend that you instead use the AmlComputeClusterEvent table. |
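The new online-endpoint log categories are enabled through a diagnostic setting on the workspace. A hedged Azure CLI sketch follows — the resource IDs are placeholders, and the command is echoed as a dry run:

```shell
# Sketch: route the three online-endpoint log categories to a Log Analytics
# workspace via a diagnostic setting. IDs below are placeholders.
WS_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<ws>"
LA_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<la>"

LOGS='[{"category":"AmlOnlineEndpointTrafficLog","enabled":true},{"category":"AmlOnlineEndpointConsoleLog","enabled":true},{"category":"AmlOnlineEndpointEventLog","enabled":true}]'

# Echoed as a dry run; run the printed command with real resource IDs.
echo az monitor diagnostic-settings create \
  --name endpoint-logs --resource "$WS_ID" \
  --workspace "$LA_ID" --logs "$LOGS"
```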
machine-learning | Monitor Resource Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-resource-reference.md | -# Monitoring Azure machine learning data reference +# Monitoring Azure Machine Learning data reference Learn about the data and resources collected by Azure Monitor from your Azure Machine Learning workspace. See [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md) for details on collecting and analyzing monitoring data. The following schemas are in use by Azure Machine Learning | CorrelationId | A GUID used to group together a set of related events, when applicable. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure Active Directory (Azure AD) tenant ID the operation was submitted for. | | AmlComputeInstanceName | "The name of the compute instance associated with the log entry. | ### AmlDataLabelEvent table The following schemas are in use by Azure Machine Learning | CorrelationId | A GUID used to group together a set of related events, when applicable. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | AmlProjectId | The unique identifier of the Azure Machine Learning project. | | AmlProjectName | The name of the Azure Machine Learning project. | | AmlLabelNames | The label class names which are created for the project. | The following schemas are in use by Azure Machine Learning | AmlWorkspaceId | A GUID and unique ID of the Azure Machine Learning workspace. 
| | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | AmlDatasetId | The ID of the Azure Machine Learning Data Set. | | AmlDatasetName | The name of the Azure Machine Learning Data Set. | The following schemas are in use by Azure Machine Learning | AmlWorkspaceId | A GUID and unique ID of the Azure Machine Learning workspace. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | AmlDatastoreName | The name of the Azure Machine Learning Data Store. | ### AmlDeploymentEvent table The following schemas are in use by Azure Machine Learning | ResultType | The status of the event. Typical values include Started, In Progress, Succeeded, Failed, Active, and Resolved. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | AmlServiceName | The name of the Azure Machine Learning Service. | ### AmlInferencingEvent table The following schemas are in use by Azure Machine Learning | ResultType | The status of the event. Typical values include Started, In Progress, Succeeded, Failed, Active, and Resolved. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. 
| +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | AmlServiceName | The name of the Azure Machine Learning Service. | ### AmlModelsEvent table The following schemas are in use by Azure Machine Learning | ResultType | The status of the event. Typical values include Started, In Progress, Succeeded, Failed, Active, and Resolved. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | ResultSignature | The HTTP status code of the event. Typical values include 200, 201, 202 etc. | | AmlModelName | The name of the Azure Machine Learning Model. | The following schemas are in use by Azure Machine Learning | AmlWorkspaceId | The name of the Azure Machine Learning workspace. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | AmlModuleId | A GUID and unique ID of the module.| | AmlModelName | The name of the Azure Machine Learning Model. | | AmlPipelineId | The ID of the Azure Machine Learning pipeline. | The following schemas are in use by Azure Machine Learning | OperationName | The name of the operation associated with the log entry | | AmlWorkspaceId | A GUID and unique ID of the Azure Machine Learning workspace. | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | RunId | The unique ID of the run. 
| ### AmlEnvironmentEvent table The following schemas are in use by Azure Machine Learning | Level | The severity level of the event. Must be one of Informational, Warning, Error, or Critical. | | OperationName | The name of the operation associated with the log entry | | Identity | The identity of the user or application that performed the operation. |-| AadTenantId | The AAD tenant ID the operation was submitted for. | +| AadTenantId | The Azure AD tenant ID the operation was submitted for. | | AmlEnvironmentName | The name of the Azure Machine Learning environment configuration. | | AmlEnvironmentVersion | The name of the Azure Machine Learning environment configuration version. | +### AMLOnlineEndpointTrafficLog table (preview) ++For more information on this log, see [Monitor online endpoints](how-to-monitor-online-endpoints.md). ++### AMLOnlineEndpointConsoleLog +++For more information on this log, see [Monitor online endpoints](how-to-monitor-online-endpoints.md). ++### AMLOnlineEndpointEventLog (preview) +++For more information on this log, see [Monitor online endpoints](how-to-monitor-online-endpoints.md). ## See also - See [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md) for a description of monitoring Azure Machine Learning. |
machine-learning | How To Create Manage Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md | Use `docker images` to check if the image was pulled successfully. If your ima This type of error is usually related to the runtime lacking required packages. If you're using the default environment, make sure the image of your runtime is using the latest version; learn more: [runtime update](#update-runtime-from-ui). If you're using a custom image with a conda environment, make sure you've installed all required packages in your conda environment; learn more: [customize Prompt flow environment](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime). +#### Request timeout issue ++##### Request timeout error shown in UI ++**MIR runtime request timeout error in the UI:** +++Error in the example says "UserError: Upstream request timeout". ++**Compute instance runtime request timeout error:** +++Error in the example says "UserError: Invoking runtime gega-ci timeout, error message: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing". ++#### How to identify which node consumes the most time ++1. Check the runtime logs. ++2. Look for the following warning log format: ++    {node_name} has been running for {duration} seconds. ++   For example: ++   - Case 1: Python script node running for a long time. ++     :::image type="content" source="./media/how-to-create-manage-runtime/runtime-timeout-running-for-long-time.png" alt-text="Screenshot of a timeout run logs in the studio UI. " lightbox = "./media/how-to-create-manage-runtime/runtime-timeout-running-for-long-time.png"::: ++     In this case, you can see that the `PythonScriptNode` was running for a long time (almost 300 seconds); you can then check the node details to find the problem. ++   - Case 2: LLM node running for a long time.
++ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-timeout-by-language-model-timeout.png" alt-text="Screenshot of timeout logs caused by LLM timeout in the studio UI. " lightbox = "./media/how-to-create-manage-runtime/runtime-timeout-by-language-model-timeout.png"::: ++ In this case, if you find the message `request canceled` in the logs, it may be due to the OpenAI API call taking too long and exceeding the runtime limit. ++ An OpenAI API Timeout could be caused by a network issue or a complex request that requires more processing time. For more information, see [OpenAI API Timeout](https://help.openai.com/en/articles/6897186-timeout). ++ You can try waiting a few seconds and retrying your request. This usually resolves any network issues. ++ If retrying doesn't work, check whether you're using a long context model, such as 'gpt-4-32k', and have set a large value for `max_tokens`. If so, it's expected behavior because your prompt may generate a very long response that takes longer than the interactive mode upper threshold. In this situation, we recommend trying 'Bulk test', as this mode doesn't have a timeout setting. ++3. If you can't find anything in the runtime logs to indicate a specific node issue ++ Please contact the Prompt Flow team ([promptflow-eng](mailto:aml-pt-eng@microsoft.com)) with the runtime logs. We'll try to identify the root cause. + ### Compute instance runtime related #### How to find the compute instance runtime log for further investigation?
This is because you're cloning a flow from others that is using compute instance as #### Compute instance behind VNet If your compute instance is behind a VNet, you need to make the following changes to ensure that your compute instance can be used in prompt flow:-- Please follow [required-public-internet-access](../how-to-secure-workspace-vnet.md#required-public-internet-access) to set your CI network configuration.-- If your storage account also behind vnet, please follow [Secure Azure storage accounts](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts) to create private endpoints for both table and blob.+- See [required-public-internet-access](../how-to-secure-workspace-vnet.md#required-public-internet-access) to set your compute instance network configuration. +- If your storage account is also behind a VNet, see [Secure Azure storage accounts](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts) to create private endpoints for both table and blob. - Make sure the managed identity of the workspace has the `Storage Blob Data Contributor` and `Storage Table Data Contributor` roles on the workspace default storage account. ### Managed endpoint runtime related |
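The node-timeout investigation above boils down to scanning the runtime logs for the `{node_name} has been running for {duration} seconds` warning. A minimal sketch, assuming the warning text appears verbatim inside each log line (adjust the pattern to your actual log format):

```python
import re

# Matches the warning format described in the timeout troubleshooting above:
#   "{node_name} has been running for {duration} seconds"
# The surrounding log text is an assumption; tune the pattern to your logs.
WARNING = re.compile(r"(?P<node>\S+) has been running for (?P<secs>\d+(?:\.\d+)?) seconds")

def slowest_node(log_lines):
    """Return (node_name, seconds) for the longest-running node, or None."""
    best = None
    for line in log_lines:
        m = WARNING.search(line)
        if m:
            secs = float(m.group("secs"))
            if best is None or secs > best[1]:
                best = (m.group("node"), secs)
    return best

logs = [
    "WARNING PythonScriptNode has been running for 295 seconds.",
    "WARNING llm_node has been running for 45 seconds.",
]
print(slowest_node(logs))  # -> ('PythonScriptNode', 295.0)
```

If the helper returns nothing, that matches case 3 above: nothing in the runtime logs points at a specific node, so escalate with the logs attached.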
machine-learning | Reference Managed Online Endpoints Vm Sku List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md | This table shows the VM SKUs that are supported for Azure Machine Learning manag * For more information on configuration details such as CPU and RAM, see [Azure Machine Learning Pricing](https://azure.microsoft.com/pricing/details/machine-learning/) and [VM sizes](../virtual-machines/sizes.md). | Relative Size | General Purpose | Compute Optimized | Memory Optimized | GPU |-| | | | | | -| V.Small | Standard_DS1_v2 <br/> Standard_DS2_v2 | Standard_F2s_v2 | Standard_E2s_v3 | Standard_NC4as_T4_v3 | -| Small | Standard_DS3_v2 | Standard_F4s_v2 | Standard_E4s_v3 | Standard_NC6s_v2 <br/> Standard_NC6s_v3 <br/> Standard_NC8as_T4_v3 | -| Medium | Standard_DS4_v2 | Standard_F8s_v2 | Standard_E8s_v3 | Standard_NC12s_v2 <br/> Standard_NC12s_v3 <br/> Standard_NC16as_T4_v3 | -| Large | Standard_DS5_v2 | Standard_F16s_v2 | Standard_E16s_v3 | Standard_NC24s_v2 <br/> Standard_NC24s_v3 <br/> Standard_NC64as_T4_v3 | -| X-Large| - | Standard_F32s_v2 <br/> Standard_F48s_v2 <br/> Standard_F64s_v2 <br/> Standard_F72s_v2 <br/> Standard_FX24mds <br/> Standard_FX36mds <br/> Standard_FX48mds| Standard_E32s_v3 <br/> Standard_E48s_v3 <br/> Standard_E64s_v3 | Standard_ND40rs_v2 <br/> Standard_ND96asr_v4 <br/> Standard_ND96amsr_A100_v4 <br/>| +| | | | | | +| X-Small | Standard_DS1_v2 <br/> Standard_DS2_v2 <br/> Standard_D2a_v4 <br/> Standard_D2as_v4 | Standard_F2s_v2 | Standard_E2s_v3 | Standard_NC4as_T4_v3 | +| Small | Standard_DS3_v2 <br/> Standard_D4a_v4 <br/> Standard_D4as_v4 | Standard_F4s_v2 <br/> Standard_FX4mds | Standard_E4s_v3 | Standard_NC6s_v2 <br/> Standard_NC6s_v3 <br/> Standard_NC8as_T4_v3 | +| Medium | Standard_DS4_v2 </br> Standard_D8a_v4 </br> Standard_D8as_v4 | Standard_F8s_v2 </br> Standard_FX12mds | Standard_E8s_v3 | Standard_NC12s_v2 <br/> Standard_NC12s_v3 <br/> 
Standard_NC16as_T4_v3 | +| Large | Standard_DS5_v2 </br> Standard_D16a_v4 </br> Standard_D16as_v4 | Standard_F16s_v2 | Standard_E16s_v3 | Standard_NC24s_v2 <br/> Standard_NC24s_v3 <br/> Standard_NC64as_T4_v3 </br> Standard_NC24ads_A100_v4 | +| X-Large | Standard_D32a_v4 </br> Standard_D32as_v4 </br> Standard_D48a_v4 </br> Standard_D48as_v4 </br> Standard_D64a_v4 </br> Standard_D64as_v4 </br> Standard_D96a_v4 </br> Standard_D96as_v4 | Standard_F32s_v2 <br/> Standard_F48s_v2 <br/> Standard_F64s_v2 <br/> Standard_F72s_v2 <br/> Standard_FX24mds <br/> Standard_FX36mds <br/> Standard_FX48mds | Standard_E32s_v3 <br/> Standard_E48s_v3 <br/> Standard_E64s_v3 | Standard_NC48ads_A100_v4 </br> Standard_NC96ads_A100_v4 </br> Standard_ND96asr_v4 </br> Standard_ND96amsr_A100_v4 </br> Standard_ND40rs_v2 | > [!CAUTION]-> `Standard_DS1_v2` and `Standard_F2s_v2` may be too small for bigger models and may lead to container termination due to insufficient memory, not enough space on the disk, or probe failure as it takes too long to initiate the container. If you want to reduce the cost of deploying multiple models with managed online endpoint, see [the example for multi models](how-to-deploy-online-endpoints.md#use-more-than-one-model-in-a-deployment). If you face [OutOfQuota errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-outofquota) or [ReourceNotReady errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-resourcenotready), try bigger VM SKUs. +> `Standard_DS1_v2` and `Standard_F2s_v2` may be too small for bigger models and may lead to container termination due to insufficient memory, not enough space on the disk, or probe failure as it takes too long to initiate the container. If you face [OutOfQuota errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-outofquota) or [ResourceNotReady errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-resourcenotready), try bigger VM SKUs.
If you want to reduce the cost of deploying multiple models with managed online endpoint, see [the example for multi models](how-to-deploy-online-endpoints.md#use-more-than-one-model-in-a-deployment). |
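The caution above amounts to "move up a tier when a small SKU fails". A purely illustrative lookup over the General Purpose column of the table, abbreviated to one representative SKU per tier (the helper name is hypothetical, not an Azure API):

```python
# Size tiers in ascending order, mirroring the table above.
TIERS = ["X-Small", "Small", "Medium", "Large", "X-Large"]

# One representative General Purpose SKU per tier, taken from the table above
# (each tier actually lists several SKUs; this is abbreviated for illustration).
GENERAL_PURPOSE = {
    "X-Small": "Standard_DS1_v2",
    "Small": "Standard_DS3_v2",
    "Medium": "Standard_DS4_v2",
    "Large": "Standard_DS5_v2",
    "X-Large": "Standard_D32a_v4",
}

def next_bigger_sku(current_tier):
    """Suggest a SKU one tier up, e.g. after OutOfQuota/ResourceNotReady errors."""
    i = TIERS.index(current_tier)
    if i + 1 >= len(TIERS):
        return None  # already at the largest tier in this column
    return GENERAL_PURPOSE[TIERS[i + 1]]

print(next_bigger_sku("X-Small"))  # -> Standard_DS3_v2
```

For GPU or memory-optimized workloads you'd build the same lookup from the other columns of the table.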
managed-grafana | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/faq.md | + + Title: Azure Managed Grafana FAQ +description: Frequently asked questions about Azure Managed Grafana ++++ Last updated : 07/17/2023++++# Azure Managed Grafana FAQ ++This article answers frequently asked questions about Azure Managed Grafana. ++## Do you use open source Grafana for Managed Grafana? ++No. Managed Grafana hosts a commercial version called [Grafana Enterprise](https://grafana.com/products/enterprise/grafana/) that Microsoft is licensing from Grafana Labs. While not all of the Enterprise features are available yet, Managed Grafana continues to add support as these features are fully integrated with Azure. ++> [!NOTE] +> [Grafana Enterprise plugins](https://grafana.com/grafan) for Managed Grafana. ++## Does Managed Grafana encrypt my data? ++Yes. Managed Grafana always encrypts all data at rest and in transit. It supports [encryption at rest](./encryption.md) using Microsoft-managed keys. All network communication is over TLS 1.2. You can further restrict network traffic using a [private link](./how-to-set-up-private-access.md) for connecting to Grafana and [managed private endpoints](./how-to-connect-to-data-source-privately.md) for data sources. ++## Where does Managed Grafana data reside? ++Customer data, including dashboards and data source configuration, created in Managed Grafana is stored in the region where the customer's Managed Grafana workspace is located. This data residency applies to all available regions. Customers may move, copy, or access their data from any location globally. ++## Does Managed Grafana support Grafana's built-in SAML and LDAP authentications? ++No. Managed Grafana uses its own implementation of Azure Active Directory authentication. ++## Can I install more plugins? ++No. Currently all Grafana plugins are preinstalled. Managed Grafana supports all popular plugins for Azure data sources.
++## Next steps ++> [!div class="nextstepaction"] +> [About Azure Managed Grafana](./overview.md) |
migrate | Common Questions Business Case | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-business-case.md | This article answers common questions about Business case in Azure Migrate. If y ### How can I export the business case? -You can click on export from the Business case to export it in an .xlsx file. If you see the **Export** gesture as disabled, you need to recalculate the business case by modifying any one assumption (Azure or on-premises) in the Business Case and click on Save. For example: +You can select export from the Business case to export it in an .xlsx file. If you see the **Export** gesture as disabled, you need to recalculate the business case by modifying any one assumption (Azure or on-premises) in the Business Case and select **Save**. For example: 1. Go to a business case and select **Edit assumptions** and choose **Azure assumptions**. 1. Select **Reset** next to **Performance history duration date range is outdated** warning. You could also choose to change any other setting. Currently, you can create a Business case on servers and workloads discovered us ### Why is the Build business case feature disabled? -The **Build business case** feature will be enabled only when you have discovery performed using an Azure Migrate appliance for servers and workloads in your VMware, Hyper-V and Physical/Baremetal environment. The Business case feature is not supported for servers and/or workloads imported via a .csv file. +The **Build business case** feature will be enabled only when you have discovery performed using an Azure Migrate appliance for servers and workloads in your VMware, Hyper-V and Physical/Baremetal environment. The Business case feature isn't supported for servers and/or workloads imported via a .csv file. ### Why can't I build business case from my project?
-You will not be able to create a business case if your project is in one of these 3 project regions: +You won't be able to create a business case if your project is in one of these two project regions: -Germany West Central, East Asia and Switzerland North. +Germany West Central and Sweden Central To verify in an existing project: 1. You can use the https://portal.azure.com/ URL to get started To verify in an existing project: 3. On the **Azure Migrate: Discovery and assessment** tool, select **Overview**. 4. Under Project details, select **Properties**. 5. Check the Project location.-6. The Business case feature is not supported in the following regions: +6. The Business case feature isn't supported in the following regions: - Germany West Central, East Asia and Switzerland North. + Germany West Central and Sweden Central ### Why can't I change the currency during business case creation? Currently, the currency is defaulted to USD. Currently, the currency is defaulted to USD. There are multiple possibilities for this issue. -- Discovery hasn't completed - Wait for the discovery to complete. It is recommended to wait for at least 24 hours.+- Discovery hasn't completed - Wait for the discovery to complete. It's recommended to wait for at least 24 hours. - Check and resolve any discovery issues. - Changes to discovery happened after creating the Business case. To deep dive into sizing, readiness, and Azure cost estimates, you can create re ### Does the Azure SQL recommendation logic include SQL consolidation?-No, it does not include SQL consolidation. +No, it doesn't include SQL consolidation. ## Next steps |
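The unsupported-region answer above can be sketched as a tiny helper; the function name is hypothetical and the region list mirrors the updated answer:

```python
# Regions where the Business case feature isn't supported, per the answer above.
UNSUPPORTED_REGIONS = {"Germany West Central", "Sweden Central"}

def can_build_business_case(project_region):
    """True when the project's region supports building a Business case.
    Illustrative only; check Project properties in the portal for the region."""
    return project_region not in UNSUPPORTED_REGIONS

print(can_build_business_case("East US"))         # -> True
print(can_build_business_case("Sweden Central"))  # -> False
```

The region to pass in is the one shown under **Project details** > **Properties** in the verification steps above.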
migrate | How To Upgrade Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-upgrade-windows.md | -> This feature is currently available only for VMWare agentless migration. +> This feature is currently available only for [VMware agentless migration](tutorial-migrate-vmware.md). ## Prerequisites To upgrade Windows during the test migration, follow these steps: 5. In the **Replicating machines** tab, right-click the VM to test and select **Test migrate**. -6. Select the **Upgrade available** option. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. + :::image type="content" source="./media/how-to-upgrade-windows/test-migration.png" alt-text="Screenshot displays the Test Migrate option."::: ++6. Select the **Upgrade available** option. ++ :::image type="content" source="./media/how-to-upgrade-windows/upgrade-available-inline.png" alt-text="Screenshot with the Upgrade available option." lightbox="./media/how-to-upgrade-windows/upgrade-available-expanded.png"::: ++7. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. ++ :::image type="content" source="./media/how-to-upgrade-windows/upgrade-available-options.png" alt-text="Screenshot with the available servers."::: ++ The **Upgrade available** option changes to **Upgrade configured**. 7. Select **Test migration** to initiate the test migration followed by the OS upgrade. After you've verified that the test migration works as expected, you can migrate 1. On the **Get started** page > **Servers, databases and web apps**, select **Replicate**. A Start Replication job begins. 2. In **Replicating machines**, right-click the VM and select **Migrate**. ++ :::image type="content" source="./media/how-to-upgrade-windows/migration.png" alt-text="Screenshot displays the Migrate option."::: + 3. 
In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**. - By default, Azure Migrate shuts down the on-premises VM to ensure minimum data loss. - If you don't want to shut down the VM, select No. 4. Select the **Upgrade available** option. -5. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. The upgrade available option changes to upgrade configured. ++ :::image type="content" source="./media/how-to-upgrade-windows/migrate-upgrade-available-inline.png" alt-text="Screenshot with the Upgrade available option in the Migration screen." lightbox="./media/how-to-upgrade-windows/migrate-upgrade-available-expanded.png"::: ++5. In the pane that appears, select the target OS version that you want to upgrade to and select **Apply**. ++ :::image type="content" source="./media/how-to-upgrade-windows/migrate-upgrade-options.png" alt-text="Screenshot with the available servers in the Migrate screen."::: ++ The **Upgrade available** option changes to **Upgrade configured**. ++ :::image type="content" source="./media/how-to-upgrade-windows/migrate-upgrade-configured-inline.png" alt-text="Screenshot with the Upgrade configured option in the Migration screen." lightbox="./media/how-to-upgrade-windows/migrate-upgrade-configured-expanded.png"::: + 5. Select **Migrate** to start the migration and the upgrade. After you've verified that the test migration works as expected, you can migrate Investigate the [cloud migration journey](https://learn.microsoft.com/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework. - + |
migrate | Migrate Support Matrix Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md | Support | Details **Operating systems** | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/reference/integration-services) enabled. **Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.<br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers to pull the application data: list, tail, awk, grep, locate, head, sed, ps, print, sort, uniq. Based on OS type and the type of package manager being used, here are some additional commands: rpm/snap/dpkg, yum/apt-cache, mssql-server. **Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers.-**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP). +**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP).<br /> <br />If using domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /><br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations **Discovery** | Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. 
<br/><br/> The appliance gathers the information about the software inventory from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers. ## SQL Server instance and database discovery requirements |
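The domain-credential port requirements above can be pre-checked with a best-effort TCP probe. This sketch is illustrative only: plain socket connects can't exercise the UDP side of ports 88 and 464, and the host name in the comment is hypothetical:

```python
import socket

# TCP ports the appliance needs when using domain credentials, per the list above.
DOMAIN_PORTS = {
    135: "RPC Endpoint",
    389: "LDAP",
    636: "LDAP SSL",
    445: "SMB",
    88: "Kerberos authentication",    # also UDP in the requirements above
    464: "Kerberos change operations",  # also UDP in the requirements above
}

def tcp_port_open(host, port, timeout=2.0):
    """Best-effort TCP reachability probe; a pre-check sketch, not a substitute
    for the documented requirements."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical domain controller name):
# unreachable = [p for p in DOMAIN_PORTS if not tcp_port_open("dc01.contoso.local", p)]
```

Run the probe from the machine hosting the Azure Migrate appliance, since that is where the connections originate.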
migrate | Migrate Support Matrix Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md | Support | Details **Server requirements** | For software inventory, VMware Tools must be installed and running on your servers. The VMware Tools version must be version 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.<br/><br/>WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers. **vCenter Server account** | To interact with the servers for software inventory, the vCenter Server read-only account that's used for assessment must have privileges for guest operations on VMware VMs. **Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers.-**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory. +**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory. 
<br /><br /> If using domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /> <br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations + **Discovery** | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers.<br/><br/> The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs.<br/><br/> Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers. ## SQL Server instance and database discovery requirements |
mysql | Concepts Data In Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md | Title: Data-in replication - Azure Database for MySQL - Flexible Server -description: Learn about using Data-in replication to synchronize from an external server into the Azure Database for MySQL Flexible service. +description: Learn about using Data-in replication to synchronize from an external server into the Azure Database for MySQL Flexible Server. -Data-in replication allows you to synchronize data from an external MySQL server into an Azure Database for MySQL flexible server. The external server can be on-premises, in virtual machines, Azure Database for MySQL single server, or a database service hosted by other cloud providers. Data-in replication is based on the binary log (binlog) file position or GTID-based replication. To learn more about binlog replication, see the [MySQL Replication](https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html). +Data-in replication allows you to synchronize data from an external MySQL server into an Azure Database for MySQL Flexible Server. The external server can be on-premises, in virtual machines, Azure Database for MySQL single server, or a database service hosted by other cloud providers. Data-in replication is based on the binary log (binlog) file position or GTID-based replication. To learn more about binlog replication, see the [MySQL Replication documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html). > [!NOTE] > Configuring Data-in replication for servers enabled with high availability is supported only through GTID-based replication. The stored procedure for replication using GTID is available on all HA-enabled s ### Filter -The parameter `replicate_wild_ignore_table` creates a replication filter for tables on the replica server. 
To modify this parameter from the Azure portal, navigate to Azure Database for MySQL flexible server used as replica and select "Server Parameters" to view/edit the `replicate_wild_ignore_table` parameter. +The parameter `replicate_wild_ignore_table` creates a replication filter for tables on the replica server. To modify this parameter from the Azure portal, navigate to Azure Database for MySQL Flexible Server used as replica and select "Server Parameters" to view/edit the `replicate_wild_ignore_table` parameter. ### Requirements |
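The `replicate_wild_ignore_table` filter above matches `db.table` names with MySQL LIKE-style wildcards (`%` for any run of characters, `_` for a single character). A sketch of that matching, under the assumption of plain wildcard semantics with no escape handling; the server performs the real filtering:

```python
import re

def wild_ignore_match(pattern, db_table):
    """Check whether a 'db.table' name matches a replicate_wild_ignore_table
    pattern, assuming MySQL LIKE-style wildcards: % = any run, _ = one char.
    Illustrative only; escaped wildcards (\\%, \\_) aren't handled here."""
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, db_table, flags=re.DOTALL) is not None

print(wild_ignore_match("mydb.%", "mydb.orders"))     # -> True
print(wild_ignore_match("mydb.%", "otherdb.orders"))  # -> False
```

A pattern like `mydb.%` therefore ignores every table in `mydb` on the replica, which is the typical way this parameter is used.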
nat-gateway | Tutorial Hub Spoke Nat Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-nat-firewall.md | -In this tutorial, you'll learn how to integrate a NAT gateway with an Azure Firewall in a hub and spoke network +In this tutorial, you learn how to integrate a NAT gateway with an Azure Firewall in a hub and spoke network Azure Firewall provides [2,496 SNAT ports per public IP address](../firewall/integrate-with-nat-gateway.md) configured per backend Virtual Machine Scale Set instance (minimum of two instances). You can associate up to 250 public IP addresses to Azure Firewall. Depending on your architecture requirements and traffic patterns, you may require more SNAT ports than what Azure Firewall can provide. You may also require the use of fewer public IPs while also requiring more SNAT ports. A better method for outbound connectivity is to use NAT gateway. NAT gateway provides 64,512 SNAT ports per public IP address and can be used with up to 16 public IP addresses. The hub virtual network contains the firewall subnet that is associated with the | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **Create new**. </br> Enter **TutorialNATHubSpokeFW-rg**. </br> Select **OK**. | + | Resource group | Select **Create new**. </br> Enter **test-rg**. </br> Select **OK**. | | **Instance details** | |- | Name | Enter **myVNet-Hub**. | + | Name | Enter **vnet-hub**. | | Region | Select **South Central US**. | 5. Select **Next: IP Addresses**. 6. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. -7. In **IPv4 address space** enter **10.1.0.0/16**. +7. In **IPv4 address space** enter **10.0.0.0/16**. 8. Select **+ Add subnet**. 
The hub virtual network contains the firewall subnet that is associated with the | Setting | Value | | - | -- | | Subnet name | Enter **subnet-private**. |- | Subnet address range | Enter **10.1.0.0/24**. | + | Subnet address range | Enter **10.0.0.0/24**. | 10. Select **Add**. The hub virtual network contains the firewall subnet that is associated with the | Setting | Value | | - | -- |- | Bastion name | Enter **myBastion**. | - | AzureBastionSubnet address space | Enter **10.1.1.0/26**. | - | Public IP address | Select **Create new**. </br> In **Name** enter **myPublicIP-Bastion**. </br> Select **OK**. | + | Bastion name | Enter **bastion**. | + | AzureBastionSubnet address space | Enter **10.0.1.0/26**. | + | Public IP address | Select **Create new**. </br> In **Name** enter **public-ip**. </br> Select **OK**. | 14. In **Firewall** select **Enable**. The hub virtual network contains the firewall subnet that is associated with the | Setting | Value | | - | -- |- | Firewall name | Enter **myFirewall**. | - | Firewall subnet address space | Enter **10.1.2.0/26**. | - | Public IP address | Select **Create new**. </br> In **Name** enter **myPublicIP-Firewall**. </br> Select **OK**. | + | Firewall name | Enter **firewall**. | + | Firewall subnet address space | Enter **10.0.2.0/26**. | + | Public IP address | Select **Create new**. </br> In **Name** enter **public-ip-firewall**. </br> Select **OK**. | 16. Select **Review + create**. 17. Select **Create**. -It will take a few minutes for the bastion host and firewall to deploy. When the virtual network is created as part of the deployment, you can proceed to the next steps. +It takes a few minutes for the bastion host and firewall to deploy. When the virtual network is created as part of the deployment, you can proceed to the next steps. ## Create the NAT gateway -All outbound internet traffic will traverse the NAT gateway to the internet. 
Use the following example to create a NAT gateway for the hub and spoke network and associate it with the **AzureFirewallSubnet**. +All outbound internet traffic traverses the NAT gateway to the internet. Use the following example to create a NAT gateway for the hub and spoke network and associate it with the **AzureFirewallSubnet**. 1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results. All outbound internet traffic will traverse the NAT gateway to the internet. Use | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpokeFW-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | NAT gateway name | Enter **myNATgateway**. | + | NAT gateway name | Enter **nat-gateway**. | | Region | Select **South Central US**. | | Availability zone | Select a **Zone** or **No zone**. | | TCP idle timeout (minutes) | Leave the default of **4**. | All outbound internet traffic will traverse the NAT gateway to the internet. Use 6. In **Outbound IP** in **Public IP addresses**, select **Create a new public IP address**. -7. Enter **myPublicIP-NAT** in **Name**. +7. Enter **public-ip-nat** in **Name**. 8. Select **OK**. 9. Select **Next: Subnet**. -10. In **Virtual Network** select **myVNet-Hub**. +10. In **Virtual Network** select **vnet-hub**. 11. Select **AzureFirewallSubnet** in **Subnet name**. The spoke virtual network contains the test virtual machine used to test the rou | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpokeFW-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | |- | Name | Enter **myVNet-Spoke**. | + | Name | Enter **vnet-spoke**. | | Region | Select **South Central US**. | 4. Select **Next: IP Addresses**. 5. 
In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated. -6. In **IPv4 address space** enter **10.2.0.0/16**. +6. In **IPv4 address space** enter **10.1.0.0/16**. 7. Select **+ Add subnet**. The spoke virtual network contains the test virtual machine used to test the rou | Setting | Value | | - | -- | | Subnet name | Enter **subnet-private**. |- | Subnet address range | Enter **10.2.0.0/24**. | + | Subnet address range | Enter **10.1.0.0/24**. | 9. Select **Add**. A virtual network peering is used to connect the hub to the spoke and the spoke 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -2. Select **myVNet-Hub**. +2. Select **vnet-hub**. 3. Select **Peerings** in **Settings**. A virtual network peering is used to connect the hub to the spoke and the spoke | Setting | Value | | - | -- | | **This virtual network** | |- | Peering link name | Enter **myVNet-Hub-To-myVNet-Spoke**. | + | Peering link name | Enter **vnet-hub-to-vnet-spoke**. | | Traffic to remote virtual network | Leave the default of **Allow (default)**. | | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | | Virtual network gateway or Route Server | Leave the default of **None**. | | **Remote virtual network** | |- | Peering link name | Enter **myVNet-Spoke-To-myVNet-Hub**. | + | Peering link name | Enter **vnet-spoke-to-vnet-hub**. | | Virtual network deployment model | Leave the default of **Resource manager**. | | Subscription | Select your subscription. |- | Virtual network | Select **myVNet-Spoke**. | + | Virtual network | Select **vnet-spoke**. | | Traffic to remote virtual network | Leave the default of **Allow (default)**. | | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | | Virtual network gateway or Route Server | Leave the default of **None**. 
| A virtual network peering is used to connect the hub to the spoke and the spoke ## Create spoke network route table -A route table will force all traffic leaving the spoke virtual network to the hub virtual network. The route table is configured with the private IP address of the Azure Firewall as the virtual appliance. +A route table forces all traffic leaving the spoke virtual network to the hub virtual network. The route table is configured with the private IP address of the Azure Firewall as the virtual appliance. ### Obtain private IP address of firewall The private IP address of the firewall is needed for the route table created later in this article. Use the following example to obtain the firewall private IP address. -1. In the search box at the top of the portal, enter **Firewall**. Select **Firewall** in the search results. +1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results. -2. Select **myFirewall**. +2. Select **firewall**. -3. In the **Overview** of **myFirewall**, note the IP address in the field **Firewall private IP**. The IP address should be **10.1.2.4**. +3. In the **Overview** of **firewall**, note the IP address in the field **Firewall private IP**. The IP address should be **10.0.2.4**. ### Create route table Create a route table to force all inter-spoke and internet egress traffic throug | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpokeFW-rg**. | + | Resource group | Select **test-rg**. | | **Instance details** | | | Region | Select **South Central US**. |- | Name | Enter **myRouteTable-Spoke**. | + | Name | Enter **route-table-spoke**. | | Propagate gateway routes | Select **No**. | 4. Select **Review + create**. Create a route table to force all inter-spoke and internet egress traffic throug 6. In the search box at the top of the portal, enter **Route table**. 
Select **Route tables** in the search results. -7. Select **myRouteTable-Spoke**. +7. Select **route-table-spoke**. 8. In **Settings** select **Routes**. Create a route table to force all inter-spoke and internet egress traffic throug | Setting | Value | | - | -- |- | Route name | Enter **Route-To-Hub**. | - | Address prefix destination | Select **IP Addresses**. | + | Route name | Enter **route-to-hub**. | + | Destination type | Select **IP Addresses**. | | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. | | Next hop type | Select **Virtual appliance**. |- | Next hop address | Enter **10.1.2.4**. | + | Next hop address | Enter **10.0.2.4**. | 11. Select **Add**. Create a route table to force all inter-spoke and internet egress traffic throug | Setting | Value | | - | -- |- | Virtual network | Select **myVNet-Spoke (TutorialNATHubSpokeFW-rg)**. | + | Virtual network | Select **vnet-spoke (test-rg)**. | | Subnet | Select **subnet-private**. | 15. Select **OK**. Traffic from the spoke through the hub must be allowed through and firewall poli 1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results. -2. Select **myFirewall**. +2. Select **firewall**. 3. In the **Overview** select **Migrate to firewall policy**. Traffic from the spoke through the hub must be allowed through and firewall poli | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpokeFW-rg**. | + | Resource group | Select **test-rg**. | | **Policy details** | |- | Name | Enter **myFirewallPolicy**. | + | Name | Enter **firewall-policy**. | | Region | Select **South Central US**. | 5. Select **Review + create**. Traffic from the spoke through the hub must be allowed through and firewall poli 1. In the search box at the top of the portal, enter **Firewall**. Select **Firewall Policies** in the search results. -2. Select **myFirewallPolicy**. +2. 
Select **firewall-policy**. 3. In **Settings** select **Network rules**. Traffic from the spoke through the hub must be allowed through and firewall poli | Setting | Value | | - | -- |- | Name | Enter **SpokeToInternet**. | + | Name | Enter **spoke-to-internet**. | | Rule collection type | Select **Network**. | | Priority | Enter **100**. | | Rule collection action | Select **Allow**. | | Rule collection group | Select **DefaultNetworkRuleCollectionGroup**. | | Rules | |- | Name | Enter **AllowWeb**. | + | Name | Enter **allow-web**. | | Source type | **IP Address**. |- | Source | Enter **10.2.0.0/24**. | + | Source | Enter **10.1.0.0/24**. | | Protocol | Select **TCP**. | | Destination Ports | Enter **80**,**443**. | | Destination Type | Select **IP Address**. | Traffic from the spoke through the hub must be allowed through and firewall poli ## Create test virtual machine -A Windows Server 2022 virtual machine is used to test the outbound internet traffic through the NAT gateway. Use the following example to create a Windows Server 2022 virtual machine. +An Ubuntu virtual machine is used to test the outbound internet traffic through the NAT gateway. Use the following example to create an Ubuntu virtual machine. -1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. +The following procedure creates a test virtual machine (VM) named **vm-spoke** in the virtual network. ++1. In the portal, search for and select **Virtual machines**. -2. Select **+ Create** then **Azure virtual machine**. +1. In **Virtual machines**, select **+ Create**, then **Azure virtual machine**. -3. In **Create a virtual machine** enter or select the following information in the **Basics** tab: +1. 
On the **Basics** tab of **Create a virtual machine**, enter or select the following information: | Setting | Value |- | - | -- | - | **Project details** | | + ||| + | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **TutorialNATHubSpokeFW-rg**. | - | **Instance details** | | - | Virtual machine name | Enter **myVM-Spoke**. | - | Region | Select **South Central US**. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Virtual machine name | Enter **vm-spoke**. | + | Region | Select **(US) South Central US**. | | Availability options | Select **No infrastructure redundancy required**. |- | Security type | Select **Standard**. | - | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. | + | Security type | Leave the default of **Standard**. | + | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. | | VM architecture | Leave the default of **x64**. | | Size | Select a size. |- | **Administrator account** | | + | **Administrator account** | | | Authentication type | Select **Password**. |- | Username | Enter a username. | + | Username | Enter **azureuser**. | | Password | Enter a password. |- | Confirm password | Reenter password. | + | Confirm password | Reenter the password. | | **Inbound port rules** | | | Public inbound ports | Select **None**. | -4. Select **Next: Disks** then **Next: Networking**. +1. Select the **Networking** tab at the top of the page. -5. In the Networking tab, enter or select the following information: +1. Enter or select the following information in the **Networking** tab: | Setting | Value |- | - | -- | - | **Network interface** | | - | Virtual network | Select **myVNet-Spoke**. | - | Subnet | Select **subnet-private (10.2.0.0/24)**. | + ||| + | **Network interface** | | + | Virtual network | Select **vnet-spoke**. | + | Subnet | Select **subnet-private (10.1.0.0/24)**. | | Public IP | Select **None**. 
|+ | NIC network security group | Select **Advanced**. | + | Configure network security group | Select **Create new**. </br> Enter **nsg-1** for the name. </br> Leave the rest at the defaults and select **OK**. | ++1. Leave the rest of the settings at the defaults and select **Review + create**. -6. Leave the rest of the options at the defaults and select **Review + create**. -7. Select **Create**. +1. Review the settings and select **Create**. +>[!NOTE] +>Virtual machines in a virtual network with a bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in bastion hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md). ## Test NAT gateway -You'll connect to the Windows Server 2022 virtual machines you created in the previous steps to verify that the outbound internet traffic is leaving the NAT gateway. +You connect to the Ubuntu virtual machine you created in the previous steps to verify that the outbound internet traffic is leaving the NAT gateway. ### Obtain NAT gateway public IP address Obtain the NAT gateway public IP address for verification of the steps later in 1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results. -2. Select **myPublic-NAT**. -+1. Select **public-ip-nat**. -3. Make note of value in **IP address**. The example used in this article is **20.225.88.213**. -+1. Make note of the value in **IP address**. The example used in this article is **20.225.88.213**. ### Test NAT gateway from spoke -Use Microsoft Edge on the Windows Server 2022 virtual machine to connect to https://whatsmyip.com to verify the functionality of the NAT gateway. - 1. In the search box at the top of the portal, enter **Virtual machine**.
Select **Virtual machines** in the search results. -2. Select **myVM-Spoke**. --3. Select **Connect** then **Bastion**. --4. Enter the username and password you entered when the virtual machine was created. --5. Select **Connect**. --6. Open **Microsoft Edge** when the desktop finishes loading. --7. In the address bar, enter **https://whatsmyip.com**. --8. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously. +1. Select **vm-spoke**. - :::image type="content" source="./media/tutorial-hub-spoke-nat-firewall/outbound-ip-address.png" alt-text="Screenshot of outbound IP address."::: +1. On the **Overview** page, select **Connect**, then select the **Bastion** tab. -## Clean up resources +1. Select **Use Bastion**. -If you're not going to continue to use this application, delete the created resources with the following steps: +1. Enter the username and password entered during VM creation. Select **Connect**. -1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results. +1. In the bash prompt, enter the following command: -2. Select **TutorialNATHubSpokeFW-rg**. + ```bash + curl ifconfig.me + ``` -3. In the **Overview** of **TutorialNATHubSpokeFW-rg**, select **Delete resource group**. +1. Verify the IP address returned by the command matches the public IP address of the NAT gateway. -4. In **TYPE THE RESOURCE GROUP NAME:**, enter **TutorialNATHubSpokeFW-rg**. + ```output + azureuser@vm-1:~$ curl ifconfig.me + 20.225.88.213 + ``` -5. Select **Delete**. +1. Close the Bastion connection to **vm-spoke**. ## Next steps Advance to the next article to learn how to integrate a NAT gateway with an Azure Load Balancer: |
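The hub-and-spoke tutorial above hinges on one route: **route-to-hub** sends **0.0.0.0/0** to the firewall's private IP **10.0.2.4** as a virtual appliance, while the more specific system route keeps intra-spoke traffic local. A minimal Python sketch of that longest-prefix-match decision, using the tutorial's address plan (the hub prefix **10.0.0.0/16** is an assumption inferred from the firewall IP; this models route selection only, not Azure's full routing behavior):

```python
import ipaddress

# Address plan from the tutorial. The hub VNet prefix is an assumption
# inferred from the firewall's private IP (10.0.2.4); the spoke values
# match the steps above.
hub_vnet = ipaddress.ip_network("10.0.0.0/16")
spoke_vnet = ipaddress.ip_network("10.1.0.0/16")
spoke_subnet = ipaddress.ip_network("10.1.0.0/24")
firewall_ip = ipaddress.ip_address("10.0.2.4")

# Sanity checks on the plan: the subnet fits inside its VNet, and the
# firewall's private IP lives in the hub, not the spoke.
assert spoke_subnet.subnet_of(spoke_vnet)
assert firewall_ip in hub_vnet and firewall_ip not in spoke_vnet

# route-table-spoke: the most specific matching prefix wins, so the
# VNet system route beats the 0.0.0.0/0 user-defined route for
# intra-spoke destinations.
routes = [
    (ipaddress.ip_network("10.1.0.0/16"), "VirtualNetwork"),               # system route
    (ipaddress.ip_network("0.0.0.0/0"), f"VirtualAppliance {firewall_ip}"),  # route-to-hub
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.0.5"))       # VirtualNetwork (stays in the spoke)
print(next_hop("20.225.88.213"))  # VirtualAppliance 10.0.2.4 (egress via firewall)
```

This is why the `curl ifconfig.me` test from **vm-spoke** reports the NAT gateway's public IP: internet-bound traffic first matches 0.0.0.0/0, goes to the firewall, and the firewall's subnet then egresses through the NAT gateway.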
network-watcher | Connection Monitor Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md | To enable the Network Performance Monitor solution for on-premises machines, do After you've enabled the solution, the workspace takes a couple of minutes to be displayed. - :::image type="content" source="./media/connection-monitor/network-performance-monitor-solution-enable.png" alt-text="Screenshot showing how to add the Network Performance Monitor solution in Connection Monitor." lightbox="./media/connection-monitor/network-performance-monitor-solution-enable.png"::: - Unlike Log Analytics agents, the Network Performance Monitor solution can be configured to send data only to a single Log Analytics workspace. If you wish to skip the installation process for enabling the Network Watcher extension, you can proceed with the creation of Connection Monitor and allow automatic enablement of the monitoring solution on your on-premises machines. |
network-watcher | Connection Monitor Virtual Machine Scale Set | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md | After the scale set has been created, enable the Network Watcher extension in th 1. Under **Settings**, select **Extensions**. -1. Select **Add extension**, and then select **Network Watcher Agent for Windows**, as shown in the following image: -- :::image type="content" source="./media/connection-monitor/nw-agent-extension.png" alt-text="Screenshot that shows the Network Watcher extension addition."::: +1. Select **Add extension**, and then select **Network Watcher Agent for Windows**. 1. Under **Network Watcher Agent for Windows**, select **Create**. 1. Under **Install extension**, select **OK**. In the Azure portal, to create a test group in a connection monitor, do the foll * To choose on-premises agents, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have the Network Performance Monitor configured. - 1. If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines). - 1. Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring.
If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents. :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection monitor."::: |
network-watcher | Diagnose Communication Problem Between Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md | Title: 'Tutorial: Diagnose communication problem between virtual networks - Azure portal' description: In this tutorial, you learn how to use Azure Network Watcher VPN troubleshoot to diagnose a communication problem between two Azure virtual networks connected by Azure VPN gateways.- + - Previously updated : 02/28/2023- Last updated : 07/17/2023 # Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different virtual network over a VPN connection. Fix the problem by correcting the key on **to-VNet1** connection to match the ke When no longer needed, delete the resource group and all of the resources it contains: -1. Enter *myResourceGroup* in the search box at the top of the portal. When you see **myResourceGroup** in the search results, select it. -2. Select **Delete resource group**. -3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**. +1. Enter ***myResourceGroup*** in the search box at the top of the portal. When you see **myResourceGroup** in the search results, select it. ++1. Select **Delete resource group**. ++1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**. ++1. Select **Delete** to confirm the deletion of the resource group and all its resources. ## Next steps |
network-watcher | Diagnose Vm Network Traffic Filtering Problem Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md | Azure allows and denies network traffic to and from a virtual machine based on i In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. |
network-watcher | Diagnose Vm Network Traffic Filtering Problem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md | Azure allows and denies network traffic to and from a virtual machine based on i In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. When no longer needed, delete the resource group and all of the resources it con 1. Select **Delete resource group**. -1. Enter ***myResourceGroup*** for **Enter resource group name to confirm deletion** and select **Delete**. +1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**. ++1. Select **Delete** to confirm the deletion of the resource group and all its resources. ## Next steps |
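IP flow verify reports whether a packet to or from the VM is allowed or denied, and which security rule decided the outcome. Conceptually, network security group rules are evaluated in priority order (lowest number first), the first match wins, and an implicit deny applies when nothing matches inbound traffic. The following Python sketch models only that evaluation order; the rule names and prefixes are hypothetical, and this is not Network Watcher's implementation:

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class Rule:
    name: str
    priority: int   # lower number is evaluated first
    access: str     # "Allow" or "Deny"
    source: str     # source CIDR prefix, "*" means any
    dest_ports: range

def evaluate(rules, source_ip: str, port: int):
    """Return (access, rule name) of the first rule, by priority, matching
    the source address and destination port."""
    src = ipaddress.ip_address(source_ip)
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.source != "*" and src not in ipaddress.ip_network(rule.source):
            continue
        if port not in rule.dest_ports:
            continue
        return rule.access, rule.name
    return "Deny", "DenyAll"  # implicit fallback when nothing matches

# Hypothetical inbound rules on the VM's NIC or subnet
rules = [
    Rule("AllowSSH", 100, "Allow", "10.0.0.0/16", range(22, 23)),
    Rule("DenyAllInbound", 65500, "Deny", "*", range(0, 65536)),
]

print(evaluate(rules, "10.0.0.4", 22))        # ('Allow', 'AllowSSH')
print(evaluate(rules, "203.0.113.10", 3389))  # ('Deny', 'DenyAllInbound')
```

When IP flow verify reports a denial, the rule name it returns plays the role of the second tuple element here, which is what lets you find and fix the offending rule.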
network-watcher | Monitor Vm Communication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/monitor-vm-communication.md | + + Title: 'Tutorial: Monitor network communication between two VMs - Azure portal' +description: In this tutorial, learn how to monitor network communication between two Azure virtual machines with Azure Network Watcher's connection monitor capability. +++ Last updated : 07/17/2023+++# Customer intent: I need to monitor the communication between two virtual machines in Azure. If the communication fails, I need to be alerted and I want to know why it failed, so that I can resolve the problem. +++# Tutorial: Monitor network communication between two virtual machines using the Azure portal ++Successful communication between a virtual machine (VM) and an endpoint such as another VM, can be critical for your organization. Sometimes, configuration changes break communication. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Create a virtual network +> * Create two virtual machines +> * Monitor communication between the two virtual machines +> * Diagnose a communication problem between the two virtual machines ++## Prerequisites ++- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++## Sign in to Azure ++Sign in to the [Azure portal](https://portal.azure.com). ++## Create a virtual network ++In this section, you create **myVNet** virtual network with two subnets and an Azure Bastion host. The first subnet is used for the virtual machine, and the second subnet is used for the Bastion host. ++1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** in the search results. 
++ :::image type="content" source="./media/monitor-vm-communication/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal."::: ++1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab: ++ | Setting | Value | + | | | + | **Project details** | | + | Subscription | Select your Azure subscription. | + | Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. | + | **Instance details** | | + | Virtual network name | Enter *myVNet*. | + | Region | Select **(US) East US**. | ++1. Select the **IP Addresses** tab, or select the **Next** button at the bottom of the page twice. ++1. Accept the default IP address space **10.0.0.0/16**. ++1. Select the pencil icon next to **default** subnet to rename it. Under **Subnet details** in the **Edit subnet** page, enter *mySubnet* for the **Name** and then select **Save**. ++1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++## Create two virtual machines ++In this section, you create two virtual machines: **myVM1** and **myVM2** to test the connection between them. ++### Create the first virtual machine ++1. In the search box at the top of the portal, enter *virtual machine*. Select **Virtual machines** in the search results. ++1. In **Virtual machines**, select **+ Create** then **+ Azure virtual machine**. ++1. Enter or select the following information in the **Basics** tab of **Create a virtual machine**. ++ | Setting | Value | + | - | -- | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **myResourceGroup**. | + | **Instance details** | | + | Virtual machine name | Enter *myVM1*. | + | Region | Select **(US) East US**. | + | Availability options | Select **No infrastructure redundancy required**. | + | Security type | Leave the default of **Standard**. 
| + | Image | Select **Ubuntu Server 20.04 LTS - x64 Gen2**. | + | Size | Choose a size or leave the default setting. | + | **Administrator account** | | + | Authentication type | Select **Password**. | + | Username | Enter a username. | + | Password | Enter a password. | + | Confirm password | Reenter password. | ++1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. ++1. In the Networking tab, select the following values: ++ | Setting | Value | + | | | + | **Network interface** | | + | Virtual network | Select **myVNet**. | + | Subnet | Select **mySubnet**. | + | Public IP | Select **None**. | + | NIC network security group | Select **None**. | ++1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++### Create the second virtual machine ++Repeat the steps in the previous section to create the second virtual machine and enter *myVM2* for the virtual machine name. ++## Create a connection monitor ++In this section, you create a connection monitor to monitor communication over TCP port 22 from *myVM1* to *myVM2*. ++1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher**. ++1. Under **Monitoring**, select **Connection monitor**. ++1. Select **+ Create**. ++ :::image type="content" source="./media/monitor-vm-communication/connection-monitor.png" alt-text="Screenshot shows Connection monitor page in the Azure portal."::: ++1. Enter or select the following information in the **Basics** tab of **Create Connection Monitor**: ++ | Setting | Value | + | - | -- | + | Connection Monitor Name | Enter *myConnectionMonitor*. | + | Subscription | Select your subscription. | + | Region | Select **East US**. | + | **Workspace configuration** | | + | Workspace configuration | Leave the default.
| ++ :::image type="content" source="./media/monitor-vm-communication/create-connection-monitor-basics.png" alt-text="Screenshot shows the Basics tab of creating a connection monitor in the Azure portal."::: ++1. Select the **Test groups** tab, or select **Next: Test groups** button. ++1. Enter *myTestGroup* in **Test group name**. ++1. In the **Add test group details** page, select **+ Add sources** to add the source virtual machine. ++1. In the **Add sources** page, select **myVM1** as the source endpoint, and then select **Add endpoints**. ++ :::image type="content" source="./media/monitor-vm-communication/add-source-endpoint.png" alt-text="Screenshot shows how to add a source endpoint for a connection monitor in the Azure portal."::: ++ > [!NOTE] + > You can use **Subscription**, **Resource group**, **VNET**, or **Subnet** filters to narrow down the list of virtual machines. ++1. In the **Add test group details** page, select **Add Test configuration**, and then enter or select the following information: ++ | Setting | Value | + | - | -- | + | Test configuration name | Enter *SSH-from-myVM1-to-myVM2*. | + | Protocol | Select **TCP**. | + | Destination port | Enter *22*. | + | Test frequency | Select the default **Every 30 seconds**. | ++ :::image type="content" source="./media/monitor-vm-communication/add-test-configuration.png" alt-text="Screenshot shows how to add a test configuration for a connection monitor in the Azure portal."::: ++1. Select **Add test configuration**. ++1. In the **Add test group details** page, select **Add destinations** to add the destination virtual machine. ++1. In the **Add Destinations** page, select **myVM2** as the destination endpoint, and then select **Add endpoints**. 
++ :::image type="content" source="./media/monitor-vm-communication/add-destination-endpoint.png" alt-text="Screenshot shows how to add a destination endpoint for a connection monitor in the Azure portal."::: ++ > [!NOTE] + > In addition to the **Subscription**, **Resource group**, **VNET**, and **Subnet** filters, you can use the **Region** filter to narrow down the list of virtual machines. ++1. In the **Add test group details** page, select the **Add Test Group** button. ++1. Select **Review + create**, and then select **Create**. ++## View the connection monitor ++In this section, you view all the details of the connection monitor that you created in the previous section. ++1. Go to the **Connection monitor** page. If you don't see **myConnectionMonitor** in the list of connection monitors, wait a few minutes, then select **Refresh**. ++ :::image type="content" source="./media/monitor-vm-communication/new-connection-monitor.png" alt-text="Screenshot shows the new connection monitor that you've just created." lightbox="./media/monitor-vm-communication/new-connection-monitor.png"::: ++1. Select **myConnectionMonitor** to see the performance metrics of the connection monitor, like round trip time and percentage of failed checks. + + :::image type="content" source="./media/monitor-vm-communication/connection-monitor-summary.png" alt-text="Screenshot shows the new connection monitor." lightbox="./media/monitor-vm-communication/connection-monitor-summary.png"::: ++1. Select **Time Intervals** to adjust the time range to see the performance metrics for a specific time period. Available time intervals are **Last 1 hour**, **Last 6 hours**, **Last 24 hours**, **Last 7 days**, and **Last 30 days**. You can also select **Custom** to specify a custom time range.
++ :::image type="content" source="./media/monitor-vm-communication/metrics-time-intervals.png" alt-text="Screenshot shows available options to change the time interval of the performance metrics in a connection monitor." lightbox="./media/monitor-vm-communication/metrics-time-intervals.png"::: ++## View a problem ++The connection monitor you created in the previous section monitors the connection between **myVM1** and port 22 on **myVM2**. If the connection fails for any reason, connection monitor detects and logs the failure. In this section, you simulate a problem by stopping **myVM2**. ++1. In the search box at the top of the portal, enter *virtual machine*. Select **Virtual machines** in the search results. ++1. In **Virtual machines**, select **myVM2**. ++1. In the **Overview**, select **Stop** to stop (deallocate) **myVM2** virtual machine. ++1. Go to the **Connection monitor** page. If you don't see the failure in the dashboard, select **Refresh**. ++ :::image type="content" source="./media/monitor-vm-communication/connection-monitor-fail.png" alt-text="Screenshot shows the failure after stopping the virtual machine." lightbox="./media/monitor-vm-communication/connection-monitor-fail.png"::: ++ You can see that the number of **Fail** connection monitors became **1 out of 1** after stopping **myVM2**, and under **Reason**, you can see **ChecksFailedPercent** as the reason for this failure. ++## Clean up resources ++When no longer needed, delete the resource group and all of the resources it contains: ++1. In the search box at the top of the portal, enter *myResourceGroup*. When you see **myResourceGroup** in the search results, select it. ++1. Select **Delete resource group**. ++1. In **Delete a resource group**, enter *myResourceGroup*, and then select **Delete**. ++1. Select **Delete** to confirm the deletion of the resource group and all its resources. 
++## Next steps ++In this tutorial, you learned how to monitor a connection between two virtual machines. You learned that connection monitor detected the connection failure to port 22 on the target virtual machine after you stopped it. To learn about all of the different metrics that connection monitor can return, see [Metrics in Azure Monitor](connection-monitor-overview.md#metrics-in-azure-monitor). ++To learn how to diagnose and troubleshoot problems with virtual network gateways, advance to the next tutorial. ++> [!div class="nextstepaction"] +> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md) |
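The tutorial's TCP test configuration boils down to a periodic TCP handshake against the destination port (22 here, every 30 seconds). As a rough local analogue, such a probe can be sketched in Python; this is an illustration, not the Connection Monitor agent's actual probe:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP handshake and report reachability, the same kind of
    check a connection monitor TCP test configuration runs on a schedule."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

# After myVM2 is stopped (deallocated), a probe such as
# tcp_check("<myVM2 private IP>", 22) returns False, which is what the
# dashboard surfaces as a failed check (ChecksFailedPercent).
```

The placeholder `<myVM2 private IP>` stands in for the address Connection Monitor resolves for the destination endpoint; it is not a literal value to paste.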
network-watcher | Network Watcher Connectivity Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-portal.md | In this article, you learn how to use [Azure Network Watcher connection troubles ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Virtual machines (VMs) to troubleshoot connections with.+- A virtual machine with inbound TCP connectivity from 168.63.129.16 over the port being tested. > [!IMPORTANT] > Connection troubleshoot requires that the virtual machine you troubleshoot from has the `AzureNetworkWatcherExtension` extension installed. The extension is not required on the destination virtual machine. |
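The rewritten prerequisite is specific for a reason: connection troubleshoot probes reach the VM from the Azure platform address 168.63.129.16, so a network security group must allow that source on the tested port. A small Python sketch that checks whether a list of allow-rule source prefixes covers the platform address (the prefix values shown are illustrative):

```python
import ipaddress

# Azure platform address used by connection troubleshoot probes
PLATFORM_IP = ipaddress.ip_address("168.63.129.16")

def allows_platform(source_prefixes) -> bool:
    """True if any allow-rule source prefix covers 168.63.129.16."""
    return any(PLATFORM_IP in ipaddress.ip_network(p) for p in source_prefixes)

print(allows_platform(["10.0.0.0/8", "168.63.129.16/32"]))  # True
print(allows_platform(["10.0.0.0/8"]))                      # False
```

If this check would return `False` for your inbound allow rules on the tested port, connection troubleshoot can't complete against that VM.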
networking | Azure Network Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md | Azure continuously monitors the latency (speed) of core areas of its network usi ## How are the measurements collected? -The latency measurements are collected from ThousandEyes agents, hosted in Azure cloud regions worldwide, that continuously sends network probes between themselves in 1-minute intervals. The monthly latency statistics are derived from averaging the collected samples for the month. +The latency measurements are collected from Azure cloud regions worldwide, and continuously measured in 1-minute intervals by network probes. The monthly latency statistics are derived from averaging the collected samples for the month. ## Round-trip latency figures |
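The revised sentence describes a simple aggregation: probes run at 1-minute intervals and the published monthly figure is the average of the collected samples. A toy Python sketch of that aggregation, with made-up sample values and a tolerance for missing probes:

```python
from statistics import mean

def monthly_latency(samples_ms):
    """Average round-trip latency (ms) over a month of 1-minute probe
    samples, skipping intervals where no sample was collected (None)."""
    valid = [s for s in samples_ms if s is not None]
    return round(mean(valid), 1)

# Illustrative inter-region probe results (values are invented)
samples = [34.1, 33.9, None, 34.4, 34.0]
print(monthly_latency(samples))  # 34.1
```

A real month contains roughly 43,200 one-minute samples per region pair; the function is the same, only the input is longer.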
networking | Networking Partners Msp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md | Use the links in this section for more information about managed cloud networkin | **MSP** | **Cloud Network Transformation Services** | **Managed ExpressRoute** | **Managed Virtual WAN** | **Managed Private Edge Zones** | **Managed Security** | | | | | | | |-|[ANS Group UK](https://www.ans.co.uk/)|[Azure Managed Services + ANS Glass 10 week implementation](https://azuremarketplace.microsoft.com/marketplace/apps/ans_group.glasssaas?tab=Overview)|[ExpressRoute & connectivity: Two week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_er)|[Azure Virtual WAN + Fortinet: Two week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_vw)||| +|[ANS Group UK](https://www.ans.co.uk/)|[Azure Managed Services + ANS Glass 10 week implementation](https://azuremarketplace.microsoft.com/marketplace/apps/ans_group.glasssaas?tab=Overview)| ExpressRoute & connectivity: Two week Assessment | Azure Virtual WAN + Fortinet: Two week Assessment ||| |[Aryaka Networks](https://www.aryaka.com/azure-msp-vwan-managed-service-provider-launch-partner-aryaka/)||[Aryaka Azure Connect](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.cloudconnect_azure_19?tab=Overview)|[Aryaka Managed SD-WAN for Azure Networking Virtual](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.aryaka_azure_virtual_wan?tab=Overview) | | | |[AXESDN](https://www.axesdn.com/en/azure-msp.html)||[AXESDN Managed Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_expressroute?tab=Overview)|[AXESDN Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_virtualwan?tab=Overview) | | | 
|[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)||| Use the links in this section for more information about managed cloud networkin |[Infosys](https://www.infosys.com/services/microsoft-cloud-business/pages/index.aspx)|[Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/infosysltd.infosys-integrate-for-azure?tab=Overview)||||| |[Interxion](https://www.interxion.com/products/interconnection/cloud-connect/support-your-cloud-strategy/)|[Azure Networking Assessment - Five Days](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/interxionhq.inxn_azure_networking_assessment)||||| |[IX Reach](https://www.ixreach.com/services/sdn-cloud-connect/)||[ExpressRoute by IX Reach, a BSO company](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ixreach.cloudconnect?tab=Overview)||||-|[KoçSistem](https://azure.kocsistem.com.tr/en)|[KoçSistem Managed Cloud Services for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.kocsistemcloudmanagementtool?tab=Overview)|[KoçSistem Azure ExpressRoute Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_express_route?tab=Overview)|[KoçSistem Azure Virtual WAN 
Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_virtual_wan?tab=Overview)||[`KoçSistem Azure Security Center Managed Service`](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_security_center?tab=Overview)| +|[KoçSistem](https://azure.kocsistem.com.tr/en)|[KoçSistem Managed Cloud Services for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.kocsistemcloudmanagementtool?tab=Overview) | KoçSistem Azure ExpressRoute Management| KoçSistem Azure Virtual WAN Management || `KoçSistem Azure Security Center Managed Service` | |[Liquid Telecom](https://liquidcloud.africa/)| [Liquid Azure Expert Services](https://azuremarketplace.microsoft.com/marketplace/apps/liquidtelecommunicationsoperationslimited.5dab29ab-bb14-4df8-8978-9a8608a41ad7?tab=Overview)|[Liquid Managed ExpressRoute for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.42cfee0b-8f07-4948-94b0-c9fc3e1ddc42?tab=Overview)|||| |[Lumen](https://www.lumen.com/en-us/solutions/hybrid-cloud.html)||[ExpressRoute Consulting |[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview); [SD-WAN Virtual Edge offer by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview)||[Managed Security by Macquarie Cloud
Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)| |[Megaport](https://www.megaport.com/services/microsoft-expressroute/)||[Managed Routing Service for ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/megaport1582290752989.megaport_mcr?tab=Overview)|||| |[Netfosys](https://www.netfosys.com/azurewan)|||[Netfosys Managed Services for Azure vWAN](https://azuremarketplace.microsoft.com/en-ca/marketplace/apps/netfosys1637934664103.azure-vwan?tab=Overview)|||-|[Nokia](https://www.nokia.com/networks/services/managed-services/)|||[NBConsult Nokia Nuage SDWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nbconsult1588859334197.nbconsult-nokia-nuage?tab=Overview); [Nuage SD-WAN 2.0 Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.nuage_sd-wan_2-0_azure_virtual_wan?tab=Overview)|[Nokia 4G & 5G Private Wireless (NDAC)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.ndac_5g-ready_private_wireless?tab=Overview)| +|[Nokia](https://www.nokia.com/networks/services/managed-services/) ||| NBConsult Nokia Nuage SDWAN; [Nuage SD-WAN 2.0 Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.nuage_sd-wan_2-0_azure_virtual_wan?tab=Overview)|[Nokia 4G & 5G Private Wireless (NDAC)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.ndac_5g-ready_private_wireless?tab=Overview)| |[NTT Ltd](https://www.nttglobal.net/)|[Azure Cloud Discovery: 2-Week Workshop](https://azuremarketplace.microsoft.com/marketplace/apps/capside.replica-azure-cloud-governance-capside?tab=Overview)|NTT Managed ExpressRoute Service;NTT Managed IP VPN Service|NTT Managed SD-WAN Service||| |[NTT Data](https://www.nttdata.com/global/en/services/cloud)|[Managed |[Oncore 
Cloud Services]( https://www.oncore.cloud/services/ue-for-expressroute/)|[Enterprise Cloud Foundations: Workshop (~10 days)](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/oncore_cloud_services-4944214.oncore_cloud_onboard_201810)|[UniversalEdge for Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/oncore_cloud_services-4944214.universaledge_for_expressroute?tab=Overview)|||| |[OpenSystems](https://open-systems.com/solutions/microsoft-azure-virtual-wan)|||[Managed secure SD-WAN using Microsoft Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/open_systems_ag.sdwan_0820?tab=Overview)|| |[Orange Business Services](https://www.orange-business.com/en/partners/orange-business-services-become-microsoft-azure-networking-managed-services-provider)||[ExpressRoute Network Study : 3-week implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/orangebusinessservicessa1603182943272.expressroute_study_obs_connectivity)|||-|[Orixcom]( https://www.orixcom.com/cloud-solutions/)||[Orixcom Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/orixcom.orixcom_managed_expressroute?tab=Overview)|[Orixcom SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/orixcom.orixcom_sd_wan?tab=Overview)||| +|[Orixcom]( https://www.orixcom.com/solutions/cloudconnect)||[Orixcom Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/orixcom.orixcom_managed_expressroute?tab=Overview)|[Orixcom SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/orixcom.orixcom_sd_wan?tab=Overview)||| |[Proximus](https://www.proximus.be/en/companies-and-public-sector/?)|[Proximus Azure Services - Operational Framework](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/proximusnv1580135963165.pas-lighthouse?tab=Reviews)||||| |[Servent](https://www.servent.co.uk/)|[Azure Advanced Networking – Five Day 
Assessment ](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/servent.servent-azure-advanced-networking?tab=Overview)|[ExpressRoute – Three Day Assessment ](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/servent.servent-azure-express-route?tab=Overview)|[Azure Virtual WAN – Three Day Assessment ](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/servent.servent-azure-virtual-wan?tab=Overview)||| |[SoftBank]( https://www.softbank.jp/biz/nw/nwp/cloud_access/direct_access_for_az/)|[Azure Network Consulting Service: 1-Week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/sbmpn.softbank_nw_msp_service_azure); [Azure Assessment Service: 1-Week](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/sbmpn.softbank_msp_service_azure_01?tab=Overview&pub_source=email&pub_status=success)||||| |
operator-nexus | Howto Baremetal Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md | You can restore the runtime version on a BMM by executing the `reimage` command. Thi As a best practice, make sure the BMM's workloads are drained using the [`cordon`](#make-a-bmm-unschedulable-cordon) command, with `evacuate "True"`, prior to executing the `reimage` command. +> [!Warning] +> Running more than one baremetalmachine replace or reimage command at the same time, or running a replace +> at the same time as a reimage, will leave servers in a nonworking state. Make sure one replace / reimage +> has fully completed before starting another one. In a future release, we plan to either add the ability +> to replace multiple servers at once or have the command return an error when attempting to do so. + ```azurecli az networkcloud baremetalmachine reimage \ --name "bareMetalMachineName" \ az networkcloud baremetalmachine reimage \ Use the `Replace BMM` command when a server has encountered hardware issues requiring a complete or partial hardware replacement. After replacement of components such as the motherboard or NIC, the MAC address of the BMM will change; however, the iDRAC IP address and hostname will remain the same. > [!Warning]-> Running more than one baremetalmachine replace command at the same time will leave servers in a -> nonworking state. Make sure one replace has fully completed before starting another one. In a future -> release, we plan to either add the ability to replace multiple servers at once or have the command -> return an error when attempting to do so. +> Running more than one baremetalmachine replace or reimage command at the same time, or running a replace +> at the same time as a reimage, will leave servers in a nonworking state. Make sure one replace / reimage
In a future release, we plan to either add the ability +> to replace multiple servers at once or have the command return an error when attempting to do so. ```azurecli az networkcloud baremetalmachine replace \ |
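The replace/reimage warning above amounts to serializing these commands yourself until the service enforces it. A minimal sketch of a wait-before-next-command guard, assuming `az networkcloud baremetalmachine show` exposes a `provisioningState` field and that the state names below are representative, not confirmed:

```shell
# Hypothetical guard: wait for one reimage/replace to reach a terminal state
# before issuing the next one. The state names here are illustrative assumptions.
is_terminal_state() {
  case "$1" in
    Succeeded|Failed|Canceled) return 0 ;;
    *) return 1 ;;
  esac
}

wait_for_bmm() {
  # Polls the machine's provisioningState every 30 seconds until terminal.
  local name="$1" rg="$2" state
  while :; do
    state=$(az networkcloud baremetalmachine show \
      --name "$name" --resource-group "$rg" \
      --query provisioningState --output tsv)
    if is_terminal_state "$state"; then
      printf '%s\n' "$state"
      return 0
    fi
    sleep 30
  done
}

# Usage: wait_for_bmm "bareMetalMachineName" "resourceGroupName"
# before starting the next reimage or replace.
```

The point of the two-function split is that the terminal-state test is pure and can be checked without touching the cloud.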
operator-service-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/overview.md | Title: About Azure Operator Service Manager description: Learn about Azure Operator Service Manager, an Azure Service for the management of Network Services for telecom operators.--++ Last updated 04/09/2023 -Azure Operator Service Manager expands and improves the Network Function Manager by incorporating technology and ideas from Azure for Operators' on-premises management tools. Its purpose is to manage the convergence of comprehensive, multi-vendor service solutions on a per-site basis. It uses a declarative software and configuration model for the system. It also combines Azure's hyperscaler experience and tooling for error-free Safe Deployment Practices (SDP) across sites grouped in canary tiers. +Azure Operator Service Manager expands and improves the Network Function Manager by incorporating technology and ideas from Azure for Operators' on-premises management tools. Its purpose is to manage the convergence of comprehensive, multi-vendor service solutions on a per-site basis. It uses a declarative software and configuration model for the system. ## Product features Azure Operator Service Manager provides an Azure-native abstraction for modeling and realizing a distributed network service using extra resource types in Azure Resource Manager (ARM) through our cloud service. A network service is represented as a network graph comprising multiple network functions, with appropriate policies controlling the data plane to meet each telecom operator's operational needs. Creation of templates of configuration schemas allows for per-site variation that is often required in such deployments. -The service is partitioned into a global control plane, which operates on Azure, and site control planes. Site control planes also function on Azure, but are confined to specific sites, such as on-premises and hybrid sites. 
--The global control plane hosts the interfaces for publishers, designers, and operators. All of the applicable resources are immutable versioned objects, replicated to all Azure regions. The global control plane also hosts the Safe Deployment Practices (SDP) Global Convergence Agent, which is responsible for driving rollout across sites. --The site control plane consists of the Site Convergence Agent. The Site Convergence Agent is responsible for mapping the desired state of a site. The desired state ranges from the network service level down to the network function and cloud resource level. The Site Convergence Agent converges each site to the desired state, and runs in an Azure region as a resource provider in that region. - ## Benefits Azure Operator Service Manager provides the following benefits: - Provides a single management experience for all Azure for operators solutions in Azure or connected clouds.-- Offers SDK and PowerShell services to further extend the reach to include third-party network functions and network services.-- Implements consistent best-practice safe deployment practices (SDP) fleet-wide. - Provides blast-radius limitations and disconnected mode support to enable five-nines operation of these services.-- Offers clear dashboard reporting of convergence state for each site and canary level. - Enables real telecom DevOps working, eliminating the need for NF-specific maintenance windows. ## Get access to Azure Operator Service Manager |
private-5g-core | Azure Private 5G Core Release Notes 2305 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2305.md | Packet core versions are supported until two subsequent versions have been relea ## What's new -- **User-Plane Inactivity Detection** - Starting from AP5GC 2305, a user-plane inactivity timer with a value of 600 seconds is configured for 5G sessions. If there is no traffic for a period of 600 seconds and RAN-initiated Access Network release has not occurred, the Packet Core will release Access Network resources.+- **User-Plane Inactivity Detection** - Starting from AP5GC 2305, a user-plane inactivity timer with a value of 600 seconds is configured for 5G sessions. If there is no traffic for 600 seconds and RAN-initiated Access Network release has not occurred, the Packet Core will release Access Network resources. -- **UE (user equipment) to UE internal forwarding** - This release delivers the ability for AP5GC to internally forward UE data traffic destined to another UE in the same Data Network (without going via an external router). +- **UE (user equipment) to UE internal forwarding** - This release delivers the ability for AP5GC to internally forward UE data traffic destined to another UE in the same Data Network (without going via an external router). - If you are currently using the default service with allow-all SIM policy along with NAT enabled for the Data Network, or with an external router with deny rules for this traffic, you might have UE to UE traffic forwarding blocked. If you want to continue this blocking behavior with AP5GC 2305, see [Configure UE to UE internal forwarding](configure-internal-forwarding.md). + If you're currently using the default service with allow-all SIM policy along with NAT enabled for the Data Network, or with an external router with deny rules for this traffic, you might have UE to UE traffic forwarding blocked. 
If you want to continue this blocking behavior with AP5GC 2305, see [Configure UE to UE internal forwarding](configure-internal-forwarding.md). - If you are not using the default service with allow-all SIM policy and want to allow UE-UE internal forwarding, see [Configure UE to UE internal forwarding](configure-internal-forwarding.md). + If you're not using the default service with allow-all SIM policy and want to allow UE-UE internal forwarding, see [Configure UE to UE internal forwarding](configure-internal-forwarding.md). -- **Event Hubs feed of UE Usage** - This feature enhances AP5GC to provide an Azure Event Hubs feed of UE Data Usage events. You can integrate with Event Hubs to build reports on how your private 4G/5G network is being used or carry out other data processing using the information in these events. If you want to enable this feature for your deployment, please contact your support representative.+- **Event Hubs feed of UE Usage** - This feature enhances AP5GC to provide an Azure Event Hubs feed of UE Data Usage events. You can integrate with Event Hubs to build reports on how your private 4G/5G network is being used or carry out other data processing using the information in these events. If you want to enable this feature for your deployment, contact your support representative. ## Issues fixed in the AP5GC 2305 release The following table provides a summary of issues fixed in this release. |No. |Feature | Issue | |--|--|--|- | 1 | Packet forwarding | In scenarios of sustained high load (e.g., continuous setup of hundreds of TCP flows per second) combined with NAT pin-hole exhaustion, AP5GC can encounter a memory leak, leading to a short period of service disruption resulting in some call failures. This issue has been fixed in this release. | + | 1 | Packet forwarding | In scenarios of sustained high load (e.g. 
continuous setup of hundreds of TCP flows per second) combined with NAT pin-hole exhaustion, AP5GC can encounter a memory leak, leading to a short period of service disruption resulting in some call failures. This issue has been fixed in this release. | | 2 | Install/Upgrade | Changing the technology type of a deployment from 4G (EPC) to 5G using the upgrade or site delete/add sequence is not supported. This issue has been fixed in this release. |- | 3 | Local dashboards | In some scenarios, the Azure Private 5G Core local dashboards don't show session rejection under the **Device and Session Statistics** panel if "Session Establishment" requests are rejected due to invalid PDU type (e.g. IPv6 when only IPv4 is supported). This issue has been fixed in this release. | + | 3 | Local dashboards | In some scenarios, the Azure Private 5G Core local dashboards don't show session rejection under the **Device and Session Statistics** panel if "Session Establishment" requests are rejected due to invalid PDU type (e.g. IPv6 when only IPv4 is supported). This issue has been fixed in this release. | ## Known issues in the AP5GC 2305 release |No. |Feature | Issue | Workaround/comments | |--|--|--|--|- | 1 | Local Dashboards | Where Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, this traffic doesn't transmit via the web proxy when enabled on the Azure Stack Edge appliance that the packet core is running on. | Not applicable. | + | 1 | Local Dashboards | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory does not transmit via the web proxy. If there is a firewall blocking traffic that does not go via the web proxy then enabling Azure Active Directory will cause the packet core install to fail. 
| Disable Azure Active Directory and use password-based authentication to authenticate access to AP5GC Local Dashboards instead. | | 2 | Reboot | AP5GC may intermittently fail to recover after the underlying platform is rebooted and may require another reboot to recover. | Not applicable. | ## Known issues from previous releases |
private-5g-core | Enable Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-azure-active-directory.md | Title: Enable Azure Active Directory (Azure AD) for local monitoring tools -description: Complete the prerequisite tasks for enabling Azure Active Directory to access Azure Private 5G Core's local monitoring tools. +description: Complete the prerequisite tasks for enabling Azure Active Directory to access Azure Private 5G Core's local monitoring tools. -+ Last updated 12/29/2022 Azure Private 5G Core provides the [distributed tracing](distributed-tracing.md) In this how-to guide, you'll carry out the steps you need to complete after deploying or configuring a site that uses Azure AD to authenticate access to your local monitoring tools. You don't need to follow this if you decided to use local usernames and passwords to access the distributed tracing and packet core dashboards. +> [!CAUTION] +> Azure AD for local monitoring tools is not supported when a web proxy is enabled on the Azure Stack Edge device on which Azure Private 5G Core is running. If you have configured a firewall that blocks traffic not transmitted via the web proxy, then enabling Azure AD will cause the Azure Private 5G Core installation to fail. + ## Prerequisites - You must have completed the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md) and [Collect the required information for a site](collect-required-information-for-a-site.md). In the authoritative DNS server for the DNS zone you want to create the DNS reco ## Register application -You'll now register a new local monitoring application with Azure AD to establish a trust relationship with the Microsoft identity platform. +You'll now register a new local monitoring application with Azure AD to establish a trust relationship with the Microsoft identity platform. 
If your deployment contains multiple sites, you can use the same two redirect URIs for all sites, or create different URI pairs for each site. You can configure a maximum of two redirect URIs per site. If you've already registered an application for your deployment and you want to use the same URIs across your sites, you can skip this step. 1. Follow [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md) to register a new application for your local monitoring tools with the Microsoft identity platform. 1. In *Add a redirect URI*, select the **Web** platform and add the following two redirect URIs, where *\<local monitoring domain\>* is the domain name for your local monitoring tools that you set up in [Configure domain system name (DNS) for local monitoring IP](#configure-domain-system-name-dns-for-local-monitoring-ip):- + - https://*\<local monitoring domain\>*/sas/auth/aad/callback - https://*\<local monitoring domain\>*/grafana/login/azuread To support Azure AD on Azure Private 5G Core applications, you'll need a YAML fi 1. Convert each of the values you collected in [Collect the information for Kubernetes Secret Objects](#collect-the-information-for-kubernetes-secret-objects) into Base64 format. For example, you can run the following command in an Azure Cloud Shell **Bash** window: - ```bash + ```bash echo -n <Value> | base64 ``` You'll need to apply your Kubernetes Secret Objects if you're enabling Azure AD 1. Sign in to [Azure Cloud Shell](../cloud-shell/overview.md) and select **PowerShell**. If this is your first time accessing your cluster via Azure Cloud Shell, follow [Access your cluster](../azure-arc/kubernetes/cluster-connect.md?tabs=azure-cli) to configure kubectl access. 1. 
Apply the Secret Object for both distributed tracing and the packet core dashboards, specifying the core kubeconfig filename.- + `kubectl apply -f /home/centos/secret-azure-ad-local-monitoring.yaml --kubeconfig=<core kubeconfig>` 1. Use the following commands to verify if the Secret Objects were applied correctly, specifying the core kubeconfig filename. You should see the correct **Name**, **Namespace**, and **Type** values, along with the size of the encoded values. You'll need to apply your Kubernetes Secret Objects if you're enabling Azure AD 1. Restart the distributed tracing and packet core dashboards pods. 1. Obtain the name of your packet core dashboards pod:- + `kubectl get pods -n core --kubeconfig=<core kubeconfig>" | grep "grafana"` 1. Copy the output of the previous step and replace it into the following command to restart your pods. |
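The Base64 step in the article above is easy to get subtly wrong: piping through a plain `echo` (without `-n`) appends a newline, which silently changes the encoded value. A minimal sketch of newline-safe encode/decode helpers — the function names are illustrative, the underlying commands are the ones the article uses:

```shell
# Encode a collected value for the Kubernetes Secret YAML without a trailing
# newline (equivalent to the article's `echo -n <Value> | base64`).
encode_value() {
  printf '%s' "$1" | base64
}

# Decode back, to verify what actually went into the Secret Object.
decode_value() {
  printf '%s' "$1" | base64 --decode
}

# Example: encode_value "<Client_Secret>" and paste the output into
# secret-azure-ad-local-monitoring.yaml.
```

Round-tripping each value through `decode_value` before running `kubectl apply` catches stray whitespace before it becomes an opaque authentication failure.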
purview | Concept Policies Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md | SQL dynamic metadata includes a list of more than 700 DMVs and DMFs. The followi | | Use the Query Store | [sys.query_store_plan](/sql/relational-databases/system-catalog-views/sys-query-store-plan-transact-sql) | | | | [sys.query_store_query](/sql/relational-databases/system-catalog-views/sys-query-store-query-transact-sql) | | | | [sys.query_store_query_text](/sql/relational-databases/system-catalog-views/sys-query-store-query-text-transact-sql) |+| | Get Error Log (not yet supported)| [sys.sp_readerrorlog](/sql/relational-databases/system-stored-procedures/sp-readerrorlog-transact-sql) | |||| | SQL Security Auditor | Get audit details | [sys.dm_server_audit_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-server-audit-status-transact-sql) | |||| |
reliability | Availability Zones Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md | The table below lists each product that offers migration guidance and/or informa | [Azure Functions](migrate-functions.md)| | [Azure Load Balancer](migrate-load-balancer.md)| | [Azure Service Fabric](migrate-service-fabric.md) | +| [Azure SQL Database](migrate-sql-database.md) | | [Azure Storage account: Blob Storage, Azure Data Lake Storage, Files Storage](migrate-storage.md) | | [Azure Storage: Managed Disks](migrate-vm.md)| | [Azure Virtual Machines and Azure Virtual Machine Scale Sets](migrate-vm.md)| |
reliability | Availability Zones Service Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md | Azure offerings are grouped into three categories that reflect their _regional_ | **Products** | **Resiliency** | | | | | [Azure Application Gateway (V2)](migrate-app-gateway-v2.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |-| [Azure Backup](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | +| [Azure Backup](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Cosmos DB](../cosmos-db/high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DNS: Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DNS: Azure DNS Private Resolver](../dns/dns-private-resolver-get-started-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Public IP](../virtual-network/ip-services/public-ip-addresses.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure Site Recovery](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |-| [Azure SQL Database](/azure/azure-sql/database/high-availability-sla) | ![An icon that signifies this service is zone 
redundant.](media/icon-zone-redundant.svg) | -| [Azure SQL Managed Instance](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview?view=azuresql) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | +| [Azure SQL Database](migrate-sql-database.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | +| [Azure SQL Managed Instance](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview?view=azuresql&preserve-view=true) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Load Balancer](../load-balancer/load-balancer-standard-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | |
reliability | Migrate Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-database.md | + + Title: Migrate Azure SQL Database to availability zone support +description: Learn how to migrate your Azure SQL Database to availability zone support. +++ Last updated : 06/29/2023+++++# Migrate Azure SQL Database to availability zone support + +This guide describes how to migrate [Azure SQL Database](/azure/azure-sql/) from non-availability zone support to availability zone support. ++Enabling zone redundancy for Azure SQL Database guarantees high availability as the database utilizes Azure Availability Zones to replicate data across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your databases and elastic pools resilient to a larger set of failures, such as catastrophic datacenter outages, without any changes to the application logic. ++## Prerequisites ++Before you migrate to availability zone support, refer to the following table to ensure that your Azure SQL Database is in a supported service tier and deployment model. Make sure that your tier and model are offered in a [region that supports availability zones](/azure/reliability/availability-zones-service-support).
++| Service tier | Zone redundancy availability | +|--|| +| Premium | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support)| +| Business Critical | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) | +| General Purpose | [Selected regions that support availability zones](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell&preserve-view=true#general-purpose-service-tier-zone-redundant-availability)| +| Hyperscale (Preview) |[All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) | + ++## Downtime requirements ++Migration for the Premium, Business Critical, and General Purpose service tiers is an online operation with a brief disconnect towards the end to finish the migration process. If you have implemented [retry logic for standard transient errors](/azure/azure-sql/database/troubleshoot-common-connectivity-issues?view=azuresql&preserve-view=true#retry-logic-for-transient-errors), you won't notice the failover. ++For the Hyperscale service tier, zone redundancy support can only be specified during database creation and can't be modified once the resource is provisioned. If you wish to move to availability zone support, you'll need to transfer the data with database copy, point-in-time restore, or geo-replica. If the target database is in a different region than the source or if the database backup storage redundancy for the target differs from the source database, then downtime is proportional to the size of the data operation. ++## Migration (Premium, Business Critical, and General Purpose) ++For the Premium, Business Critical, and General Purpose service tiers, migration to zone redundancy is possible. ++Follow the steps below to perform migration for a single database or an elastic pool.
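The brief disconnect described under Downtime requirements is exactly why the retry guidance matters: a client that retries transient failures rides through the failover unnoticed. A minimal sketch of such a wrapper — the `retry` helper is hypothetical and shown with a generic command rather than a specific SQL driver:

```shell
# Retry a command with exponential backoff; transient connection errors during
# the migration cutover should be retried rather than surfaced to users.
retry() {
  local attempts="$1" delay=1 n=1
  shift
  while ! "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "giving up after $attempts attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))   # back off: 1s, 2s, 4s, ...
    n=$((n + 1))
  done
}

# Usage (placeholder connectivity probe):
# retry 5 sqlcmd -S "<Server_Name>.database.windows.net" -d "<Database_Name>" -Q "SELECT 1"
```

Bounding the attempts and doubling the delay keeps a flapping connection from turning into a tight retry loop.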
++### Migrate a single database ++# [Azure portal](#tab/portal) ++1. Go to the [Azure portal](https://portal.azure.com) to find your database. Search for and select **SQL databases**. ++1. Select the database that you want to migrate. ++1. Under **Settings**, select **Compute + Storage**. ++1. Select **Yes** for **Would you like to make this database zone redundant?** ++1. Select **Apply**. ++1. Wait to receive an operation completion notice in **Notifications** in the top menu of the Azure portal. ++1. To verify that zone redundancy is enabled, select **Overview** and then select **Properties**. ++1. Under the **Availability** section, confirm that zone redundancy is set to **Enabled**. ++# [PowerShell](#tab/powershell) ++Open PowerShell as Administrator and run the following command (replace the placeholders in "<>" with your resource names). Note that `<server_name>` should not include `.database.windows.net`. ++```powershell +Connect-AzAccount +$subscriptionid = '<your subscription id here>' +Set-AzContext -SubscriptionId $subscriptionid ++$parameters = @{ + ResourceGroupName = '<Resource_Group_Name>' + ServerName = '<Server_Name>' + DatabaseName = '<Database_Name>' +} +Set-AzSqlDatabase @parameters -ZoneRedundant +``` ++# [Azure CLI](#tab/cli) +++Use Azure CLI to run the following command (replace the placeholders in "<>" with your resource names): ++```azurecli + az sql db update --resource-group "<Resource_Group_Name>" --server "<Server_Name>" --name "<Database_Name>" --zone-redundant +``` ++# [ARM](#tab/arm) ++To enable zone redundancy, see [Databases - Create Or Update in ARM](/rest/api/sql/2022-05-01-preview/databases/create-or-update?tabs=HTTP) and use the `properties.zoneRedundant` property. ++++### Migrate an elastic pool +++>[!IMPORTANT] +>Enabling zone redundancy support for elastic pools makes all databases within the pool zone redundant. +++# [Azure portal](#tab/portal) ++1. 
Go to the [Azure portal](https://portal.azure.com) to find and select the elastic pool that you want to migrate. ++1. Select **Settings**, and then select **Configure**. ++1. Select **Yes** for **Would you like to make this elastic pool zone redundant?**. ++1. Select **Save**. ++1. Wait to receive an operation completion notice in **Notifications** in the top menu of the Azure portal. ++1. To verify that zone redundancy is enabled, select **Configure** and then select **Pool settings**. ++1. The zone redundant option should be set to **Yes**. +++# [PowerShell](#tab/powershell) +++Open PowerShell as Administrator and run the following command (replace the placeholders in "<>" with your resource names). Note that `<server_name>` should not include `.database.windows.net`. ++```powershell +Connect-AzAccount +$subscriptionid = '<your subscription id here>' +Set-AzContext -SubscriptionId $subscriptionid ++$parameters = @{ + ResourceGroupName = '<Resource_Group_Name>' + ServerName = '<Server_Name>' + ElasticPoolName = '<Elastic_Pool_Name>' +} ++Set-AzSqlElasticPool @parameters -ZoneRedundant +``` +++# [Azure CLI](#tab/cli) ++Use Azure CLI to run the following command (replace the placeholders in "<>" with your resource names): ++```azurecli + az sql elastic-pool update --resource-group "<Resource_Group_Name>" --server "<Server_Name>" --name "<Elastic_Pool_Name>" --zone-redundant +``` +++# [ARM](#tab/arm) ++To enable zone redundancy, see [Elastic Pools - Create Or Update in ARM](/rest/api/sql/2022-05-01-preview/elastic-pools/create-or-update?tabs=HTTP). ++++## Redeployment (Hyperscale) ++For the Hyperscale service tier, zone redundancy support can only be specified during database creation and can't be modified once the database is provisioned. If you wish to gain zone redundancy support, you need to perform a data transfer from your existing Hyperscale service tier single database. 
To perform the transfer and enable the zone redundancy option, a clone must be created using database copy, point-in-time restore, or geo-replica. ++### Redeployment considerations ++- There are two modes of redeployment (online and offline): ++ - The **Database copy and point-in-time restore methods (offline mode)** create a transactionally consistent database at a certain point in time. As a result, any data changes performed after the copy or restore operation has been initiated won't be available on the copied or restored database. + + - The **Geo-replica method (online mode)** is a redeployment wherein any data changes from the source are synchronized to the target. ++- The connection string for the application must be updated to point to the zone-redundant database. ++### Redeploy a single database ++#### Database copy ++To create a database copy and enable zone redundancy with the Azure portal, PowerShell, or Azure CLI, follow the instructions in [copy a transactionally consistent copy of a database in Azure SQL Database](/azure/azure-sql/database/database-copy?tabs=azure-powershell&view=azuresql&preserve-view=true#copy-using-the-azure-portal). + ++#### Point-in-time restore ++To create a point-in-time database restore and enable zone redundancy with the Azure portal, PowerShell, or Azure CLI, follow the instructions in [Point-in-time restore](/azure/azure-sql/database/recovery-using-backups?view=azuresql&preserve-view=true&tabs=azure-portal#point-in-time-restore). ++#### Geo-replica ++To create a geo-replica of the database: ++1. Follow the instructions with the Azure portal, PowerShell, or Azure CLI in [Configure active geo-replication and failover (Azure SQL Database)](/azure/azure-sql/database/active-geo-replication-configure-portal?view=azuresql&preserve-view=true&tabs=portal) and enable zone redundancy under **Compute + Storage**. ++1. The replica is seeded, and the time taken to seed the data depends on the size of the source database. 
You can monitor the status of seeding in the Azure portal or by running the following T-SQL queries on the replica database: ++ ```sql + SELECT * FROM sys.dm_geo_replication_link_status; + SELECT * FROM sys.dm_operation_status; + ``` ++1. Once the database seeding is finished, perform a planned (no data loss) failover to make the zone-redundant target database the primary. Use the [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database?view=azuresqldb-current&preserve-view=true) dynamic management view to check the geo-replication state. The `replication_state_desc` is `CATCH_UP` when the secondary database is in a transactionally consistent state. In the [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database?view=azuresqldb-current&preserve-view=true) dynamic management view, look for `state_desc` to be `COMPLETED` when the seeding operation has completed. ++1. Update the server name in the connection strings for the application to reflect the new zone-redundant database. ++1. To clean up, consider removing the original non-zone-redundant database from the geo-replica relationship. You can choose to delete it. +++## Next steps ++> [!div class="nextstepaction"] +> [Azure services and regions that support availability zones](availability-zones-service-support.md) |
search | Index Add Scoring Profiles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-scoring-profiles.md | -In this article, you'll learn how to define a scoring profile for boosting search scores based on criteria. --Criteria can be a weighted field, such as when a match found in a "tags" field is more relevant than a match found in "descriptions". Criteria can also be a function, such as the `distance` function that favors results that are within a specified distance of the current location. --Scoring profiles are defined in a search index and invoked on query requests. You can create multiple profiles and then modify query logic to choose which one is used. +In this article, you'll learn how to define a scoring profile. A scoring profile is criteria for boosting a search score based on parameters that you provide. For example, you might want a match found in a "tags" field to be more relevant than the same match found in "descriptions". Criteria can be a weighted field (such as the "tags" example) or a function. Scoring profiles are defined in a search index and invoked on query requests. You can create multiple profiles and then modify query logic to choose which one is used. > [!NOTE]-> Unfamiliar with relevance concepts? The following video segment fast-forwards to how scoring profiles work in Azure Cognitive Search. You can also visit [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md) for more background. -> -> > [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=463&end=970] +> Unfamiliar with relevance concepts? The following [video segment on YouTube](https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=463&end=970) fast-forwards to how scoring profiles work in Azure Cognitive Search. You can also visit [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md) for more background. 
> +<!-- > > [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=463&end=970] +> --> + ## Scoring profile definition -A scoring profile is part of the index definition and is composed of weighted fields, functions, and parameters. +A scoring profile is a named object defined in an index schema. A profile can be composed of weighted fields, functions, and parameters. -The following definition shows a simple profile named 'geo'. This example boosts results that have the search term in the hotelName field. It also uses the `distance` function to favor results that are within 10 kilometers of the current location. If someone searches on the term 'inn', and 'inn' happens to be part of the hotel name, documents that include hotels with 'inn' within a 10 KM radius of the current location will appear higher in the search results. +The following definition shows a simple profile named "geo". This example boosts results that have the search term in the hotelName field. It also uses the `distance` function to favor results that are within 10 kilometers of the current location. If someone searches on the term 'inn', and 'inn' happens to be part of the hotel name, documents that include hotels with 'inn' within a 10 KM radius of the current location will appear higher in the search results. ```json "scoringProfiles": [ The following definition shows a simple profile named 'geo'. This example boosts ] ``` -To use this scoring profile, your query is formulated to specify scoringProfile parameter in the request. If you're using the REST API, queries are specified through GET and POST requests. +To use this scoring profile, your query is formulated to specify the `scoringProfile` parameter in the request. If you're using the REST API, queries are specified through GET and POST requests. In the following example, "currentLocation" has a delimiter of a single dash (`-`). It's followed by longitude and latitude coordinates, where longitude is a negative value. 
```http GET /indexes/hotels/docs?search=inn&scoringProfile=geo&scoringParameter=currentLocation--122.123,44.77233&api-version=2020-06-30 See the [Extended example](#bkmk_ex) to review a more detailed example of a scor ## How scores are computed -Scores are computed for full text search queries for ranking the most relevant matches and returning them at the top of the response. The overall score for each document is an aggregation of the individual scores for each field, where the individual score of each field is computed based on the term frequency and document frequency of the searched terms within that field (known as [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) or term frequency-inverse document frequency). +Scores are computed for full text search queries. Matches are scored based on how relevant the match is, and the highest-scoring matches are returned in the query response. The overall score for each document is an aggregation of the individual scores for each field, where the individual score of each field is computed based on the term frequency and document frequency of the searched terms within that field (known as [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) or term frequency-inverse document frequency). You can use the [featuresMode (preview)](index-similarity-and-scoring.md#featuresmode-parameter-preview) parameter to request extra scoring details with the search results (including the field level scores). Weighted fields are composed of a searchable field and a positive number that is ```json "scoringProfiles": [ -{ - "name": "boostKeywords", - "text": { - "weights": { - "HotelName": 2, - "Description": 5 - } - } -} + { + "name": "boostKeywords", + "text": { + "weights": { + "HotelName": 2, + "Description": 5 + } + } + } +] ``` <a name="functions"></a> ### Using functions -Use functions when simple relative weights are insufficient or don't apply, as is the case of distance and freshness, which are calculations over numeric data. 
You can specify multiple functions per scoring profile. +Use functions when simple relative weights are insufficient or don't apply, as is the case with distance and freshness, which are calculations over numeric data. You can specify multiple functions per scoring profile. For more information about the EDM data types used in Cognitive Search, see [Supported data types](/rest/api/searchservice/supported-data-types). | Function | Description | |-|-| |
search | Search Get Started Vector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md | Azure Cognitive Search is a billable resource. If it's no longer needed, delete ## Next steps -As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), or [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet). +As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet), or [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript). |
security | Azure CA Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md | This article provides the details of the root and subordinate Certificate Author Any entity trying to access Azure Active Directory (Azure AD) identity services via the TLS/SSL protocols will be presented with certificates from the CAs listed in this article. Different services may use different root or intermediate CAs. The following root and subordinate CAs are relevant to entities that use [certificate pinning](certificate-pinning.md). - **How to read the certificate details:** - The Serial Number (top string in the table) contains the hexadecimal value of the certificate serial number. - The Thumbprint (bottom string in the table) is the SHA1 thumbprint. Any entity trying to access Azure Active Directory (Azure AD) identity services | [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 | | [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 | | [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |+| [*Microsoft Azure ECC TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 | +| [*Microsoft Azure ECC TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 | +| [*Microsoft Azure ECC TLS Issuing CA 
04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x02393d48d702425a7cb41c000b0ed7ca<br>FB73FDC24F06998E070A06B6AFC78FDF2A155B25 | +| [*Microsoft Azure ECC TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004.crt) | 0x33000000322164aedab61f509d000000000032<br>406E3B38EFF35A727F276FE993590B70F8224AED | | [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 | | [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 | | [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 | | [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |+| [*Microsoft Azure ECC TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0f1f157582cdcd33734bdc5fcd941a33<br>3BE6CA5856E3B9709056DA51F32CBC8970A83E28 | +| [*Microsoft Azure ECC TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007.crt) | 0x3300000034c732435db22a0a2b000000000034<br>AB3490B7E37B3A8A1E715036522AB42652C3CFFE | +| [*Microsoft Azure ECC TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 | +| [*Microsoft Azure ECC TLS Issuing CA 
08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008.crt) | 0x3300000031526979844798bbb8000000000031<br>CF33D5A1C2F0355B207FCE940026E6C1580067FD | +| [*Microsoft Azure RSA TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x05196526449a5e3d1a38748f5dcfebcc<br>F9388EA2C9B7D632B66A2B0B406DF1D37D3901F6 | +| [*Microsoft Azure RSA TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003.crt) | 0x330000003968ea517d8a7e30ce000000000039<br>37461AACFA5970F7F2D2BAC5A659B53B72541C68 | +| [*Microsoft Azure RSA TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x09f96ec295555f24749eaf1e5dced49d<br>BE68D0ADAA2345B48E507320B695D386080E5B25 | +| [*Microsoft Azure RSA TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004.crt) | 0x330000003cd7cb44ee579961d000000000003c<br>7304022CA8A9FF7E3E0C1242E0110E643822C45E | +| [*Microsoft Azure RSA TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0a43a9509b01352f899579ec7208ba50<br>3382517058A0C20228D598EE7501B61256A76442 | +| [*Microsoft Azure RSA TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 | +| [*Microsoft Azure RSA TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC | +| [*Microsoft Azure RSA TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 
0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E | | [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 | | [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 | | [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA | Any entity trying to access Azure Active Directory (Azure AD) identity services | [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 | | [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 | | [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |-| [*Microsoft ECC TLS Issuing AOC CA 01*](https://crt.sh/?d=4789656467) | 0x33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 | -| [*Microsoft ECC TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787086) | 0x33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 | -| [*Microsoft ECC TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787088) | 0x330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 | -| [*Microsoft ECC TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787085) | 0x330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a | +| [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) | 
0x33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 | +| [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) | 0x33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 | +| [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) | 0x330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 | +| [Microsoft ECC TLS Issuing EOC CA 02](https://crt.sh/?d=4814787085) | 0x330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a | | [Microsoft RSA TLS CA 01](https://crt.sh/?d=3124375355) | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A | | [Microsoft RSA TLS CA 02](https://crt.sh/?d=3124375356) | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 |-| [*Microsoft RSA TLS Issuing AOC CA 01*](https://crt.sh/?d=4789678141) | 0x330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee | -| [*Microsoft RSA TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787092) | 0x3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e | -| [*Microsoft RSA TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787098) | 0x33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 | -| [*Microsoft RSA TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787087) | 0x3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 | --+| [Microsoft RSA TLS Issuing AOC CA 01](https://crt.sh/?d=4789678141) | 0x330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee | +| [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092) | 0x3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e | +| [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) | 0x33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 | +| [Microsoft RSA TLS 
Issuing EOC CA 02](https://crt.sh/?d=4814787087) | 0x3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 | # [Certificate Authority chains](#tab/certificate-authority-chains) Any entity trying to access Azure Active Directory (Azure AD) identity services | [**DigiCert Global Root G2**](https://crt.sh/?d=8656329) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | | └ [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 | | └ [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |+| └ [*Microsoft Azure RSA TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x05196526449a5e3d1a38748f5dcfebcc<br>F9388EA2C9B7D632B66A2B0B406DF1D37D3901F6 | +| └ [*Microsoft Azure RSA TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x09f96ec295555f24749eaf1e5dced49d<br>BE68D0ADAA2345B48E507320B695D386080E5B25 | +| └ [*Microsoft Azure RSA TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0a43a9509b01352f899579ec7208ba50<br>3382517058A0C20228D598EE7501B61256A76442 | +| └ [*Microsoft Azure RSA TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC | | └ [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 
0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 | | └ [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 | | [**DigiCert Global Root G3**](https://crt.sh/?d=8568700) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E | | └ [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 | | └ [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 |+| └ [*Microsoft Azure ECC TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 | +| └ [*Microsoft Azure ECC TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x02393d48d702425a7cb41c000b0ed7ca<br>FB73FDC24F06998E070A06B6AFC78FDF2A155B25 | +| └ [*Microsoft Azure ECC TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0f1f157582cdcd33734bdc5fcd941a33<br>3BE6CA5856E3B9709056DA51F32CBC8970A83E28 | +| └ [*Microsoft Azure ECC TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 | | └ [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 
0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 | | └ [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 | | [**Microsoft ECC Root Certificate Authority 2017**](https://crt.sh/?d=2565145421) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 | | └ [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 | | └ [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |+| └ [*Microsoft Azure ECC TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 | +| └ [*Microsoft Azure ECC TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004.crt) | 0x33000000322164aedab61f509d000000000032<br>406E3B38EFF35A727F276FE993590B70F8224AED | | └ [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 | | └ [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |-| └ [*Microsoft ECC TLS Issuing AOC CA 01*](https://crt.sh/?d=4789656467) |33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 | -| └ [*Microsoft ECC TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787086) |33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 | -| └ [*Microsoft ECC TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787088) 
|330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 | -| └ [*Microsoft ECC TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787085) |330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a | +| └ [*Microsoft Azure ECC TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007.crt) | 0x3300000034c732435db22a0a2b000000000034<br>AB3490B7E37B3A8A1E715036522AB42652C3CFFE | +| └ [*Microsoft Azure ECC TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008.crt) | 0x3300000031526979844798bbb8000000000031<br>CF33D5A1C2F0355B207FCE940026E6C1580067FD | +| └ [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) |33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 | +| └ [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) |33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 | +| └ [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) |330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 | +| └ [Microsoft ECC TLS Issuing EOC CA 02](https://crt.sh/?d=4814787085) |330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a | | [**Microsoft RSA Root Certificate Authority 2017**](https://crt.sh/?id=2565151295) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |+| └ [*Microsoft Azure RSA TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003.crt) | 0x330000003968ea517d8a7e30ce000000000039<br>37461AACFA5970F7F2D2BAC5A659B53B72541C68 | +| └ [*Microsoft Azure RSA TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004.crt) | 0x330000003cd7cb44ee579961d000000000003c<br>7304022CA8A9FF7E3E0C1242E0110E643822C45E | +| └ 
[*Microsoft Azure RSA TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 | +| └ [*Microsoft Azure RSA TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E | | └ [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 | | └ [Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032) | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 | | └ [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 | | └ [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |-| └ [*Microsoft RSA TLS Issuing AOC CA 01*](https://crt.sh/?d=4789678141) |330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee | -| └ [*Microsoft RSA TLS Issuing AOC CA 02*](https://crt.sh/?d=4814787092) |3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e | -| └ [*Microsoft RSA TLS Issuing EOC CA 01*](https://crt.sh/?d=4814787098) |33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 | -| └ [*Microsoft RSA TLS Issuing EOC CA 02*](https://crt.sh/?d=4814787087) |3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 | +| └ [Microsoft RSA TLS Issuing AOC CA 01](https://crt.sh/?d=4789678141) |330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee | +| └ [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092)
|3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e | +| └ [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) |33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 | +| └ [Microsoft RSA TLS Issuing EOC CA 02](https://crt.sh/?d=4814787087) |3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 | Microsoft updated Azure services to use TLS certificates from a different set of ### Article change log +- July 17, 2023: Added 16 new subordinate Certificate Authorities - February 7, 2023: Added eight new subordinate Certificate Authorities-- March 1, 2023: Provided timelines for upcoming sub CA expiration ## Next steps |
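Each row in the table above pairs a serial number with the certificate's SHA-1 thumbprint. If you want to confirm a downloaded CA file against the thumbprint column, you can compute the fingerprint locally; a minimal sketch assuming `openssl` is installed (the throwaway self-signed certificate below only stands in for a downloaded `.cer`/`.crt` file):

```shell
# Create a throwaway self-signed certificate to demonstrate the command;
# for a real check, point -in at the downloaded CA file instead
# (add -inform der if the download is DER-encoded).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Print the SHA-1 fingerprint; compare it with the thumbprint column (ignore the colons)
openssl x509 -in /tmp/demo-cert.pem -noout -fingerprint -sha1
```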
sentinel | Deploy Dynamics 365 Finance Operations Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dynamics-365/deploy-dynamics-365-finance-operations-solution.md | Before you begin, verify that: ## Deploy the solution and enable the data connector 1. Navigate to the **Microsoft Sentinel** service.-1. Select **Content hub**, and in the search bar, search for *Dynamics 365 F&O*. -1. Select **Dynamics 365 F&O**. +1. Select **Content hub**, and in the search bar, search for *Dynamics 365 Finance and Operations*. +1. Select **Dynamics 365 Finance and Operations**. 1. Select **Install**. For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](../sentinel-solutions-deploy.md). |
sentinel | Normalization Develop Parsers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md | Handle the results as follows: To make sure that your parser produces valid values, use the ASIM data tester by running the following query in the Microsoft Sentinel **Logs** page: ```KQL- <parser name> | limit <X> | invoke ASimDataTester ( ['<schema>'] ) + <parser name> | limit <X> | invoke ASimDataTester ('<schema>') ``` Specifying a schema is optional. If a schema is not specified, the `EventSchema` field is used to identify the schema the event should adhere to. If an event does not include an `EventSchema` field, only common fields will be verified. If a schema is specified as a parameter, this schema will be used to test all records. This is useful for older parsers that do not set the `EventSchema` field. |
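For illustration, here are the two invocation styles described above side by side; `MyParser` is a placeholder for your own ASIM parser function name, and `Dns` stands in for whichever schema you want to enforce:

```KQL
// Let each record's EventSchema field select the schema to validate against
MyParser | limit 100 | invoke ASimDataTester ()

// Force every record to be tested against the Dns schema
// (useful for older parsers that don't set the EventSchema field)
MyParser | limit 100 | invoke ASimDataTester ('Dns')
```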
service-fabric | How To Managed Cluster Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md | Sample templates are available: [Service Fabric cross availability zone template A Service Fabric cluster distributed across Availability Zones ensures high availability of the cluster state. -The recommended topology for managed cluster requires the resources outlined below: +The recommended topology for managed cluster requires the following resources: * The cluster SKU must be Standard-* Primary node type should have at least nine nodes for best resiliency, but supports minimum number of six. +* Primary node type should have at least nine nodes (3 in each AZ) for best resiliency, but supports a minimum number of six (2 in each AZ). * Secondary node type(s) should have at least six nodes for best resiliency, but supports a minimum number of three. >[!NOTE] Sample node list depicting FD/UD formats in a virtual machine scale set spanning When a service is deployed on the node types that are spanning zones, the replicas are placed to ensure they end up in separate zones. This separation is ensured as the fault domains on the nodes present in each of these node types are configured with the zone information (for example, FD = fd:/zone1/1). For example: for five replicas or instances of a service, the distribution will be 2-2-1 and the runtime will try to ensure equal distribution across AZs. **User Service Replica Configuration**:-Stateful user services deployed on the cross-availability zone node types should be configured with this configuration: replica count with target = 9, min = 5. This configuration helps the service to be working even when one zone goes down since 6 replicas will be still up in the other two zones. An application upgrade in such a scenario will also go through.
+Stateful user services deployed on the cross-availability zone node types should be configured with this configuration: replica count with target = 9, min = 5. This configuration helps the service keep working even when one zone goes down, since six replicas will still be up in the other two zones. An application upgrade in such a scenario will also go through. **Zone down scenario**:-When a zone goes down, all the nodes in that zone appear as down. Service replicas on these nodes will also be down. Since there are replicas in the other zones, the service continues to be responsive with primary replicas failing over to the zones which are functioning. The services will appear in warning state as the target replica count is not met and the VM count is still more than the defined min target replica size. As a result, Service Fabric load balancer brings up replicas in the working zones to match the configured target replica count. At this point, the services should appear healthy. When the zone that was down comes back up, the load balancer will again spread all the service replicas evenly across all the zones. +When a zone goes down, all the nodes in that zone appear as down. Service replicas on these nodes will also be down. Since there are replicas in the other zones, the service continues to be responsive, with primary replicas failing over to the zones that are functioning. The services will appear in a warning state because the target replica count is not met and the VM count is still more than the defined min target replica size. As a result, the Service Fabric load balancer brings up replicas in the working zones to match the configured target replica count. At this point, the services should appear healthy. When the zone that was down comes back up, the load balancer will again spread all the service replicas evenly across all the zones.
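As a sketch of the target = 9, min = 5 guidance above, the corresponding stateful service properties in a managed cluster ARM template look roughly like this (the resource name is a placeholder; treat this as illustrative, not a complete resource definition):

```json
{
  "type": "Microsoft.ServiceFabric/managedClusters/applications/services",
  "name": "myCluster/myApp/myStatefulService",
  "properties": {
    "serviceKind": "Stateful",
    "hasPersistedState": true,
    "targetReplicaSetSize": 9,
    "minReplicaSetSize": 5
  }
}
```

With one zone down, three of the nine replicas are lost; the remaining six still satisfy min = 5, so the service stays up and an application upgrade can still proceed.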
## Networking Configuration For more information, see [Configure network settings for Service Fabric managed clusters](./how-to-managed-cluster-networking.md) ## Enabling a zone resilient Azure Service Fabric managed cluster-To enable a zone resilient Azure Service Fabric managed cluster, you must include the following in the managed cluster resource definition. --* The **ZonalResiliency** property, which specifies if the cluster is zone resilient or not. +To enable a zone resilient Azure Service Fabric managed cluster, you must include the following **ZonalResiliency** property, which specifies if the cluster is zone resilient or not. ```json { To enable a zone resilient Azure Service Fabric managed cluster, you must includ } ``` -## Migrate an existing non-zone resilient cluster to Zone Resilient (Preview) -Existing Service Fabric managed clusters which are not spanned across availability zones can now be migrated in-place to span availability zones. Supported scenarios include clusters created in regions that have three availability zones as well as clusters in regions where three availability zones are made available post-deployment. +## Migrate an existing nonzone resilient cluster to Zone Resilient (Preview) +Existing Service Fabric managed clusters that are not spanned across availability zones can now be migrated in-place to span availability zones. Supported scenarios include clusters created in regions that have three availability zones and clusters in regions where three availability zones are made available post-deployment. Requirements: * Standard SKU cluster Requirements: } ``` - If the Public IP resource is not zone resilient, migration of the cluster will cause a brief loss of external connectivity. This is due to the migration setting up new Public IP and updating the cluster FQDN to the new IP. If the Public IP resource is zone resilient, migration will not modify the Public IP resource or FQDN and there will be no external connectivity impact. 
+ If the Public IP resource is not zone resilient, migration of the cluster will cause a brief loss of external connectivity. This connection loss is due to the migration setting up a new Public IP and updating the cluster FQDN to the new IP. If the Public IP resource is zone resilient, migration will not modify the Public IP resource or the FQDN, and there will be no external connectivity impact. 2) Initiate conversion of the underlying storage account created for the managed cluster from LRS to ZRS using [customer-initiated conversion](../storage/common/redundancy-migration.md#customer-initiated-conversion). The resource group of the storage account that needs to be migrated is of the form "SFC_ClusterId" (for example, SFC_9240df2f-71ab-4733-a641-53a8464d992d) under the same subscription as the managed cluster resource. 3) Add a new primary node type that spans availability zones - This step will trigger the resource provider to perform the migration of the primary node type and Public IP along with a cluster FQDN DNS update, if needed, to become zone resilient. Use the above API to understand implication of this step. + This step triggers the resource provider to perform the migration of the primary node type and Public IP along with a cluster FQDN DNS update, if needed, to become zone resilient. Use the above API to understand the implications of this step.
* Use apiVersion 2022-02-01-preview or higher.-* Add a new primary node type to the cluster with zones parameter set to ["1", "2", "3"] as show below: +* Add a new primary node type to the cluster with the zones parameter set to ["1", "2", "3"] as shown below: ```json { "apiVersion": "2022-02-01-preview", Requirements: ```http POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName}/getazresiliencystatus?api-version=2022-02-01-preview ```- This should provide with response similar to: + This API call should provide a response similar to: ```json { "baseResourceStatus" :[ Requirements: If you run into any problems, reach out to support for assistance. ## Enable FastZonalUpdate on Service Fabric managed clusters (preview)-Service Fabric managed clusters support faster cluster and application upgrades by reducing the max upgrade domains per availability zone. The default configuration right now can have at most 15 UDs in multiple AZ nodetype. This huge number of UDs reduced the upgrade velocity. Using the new configuration, the max UDs are reduced, which results in faster updates, keeping the safety of the upgrades intact. +Service Fabric managed clusters support faster cluster and application upgrades by reducing the max upgrade domains (UDs) per availability zone. The default configuration can have at most 15 UDs in a multiple-AZ node type. This large number of UDs reduces the upgrade velocity. The new configuration reduces the max UDs, which results in faster updates while keeping the safety of the upgrades intact.
+The update should be done via ARM template by setting the zonalUpdateMode property to "fast" and then modifying a node type attribute, such as adding a node to and then removing it from each node type (see required steps 2 and 3). The Service Fabric managed cluster resource apiVersion should be 2022-10-01-preview or later. 1. Modify the ARM template with the new property mentioned above. ```json |
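As an illustrative fragment only (not a complete template), the property set in step 1 sits on the managed cluster resource roughly like this:

```json
{
  "type": "Microsoft.ServiceFabric/managedClusters",
  "apiVersion": "2022-10-01-preview",
  "properties": {
    "zonalUpdateMode": "Fast"
  }
}
```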
service-fabric | Service Fabric Scale Up Primary Node Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-scale-up-primary-node-type.md | Title: Scale up an Azure Service Fabric primary node type description: Vertically scale your Service Fabric cluster by adding a new node type and removing the previous one. --++ Last updated 09/20/2022 # Scale up a Service Fabric cluster primary node type -This article describes how to scale up a Service Fabric cluster primary node type with minimal downtime. In-place SKU upgrades are not supported on Service Fabric cluster nodes, as such operations potentially involve data and availability loss. The safest, most reliable, and recommended method for scaling up a Service Fabric node type is to: +This article describes how to scale up a Service Fabric cluster primary node type with minimal downtime. In-place SKU upgrades aren't supported on Service Fabric cluster nodes, as such operations potentially involve data and availability loss. The safest, most reliable, and recommended method for scaling up a Service Fabric node type is to: 1. Add a new node type to your Service Fabric cluster, backed by your upgraded (or modified) virtual machine scale set SKU and configuration. This step also involves setting up a new load balancer, subnet, and public IP for the scale set. The following commands will guide you through generating a new self-signed certi ### Generate a self-signed certificate and deploy the cluster -First, assign the variables you'll need for Service Fabric cluster deployment. Adjust the values for `resourceGroupName`, `certSubjectName`, `parameterFilePath`, and `templateFilePath` for your specific account and environment: +First, assign the variables you need for Service Fabric cluster deployment. 
Adjust the values for `resourceGroupName`, `certSubjectName`, `parameterFilePath`, and `templateFilePath` for your specific account and environment: ```powershell # Assign deployment variables Import-PfxCertificate ` -Password (ConvertTo-SecureString Password!1 -AsPlainText -Force) ``` -The operation will return the certificate thumbprint, which you can now use to [connect to the new cluster](#connect-to-the-new-cluster-and-check-health-status) and check its health status. (Skip the following section, which is an alternate approach to cluster deployment.) +The operation returns the certificate thumbprint, which you can now use to [connect to the new cluster](#connect-to-the-new-cluster-and-check-health-status) and check its health status. (Skip the following section, which is an alternate approach to cluster deployment.) ### Use an existing certificate to deploy the cluster -Alternately, you can use an existing Azure Key Vault certificate to deploy the test cluster. To do this, you'll need to [obtain references to your Key Vault](#obtain-your-key-vault-references) and certificate thumbprint. +Alternately, you can use an existing Azure Key Vault certificate to deploy the test cluster. To do this, you need to [obtain references to your Key Vault](#obtain-your-key-vault-references) and certificate thumbprint. ```powershell # Key Vault variables Connect-ServiceFabricCluster ` Get-ServiceFabricClusterHealth ``` -With that, we're ready to begin the upgrade procedure. +Now, we're ready to begin the upgrade procedure. ## Deploy a new primary node type with upgraded scale set -In order to upgrade (vertically scale) a node type, we'll first need to deploy a new node type backed by a new scale set and supporting resources. The new scale set will be marked as primary (`isPrimary: true`), just like the original scale set. 
If you want to scale up a non-primary node type, see [Scale up a Service Fabric cluster non-primary node type](service-fabric-scale-up-non-primary-node-type.md). The resources created in the following section will ultimately become the new primary node type in your cluster, and the original primary node type resources will be deleted. +In order to upgrade (vertically scale) a node type, we'll first need to deploy a new node type backed by a new scale set and supporting resources. The new scale set will be marked as primary (`isPrimary: true`), just like the original scale set. If you want to scale up a non-primary node type, see [Scale up a Service Fabric cluster non-primary node type](service-fabric-scale-up-non-primary-node-type.md). The resources created in the following section will ultimately become the new primary node type in your cluster, and the original primary node type resources are deleted. ### Update the cluster template with the upgraded scale set Here are the section-by-section modifications of the original cluster deployment The required changes for this step have already been made for you in the [*Step1-AddPrimaryNodeType.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade/Step1-AddPrimaryNodeType.json) template file, and the following will explain these changes in detail. If you prefer, you can skip the explanation and continue to [obtain your Key Vault references](#obtain-your-key-vault-references) and [deploy the updated template](#deploy-the-updated-template) that adds a new primary node type to your cluster. > [!Note]-> Ensure that you use names that are unique from the original node type, scale set, load balancer, public IP, and subnet of the original primary node type, as these resources will be deleted at a later step in the process. 
+> Ensure that you use names that are unique from the original node type, scale set, load balancer, public IP, and subnet of the original primary node type, as these resources are deleted at a later step in the process. #### Create a new subnet in the existing virtual network OS SKU } ``` +### If you're changing OS SKU in a Linux Cluster ++In a Windows cluster, the value of the vmImage property is 'Windows', while in a Linux cluster it's the name of the OS image used, for example Ubuntu20_04 (use the latest VM image name). ++So, if you're changing the VM image (OS SKU) in a Linux cluster, update the vmImage setting on the Service Fabric cluster resource as well. ++```json +#Update the property vmImage with the required OS name in your ARM template +{ +  "vmImage": "[parameters('newVmImageName')]" +} +``` +Note: An example value for newVmImageName is Ubuntu20_04. ++You can also update the cluster resource by using the following PowerShell command: +```powershell +# Update cluster vmImage to target OS. This registers the SF runtime package type that is supplied for upgrades. +Update-AzServiceFabricVmImage -ResourceGroupName $resourceGroup -ClusterName $clusterName -VmImage Ubuntu20_04 +``` + Also, ensure you include any additional extensions that are required for your workload. #### Add a new primary node type to the cluster Once you've implemented all the changes in your template and parameters files, p ### Obtain your Key Vault references -To deploy the updated configuration, you'll need several references to the cluster certificate stored in your Key Vault. The easiest way to find these values is through Azure portal. You'll need: +To deploy the updated configuration, you need several references to the cluster certificate stored in your Key Vault. The easiest way to find these values is through Azure portal.
You need: * **The Key Vault URL of your cluster certificate.** From your Key Vault in Azure portal, select **Certificates** > *Your desired certificate* > **Secret Identifier**: To deploy the updated configuration, you'll need several references to the clust $certUrlValue="https://sftestupgradegroup.vault.azure.net/secrets/sftestupgradegroup20200309235308/dac0e7b7f9d4414984ccaa72bfb2ea39" ``` -* **The thumbprint of your cluster certificate.** (You probably already have this if you [connected to the initial cluster](#connect-to-the-new-cluster-and-check-health-status) to check its health status.) From the same certificate blade (**Certificates** > *Your desired certificate*) in Azure portal, copy **X.509 SHA-1 Thumbprint (in hex)**: +* **The thumbprint of your cluster certificate.** (You probably already have the certificate if you [connected to the initial cluster](#connect-to-the-new-cluster-and-check-health-status) to check its health status.) From the same certificate blade (**Certificates** > *Your desired certificate*) in Azure portal, copy **X.509 SHA-1 Thumbprint (in hex)**: ```powershell $thumb = "BB796AA33BD9767E7DA27FE5182CF8FDEE714A70" First remove the `isPrimary` designation in the template from the original node } ``` -Then deploy the template with the update. This will initiate the migration of seed nodes to the new scale set. +Then deploy the template with the update. This deployment initiates the migration of seed nodes to the new scale set. ```powershell $templateFilePath = "C:\Step2-UnmarkOriginalPrimaryNodeType.json" New-AzResourceGroupDeployment ` > [!Note] > It will take some time to complete the seed node migration to the new scale set. To guarantee data consistency, only one seed node can change at a time. Each seed node change requires a cluster update; thus replacing a seed node requires two cluster upgrades (one each for node addition and removal). Upgrading the five seed nodes in this sample scenario will result in ten cluster upgrades. 
-Use Service Fabric Explorer to monitor the migration of seed nodes to the new scale set. The nodes of the original node type (nt0vm) should all be *false* in the **Is Seed Node** column, and those of the new node type (nt1vm) will be *true*. +Use Service Fabric Explorer to monitor the migration of seed nodes to the new scale set. The nodes of the original node type (nt0vm) should all be *false* in the **Is Seed Node** column, and those of the new node type (nt1vm) should be *true*. ### Disable the nodes in the original node type scale set Use Service Fabric Explorer to monitor the progression of nodes in the original :::image type="content" source="./media/scale-up-primary-node-type/service-fabric-explorer-node-status.png" alt-text="Service Fabric Explorer showing status of disabled nodes"::: -For Silver and Gold durability, some nodes will go into Disabled state, while others might remain in a *Disabling* state. In Service Fabric Explorer, check the **Details** tab of nodes in Disabling state. If they show a *Pending Safety Check* of Kind *EnsurePartitionQuorem* (ensuring quorum for infrastructure service partitions), then it is safe to continue. +For Silver and Gold durability, some nodes will go into Disabled state, while others might remain in a *Disabling* state. In Service Fabric Explorer, check the **Details** tab of nodes in Disabling state. If they show a *Pending Safety Check* of Kind *EnsurePartitionQuorum* (ensuring quorum for infrastructure service partitions), then it's safe to continue.
:::image type="content" source="./media/scale-up-primary-node-type/service-fabric-explorer-node-status-disabling.png" alt-text="You can proceed with stopping data and removing nodes stuck in 'Disabling' status if they show a pending safety check of kind 'EnsurePartitionQuorum'."::: Remove-AzResource -ResourceName $scaleSetName -ResourceType $scaleSetResourceTyp ### Delete the original IP and load balancer resources -You can now delete the original IP, and load balancer resources. In this step you will also update the DNS name. +You can now delete the original IP and load balancer resources. In this step you'll also update the DNS name. > [!Note] > This step is optional if you're already using a *Standard* SKU public IP and load balancer. In this case you could have multiple scale sets / node types under the same load balancer. The upgrade will change settings to the *InfrastructureService*; therefore, a no Once the deployment has completed, verify in Azure portal that the Service Fabric resource Status is *Ready*. Verify you can reach the new Service Fabric Explorer endpoint, the **Cluster Health State** is *OK*, and any deployed applications function properly. -With that, you've vertically scaled a cluster primary node type! +Now, you've vertically scaled a cluster primary node type! ## Next steps |
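Once the upgrade completes, the connect-and-verify step referenced above can be sketched with the Service Fabric PowerShell module; the endpoint is a placeholder for your cluster's DNS name, and `$thumb` is the certificate thumbprint obtained earlier:

```powershell
# Connect to the cluster using the certificate thumbprint obtained earlier
Connect-ServiceFabricCluster `
    -ConnectionEndpoint "<cluster-dns-name>:19000" `
    -X509Credential `
    -ServerCertThumbprint $thumb `
    -FindType FindByThumbprint -FindValue $thumb `
    -StoreLocation CurrentUser -StoreName My

# The cluster health state should report OK after the scale-up
Get-ServiceFabricClusterHealth
```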
spring-apps | How To Elastic Apm Java Agent Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-elastic-apm-java-agent-monitor.md | -**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise +**This article applies to:** ✔️ Basic/Standard ❌ Enterprise This article explains how to use Elastic APM Agent to monitor Spring Boot applications running in Azure Spring Apps. Use the following steps to enable custom persistent storage: ## Activate Elastic APM Java Agent -Before proceeding, you'll need your Elastic APM server connectivity information handy, which assumes you've deployed Elastic on Azure. For more information, see [How to deploy and manage Elastic on Microsoft Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement). To get this information, use the following steps: +Before proceeding, you need your Elastic APM server connectivity information handy, which assumes you've deployed Elastic on Azure. For more information, see [How to deploy and manage Elastic on Microsoft Azure](https://www.elastic.co/blog/getting-started-with-the-azure-integration-enhancement). To get this information, use the following steps: 1. In the Azure portal, go to the **Overview** page of your Elastic deployment, then select **Manage Elastic Cloud Deployment**. To configure the environment variables in an ARM template, add the following cod ## Upgrade Elastic APM Java Agent -To plan your upgrade, see [Upgrade versions](https://www.elastic.co/guide/en/cloud/current/ec-upgrade-deployment.html) for Elastic Cloud on Azure, and [Breaking Changes](https://www.elastic.co/guide/en/apm/server/current/breaking-changes.html) for APM. After you've upgraded APM Server, upload the Elastic APM Java agent JAR file in the custom persistent storage and restart apps with updated JVM options pointing to the upgraded Elastic APM Java agent JAR. 
+To plan your upgrade, see [Upgrade versions](https://www.elastic.co/guide/en/cloud/current/ec-upgrade-deployment.html) for Elastic Cloud on Azure, and [Breaking Changes](https://www.elastic.co/guide/en/apm/server/current/breaking-changes.html) for APM. After you've upgraded APM Server, upload the Elastic APM Java agent JAR file in the custom persistent storage. Then, restart your apps with the updated JVM options pointing to the upgraded Elastic APM Java agent JAR. ## Monitor applications and metrics with Elastic APM Use the following steps to monitor applications and metrics: :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png" alt-text="Elastic / Kibana screenshot showing A P M search results." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png"::: -Kibana APM is the curated application to support Application Monitoring workflows. Here you can view high-level details such as request/response times, throughput, transactions in a service with most impact on the duration. +Kibana APM is the curated application to support Application Monitoring workflows. Here you can view high-level details such as request/response times, throughput, and the transactions in a service with the most impact on the duration. :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-customer-service.png" alt-text="Elastic / Kibana screenshot showing A P M Services Overview page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-customer-service.png"::: |
spring-apps | How To New Relic Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-new-relic-monitor.md | Use the following procedure to access the agent: ```azurecli az spring app create \ --resource-group <resource-group-name> \- --service <Azure-Spring-Apps-instance-name> + --service <Azure-Spring-Apps-instance-name> \ --name <app-name> \ --is-public true \ ``` |
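For context, the `az spring app create` snippet above is typically followed by a deploy step that attaches the New Relic agent through JVM options and environment variables; a hedged sketch only (the agent mount path and variable values are assumptions, substitute your own):

```azurecli
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --artifact-path app.jar \
    --jvm-options="-javaagent:/<mount-path>/newrelic/newrelic.jar" \
    --env NEW_RELIC_APP_NAME=<app-name> NEW_RELIC_LICENSE_KEY=<license-key>
```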
spring-apps | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/whats-new.md | Azure Spring Apps is improved on an ongoing basis. To help you stay up to date w This article is updated quarterly, so revisit it regularly. You can also visit [Azure updates](https://azure.microsoft.com/updates/?query=azure%20spring), where you can search for updates or browse by category. +## June 2023 ++The following updates offer two new plans: ++- **Azure Spring Apps consumption plan**: This plan provides a new way to pay for Azure Spring Apps. With super-efficient pricing and serverless capabilities, you can deploy apps that scale based on usage, paying only for resources consumed. For more information, see [Start from zero and scale to zero – Azure Spring Apps consumption plan](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/start-from-zero-and-scale-to-zero-azure-spring-apps-consumption/ba-p/3774825). ++- **Azure Spring Apps Consumption and Dedicated plans**: The Standard Dedicated plan provides a fully managed, dedicated environment for running Spring applications on Azure. This plan offers you customizable compute options (including memory optimization), single tenancy, and high availability to help you achieve price predictability, cost savings, and performance for running Spring applications at scale. For more information, see [Unleash Spring apps in a flex environment with Azure Spring Apps Consumption and Dedicated plans](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/unleash-spring-apps-in-a-flex-environment-with-azure-spring-apps/ba-p/3828232). ++The following update is now available in all plans: ++- **Azure Migrate for Spring Apps**: Discover and assess your Spring workloads for cloud readiness and get a price estimate for Azure Spring Apps using Azure Migrate. For more information, see [Discover and Assess Spring Apps with Azure Migrate - Preview Sign-Up](https://aka.ms/discover-spring-apps).
++The following update is now available in the Consumption and Basic/Standard plans: ++- **Azure Developer CLI (azd) for Azure Spring Apps**: Azure Developer CLI (azd) is an open-source tool that accelerates the time it takes for you to get your application from your local development environment to Azure. You can now initialize, package, provision, and deploy a Spring application to Azure Spring Apps with only a few commands. Try it out using [Quickstart: Deploy your first web application to Azure Spring Apps](quickstart-deploy-web-app.md). ++The following updates are now available in the Enterprise plan: ++- **Shareable build result among Azure Spring Apps Enterprise instances (preview)**: This update enables you to have full visibility for Azure Spring Apps built images through bring-your-own Azure Container Registry (ACR) to support the following scenarios: ++ - Build and test in a PREPROD environment and deploy to multiple PROD environments with the verified images. + - Orchestrate a secure CI/CD pipeline to plug in any steps between build and deploy actions. ++ For more information, see [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md) and [Use Azure Spring Apps CI/CD with GitHub Actions](how-to-github-actions.md?pivots=programming-language-java). ++- **High Availability support for App Accelerator and App Live View**: App Accelerator and App Live View now support multiple replicas to offer high availability. For more information, see [Configure Tanzu Dev Tools in the Azure Spring Apps Enterprise plan](how-to-use-dev-tool-portal.md). ++- **Spring Cloud Gateway auto scaling**: Spring Cloud Gateway now supports auto scaling to better serve elastic traffic without the hassle of manual scaling. 
For more information, see the [Set up autoscale settings for VMware Spring Cloud Gateway in Azure CLI](how-to-configure-enterprise-spring-cloud-gateway.md?tabs=Azure-portal#set-up-autoscale-settings-for-vmware-spring-cloud-gateway-in-azure-cli) section of [Configure VMware Spring Cloud Gateway](how-to-configure-enterprise-spring-cloud-gateway.md). ++- **Application Configuration Service – polyglot support**: This update enables you to use Application Configuration Service to manage external configurations for any polyglot app, such as .NET, Go, and so on. For more information, see the [Polyglot support](how-to-enterprise-application-configuration-service.md#polyglot-support) section of [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md). ++- **Application Configuration Service – enhanced performance and security**: This update provides a dramatic performance enhancement in Git monitoring operations. This enhancement enables faster updates for configuration and certificate verification over TLS between Application Configuration Service and Git repos. For more information, see [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md). ++- **1000 app instance support (preview)**: We've increased the maximum app instance count for one Azure Spring Apps Enterprise service instance to 1000 to support large-scale microservice clusters. For more information, see [Quotas and service plans for Azure Spring Apps](quotas.md). ++- **App Accelerator certificate verification**: This update provides certificate verification over TLS between App Accelerator and Git repos. For more information, see the [Configure accelerators with a self-signed certificate](how-to-use-accelerator.md#configure-accelerators-with-a-self-signed-certificate) section of [Use VMware Tanzu Application Accelerator with the Azure Spring Apps Enterprise plan](how-to-use-accelerator.md). 
+ ## March 2023 -The following updates are now available in both Basic/Standard and Enterprise plan: +The following updates are now available in both the Basic/Standard and Enterprise plans: - **Source code assessment for migration**: Assess your existing on-premises Spring applications for their readiness to migrate to Azure Spring Apps with Cloud Suitability Analyzer. This tool provides information on what types of changes are needed for migration, and how much effort is involved. For more information, see [Assess Spring applications with Cloud Suitability Analyzer](/azure/developer/java/migration/cloud-suitability-analyzer). The following updates are now available in the Enterprise plan: ## December 2022 -The following updates are now available in both Basic/Standard and Enterprise plan: +The following updates are now available in both the Basic/Standard and Enterprise plans: - **Ingress Settings**: With ingress settings, you can manage Azure Spring Apps traffic on the application level. This capability includes protocol support for gRPC, WebSocket and RSocket-on-WebSocket, session affinity, and send/read timeout. For more information, see [Customize the ingress configuration in Azure Spring Apps](how-to-configure-ingress.md). |
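The Azure Developer CLI bullet in the June 2023 updates condenses to a couple of commands. A minimal sketch (the template name is a placeholder assumption; see the linked quickstart for the exact value):

```azdeveloper
# Scaffold a Spring project from a template (downloads app code plus infra-as-code files).
azd init --template <spring-apps-template-name>

# Provision Azure resources, package the app, and deploy it in one step.
azd up
```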
storage | Upgrade To Data Lake Storage Gen2 How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md | -# Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities +# Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities -This article helps you to enable a hierarchical namespace and unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. +This article helps you to enable a hierarchical namespace and unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. To learn more about these capabilities and evaluate the impact of this upgrade on workloads, applications, costs, service integrations, tools, features, and documentation, see [Upgrading Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2.md). To learn more about these capabilities and evaluate the impact of this upgrade o ## Prepare to upgrade -1. Review feature support +To prepare to upgrade your storage account to Data Lake Storage Gen2: - Your account might be configured to use features that aren't yet supported in Data Lake Storage Gen2 enabled accounts. If your account is using a feature that isn't yet supported, the upgrade will not pass the validation step. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to identify unsupported features. If you're using any of those unsupported features in your account, make sure to disable them before you begin the upgrade. 
+> [!div class="checklist"] +> - [Review feature support](#review-feature-support) +> - [Ensure the segments of each blob path are named](#ensure-the-segments-of-each-blob-path-are-named) - > [!NOTE] - > Blob soft delete is not yet supported by the upgrade process. Make sure to disable blob soft delete and then allow all soft-delete blobs to expire before you upgrade the account. +### Review feature support -2. Ensure that the segments of each blob path are named +Your storage account might be configured to use features that aren't yet supported in Data Lake Storage Gen2 enabled accounts. If your account is using such features, the upgrade will not pass the validation step. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to identify unsupported features. If you're using any such features in your account, disable them before you begin the upgrade. - The migration process creates a directory for each path segment of a blob. Data Lake Storage Gen2 directories must have a name so for migration to succeed, each path segment in a virtual directory must have a name. The same requirement is true for segments that are named only with a space character. If any path segments are either unnamed (`//`) or named only with a space character (`_`), then before you proceed with the migration, you must copy those blobs to a new path that is compatible with these naming requirements. +The following features are supported for Data Lake Storage Gen2 accounts, but are not supported by the upgrade process: ++- Blob snapshots +- Encryption scopes +- Immutable storage +- Soft delete for blobs +- Soft delete for containers ++If your storage account has such features enabled, you must disable them before performing the upgrade. If you want to resume using the features after the upgrade is complete, re-enable them. 
++In some cases, you will have to allow time for clean-up operations after a feature is disabled before upgrading. One example is the [blob soft delete](soft-delete-blob-overview.md) feature. You must disable blob soft delete and then allow all soft-delete blobs to expire before you can upgrade the account. ++> [!IMPORTANT] +> You cannot upgrade a storage account to Data Lake Storage Gen2 if the account has **ever** had the change feed feature enabled. +> Simply disabling change feed will not allow you to perform an upgrade. To convert such an account to Data Lake Storage Gen2, you must perform a manual migration. For migration options, see [Migrate a storage account](../common/storage-account-overview.md#migrate-a-storage-account). ++### Ensure the segments of each blob path are named ++The migration process creates a directory for each path segment of a blob. Data Lake Storage Gen2 directories must have a name, so for migration to succeed, each path segment in a virtual directory must have a name. The same requirement is true for segments that are named only with a space character. If any path segments are either unnamed (`//`) or named only with a space character (`_`), then before you proceed with the migration, you must copy those blobs to a new path that is compatible with these naming requirements. ## Perform the upgrade |
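The path-segment rule above is mechanical enough to check before you start a migration. A minimal client-side sketch in Python (an illustrative helper, not part of any Azure SDK; it applies only the unnamed/space-only rule described above):

```python
def has_unnamed_segments(blob_path: str) -> bool:
    """Return True if any path segment is empty ('//') or only whitespace.

    Mirrors the naming rule above: every segment of a blob path must have
    a non-blank name before upgrading to Data Lake Storage Gen2.
    """
    segments = blob_path.split("/")
    return any(seg.strip() == "" for seg in segments)

# Blobs whose paths fail this check must be copied to compliant paths first.
print(has_unnamed_segments("logs//2023/app.log"))   # unnamed segment -> True
print(has_unnamed_segments("logs/ /app.log"))       # space-only segment -> True
print(has_unnamed_segments("logs/2023/app.log"))    # compliant -> False
```

Running a check like this over a blob listing lets you find and copy the offending blobs before starting the upgrade, rather than discovering them mid-migration.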
storage | Files Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md | description: Learn about new features and enhancements in Azure Files and Azure Previously updated : 05/24/2023 Last updated : 07/17/2023 Azure Files is updated regularly to offer new features and enhancements. This ar ## What's new in 2023 +### 2023 quarter 3 (July, August, September) ++#### Azure Active Directory support for Azure Files REST API with OAuth authentication is generally available ++This feature enables share-level read and write access to SMB Azure file shares for users, groups, and managed identities when accessing file share data through the REST API. Cloud native and modern applications that use REST APIs can utilize identity-based authentication and authorization to access file shares. For more information, [read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-introducing-azure-ad-support-for-azure-files-smb/ba-p/3826733). + ### 2023 quarter 2 (April, May, June)+ #### Azure Files scalability improvement for Azure Virtual Desktop and other workloads that open root directory handles is generally available+ Azure Files has increased the root directory handle limit per share from 2,000 to 10,000 for standard and premium file shares. This improvement benefits applications that keep an open handle on the root directory. For example, Azure Virtual Desktop with FSLogix profile containers now supports 10,000 active users per share (5x improvement). Note: The number of active users supported per share is dependent on the applications that are accessing the share. If your applications are not opening a handle on the root directory, Azure Files can support more than 10,000 active users per share. The root directory handle limit has been increased in all regions and applies to all existing and new file shares. 
For more information about Azure Files scale targets, see: [Azure Files scalability and performance targets](storage-files-scale-targets.md). - #### Geo-redundant storage for large file shares is in public preview Azure Files geo-redundancy for large file shares preview significantly improves capacity and performance for standard SMB file shares when using geo-redundant storage (GRS) and geo-zone redundant storage (GZRS) options. The preview is only available for standard SMB Azure file shares. For more information, see [Azure Files geo-redundancy for large file shares preview](geo-redundant-storage-for-large-file-shares.md). |
storage | Storage Files Identity Auth Domain Services Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md | description: Learn how to enable identity-based authentication over Server Messa Previously updated : 05/03/2023 Last updated : 07/17/2023 recommendations: false The action requires running an operation on the Active Directory domain that's m > [!IMPORTANT] > The Windows Server Active Directory PowerShell cmdlets in this section must be run in Windows PowerShell 5.1 from a client machine that's domain-joined to the Azure AD DS domain. PowerShell 7.x and Azure Cloud Shell won't work in this scenario. -Log into the domain-joined client machine as an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions). Open a normal (non-elevated) PowerShell session and execute the following commands. +Log into the domain-joined client machine as an Azure AD DS user with the required permissions. You must have write access to the `msDS-SupportedEncryptionTypes` attribute of the domain object. Typically, members of the **AAD DC Administrators** group will have the necessary permissions. Open a normal (non-elevated) PowerShell session and execute the following commands. ```powershell # 1. Find the service account in your managed domain that represents the storage account. |
storage | Storage Files Identity Auth Hybrid Identities Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-identities-enable.md | description: Learn how to enable identity-based Kerberos authentication for hybr Previously updated : 06/30/2023 Last updated : 07/17/2023 recommendations: false Add an entry for each storage account that uses on-premises AD DS integration. U - Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/HostToRealm](/windows/client-management/mdm/policy-csp-admx-kerberos#hosttorealm) - Configure this group policy on the client(s): `Administrative Template\System\Kerberos\Define host name-to-Kerberos realm mappings`-- Configure the following registry value on the client(s): `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\domain_realm /v <DomainName> /d <StorageAccountEndPoint>`- - For example, `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\domain_realm /v contoso.local /d <your-storage-account-name>.file.core.windows.net` +- Run the `ksetup` Windows command on the client(s): `ksetup /addhosttorealmmap <hostname> <realmname>` + - For example, `ksetup /addhosttorealmmap <your storage account name>.file.core.windows.net contoso.local` -Changes are not instant, and require a policy refresh or a reboot to take effect. +Changes aren't instant, and require a policy refresh or a reboot to take effect. ## Disable Azure AD authentication on your storage account |
storage | Storage Files Quick Create Use Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md | Next, create an SMB Azure file share. ![Screenshot, + file share selected to create a new file share.](./media/storage-files-quick-create-use-windows/create-file-share.png) -1. Name the new file share *qsfileshare*, enter "1" for the **Quota**, leave **Transaction optimized** selected, and select **Create**. The quota can be a maximum of 5 TiB (100 TiB, with large file shares enabled), but you only need 1 GiB for this. +1. Name the new file share *qsfileshare* and leave **Transaction optimized** selected for **Tier**. +1. Select **Review + create** and then **Create** to create the file share. 1. Create a new txt file called *qsTestFile* on your local machine. 1. Select the new file share, then on the file share location, select **Upload**. |
storage | Storage How To Use Files Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md | To create an Azure file share: 1. On the menu at the top of the **File shares** page, select **+ File share**. The **New file share** page drops down. 1. In **Name**, type *myshare*. Leave **Transaction optimized** selected for **Tier**.-1. Select **Create** to create the Azure file share. +1. Select **Review + create** and then **Create** to create the Azure file share. File share names must be all lower-case letters, numbers, and single hyphens, and must begin and end with a lower-case letter or number. The name can't contain two consecutive hyphens. For details about naming file shares and files, see [Naming and Referencing Shares, Directories, Files, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Shares--Directories--Files--and-Metadata). |
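The share-naming rules in the paragraph above can be expressed as a single pattern. A minimal client-side check in Python (illustrative only; the service remains authoritative, and length limits are not enforced here):

```python
import re

# Rule from the paragraph above: lower-case letters, digits, and single
# hyphens; must begin and end with a letter or digit; no consecutive hyphens.
SHARE_NAME = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_share_name(name: str) -> bool:
    return bool(SHARE_NAME.fullmatch(name))

print(is_valid_share_name("myshare"))      # True
print(is_valid_share_name("my-share-01"))  # True
print(is_valid_share_name("My-Share"))     # False: upper-case letters
print(is_valid_share_name("my--share"))    # False: consecutive hyphens
print(is_valid_share_name("-myshare"))     # False: begins with a hyphen
```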
storage | Partner Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/container-solutions/partner-overview.md | This article highlights Microsoft partner solutions that enable automation, data | Partner | Description | Website/product link | | - | -- | -- | | ![Kasten company logo](./media/kasten-logo.png) |**Kasten**<br>Kasten by Veeam provides a solution for Kubernetes backup and disaster recovery. Kasten helps enterprises overcome Day 2 data management challenges to confidently run applications on Kubernetes.<br><br>The Kasten K10 data management software platform provides enterprise operations teams a scalable and secure system for BCDR and mobility of Kubernetes applications.|[Partner page](https://docs.kasten.io/latest/install/azure/azure.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/veeam.kasten_k10_by_veeam_byol?tab=Overview)|-| ![Portworx company logo](./media/portworx-logo.png) |**Portworx**<br>Portworx by Pure Storage is the Kubernetes Data Services Platform enterprises trust to run mission-critical applications in containers in production.<br><br>Portworx provides a fully integrated solution for persistent storage, data protection, disaster recovery, data security, cross-cloud and data migrations, and automated capacity management for applications running on Kubernetes.|[Partner page](https://portworx.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/portworx.portworx_enterprise?tab=overview)| +| ![Portworx company logo](./media/portworx-logo.png) |**Portworx**<br>Portworx by Pure Storage is the Kubernetes Data Services Platform enterprises trust to run mission-critical applications in containers in production.<br><br>Portworx provides a fully integrated solution for persistent storage, data protection, disaster recovery, data security, cross-cloud and data migrations, and automated capacity 
management for applications running on Kubernetes.|[Partner page](https://portworx.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.portworx-enterprise)| | ![Robin.io company logo](./media/robin-logo.png) |**Robin.io**<br>Robin.io provides an application and data management platform that enables enterprises and 5G service providers to deliver complex application pipelines as a service.<br><br>Robin Cloud Native Storage (CNS) brings advanced data management capabilities to Azure Kubernetes Service. Robin CNS seamlessly integrates with Azure Disk Storage to simplify management of stateful applications. Developers and DevOps teams can deploy Robin CNS as a standard Kubernetes operator on AKS. Robin Cloud Native Storage helps simplify data management operations such as BCDR and cloning of entire applications. |[Partner page](https://robin.io/robin-cloud-native-storage-for-microsoft-aks/)| | ![NetApp company logo](./media/astra-logo.jpg) |**NetApp**<br>NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation.<br><br>NetApp Astra Control Service is a fully managed service that makes it easier for customers to manage, protect, and move their data-rich containerized workloads running on Kubernetes within and across public clouds and on-premises. Astra Control provides persistent container storage with Azure NetApp Files offering advanced application-aware data management functionality (like snapshot-revert, backup-restore, activity log, and active cloning) for data protection, disaster recovery, data audit, and migration use-cases for your modern apps. 
|[Partner page](https://cloud.netapp.com/astra)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/netapp.netapp-astra-acs)| | ![Rackware company logo](./media/rackware-logo.png) |**Rackware**<br>RackWare provides an intelligent highly automated Hybrid Cloud Management Platform that extends across physical and virtual environments.<br><br>RackWare SWIFT is a converged disaster recovery, backup and migration solution for Kubernetes and OpenShift. It is a cross-platform, cross-cloud and cross-version solution that enables you to move and protect your stateful Kubernetes applications from any on-premises or cloud environment to Azure Kubernetes Service (AKS) and Azure Storage.|[Partner page](https://www.rackwareinc.com/rackware-swift-microsoft-azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=rackware%20swift&page=1&filters=virtual-machine-images)|-| ![Ondat company logo](./media/ondat-logo.png) |**Ondat**<br>Ondat, formerly StorageOS, provides an agnostic platform to run any data service anywhere, while ensuring industry-leading levels of application performance, availability and security.<br><br>Ondat cloud native storage solution delivers persistent container storage for your stateful applications in production. 
Fast, scalable, software-based block storage, Ondat delivers high availability, rapid application failover, replication, encryption of data in-transit & at-rest, data reduction with access controls and native Kubernetes integration.|[Partner page](https://www.ondat.io/datasheets/ondat-aks)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ondat1652283854761.ondat?tab=Overview)| +| ![Ondat company logo](./media/ondat-logo.png) |**Ondat**<br>Ondat, formerly StorageOS, provides an agnostic platform to run any data service anywhere, while ensuring industry-leading levels of application performance, availability and security.<br><br>Ondat cloud native storage solution delivers persistent container storage for your stateful applications in production. Fast, scalable, software-based block storage, Ondat delivers high availability, rapid application failover, replication, encryption of data in-transit & at-rest, data reduction with access controls and native Kubernetes integration.|[Partner page](https://www.ondat.io/datasheets/ondat-aks) | Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu). ## Next steps |
stream-analytics | Quick Create Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-visual-studio-code.md | Title: Quickstart - Create a Stream Analytics job using Visual Studio Code description: This quickstart shows you how to create a Stream Analytics job using the ASA extension for Visual Studio Code. -- Previously updated : 10/27/2022++ Last updated : 07/17/2023 #Customer intent: As an IT admin/developer, I want to create a Stream Analytics job, configure input and output, and analyze data by using Visual Studio Code. |
synapse-analytics | Distribution Advisor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/distribution-advisor.md | The Distribution Advisor (DA) feature of Azure Synapse SQL analyzes customer que - Ensure that statistics are available and up-to-date before running the advisor. See the [Manage table statistics](develop-tables-statistics.md), [CREATE STATISTICS](/sql/t-sql/statements/create-statistics-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [UPDATE STATISTICS](/sql/t-sql/statements/update-statistics-transact-sql?view=azure-sqldw-latest&preserve-view=true) articles for more details on statistics. +- Enable the Azure Synapse distribution advisor for the current session with the [SET RECOMMENDATIONS](/sql/t-sql/statements/set-recommendations-sql?view=azure-sqldw-latest&preserve-view=true) T-SQL command. + ## Analyze workload and generate distribution recommendations The following tutorial explains the sample use case for using the Distribution Advisor feature to analyze customer queries and recommend the best distribution strategies. For feature requests, use the [Azure Synapse Analytics Feedback](https://feedbac ## Next steps +- [SET RECOMMENDATIONS (Transact-SQL)](/sql/t-sql/statements/set-recommendations-sql?view=azure-sqldw-latest&preserve-view=true) - [Loading data to dedicated SQL pool](../sql-data-warehouse/load-data-wideworldimportersdw.md) - [Data loading strategies for dedicated SQL pool in Azure Synapse Analytics](../sql-data-warehouse/design-elt-data-loading.md). - [Dedicated SQL pool (formerly SQL DW) architecture in Azure Synapse Analytics](../sql-data-warehouse/massively-parallel-processing-mpp-architecture.md) |
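Per the new prerequisite bullet, the advisor is switched on per session before you run your workload. A sketch of that session flow (the `SET RECOMMENDATIONS ON`/`OFF` syntax follows the linked T-SQL reference; the sample query is a placeholder):

```sql
-- Enable the distribution advisor for the current session.
SET RECOMMENDATIONS ON;

-- Run representative workload queries; the advisor analyzes them
-- and records distribution recommendations for review.
SELECT COUNT(*) FROM dbo.FactSales;  -- placeholder workload query

-- Turn the advisor back off when the workload run is finished.
SET RECOMMENDATIONS OFF;
```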
virtual-desktop | Autoscale Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-diagnostics.md | Title: Set up diagnostics for autoscale in Azure Virtual Desktop description: How to set up diagnostic reports for the scaling service in your Azure Virtual Desktop deployment. Previously updated : 04/29/2022 Last updated : 07/18/2023 # Set up diagnostics for autoscale in Azure Virtual Desktop +> [!IMPORTANT] +> Autoscale for personal host pools is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. + Diagnostics lets you monitor potential issues and fix them before they interfere with your autoscale scaling plan. -Currently, you can either send diagnostic logs for autoscale to an Azure Storage account or consume logs with the Events hub. If you're using an Azure Storage account, make sure it's in the same region as your scaling plan. Learn more about diagnostic settings at [Create diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md). For more information about resource log data ingestion time, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md). +Currently, you can either send diagnostic logs for autoscale to an Azure Storage account or consume logs with Microsoft Azure Event Hubs. If you're using an Azure Storage account, make sure it's in the same region as your scaling plan. Learn more about diagnostic settings at [Create diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md). For more information about resource log data ingestion time, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md). 
## Enable diagnostics for scaling plans -To enable diagnostics for your scaling plan: +#### [Pooled host pools](#tab/pooled-autoscale) ++To enable diagnostics for your scaling plan for pooled host pools: ++1. Open the [Azure portal](https://portal.azure.com). ++1. In the search bar, enter **Azure Virtual Desktop**, then select the service from the drop-down menu. ++1. Select **Scaling plans**, then select the scaling plan you'd like the report to track. ++1. Go to **Diagnostic Settings** and select **Add diagnostic setting**. ++1. Enter a name for the diagnostic setting. ++1. Next, select **Autoscale logs for pooled host pools** and choose either **storage account** or **event hub** depending on where you want to send the report. ++1. Select **Save**. ++#### [Personal host pools](#tab/personal-autoscale) ++To enable diagnostics for your scaling plan for personal host pools: 1. Open the [Azure portal](https://portal.azure.com). 1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry. -2. Select **Scaling plans**, then select the scaling plan you'd like the report to track. +1. Select **Scaling plans**, then select the scaling plan you'd like the report to track. -3. Go to **Diagnostic Settings** and select **Add diagnostic setting**. +1. Go to **Diagnostic Settings** and select **Add diagnostic setting**. -4. Enter a name for the diagnostic setting. +1. Enter a name for the diagnostic setting. -5. Next, select **Autoscale** and choose either **storage account** or **event hub** depending on where you want to send the report. +1. Next, select **Autoscale logs for personal host pools** and choose either **storage account** or **event hub** depending on where you want to send the report. -6. Select **Save**. +1. Select **Save**. -## Set log location in Azure Storage +## Find autoscale diagnostic logs in Azure Storage After you've configured your diagnostic settings, you can find the logs by following these instructions: -1. 
In the Azure portal, go to the storage group you sent the diagnostic logs to. +1. In the Azure portal, go to the storage account you sent the diagnostic logs to. -2. Select **Containers**. A folder called **insight-logs-autoscaling** should open. +1. Select **Containers** and open the folder called **insight-logs-autoscaling**. -3. Select the **insight-logs-autoscaling folder** and open the log you want to review. Open folders within that folder until you see the JSON file, then select all items in that folder, right-click, and download them to your local computer. +1. Within the **insight-logs-autoscaling** folder select the subscription, resource group, scaling plan, and date until you see the JSON file. Select the JSON file and download it to your local computer. -4. Finally, open the JSON file in the text editor of your choice. +1. Finally, open the JSON file in the text editor of your choice. ## View diagnostic logs The following JSON file is an example of what you'll see when you open a report: - [Assign your scaling plan to new or existing host pools](autoscale-new-existing-host-pool.md). - Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md). - For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md).-- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.+- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions. |
virtual-desktop | Autoscale Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-glossary.md | Title: Azure Virtual Desktop autoscale glossary for Azure Virtual Desktop - Azur description: A glossary of terms and concepts for the Azure Virtual Desktop autoscale feature. Previously updated : 08/03/2022 Last updated : 07/18/2023 # Autoscale glossary for Azure Virtual Desktop +> [!IMPORTANT] +> Autoscale for personal host pools is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. + This article is a list of definitions for key terms and concepts related to the autoscale feature for Azure Virtual Desktop. ## Autoscale -Autoscale is Azure Virtual Desktop's native scaling service that turns VMs on and off based on the number of sessions on the session hosts in the host pool and which phase of the [scaling plan](#scaling-plan) [schedule](#schedule) the workday is in. +Autoscale is Azure Virtual Desktop's native scaling service that turns VMs on and off based on the capacity of the host pools and the [scaling plan](#scaling-plan) [schedule](#schedule) you define. ## Scaling tool Azure Virtual Desktop's scaling tool uses Azure Automation and Azure Logic App ## Scaling plan -A scaling plan is an Azure Virtual Desktop Azure Resource Manager object that defines the schedules for scaling session hosts in a host pool. You can assign one scaling plan to multiple host pools. Each host pool can only have one scaling plan assigned to it. +A scaling plan is an Azure Virtual Desktop Azure Resource Manager object that defines the schedules for scaling session hosts in a host pool. You can assign one scaling plan to multiple host pools. 
Each scaling plan can only be assigned to either personal or pooled host pools, but not both types at the same time. ## Schedule -Schedules are sub-resources of [scaling plans](#scaling-plan) that specify the start time, capacity threshold, minimum percentage of hosts, load-balancing algorithm, and other configuration settings for the different phases of the day. +Schedules are sub-resources of [scaling plans](#scaling-plan). Scaling plans for pooled host pools have schedules that specify the start time, capacity threshold, minimum percentage of hosts, load-balancing algorithm, and other configuration settings for the different phases of the day. Scaling plans for personal host pools have schedules that specify the start time and what operation to perform based on user session state (signed out or disconnected) for the different phases of the day. ## Ramp-up The ramp-down phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is ## Off-peak -The off-peak phase of the [scaling plan](#scaling-plan) [schedule](#schedule) is when the host pool usually reaches the minimum number of [active user sessions](#active-user-session) for the day. During this phase, there aren't usually many active users, but you can keep a small amount of resources on to accommodate users who work after the peak and ramp-down phases. +The off-peak phase of the [scaling plan](#scaling-plan) [schedule](#schedule) is when the host pool usually reaches the minimum number of [active user sessions](#active-user-session) for the day. During this phase, there aren't usually many active users, but you may keep a small amount of resources on to accommodate users who work after the peak and ramp-down phases. ## Available session host -Available session hosts are session hosts that have passed all Azure Virtual Desktop agent health checks and have VM objects that are powered on, making them available for users to start their user sessions on. 
+Available session hosts are session hosts that have passed all Azure Virtual Desktop agent health checks and have VM objects that are powered on, making them available for users to establish user sessions on. ## Capacity threshold The number of [active](#active-user-session) and [disconnected user sessions](#d Scaling actions are when [autoscale](#autoscale) turns VMs on or off. +## Shut down ++Autoscale for pooled and personal host pools shuts down VMs based on the defined schedule. When autoscale shuts down a VM, it deallocates and stops the VM, ensuring you aren't charged for the compute resources. + ## Minimum percentage of hosts The minimum percentage of hosts is the lowest percentage of all session hosts in the host pool that must be turned on for each phase of the [scaling plan](#scaling-plan) [schedule](#schedule). An exclusion tag is a property of a [scaling plan](#scaling-plan) that's a tag n - For more information about autoscale, see the [autoscale feature document](autoscale-scaling-plan.md). - For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md).-- For more information about the scaling script, see the [scaling script document](set-up-scaling-script.md).+- For more information about the scaling script, see the [scaling script document](set-up-scaling-script.md). |
virtual-desktop | Autoscale New Existing Host Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-new-existing-host-pool.md | Title: Azure Virtual Desktop scaling plans for host pools in Azure Virtual Deskt description: How to assign scaling plans to new or existing host pools in your deployment. Previously updated : 08/03/2022 Last updated : 07/18/2023 # Assign scaling plans to host pools in Azure Virtual Desktop -You can assign a scaling plan for any existing host pools in your deployment. When you apply a scaling plan to your host pool, the plan will apply to all session hosts within that host pool. The scaling plan also automatically applies to any new session hosts you create in your assigned host pool. +> [!IMPORTANT] +> Autoscale for personal host pools is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -If you disable a scaling plan, all assigned resources will remain in the scaling state they were in at the time you disabled it. +You can assign a scaling plan to any existing host pools in your deployment. When you assign a scaling plan to your host pool, the plan will apply to all session hosts within that host pool. The scaling plan also automatically applies to any new session hosts you create in the assigned host pool. ++If you disable a scaling plan, all assigned resources will remain in the state they were in at the time you disabled it. ## Assign a scaling plan to a single existing host pool To assign a scaling plan to an existing host pool: 1. Select **Host pools**, and select the host pool you want to assign the scaling plan to. -1. Under the **Settings** heading, select **Scaling plan**, and then select **+ Assign**. 
Select the scaling plan you want to assign and select **Assign**. The scaling plan must be in the same Azure region as the host pool. +1. Under the **Settings** heading, select **Scaling plan**, and then select **+ Assign**. Select the scaling plan you want to assign and select **Assign**. The scaling plan must be in the same Azure region as the host pool and the scaling plan's host pool type must match the type of host pool that you're trying to assign it to. > [!TIP] > If you've enabled the scaling plan during deployment, then you'll also have the option to disable the plan for the selected host pool in the **Scaling plan** menu by unselecting the **Enable autoscale** checkbox, as shown in the following screenshot. To assign a scaling plan to an existing host pool: ## Assign a scaling plan to multiple existing host pools -To assign a scaling plan multiple existing host pool at the same time: +To assign a scaling plan to multiple existing host pools at the same time: 1. Open the [Azure portal](https://portal.azure.com). To assign a scaling plan multiple existing host pool at the same time: 1. Select **Scaling plans**, and select the scaling plan you want to assign to host pools. -1. Under the **Manage** heading, select **Host pool assignments**, and then select **+ Assign**. Select the host pools you want to assign the scaling plan to and select **Assign**. The host pools must be in the same Azure region as the scaling plan. +1. Under the **Manage** heading, select **Host pool assignments**, and then select **+ Assign**. Select the host pools you want to assign the scaling plan to and select **Assign**. The host pools must be in the same Azure region as the scaling plan and the scaling plan's host pool type must match the type of host pools you're trying to assign it to. 
## Next steps To assign a scaling plan multiple existing host pool at the same time: - Learn how to troubleshoot your scaling plan at [Enable diagnostics for your scaling plan](autoscale-diagnostics.md). - Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md). - For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md).-- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.+- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions. |
virtual-desktop | Autoscale Scaling Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md | Title: Create an autoscale scaling plan for Azure Virtual Desktop description: How to create an autoscale scaling plan to optimize deployment costs. Previously updated : 02/03/2023 Last updated : 07/18/2023 # Create an autoscale scaling plan for Azure Virtual Desktop -Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down to optimize deployment costs. You can create a scaling plan based on: +> [!IMPORTANT] +> Autoscale for personal host pools is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -- Time of day-- Specific days of the week-- Session limits per session host+Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down according to a schedule to optimize deployment costs. To learn more about autoscale, see [Autoscale scaling plans and example scenarios in Azure Virtual Desktop](autoscale-scenarios.md). To learn more about autoscale, see [Autoscale scaling plans and example scenario > - Autoscale doesn't support scaling of generalized or sysprepped VMs with machine-specific information removed. For more information, see [Remove machine-specific information by generalizing a VM before creating an image](../virtual-machines/generalize.md). > - You can't use autoscale and [scale session hosts using Azure Automation and Azure Logic Apps](scaling-automation-logic-apps.md) on the same host pool. You must use one or the other. > - Autoscale is available in Azure and Azure Government.+> - You can currently only configure personal autoscale with the Azure portal. 
For best results, we recommend using autoscale with VMs you deployed with Azure Virtual Desktop Azure Resource Manager templates or first-party tools from Microsoft. For best results, we recommend using autoscale with VMs you deployed with Azure To use scaling plans, make sure you follow these guidelines: -- You can currently only configure autoscale with existing pooled host pools. - You must create the scaling plan in the same Azure region as the host pool you assign it to. You can't assign a scaling plan in one Azure region to a host pool in another Azure region.-- All host pools you use with autoscale must have a configured *MaxSessionLimit* parameter. Don't use the default value. You can configure this value in the host pool settings in the Azure portal or run the [New-AzWvdHostPool](/powershell/module/az.desktopvirtualization/new-azwvdhostpool) or [Update-AzWvdHostPool](/powershell/module/az.desktopvirtualization/update-azwvdhostpool) PowerShell cmdlets.+- When using autoscale for pooled host pools, you must have a configured *MaxSessionLimit* parameter for that host pool. Don't use the default value. You can configure this value in the host pool settings in the Azure portal or run the [New-AzWvdHostPool](/powershell/module/az.desktopvirtualization/new-azwvdhostpool) or [Update-AzWvdHostPool](/powershell/module/az.desktopvirtualization/update-azwvdhostpool) PowerShell cmdlets. - You must grant Azure Virtual Desktop access to manage the power state of your session host VMs. You must have the `Microsoft.Authorization/roleAssignments/write` permission on your subscriptions in order to assign the role-based access control (RBAC) role for the Azure Virtual Desktop service principal on those subscriptions. This is part of **User Access Administrator** and **Owner** built-in roles. 
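The *MaxSessionLimit* guideline above can be applied with the `Update-AzWvdHostPool` cmdlet the text links to. A minimal sketch, assuming the Az.DesktopVirtualization module is installed and using placeholder resource group and host pool names:

```powershell
# Sketch: set an explicit MaxSessionLimit on a pooled host pool before enabling
# autoscale, rather than relying on the default value. "rg-avd" and "hp-pooled"
# are placeholder names.
Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-pooled" -MaxSessionLimit 10
```

The same value can also be set in the host pool settings in the Azure portal, as noted in the guidelines.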
## Assign the Desktop Virtualization Power On Off Contributor role with the Azure portal Before creating your first scaling plan, you'll need to assign the *Desktop Virtualization Power On Off Contributor* RBAC role with your Azure subscription as the assignable scope. Assigning this role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent autoscale from working properly. You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with autoscale. This role and assignment will allow Azure Virtual Desktop to manage the power state of any VMs in those subscriptions. It will also let the service apply actions on both host pools and VMs when there are no active user sessions. -To learn how to assign the *Desktop Virtualization Power On Off Contributor* role to the Azure Virtual Desktop service principal, see [Assign RBAC roles to the Azure Virtual Desktop service principal](service-principal-assign-roles.md). +To assign the *Desktop Virtualization Power On Off Contributor* role with the Azure portal to the Azure Virtual Desktop service principal on the subscription your host pool is deployed to: ++1. Sign in to the Azure portal and go to **Subscriptions**. Select a subscription that contains a host pool and session host VMs you want to use with autoscale. ++1. Select **Access control (IAM)**. ++1. Select the **+ Add** button, then select **Add role assignment** from the drop-down menu. ++1. Select the **Desktop Virtualization Power On Off Contributor** role and select **Next**. ++1. On the **Members** tab, select **User, group, or service principal**, then select **+Select members**. In the search bar, enter and select either **Azure Virtual Desktop** or **Windows Virtual Desktop**. Which value you have depends on when the *Microsoft.DesktopVirtualization* resource provider was first registered in your Azure tenant. 
If you see two entries titled *Windows Virtual Desktop*, please see the tip below. ++1. Select **Review + assign** to complete the assignment. Repeat this for any other subscriptions that contain host pools and session host VMs you want to use with autoscale. ++> [!TIP] +> The application ID for the service principal is **9cdead84-a844-4324-93f2-b2e6bb768d07**. +> +> If you have an Azure Virtual Desktop (classic) deployment and an Azure Virtual Desktop (Azure Resource Manager) deployment where the *Microsoft.DesktopVirtualization* resource provider was registered before the display name changed, you will see two apps with the same name of *Windows Virtual Desktop*. To add the role assignment to the correct service principal, [you can use PowerShell](../role-based-access-control/role-assignments-powershell.md) which enables you to specify the application ID: +> +> To assign the *Desktop Virtualization Power On Off Contributor* role with PowerShell to the Azure Virtual Desktop service principal on the subscription your host pool is deployed to: +> +> 1. Open [Azure Cloud Shell](../cloud-shell/overview.md) with PowerShell as the shell type. +> +> 1. Get the object ID for the service principal (which is unique in each Azure tenant) and store it in a variable: +> +> ```powershell +> $objId = (Get-AzADServicePrincipal -AppId "9cdead84-a844-4324-93f2-b2e6bb768d07").Id +> ``` +> +> 1. Find the name of the subscription you want to add the role assignment to by listing all that are available to you: +> +> ```powershell +> Get-AzSubscription +> ``` +> +> 1. Get the subscription ID and store it in a variable, replacing the value for `-SubscriptionName` with the name of the subscription from the previous step: +> +> ```powershell +> $subId = (Get-AzSubscription -SubscriptionName "Microsoft Azure Enterprise").Id +> ``` +> +> 1. 
Add the role assignment: +> +> ```powershell +> New-AzRoleAssignment -RoleDefinitionName "Desktop Virtualization Power On Off Contributor" -ObjectId $objId -Scope /subscriptions/$subId +> ``` ## Create a scaling plan Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r 1. For **Time zone**, select the time zone you'll use with your plan. +1. For **Host pool type**, select the type of host pool that you want your scaling plan to apply to. + 1. In **Exclusion tags**, enter a tag name for VMs you don't want to include in scaling operations. For example, you might want to tag VMs that are set to drain mode so that autoscale doesn't override drain mode during maintenance using the exclusion tag "excludeFromScaling". If you've set "excludeFromScaling" as the tag name field on any of the VMs in the host pool, autoscale won't start, stop, or change the drain mode of those particular VMs. >[!NOTE] Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r ## Configure a schedule -Schedules let you define when autoscale activates ramp-up and ramp-down modes throughout the day. In each phase of the schedule, autoscale only turns off VMs when in doing so the used host pool capacity won't exceed the capacity threshold. The default values you'll see when you try to create a schedule are the suggested values for weekdays, but you can change them as needed. +Schedules let you define when autoscale turns VMs on and off throughout the day. The schedule parameters are different based on the **Host pool type** you chose for the scaling plan. ++#### [Pooled host pools](#tab/pooled-autoscale) ++In each phase of the schedule, autoscale only turns off VMs when doing so won't cause the used host pool capacity to exceed the capacity threshold. The default values you'll see when you try to create a schedule are the suggested values for weekdays, but you can change them as needed. 
To create or change a schedule: To create or change a schedule: - Load-balancing algorithm. We recommend choosing **depth-first** to gradually reduce the number of session hosts based on sessions on each VM. - Just like peak hours, you can't configure the capacity threshold here. Instead, the value you entered in **Ramp-down** will carry over. +#### [Personal host pools](#tab/personal-autoscale) ++In each phase of the schedule, define whether VMs should be deallocated based on the user session state. ++To create or change a schedule: ++1. In the **Schedules** tab, select **Add schedule**. ++1. Enter a name for your schedule into the **Schedule name** field. ++1. In the **Repeat on** field, select which days your schedule will repeat on. ++1. In the **Ramp up** tab, fill out the following fields: ++ - For **Start time**, select the time you want the ramp-up phase to start from the drop-down menu. + + - For **Start VM on Connect**, select whether you want Start VM on Connect to be enabled during ramp up. ++ - For **VMs to start**, select whether you want only personal desktops that have a user assigned to them at the start time to be started, you want all personal desktops in the host pool (regardless of user assignment) to be started, or you want no personal desktops in the pool to be started. ++ > [!NOTE] + > We highly recommend that you enable Start VM on Connect if you choose not to start your VMs during the ramp-up phase. ++ - For **When disconnected for**, specify the number of minutes a user session has to be disconnected before performing a specific action. This number can be anywhere between 0 and 360. ++ - For **Perform**, specify what action the service should take after a user session has been disconnected for the specified time. The options are to either deallocate (shut down) the VMs or do nothing. + + - For **When logged off for**, specify the number of minutes a user session has to be logged off before performing a specific action. 
This number can be anywhere between 0 and 360. ++ - For **Perform**, specify what action the service should take after a user session has been logged off for the specified time. The options are to either deallocate (shut down) the VMs or do nothing. ++1. In the **Peak hours**, **Ramp-down**, and **Off-peak hours** tabs, fill out the following fields: ++ - For **Start time**, enter a start time for each phase. This time is also the end time for the previous phase. + + - For **Start VM on Connect**, select whether you want Start VM on Connect to be enabled during that phase. ++ - For **When disconnected for**, specify the number of minutes a user session has to be disconnected before performing a specific action. This number can be anywhere between 0 and 360. ++ - For **Perform**, specify what action should be performed after a user session has been disconnected for the specified time. The options are to either deallocate (shut down) the VMs or do nothing. + + - For **When logged off for**, specify the number of minutes a user session has to be logged off before performing a specific action. This number can be anywhere between 0 and 360. ++ - For **Perform**, specify what action should be performed after a user session has been logged off for the specified time. The options are to either deallocate (shut down) the VMs or do nothing. ++ ## Assign host pools -Now that you've set up your scaling plan, it's time to assign the plan to your host pools. Select the check box next to each host pool you want to include. If you don't want to enable autoscale, unselect all check boxes. You can always return to this setting later and change it. +Now that you've set up your scaling plan, it's time to assign the plan to your host pools. Select the check box next to each host pool you want to include. If you don't want to enable autoscale, unselect all check boxes. You can always return to this setting later and change it. 
You can only assign the scaling plan to host pools that match the host pool type specified in the plan. > [!NOTE]-> When you create or update a scaling plan that's already assigned to host pools, its changes will be applied immediately. +> - When you create or update a scaling plan that's already assigned to host pools, its changes will be applied immediately. +> +> - While autoscale for personal host pools is in preview, we don't recommend assigning a scaling plan to a personal host pool with more than 2000 session hosts. ## Add tags To edit an existing scaling plan: 1. Select **Scaling plans**, then select the name of the scaling plan you want to edit. The overview blade of the scaling plan should open. -1. To change the scaling plan host pool assignments, under the **Manage** heading select **Host pool assignments** . +1. To change the scaling plan host pool assignments, under the **Manage** heading select **Host pool assignments**. 1. To edit schedules, under the **Manage** heading, select **Schedules**. Now that you've created your scaling plan, here are some things you can do: - [Assign your scaling plan to new and existing host pools](autoscale-new-existing-host-pool.md) - [Enable diagnostics for your scaling plan](autoscale-diagnostics.md) -If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [Autoscale FAQ](autoscale-faq.yml) if you have other questions. +If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [Autoscale FAQ](autoscale-faq.yml) if you have other questions. |
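The exclusion tag described in the scaling plan steps has to actually be present on the session host VMs for autoscale to skip them. A hedged sketch, assuming the Az.Compute and Az.Resources modules and placeholder VM and resource group names, using the example tag name from the article:

```powershell
# Sketch: merge the "excludeFromScaling" exclusion tag onto a session host VM so
# autoscale won't start, stop, or change drain mode on it during maintenance.
# "rg-avd" and "avd-sh-0" are placeholder names; the tag value is arbitrary.
$vm = Get-AzVM -ResourceGroupName "rg-avd" -Name "avd-sh-0"
Update-AzTag -ResourceId $vm.Id -Tag @{ excludeFromScaling = "true" } -Operation Merge
```

Removing the tag again (with `-Operation Delete`) returns the VM to normal scaling operations.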
virtual-desktop | Autoscale Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scenarios.md | Title: Autoscale scaling plans and example scenarios in Azure Virtual Desktop description: Information about autoscale and a collection of four example scenarios that illustrate how various parts of autoscale for Azure Virtual Desktop work. Previously updated : 02/09/2023 Last updated : 07/18/2023 # Autoscale scaling plans and example scenarios in Azure Virtual Desktop -Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down to optimize deployment costs. You create a scaling plan that can be based on: +> [!IMPORTANT] +> Autoscale for personal host pools is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -- Time of day-- Specific days of the week-- Session limits per session host+Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down according to a schedule to optimize deployment costs. > [!NOTE] > - Azure Virtual Desktop (classic) doesn't support autoscale. For best results, we recommend using autoscale with VMs you deployed with Azure Before you create your plan, keep the following things in mind: -- You can assign one scaling plan to one or more host pools of the same host pool type. The scaling plan's schedule will also be applied across all assigned host pools.+- You can assign one scaling plan to one or more host pools of the same host pool type. The scaling plan's schedules will be applied to all assigned host pools. - You can only associate one scaling plan per host pool. If you assign a single scaling plan to multiple host pools, those host pools can't be assigned to another scaling plan. 
Also, keep these limitations in mind: - Don't use autoscale in combination with other Microsoft or third-party scaling tools. Ensure that you disable those for the host pools you apply the scaling plans to. -- Autoscale overwrites [drain mode](drain-mode.md), so make sure to use exclusion tags when updating VMs in host pools.+- For pooled host pools, autoscale overwrites [drain mode](drain-mode.md), so make sure to use exclusion tags when updating VMs in host pools. -- Autoscale ignores existing [load-balancing algorithms](host-pool-load-balancing.md) in your host pool settings, and instead applies load balancing based on your schedule configuration.+- For pooled host pools, autoscale ignores existing [load-balancing algorithms](host-pool-load-balancing.md) in your host pool settings, and instead applies load balancing based on your schedule configuration. -## Example scenarios +## Example scenarios for autoscale for pooled host pools -In this section, there are four scenarios that show how different parts of autoscale works. In each example, there are tables that show the host pool's settings and animated visual demonstrations. +In this section, there are four scenarios that show how different parts of autoscale for pooled host pools work. In each example, there are tables that show the host pool's settings and animated visual demonstrations. >[!NOTE] >To learn more about what the parameter terms mean, see [our autoscale glossary](autoscale-glossary.md). The following animation is a visual recap of what we just went over in Scenario - To learn how to create scaling plans for autoscale, see [Create autoscale scaling for Azure Virtual Desktop host pools](autoscale-scaling-plan.md). 
- To review terms associated with autoscale, see [the autoscale glossary](autoscale-glossary.md).-- For answers to commonly asked questions about autoscale, see [the autoscale FAQ](autoscale-faq.yml).+- For answers to commonly asked questions about autoscale, see [the autoscale FAQ](autoscale-faq.yml). |
virtual-desktop | Private Link Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md | Title: Azure Private Link with Azure Virtual Desktop - Azure description: Learn about using Private Link with Azure Virtual Desktop to privately connect to your remote resources. Previously updated : 07/10/2023 Last updated : 07/17/2023 When adding Private Link with Azure Virtual Desktop, you have the following opti Private Link with Azure Virtual Desktop has the following limitations: -- You need to [enable the feature](private-link-setup.md#enable-the-feature) on each Azure subscription you want to Private Link with Azure Virtual Desktop.--- You can't use the [manual connection approval method](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow) when using Private Link with Azure Virtual Desktop. We're aware of this issue and are working on fixing it.+- Before you use Private Link for Azure Virtual Desktop, you need to [enable the feature](private-link-setup.md#enable-the-feature) on each Azure subscription you want to use with Private Link for Azure Virtual Desktop. - All [Remote Desktop clients to connect to Azure Virtual Desktop](users/remote-desktop-clients-overview.md) can be used with Private Link, but we currently only offer troubleshooting support for the web client with Private Link. -- After you've changed a private endpoint to a host pool, you must restart the *Remote Desktop Agent Loader* (*RDAgentBootLoader*) service on each session host in the host pool. 
You also need to restart this service whenever you change a host pool's network configuration. Instead of restarting the service, you can restart each session host. -- Service tags are used by the Azure Virtual Desktop service for agent monitoring traffic. These tags are created automatically.+- Using both Private Link and [RDP Shortpath](./shortpath.md) at the same time isn't currently supported. -- Using both Private Link and [RDP Shortpath](./shortpath.md) at the same time isn't supported.+- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You'll need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0. ## Next steps |
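The service restart described in the limitations above can be done without rebooting the whole session host. A minimal sketch, run in an elevated PowerShell session on the session host itself, using the service name given in the article:

```powershell
# Sketch: restart the Remote Desktop Agent Loader service on a session host
# after changing the host pool's private endpoint or network configuration.
# Requires an elevated (administrator) session on the session host.
Restart-Service -Name RDAgentBootLoader
```

Repeat this on each session host in the host pool, or restart the session hosts themselves, as the limitation notes.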
virtual-desktop | Private Link Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md | Title: Set up Private Link with Azure Virtual Desktop - Azure description: Learn how to set up Private Link with Azure Virtual Desktop to privately connect to your remote resources. Previously updated : 07/13/2023 Last updated : 07/17/2023 In order to use Private Link with Azure Virtual Desktop, you need the following - If you want to use Azure CLI or Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension or the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md). +- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You'll need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0. + ## Enable the feature To use Private Link with Azure Virtual Desktop, first you need to re-register the *Microsoft.DesktopVirtualization* resource provider and register the *Azure Virtual Desktop Private Link* feature on your Azure subscription. With Azure Virtual Desktop, you can independently control public traffic for wor # [Azure PowerShell](#tab/powershell-2) > [!IMPORTANT]-> You need to use the preview version of the Az.DesktopVirtualization module to run the following commands. For more information and to download and install the preview module, see [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview). 
+> Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You'll need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0. #### Workspaces With Azure Virtual Desktop, you can independently control public traffic for wor > [!IMPORTANT]-> Changing access for session hosts won't affect existing sessions. You must restart the session host virtual machines for the change to take effect. +> Changing access for session hosts won't affect existing sessions. After you've changed a private endpoint to a host pool, you must restart the *Remote Desktop Agent Loader* (*RDAgentBootLoader*) service on each session host in the host pool. You also need to restart this service whenever you change a host pool's network configuration. Instead of restarting the service, you can restart each session host. ### Block public routes with network security groups or Azure Firewall |
virtual-desktop | Troubleshoot Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md | To resolve this issue, create a valid registration token: On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 with **INVALID_FORM** in the description, the agent can't connect to the broker or reach a particular endpoint. This issue may be because of certain firewall or DNS settings. -To resolve this issue, check that you can reach the two endpoints referred to as *BrokerURI* and *BrokerURIGlobal*: +To resolve this issue, check that you can reach the two endpoints referred to as *BrokerResourceIdURI* and *BrokerResourceIdURIGlobal*: 1. Open Registry Editor. 1. Go to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\RDInfraAgent**. -1. Make note of the values for **BrokerURI** and **BrokerURIGlobal**. +1. Make note of the values for **BrokerResourceIdURI** and **BrokerResourceIdURIGlobal**. - > [!div class="mx-imgBorder"] - > ![Screenshot of broker uri and broker uri global](media/broker-uri.png) --1. Open a web browser and enter your value for *BrokerURI* in the address bar and add */api/health* to the end, for example `https://rdbroker-g-us-r0.wvd.microsoft.com/api/health`. +1. Open a web browser and enter your value for **BrokerResourceIdURI** in the address bar and add **/api/health** to the end, for example `https://rdbroker-g-us-r0.wvd.microsoft.com/api/health`. -1. Open another tab in the browser and enter your value for *BrokerURIGlobal* in the address bar and add */api/health* to the end, for example `https://rdbroker.wvd.microsoft.com/api/health`. +1. Open another tab in the browser and enter your value for **BrokerResourceIdURIGlobal** in the address bar and add **/api/health** to the end, for example `https://rdbroker.wvd.microsoft.com/api/health`. 1. 
If your network isn't blocking the connection to the broker, both pages should load successfully and show a message stating **RD Broker is Healthy**, as shown in the following screenshots: |
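The health-check URLs in the steps above are built by appending `/api/health` to each registry value. A minimal sketch of that construction, using the example *BrokerResourceIdURI* value from the steps (the `curl` call is shown only as a comment, since it needs network access from the session host):

```shell
# Build the health-check URL from a broker URI (example value from the steps above).
broker_uri="https://rdbroker-g-us-r0.wvd.microsoft.com"
health_url="${broker_uri%/}/api/health"   # strip any trailing slash, then append the path
echo "$health_url"

# On the session host, with network access, you could then verify reachability with:
#   curl -fsS "$health_url"
# and look for the message "RD Broker is Healthy" in the response.
```

The same construction applies to the *BrokerResourceIdURIGlobal* value.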
virtual-machines | Automatic Vm Guest Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md | VMs on Azure now support the following patch orchestration modes: > [!NOTE] >For Windows VMs, the property `osProfile.windowsConfiguration.enableAutomaticUpdates` can only be set when the VM is first created. This impacts certain patch mode transitions. Switching between AutomaticByPlatform and Manual modes is supported on VMs that have `osProfile.windowsConfiguration.enableAutomaticUpdates=false`. Similarly switching between AutomaticByPlatform and AutomaticByOS modes is supported on VMs that have `osProfile.windowsConfiguration.enableAutomaticUpdates=true`. Switching between AutomaticByOS and Manual modes is not supported.->Azure recommends that [Assessment Mode](https://learn.microsoft.com/rest/api/compute/virtual-machines/assess-patches) be enabled on a VM even if Azure Orchestration is not enabled for patching. This will allow the platform to assess the VM every 24 hours for any pending updates, and save the details in [Azure Resource Graph](https://learn.microsoft.com/azure/update-center/query-logs). (preview) +>Azure recommends that [Assessment Mode](https://learn.microsoft.com/rest/api/compute/virtual-machines/assess-patches) be enabled on a VM even if Azure Orchestration is not enabled for patching. This will allow the platform to assess the VM every 24 hours for any pending updates, and save the details in [Azure Resource Graph](https://learn.microsoft.com/azure/update-center/query-logs). (preview). The platform performs assessment to report consolidated results when the machine's desired patch configuration state is applied or confirmed. This will be reported as a 'Platform'-initiated assessment. ## Requirements for enabling automatic VM guest patching |
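An on-demand patch assessment like the one described above can also be triggered with the Azure CLI equivalent of the Assess Patches REST operation. A sketch as a dry run (the resource group and VM names are placeholders); remove the `echo` indirection to execute it against a real VM:

```shell
resource_group="myResourceGroup"   # placeholder: your resource group
vm_name="myVM"                     # placeholder: your VM name

# Dry run: build and print the command rather than executing it.
cmd="az vm assess-patches --resource-group $resource_group --name $vm_name"
echo "$cmd"
```

On a real VM this returns the assessment result, including counts of pending critical and security updates.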
virtual-machines | Attach Disk Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/attach-disk-portal.md | The following example uses `parted` on `/dev/sdc`, which is where the first data ```bash sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%-sudo mkfs.xfs /dev/sdc1 +sudo mkfs.xfs /dev/sdc sudo partprobe /dev/sdc1 ``` |
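After the data disk is partitioned and formatted, the new filesystem is typically mounted and persisted in `/etc/fstab` by UUID, since device names like `/dev/sdc` can change across reboots. A minimal sketch, assuming `/dev/sdc1` and a `/datadrive` mount point (both placeholders), with the UUID hard-coded here for illustration; on a real VM you'd read it with `blkid`:

```shell
device="/dev/sdc1"        # assumption: the partition created above
mount_point="/datadrive"  # assumption: desired mount point
# On a real VM: uuid=$(sudo blkid -s UUID -o value "$device")
uuid="0e56bf4e-0000-0000-0000-000000000000"  # placeholder for illustration

fstab_line="UUID=$uuid $mount_point xfs defaults,nofail 1 2"
echo "$fstab_line"

# To apply on a real VM:
#   sudo mkdir -p "$mount_point"
#   sudo mount "$device" "$mount_point"
#   echo "$fstab_line" | sudo tee -a /etc/fstab
```

The `nofail` option lets the VM boot even if the data disk is detached.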
virtual-machines | Image Builder Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md | description: Learn how to create a Bicep file or ARM template JSON template to u Previously updated : 06/12/2023 Last updated : 07/17/2023 an Azure Compute Gallery is made up of: Before you can distribute to the gallery, you must create a gallery and an image definition, see [Create a gallery](../create-gallery.md). +> [!NOTE] +> The image version ID needs to be distinct from any image versions that are in the existing Azure Compute Gallery. ++ # [JSON](#tab/json) ```json |
virtual-machines | Maintenance Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md | This scope is integrated with [update management center](../update-center/overvi - The upper maintenance window is 3 hours 55 mins. - A minimum of 1 hour and 30 minutes is required for the maintenance window.-- There is no limit to the recurrence of your schedule.+- The value of **Repeat** should be at least 6 hours. To learn more about this topic, check out [update management center and scheduled patching](../update-center/scheduled-patching.md) You can create and manage maintenance configurations using any of the following For an Azure Functions sample, see [Scheduling Maintenance Updates with Maintenance Configurations and Azure Functions](https://github.com/Azure/azure-docs-powershell-samples/tree/master/maintenance-auto-scheduler). +## Service Limits ++The following are the recommended limits for the mentioned indicators: ++| Indicator | Limit | +|-|-| +| Number of schedules per Subscription per Region | 250 | +| Total number of Resource associations to a schedule | 3000 | +| Resource associations on each dynamic scope | 1000 | +| Number of dynamic scopes per Resource Group or Subscription per Region | 250 | ++The following are the Dynamic Scope Limits for **each dynamic scope**: ++| Resource | Limit | +|-|-| +| Resource associations | 1000 | +| Number of tag filters | 50 | +| Number of Resource Group filters | 50 | ++**Please note that the above limits are for the Dynamic Scoping in the Guest scope only.** + ## Next steps To learn more, see [Maintenance and updates](maintenance-and-updates.md). |
virtual-machines | Network Security Group Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/network-security-group-test.md | + + Title: Network security group test +description: Learn how to check if a security rule is blocking traffic to or from your virtual machine (VM) using network security group test in the Azure portal. ++++ Last updated : 07/17/2023++++# Network security group test ++You can use [network security groups](../virtual-network/network-security-groups-overview.md) to filter and control inbound and outbound network traffic to and from your virtual machines (VMs). You can also use [Azure Virtual Network Manager](../virtual-network-manager/overview.md) to apply admin security rules to your VMs to control network traffic. ++In this article, you learn how to use **Network security group test** to check whether a security rule is blocking traffic to or from your virtual machine, and to see which security rules are applied to your VM's traffic. ++## Prerequisites ++- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++- Sign in to the [Azure portal](https://portal.azure.com/?WT.mc_id=A261C142F) with your Azure account. ++- An Azure virtual machine (VM). If you don't have one, create [a Linux VM](./linux/quick-create-portal.md) or [a Windows VM](./windows/quick-create-portal.md). ++## Test inbound connections ++In this section, you test if RDP connections are allowed to your VM from a remote IP address. ++1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** from the search results. ++ :::image type="content" source="./media/network-security-group-test/virtual-machines-portal-search.png" alt-text="Screenshot of searching for virtual machines in the Azure portal." lightbox="./media/network-security-group-test/virtual-machines-portal-search.png"::: ++1. 
Select the VM that you want to test. ++1. Under **Help**, select **Network security group test**. ++ > [!NOTE] + > The virtual machine must be in running state. ++1. Select **Inbound connections**. The following options are available for **Inbound** tests: ++ | Setting | Value | + | | | + | Source type | - **My IP address**: your public IP address that you're using to access the Azure portal. <br> - **Any IP address**: any source IP address. <br> - **Other IP address/CIDR**: Source IP address or address prefix. <br> - **Service tag**: Source [service tag](../virtual-network/service-tags-overview.md). | + | IP address/CIDR | The IP address or address prefix that you want to use as the source. <br><br> **Note**: You see this option if you select **Other IP address/CIDR** for **Source type**. | + | Service tag | The service tag that you want to use as the source. <br><br> **Note**: You see this option if you select **Service tag** for **Source type**. | + | Service type | List of predefined services available for the test. <br><br> **Notes**:<br> - If you select a predefined service, the service port number and protocol are automatically selected. <br> - If you don't see the port and protocol information that you want, select **Custom**, and then enter the port number and select the protocol that you want. | + | Port | VM port number. <br><br> **Note**: If you select one of the predefined services, the correct port number is automatically selected. <br>Manually enter the port number when you select **Custom** for **Service type**. | + | Protocol | Connection protocol. Available options are: **Any**, **TCP**, and **UDP**. <br><br> **Note**: If you select one of the predefined services, the correct protocol used by the service is automatically selected. <br>Manually select the protocol when you select **Custom** for **Service type**. | ++1. 
To test if an RDP connection is allowed to the VM from a remote IP address, select the following values: ++ | Setting | Value | + | | | + | Source type | Select **My IP address**. | + | Service type | Select **RDP**. | + | Port | Leave the default of **3389**. | + | Protocol | Leave the default of **TCP**. | ++ :::image type="content" source="./media/network-security-group-test/inbound-test.png" alt-text="Screenshot of inbound network security group test in the Azure portal." lightbox="./media/network-security-group-test/inbound-test.png"::: ++1. Select **Run test**. ++ After a few seconds, you see the details of the test: + - If RDP connections are allowed to the VM from the remote IP address, you see **Traffic status: Allowed**. + - If RDP connections are blocked, you see **Traffic status: Denied**. In the Summary section, you see the security rules that are blocking the traffic. ++ :::image type="content" source="./media/network-security-group-test/inbound-test-result.png" alt-text="Screenshot of inbound network security group test result." lightbox="./media/network-security-group-test/inbound-test-result.png"::: ++ To allow the RDP connection to the VM from the remote IP address, add to the network security group a security rule that allows RDP connections from the remote IP address. This security rule must have a higher priority than the one that's blocking the traffic. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md). ++## Test outbound connections ++In this section, you test whether your VM can connect to the internet. ++1. In the search box at the top of the portal, enter *virtual machines*. Select **Virtual machines** from the search results. ++ :::image type="content" source="./media/network-security-group-test/virtual-machines-portal-search.png" alt-text="Screenshot of searching for virtual machines in the Azure portal." 
 lightbox="./media/network-security-group-test/virtual-machines-portal-search.png"::: ++1. Select the VM you want to test. ++1. Under **Help**, select **Network security group test**. ++ > [!NOTE] + > The virtual machine must be in running state. ++1. Select **Outbound connections**. The following options are available for **Outbound** tests: ++ | Setting | Value | + | | | + | Service type | List of predefined services available for the test. <br><br> **Notes**:<br> - If you select a predefined service, the service port number and protocol are automatically selected. <br> - If you don't see the port and protocol information that you want, select **Custom**, and then enter the port number and select the protocol that you want. | + | Port | VM port number. <br><br> **Note**: If you select one of the predefined services, the correct port number is automatically selected. <br>Manually enter the port number when you select **Custom** for **Service type**. | + | Protocol | Connection protocol. Available options are: **Any**, **TCP**, and **UDP**. <br><br> **Note**: If you select one of the predefined services, the correct protocol used by the service is automatically selected. <br>Manually select the protocol when you select **Custom** for **Service type**. | + | Destination type | - **My IP address**: your public IP address that you're using to access the Azure portal. <br> - **Any IP address**: any destination IP address. <br> - **Other IP address/CIDR**: Destination IP address or address prefix. <br> - **Service tag**: Destination [service tag](../virtual-network/service-tags-overview.md). | + | IP address/CIDR | The IP address or address prefix that you want to use as the destination. <br><br> **Note**: You see this option if you select **Other IP address/CIDR** for **Destination type**. | + | Service tag | The service tag that you want to use as the destination. <br><br> **Note**: You see this option if you select **Service tag** for **Destination type**. | ++1. 
To test if the VM can connect to the internet, select the following values: ++ | Setting | Value | + | | | + | Service type | Select **Custom**. | + | Port | Leave the default of **50000**. | + | Protocol | Leave the default of **Any**. | + | Destination type | Select **Any IP address**. | ++ :::image type="content" source="./media/network-security-group-test/outbound-test.png" alt-text="Screenshot of outbound network security group test in the Azure portal." lightbox="./media/network-security-group-test/outbound-test.png"::: ++1. Select **Run test**. ++ After a few seconds, you see the details of the test: + - If connections to the internet are allowed from the VM, you see **Traffic status: Allowed**. + - If connections to the internet are blocked, you see **Traffic status: Denied**. In the Summary section, you see the security rules that are blocking the traffic. ++ :::image type="content" source="./media/network-security-group-test/outbound-test-result.png" alt-text="Screenshot of outbound network security group test result." lightbox="./media/network-security-group-test/outbound-test-result.png"::: ++ To allow internet connections from the VM, add to the network security group a security rule that allows connections to the internet service tag. This security rule must have a higher priority than the one that's blocking the traffic. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md). ++## Next steps ++- To learn how to troubleshoot VM connections, see [Troubleshoot connections with Azure Network Watcher](../network-watcher/network-watcher-connectivity-portal.md). +- To learn more about network security groups, see [Network security groups overview](../virtual-network/network-security-groups-overview.md). |
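The portal-based inbound and outbound tests above can be approximated from the command line with Network Watcher's IP flow verify capability. A sketch as a dry run (all names and addresses are placeholders); remove the `echo` indirection to execute it against a real VM with Network Watcher enabled:

```shell
resource_group="myResourceGroup"      # placeholder: your resource group
vm_name="myVM"                        # placeholder: your VM name
local_endpoint="10.0.0.4:3389"        # placeholder: VM private IP and RDP port
remote_endpoint="203.0.113.10:60000"  # placeholder: remote client IP and ephemeral port

# Dry run: build and print the command rather than executing it.
cmd="az network watcher test-ip-flow --resource-group $resource_group --vm $vm_name --direction Inbound --protocol TCP --local $local_endpoint --remote $remote_endpoint"
echo "$cmd"
```

On a real VM this reports whether the flow is allowed or denied and, when denied, which security rule blocked it.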
virtual-machines | Vm Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md | The install/update/remove commands should be written assuming the application pa ## File naming -When the application file gets downloaded to the VM, the file name is the same as the name you use when you create the VM application. For example, if I name my VM application `myApp`, the file that is downloaded to the VM is also named `myApp`, regardless of what the file name is used in the storage account. If your VM application also has a configuration file, that file is the name of the application with `_config` appended. If `myApp` has a configuration file, it's named `myApp_config`. +When the application file gets downloaded to the VM, it's renamed as "MyVmApp" (no extension). This is because the VM isn't aware of your package's original name or extension. It utilizes the only name it has, which is the application name itself - "MyVmApp". -For example, if I name my VM application `myApp` when I create it in the Gallery, but it's stored as `myApplication.exe` in the storage account, when it gets downloaded to the VM the file name is `myApp`. My install string should start by renaming the file to be whatever it needs to be to run on the VM (like `myApp.exe`). +Here are a few alternatives to navigate this issue: ++You can modify your script to include a command for renaming the file before execution: +```azurepowershell +move .\\MyVmApp .\\MyApp.exe & MyApp.exe /S +``` +You can also use the `packageFileName` (and the corresponding `configFileName`) property to instruct us what to rename your file. For example, setting it to "MyApp.exe" will make your install script only need to be: +```powershell +MyApp.exe /S +``` +> [!TIP] +> If your blob was originally named "myApp.exe" instead of "MyBlob", then the above script would have worked without setting the `packageFileName` property. 
-The install, update, and remove commands must be written with file naming in mind. The `configFileName` is assigned to the config file for the VM and `packageFileName` is the name assigned to the downloaded package on the VM. For more information regarding these other VM settings, see [UserArtifactSettings](/rest/api/compute/gallery-application-versions/create-or-update?tabs=HTTP#userartifactsettings) in our API docs. ## Command interpreter |
virtual-network-manager | Concept Network Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-groups.md | To create an Azure Policy initiative definition and assignment for Azure Virtual To create, edit, or delete Azure Virtual Network Manager dynamic group policies, you need: -- Read and write Azure RBAC permissions to the underlying policy-- Azure RBAC permissions to join the network group (Classic Admin authorization isn't supported).+- Read and write role-based access control permissions to the underlying policy. +- Role-based access control permissions to join the network group (Classic Admin authorization isn't supported). For more information on required permissions for Azure Virtual Network Manager dynamic group policies, review [Required permissions](concept-azure-policy-integration.md#required-permissions). |
virtual-network-manager | How To Define Network Group Membership Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-define-network-group-membership-azure-policy.md | -In this article, you learn how to use Azure Policy conditional statements to create network groups with dynamic membership. You create these conditional statements using the basic editor by selecting parameters and operators from a drop-down menu. You'll also learn how to use the advanced editor to update conditional statements of an existing network group. +In this article, you learn how to use Azure Policy conditional statements to create network groups with dynamic membership. You create these conditional statements using the basic editor by selecting parameters and operators from a drop-down menu. Also, you learn how to use the advanced editor to update conditional statements of an existing network group. [Azure Policy](../governance/policy/overview.md) is a service to enable you to enforce per-resource governance at scale. It can be used to specify conditional expressions that define group membership, as opposed to explicit lists of virtual networks. This condition continues to power your network groups dynamically, allowing virtual networks to join and leave the group automatically as their fulfillment of the condition changes, with no Network Manager operation required. In this article, you learn how to use Azure Policy conditional statements to cre > This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -## Pre-requisites +## Prerequisites - An Azure account with an active subscription. 
[Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- To modify dynamic network groups, you must be [granted access via Azure RBAC role](concept-network-groups.md#network-groups-and-azure-policy) assignment only. Classic Admin/legacy authorization is not supported.+- To modify dynamic network groups, you must be [granted access with role-based access control](concept-network-groups.md#network-groups-and-azure-policy). Classic Admin/legacy authorization isn't supported. ## <a name="parameters"></a> Parameters and operators Virtual networks with dynamic memberships are selected using conditional statements. You can define more than one conditional statement by using *logical operators* such as **AND** and **OR** for scenarios where you need to further narrow the selected virtual networks. List of supported operators: ## Basic editor Assume you have the following virtual networks in your subscription. Each virtual network has an associated tag named **environment** with the respective value of *Production* or *Test*. -* myVNet01-EastUS - *Production* -* myVNet01-WestUS - *Production* -* myVNet02-WestUS - *Test* -* myVNet03-WestUS - *Test* ++| **Virtual Network** | **Tag** | +| - | - | +| myVNet01-EastUS | Production | +| myVNet01-WestUS | Production | +| myVNet02-WestUS | Test | +| myVNet03-WestUS | Test | You only want to select virtual networks that contain **VNet-A** in the name. To begin using the basic editor to create your conditional statement, you need to create a new network group. Both `"allOf"` and `"anyOf"` are used in the code. Since the **AND** operator is ### Example 3: Using custom tag values with advanced editor -In this example, a conditional statement is created that finds virtual networks where the name includes **myVNet** AND the **environment** tag equals **production**. 
+In this example, a conditional statement is created that finds virtual networks where the name includes **myVNet** AND the **environment** tag includes **production**. * Advanced editor: In this example, a conditional statement is created that finds virtual networks }, { "field": "tags['environment']",- "equals": "production" + "contains": "production" } ] } |
virtual-wan | Route Maps About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-about.md | This section outlines the basic workflow for Route-maps. You can [configure rout 1. [Configure a route map and route map rules](route-maps-how-to.md), then save. 1. Once a route map is configured, the virtual hub router and gateways begin an upgrade needed to support the Route-maps feature. - * The upgrade process takes between X – Y mins. + * The upgrade process takes around 30 minutes. * The upgrade process only happens the first time a route map is created on a hub. * If the route map is deleted, the virtual hub router remains on the new version of software. * Using Route-maps will incur an additional charge. For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.-1. The process is complete when the Provisioning state is 'Succeeded'. Reach out to preview-route-maps@microsoft.com if the process failed. +1. The process is complete when the Provisioning state is 'Succeeded'. Open a support case if the process failed. 1. The route map can now be applied to connections (ExpressRoute, S2S VPN, P2S VPN, VNet). 1. Once the route map has been applied in the correct direction, use the [Route-map dashboard](route-maps-dashboard.md) to verify that the route map is working as expected. The following section describes all the match conditions and actions supported f ## Troubleshooting -The following section describes common issues encountered when you configure Route-maps on your Virtual WAN hub. Read this section and, if your issue is still unresolved, reach out to preview-route-maps@microsoft.com for support. Expect a response within 48 business hours (Monday through Friday 9:00am - 5:00 PM PST). +The following section describes common issues encountered when you configure Route-maps on your Virtual WAN hub. Read this section and, if your issue is still unresolved, please open a support case. 
[!INCLUDE [Route-maps troubleshooting](../../includes/virtual-wan-route-maps-troubleshoot.md)] ## Next steps * [Configure Route-maps](route-maps-how-to.md).-* Use the [Route-maps dashboard](route-maps-dashboard.md) to monitor routes, AS Path, and BGP communities. +* Use the [Route-maps dashboard](route-maps-dashboard.md) to monitor routes, AS Path, and BGP communities. |
vpn-gateway | Create Routebased Vpn Gateway Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-routebased-vpn-gateway-powershell.md | description: Learn how to create a route-based virtual network gateway for a VPN Previously updated : 03/11/2022 Last updated : 07/17/2023 $vnet | Set-AzVirtualNetwork ## <a name="PublicIP"></a>Request a public IP address -A VPN gateway must have a dynamically allocated public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. Use the following example to request a public IP address: +A VPN gateway must have an allocated public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. Use the following example to request a public IP address: ```azurepowershell-interactive-$gwpip= New-AzPublicIpAddress -Name VNet1GWIP -ResourceGroupName TestRG1 -Location 'East US' -AllocationMethod Dynamic +$gwpip = New-AzPublicIpAddress -Name "VNet1GWIP" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard ``` ## <a name="GatewayIPConfig"></a>Create the gateway IP address configuration Creating a gateway can often take 45 minutes or more, depending on the selected ```azurepowershell-interactive New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 `--Location 'East US' -IpConfigurations $gwipconfig -GatewayType Vpn `--VpnType RouteBased -GatewaySku VpnGw1+-Location "East US" -IpConfigurations $gwipconfig -GatewayType "Vpn" ` +-VpnType "RouteBased" -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2" ``` ## <a name="viewgw"></a>View the VPN gateway You can view the VPN gateway using the [Get-AzVirtualNetworkGateway](/powershell ```azurepowershell-interactive Get-AzVirtualNetworkGateway -Name Vnet1GW -ResourceGroup TestRG1 ```--The output will look similar to this example: --``` -Name : VNet1GW -ResourceGroupName : TestRG1 -Location : eastus -Id : 
/subscriptions/<subscription ID>/resourceGroups/TestRG1/provide - rs/Microsoft.Network/virtualNetworkGateways/VNet1GW -Etag : W/"0952d-9da8-4d7d-a8ed-28c8ca0413" -ResourceGuid : dc6ce1de-2c4494-9d0b-20b03ac595 -ProvisioningState : Succeeded -Tags : -IpConfigurations : [ - { - "PrivateIpAllocationMethod": "Dynamic", - "Subnet": { - "Id": "/subscriptions/<subscription ID>/resourceGroups/Te - stRG1/providers/Microsoft.Network/virtualNetworks/VNet1/subnets/GatewaySubnet" - }, - "PublicIpAddress": { - "Id": "/subscriptions/<subscription ID>/resourceGroups/Te - stRG1/providers/Microsoft.Network/publicIPAddresses/VNet1GWIP" - }, - "Name": "default", - "Etag": "W/\"0952d-9da8-4d7d-a8ed-28c8ca0413\"", - "Id": "/subscriptions/<subscription ID>/resourceGroups/Test - RG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW/ipConfigurations/de - fault" - } - ] -GatewayType : Vpn -VpnType : RouteBased -EnableBgp : False -ActiveActive : False -GatewayDefaultSite : null -Sku : { - "Capacity": 2, - "Name": "VpnGw1", - "Tier": "VpnGw1" - } -VpnClientConfiguration : null -BgpSettings : { - -``` - ## <a name="viewgwpip"></a>View the public IP address To view the public IP address for your VPN gateway, use the [Get-AzPublicIpAddress](/powershell/module/az.network/Get-azPublicIpAddress) cmdlet. To view the public IP address for your VPN gateway, use the [Get-AzPublicIpAddre Get-AzPublicIpAddress -Name VNet1GWIP -ResourceGroupName TestRG1 ``` -In the example response, the IpAddress value is the public IP address. 
--``` -Name : VNet1GWIP -ResourceGroupName : TestRG1 -Location : eastus -Id : /subscriptions/<subscription ID>/resourceGroups/TestRG1/provi - ders/Microsoft.Network/publicIPAddresses/VNet1GWIP -Etag : W/"5001666a-bc2a-484b-bcf5-ad488dabd8ca" -ResourceGuid : 3c7c481e-9828-4dae-abdc-f95b383 -ProvisioningState : Succeeded -Tags : -PublicIpAllocationMethod : Dynamic -IpAddress : 13.90.153.3 -PublicIpAddressVersion : IPv4 -IdleTimeoutInMinutes : 4 -IpConfiguration : { - "Id": "/subscriptions/<subscription ID>/resourceGroups/Test - RG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW/ipConfigurations/ - default" - } -DnsSettings : null -Zones : {} -Sku : { - "Name": "Basic" - } -IpTags : {} -``` - ## Clean up resources When you no longer need the resources you created, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to delete the resource group. This will delete the resource group and all of the resources it contains. |