Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Identity Provider Mobile Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md | To enable sign-in for users with Mobile ID in Azure AD B2C, you need to create a |Key |Note | |||- | Client ID | The Mobile ID client ID. For example, 11111111-2222-3333-4444-555555555555. | + | Client ID | The Mobile ID client ID. For example, 00001111-aaaa-2222-bbbb-3333cccc4444. | | Client Secret| The Mobile ID client secret.| |
active-directory-b2c | Identity Provider Swissid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md | To enable sign-in for users with a SwissID account in Azure AD B2C, you need to |Key |Note | ||| | Environment| The SwissID OpenId well-known configuration endpoint. For example, `https://login.sandbox.pre.swissid.ch/idp/oauth2/.well-known/openid-configuration`. |- | Client ID | The SwissID client ID. For example, `11111111-2222-3333-4444-555555555555`. | + | Client ID | The SwissID client ID. For example, `00001111-aaaa-2222-bbbb-3333cccc4444`. | | Password| The SwissID client secret.| |
active-directory-b2c | Implicit Flow Single Page Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md | In this request, the client indicates the permissions that it needs to acquire f - `{tenant}` with the name of your Azure AD B2C tenant. -- `90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6` with the app ID of the application you've registered in your tenant. +- `00001111-aaaa-2222-bbbb-3333cccc4444` with the app ID of the application you've registered in your tenant. - `{policy}` with the name of a policy you've created in your tenant, for example `b2c_1_sign_in`. ```http GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/authorize?-client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 +client_id=00001111-aaaa-2222-bbbb-3333cccc4444 &response_type=id_token+token &redirect_uri=https%3A%2F%2Faadb2cplayground.azurewebsites.net%2F &response_mode=fragment GET https://aadb2cplayground.azurewebsites.net/# access_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q... &token_type=Bearer &expires_in=3599-&scope="90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access", +&scope="00001111-aaaa-2222-bbbb-3333cccc4444 offline_access", &id_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q... &state=arbitrary_data_you_sent_earlier ``` In a typical web app flow, you would make a request to the `/token` endpoint. 
Ho ```http https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/authorize?-client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 +client_id=00001111-aaaa-2222-bbbb-3333cccc4444 &response_type=token &redirect_uri=https%3A%2F%2Faadb2cplayground.azurewebsites.net%2F &scope=https%3A%2F%2Fapi.contoso.com%2Ftasks.read GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/ ## Next steps -See the code sample: [Sign-in with Azure AD B2C in a JavaScript SPA](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/VanillaJSTestApp2.0/app/b2c). +See the code sample: [Sign-in with Azure AD B2C in a JavaScript SPA](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/VanillaJSTestApp2.0/app/b2c). |
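For the implicit-flow entry above: after the redirect, the tokens arrive in the URL fragment rather than the query string. A minimal sketch of parsing that fragment (the hostname and token values below are the article's placeholder examples):

```python
from urllib.parse import parse_qs, urlsplit

def parse_fragment_response(redirect_url: str) -> dict:
    """Extract token response parameters from the URL fragment returned
    by an implicit-flow authorize request (response_mode=fragment)."""
    fragment = urlsplit(redirect_url).fragment
    # parse_qs maps each parameter to a list of values; keep the first.
    return {key: values[0] for key, values in parse_qs(fragment).items()}

redirect = ("https://aadb2cplayground.azurewebsites.net/#"
            "access_token=eyJ0eXAiOiJKV1Qi..."
            "&token_type=Bearer&expires_in=3599"
            "&state=arbitrary_data_you_sent_earlier")
tokens = parse_fragment_response(redirect)
# tokens["token_type"] == "Bearer", tokens["expires_in"] == "3599"
```

The `state` value should be compared against the one sent in the original request before the response is trusted.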
active-directory-b2c | Json Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md | The following claims transformation outputs a JSON string claim that will be the - Input claims: - **email**, transformation claim type **customerEntity.email**: "john.s@contoso.com"- - **objectId**, transformation claim type **customerEntity.userObjectId** "01234567-89ab-cdef-0123-456789abcdef" + - **objectId**, transformation claim type **customerEntity.userObjectId** "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" - **givenName**, transformation claim type **customerEntity.firstName** "John" - **surname**, transformation claim type **customerEntity.lastName** "Smith" - Input parameter: The following claims transformation outputs a JSON string claim that will be the { "customerEntity":{ "email":"john.s@contoso.com",- "userObjectId":"01234567-89ab-cdef-0123-456789abcdef", + "userObjectId":"aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb", "firstName":"John", "lastName":"Smith", "role":{ The **GenerateJson** claims transformation accepts plain strings. If an input cl { "customerEntity":{ "email":"[\"someone@contoso.com\"]",- "userObjectId":"01234567-89ab-cdef-0123-456789abcdef", + "userObjectId":"aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb", "firstName":"John", "lastName":"Smith", "role":{ |
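The GenerateJson transformation in the row above maps dotted transformation claim types (such as `customerEntity.email`) onto nested JSON properties. A rough re-implementation of that mapping, for illustration only (the claim names come from the article's example):

```python
import json

def generate_json(claims: dict[str, str]) -> str:
    """Build a JSON string from dotted claim paths, nesting each path
    segment as an object property (mirrors the GenerateJson example)."""
    result: dict = {}
    for dotted_path, value in claims.items():
        node = result
        *parents, leaf = dotted_path.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return json.dumps(result)

payload = generate_json({
    "customerEntity.email": "john.s@contoso.com",
    "customerEntity.userObjectId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
    "customerEntity.firstName": "John",
    "customerEntity.lastName": "Smith",
})
```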
active-directory-b2c | Jwt Issuer Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/jwt-issuer-technical-profile.md | The **InputClaims**, **OutputClaims**, and **PersistClaims** elements are empty | refresh_token_lifetime_secs | No | Refresh token lifetime. The maximum time period before which a refresh token can be used to acquire a new access token, if your application has been granted the offline_access scope. The default is 1,209,600 seconds (14 days). The minimum (inclusive) is 86,400 seconds (24 hours). The maximum (inclusive) is 7,776,000 seconds (90 days). | | rolling_refresh_token_lifetime_secs | No | Refresh token sliding window lifetime. After this time period elapses, the user is forced to reauthenticate, irrespective of the validity period of the most recent refresh token acquired by the application. If you don't want to enforce a sliding window lifetime, set the value of allow_infinite_rolling_refresh_token to `true`. The default is 7,776,000 seconds (90 days). The minimum (inclusive) is 86,400 seconds (24 hours). The maximum (inclusive) is 31,536,000 seconds (365 days). | | allow_infinite_rolling_refresh_token | No | If set to `true`, the refresh token sliding window lifetime never expires. |-| IssuanceClaimPattern | No | Controls the Issuer (iss) claim. One of the values:<ul><li>AuthorityAndTenantGuid - The iss claim includes your domain name, such as `login.microsoftonline` or `tenant-name.b2clogin.com`, and your tenant identifier https:\//login.microsoftonline.com/00000000-0000-0000-0000-000000000000/v2.0/</li><li>AuthorityWithTfp - The iss claim includes your domain name, such as `login.microsoftonline` or `tenant-name.b2clogin.com`, your tenant identifier and your relying party policy name. 
https:\//login.microsoftonline.com/tfp/00000000-0000-0000-0000-000000000000/b2c_1a_tp_sign-up-or-sign-in/v2.0/</li></ul> Default value: AuthorityAndTenantGuid | +| IssuanceClaimPattern | No | Controls the Issuer (iss) claim. One of the values:<ul><li>AuthorityAndTenantGuid - The iss claim includes your domain name, such as `login.microsoftonline` or `tenant-name.b2clogin.com`, and your tenant identifier https:\//login.microsoftonline.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/</li><li>AuthorityWithTfp - The iss claim includes your domain name, such as `login.microsoftonline` or `tenant-name.b2clogin.com`, your tenant identifier and your relying party policy name. https:\//login.microsoftonline.com/tfp/aaaabbbb-0000-cccc-1111-dddd2222eeee/b2c_1a_tp_sign-up-or-sign-in/v2.0/</li></ul> Default value: AuthorityAndTenantGuid | | AuthenticationContextReferenceClaimPattern | No | Controls the `acr` claim value.<ul><li>None - Azure AD B2C doesn't issue the acr claim</li><li>PolicyId - the `acr` claim contains the policy name</li></ul>The options for setting this value are TFP (trust framework policy) and ACR (authentication context reference). It's recommended to set this value to TFP. To set the value, ensure the `<Item>` with the `Key="AuthenticationContextReferenceClaimPattern"` exists and the value is `None`. In your relying party policy, add an `<OutputClaims>` item with this element: `<OutputClaim ClaimTypeReferenceId="trustFrameworkPolicy" Required="true" DefaultValue="{policy}" PartnerClaimType="tfp"/>`. Also make sure your policy contains the claim type `<ClaimType Id="trustFrameworkPolicy"> <DisplayName>trustFrameworkPolicy</DisplayName> <DataType>string</DataType> </ClaimType>` | |RefreshTokenUserJourneyId| No | The identifier of a user journey that should be executed during the [refresh an access token](authorization-code-flow.md#4-refresh-the-token) POST request to the `/token` endpoint. 
| The CryptographicKeys element contains the following attributes: ## Session management To configure the Azure AD B2C sessions between Azure AD B2C and a relying party application, in the attribute of the `UseTechnicalProfileForSessionManagement` element, add a reference to the [OAuthSSOSessionProvider](custom-policy-reference-sso.md#oauthssosessionprovider) SSO session. |
active-directory-b2c | Language Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md | In the following example, English (en) and Spanish (es) custom strings are added 1. Switch your browser default language to Spanish. Or you can add the query string parameter, `ui_locales` to the authorization request. For example: ```http-https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin/oauth2/v2.0/authorize&client_id=0239a9cc-309c-4d41-12f1-31299feb2e82&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login&ui_locales=es +https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin/oauth2/v2.0/authorize&client_id=00001111-aaaa-2222-bbbb-3333cccc4444&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login&ui_locales=es ``` ::: zone-end |
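The `ui_locales` request in the row above can be assembled programmatically. A sketch under the article's placeholders (tenant name, policy name, and client ID are examples; the query string here is introduced with `?`):

```python
from urllib.parse import urlencode

def authorize_url(tenant: str, policy: str, client_id: str, ui_locale: str) -> str:
    """Build an authorize request that forces the page language via the
    ui_locales parameter."""
    base = (f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
            f"{policy}/oauth2/v2.0/authorize")
    params = {
        "client_id": client_id,
        "nonce": "defaultNonce",
        "redirect_uri": "https://jwt.ms",
        "scope": "openid",
        "response_type": "id_token",
        "prompt": "login",
        "ui_locales": ui_locale,
    }
    return f"{base}?{urlencode(params)}"

url = authorize_url("contoso", "B2C_1A_signup_signin",
                    "00001111-aaaa-2222-bbbb-3333cccc4444", "es")
```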
active-directory-b2c | Openid Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md | When your web application needs to authenticate the user and run a user flow, it In this request, the client indicates the permissions that it needs to acquire from the user in the `scope` parameter, and specifies the user flow to run. To get a feel for how the request works, paste the request into your browser and run it. Replace: - `{tenant}` with the name of your tenant.-- `90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6` with the app ID of an [application you registered in your tenant](tutorial-register-applications.md). +- `00001111-aaaa-2222-bbbb-3333cccc4444` with the app ID of an [application you registered in your tenant](tutorial-register-applications.md). - `{application-id-uri}/{scope-name}` with the Application ID URI and scope of an application that you registered in your tenant. - `{policy}` with the policy name that you have in your tenant, for example `b2c_1_sign_in`. In this request, the client indicates the permissions that it needs to acquire f GET /{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/authorize? Host: {tenant}.b2clogin.com -client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 +client_id=00001111-aaaa-2222-bbbb-3333cccc4444 &response_type=code+id_token &redirect_uri=https%3A%2F%2Fjwt.ms%2F &response_mode=fragment Host: {tenant}.b2clogin.com Content-Type: application/x-www-form-urlencoded grant_type=authorization_code-&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 -&scope=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access +&client_id=00001111-aaaa-2222-bbbb-3333cccc4444 +&scope=00001111-aaaa-2222-bbbb-3333cccc4444 offline_access &code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq... 
&redirect_uri=urn:ietf:wg:oauth:2.0:oob ``` A successful token response looks like: "not_before": "1442340812", "token_type": "Bearer", "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q...",- "scope": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access", + "scope": "00001111-aaaa-2222-bbbb-3333cccc4444 offline_access", "expires_in": "3600", "expires_on": "1644254945", "refresh_token": "AAQfQmvuDy8WtUv-sd0TBwWVQs1rC-Lfxa_NDkLqpg50Cxp5Dxj0VPF1mx2Z...", Host: {tenant}.b2clogin.com Content-Type: application/x-www-form-urlencoded grant_type=refresh_token-&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 +&client_id=00001111-aaaa-2222-bbbb-3333cccc4444 &scope=openid offline_access &refresh_token=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq... &redirect_uri=urn:ietf:wg:oauth:2.0:oob A successful token response looks like: "not_before": "1442340812", "token_type": "Bearer", "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q...",- "scope": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access", + "scope": "00001111-aaaa-2222-bbbb-3333cccc4444 offline_access", "expires_in": "3600", "refresh_token": "AAQfQmvuDy8WtUv-sd0TBwWVQs1rC-Lfxa_NDkLqpg50Cxp5Dxj0VPF1mx2Z...", "refresh_token_expires_in": "1209600" |
active-directory-b2c | Partner Asignio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md | Get the custom policy starter packs from GitHub, then update the XML files in th <Item Key="scope">openid profile email</Item> <Item Key="UsePolicyInRedirectUri">0</Item> <!-- Update the Client ID below to the Asignio Application ID -->- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item> + <Item Key="client_id">00001111-aaaa-2222-bbbb-3333cccc4444</Item> <Item Key="IncludeClaimResolvingInClaimsHandling">true</Item> <!-- trying to add additional claim-->- <!--Insert b2c-extensions-app application ID here, for example: 11111111-1111-1111-1111-111111111111--> - <Item Key="11111111-1111-1111-1111-111111111111"></Item> - <!--Insert b2c-extensions-app application ObjectId here, for example: 22222222-2222-2222-2222-222222222222--> - <Item Key="22222222-2222-2222-2222-222222222222"></Item> + <!--Insert b2c-extensions-app application ID here, for example: 00001111-aaaa-2222-bbbb-3333cccc4444--> + <Item Key="00001111-aaaa-2222-bbbb-3333cccc4444"></Item> + <!--Insert b2c-extensions-app application ObjectId here, for example: aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb--> + <Item Key="aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"></Item> <!-- The key below allows you to specify each of the Azure AD tenants that can be used to sign in. Update the GUIDs below for each tenant. -->- <!--<Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111</Item>--> + <!--<Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/00001111-aaaa-2222-bbbb-3333cccc4444</Item>--> <!-- The commented key below specifies that users from any tenant can sign-in. Uncomment if you would like anyone with an Azure AD account to be able to sign in. 
--> <Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/</Item> </Metadata> If you have an Asignio Signature, you're prompted to authenticate with your Asig * [Azure AD B2C Samples](https://stackoverflow.com/questions/tagged/azure-ad-b2c) * YouTube: [Identity Azure AD B2C Series](https://www.youtube.com/playlist?list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0) * [Azure AD B2C custom policy overview](custom-policy-overview.md)-* [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy) +* [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy) |
active-directory-b2c | Partner Bindid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md | You can define the Transmit Security as a claims provider by adding it to the ** <Metadata> <Item Key="METADATA">https://api.transmitsecurity.io/cis/oidc/.well-known/openid-configuration</Item> <!-- Update the Client ID below to the Transmit Security client ID -->- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item> + <Item Key="client_id">00001111-aaaa-2222-bbbb-3333cccc4444</Item> <Item Key="response_types">code</Item> <Item Key="scope">openid email</Item> <Item Key="response_mode">form_post</Item> |
active-directory-b2c | Partner Biocatch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md | For the following instructions, see [Tutorial: Register a web application in Azu "iss": "https://tenant.b2clogin.com/12345678-1234-1234-1234-123456789012/v2.0/", - "sub": "12345678-1234-1234-1234-123456789012", + "sub": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb", - "aud": "12345678-1234-1234-1234-123456789012", + "aud": "00001111-aaaa-2222-bbbb-3333cccc4444", "acr": "b2c_1a_signup_signin_biocatch_policy", For the following instructions, see [Tutorial: Register a web application in Azu "score": 275, - "tid": "12345678-1234-1234-1234-123456789012" + "tid": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" }.[Signature] |
active-directory-b2c | Partner Bloksec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md | To define BlokSec as a claims provider, add it to the **ClaimsProvider** element <Metadata> <Item Key="METADATA">https://api.bloksec.io/oidc/.well-known/openid-configuration</Item> <!-- Update the Client ID below to the BlokSec Application ID -->- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item> + <Item Key="client_id">00001111-aaaa-2222-bbbb-3333cccc4444</Item> <Item Key="response_types">code</Item> <Item Key="scope">openid profile email</Item> <Item Key="response_mode">form_post</Item> |
active-directory-b2c | Partner Dynamics 365 Fraud Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md | In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr |{Settings:Tenant}|Your tenant short name |`your-tenant` - from your-tenant.onmicrosoft.com| |{Settings:DeploymentMode}|Application Insights deployment mode to use|`Production` or `Development`| |{Settings:DeveloperMode}|Whether to deploy the policies in Application Insights developer mode|`true` or `false`|-|{Settings:AppInsightsInstrumentationKey}|Instrumentation key of your Application Insights instance*|`01234567-89ab-cdef-0123-456789abcdef`| -|{Settings:IdentityExperienceFrameworkAppId}|App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant|`01234567-89ab-cdef-0123-456789abcdef`| -|{Settings:ProxyIdentityExperienceFrameworkAppId}|App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant|`01234567-89ab-cdef-0123-456789abcdef`| +|{Settings:AppInsightsInstrumentationKey}|Instrumentation key of your Application Insights instance*|`00001111-aaaa-2222-bbbb-3333cccc4444`| +|{Settings:IdentityExperienceFrameworkAppId}|App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant|`00001111-aaaa-2222-bbbb-3333cccc4444`| +|{Settings:ProxyIdentityExperienceFrameworkAppId}|App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant|`00001111-aaaa-2222-bbbb-3333cccc4444`| |{Settings:FacebookClientId}|App ID of the Facebook app you configured for federation with B2C| `000000000000000`| |{Settings:FacebookClientSecretKeyContainer}| Name of the policy key, in which you saved Facebook's app secret |`B2C_1A_FacebookAppSecret`| |{Settings:ContentDefinitionBaseUri}|Endpoint where you deployed the UI 
files|`https://<my-storage-account>.blob.core.windows.net/<my-storage-container>`|-|{Settings:DfpApiBaseUrl}|The base path for your DFP API instance, found in the DFP portal| `https://tenantname-01234567-89ab-cdef-0123-456789abcdef.api.dfp.dynamics.com/v1.0/`| +|{Settings:DfpApiBaseUrl}|The base path for your DFP API instance, found in the DFP portal| `https://tenantname-00001111-aaaa-2222-bbbb-3333cccc4444.api.dfp.dynamics.com/v1.0/`| |{Settings:DfpApiAuthScope}|The client_credentials scope for the DFP API service|`https://api.dfp.dynamics-int.com/.default or https://api.dfp.dynamics.com/.default`|-|{Settings:DfpTenantId}|The ID of the Microsoft Entra tenant (not B2C) where DFP is licensed and installed|`01234567-89ab-cdef-0123-456789abcdef` or `contoso.onmicrosoft.com` | +|{Settings:DfpTenantId}|The ID of the Microsoft Entra tenant (not B2C) where DFP is licensed and installed|`00001111-aaaa-2222-bbbb-3333cccc4444` or `contoso.onmicrosoft.com` | +|{Settings:DfpAppClientIdKeyContainer}|Name of the policy key in which you save the DFP client ID|`B2C_1A_DFPClientId`| |{Settings:DfpAppClientSecretKeyContainer}|Name of the policy key in which you save the DFP client secret |`B2C_1A_DFPClientSecret`| |{Settings:DfpEnvironment}| The ID of the DFP environment.|Environment ID is a globally unique identifier of the DFP environment that you send the data to. Your custom policy should call the API endpoint, including the query string parameter `x-ms-dfpenvid=<your-env-id>`| |
active-directory-b2c | Partner Experian | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md | In the partner-integration [custom policies](https://github.com/azure-ad-b2c/par | | | | | {your_tenant_name} | Your tenant short name | "yourtenant" from yourtenant.onmicrosoft.com | | {your_trustframeworkbase_policy} | Azure AD B2C name of your TrustFrameworkBase policy| B2C_1A_experian_TrustFrameworkBase|-| {your_tenant_IdentityExperienceFramework_appid} |App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant| 01234567-89ab-cdef-0123-456789abcdef| -| {your_tenant_ ProxyIdentityExperienceFramework_appid}| App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef| -| {your_tenant_extensions_appid} | App ID of your tenant storage application| 01234567-89ab-cdef-0123-456789abcdef| -| {your_tenant_extensions_app_objectid}| Object ID of your tenant storage application| 01234567-89ab-cdef-0123-456789abcdef| +| {your_tenant_IdentityExperienceFramework_appid} |App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant| 00001111-aaaa-2222-bbbb-3333cccc4444| +| {your_tenant_ ProxyIdentityExperienceFramework_appid}| App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant | 00001111-aaaa-2222-bbbb-3333cccc4444| +| {your_tenant_extensions_appid} | App ID of your tenant storage application| 00001111-aaaa-2222-bbbb-3333cccc4444| +| {your_tenant_extensions_app_objectid}| Object ID of your tenant storage application| aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb| | {your_api_username_key_name}| Username key name, made in **Create API policy keys**| B2C\_1A\_RestApiUsername| | {your_api_password_key_name}| Password key name, made in **Create API policy keys**| B2C\_1A\_RestApiPassword| | {your_app_service_URL}| App service URL you set up| `https://yourapp.azurewebsites.net`| |
active-directory-b2c | Partner Idemia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idemia.md | To define IDEMIA as a claims provider, add it to the **ClaimsProvider** element <Metadata> <Item Key="METADATA">https://idp.XXXX.net/oxauth/.well-known/openid-configuration</Item> <!-- Update the Client ID below to the Application ID -->- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item> + <Item Key="client_id">00001111-aaaa-2222-bbbb-3333cccc4444</Item> <Item Key="response_types">code</Item> <Item Key="scope">openid id_basic mt_scope</Item> <Item Key="response_mode">form_post</Item> |
active-directory-b2c | Partner Onfido | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md | In [/samples/OnFido-Combined/Policies](https://github.com/azure-ad-b2c/partner-i |Placeholder|Replace with value|Example| |||| |{your_tenant_name}|Your tenant short name|"your tenant" from yourtenant.onmicrosoft.com|-|{your_tenantID}|Your Azure AD B2C TenantID| 01234567-89ab-cdef-0123-456789abcdef| -|{your_tenant_IdentityExperienceFramework_appid}|IdentityExperienceFramework app App ID configured in your Azure AD B2C tenant|01234567-89ab-cdef-0123-456789abcdef| -|{your_tenant_ ProxyIdentityExperienceFramework_appid}|ProxyIdentityExperienceFramework app App ID configured in your Azure AD B2C tenant| 01234567-89ab-cdef-0123-456789abcdef| -|{your_tenant_extensions_appid}|Your tenant storage application App ID| 01234567-89ab-cdef-0123-456789abcdef| -|{your_tenant_extensions_app_objectid}|Your tenant storage application Object ID| 01234567-89ab-cdef-0123-456789abcdef| -|{your_app_insights_instrumentation_key}|Your app insights instance* instrumentation key|01234567-89ab-cdef-0123-456789abcdef| +|{your_tenantID}|Your Azure AD B2C TenantID| aaaabbbb-0000-cccc-1111-dddd2222eeee| +|{your_tenant_IdentityExperienceFramework_appid}|IdentityExperienceFramework app App ID configured in your Azure AD B2C tenant|00001111-aaaa-2222-bbbb-3333cccc4444| +|{your_tenant_ ProxyIdentityExperienceFramework_appid}|ProxyIdentityExperienceFramework app App ID configured in your Azure AD B2C tenant| 00001111-aaaa-2222-bbbb-3333cccc4444| +|{your_tenant_extensions_appid}|Your tenant storage application App ID| 00001111-aaaa-2222-bbbb-3333cccc4444| +|{your_tenant_extensions_app_objectid}|Your tenant storage application Object ID| aaaabbbb-0000-cccc-1111-dddd2222eeee| +|{your_app_insights_instrumentation_key}|Your app insights instance* instrumentation key|00001111-aaaa-2222-bbbb-3333cccc4444| |{your_ui_file_base_url}|Location URL of your UI folders 
**ocean_blue**, **dist**, and **assets**| `https://yourstorage.blob.core.windows.net/UI/`| |{your_app_service_URL}|The app service URL you set up|`https://yourapp.azurewebsites.net`| |
active-directory-b2c | Partner Trusona | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md | Use the following steps to add Trusona as a claims provider: <Item Key="METADATA">https://authcloud.trusona.net/.well-known/openid-configuration</Item> <Item Key="scope">openid profile email</Item> <!-- Update the Client ID to the Trusona Authentication Cloud Application ID -->- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item> + <Item Key="client_id">00001111-aaaa-2222-bbbb-3333cccc4444</Item> <Item Key="response_types">code</Item> <Item Key="response_mode">form_post</Item> <Item Key="HttpBinding">POST</Item> <Item Key="UsePolicyInRedirectUri">false</Item> <Item Key="IncludeClaimResolvingInClaimsHandling">true</Item> <!-- trying to add additional claim-->- <!--Insert b2c-extensions-app application ID here, for example: 11111111-1111-1111-1111-111111111111--> - <Item Key="11111111-1111-1111-1111-111111111111"></Item> - <!--Insert b2c-extensions-app application ObjectId here, for example: 22222222-2222-2222-2222-222222222222--> - <Item Key="11111111-1111-1111-1111-111111111111"></Item> + <!--Insert b2c-extensions-app application ID here, for example: 00001111-aaaa-2222-bbbb-3333cccc4444--> + <Item Key="00001111-aaaa-2222-bbbb-3333cccc4444"></Item> + <!--Insert b2c-extensions-app application ObjectId here, for example: aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb--> + <Item Key="aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"></Item> <!-- The key allows you to specify each of the Azure AD tenants that can be used to sign in. Update the GUIDs for each tenant. 
-->- <!--<Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/187f16e9-81ab-4516-8db7-1c8ef94ffeca,https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111</Item>--> + <!--<Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/187f16e9-81ab-4516-8db7-1c8ef94ffeca,https://login.microsoftonline.com/00001111-aaaa-2222-bbbb-3333cccc4444</Item>--> <!-- The commented key specifies that users from any tenant can sign-in. Uncomment if you would like anyone with an Azure AD account to be able to sign in. --> <Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/</Item> |
active-directory-b2c | Partner Twilio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-twilio.md | The following components make up the Twilio solution: ```xml <add key="ida:Tenant" value="yourtenant.onmicrosoft.com" />- <add key="ida:TenantId" value="d6f33888-0000-4c1f-9b50-1590f171fc70" /> - <add key="ida:ClientId" value="6bd98cc8-0000-446a-a05e-b5716ef2651b" /> + <add key="ida:TenantId" value="aaaabbbb-0000-cccc-1111-dddd2222eeee" /> + <add key="ida:ClientId" value="00001111-aaaa-2222-bbbb-3333cccc4444" /> <add key="ida:ClientSecret" value="secret" /> <add key="ida:AadInstance" value="https://yourtenant.b2clogin.com/tfp/{0}/{1}" /> <add key="ida:RedirectUri" value="https://your hosted psd2 demo app url/" /> |
active-directory-b2c | Partner Xid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md | Get the custom policy starter packs from GitHub, then update the XML files in th <Metadata> <Item Key="METADATA">https://oidc-uat.x-id.io/.well-known/openid-configuration</Item> <!-- Update the Client ID below to the X-ID Application ID -->- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item> + <Item Key="client_id">00001111-aaaa-2222-bbbb-3333cccc4444</Item> <Item Key="response_types">code</Item> <Item Key="scope">openid verification</Item> <Item Key="response_mode">query</Item> |
active-directory-b2c | Relyingparty | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md | By using custom policies in Azure AD B2C, you can send a parameter in a query st The following example passes a parameter named `campaignId` with a value of `hawaii` in the query string: -`https://login.microsoft.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1A_signup_signin&client_id=a415078a-0402-4ce3-a9c6-ec1947fcfb3f&nonce=defaultNonce&redirect_uri=http%3A%2F%2Fjwt.io%2F&scope=openid&response_type=id_token&prompt=login&campaignId=hawaii` +`https://login.microsoft.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1A_signup_signin&client_id=00001111-aaaa-2222-bbbb-3333cccc4444&nonce=defaultNonce&redirect_uri=http%3A%2F%2Fjwt.io%2F&scope=openid&response_type=id_token&prompt=login&campaignId=hawaii` The **ContentDefinitionParameters** element contains the following element: The JWT token includes the `sub` claim with the user objectId: ```json { ...- "sub": "6fbbd70d-262b-4b50-804c-257ae1706ef2", + "sub": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb", ... } ``` |
active-directory-b2c | Secure Api Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-api-management.md | You should now have two URLs recorded for use in the next section: the OpenID Co ``` https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration-https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/ +https://<tenant-name>.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/ ``` ## Configure the inbound policy in Azure API Management You're now ready to add the inbound policy in Azure API Management that validate <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid."> <openid-config url="https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration" /> <audiences>- <audience>44444444-0000-0000-0000-444444444444</audience> + <audience>00001111-aaaa-2222-bbbb-3333cccc4444</audience> </audiences> <issuers>- <issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer> + <issuer>https://<tenant-name>.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/</issuer> </issuers> </validate-jwt> <base /> Several applications typically interact with a single REST API. 
To enable your A ```xml <!-- Accept tokens intended for these recipient applications --> <audiences>- <audience>44444444-0000-0000-0000-444444444444</audience> - <audience>66666666-0000-0000-0000-666666666666</audience> + <audience>00001111-aaaa-2222-bbbb-3333cccc4444</audience> + <audience>11112222-bbbb-3333-cccc-4444dddd5555</audience> </audiences> ``` Similarly, to support multiple token issuers, add their endpoint URIs to the `<i ```xml <!-- Accept tokens from multiple issuers --> <issuers>- <issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer> - <issuer>https://login.microsoftonline.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer> + <issuer>https://<tenant-name>.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/</issuer> + <issuer>https://login.microsoftonline.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/</issuer> </issuers> ``` The following example Azure API Management inbound policy illustrates how to acc <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid."> <openid-config url="https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration" /> <audiences>- <audience>44444444-0000-0000-0000-444444444444</audience> - <audience>66666666-0000-0000-0000-666666666666</audience> + <audience>00001111-aaaa-2222-bbbb-3333cccc4444</audience> + <audience>11112222-bbbb-3333-cccc-4444dddd5555</audience> </audiences> <issuers>- <issuer>https://login.microsoftonline.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer> - <issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer> + <issuer>https://login.microsoftonline.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/</issuer> + <issuer>https://<tenant-name>.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/</issuer> </issuers> </validate-jwt> <base /> |
active-directory-b2c | Tokens Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tokens-overview.md | The following table lists the claims that you can expect in ID tokens and access | Name | Claim | Example value | Description | | - | -- | - | -- |-| Audience | `aud` | `90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6` | Identifies the intended recipient of the token. For Azure AD B2C, the audience is the application ID. Your application should validate this value and reject the token if it doesn't match. Audience is synonymous with resource. | -| Issuer | `iss` |`https://<tenant-name>.b2clogin.com/775527ff-9a37-4307-8b3d-cc311f58d925/v2.0/` | Identifies the security token service (STS) that constructs and returns the token. It also identifies the directory in which the user was authenticated. Your application should validate the issuer claim to make sure that the token came from the appropriate endpoint. | +| Audience | `aud` | `00001111-aaaa-2222-bbbb-3333cccc4444` | Identifies the intended recipient of the token. For Azure AD B2C, the audience is the application ID. Your application should validate this value and reject the token if it doesn't match. Audience is synonymous with resource. | +| Issuer | `iss` |`https://<tenant-name>.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/` | Identifies the security token service (STS) that constructs and returns the token. It also identifies the directory in which the user was authenticated. Your application should validate the issuer claim to make sure that the token came from the appropriate endpoint. | | Issued at | `iat` | `1438535543` | The time at which the token was issued, represented in epoch time. | | Expiration time | `exp` | `1438539443` | The time at which the token becomes invalid, represented in epoch time. Your application should use this claim to verify the validity of the token lifetime. 
| | Not before | `nbf` | `1438535543` | The time at which the token becomes valid, represented in epoch time. This time is usually the same as the time the token was issued. Your application should use this claim to verify the validity of the token lifetime. | The following table lists the claims that you can expect in ID tokens and access | Code hash | `c_hash` | `SGCPtt01wxwfgnYZy2VJtQ` | A code hash included in an ID token only when the token is issued together with an OAuth 2.0 authorization code. A code hash can be used to validate the authenticity of an authorization code. For more information about how to perform this validation, see the [OpenID Connect specification](https://openid.net/specs/openid-connect-core-1_0.html). | | Access token hash | `at_hash` | `SGCPtt01wxwfgnYZy2VJtQ` | An access token hash included in an ID token only when the token is issued together with an OAuth 2.0 access token. An access token hash can be used to validate the authenticity of an access token. For more information about how to perform this validation, see the [OpenID Connect specification](https://openid.net/specs/openid-connect-core-1_0.html) | | Nonce | `nonce` | `12345` | A nonce is a strategy used to mitigate token replay attacks. Your application can specify a nonce in an authorization request by using the `nonce` query parameter. The value you provide in the request is emitted unmodified in the `nonce` claim of an ID token only. This claim allows your application to verify the value against the value specified on the request. Your application should perform this validation during the ID token validation process. |-| Subject | `sub` | `884408e1-2918-4cz0-b12d-3aa027d7563b` | The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. It can be used to perform authorization checks safely, such as when the token is used to access a resource. 
By default, the subject claim is populated with the object ID of the user in the directory. | +| Subject | `sub` | `aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb` | The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. It can be used to perform authorization checks safely, such as when the token is used to access a resource. By default, the subject claim is populated with the object ID of the user in the directory. | | Authentication context class reference | `acr` | Not applicable | Used only with older policies. | | Trust framework policy | `tfp` | `b2c_1_signupsignin1` | The name of the policy that was used to acquire the ID token. | | Authentication time | `auth_time` | `1438535543` | The time at which a user last entered credentials, represented in epoch time. There's no discrimination between that authentication being a fresh sign-in, a single sign-on (SSO) session, or another sign-in type. The `auth_time` is the last time the application (or user) initiated an authentication attempt against Azure AD B2C. The method used to authenticate isn't differentiated. | For a full list of validations your application should perform, refer to the [Op ## Next steps Learn more about how to [use access tokens](access-tokens.md).- |
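The claim checks described in the table above (audience, issuer, and token-lifetime validation) can be sketched in a few lines. This is an illustrative Python sketch, not the Microsoft identity libraries' API; it assumes the JWT payload has already been decoded and signature-verified, and the claim values are the placeholder GUIDs from the table:

```python
import time

def validate_claims(claims, expected_aud, expected_iss, leeway=60):
    """Minimal sketch of the aud/iss/exp/nbf checks an application should perform.

    Assumes `claims` is the already-decoded and signature-verified JWT payload;
    signature verification is out of scope for this sketch.
    """
    now = time.time()
    if claims["aud"] != expected_aud:
        raise ValueError("audience mismatch: reject the token")
    if claims["iss"] != expected_iss:
        raise ValueError("issuer mismatch: token is not from the expected endpoint")
    if now > claims["exp"] + leeway:
        raise ValueError("token expired")
    if now < claims["nbf"] - leeway:
        raise ValueError("token not yet valid")
    return True

# Placeholder claim values, matching the examples in the table:
claims = {
    "aud": "00001111-aaaa-2222-bbbb-3333cccc4444",
    "iss": "https://contoso.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/",
    "nbf": 1438535543,
    "exp": time.time() + 3600,  # synthetic expiry so the sketch validates "now"
}
assert validate_claims(
    claims,
    expected_aud="00001111-aaaa-2222-bbbb-3333cccc4444",
    expected_iss="https://contoso.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/",
)
```

In production, a JWT library that also validates the signature against the policy's published keys should perform these checks; the sketch only makes the order and failure modes of the table's rules explicit.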
active-directory-b2c | Troubleshoot With Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md | Here's a list of queries you can use to see the logs: | `traces | where timestamp > ago(1d)` | Get all of the logs generated by Azure AD B2C for the last day.| | `traces | where message contains "exception" | where timestamp > ago(2h)`| Get all of the logs with errors from the last two hours.| | `traces | where customDimensions.Tenant == "contoso.onmicrosoft.com" and customDimensions.UserJourney == "b2c_1a_signinandup"` | Get all of the logs generated by Azure AD B2C *contoso.onmicrosoft.com* tenant, and user journey is *b2c_1a_signinandup*. |-| `traces | where customDimensions.CorrelationId == "00000000-0000-0000-0000-000000000000"`| Get all of the logs generated by Azure AD B2C for a correlation ID. Replace the correlation ID with your correlation ID. | +| `traces | where customDimensions.CorrelationId == "aaaa0000-bb11-2222-33cc-444444dddddd"`| Get all of the logs generated by Azure AD B2C for a correlation ID. Replace the correlation ID with your correlation ID. | The entries may be long. Export to CSV for a closer look. |
active-directory-b2c | User Flow Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md | Extension attributes can only be registered on an application object, even thoug 1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **All applications**. 1. Select the `b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.` application.-1. Copy the **Application ID**. Example: `11111111-1111-1111-1111-111111111111`. +1. Copy the **Application ID**. Example: `00001111-aaaa-2222-bbbb-3333cccc4444`. ::: zone-end Extension attributes can only be registered on an application object, even thoug 1. Select **App registrations**, and then select **All applications**. 1. Select the **b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.** application. 1. Copy the following identifiers to your clipboard and save them:- * **Application ID**. Example: `11111111-1111-1111-1111-111111111111`. - * **Object ID**. Example: `22222222-2222-2222-2222-222222222222`. + * **Application ID**. Example: `00001111-aaaa-2222-bbbb-3333cccc4444`. + * **Object ID**. Example: `aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb`. 
## Modify your custom policy To enable custom attributes in your policy, provide **Application ID** and Appli <TechnicalProfiles> <TechnicalProfile Id="AAD-Common"> <Metadata>- <!--Insert b2c-extensions-app application ID here, for example: 11111111-1111-1111-1111-111111111111--> + <!--Insert b2c-extensions-app application ID here, for example: 00001111-aaaa-2222-bbbb-3333cccc4444--> <Item Key="ClientId"></Item>- <!--Insert b2c-extensions-app application ObjectId here, for example: 22222222-2222-2222-2222-222222222222--> + <!--Insert b2c-extensions-app application ObjectId here, for example: aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb--> <Item Key="ApplicationObjectId"></Item> </Metadata> </TechnicalProfile> The following example demonstrates the use of a custom attribute in Azure AD B2C You can use Microsoft Graph to create and manage the custom attributes then set the values for a user. Extension attributes are also called directory or Microsoft Entra extensions. -Custom attributes (directory extensions) in the Microsoft Graph API are named by using the convention `extension_{appId-without-hyphens}_{extensionProperty-name}` where `{appId-without-hyphens}` is the stripped version of the **appId** (called Client ID on the Azure AD B2C portal) for the `b2c-extensions-app` with only characters 0-9 and A-Z. For example, if the **appId** of the `b2c-extensions-app` application is `25883231-668a-43a7-80b2-5685c3f874bc` and the attribute name is `loyaltyId`, then the custom attribute is named `extension_25883231668a43a780b25685c3f874bc_loyaltyId`. +Custom attributes (directory extensions) in the Microsoft Graph API are named by using the convention `extension_{appId-without-hyphens}_{extensionProperty-name}` where `{appId-without-hyphens}` is the stripped version of the **appId** (called Client ID on the Azure AD B2C portal) for the `b2c-extensions-app` with only characters 0-9 and A-Z. 
For example, if the **appId** of the `b2c-extensions-app` application is `11112222-bbbb-3333-cccc-4444dddd5555` and the attribute name is `loyaltyId`, then the custom attribute is named `extension_11112222bbbb3333cccc4444dddd5555_loyaltyId`. Learn how to [manage extension attributes in your Azure AD B2C tenant](microsoft-graph-operations.md#application-extension-directory-extension-properties) using the Microsoft Graph API. |
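The `extension_{appId-without-hyphens}_{extensionProperty-name}` convention is simple to compute. The following is an illustrative Python sketch (not a Microsoft Graph SDK helper), using the placeholder appId from the example above:

```python
def graph_extension_name(app_id: str, attribute_name: str) -> str:
    """Build a Microsoft Graph directory-extension attribute name:
    extension_{appId-without-hyphens}_{extensionProperty-name}."""
    return f"extension_{app_id.replace('-', '')}_{attribute_name}"

# Placeholder b2c-extensions-app appId:
print(graph_extension_name("11112222-bbbb-3333-cccc-4444dddd5555", "loyaltyId"))
# extension_11112222bbbb3333cccc4444dddd5555_loyaltyId
```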
active-directory-b2c | Userinfo Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userinfo-endpoint.md | The user info UserJourney specifies: <Metadata> <!-- Update the Issuer and Audience below --> <!-- Audience is optional, Issuer is required-->- <Item Key="issuer">https://yourtenant.b2clogin.com/11111111-1111-1111-1111-111111111111/v2.0/</Item> - <Item Key="audience">[ "22222222-2222-2222-2222-222222222222", "33333333-3333-3333-3333-333333333333" ]</Item> + <Item Key="issuer">https://yourtenant.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/</Item> + <Item Key="audience">[ "00001111-aaaa-2222-bbbb-3333cccc4444", "11112222-bbbb-3333-cccc-4444dddd5555" ]</Item> <Item Key="client_assertion_type">urn:ietf:params:oauth:client-assertion-type:jwt-bearer</Item> </Metadata> <CryptographicKeys> The user info UserJourney specifies: 1. **issuer** - This value must be identical to the `iss` claim within the access token claim. Tokens issued by Azure AD B2C use an issuer in the format `https://yourtenant.b2clogin.com/your-tenant-id/v2.0/`. Learn more about [token customization](configure-tokens.md). 1. **IdTokenAudience** - Must be identical to the `aud` claim within the access token claim. In Azure AD B2C the `aud` claim is the ID of your relying party application. This value is a collection and supports multiple values using a comma delimiter. - In the following access token, the `iss` claim value is `https://contoso.b2clogin.com/11111111-1111-1111-1111-111111111111/v2.0/`. The `aud` claim value is `22222222-2222-2222-2222-222222222222`. + In the following access token, the `iss` claim value is `https://contoso.b2clogin.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0/`. The `aud` claim value is `00001111-aaaa-2222-bbbb-3333cccc4444`. 
```json { The user info UserJourney specifies: "nbf": 1605545868, "ver": "1.0", "iss": "https://contoso.b2clogin.com/11111111-1111-1111-1111-111111111111/v2.0/",- "sub": "44444444-4444-4444-4444-444444444444", - "aud": "22222222-2222-2222-2222-222222222222", + "sub": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb", + "aud": "00001111-aaaa-2222-bbbb-3333cccc4444", "acr": "b2c_1a_signup_signin", "nonce": "defaultNonce", "iat": 1605545868, The user info UserJourney specifies: "name": "John Smith", "given_name": "John", "family_name": "Smith",- "tid": "11111111-1111-1111-1111-111111111111" + "tid": "aaaabbbb-0000-cccc-1111-dddd2222eeee" } ``` A successful response would look like: ```json {- "objectId": "44444444-4444-4444-4444-444444444444", + "objectId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb", "givenName": "John", "surname": "Smith", "displayName": "John Smith", |
active-directory-b2c | View Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md | Here's the JSON representation of the example activity event shown earlier in th { "id": "B2C_DQO3J_4984536", "category": "Authentication",- "correlationId": "00000000-0000-0000-0000-000000000000", + "correlationId": "ffffffff-eeee-dddd-cccc-bbbbbbbbbbb0", "result": "success", "resultReason": "N/A", "activityDisplayName": "Issue an id_token to the application", Here's the JSON representation of the example activity event shown earlier in th "initiatedBy": { "user": null, "app": {- "appId": "00000000-0000-0000-0000-000000000000", + "appId": "00001111-aaaa-2222-bbbb-3333cccc4444", "displayName": null, "servicePrincipalId": null, "servicePrincipalName": "00000000-0000-0000-0000-000000000000" |
api-center | Check Minimal Api Permissions Dev Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/check-minimal-api-permissions-dev-proxy.md | In the `devproxyrc.json` file, add the following configuration: "https://api.northwind.com/*" ], "apiCenterMinimalPermissionsPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default" Update your `devproxyrc.json` file with a reference to the plain-text reporter: "https://api.northwind.com/*" ], "apiCenterMinimalPermissionsPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default" |
api-center | Configure Environments Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/configure-environments-deployments.md | Here you add a deployment by associating one of your APIs with the environment y 1. In the left menu, under **Assets**, select **APIs**. -1. Select an API, for example, the *Demo Conference API*. +1. Select an API, for example, the *Conference API*. -1. On the **Demo Conference API** page, under **Details**, select **Deployments** > **+ Add deployment**. +1. On the **Conference API** page, under **Details**, select **Deployments** > **+ Add deployment**. 1. In the **Add deployment** page, add the following information. If you previously defined the custom *Line of business* metadata or other metadata assigned to environments, you'll see them at the bottom of the page. Here you add a deployment by associating one of your APIs with the environment y |**Identification**|After you enter the preceding title, Azure API Center generates this identifier, which you can override.| Azure resource name for the deployment.| | **Description** | Optionally enter a description. | Description of the deployment. | | **Environment** | Make a selection from the dropdown, such as *My Testing*, or optionally select **Create new**.| New or existing environment where the API version is deployed. |- | **Definition** | Select or add a definition file for a version of the Demo Conference API. | API definition file. | + | **Definition** | Select or add a definition file for a version of the Conference API. | API definition file. | | **Runtime URL** | Enter a base URL, for example, `https://api.contoso.com`. | Base runtime URL for the API in the environment. | | **Line of business** | If you added this custom metadata, optionally make a selection from the dropdown, such as **IT**. | Custom metadata that identifies the business unit that manages APIs in the environment. | |
api-center | Discover Shadow Apis Dev Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/discover-shadow-apis-dev-proxy.md | In the `devproxyrc.json` file, add the following configuration: "https://jsonplaceholder.typicode.com/*" ], "apiCenterOnboardingPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default", Update your `devproxyrc.json` file with a reference to the plain-text reporter: "https://jsonplaceholder.typicode.com/*" ], "apiCenterOnboardingPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default", The `ApiCenterOnboardingPlugin` can not only detect shadow APIs, but also automa "https://jsonplaceholder.typicode.com/*" ], "apiCenterOnboardingPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default", To automatically generate OpenAPI specs for onboarded APIs, update Dev Proxy con "https://jsonplaceholder.typicode.com/*" ], "apiCenterOnboardingPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default", |
api-center | Find Nonproduction Api Requests Dev Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/find-nonproduction-api-requests-dev-proxy.md | In the `devproxyrc.json` file, add the following configuration: "https://jsonplaceholder.typicode.com/*" ], "apiCenterProductionVersionPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default" Update your `devproxyrc.json` file with a reference to the plain-text reporter: "https://jsonplaceholder.typicode.com/*" ], "apiCenterProductionVersionPlugin": {- "subscriptionId": "00000000-0000-0000-0000-000000000000", + "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e", "resourceGroupName": "demo", "serviceName": "contoso-api-center", "workspaceName": "default" |
api-center | Register Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis.md | In this tutorial, you learn how to use the portal to: * One or more APIs that you want to register in your API center. Here are two examples, with links to their OpenAPI definitions: * [Swagger Petstore API](https://github.com/swagger-api/swagger-petstore/blob/master/src/main/resources/openapi.yaml)- * [Azure Demo Conference API](https://conferenceapi.azurewebsites.net?format=json) + * [Conference API](https://bigconference.azurewebsites.net) * Complete the previous tutorial, [Define custom metadata](add-metadata-properties.md), to define custom metadata for your APIs. When you register (add) an API in your API center, the API registration includes After registering an API, you can add versions and definitions to the API. -The following steps register two sample APIs: Swagger Petstore API and Demo Conference API (see [Prerequisites](#prerequisites)). If you prefer, register APIs of your own. +The following steps register two sample APIs: Swagger Petstore API and Conference API (see [Prerequisites](#prerequisites)). If you prefer, register APIs of your own. 1. In the [portal](https://portal.azure.com), navigate to your API center. The following steps register two sample APIs: Swagger Petstore API and Demo Conf 1. Select **Create**. The API is registered. -1. Repeat the preceding three steps to register another API, such as the Demo Conference API. +1. Repeat the preceding three steps to register another API, such as the Conference API. > [!TIP] > When you register an API in the portal, you can select any of the predefined API types or enter another type of your choice. Here you add a version to one of your APIs: 1. In the portal, navigate to your API center. -1. In the left menu, select **APIs**, and then select an API, for example, *Demo Conference API*. +1. 
In the left menu, select **APIs**, and then select an API, for example, *Swagger Petstore*. -1. On the Demo Conference API page, under **Details**, select **Versions** > **+ Add version**. +1. On the API page, under **Details**, select **Versions** > **+ Add version**. :::image type="content" source="media/register-apis/add-version.png" alt-text="Screenshot of adding an API version in the portal." lightbox="media/register-apis/add-version.png"::: To add an API definition to your version: |**Title**| Enter a title of your choice, such as *v2 Definition*.|Name you choose for the API definition.| |**Identification**|After you enter the preceding title, Azure API Center generates this identifier, which you can override.| Azure resource name for the definition.| | **Description** | Optionally enter a description. | Description of the API definition. |- | **Specification name** | For the Demo Conference API, select **OpenAPI**. | Specification format for the API.| - | **Specification version** | Enter a version identifier of your choice, such as *2.0*. | Specification version. | - |**Document** | Browse to a local definition file for the Demo Conference API, or enter a URL. Example URL: `https://conferenceapi.azurewebsites.net?format=json` | API definition file. | + | **Specification name** | For the Petstore API, select **OpenAPI**. | Specification format for the API.| + | **Specification version** | Enter a version identifier of your choice, such as *3.0*. | Specification version. | + |**Document** | Browse to a local definition file for the Petstore API, or enter a URL. Example URL: `https://raw.githubusercontent.com/swagger-api/swagger-petstore/refs/heads/master/src/main/resources/openapi.yaml` | API definition file. | :::image type="content" source="media/register-apis/add-definition.png" alt-text="Screenshot of adding an API definition in the portal." lightbox="media/register-apis/add-definition.png" ::: |
automation | Enable Vms Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-vms-monitoring-agent.md | Title: Enable Azure Automation Change Tracking for single machine and multiple m description: This article tells how to enable the Change Tracking feature for single machine and multiple machines at scale from the Azure portal. Previously updated : 10/10/2024 Last updated : 10/29/2024 This section provides detailed procedure on how you can enable change tracking o It will initiate the deployment and the notification appears on the top right corner of the screen. :::image type="content" source="media/enable-vms-monitoring-agent/deployment-success-inline.png" alt-text="Screenshot showing the notification of deployment." lightbox="media/enable-vms-monitoring-agent/deployment-success-expanded.png":::-+ > [!NOTE]-> It usually takes up to two to three minutes to successfully onboard and enable the virtual machine(s). After you enable a virtual machine for change tracking, you can make changes to the files, registries, or software for the specific VM. +> - When you enable Change Tracking in the Azure portal using the Azure Monitoring Agent, the process automatically creates a Data Collection Rule (DCR). This rule will appear in the resource group with a name in the format ct-dcr-aaaaaaaaa. After the rule is created, add the required resources. +> - It usually takes up to two to three minutes to successfully onboard and enable the virtual machine(s). After you enable a virtual machine for change tracking, you can make changes to the files, registries, or software for the specific VM. 
#### [Multiple Azure VMs - portal](#tab/multiplevms) This section provides detailed procedure on how you can enable change tracking o :::image type="content" source="media/enable-vms-monitoring-agent/select-change-tracking-multiple-vms-inline.png" alt-text="Screenshot showing how to select multiple virtual machines from the portal." lightbox="media/enable-vms-monitoring-agent/select-change-tracking-multiple-vms-expanded.png"::: > [!NOTE]- > You can select upto 250 virtual machines at a time to enable this feature. + > You can select up to 250 virtual machines at a time to enable this feature. 1. In **Enable Change Tracking** page, select the banner at the top of the page, **Click here to try new change tracking and inventory with Azure Monitoring Agent (AMA) experience**. Using the Deploy if not exist (DINE) policy, you can enable Change tracking with :::image type="content" source="media/enable-vms-monitoring-agent/deployment-confirmation.png" alt-text="Screenshot of deployment notification."::: +> [!NOTE] +> After creating the Data Collection Rule (DCR) using the Azure Monitoring Agent's change tracking schema, ensure that you don't add any Data Sources to this rule. Adding data sources can cause Change Tracking and Inventory to fail. You must only add new Resources in this section. + ## Next steps - For details of working with the feature, see [Manage Change Tracking](../change-tracking/manage-change-tracking-monitoring-agent.md). |
azure-functions | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/troubleshoot.md | -This article provides information on troubleshooting and resolving issues that may occur while attempting to install and configure Start/Stop VMs. For general information, see [Start/Stop VMs overview](overview.md). +This article provides information on troubleshooting and resolving issues that may occur while attempting to install and configure Start/Stop VMs. ## General validation and troubleshooting |
azure-maps | Add Custom Protocol Pmtiles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-custom-protocol-pmtiles.md | By using the `addProtocol` function, which registers a callback triggered before The first step is to add a reference to the protocol. The following example references the `pmtiles` library: ```html- <script src="https://unpkg.com/pmtiles@3.0.5/dist/pmtiles.js"></script> + <script src="https://unpkg.com/pmtiles@3.2.0/dist/pmtiles.js"></script> ``` Next, initialize the MapLibre PMTiles protocol. ```js-//Initialize the plugin. - const protocol = new pmtiles.Protocol(); - atlas.addProtocol("pmtiles", (request) => { - return new Promise((resolve, reject) => { - const callback = (err, data) => { - if (err) { - reject(err); - } else { - resolve({ data }); - } - }; - protocol.tile(request, callback); - }); - }); -``` --## Add PMTiles protocol --To add the PMTiles protocol, hook the data source with the specified protocol URI scheme. The following sample uses the [Overture] building dataset to add building data over the basemap. --```js -const PMTILES_URL = "https://overturemaps-tiles-us-west-2-beta.s3.amazonaws.com/2024-07-22/buildings.pmtiles"; -protocol.add(new pmtiles.PMTiles(PMTILES_URL)); + //Initialize the plugin. + const protocol = new pmtiles.Protocol(); + atlas.addProtocol("pmtiles", protocol.tile); ``` ## Add PMTiles as a map source +The following sample uses the [Overture] building dataset to add building data over the basemap. + PMTiles are added as a map source during the map event. Once added, the specified URI scheme is available to the Azure Maps Web SDK. In the following sample, the PMTiles URL is added as a `VectorTileSource`. ```js+const PMTILES_URL = "https://overturemaps-tiles-us-west-2-beta.s3.amazonaws.com/2024-07-22/buildings.pmtiles"; //Add the source to the map. 
map.sources.add(- new atlas.source.VectorTileSource("pmtiles", { + new atlas.source.VectorTileSource("my_source", { type: "vector", url: `pmtiles://${PMTILES_URL}`, }) The following sample uses the building theme's properties (for example, building ```js //Create a polygon extrusion layer. layer = new atlas.layer.PolygonExtrusionLayer(- "pmtiles", + "my_source", "building", { sourceLayer: "building", |
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | When you receive this error, you have two options. Either move your resources to No, you can't move a resource group to a new subscription. But, you can move all of the resources in the resource group to a resource group in another subscription. Settings such as tags, role assignments, and policies aren't automatically transferred from the original resource group to the destination resource group. You need to reapply these settings to the new resource group. For more information, see [Move resources to new resource group or subscription](./move-support-resources.md). +### Unsupported scenarios ++The platform blocks a scenario where resources from Subscription A are migrated to Subscription B when *at the same time* resources from Subscription B are migrated to Subscription C. This is by design. + ## Next steps For a list of which resources support move, see [Move operation support for resources](move-support-resources.md). |
azure-sql-edge | Disconnected Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/disconnected-deployment.md | Title: Deploy Azure SQL Edge with Docker - Azure SQL Edge description: Learn about deploying Azure SQL Edge with Docker Previously updated : 09/21/2024 Last updated : 10/28/2024 Azure SQL Edge containers aren't supported on the following platforms for produc - Start an Azure SQL Edge instance running as the Developer edition: ```bash- sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge + sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=<password>' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge ``` - Start an Azure SQL Edge instance running as the Premium edition: ```bash- sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Premium' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge + sudo docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=<password>' -e 'MSSQL_PID=Premium' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge ``` > [!IMPORTANT] Azure SQL Edge containers aren't supported on the following platforms for produc | Parameter | Description | | | | | **-e "ACCEPT_EULA=Y"** | Set the **ACCEPT_EULA** variable to any value to confirm your acceptance of the [End-User Licensing Agreement](https://go.microsoft.com/fwlink/?linkid=2139274). Required setting for the SQL Edge image. |- | **-e "MSSQL_SA_PASSWORD=yourStrong(!)Password"** | Specify your own strong password that is at least eight characters and meets the [Azure SQL Edge password requirements](/sql/relational-databases/security/password-policy). Required setting for the SQL Edge image. 
| + | **-e "MSSQL_SA_PASSWORD=\<password\>"** | Specify your own strong password that is at least eight characters and meets the [password requirements](/sql/relational-databases/security/password-policy). Required setting for the SQL Edge image. | | **-p 1433:1433** | Map a TCP port on the host environment (first value) with a TCP port in the container (second value). In this example, SQL Edge is listening on TCP 1433 in the container and this is exposed to the port, 1433, on the host. | | **--name azuresqledge** | Specify a custom name for the container rather than a randomly generated one. If you run more than one container, you can't reuse this same name. | | **-d** | Run the container in the background (daemon) | The **SA** account is a system administrator on the Azure SQL Edge instance that 1. Choose a strong password to use for the SA user. -1. Use `docker exec` to run **sqlcmd** to change the password using Transact-SQL. In the following example, replace the old password, `<YourStrong!Passw0rd>`, and the new password, `<YourNewStrong!Passw0rd>`, with your own password values. +1. Use `docker exec` to run **sqlcmd** to change the password using Transact-SQL. In the following example, replace the old password, `<old-password>`, and the new password, `<new-password>`, with your own password values. ```bash sudo docker exec -it azuresqledge /opt/mssql-tools/bin/sqlcmd \- -S localhost -U SA -P "<YourStrong@Passw0rd>" \ - -Q 'ALTER LOGIN SA WITH PASSWORD="<YourNewStrong@Passw0rd>"' + -S localhost -U SA -P "<old-password>" \ + -Q 'ALTER LOGIN SA WITH PASSWORD="<new-password>"' ``` ## Connect to Azure SQL Edge The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside 1. Once inside the container, connect locally with **sqlcmd**. **sqlcmd** isn't in the path by default, so you have to specify the full path. 
```bash- /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<YourNewStrong@Passw0rd>" + /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<password>" ``` > [!TIP] |
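The `<password>` placeholders above must satisfy the strong-password policy linked in the parameter table. As a rough sketch — treat the rules below (at least eight characters, plus characters from at least three of the four character categories) as an assumption to verify against the linked requirements, since the exact policy is defined by SQL Server:

```python
def looks_like_strong_password(pw):
    # Assumed rules: minimum 8 characters, and characters from at
    # least three of: uppercase, lowercase, digits, symbols.
    if len(pw) < 8:
        return False
    categories = [
        any(c.isupper() for c in pw),
        any(c.islower() for c in pw),
        any(c.isdigit() for c in pw),
        any(not c.isalnum() for c in pw),
    ]
    return sum(categories) >= 3

print(looks_like_strong_password("Str0ng!Passw0rd"))  # True
print(looks_like_strong_password("weakpass"))         # False
```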
azure-vmware | Ecosystem App Monitoring Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-app-monitoring-solutions.md | Microsoft recommends [Application Insights](/azure/azure-monitor/app/app-insight Learn how modern monitoring with Azure Monitor can transform your business by reviewing the [product overview, features, getting started guide and more](https://azure.microsoft.com/services/monitor). +### Azure Resource Health for Azure VMware Solution Private Cloud (Public preview) ++In this article, you learn how Azure Resource Health helps you diagnose and get support for service problems that affect your Private Cloud resources. Azure Resource Health reports on the current and past health of your Private Cloud infrastructure resources and provides you with a personalized dashboard of their health. It also reports on historical events and can identify every time a service was unavailable and whether the Service Level Agreement (SLA) was violated. ++#### Preview enablement ++To use the feature preview, register for the ***"Microsoft.AVS/ResourceHealth"*** preview flag under _Preview Features_ of Azure VMware Solution in the Azure portal. Once registered, all the preconfigured alerts related to host replacement, vCenter, and other critical alarms start to surface in the Resource Health section of the Azure VMware Solution (AVS) user interface (UI). ++#### Benefits of enabling Resource Health ++- Enabling Resource Health adds significant value to your monitoring capabilities. You get notified about unplanned maintenance that took place in your private cloud infrastructure. ++- Resource Health gives you a personalized dashboard of the health of your resources.
Resource Health shows all the times that your resources were unavailable, which makes it easy for you to check whether the SLA was violated. ++- For the public preview, a group of critical alerts is enabled that notifies you about host replacements, critical storage alarms, and the network health of your private cloud. ++- The alerts include all the necessary information for better reporting and triage. ++- Resource Health uses Azure action groups, which allow you to configure Email/SMS/Webhook/ITSM notifications and get notified via the communication method of your choice. ++- Once enabled, the health of your private cloud infrastructure reflects one of the following statuses: ++ - Available ++ - Unavailable ++ - Unknown ++ - Degraded ++#### Available ++Available means that no events were detected that affect the health of the resource. In cases where the resource recovered from unplanned downtime during the last 24 hours, you see a "Recently resolved" notification. ++#### Unavailable ++Unavailable means that the service detected an ongoing platform or nonplatform event that affects the health of the resource. ++#### Unknown ++Unknown means that Resource Health hasn't received information about the resource for more than 10 minutes. You might see this status under two different conditions: ++- Your subscription isn't enabled for Resource Health metrics, and you need to register for the preview. ++- If the resource is running as expected, its status changes to Available after a few minutes. If you experience problems with the resource, the Unknown health status might mean that an event in the private cloud is affecting it. ++#### Degraded ++Degraded means that Resource Health detected a loss in performance in one or more private cloud resources, although they're still available for use. Different resources have their own criteria for when they report that they're degraded.
++#### Preconfigured alarms enabled in Azure Resource Health ++|Alert Name|Remediation Mode| +| -- | -- | +|Physical Disk Health Alarm|System Remediation| +|System Board Health Alarm|System Remediation| +|Memory Health Alarm|System Remediation| +|Storage Health Alarm|System Remediation| +|Temperature Health Alarm|System Remediation| +|Host Connection State Alarm|System Remediation| +|High Availability (HA) Host Status|System Remediation| +|Network Connectivity Lost Alarm|System Remediation| +|Virtual Storage (vSAN) Host Disk Error Alarm|System Remediation| +|Voltage Health Alarm|System Remediation| +|Processor Health Alarm|System Remediation| +|Fan Health Alarm|System Remediation| +|High pNIC error rate detected|System Remediation| +|iDRAC critical alerts if there are hardware faults (CPU/DIMM/PCI bus/Voltage issues)|System Remediation| +|vSphere HA restarted a virtual machine|System Remediation| +|Virtual Storage (vSAN) High Disk Utilization|Customer Intervention Required| +|Replacement Start and Stop Notification|System Remediation| +|Repair Service notification to customers (Host reboot and Restart of Management services)|System Remediation| +|Notification to customer when a Virtual Machine is configured to use an external device that prevents a maintenance operation|Customer Intervention Required| +|Customer notification when CD-ROM is mounted on the Virtual Machine and its ISO image isn't accessible and blocks maintenance operation|Customer Intervention Required| +|Notification to customer when an external Datastore mounted becomes inaccessible and will block maintenance operations|Customer Intervention Required| +|Notification to customer when connected network adapter becomes inaccessible and blocks any maintenance operations|Customer Intervention Required| +|VMware Network (NSX-T) alarms (Customer notification about License expiration)|Customer Intervention Required| ++## Next steps ++Now that you have configured an alert rule for your Azure
VMware Solution private cloud, you can learn more about: ++- [Azure Resource Health](/azure/service-health/resource-health-overview) ++- [Azure Monitor](/azure/azure-monitor/overview) ++- [Azure Action Groups](/azure/azure-monitor/alerts/action-groups) ++You can also continue with one of the other Azure VMware Solution how-to [guides](/azure/azure-vmware/deploy-azure-vmware-solution?tabs=azure-portal). + ## Third-party solutions-Our application performance monitoring and troubleshooting partners have industry-leading solutions in VMware-based environments that assure the availability, reliability, and responsiveness of applications and services. Our customers adopt many of the solutions integrated with VMware NSX-T Data Center for their on-premises deployments. As one of our key principles, we want to enable them to continue to use their investments and VMware solutions running on Azure. Many of the Independent Software Vendors (ISV) already validated their solutions with Azure VMware Solution. +Our application performance monitoring and troubleshooting partners have industry-leading solutions in VMware-based environments that assure the availability, reliability, and responsiveness of applications and services. You can adopt many of the solutions integrated with VMware NSX-T Data Center for your on-premises deployments. As one of our key principles, we want to enable you to continue to use your investments and VMware solutions running on Azure. Many of the Independent Software Vendors (ISV) already validated their solutions with Azure VMware Solution. You can find more information about these solutions here: |
backup | Sap Hana Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md | Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 09/30/2024 Last updated : 10/30/2024 Azure Backup supports the backup of SAP HANA databases to Azure. This article su | -- | | | | **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) | | **Regions** | **Americas** – Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** – Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** – West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West, Sweden Central, Sweden South <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |-| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, SP4, and SP5 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, 8.6, 8.8, 9.0, and 9.2. | | +| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, SP4, SP5, and SP6 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, and 9.4.
| | | **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS 04, SPS 05 Rev <= 59, SPS 06 (validated for encryption enabled scenarios as well), and SPS 07. | | | **Encryption** | SSLEnforce, HANA data encryption | | | **HANA Instances** | A single SAP HANA instance on a single Azure VM – scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. | |
bastion | Bastion Connect Vm Ssh Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md | This article shows you how to securely and seamlessly create an SSH connection t Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md) article. -When connecting to a Linux virtual machine using SSH, you can use both username/password and SSH keys for authentication. The SSH private key must be in a format that begins with `"--BEGIN RSA PRIVATE KEY--"` and ends with `"--END RSA PRIVATE KEY--"`. +When connecting to a Linux virtual machine using SSH, you can use both username/password and SSH keys for authentication. ## Prerequisites |
communication-services | Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md | Audio and video communication is ephemerally processed by the service and no cal ### Call Recording -Call recordings are stored temporarily in the same geography that was selected for ```Data Location``` during resource creation for 48 hours. After this the recording is deleted and you are responsible for storing the recording in a secure and compliant location. +Call recordings are stored temporarily in the same geography that was selected for ```Data Location``` during resource creation for 24 hours. After this the recording is deleted and you are responsible for storing the recording in a secure and compliant location. ### Email Email message content is ephemerally stored for processing in the resource's ```Data Location``` specified by you during resource provisioning. Email message delivery logs are available in Azure Monitor Logs, where you are in control to define the workspace to store logs. Domain sender usernames (or MailFrom) values are stored in the resource's ```Data Location``` until explicitly deleted. Recipient's email addresses that result in hard bounced messages are temporarily retained for spam and abuse prevention and detection. |
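The 24-hour window above implies that any export job you run must copy each recording before its deletion deadline. A minimal sketch of that check — the function name and datetime handling are illustrative, not part of the Communication Services SDK:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=24)  # per the retention period stated above

def export_deadline(recorded_at):
    """Latest time by which a recording must be copied out before deletion."""
    return recorded_at + RETENTION

recorded = datetime(2024, 10, 1, 9, 0, tzinfo=timezone.utc)
print(export_deadline(recorded))  # 2024-10-02 09:00:00+00:00
```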
communication-services | Setup Title Subtitle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/setup-title-subtitle.md | Developers now have the capability to customize the title and subtitle of a call For instance, in a customer support scenario, the title could display the issue being addressed, while the subtitle could show the customer's name or ticket number. + Additionally, if tracking time spent in various segments of the call is crucial, the subtitle could dynamically update to display the elapsed call duration, helping to manage the meeting or session effectively. ## Prerequisites |
communication-services | Theming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md | zone_pivot_groups: acs-plat-web-ios-android # Theme the UI Library in an application + The Azure Communication Services UI Library is a set of components, icons, and composites that make it easier for you to build high-quality user interfaces for your projects. The UI Library uses components and icons from [Fluent UI](https://developer.microsoft.com/fluentui), the cross-platform design system that Microsoft uses. As a result, the components are built with usability, accessibility, and localization in mind. In this article, you learn how to change the theme for UI Library components as you configure an application. |
communication-services | Diagnostic Options Tag | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/voice-video-calling/diagnostic-options-tag.md | Each value can have a maximum length of 64 characters, with support for only let Here is an example of how to use the **Diagnostic Options** parameters from within your WebJS application: ```js-this.callClient = new CallClient({ +const callClient = new CallClient({ diagnostics: { appName: 'contoso-healthcare-calling-services', appVersion: '2.1', Once you add the values to your client SDK, they're populated and appear in your **contoso-healthcare-calling-services**/**2.1** azsdk-js-communication-calling/1.27.1-rc.10 (javascript_calling_sdk;**#clientTag:contoso_virtual_visits**, **#clientTag:participant0001**). Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36 Edg/129.0.0.0 > [!NOTE]-> If you doesn't set a value of `appName` and `appVersion` from within the client API, the default value of default/0.0.0 will appear within the `UserAgent` field +> If you don't set a value of `appName` and `appVersion` from within the client API, the default value of default/0.0.0 will appear within the `UserAgent` field. ## Next steps - Learn more about Azure Communication Services Call Diagnostic Center [here](../../concepts/voice-video-calling/call-diagnostics.md) |
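The `UserAgent` behavior described in the note can be mimicked with a small sketch. The helper below is illustrative only — the real string is assembled by the Calling SDK — but it captures the documented fallback: unset `appName`/`appVersion` values become `default/0.0.0`.

```python
def user_agent_prefix(app_name=None, app_version=None):
    # Per the note above: missing values fall back to "default/0.0.0".
    return f"{app_name or 'default'}/{app_version or '0.0.0'}"

print(user_agent_prefix("contoso-healthcare-calling-services", "2.1"))
# contoso-healthcare-calling-services/2.1
print(user_agent_prefix())  # default/0.0.0
```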
container-apps | Dapr Component Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-resiliency.md | Title: Dapr component resiliency (preview) + Title: Dapr component resiliency description: Learn how to make your Dapr components resilient in Azure Container Apps. -# Dapr component resiliency (preview) +# Dapr component resiliency Resiliency policies proactively prevent, detect, and recover from your container app failures. In this article, you learn how to apply resiliency policies for applications that use Dapr to integrate with different cloud services, like state stores, pub/sub message brokers, secret stores, and more. |
container-apps | Ingress How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md | type: Microsoft.App/containerApps ::: zone pivot="azure-portal" -This feature isn't supported in the Azure portal. ++1. Expand the **Additional TCP ports** section within the Ingress blade. +2. In the _Target port_ field, add the additional TCP ports that your application accepts traffic on. If _Exposed port_ is left empty, it defaults to the value set in _Target port_. +3. Change the _Ingress traffic_ field as needed. This field configures where ingress traffic is limited to for each port. +4. When finished, select **Save**. ::: zone-end |
container-apps | Ingress Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md | With TCP ingress enabled, your container app: In addition to the main HTTP/TCP port for your container apps, you might expose additional TCP ports to enable applications that accept TCP connections on multiple ports. > [!NOTE]-> This feature requires using the latest preview version of the container apps CLI extension. +> To use this feature, you must have the container apps CLI extension. Run `az extension add -n containerapp` in order to install the latest version of the container apps CLI extension. The following apply to additional TCP ports: - Additional TCP ports can only be external if the app itself is set as external and the container app is using a custom VNet. |
container-apps | Service Discovery Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-discovery-resiliency.md | Title: Service discovery resiliency (preview) + Title: Service discovery resiliency description: Learn how to apply container app to container app resiliency when using the application's service name in Azure Container Apps. -# Service discovery resiliency (preview) +# Service discovery resiliency With Azure Container Apps resiliency, you can proactively prevent, detect, and recover from service request failures using simple resiliency policies. In this article, you learn how to configure Azure Container Apps resiliency policies when initiating requests using Azure Container Apps service discovery. az containerapp resiliency delete --group MyResourceGroup --name MyResiliency -- # [Azure portal](#tab/portal) -Navigate into your container app in the Azure portal. In the left side menu under **Settings**, select **Resiliency (preview)** to open the resiliency pane. +Navigate into your container app in the Azure portal. In the left side menu under **Settings**, select **Resiliency** to open the resiliency pane. :::image type="content" source="media/service-discovery-resiliency/resiliency-pane.png" alt-text="Screenshot demonstrating how to access the service discovery resiliency pane."::: |
cost-management-billing | Direct Ea Azure Usage Charges Invoices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md | Title: View your Azure usage summary details and download reports for EA enrollm description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 06/24/2024 Last updated : 10/29/2024 In the current refund process, totals in the purchase month overage, total charg >[!IMPORTANT]-> When there are adjustment charges, back-dated credits, or discounts for the account that result in an invoice getting rebilled, it resets the refund behavior. Refunds are shown in the rebilled invoice for the rebilled period. +> - When there are adjustment charges, back-dated credits, or discounts for the account that result in an invoice getting rebilled, it resets the refund behavior. Refunds are shown in the rebilled invoice for the rebilled period. +> - If a purchase and refund occur in the same month before they get billed, both the positive and negative adjustments are shown on the next invoice. #### Common refunded overage credits questions |
cost-management-billing | Microsoft Customer Agreement Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md | When you or your organization signed the Microsoft Customer Agreement, a billing ## Update your PO and tax ID number -[Update your PO number](../manage/change-azure-account-profile.yml#update-a-po-number) in your billing profile and, after moving your subscriptions, ensure you [update your tax ID](../manage/change-azure-account-profile.yml#update-your-tax-id). The tax ID is used for tax exemption calculations and appears on your invoice. [Learn more about how to update your billing account settings](/microsoft-store/update-microsoft-store-for-business-account-settings). +[Update your PO number](../manage/change-azure-account-profile.yml#update-a-po-number) in your billing profile and, after moving your subscriptions, ensure you [update your tax ID](../manage/change-azure-account-profile.yml#update-your-tax-id). The tax ID is used for tax exemption calculations and appears on your invoice. [Learn more about how to update your billing account settings](../understand/mosp-new-customer-experience.md). ## Confirm payment details |
cost-management-billing | Tutorial Azure Hybrid Benefits Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/tutorial-azure-hybrid-benefits-sql.md | Title: Tutorial - Optimize centrally managed Azure Hybrid Benefit for SQL Server description: This tutorial guides you through proactively assigning SQL Server licenses in Azure to manage and optimize Azure Hybrid Benefit. Previously updated : 10/12/2023 Last updated : 10/29/2024 +#customer intent: As a billing administrator, I want to learn how to assign SQL Server licenses in Azure using centrally managed Azure Hybrid Benefit. # Tutorial: Optimize centrally managed Azure Hybrid Benefit for SQL Server -This tutorial guides you through proactively assigning SQL Server licenses in Azure to optimize [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) as you centrally manage it. Optimizing your benefit reduces the costs of running Azure SQL. +In this tutorial, you learn how to proactively assign SQL Server licenses in Azure to optimize [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) as you centrally manage it. Optimizing your benefit reduces the costs of running Azure SQL. In this tutorial, you learn how to: Have read and understand the [What is centrally managed Azure Hybrid Benefit?](o > [!NOTE] > Managing Azure Hybrid Benefit centrally at a scope-level is limited to enterprise customers and customers buying directly from Azure.com with a Microsoft Customer Agreement. -Verify that your self-installed virtual machines running SQL Server in Azure are registered before you start to use the new experience. Doing so ensures that Azure resources that are running SQL Server are visible to you and Azure. 
For more information about registering SQL VMs in Azure, see [Register SQL Server VM with SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm) and [Register multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk). +Verify that your self-installed virtual machines running SQL Server in Azure are registered before you start to use the new experience. Doing so ensures that Azure resources that are running SQL Server are visible to you and Azure. For more information about registering SQL VMs in Azure, see: ++- [Register SQL Server VM with SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm) +- [Register multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk) ## Gather license usage and availability details Your software procurement or software asset management department is likely to h ## Buy more licenses if needed -After reviewing the information gathered, if you determine that the number of SQL Server licenses available is insufficient to cover planned Azure SQL usage, then talk to your procurement department to buy more SQL Server core licenses with Software Assurance (or subscription licenses). +After reviewing the information gathered, you might determine that the number of SQL Server licenses available is insufficient to cover planned Azure SQL usage. If so, talk to your procurement department to buy more SQL Server core licenses with Software Assurance (or subscription licenses). Buying SQL Server licenses and applying Azure Hybrid Benefit is less expensive than paying for SQL Server by the hour in Azure. By purchasing enough licenses to cover all planned Azure SQL usage, your organization maximizes cost savings from the benefit. 
Buying SQL Server licenses and applying Azure Hybrid Benefit is less expensive t ## Monitor usage and adjust 1. Navigate to **Cost Management + Billing** > **Reservations + Hybrid Benefits**.-1. A table is shown that includes the Azure Hybrid Benefit licenses assignments that you've made and the utilization percentage of each one. +1. A table is shown that includes the Azure Hybrid Benefit license assignments that you made and the utilization percentage of each one. 1. If any of the utilization percentages are 100%, then your organization is paying hourly rates for some SQL Server resources. Engage with other groups in your organization again to confirm whether current usage levels are temporary or if they're expected to continue. If the latter, your organization should consider purchasing more licenses and assigning them to Azure to reduce cost. 1. If utilization approaches 100%, but doesn't exceed it, determine whether usage is expected to rise in the near term. If so, you can proactively acquire and assign more licenses. The preceding section discusses ongoing monitoring. We also recommend that you e ### License assignment review date -After you assign a license and set a review date, the license assignment automatically expires 90 days after the review date. The license assignment becomes inactive and no longer applies 90 days after expiration. +After you assign a license and set a review date, the license assignment automatically becomes inactive and expires 90 days after the review date. Microsoft sends email notifications: -- 90 days before expiration-- 30 days before expiration-- Seven days before expiration+- 90 days before the review date +- 30 days before the review date +- Seven days before the review date Before the license assignment expires, you can set the review date to a future date so that you continue to receive the benefit. When the license assignment expires, you're charged with pay-as-you-go prices.
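The timeline described above — reminder emails at 90, 30, and 7 days before the review date, and automatic expiration 90 days after it — can be sketched as follows. The helper and date values are illustrative only:

```python
from datetime import date, timedelta

def review_timeline(review_date):
    """Key dates for a license assignment, per the schedule above."""
    return {
        # Reminder emails go out 90, 30, and 7 days before the review date.
        "reminders": [review_date - timedelta(days=d) for d in (90, 30, 7)],
        # The assignment expires 90 days after the review date.
        "expires": review_date + timedelta(days=90),
    }

timeline = review_timeline(date(2025, 1, 1))
print(timeline["expires"])        # 2025-04-01
print(timeline["reminders"][-1])  # 2024-12-25
```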
To change the review date, use the following steps: Your procurement department informs you that you can centrally manage Azure Hybr You locate the new Azure Hybrid Benefit experience in the Cost Management + Billing area of the Azure portal. -After you've read the preceding instructions in the article, you understand that: +After you read the preceding instructions in the article, you understand that: - Contoso needs to register SQL Server VMs before taking other actions. - The ideal way to use the new capability is to assign licenses proactively to cover expected usage. After you've read the preceding instructions in the article, you understand that Then, do the following steps. 1. Use the preceding instructions to make sure self-installed SQL VMs are registered. They include talking to subscription owners to complete the registration for the subscriptions where you don't have sufficient permissions.-1. You review Azure resource usage data from recent months and you talk to others in Contoso. You determine that 2000 SQL Server Enterprise Edition and 750 SQL Server Standard Edition core licenses, or 8750 normalized cores, are needed to cover expected Azure SQL usage for the next year. Expected usage also includes migrating workloads (1500 SQL Server Enterprise Edition + 750 SQL Server Standard Edition = 6750 normalized) and net new Azure SQL workloads (another 500 SQL Server Enterprise Edition or 2000 normalized cores). -1. Next, confirm with your with procurement team that the needed licenses are already available or that they're planned to get purchased. The confirmation ensures that the licenses are available to assign to Azure. +1. You review Azure resource usage data from recent months and you talk to others in Contoso. You determine that 2000 SQL Server Enterprise Edition and 750 SQL Server Standard Edition core licenses, or 8,750 normalized cores, are needed to cover expected Azure SQL usage for the next year. 
Expected usage also includes migrating workloads (1500 SQL Server Enterprise Edition + 750 SQL Server Standard Edition = 6750 normalized) and net new Azure SQL workloads (another 500 SQL Server Enterprise Edition or 2000 normalized cores). +1. Next, confirm with your procurement team that the required licenses are available, or that they're planned to be purchased. The confirmation ensures that the licenses are available to assign to Azure. - Licenses you have in use on premises can be considered available to assign to Azure if the associated workloads are being migrated to Azure. As mentioned previously, Azure Hybrid Benefit allows dual use for up to 180 days.- - You determine that there are 1800 SQL Server Enterprise Edition licenses and 2000 SQL Server Standard Edition licenses available to assign to Azure. The available licenses equal 9200 normalized cores. That value is a little more than the 8750 needed (2000 x 4 + 750 = 8750). -1. Then, you assign the 1800 SQL Server Enterprise Edition and 2000 SQL Server Standard Edition to Azure. That action results in 9200 normalized cores that the system can apply to Azure SQL resources as they run each hour. Assigning more licenses than are required now provides a buffer if usage grows faster than you expect. + - You determine that there are 1800 SQL Server Enterprise Edition licenses and 2000 SQL Server Standard Edition licenses available to assign to Azure. The available licenses equal 9,200 normalized cores. That value is a little more than the 8,750 needed (2000 x 4 + 750 = 8,750). +1. Then, you assign the 1800 SQL Server Enterprise Edition and 2000 SQL Server Standard Edition to Azure. That action results in 9,200 normalized cores that the system can apply to Azure SQL resources as they run each hour. Assigning more licenses than are required now provides a buffer if usage grows faster than you expect. Afterward, you monitor assigned license usage periodically, ideally monthly.
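The arithmetic in the worked example above implies a fixed conversion: each Enterprise Edition core license counts as 4 normalized cores and each Standard Edition core license as 1. A minimal sketch of that calculation, using the ratio implied by the example's figures:

```python
def normalized_cores(enterprise_cores, standard_cores):
    # Ratio implied by the worked example above:
    # Enterprise Edition = 4 normalized cores per core license,
    # Standard Edition   = 1 normalized core per core license.
    return enterprise_cores * 4 + standard_cores

print(normalized_cores(2000, 750))   # 8750  (needed for planned usage)
print(normalized_cores(1800, 2000))  # 9200  (available to assign)
```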
After 10 months, usage approaches 95%, indicating faster Azure SQL usage growth than you expected. You talk to your procurement team to get more licenses so that you can assign them. Lastly, you adopt an annual license review schedule. In the review process, you: - Update license assignments. - Monitor over time. -## Next steps +## Related content - Learn about how to [transition to centrally managed Azure Hybrid Benefit](transition-existing.md). - Review the [Centrally managed Azure Hybrid Benefit FAQ](faq-azure-hybrid-benefit-scope.yml). |
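The scenario above repeatedly converts Enterprise and Standard core counts into normalized cores (per the article, one Enterprise Edition core counts as 4 normalized cores, one Standard Edition core as 1). That arithmetic can be sanity-checked with a short sketch; the helper name is illustrative, not part of the article:

```python
# Sketch of the normalized-core arithmetic from the Azure Hybrid Benefit scenario.
# Assumption (stated in the article): 1 Enterprise core = 4 normalized cores,
# 1 Standard core = 1 normalized core.
def normalized_cores(enterprise_cores: int, standard_cores: int) -> int:
    return enterprise_cores * 4 + standard_cores

needed = normalized_cores(2000, 750)      # expected usage for the next year
available = normalized_cores(1800, 2000)  # licenses available to assign
print(needed, available)  # 8750 9200
```

Assigning 9,200 normalized cores against an expected 8,750 gives the buffer the scenario describes.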
data-factory | Connector Office 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md | -# Copy and transform data from Microsoft 365 (Office 365) into Azure using Azure Data Factory or Synapse Analytics +# Copy from Microsoft 365 (Office 365) into Azure using Azure Data Factory or Synapse Analytics [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] Azure Data Factory and Synapse Analytics pipelines integrate with [Microsoft Graph data connect](/graph/data-connect-concept-overview), allowing you to bring the rich organizational data in your Microsoft 365 (Office 365) tenant into Azure in a scalable way and build analytics applications and extract insights based on these valuable data assets. Integration with Privileged Access Management provides secured access control for the valuable curated data in Microsoft 365 (Office 365). Refer to [this link](/graph/data-connect-concept-overview) for an overview of Microsoft Graph data connect. To copy data from Microsoft 365 (Office 365), the following properties are suppo ] ``` -## Transform data with the Microsoft 365 connector --Microsoft 365 datasets can be used as a source with mapping data flows. The data flow will transform the data by flattening the dataset automatically. This allows users to concentrate on leveraging the flattened dataset to accelerate their analytics scenarios. --### Mapping data flow properties --To create a mapping data flow using the Microsoft 365 connector as a source, complete the following steps: --1. In ADF Studio, go to the **Data flows** section of the **Author** hub, select the **…** button to drop down the **Data flow actions** menu, and select the **New data flow** item. Turn on debug mode by using the **Data flow debug** button in the top bar of data flow canvas. 
-- :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-data-flow-debug.png" alt-text="Screenshot of the data flow debug button in mapping data flow."::: --2. In the mapping data flow editor, select **Add Source**. -- :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-add-source.png" alt-text="Screenshot of add source in mapping data flow."::: --3. On the tab **Source settings**, select **Inline** in the **Source type** property, **Microsoft 365 (Office 365)** in the **Inline dataset type**, and the Microsoft 365 linked service that you have created earlier. -- :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-select-dataset.png" alt-text="Screenshot of the select dataset option in source settings of mapping data flow source."::: --4. On the tab **Source options** select the **Table name** of the Microsoft 365 table that you would like to transform. Also select the **Auto flatten** option to decide if you would like data flow to auto flatten the source dataset. -- :::image type="content" source="media/connector-office-365/connector-office-365-mapping-data-flow-source-options.png" alt-text="Screenshot of the source options of mapping data flow source."::: --5. For the tabs **Projection**, **Optimize** and **Inspect**, please follow [mapping data flow](concepts-data-flow-overview.md). --6. On the tab **Data preview** click on the **Refresh** button to fetch a sample dataset for validation. - ## Related content For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats). |
dev-box | Concept Dev Box Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-network-requirements.md | The following table is the list of FQDNs and endpoints your dev boxes need to ac |Address |Protocol |Outbound port |Purpose |Service tag| ||||||-|login.microsoftonline.com |TCP |443 |Authentication to Microsoft Online Services | +|login.microsoftonline.com |TCP |443 |Authentication to Microsoft Online Services | AzureActiveDirectory | |*.wvd.microsoft.com |TCP |443 |Service traffic |WindowsVirtualDesktop | |*.prod.warm.ingest.monitor.core.windows.net |TCP |443 |Agent traffic [Diagnostic output](/azure/virtual-desktop/diagnostics-log-analytics) |AzureMonitor | |catalogartifact.azureedge.net |TCP |443 |Azure Marketplace |AzureFrontDoor.Frontend| |
devtest-labs | How To Move Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md | -# Move DevTest Labs and schedules to another region +# Move DevTest Labs and Schedules -You can move DevTest Labs and their associated schedules to another region. To move a lab, create a copy of an existing lab in another region. When you've moved your lab, and you have a virtual machine (VM) in the target region, you can move your lab schedules. +You can move DevTest Labs and their associated schedules to another region or resource group. You can move resource groups through the Azure portal. To move a lab, create a copy of an existing lab in another region. When you've moved your lab, and you have a virtual machine (VM) in the target region, you can move your lab schedules. In this article, you learn how to: > [!div class="checklist"] > >+> - Move resources to different resource groups. > - Export an Azure Resource Manager (ARM) template of your lab. > - Modify the template by adding or updating the target region and other parameters. > - Deploy the template to create the new lab in the target region. In this article, you learn how to: ## Move a lab -The following section describes how to create and customize an ARM template to move a lab from one region to another. +The following section describes how to move resources to a different resource group and create and customize an ARM template to move a lab from one region to another. You can move a schedule without moving a lab, if you have a VM in the target region. If you want to move a schedule without moving a lab, see [Move a schedule](#move-a-schedule). -### Prepare to move a lab +### Move Resource Groups using the Azure portal +Moving resources between resource groups in different locations is now supported in DevTest Labs. You can move any resource from one group to another within the same subscription.
++To begin, select the resource you wish to move. On the resource's **Overview** page, you'll find the current **Resource Group** displayed at the top. Next to the resource group name, you'll see the word `(move)` in parentheses. ++Click the hyperlinked `move` text, which directs you to a new page where you can relocate the resource to any other resource group within the same subscription. Note that moving the resource won't change its location, even if the destination resource group is in a different location. If you're not moving resources through the Azure portal, or if you're transferring to a resource group in a different subscription, alternative methods using ARM are outlined below. ++### Move Labs to a Different Region When you move a lab, there are some steps you must take to prepare for the move. You need to: Use the following steps to export and redeploy your schedule in another Azure re ## Discard or clean up -After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#prepare-to-move-a-lab) and [Move](#deploy-the-template-to-move-the-lab) sections of this article. +After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#move-labs-to-a-different-region) and [Move](#deploy-the-template-to-move-the-lab) sections of this article. To commit the changes and complete the move, you must delete the original lab. |
event-grid | Handler Event Grid Namespace Topic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-event-grid-namespace-topic.md | Now, you're ready to create an event subscription to the system topic for the so 1. On the **Create Event Subscription** page, follow these steps: 1. For **Name**, enter the name for an event subscription. 1. For **Event Schema**, select the event schema as **Cloud Events Schema v1.0**. It's the only schema type that the Event Grid Namespace Topic destination supports.- 1. For **Filter to Event Types**, select types of events you want to subscribe too. + 1. For **Filter to Event Types**, select types of events you want to subscribe to. 1. For **Endpoint type**, select **Event Grid Namespace Topic**. 1. Select **Configure an endpoint**. |
event-hubs | Schema Registry Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-concepts.md | To learn more about using Avro schema format with Event Hubs Schema Registry, se - [How to use schema registry with Kafka and Avro](schema-registry-kafka-java-send-receive-quickstart.md) - [How to use Schema registry with Event Hubs .NET SDK (AMQP) and Avro.](schema-registry-dotnet-send-receive-quickstart.md) -#### JSON Schema (Preview) +#### JSON Schema [JSON Schema](https://json-schema.org/) is a standardized way of defining the structure and data types of the events. JSON Schema enables the confident and reliable use of the JSON data format in event streaming. To learn more about using JSON schema format with Event Hubs Schema Registry, see: |
event-hubs | Transport Layer Security Audit Minimum Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-audit-minimum-version.md | To create a policy with an audit effect for the minimum TLS version with the Azu }, { "not": {- "field": " Microsoft.EventHub/namespaces/minimumTlsVersion", + "field": "Microsoft.EventHub/namespaces/minimumTlsVersion", "equals": "1.2" } } See the following documentation for more information. - [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md) - [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)-- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)+- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md) |
event-hubs | Transport Layer Security Enforce Minimum Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-enforce-minimum-version.md | Azure Event Hubs supports choosing a specific TLS version for namespaces. Curren Azure Event Hubs namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Hubs namespace to require that clients send and receive data with a newer version of TLS. If an Event Hubs namespace requires a minimum version of TLS, then any requests made with an older version will fail. > [!WARNING]-> As of 31 October 2024, TLS 1.0 and TLS 1.1 will no longer be supported on Azure. [TLS 1.0 and TLS 1.1 end of support announcement](https://azure.microsoft.com/updates/azure-support-tls-will-end-by-31-october-2024-2/) The minimum TLS version will be 1.2 for all Event Hubs deployments. +> As of 28 February 2025, TLS 1.0 and TLS 1.1 will no longer be supported on Azure Event Hubs. The minimum TLS version will be 1.2 for all Event Hubs deployments. > [!IMPORTANT] > On 31 October 2024, TLS 1.3 will be enabled for AMQP traffic. TLS 1.3 is already enabled for Kafka and HTTPS traffic. Java clients may have a problem with TLS 1.3 due to a dependency on an older version of Proton-J. For more details, read [Java client changes to support TLS 1.3 with Azure Service Bus and Azure Event Hubs](https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/java-client-changes-to-support-tls-1-3-with-azure-service-bus/ba-p/4089355) |
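On the client side, the namespace's minimum-TLS enforcement described above can be mirrored by refusing older protocol versions during the handshake. A minimal sketch using Python's standard `ssl` module (a generic TLS configuration, not an Event Hubs SDK API):

```python
import ssl

# Build a client-side TLS context that refuses TLS 1.0/1.1 handshakes,
# matching a namespace configured with a minimum TLS version of 1.2.
def make_tls12_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject anything older
    return ctx
```

A connection wrapped with this context fails during the handshake if the server offers only TLS 1.0 or 1.1, instead of failing later at the request level.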
expressroute | Expressroute Howto Gateway Migration Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-gateway-migration-portal.md | The following SKUs are available for ExpressRoute virtual network gateways: - Review the [Gateway migration](gateway-migration.md) article before you begin. - You must have an existing [ExpressRoute Virtual network gateway](expressroute-howto-add-gateway-portal-resource-manager.md) in your Azure subscription.-- A second prefix is required for the gateway subnet. If you have only one prefix, you can add a second prefix by following the steps in the [Add a second prefix to the gateway subnet](#add-a-second-prefix-to-the-gateway-subnet) section.- -## Add a second prefix to the gateway subnet --The gateway subnet needs two or more address prefixes for migration. If you have only one prefix, you can add a second prefix by following these steps. --1. First, update the `Az.Network` module to the latest version by running this PowerShell command: -- ```powershell-interactive - Update-Module -Name Az.Network -Force - ``` --1. Then, add a second prefix to the **GatewaySubnet** by running these PowerShell commands: -- ```powershell-interactive - $vnet = Get-AzVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup - $subnet = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet - $prefix = "Enter new prefix" - $subnet.AddressPrefix.Add($prefix) - Set-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix $subnet.AddressPrefix - Set-AzVirtualNetwork -VirtualNetwork $vnet - ``` ## Migrate to a new gateway in Azure portal Here are the steps to migrate to a new gateway in Azure portal. 
## Next steps * Learn more about [designing for high availability](designing-for-high-availability-with-expressroute.md).-* Plan for [disaster recovery](designing-for-disaster-recovery-with-expressroute-privatepeering.md) and [using VPN as a backup](use-s2s-vpn-as-backup-for-expressroute-privatepeering.md). +* Plan for [disaster recovery](designing-for-disaster-recovery-with-expressroute-privatepeering.md) and [using VPN as a backup](use-s2s-vpn-as-backup-for-expressroute-privatepeering.md). |
expressroute | Expressroute Howto Gateway Migration Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-gateway-migration-powershell.md | This script creates a new ExpressRoute virtual network gateway on the same gatew ```azurepowershell-interactive gateway-migration/preparemigration.ps1 ```-1. Enter the resource ID of your gateway. -1. The gateway subnet needs two or more address prefixes for the migration. If you have only one prefix, you're prompted to enter an additional prefix. +1. Enter the resource ID of your gateway. 1. Choose a name for your new resources; the new resource name is added to the existing name. For example: existingresourcename_newname. 1. Enter an availability zone for your new gateway. |
firewall | Ip Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md | You can now update multiple IP Groups in parallel at the same time. This is part With this support, you can now: -- Update 20 IP Groups at a time+- Update 50 IP Groups at a time - Update the firewall and firewall policy during IP Group updates - Use the same IP Group in parent and child policy - Update multiple IP Groups referenced by firewall policy or classic firewall simultaneously |
firewall | Monitor Firewall Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/monitor-firewall-reference.md | Title: Monitoring data reference for Azure Firewall description: This article contains important reference material you need when you monitor Azure Firewall by using Azure Monitor. Previously updated : 08/08/2024 Last updated : 10/26/2024 The *AZFW Latency Probe* metric measures the overall or average latency of Azure Firewall. - Monitor and alert if there are any latency or performance issues, so IT teams can proactively engage. - There might be various reasons that can cause high latency in Azure Firewall. For example, high CPU utilization, high throughput, or a possible networking issue. - This metric doesn't measure end-to-end latency of a given network path. In other words, this latency health probe doesn't measure how much latency Azure Firewall adds. +**What the AZFW Latency Probe Metric Measures (and Doesn't):** -- When the latency metric isn't functioning as expected, a value of 0 appears in the metrics dashboard.-- As a reference, the average expected latency for a firewall is approximately 1 ms. This value might vary depending on deployment size and environment.-- The latency probe is based on Microsoft's Ping Mesh technology. So, intermittent spikes in the latency metric are to be expected. These spikes are normal and don't signal an issue with the Azure Firewall. They're part of the standard host networking setup that supports the system.+- What it measures: The latency of the Azure Firewall within the Azure platform +- What it doesn't measure: The metric does not capture end-to-end latency for the entire network path. Instead, it reflects the performance within the firewall, rather than how much latency Azure Firewall introduces into the network. +- Error reporting: If the latency metric isn't functioning correctly, it reports a value of 0 in the metrics dashboard, indicating a probe failure or interruption.
- As a result, if you experience consistent high latency that last longer than typical spikes, consider filing a Support ticket for assistance. +**Factors that impact latency:** +- High CPU utilization +- High throughput or traffic load +- Networking issues within the Azure platform ++**Latency Probes: From ICMP to TCP** +The latency probe currently uses Microsoft's Ping Mesh technology, which is based on ICMP (Internet Control Message Protocol). ICMP is suitable for quick health checks, like ping requests, but it may not accurately represent real-world application traffic, which typically relies on TCP. However, ICMP probes are prioritized differently across the Azure platform, which can result in variation across SKUs. To reduce these discrepancies, Azure Firewall plans to transition to TCP-based probes. ++- Latency spikes: With ICMP probes, intermittent spikes are normal and are part of the host network's standard behavior. These shouldn't be misinterpreted as firewall issues unless they're persistent. +- Average latency: On average, the latency of Azure Firewall is expected to range from 1 ms to 10 ms, depending on the Firewall SKU and deployment size. ++**Best Practices for Monitoring Latency** +- Set a baseline: Establish a latency baseline under light traffic conditions for accurate comparisons during normal or peak usage. +- Monitor for patterns: Expect occasional latency spikes as part of normal operations. If high latency persists beyond these normal variations, it may indicate a deeper issue requiring investigation. +- Recommended latency threshold: A recommended guideline is that latency should not exceed 3x the baseline. If this threshold is crossed, further investigation is recommended. +- Check the rule limit: Ensure that the network rules are within the 20K rule limit. Exceeding this limit can affect performance. +- New application onboarding: Check for any newly onboarded applications that could be adding significant load or causing latency issues.
+- Support request: If you observe continuous latency degradation that does not align with expected behavior, consider filing a support ticket for further assistance. :::image type="content" source="media/metrics/latency-probe.png" alt-text="Screenshot showing the Azure Firewall Latency Probe metric."::: |
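The "3x baseline" guideline above reduces to a simple comparison. The check below is an illustrative sketch with example numbers, not part of the Azure Monitor metric itself:

```python
# Flag observed latency that exceeds the recommended threshold of 3x the baseline.
def exceeds_latency_threshold(baseline_ms: float, observed_ms: float,
                              factor: float = 3.0) -> bool:
    return observed_ms > baseline_ms * factor

print(exceeds_latency_threshold(2.0, 5.0))  # False: within 3x of a 2 ms baseline
print(exceeds_latency_threshold(2.0, 7.5))  # True: exceeds the 6 ms threshold
```

In practice you would feed the AZFW Latency Probe metric values from Azure Monitor into a comparison like this, alerting only on sustained breaches rather than single spikes.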
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Azure Machine Configuration, and more. Previously updated : 10/21/2024 Last updated : 10/30/2024 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Azure Machine Configuration, and more. Previously updated : 10/21/2024 Last updated : 10/30/2024 |
hdinsight | Hdinsight Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md | description: Archived release notes for Azure HDInsight. Get development tips an Previously updated : 09/06/2024 Last updated : 10/29/2024 # Archived release notes To subscribe, click the "watch" button in the banner and watch out for [HDIn ## Release Information +### Release date: Aug 30, 2024 ++> [!NOTE] +> This is a Hotfix / maintenance release for Resource Provider. For more information, see [Resource Provider](.//hdinsight-overview-versioning.md#hdinsight-resource-provider). ++Azure HDInsight periodically releases maintenance updates for delivering bug fixes, performance enhancements, and security patches. Staying up to date with these updates ensures optimal performance and reliability. ++This release note applies to ++++++HDInsight release will be available to all regions over several days. This release note is applicable for image number **2407260448**. [How to check the image number?](./view-hindsight-cluster-image-version.md) ++HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions. ++**OS versions** ++* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4 +* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 +* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 ++> [!NOTE] +> Ubuntu 18.04 is supported under [Extended Security Maintenance(ESM)](https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/canonical-ubuntu-18-04-lts-reaching-end-of-standard-support/ba-p/3822623) by the Azure Linux team for [Azure HDInsight July 2023](/azure/hdinsight/hdinsight-release-notes-archive#release-date-july-25-2023), release onwards. ++For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md).
++## Issue fixed ++* Default DB bug fix. ++## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon ++* [Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/). + * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). + * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024. +* Retirement Notifications for [HDInsight 4.0](https://azure.microsoft.com/updates/azure-hdinsight-40-will-be-retired-on-31-march-2025-migrate-your-hdinsight-clusters-to-51) and [HDInsight 5.0](https://azure.microsoft.com/updates/hdinsight5retire/). + +If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview). ++You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight). ++We're listening: You're welcome to add more ideas and other topics here and vote for them - [HDInsight Ideas](https://feedback.azure.com/d365community/search/?q=HDInsight) and follow us for more updates on [AzureHDInsight Community](https://www.linkedin.com/groups/14313521/). ++> [!NOTE] +> We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
+ ### Release date: Aug 09, 2024 This release note applies to For workload specific versions, see [HDInsight 5.x component versions](./hdinsig The `setOwnerUser` implementation given in Ranger 2.3.0 release has a critical regression issue when being used by Hive. In Ranger 2.3.0, when HiveServer2 tries to evaluate the policies, Ranger Client tries to get the owner of the hive table by calling the Metastore in the setOwnerUser function which essentially makes call to storage to check access for that table. This issue causes the queries to run slow when Hive runs on 2.3.0 Ranger. -## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon +**Coming soon** * [Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/). * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). |
hdinsight | Hdinsight Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md | description: Latest release notes for Azure HDInsight. Get development tips and Previously updated : 09/06/2024 Last updated : 10/29/2024 # Azure HDInsight release notes To subscribe, click the **watch** button in the banner and watch out for [HDInsi ## Release Information -### Release date: Aug 30, 2024 +### Release date: Oct 22, 2024 > [!NOTE] > This is a Hotfix / maintenance release for Resource Provider. For more information, see [Resource Provider](.//hdinsight-overview-versioning.md#hdinsight-resource-provider). This release note applies to :::image type="icon" source="./media/hdinsight-release-notes/yes-icon.svg" border="false"::: HDInsight 4.0 version. -HDInsight release will be available to all regions over several days. This release note is applicable for image number **2407260448**. [How to check the image number?](./view-hindsight-cluster-image-version.md) +HDInsight release will be available to all regions over several days. This release note is applicable for image number **2409240625**. [How to check the image number?](./view-hindsight-cluster-image-version.md) HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions. HDInsight uses safe deployment practices, which involve gradual region deploymen For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md). -## Issue fixed +## Updated -* Default DB bug fix. +* MSI-based authentication support is available for Azure Blob storage. ++Azure HDInsight now supports OAuth-based authentication for accessing Azure Blob storage by using Azure Active Directory (AAD) and managed identities (MSI).
With this enhancement, HDInsight uses user-assigned managed identities to access Azure Blob storage. For more information, see [Managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview). ## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon * [Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/). * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.++* HDInsight service is transitioning to use standard load balancers for all its cluster configurations because of the [deprecation announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer#main) of the Azure basic load balancer. + * This change will be rolled out in a phased manner for different regions between November 07, 2024 and November 21, 2024. Watch our release notes for more updates. + * Retirement Notifications for [HDInsight 4.0](https://azure.microsoft.com/updates/azure-hdinsight-40-will-be-retired-on-31-march-2025-migrate-your-hdinsight-clusters-to-51) and [HDInsight 5.0](https://azure.microsoft.com/updates/hdinsight5retire/). If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview). |
healthcare-apis | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md | Refer to the table for details about resolution dates or possible workarounds. |Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- | |For FHIR instances created after August 19,2024, diagnostic logs aren't available in log analytics workspace. |September 19,2024 9:00 am PST| -- | October 17,2024 9:00 am PST |-|For FHIR instances created after August 19,2024, in metrics blade - Total requests, Total latency, and Total errors metrics are not being populated. |September 19,2024 9:00 am PST| -- | -- | +|For FHIR instances created after August 19,2024, in metrics blade - Total requests, Total latency, and Total errors metrics are not being populated. |September 19,2024 9:00 am PST| -- | October 28,2024 9:00 am PST | |For FHIR instances created after August 19,2024, changes in private link configuration at the workspace level causes FHIR service to be stuck in 'Updating' state. |September 24,2024 9:00 am PST| Accounts deployed prior to September 27,2024 and facing this issue can follow the steps: <br> 1. Remove private endpoint from the Azure Health Data Services workspace having this issue. On Azure blade, go to Workspace and then click on Networking blade. In networking blade, select existing private link connection and click on 'Remove' <br> 2. Create new private connection to link to the workspace.| September 27,2024 9:00 am PST | |Changes in private link configuration at the workspace level don't propagate to the child services.|September 4,2024 9:00 am PST| To fix this issue a service reprovisioning is required. To reprovision the service, reach out to FHIR service team| September 17,2024 9:00am PST|-|Customers accessing the FHIR Service via a private endpoint are experiencing difficulties, specifically receiving a 403 error when making API calls from within the vNet. 
This problem affects FHIR instances provisioned after August 19 that utilize private link.|August 22,2024 11:00 am PST|-- | September 3,2024 9:00 am PST| +|Customers accessing the FHIR Service via a private endpoint are experiencing difficulties, specifically receiving a 403 error when making API calls from within the vNet. This problem affects FHIR instances provisioned after August 19 that utilize private link.|August 22,2024 11:00 am PST|-- | September 3,2024 9:00 am PST | |FHIR Applications were down in EUS2 region|January 8, 2024 2 pm PST|--|January 8,2024 4:15 pm PST| |API queries to FHIR service returned Internal Server error in UK south region |August 10,2023 9:53 am PST|--|August 10,2023 10:43 am PST| |
iot-edge | Tutorial Monitor With Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-monitor-with-workbooks.md | A Log Analytics workspace is necessary to collect the metrics data and provides 1. Once your workspace is created, select **Go to resource**. -1. From the main menu under **Settings**, select **Agents management**. +1. From the main menu under **Settings**, select **Agents**. -1. Copy the values of **Workspace ID** and **Primary key**. You'll use these two values later in the tutorial to configure the metrics collector module to send the metrics to this workspace. +1. Copy the values of **Workspace ID** and **Primary key**, available under 'Log Analytics agent instructions'. You'll use these two values later in the tutorial to configure the metrics collector module to send the metrics to this workspace. ## Retrieve your IoT hub resource ID |
logic-apps | Logic Apps Limits And Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md | For Azure Logic Apps to receive incoming communication through your firewall, yo | Azure Government region | Azure Logic Apps IP | |-|| | US Gov Arizona | 52.244.67.164, 52.244.67.64, 52.244.66.82, 52.126.52.254, 52.126.53.145, 52.244.187.241, 52.244.17.238, 52.244.23.110, 52.244.20.213, 52.244.16.162, 52.244.15.25, 52.244.16.141, 52.244.15.26 |-| US Gov Texas | 52.238.119.104, 52.238.112.96, 52.238.119.145, 52.245.171.151, 52.245.163.42 | +| US Gov Texas | 52.238.119.104, 52.238.112.96, 52.238.119.145, 52.245.171.151, 52.245.163.42, 52.238.78.169, 52.238.164.135, 52.238.164.111, 52.238.164.44, 52.243.250.114, 52.243.248.33, 52.243.253.54, 52.243.253.44 | | US Gov Virginia | 52.227.159.157, 52.227.152.90, 23.97.4.36, 13.77.239.182, 13.77.239.190, 20.159.220.127, 62.10.96.217, 62.10.102.236, 62.10.102.136, 62.10.111.137, 62.10.111.152, 62.10.111.128, 62.10.111.123 | | US DoD Central | 52.182.49.204, 52.182.52.106, 52.182.49.105, 52.182.49.175, 52.180.225.24, 52.180.225.43, 52.180.225.50, 52.180.252.28, 52.180.225.29, 52.180.231.56, 52.180.231.50, 52.180.231.65 | This section lists the outbound IP addresses that Azure Logic Apps requires in y |--|| | US DoD Central | 52.182.48.215, 52.182.92.143, 52.182.53.147, 52.182.52.212, 52.182.49.162, 52.182.49.151, 52.180.225.0, 52.180.251.16, 52.180.250.135, 52.180.251.20, 52.180.231.89, 52.180.224.251, 52.180.252.222, 52.180.225.21 | | US Gov Arizona | 52.244.67.143, 52.244.65.66, 52.244.65.190, 52.126.50.197, 52.126.49.223, 52.126.53.144, 52.126.36.100, 52.244.187.5, 52.244.19.121, 52.244.18.105, 52.244.51.113, 52.244.17.113, 52.244.26.122, 52.244.22.195, 52.244.19.137 |-| US Gov Texas | 52.238.114.217, 52.238.115.245, 52.238.117.119, 20.141.120.209, 52.245.171.152, 20.141.123.226, 52.245.163.1 | +| US Gov Texas | 52.238.114.217, 52.238.115.245, 52.238.117.119, 
20.141.120.209, 52.245.171.152, 20.141.123.226, 52.245.163.1, 52.238.164.53, 52.238.72.216, 52.238.164.123, 52.238.160.255, 52.243.237.44, 52.249.101.31, 52.243.251.37, 52.243.252.22 | | US Gov Virginia | 13.72.54.205, 52.227.138.30, 52.227.152.44, 13.77.239.177, 13.77.239.140, 13.77.239.187, 13.77.239.184, 20.159.219.180, 62.10.96.177, 62.10.102.138, 62.10.102.94, 62.10.111.134, 62.10.111.151, 62.10.110.102, 62.10.109.190 | ## Next steps |
migrate | Common Questions Server Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md | Migration and modernization tool migrates all the UEFI-based machines to Azure a | SUSE Linux Enterprise Server 15 SP1 | Y | Y | Y | | SUSE Linux Enterprise Server 12 SP4 | Y | Y | Y | | Ubuntu Server 16.04, 18.04, 19.04, 19.10 | Y | Y | Y |-| RHEL 8.1, 8.0, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x | Y | Y | Y | +| RHEL 9.x, 8.1, 8.0, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x | Y | Y | Y | | CentOS Stream | Y | Y | Y | | Oracle Linux 7.7, 7.7-CI | Y | Y | Y | |
migrate | Prepare For Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md | Configure this setting manually as follows: Azure Migrate completes these actions automatically for these versions -- Red Hat Enterprise Linux 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2, 7.1, 7.0, 6.x (Azure Linux VM agent is also installed automatically during migration)+- Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2, 7.1, 7.0, 6.x (Azure Linux VM agent is also installed automatically during migration) - CentOS Stream (Azure Linux VM agent is also installed automatically during migration) - SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3 - Ubuntu 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS (Azure Linux VM agent is also installed automatically during migration) The following table summarizes the steps performed automatically for the operati Learn more about steps for [running a Linux VM on Azure](/azure/virtual-machines/linux/create-upload-generic), and get instructions for some of the popular Linux distributions. -Review the list of [required packages](/azure/virtual-machines/extensions/agent-linux#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12/11 SP4/11 SP3, Debian 9/8/7, and Oracle 7 when using the agentless method of VMware migration. +Review the list of [required packages](/azure/virtual-machines/extensions/agent-linux#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 9.x, 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12/11 SP4/11 SP3, Debian 9/8/7, and Oracle 7 when using the agentless method of VMware migration. ## Check Azure VM requirements |
migrate | Tutorial Discover Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md | Operating system names provided in the CSV must contain and match. If they don't **A-H** | **I-R** | **S-T** | **U-Z** | | | -Asianux 3<br/>Asianux 4<br/>Asianux 5<br/>CoreOS Linux<br/>Debian GNU/Linux 4<br/>Debian GNU/Linux 5<br/>Debian GNU/Linux 6<br/>Debian GNU/Linux 7<br/>Debian GNU/Linux 8<br/>FreeBSD | IBM OS/2<br/>macOS X 10<br/>MS-DOS<br/>Novell NetWare 5<br/>Novell NetWare 6<br/>Oracle Linux<br/>Oracle Linux 4/5<br/>Oracle Solaris 10<br/>Oracle Solaris 11<br/>Red Hat Enterprise Linux 2<br/>Red Hat Enterprise Linux 3<br/>Red Hat Enterprise Linux 4<br/>Red Hat Enterprise Linux 5<br/>Red Hat Enterprise Linux 6<br/>Red Hat Enterprise Linux 7<br/>Red Hat Fedora | SCO OpenServer 5<br/>SCO OpenServer 6<br/>SCO UnixWare 7<br/> Serenity Systems eComStation<br/>Serenity Systems eComStation 1<br/>Serenity Systems eComStation 2<br/>Sun Microsystems Solaris 8<br/>Sun Microsystems Solaris 9<br/><br/>SUSE Linux Enterprise 10<br/>SUSE Linux Enterprise 11<br/>SUSE Linux Enterprise 12<br/>SUSE Linux Enterprise 8/9<br/>SUSE Linux Enterprise 11<br/>SUSE openSUSE | Ubuntu Linux<br/>VMware ESXi 4<br/>VMware ESXi 5<br/>VMware ESXi 6<br/>Windows 10<br/>Windows 2000<br/>Windows 3<br/>Windows 7<br/>Windows 8<br/>Windows 95<br/>Windows 98<br/>Windows NT<br/>Windows Server (R) 2008<br/>Windows Server 2003<br/>Windows Server 2008<br/>Windows Server 2008 R2<br/>Windows Server 2012<br/>Windows Server 2012 R2<br/>Windows Server 2016<br/>Windows Server 2019<br/>Windows Server Threshold<br/>Windows Vista<br/>Windows Web Server 2008 R2<br/>Windows XP Professional +Asianux 3<br/>Asianux 4<br/>Asianux 5<br/>CoreOS Linux<br/>Debian GNU/Linux 4<br/>Debian GNU/Linux 5<br/>Debian GNU/Linux 6<br/>Debian GNU/Linux 7<br/>Debian GNU/Linux 8<br/>FreeBSD | IBM OS/2<br/>macOS X 10<br/>MS-DOS<br/>Novell NetWare 5<br/>Novell NetWare 6<br/>Oracle Linux<br/>Oracle 
Linux 4/5<br/>Oracle Solaris 10<br/>Oracle Solaris 11<br/>Red Hat Enterprise Linux 2<br/>Red Hat Enterprise Linux 3<br/>Red Hat Enterprise Linux 4<br/>Red Hat Enterprise Linux 5<br/>Red Hat Enterprise Linux 6<br/>Red Hat Enterprise Linux 7<br/>Red Hat Enterprise Linux 8<br/>Red Hat Enterprise Linux 9<br/>Red Hat Fedora | SCO OpenServer 5<br/>SCO OpenServer 6<br/>SCO UnixWare 7<br/> Serenity Systems eComStation<br/>Serenity Systems eComStation 1<br/>Serenity Systems eComStation 2<br/>Sun Microsystems Solaris 8<br/>Sun Microsystems Solaris 9<br/><br/>SUSE Linux Enterprise 10<br/>SUSE Linux Enterprise 11<br/>SUSE Linux Enterprise 12<br/>SUSE Linux Enterprise 8/9<br/>SUSE Linux Enterprise 11<br/>SUSE openSUSE | Ubuntu Linux<br/>VMware ESXi 4<br/>VMware ESXi 5<br/>VMware ESXi 6<br/>Windows 10<br/>Windows 2000<br/>Windows 3<br/>Windows 7<br/>Windows 8<br/>Windows 95<br/>Windows 98<br/>Windows NT<br/>Windows Server (R) 2008<br/>Windows Server 2003<br/>Windows Server 2008<br/>Windows Server 2008 R2<br/>Windows Server 2012<br/>Windows Server 2012 R2<br/>Windows Server 2016<br/>Windows Server 2019<br/>Windows Server Threshold<br/>Windows Vista<br/>Windows Web Server 2008 R2<br/>Windows XP Professional ## Business case considerations - If you import servers by using a CSV file and build a business case: |
migrate | Migrate Support Matrix Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware.md | Support | Details | Supported servers | You can enable agentless dependency analysis on up to 1,000 servers (across multiple vCenter Servers) discovered per appliance. Windows servers | Windows Server 2022 <br/> Windows Server 2019<br /> Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br /> Windows Server 2008 (32-bit)-Linux servers | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04, 22.04 <br /> OracleLinux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11 +Linux servers | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x, 9.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04, 22.04 <br /> OracleLinux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11 Server requirements | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers. vCenter Server account | The read-only account used by Azure Migrate and Modernize for assessment must have privileges for guest operations on VMware VMs. Windows server access | A user account (local or domain) with administrator permissions on servers. |
migrate | Prepare For Agentless Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/prepare-for-agentless-migration.md | The preparation script executes the following changes based on the OS type of th Azure Migrate will attempt to install the Microsoft Azure Linux Agent (waagent), a secure, lightweight process that manages Linux & FreeBSD provisioning, and VM interaction with the Azure Fabric Controller. [Learn more](/azure/virtual-machines/extensions/agent-linux) about the functionality enabled for Linux and FreeBSD IaaS deployments via the Linux agent. - Review the list of [required packages](/azure/virtual-machines/extensions/agent-linux#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7/6 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](/azure/virtual-machines/extensions/agent-linux#installation) for other OS versions. + Review the list of [required packages](/azure/virtual-machines/extensions/agent-linux#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 9.x, 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7/6 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](/azure/virtual-machines/extensions/agent-linux#installation) for other OS versions. You can use the command to verify the service status of the Azure Linux Agent to make sure it's running. The service name might be **walinuxagent** or **waagent**. Once the hydration changes are done, the script will unmount all the partitions mounted, deactivate volume groups, and then flush the devices. |
nat-gateway | Nat Gateway Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-resource.md | The total number of connections that a NAT gateway can support at any given time - NAT Gateway doesn't support Public IP addresses with routing configuration type **internet**. To see a list of Azure services that do support routing configuration **internet** on public IPs, see [supported services for routing over the public internet](/azure/virtual-network/ip-services/routing-preference-overview#supported-services). -- Public IPs with DDoS protection enabled aren't supported with NAT gateway. For more information, see [DDoS limitations](/azure/ddos-protection/ddos-protection-sku-comparison#limitations). +- Public IPs with DDoS protection enabled aren't supported with NAT gateway. For more information, see [DDoS limitations](/azure/ddos-protection/ddos-protection-sku-comparison#limitations). + +- Azure NAT Gateway is not supported in a secured virtual hub network (vWAN) architecture. ## Next steps |
network-watcher | Diagnose Vm Network Routing Problem Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md | description: In this article, you learn how to use Azure CLI to diagnose a virtu Previously updated : 03/18/2022 Last updated : 10/29/2024 + # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations. -# Diagnose a virtual machine network routing problem - Azure CLI +# Diagnose a virtual machine network routing problem using the Azure CLI ++In this article, you learn how to use Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. -In this article, you deploy a virtual machine (VM), and then check communications to an IP address and URL. You determine the cause of a communication failure and how you can resolve it. +## Prerequisites +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- Azure Cloud Shell or Azure CLI. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. + The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal. -- The Azure CLI commands in this article are formatted to run in a Bash shell.+ You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. This article requires the Azure CLI version 2.0 or later. 
Run [az --version](/cli/azure/reference-index#az-version) command to find the installed version. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command. -## Create a VM +## Create a virtual machine Before you can create a VM, you must create a resource group to contain the VM. Create a resource group with [az group create](/cli/azure/group#az-group-create). The following example creates a resource group named *myResourceGroup* in the *eastus* location: |
network-watcher | Diagnose Vm Network Routing Problem Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-powershell.md | description: In this article, you learn how to diagnose a virtual machine networ Previously updated : 01/07/2021 Last updated : 10/29/2024 + # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations. -# Diagnose a virtual machine network routing problem - Azure PowerShell +# Diagnose a virtual machine network routing problem using PowerShell ++In this article, you learn how to use Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a virtual machine (VM) routing problem that's preventing it from correctly communicating with other resources. ++## Prerequisites -In this article, you deploy a virtual machine (VM), and then check communications to an IP address and URL. You determine the cause of a communication failure and how you can resolve it. +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- Azure Cloud Shell or Azure PowerShell. + The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the cmdlets in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal. -If you choose to install and use PowerShell locally, this article requires the Az PowerShell module. For more information, see [How to install Azure PowerShell](/powershell/azure/install-azure-powershell). 
To find the installed version, run `Get-InstalledModule -Name Az`. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. + You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. Run [Get-Module -ListAvailable Az](/powershell/module/microsoft.powershell.core/get-module) to find the installed version. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. -## Create a VM +## Create a virtual machine Before you can create a VM, you must create a resource group to contain the VM. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). The following example creates a resource group named *myResourceGroup* in the *eastus* location. |
network-watcher | Diagnose Vm Network Routing Problem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md | Last updated 10/26/2023 In this tutorial, you use Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. Next hop shows you that a [custom route](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json#custom-routes) caused the routing problem. In this tutorial, you learn how to: In this section, you create a virtual network. :::image type="content" source="./media/diagnose-vm-network-routing-problem/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal."::: -1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab: +1. Select **+ Create**. ++1. Enter or select the following values on the **Basics** tab of **Create virtual network**: | Setting | Value | | | | In this section, you create a virtual network. | Resource Group | Select **Create new**. </br> Enter ***myResourceGroup*** in **Name**. </br> Select **OK**. | | **Instance details** | | | Virtual network name | Enter ***myVNet***. |- | Region | Select **East US**. | + | Region | Select **(US) East US**. | 1. Select the **IP Addresses** tab, or select **Next** button at the bottom of the page twice. -1. Enter the following values in the **IP Addresses** tab: +1. Enter the following values on the **IP Addresses** tab: | Setting | Value | | | | In this section, you create a virtual network. ## Create virtual machines -In this section, you create two virtual machines: **myVM** and **myNVA**. You use **myVM** virtual machine to test the communication from. **myNVA** virtual machine is used as a network virtual appliance in the scenario. 
+In this section, you create two virtual machines: +- **myVM**: to test the communication from. +- **myNVA**: to use as a network virtual appliance. ### Create first virtual machine In this section, you create two virtual machines: **myVM** and **myNVA**. You us 1. Select **+ Create** and then select **Azure virtual machine**. -1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab: +1. Enter or select the following values on the **Basics** tab of **Create a virtual machine**: | Setting | Value | | | | | **Project Details** | | | Subscription | Select your Azure subscription. |- | Resource Group | Select **myResourceGroup**. | + | Resource group | Select **myResourceGroup**. | | **Instance details** | | | Virtual machine name | Enter ***myVM***. | | Region | Select **(US) East US**. |- | Availability Options | Select **No infrastructure redundancy required**. | + | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. | | Image | Select **Windows Server 2022 Datacenter: Azure Edition - x64 Gen2**. | | Size | Choose a size or leave the default setting. | In this section, you create two virtual machines: **myVM** and **myNVA**. You us 1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. -1. In the Networking tab, enter or select the following values: +1. On the Networking tab, enter or select the following values: | Setting | Value | | | | In this section, you create a static custom route (user-defined route) in a rout 1. In the search box at the top of the portal, enter ***route tables***. Select **Route tables** from the search results. -1. Select **+ Create** to create a new route table. In the **Create Route table** page, enter or select the following values: +1. Select **+ Create** to create a new route table. On the **Create Route table** page, enter, or select the following values: | Setting | Value | | - | | |
openshift | Howto Deploy Java Jboss Enterprise Application Platform App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md | -This article shows you how to quickly set up JBoss Enterprise Application Platform (EAP) on Azure Red Hat OpenShift (ARO) using the Azure portal. +This article shows you how to quickly set up JBoss Enterprise Application Platform (EAP) on Azure Red Hat OpenShift using the Azure portal. -This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't use the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro). +This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to Azure Red Hat OpenShift. The offer automatically provisions resources including an Azure Red Hat OpenShift cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). 
If you prefer manual step-by-step guidance for running JBoss EAP on Azure Red Hat OpenShift that doesn't use the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro). If you're interested in providing feedback or working closely on your migration scenarios with the engineering team developing JBoss EAP on Azure solutions, fill out this short [survey on JBoss EAP migration](https://aka.ms/jboss-on-azure-survey) and include your contact information. The team of program managers, architects, and engineers will promptly get in touch with you to initiate close collaboration. |
openshift | Howto Deploy Java Liberty App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md | -This article shows you how to quickly stand up IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift (ARO) using the Azure portal. +This article shows you how to quickly stand up IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift using the Azure portal. -This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to ARO. The offer automatically provisions several resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the Liberty Operators, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aro). If you prefer manual step-by-step guidance for running Liberty on ARO that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro). +This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to Azure Red Hat OpenShift. The offer automatically provisions several resources including an Azure Red Hat OpenShift cluster with a built-in OpenShift Container Registry (OCR), the Liberty Operators, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aro). If you prefer manual step-by-step guidance for running Liberty on Azure Red Hat OpenShift that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro). This article is intended to help you quickly get to deployment. 
Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty). The following steps guide you through creating an Azure SQL Database single data export DB_RESOURCE_GROUP_NAME=<db-resource-group> ``` -Now that you created the database and ARO cluster, you can prepare the ARO to host your WebSphere Liberty application. +Now that you created the database and Azure Red Hat OpenShift cluster, you can prepare the Azure Red Hat OpenShift cluster to host your WebSphere Liberty application. ## Configure and deploy the sample application Use the following steps to deploy and test the application: ## Clean up resources -To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, ARO cluster, Azure SQL Database, and all related resources. +To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, Azure Red Hat OpenShift cluster, Azure SQL Database, and all related resources. ```bash az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait |
operator-nexus | Reference Operator Nexus Network Cloud Skusus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-operator-nexus-network-cloud-SKUsus.md | + + Title: Azure Operator Nexus Network Cloud SKUs +description: SKU options for Azure Operator Nexus Network Cloud + Last updated : 10/24/2024++++++# Azure Operator Nexus Network Cloud Stock Keeping Units (SKUs) ++Operator Nexus Network Cloud SKUs for Azure Operator Nexus are meticulously designed to streamline the procurement and deployment processes, offering standardized bill of materials (BOM), topologies, wiring, and workflows. Microsoft crafts and prevalidates each SKU in collaboration with OEM vendors, ensuring seamless integration and optimal performance for operators. ++Operator Nexus Network Cloud SKUs offer a comprehensive range of options, allowing operators to tailor their deployments according to their specific requirements. With prevalidated configurations and standardized BOMs, the procurement and deployment processes are streamlined, ensuring efficiency and performance across the board. ++The following table outlines the various configurations of Operator Nexus Network Cloud SKUs, catering to different use-cases and functionalities required by operators. ++| Version | Use-Case | Network Cloud SKU ID | Description | BOM Components | +||--|--||| +| 1.7.3 | Multi Rack Near-Edge Aggregation (Agg) Rack | VNearEdge1_Aggregator_x70r3_9 | Aggregation Rack with Pure x70r3 | - Pair of Customer Edge Devices required for SKU.<br> - Two Management switch per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cable and optics. | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_4C2M | Support up to eight Compute Racks where each rack can support four compute servers. | - Pair of Top the rack switches per rack deployed. 
<br> - One Management switch per compute rack deployed.<br> - Two Management server per compute rack deployed. <br> - Up to four Compute servers per compute rack deployed. <br> - Cable and optics. | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_8C2M | Support up to eight Compute Racks where each rack can support eight compute servers. | - Pair of Top the rack switches per rack deployed. <br> - One Management switch per compute rack deployed.<br> - Two Management server per compute rack deployed. <br> - Up to eight Compute servers per compute rack deployed. <br> - Cable and optics. | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_12C2M | Support up to eight Compute Racks where each rack can support 12 compute servers. | - Pair of Top the rack switches per rack deployed. <br> - One Management switch per compute rack deployed.<br> - Two Management server per compute rack deployed. <br> - Up to 12 Compute servers per compute rack deployed. <br> - Cable and optics. | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge1_Compute_DellR750_16C2M | Support up to eight Compute Racks where each rack can support 16 compute servers. | - Pair of Top the rack switches per rack deployed. <br> - One Management switch per compute rack deployed.<br> - Two Management server per compute rack deployed. <br> - Up to 16 Compute servers per compute rack deployed. <br> - Cable and optics. | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_4C2M | 100G Fabric support up to eight Compute Racks where each rack can support four compute servers. | - Pair of Top the rack switches per rack deployed. <br> - One Management switch per compute rack deployed.<br> - Two Management server per compute rack deployed. <br> - Up to four Compute servers per compute rack deployed. <br> - Cable and optics. 
 | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_8C2M | 100G fabric supporting up to eight Compute Racks where each rack can support eight compute servers. | - Pair of Top-of-Rack switches per rack deployed. <br> - One management switch per compute rack deployed.<br> - Two management servers per compute rack deployed. <br> - Up to eight Compute servers per compute rack deployed. <br> - Cables and optics. | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_12C2M | 100G fabric supporting up to eight Compute Racks where each rack can support 12 compute servers. | - Pair of Top-of-Rack switches per rack deployed. <br> - One management switch per compute rack deployed.<br> - Two management servers per compute rack deployed. <br> - Up to 12 Compute servers per compute rack deployed. <br> - Cables and optics. | +| 1.7.3 | Multi Rack Near-Edge Compute | VNearEdge2_Compute_DellR650_16C2M | 100G fabric supporting up to eight Compute Racks where each rack can support 16 compute servers. | - Pair of Top-of-Rack switches per rack deployed. <br> - One management switch per compute rack deployed.<br> - Two management servers per compute rack deployed. <br> - Up to 16 Compute servers per compute rack deployed. <br> - Cables and optics. | +| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_4C2M | Supports up to eight Compute Racks where each rack can support four compute servers. | - Pair of Top-of-Rack switches per rack deployed. <br> - One management switch per compute rack deployed.<br> - Two management servers per compute rack deployed. <br> - Up to four Compute servers per compute rack deployed. <br> - Cables and optics. | +| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_8C2M | Supports up to eight Compute Racks where each rack can support eight compute servers. | - Pair of Top-of-Rack switches per rack deployed.
<br> - One management switch per compute rack deployed.<br> - Two management servers per compute rack deployed. <br> - Up to eight Compute servers per compute rack deployed. <br> - Cables and optics. | +| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_12C2M | Supports up to eight Compute Racks where each rack can support 12 compute servers. | - Pair of Top-of-Rack switches per rack deployed. <br> - One management switch per compute rack deployed.<br> - Two management servers per compute rack deployed. <br> - Up to 12 Compute servers per compute rack deployed. <br> - Cables and optics. | +| 2.0.0 | Multi Rack Near-Edge Compute | VNearEdge4_Compute_DellR760_16C2M | Supports up to eight Compute Racks where each rack can support 16 compute servers. | - Pair of Top-of-Rack switches per rack deployed. <br> - One management switch per compute rack deployed.<br> - Two management servers per compute rack deployed. <br> - Up to 16 Compute servers per compute rack deployed. <br> - Cables and optics. | +| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x70r4 | Aggregation Rack with Pure x70r4. | - Pair of Customer Edge Devices required for SKU.<br> - Two management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cables and optics. | +| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x70r3 | Aggregation Rack with Pure x70r3. | - Pair of Customer Edge Devices required for SKU.<br> - Two management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cables and optics. | +| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x20r4 | Aggregation Rack with Pure x70r4. | - Pair of Customer Edge Devices required for SKU.<br> - Two management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cables and optics.
 | +| 2.0.0 | Multi Rack Near-Edge Agg | VNearEdge4_Aggregator_x20r3 | Aggregation Rack with Pure x70r3. | - Pair of Customer Edge Devices required for SKU.<br> - Two management switches per rack deployed.<br> - Network packet broker device.<br> - Terminal Server.<br> - Pure storage array.<br> - Cables and optics. | ++**Notes:** +- Bill of materials (BOM) adheres to Nexus Network Cloud specifications. +- All subscribed customers can request BOM details. |
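The compute SKU IDs in the table follow a readable pattern (family, workload role, server model, compute/management server counts per rack). As an illustrative sketch only — the naming convention below is inferred from the table, not an official format — a small parser:

```python
import re

def parse_compute_sku(sku_id: str) -> dict:
    """Split a Network Cloud compute SKU ID such as
    'VNearEdge1_Compute_DellR750_4C2M' into its apparent parts.
    The convention (family_role_model_<N>C<M>M) is inferred from
    the SKU table above, not an official specification."""
    m = re.fullmatch(r"(VNearEdge\d+)_Compute_([A-Za-z0-9]+)_(\d+)C(\d+)M", sku_id)
    if not m:
        raise ValueError(f"not a compute SKU ID: {sku_id}")
    family, model, computes, mgmt = m.groups()
    return {
        "family": family,                   # e.g. VNearEdge1
        "server_model": model,              # e.g. DellR750
        "compute_servers": int(computes),   # compute servers per rack
        "management_servers": int(mgmt),    # management servers per rack
    }

print(parse_compute_sku("VNearEdge1_Compute_DellR750_4C2M"))
```

Aggregator SKU IDs (for example `VNearEdge4_Aggregator_x70r4`) follow a different shape and are not covered by this sketch.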
oracle | Oracle Database Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-regions.md | The following table lists Azure regions and corresponding OCI regions that suppo |-|--|-|-| | Australia East | Australia East (Sydney) | ✓ | ✓ | | Southeast Asia | Singapore (Singapore) | ✓ | ✓ |-| Japan East | Japan East(Tokyo) | ✓ | ✓ | +| Japan East | Japan East(Tokyo) | ✓ | | ## Europe, Middle East, Africa (EMEA) |
role-based-access-control | Built In Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md | The following table provides a brief description of each built-in role. Click th > | <a name='cognitive-services-custom-vision-labeler'></a>[Cognitive Services Custom Vision Labeler](./built-in-roles/ai-machine-learning.md#cognitive-services-custom-vision-labeler) | View, edit training images and create, add, remove, or delete the image tags. Labelers can view the project but can't update anything other than training images and tags. | 88424f51-ebe7-446f-bc41-7fa16989e96c | > | <a name='cognitive-services-custom-vision-reader'></a>[Cognitive Services Custom Vision Reader](./built-in-roles/ai-machine-learning.md#cognitive-services-custom-vision-reader) | Read-only actions in the project. Readers can't create or update the project. | 93586559-c37d-4a6b-ba08-b9f0940c2d73 | > | <a name='cognitive-services-custom-vision-trainer'></a>[Cognitive Services Custom Vision Trainer](./built-in-roles/ai-machine-learning.md#cognitive-services-custom-vision-trainer) | View, edit projects and train the models, including the ability to publish, unpublish, export the models. Trainers can't create or delete the project. | 0a5ae4ab-0d65-4eeb-be61-29fc9b54394b |-> | <a name='cognitive-services-data-reader-preview'></a>[Cognitive Services Data Reader (Preview)](./built-in-roles/ai-machine-learning.md#cognitive-services-data-reader-preview) | Lets you read Cognitive Services data. | b59867f0-fa02-499b-be73-45a86b5b3e1c | +> | <a name='cognitive-services-data-reader'></a>[Cognitive Services Data Reader](./built-in-roles/ai-machine-learning.md#cognitive-services-data-reader) | Lets you read Cognitive Services data. 
| b59867f0-fa02-499b-be73-45a86b5b3e1c | > | <a name='cognitive-services-face-recognizer'></a>[Cognitive Services Face Recognizer](./built-in-roles/ai-machine-learning.md#cognitive-services-face-recognizer) | Lets you perform detect, verify, identify, group, and find similar operations on Face API. This role does not allow create or delete operations, which makes it well suited for endpoints that only need inferencing capabilities, following 'least privilege' best practices. | 9894cab4-e18a-44aa-828b-cb588cd6f2d7 | > | <a name='cognitive-services-immersive-reader-user'></a>[Cognitive Services Immersive Reader User](./built-in-roles/ai-machine-learning.md#cognitive-services-immersive-reader-user) | Provides access to create Immersive Reader sessions and call APIs | b2de6794-95db-4659-8781-7e080d3f2b9d | > | <a name='cognitive-services-language-owner'></a>[Cognitive Services Language Owner](./built-in-roles/ai-machine-learning.md#cognitive-services-language-owner) | Has access to all Read, Test, Write, Deploy and Delete functions under Language portal | f07febfe-79bc-46b1-8b37-790e26e6e498 | |
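Built-in roles such as Cognitive Services Data Reader are addressed by the GUIDs in the table. A small illustrative helper that builds the fully qualified role-definition resource ID used in role assignments — the ID format follows the standard Azure resource ID scheme, and the subscription GUID below is a placeholder:

```python
def role_definition_id(subscription_id: str, role_guid: str) -> str:
    """Build the subscription-scoped resource ID of a built-in role
    definition, as referenced by role assignments."""
    return (f"/subscriptions/{subscription_id}"
            f"/providers/Microsoft.Authorization/roleDefinitions/{role_guid}")

# Cognitive Services Data Reader (GUID from the table above);
# the subscription ID is a placeholder, not a real subscription.
print(role_definition_id("00000000-0000-0000-0000-000000000000",
                         "b59867f0-fa02-499b-be73-45a86b5b3e1c"))
```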
role-based-access-control | Ai Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/ai-machine-learning.md | View, edit projects and train the models, including the ability to publish, unpu } ``` -## Cognitive Services Data Reader (Preview) +## Cognitive Services Data Reader Lets you read Cognitive Services data. +[Learn more](/azure/ai-services/speech-service/role-based-access-control) + > [!div class="mx-tableFixed"] > | Actions | Description | > | | | Lets you read Cognitive Services data. "notDataActions": [] } ],- "roleName": "Cognitive Services Data Reader (Preview)", + "roleName": "Cognitive Services Data Reader", "roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } Access to the real-time speech recognition and batch transcription APIs, real-ti > | [Microsoft.CognitiveServices](../permissions/ai-machine-learning.md#microsoftcognitiveservices)/accounts/SpeechServices/*/transcriptions/read | | > | [Microsoft.CognitiveServices](../permissions/ai-machine-learning.md#microsoftcognitiveservices)/accounts/SpeechServices/*/transcriptions/write | | > | [Microsoft.CognitiveServices](../permissions/ai-machine-learning.md#microsoftcognitiveservices)/accounts/SpeechServices/*/transcriptions/delete | |+> | [Microsoft.CognitiveServices](../permissions/ai-machine-learning.md#microsoftcognitiveservices)/accounts/SpeechServices/*/transcriptions/action | | > | [Microsoft.CognitiveServices](../permissions/ai-machine-learning.md#microsoftcognitiveservices)/accounts/SpeechServices/*/frontend/action | | > | [Microsoft.CognitiveServices](../permissions/ai-machine-learning.md#microsoftcognitiveservices)/accounts/SpeechServices/text-dependent/*/action | | > | [Microsoft.CognitiveServices](../permissions/ai-machine-learning.md#microsoftcognitiveservices)/accounts/SpeechServices/text-independent/*/action | | Access to the real-time speech recognition and batch transcription APIs, real-ti 
"Microsoft.CognitiveServices/accounts/SpeechServices/*/transcriptions/read", "Microsoft.CognitiveServices/accounts/SpeechServices/*/transcriptions/write", "Microsoft.CognitiveServices/accounts/SpeechServices/*/transcriptions/delete",+ "Microsoft.CognitiveServices/accounts/SpeechServices/*/transcriptions/action", "Microsoft.CognitiveServices/accounts/SpeechServices/*/frontend/action", "Microsoft.CognitiveServices/accounts/SpeechServices/text-dependent/*/action", "Microsoft.CognitiveServices/accounts/SpeechServices/text-independent/*/action", |
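The `*` in action strings like `Microsoft.CognitiveServices/accounts/SpeechServices/*/transcriptions/read` is an RBAC wildcard that can span one or more path segments, and Azure compares actions case-insensitively. A rough illustration of that matching using Python's `fnmatch` — this sketches the idea, not the service's actual authorization algorithm:

```python
from fnmatch import fnmatchcase

def action_allowed(action: str, granted_patterns) -> bool:
    """Approximate RBAC wildcard matching: '*' in a granted action
    pattern matches across path segments; comparison is
    case-insensitive, so both sides are lowercased first."""
    action = action.lower()
    return any(fnmatchcase(action, p.lower()) for p in granted_patterns)

granted = [
    "Microsoft.CognitiveServices/accounts/SpeechServices/*/transcriptions/read",
    "Microsoft.CognitiveServices/accounts/SpeechServices/*/transcriptions/write",
]
print(action_allowed(
    "Microsoft.CognitiveServices/accounts/SpeechServices/batch/transcriptions/read",
    granted))   # True
print(action_allowed(
    "Microsoft.CognitiveServices/accounts/SpeechServices/batch/transcriptions/delete",
    granted))   # False
```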
sap | Get Sap Installation Media | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md | In this how-to guide, you'll learn how to get the SAP software installation medi - A deployment of S/4HANA infrastructure. - The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment. - If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. - - For more information, see [Use Azure CLI to create a Microsoft Entra app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). + - For more information, see Use Azure CLI to create a Microsoft Entra app and configure it to access Media Services API. - For an example, see the Red Hat documentation for [Creating a Microsoft Entra Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure). - To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal. |
sap | Quickstart Install High Availability Namecustom Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-install-high-availability-namecustom-cli.md | After you [deploy infrastructure](deploy-s4hana.md) and install SAP software wit - The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment. - You should have the SAP installation media available in a storage account. For more information, see [how to download the SAP installation media](get-sap-installation-media.md). - The *json* configuration file that you used to create infrastructure in the [previous step](tutorial-create-high-availability-name-custom.md) for SAP system using PowerShell or Azure CLI. -- As you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see [Use Azure CLI to create a Microsoft Entra app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). +- As you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see Use Azure CLI to create a Microsoft Entra app and configure it to access Media Services API. - For an example, see the Red Hat documentation for [Creating a Microsoft Entra Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure). 
- To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal. |
sap | Dbms Guide Oracle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-oracle.md | keywords: 'SAP, Azure, Oracle, Data Guard' Previously updated : 10/14/2024 Last updated : 10/28/2024 General information about running SAP Business Suite on Oracle can be found at  | 2799920 | [Patches for 19c: Database SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0002799920) | | 974876 | [Oracle Transparent Data Encryption (TDE) SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/0000974876) | | 2936683 | [Oracle Linux 8: SAP Installation and Upgrade SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2936683) |-| 1672954 | [Oracle 11g, 12c, 18c and 19c: Usage of hugepages on Linux](https://launchpad.support.sap.com/#/notes/1672954) | +| 1672954 | [Oracle 11g, 12c, 18c, and 19c: Usage of hugepages on Linux](https://launchpad.support.sap.com/#/notes/1672954) | | 1171650 | [Automated Oracle DB parameter check](https://launchpad.support.sap.com/#/notes/1171650) | | 2936683 | [Oracle Linux 8: SAP Installation and Upgrade](https://launchpad.support.sap.com/#/notes/2936683) |+| 3399081 | [Oracle Linux 9: SAP Installation and Upgrade](https://launchpad.support.sap.com/#/notes/3399081) | ### Specifics for Oracle Database on Oracle Linux Installing or migrating existing SAP on Oracle systems to Azure, the following d 4. Azure Premium Storage SSD should be used. Don't use Standard or other storage types. 5. ASM removes the requirement for Mirror Log. Follow the guidance from Oracle in Note [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626). 6. Use ASMLib and don't use udev.-7. Azure NetApp Files deployments should use Oracle dNFS (Oracle’s own high performance Direct NFS solution). +7. Azure NetApp Files deployments should use Oracle dNFS which is Oracle’s own high performance Direct NFS (Network File System) driver solution. 8. 
Large Oracle databases benefit greatly from large System Global Area (SGA) sizes. Large customers should deploy on Azure M-series with 4 TB or more RAM size - Set Linux Huge Pages to 75% of Physical RAM size - Set System Global Area (SGA) to 90% of Huge Page size - Set the Oracle parameter USE_LARGE_PAGES = **ONLY** - The value ONLY is preferred over the value TRUE as the value ONLY is supposed to deliver more consistent and predictable performance. The value TRUE may allocate both large 2MB and standard 4K pages. The value ONLY always forces large 2MB pages. If the number of available huge pages isn't sufficient or not correctly configured, the database instance fails to start with error code: *ora-27102 : out of memory Linux_x86_64 Error 12 : can't allocate memory*. If there's insufficient contiguous memory, Oracle Linux may need to be restarted and/or the Operating System Huge Page parameters reconfigured.-9. Oracle Home should be located outside of the "root" volume or disk. Use a separate disk or ANF volume. The disk holding the Oracle Home should be 64 Gigabyte in size or larger. +9. Oracle Home should be located outside of the "root" volume or disk. Use a separate disk or ANF volume. The disk holding the Oracle Home should be 64 Gigabytes in size or larger. 10. The size of the boot disk for large high performance Oracle database servers is important. As a minimum, a P10 disk should be used for M-series or E-series. Don't use small disks such as P4 or P6. A small disk can cause performance issues. 11. Accelerated Networking must be enabled on all Virtual Machines. Upgrade to the latest Oracle Linux release if there are any problems enabling Accelerated Networking. 12. Check for updates in this documentation and SAP note [2039619 - SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2039619). 
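The sizing rules in item 8 (Huge Pages pool = 75% of physical RAM, SGA = 90% of the Huge Pages pool, 2 MB huge pages) are simple arithmetic. A sketch for a hypothetical 4 TB M-series VM — the percentages come from the guidance above; the VM size is just an example:

```python
def hugepage_sizing(phys_ram_gb: float):
    """Apply the guidance above: Huge Pages pool = 75% of physical RAM,
    SGA = 90% of the Huge Pages pool, using 2 MB huge pages."""
    hugepages_gb = phys_ram_gb * 0.75
    sga_gb = hugepages_gb * 0.90
    nr_hugepages = int(hugepages_gb * 1024 // 2)  # number of 2 MB pages
    return hugepages_gb, sga_gb, nr_hugepages

hp, sga, n = hugepage_sizing(4096)  # hypothetical 4 TB M-series VM
print(f"Huge Pages pool: {hp:.0f} GB (vm.nr_hugepages = {n})")
print(f"SGA target:      {sga:.1f} GB")
```

For a 4 TB VM this yields a 3072 GB Huge Pages pool, which matches the intent that the SGA plus some headroom fits entirely in huge pages.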
The table below details the support status |--||--| | --| | **Block Storage Type** | | | | | | Premium SSD | Supported | 512e | ASM Recommended. LVM Supported | No support for ASM on Windows |-| Premium SSD v2 | Supported | 4K Native or 512e<sup>1</sup> | ASM Recommended. LVM Supported | No support for ASM on Windows. Change Log File disks from 4K Native to 512e | +| Premium SSD v2<sup>1</sup> | Supported | 4K Native or 512e<sup>2</sup> | ASM Recommended. LVM Supported | No support for ASM on Windows. Change Log File disks from 4K Native to 512e | | Standard SSD | Not supported | | | | | Standard HDD | Not supported | | | | | Ultra disk | Supported | 4K Native | ASM Recommended. LVM Supported | No support for ASM on Windows. Change Log File disks from 4K Native to 512e | The table below details the support status | Azure Files NFS | Not supported | | | | Azure files SMB | Not supported | | | -<sup>1</sup> 512e is supported on Premium SSD v2 for Windows systems. 512e configurations are't recommended for Linux customers. Migrate to 4K Native using procedure in MOS 512/512e sector size to 4K Native Review (Doc ID 1133713.1) +1. Azure Premium SSD v2 doesn't have predefined storage sizes. There's no need to allocate multiple disks within an ASM Disk Group or LVM VG. It's recommended to allocate a single Premium SSD v2 disk with the required size, throughput, and IOPS per ASM Disk Group. +2. 512e is supported on Premium SSD v2 for Windows systems. 512e configurations aren't recommended for Linux customers. Migrate to 4K Native using the procedure in MOS 512/512e sector size to 4K Native Review (Doc ID 1133713.1). Other considerations that apply: 1. No support for DIRECTIO with 4K Native sector size. Recommended settings for FILESYSTEMIO_OPTIONS for LVM configurations: Other considerations that apply: 2. Oracle 19c and higher fully supports 4K Native sector size with both ASM and LVM 3. 
Oracle 19c and higher on Linux – when moving from 512e storage to 4K Native storage Log sector sizes must be changed 4. To migrate from 512/512e sector size to 4K Native Review (Doc ID 1133713.1) – see section "Offline Migration to 4KB Sector Disks"-5. SAPInst writes to the pfile during installation. If the $ORACLE_HOME/dbs is on a 4K disk set filesystemio_options=asynch and see the Section "Datafile Support of 4kB Sector Disks" in MOS Supporting 4K Sector Disks (Doc ID 1133713.1) +5. SAPInst writes to the pfile during installation. If the $ORACLE_HOME/dbs is on a 4K disk, set filesystemio_options=asynch and see the Section "Datafile Support of 4kB Sector Disks" in MOS Supporting 4K Sector Disks (Doc ID 1133713.1) 5. No support for ASM on Windows platforms-6. No support for 4K Native sector size for Log volume on Windows platforms. SSDv2 and Ultra Disk must be changed to 512e via the "Edit Disk" pencil icon in the Azure Portal +6. No support for 4K Native sector size for Log volume on Windows platforms. SSDv2 and Ultra Disk must be changed to 512e via the "Edit Disk" pencil icon in the Azure portal 7. 4K Native sector size is supported only on Data volumes for Windows platforms. 4K isn't supported for Log volumes on Windows 8. We recommend reviewing these MOS articles: - Oracle Linux: File System's Buffer Cache versus Direct I/O (Doc ID 462072.1) Checklist for Oracle Automatic Storage Management: 3. ASM should be configured for **External Redundancy**. Azure Premium SSD storage provides triple redundancy. Azure Premium SSD matches the reliability and integrity of any other storage solution. For optional safety, customers can consider **Normal Redundancy** for the Log Disk Group 4. Mirroring Redo Log files is optional for ASM [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626) 5. ASM Disk Groups configured as per Variant 1, 2 or 3 below-6. ASM Allocation Unit size = 4MB (default). 
Very Large Databases (VLDB) OLAP systems such as BW may benefit from larger ASM Allocation Unit size. Change only after confirming with Oracle support +6. ASM Allocation Unit size = 4MB (default). Very Large Databases (VLDB) OLAP systems such as SAP BW may benefit from larger ASM Allocation Unit size. Change only after confirming with Oracle support 7. ASM Sector Size and Logical Sector Size = default (UDEV isn't recommended but requires 4k) 8. If the COMPATIBLE.ASM disk group attribute is set to 11.2 or greater for a disk group, you can create, copy, or move an Oracle ASM SPFILE into ACFS file system. Review the Oracle documentation on moving pfile into ACFS. SAPInst isn't creating the pfile in ACFS by default 8. Appropriate ASM Variant is used. Production systems should use Variant 2 or 3 Oracle ASM disk group recommendation: ### Variant 2 – medium to large data volumes between 3 TB and 12 TB, restore time important -Customer has medium to large sized databases where backup and/or restore -+ +Customer has medium to large sized databases where backup and/or restore, or recovery of all databases can't be accomplished in a timely fashion. -recovery of all databases can't be accomplished in a timely fashion. --Usually customers are using RMAN, Azure Backup for Oracle and/or disk snap techniques in combination. +Usually customers are using RMAN, Azure Backup for Oracle and/or disk snapshot techniques in combination. Major differences to Variant 1 are: Major differences to Variant 1 are: ### Variant 3 – huge data and data change volumes more than 5 TB, restore time crucial -Customer has a huge database where backup and/or restore + recovery of a single database can't be accomplished in a timely fashion. +Customer has a huge database where backup and/or restore, or recovery of a single database can't be accomplished in a timely fashion. Usually customers are using RMAN, Azure Backup for Oracle and/or disk snap techniques in combination. 
In this variant, each relevant database file type is separated to different Oracle ASM disk groups. Documentation is available with: ### Monitoring SAP on Oracle ASM Systems on Azure -Run an Oracle AWR report as the first step when troubleshooting a performance problem. Disk performance metrics are detailed in the AWR report. +Run an Oracle AWR (Automatic Workload Repository) report as the first step when troubleshooting a performance problem. Disk performance metrics are detailed in the AWR report. Disk performance can be monitored from inside Oracle Enterprise Manager and via external tools. Documentation, which might help is available here: - [Using Views to Display Oracle ASM Information](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/views-asm-info.html#GUID-23E1F0D8-ECF5-4A5A-8C9C-11230D2B4AD4) Mirror Log is required when running LVM. |--|-|--|--| | /oracle/\<SID\>/origlogaA & mirrlogB | Premium | None | Not needed | | /oracle/\<SID\>/origlogaB & mirrlogA | Premium | None | Not needed |-| /oracle/\<SID\>/sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended | -| /oracle/\<SID\>/oraarch<sup>3</sup> | Premium | None | Not needed | +| /oracle/\<SID\>/sapdata1...n | Premium | None | Recommended | +| /oracle/\<SID\>/oraarch<sup>2</sup> | Premium | None | Not needed | | Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: LVM stripe using RAID0-2. During R3Load migrations, the Host Cache option for SAPDATA should be set to None -3. oraarch: LVM is optional +2. oraarch: LVM is optional The disk selection for hosting Oracle's online redo logs is driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy the requirements. 
The disk selection for hosting Oracle's online redo logs is driven by IOPS requi | /oracle/\<SID\>/origlogaB | Premium | None | Can be used | | /oracle/\<SID\>/mirrlogAB | Premium | None | Can be used | | /oracle/\<SID\>/mirrlogBA | Premium | None | Can be used |-| /oracle/\<SID\>/sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended | -| /oracle/\<SID\>/oraarch<sup>3</sup> | Premium | None | Not needed | +| /oracle/\<SID\>/sapdata1...n | Premium | None | Recommended | +| /oracle/\<SID\>/oraarch<sup>2</sup> | Premium | None | Not needed | | Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: LVM stripe using RAID0-2. During R3load migrations, the Host Cache option for SAPDATA should be set to None -3. oraarch: LVM is optional +2. oraarch: LVM is optional ## Azure Infra: Virtual machine Throughput Limits & Azure Disk Storage Options -### Oracle Automatic Storage Management (ASM)## can evaluate these storage technologies: +### Current recommendations for Oracle Storage -1. Azure Premium Storage – currently the default choice +1. Azure Premium Storage – Most customers are deploying on ASM with Premium Storage +2. Azure NetApp Files – VLDB customers, often with single Oracle databases larger than 50 TB, typically use ANF and its Storage Snapshot capabilities for backup and restore 3. Managed Disk Bursting - [Managed disk bursting - Azure Virtual Machines \| Microsoft Docs](/azure/virtual-machines/disk-bursting)-4. Azure Write Accelerator -5. Online disk extension for Azure Premium SSD storage is still in progress +4. Azure Write Accelerator – used when the Oracle redo log is on Premium SSD v1 disks +5. Online disk extension is fully supported for Premium Storage v1 and works with ASM Log write times can be improved on Azure M-Series VMs by enabling Write Accelerator. Enable Azure Write Accelerator for the Azure Premium Storage disks used by the ASM Disk Group for <u>online redo log files</u>. 
For more information, see [<u>Write Accelerator</u>](/azure/virtual-machines/how-to-enable-write-accelerator). The following recommendations should be followed when selecting a VM type: For backup/restore functionality, the SAP BR\*Tools for Oracle are supported in the same way as they are on bare metal and Hyper-V. Oracle Recovery Manager (RMAN) is also supported for backups to disk and restores from disk. For more information about how you can use Azure Backup and Recovery services for Oracle databases, see:-- [<u>Back up and recover an Oracle Database 12c database on an Azure Linux virtual machine</u>](/azure/virtual-machines/workloads/oracle/oracle-overview)-- [<u>Azure Backup service</u>](../../backup/backup-overview.md) is also supporting Oracle backups as described in the article [<u>Back up and recover an Oracle Database 19c database on an Azure Linux VM using Azure Backup</u>](/azure/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup).+- The [<u>Azure Backup service</u>](../../backup/backup-overview.md) also supports Oracle backups, as described in the article [<u>Back up and recover an Oracle Database on an Azure Linux VM using Azure Backup</u>](/azure/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup). ## High availability At the time of writing, ASM for Windows customers on Azure isn't supported. The |--|-|--|--| | E:\oracle\\\<SID\>\origlogaA & mirrlogB | Premium | None | Not needed | | F:\oracle\\\<SID\>\origlogaB & mirrlogA | Premium | None | Not needed |-| G:\oracle\\\<SID\>\sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended | -| H:\oracle\\\<SID\>\oraarch<sup>3</sup> | Premium | None | Not needed | +| G:\oracle\\\<SID\>\sapdata1...n | Premium | None | Recommended | +| H:\oracle\\\<SID\>\oraarch<sup>2</sup> | Premium | None | Not needed | | I:\Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: Windows Storage Spaces-2. 
During R3load migrations, the Host Cache option for SAPDATA should be set to None -3. oraarch: Windows Storage Spaces is optional +2. oraarch: Windows Storage Spaces is optional The disk selection for hosting Oracle's online redo logs is driven by IOPS requirements. It's possible to store all sapdata1...n (tablespaces) on a single mounted disk as long as the volume, IOPS, and throughput satisfy the requirements. The disk selection for hosting Oracle's online redo logs is driven by IOPS requi | F:\oracle\\\<SID\>\origlogaB | Premium | None | Can be used | | G:\oracle\\\<SID\>\mirrlogAB | Premium | None | Can be used | | H:\oracle\\\<SID\>\mirrlogBA | Premium | None | Can be used |-| I:\oracle\\\<SID\>\sapdata1...n | Premium | Read-only<sup>2</sup> | Recommended | -| J:\oracle\\\<SID\>\oraarch<sup>3</sup> | Premium | None | Not needed | +| I:\oracle\\\<SID\>\sapdata1...n | Premium | None | Recommended | +| J:\oracle\\\<SID\>\oraarch<sup>2</sup> | Premium | None | Not needed | | K:\Oracle Home, saptrace, ... | Premium | None | None | 1. Striping: Windows Storage Spaces-2. During R3load migrations, the Host Cache option for SAPDATA should be set to None -3. oraarch: Windows Storage Spaces is optional +2. oraarch: Windows Storage Spaces is optional ### Links for Oracle on Windows - [Overview of Windows Tuning (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntqrf/overview-of-windows-tuning.html#GUID-C0A0EC5D-65DD-4693-80B1-DA2AB6147AB9) |
sap | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md | In the SAP workload documentation space, you can find the following areas: ## Change Log -- October 25, 2024: Adding documentation link to [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms-guide-sqlserver.md) that describes how to disable SMT to be able to use some Mv3 SKUs where SQL Server would have a problem with too large NUMA nodes +- October 28, 2024: Added information on RedHat support and the configuration of Azure fence agents for VMs in the Azure Government cloud to the document [Set up Pacemaker on Red Hat Enterprise Linux in Azure](./high-availability-guide-rhel-pacemaker.md). +- October 25, 2024: Adding documentation link to [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms-guide-sqlserver.md) that describes how to disable SMT to be able to use some Mv3 SKUs where SQL Server would have a problem with too large NUMA nodes. +- October 16, 2024: Included ordering constraints in [High availability of SAP HANA scale-up with Azure NetApp Files on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) to ensure SAP resources on a node stop before any of the NFS mounts. - October 14, 2024: Change several database guides mentioning that with several Mv3 VM types, IOPS and throughput could be lower when using read cached Premium SSD v1 disks compared to using non-cached disks - October 7, 2024: Changes in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms-guide-sqlserver.md), documenting new Mv3 SKUs that will not work with SQL Server because of NUMA nodes larger than 64 vCPUs - October 5, 2024: Changes in documenting active/active and active/passive application layer in [SAP workload configurations with Azure Availability Zones](./high-availability-zones.md). Eliminating the list of regions for each of the cases |
sap | High Availability Guide Rhel Pacemaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md | Based on the selected fencing mechanism, follow only one section for relevant in 2. **[1]** Run the appropriate command depending on whether you're using a managed identity or a service principal for the Azure fence agent. > [!NOTE]- > The option `pcmk_host_map` is *only* required in the command if the RHEL hostnames and the Azure VM names are *not* identical. Specify the mapping in the format **hostname:vm-name**. - > - > Refer to the bold section in the command. For more information, see [What format should I use to specify node mappings to fencing devices in pcmk_host_map?](https://access.redhat.com/solutions/2619961). + > When using the Azure Government cloud, you must specify the `cloud=` option when configuring the fence agent. For example, `cloud=usgov` for the Azure US Government cloud. For details on Red Hat support on the Azure Government cloud, see [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341). ++ > [!TIP] + > The option `pcmk_host_map` is *only* required in the command if the RHEL hostnames and the Azure VM names are *not* identical. Specify the mapping in the format **hostname:vm-name**. For more information, see [What format should I use to specify node mappings to fencing devices in pcmk_host_map?](https://access.redhat.com/solutions/2619961). #### [Managed identity](#tab/msi) |
sentinel | Ueba Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ueba-reference.md | After you [enable UEBA](enable-entity-behavior-analytics.md) for your Microsoft While the initial synchronization may take a few days, once the data is fully synchronized: -- Changes made to your user profiles in Microsoft Entra ID are updated in the **IdentityInfo** table within 15 minutes.--- Group and role information is synchronized between the **IdentityInfo** table and Microsoft Entra ID daily.+- Changes made to your user profiles, groups, and roles in Microsoft Entra ID are updated in the **IdentityInfo** table within 15-30 minutes. - Every 14 days, Microsoft Sentinel re-synchronizes with your entire Microsoft Entra ID to ensure that stale records are fully updated. |
virtual-network | Virtual Network Encryption Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md | Virtual network encryption has the following requirements: | Type | VM Series | VM SKU | | | | |- | General purpose workloads | D-series V4 </br> D-series V5 </br> D-series V6 | **[Dv4 and Dsv4-series](/azure/virtual-machines/dv4-dsv4-series)** </br> **[Ddv4 and Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series)** </br> **[Dav4 and Dasv4-series](/azure/virtual-machines/dav4-dasv4-series)** </br> **[Dv5 and Dsv5-series](/azure/virtual-machines/dv5-dsv5-series)** </br> **[Ddv5 and Ddsv5-series](/azure/virtual-machines/ddv5-ddsv5-series)** </br> **[Dlsv5 and Dldsv5-series](/azure/virtual-machines/dlsv5-dldsv5-series)** </br> **[Dasv5 and Dadsv5-series](/azure/virtual-machines/dasv5-dadsv5-series)** </br> **[Dasv6 and Dadsv6-series](/azure/virtual-machines/dasv6-dadsv6-series)** </br> **[Dalsv6 and Daldsv6-series](/azure/virtual-machines/dalsv6-daldsv6-series)** | + | General purpose workloads | D-series V4 </br> D-series V5 </br> D-series V6 | **[Dv4 and Dsv4-series](/azure/virtual-machines/dv4-dsv4-series)** </br> **[Ddv4 and Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series)** </br> **[Dav4 and Dasv4-series](/azure/virtual-machines/dav4-dasv4-series)** </br> **[Dv5 and Dsv5-series](/azure/virtual-machines/dv5-dsv5-series)** </br> **[Ddv5 and Ddsv5-series](/azure/virtual-machines/ddv5-ddsv5-series)** </br> **[Dlsv5 and Dldsv5-series](/azure/virtual-machines/dlsv5-dldsv5-series)** </br> **[Dasv5 and Dadsv5-series](/azure/virtual-machines/dasv5-dadsv5-series)** </br> **[Dasv6 and Dadsv6-series](/azure/virtual-machines/dasv6-dadsv6-series)** </br> **[Dalsv6 and Daldsv6-series](/azure/virtual-machines/dalsv6-daldsv6-series)** </br> **[Dsv6-series](/azure/virtual-machines/sizes/general-purpose/dsv6-series)** | | Memory intensive workloads | E-series V4 </br> E-series V5 </br> E-series 
V6 </br> M-series V2 </br> M-series V3 | **[Ev4 and Esv4-series](/azure/virtual-machines/ev4-esv4-series)** </br> **[Edv4 and Edsv4-series](/azure/virtual-machines/edv4-edsv4-series)** </br> **[Eav4 and Easv4-series](/azure/virtual-machines/eav4-easv4-series)** </br> **[Ev5 and Esv5-series](/azure/virtual-machines/ev5-esv5-series)** </br> **[Edv5 and Edsv5-series](/azure/virtual-machines/edv5-edsv5-series)** </br> **[Easv5 and Eadsv5-series](/azure/virtual-machines/easv5-eadsv5-series)** </br> **[Easv6 and Eadsv6-series](/azure/virtual-machines/easv6-eadsv6-series)** </br> **[Mv2-series](/azure/virtual-machines/mv2-series)** </br> **[Msv2 and Mdsv2 Medium Memory series](/azure/virtual-machines/msv2-mdsv2-series)** </br> **[Msv3 and Mdsv3 Medium Memory series](/azure/virtual-machines/msv3-mdsv3-medium-series)** | | Storage intensive workloads | L-series V3 | **[LSv3-series](/azure/virtual-machines/lsv3-series)** | | Compute optimized | F-series V6 | **[Falsv6-series](/azure/virtual-machines/sizes/compute-optimized/falsv6-series)** </br> **[Famsv6-series](/azure/virtual-machines/sizes/compute-optimized/famsv6-series)** </br> **[Fasv6-series](/azure/virtual-machines/sizes/compute-optimized/fasv6-series)** | |
virtual-wan | How To Network Virtual Appliance Inbound | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-network-virtual-appliance-inbound.md | description: Learn how to use Destination NAT with a Network Virtual Appliance i Previously updated : 01/04/2023 Last updated : 10/25/2024 # Customer intent: As someone with a networking background, I want to create a Network Virtual Appliance (NVA) in my Virtual WAN hub and leverage destination NAT. The following configurations are performed: #### Inbound traffic flow -The list below corresponds to the diagram above and describes the packet flow for the inbound connection: +The following list corresponds to the diagram above and describes the packet flow for the inbound connection: 1. The user initiates a connection with one of the Public IPs used for DNAT associated to the NVA. 1. Azure load balances the connection request to one of the Firewall NVA instances. Traffic is sent to the external/untrusted interface of the NVA. 1. NVA inspects the traffic and translates the packet based on rule configuration. In this case, the NVA is configured to NAT and forward inbound traffic to 10.60.0.4:443. The source of the packet is also translated to the private IP (IP of trusted/internal interface) of the chosen Firewall instance to ensure flow symmetry. The NVA forwards the packet and Virtual WAN routes the packet to the final destination. #### Outbound traffic flow The list below corresponds to the diagram above and describes the packet flow for the outbound response: The list below corresponds to the diagram above and describes the packet flow fo * Destination NAT Public IPs must be from the same region as the NVA resource. For example, if the NVA is deployed in the East US region, the public IP must also be from the East US region. * Destination NAT Public IPs can't be in use by another Azure resource. 
For example, you can't use an IP address in use by a Virtual Machine network interface IP Configuration or a Standard Load Balancer front-end configuration. * Public IPs must be from IPv4 address spaces. Virtual WAN doesn't support IPv6 addresses.- * Public IPs must be deployed with Standard SKU. Basic SKU Public IPs are not supported. + * Public IPs must be deployed with Standard SKU. Basic SKU Public IPs aren't supported. * Destination NAT is only supported on new NVA deployments that are created with at least one Destination NAT Public IP. Existing NVA deployments or NVA deployments that didn't have a Destination NAT Public IP associated at NVA creation time aren't eligible to use Destination NAT. * Programming Azure infrastructure components to support DNAT scenarios is done automatically by NVA orchestration software when a DNAT rule is created. Therefore, you can't program NVA rules through Azure portal. However, you can view the inbound security rules associated to each internet inbound Public IP. * DNAT traffic in Virtual WAN can only be routed to connections to the same hub as the NVA. Inter-hub traffic patterns with DNAT aren't supported. The list below corresponds to the diagram above and describes the packet flow fo ### Considerations * Inbound Traffic is automatically load-balanced across all healthy instances of the Network Virtual Appliance.-* In most cases, NVAs must perform source-NAT to the Firewall private IP in addition to destination-NAT to ensure flow symmetry. Certain NVA types may not require source-NAT. Contact your NVA provider for best practices around source-NAT. +* In most cases, NVAs must perform source-NAT to the Firewall private IP in addition to destination-NAT to ensure flow symmetry. Certain NVA types might not require source-NAT. Contact your NVA provider for best practices around source-NAT. * Timeout for idle flows is automatically set to 4 minutes. 
* You can assign individual IP address resources generated from an IP address prefix to the NVA as internet inbound IPs. Assign each IP address from the prefix individually. The following section describes how to manage NVA configurations related to inte > IP addresses can only be removed when there are no rules associated to that IP. Remove all rules associated to the IP by removing DNAT rules assigned to that IP from your NVA management software. Select the IP you want to remove from the grid and click **Delete**. ## Programming DNAT Rules The following section describes some common troubleshooting scenarios. * **Option to associate IP to NVA resource not available through Azure portal** : Only NVAs that are created with DNAT/Internet Inbound IPs at deployment time are eligible to use DNAT capabilities. Delete and re-create the NVA with an Internet Inbound IP assigned at deployment time. * **IP address not showing up in dropdown Azure portal**: Public IPs only show up in the dropdown menu if the IP address is IPv4, in the same region as the NVA and isn't in use/assigned to another Azure resource. Ensure the IP address you're trying to use meets the above requirements, or create a new IP address. * **Can't delete/disassociate Public IP from NVA**: Only IP addresses that have no rules associated with them can be deleted. Use the NVA orchestration software to remove any DNAT rules associated to that IP address.-* **NVA provisioning state not succeeded**: If there are on-going operations on the NVA or if the provisioning status of the NVA is **not successful**, IP address association fails. Wait for any existing operations to terminate. +* **NVA provisioning state not succeeded**: If there are ongoing operations on the NVA or if the provisioning status of the NVA is **not successful**, IP address association fails. Wait for any existing operations to terminate. ### <a name="healthprobeconfigs"></a> Load balancer health probes |
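The inbound flow in this entry pairs destination NAT (rewrite the destination to the backend, here 10.60.0.4:443) with source NAT to the chosen firewall instance's trusted-interface IP so the return path stays symmetric. A minimal sketch of that translation; the `Packet` type, helper name, and all addresses are illustrative, not part of any Azure API:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

def nva_inbound_translate(pkt: Packet, dnat_target: tuple,
                          nva_trusted_ip: str) -> Packet:
    """Model the NVA's inbound rewrite: DNAT the destination to the backend
    workload, and SNAT the source to the NVA's trusted interface so return
    traffic comes back through the same firewall instance (flow symmetry)."""
    return replace(pkt,
                   dst_ip=dnat_target[0], dst_port=dnat_target[1],
                   src_ip=nva_trusted_ip)

# A client hits the DNAT public IP on 443; the NVA rewrites both ends.
inbound = Packet("203.0.113.10", 50000, "20.0.0.1", 443)
translated = nva_inbound_translate(inbound, ("10.60.0.4", 443), "10.0.1.4")
print(translated.dst_ip, translated.src_ip)  # 10.60.0.4 10.0.1.4
```

The client source port is left untouched; only the source address changes, which is the minimum needed for the backend's reply to route back via the same NVA instance.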
virtual-wan | Howto Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-firewall.md | |
virtual-wan | Migrate From Hub Spoke Topology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/migrate-from-hub-spoke-topology.md | description: Learn how to migrate from an existing customer-managed hub-and-spok Previously updated : 10/13/2022 Last updated : 10/25/2024 This article shows how to migrate an existing customer-managed hub-and-spoke env ## Scenario -Contoso is a global financial organization with offices in both Europe and Asia. They are planning to move their existing applications from an on-premises data center in to Azure and have built out a foundation design based on the customer-managed hub-and-spoke architecture, including regional hub virtual networks for hybrid connectivity. As part of the move to cloud-based technologies, the network team has been tasked with ensuring that their connectivity is optimized for the business moving forward. +Contoso is a global financial organization with offices in both Europe and Asia. They're planning to move their existing applications from an on-premises data center into Azure and have built out a foundation design based on the customer-managed hub-and-spoke architecture, including regional hub virtual networks for hybrid connectivity. As part of the move to cloud-based technologies, the network team has been tasked with ensuring that their connectivity is optimized for the business moving forward. The following figure shows a high-level view of the existing global network including connectivity to multiple Azure regions. |
virtual-wan | Point To Site Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/point-to-site-concepts.md | description: Learn about Virtual WAN User VPN P2S VPN concepts. Previously updated : 12/05/2022 Last updated : 10/25/2024 The following concepts are related to server configurations that use certificate | Concept | Description | Notes| |--| --|--|-| Root certificate name | Name used by Azure to identify customer root certificates. | Can be configured to be any name. You may have multiple root certificates. | +| Root certificate name | Name used by Azure to identify customer root certificates. | Can be configured to be any name. You can have multiple root certificates. | | Public certificate data | Root certificate(s) from which client certificates are issued.| Input the string corresponding to the root certificate public data. For an example of how to get root certificate public data, see step 8 in the following document about [generating certificates](certificates-point-to-site.md). | | Revoked certificate | Name used by Azure to identify certificates to be revoked. | Can be configured to be any name.| | Revoked certificate thumbprint| Thumbprint of the end user certificate(s) that shouldn't be able to connect to the gateway. | The input for this parameter is one or more certificate thumbprints. Every user certificate must be revoked individually. Revoking an intermediate certificate or a root certificate won't automatically revoke all child certificates. | If a P2S VPN gateway is configured to use RADIUS-based authentication, the P2S V | Primary server IP address|Private IP address of RADIUS server| This IP must be a private IP reachable by the Virtual Hub.
Make sure the connection hosting the RADIUS server is propagating to the defaultRouteTable of the hub with the gateway.| | Secondary server secret| Server secret configured on the second RADIUS server that is used for encryption by RADIUS protocol.| Any provided shared secret string.| | Secondary server IP address|The private IP address of the RADIUS server| This IP must be a private IP reachable by the virtual hub. Make sure the connection hosting the RADIUS server is propagating to the defaultRouteTable of the hub with the gateway.|-|RADIUS server root certificate | RADIUS server root certificate public data.| This field is optional. Input the string(s) corresponding to the RADIUS root certificate public data. You may input multiple root certificates. All client certificates presented for authentication must be issued from the specified root certificates. For an example for how to get certificate public data, see the step 8 in the following document about [generating certificates](certificates-point-to-site.md).| +|RADIUS server root certificate | RADIUS server root certificate public data.| This field is optional. Input the string(s) corresponding to the RADIUS root certificate public data. You can input multiple root certificates. All client certificates presented for authentication must be issued from the specified root certificates. For an example of how to get certificate public data, see step 8 in the following document about [generating certificates](certificates-point-to-site.md).| |Revoked client certificates |Thumbprint(s) of revoked RADIUS client certificates. Clients presenting revoked certificates won't be able to connect. |This field is optional. Every user certificate must be revoked individually.
Revoking an intermediate certificate or a root certificate won't automatically revoke all child certificates.| <a name='azure-active-directory-authentication-concepts'></a> The server configuration contains the definitions of groups and the groups are t |--| --|--| |User group / policy group|A user Group or policy group is a logical representation of a group of users that should be assigned IP addresses from the same address pool.| For more information, see [about user groups.](user-groups-about.md)| |Default group|When users try to connect to a gateway using the user group feature, users who don't match any group assigned to the gateway are automatically considered to be part of the default group and assigned an IP address associated to that group. |Each group in a server configuration can be specified as a default group or non-default group and this setting **cannot** be changed after the group has been created. Exactly one default group can be assigned to each P2S VPN gateway, even if the assigned server configuration has multiple default groups.|-|Group priority|When multiple groups are assigned to a gateway a connecting user may present credentials that match multiple groups. Virtual WAN processes groups assigned to a gateway in increasing order of priority.|Priorities are positive integers and groups with lower numerical priorities are processed first. Every group must have a distinct priority.| +|Group priority|When multiple groups are assigned to a gateway, a connecting user might present credentials that match multiple groups. Virtual WAN processes groups assigned to a gateway in increasing order of priority.|Priorities are positive integers and groups with lower numerical priorities are processed first. Every group must have a distinct priority.| |Group settings/members| User groups consist of members.
Members don't correspond to individual users but rather define the criteria).| ## Gateway configuration concepts There can be one or more connection configurations on a P2S VPN gateway. Each co |--|--|--| | Configuration Name | Name for a P2S VPN configuration | Any name can be provided. You can have more than one connection configuration on a gateway if you're leveraging the user groups/multi-pool feature. If you aren't using this feature, there can only be one configuration per gateway.| | User Groups | User groups that correspond to a configuration | Any user group(s) referenced in the VPN Server configuration. This parameter is optional. For more information, see [about user groups](user-groups-about.md).|
Having different propagations for branches connections may result in unexpected routing behaviors, as Virtual WAN will choose the routing configuration for one branch and apply it to all branches and therefore routes learned from on-premises.| +| Address Pools|Address pools are private IP addresses that connecting users are assigned.|Address pools can be specified as any CIDR block that doesn't overlap with any Virtual Hub address spaces, IP addresses used in Virtual Networks connected to Virtual WAN or addresses advertised from on-premises. Depending on the scale unit specified on the gateway, you might need more than one CIDR block. For more information, see [about address pools](about-client-address-pools.md).| +|Routing configuration|Every connection to Virtual Hub has a routing configuration, which defines which route table the connection is associated to and which route tables the route table propagates to. |All branch connections to the same hub (ExpressRoute, VPN, NVA) must associate to the defaultRouteTable and propagate to the same set of route tables. Having different propagations for branches connections might result in unexpected routing behaviors, as Virtual WAN will choose the routing configuration for one branch and apply it to all branches and therefore routes learned from on-premises.| ## Next steps |
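The address-pool requirement in this entry (CIDR blocks that overlap neither the virtual hub address spaces, nor connected VNets, nor prefixes advertised from on-premises) can be pre-checked offline. A small sketch using Python's `ipaddress` module; the `pool_is_valid` helper and all prefixes are illustrative:

```python
import ipaddress

def pool_is_valid(candidate: str, in_use: list) -> bool:
    """Return True if the proposed P2S client address pool overlaps none of
    the prefixes already used by the hub, connected VNets, or on-premises."""
    pool = ipaddress.ip_network(candidate)
    return not any(pool.overlaps(ipaddress.ip_network(p)) for p in in_use)

# Hypothetical prefixes already in use across the Virtual WAN deployment.
existing = ["10.0.0.0/16", "10.1.0.0/16", "192.168.0.0/24"]
print(pool_is_valid("172.16.10.0/24", existing))  # True: no overlap
print(pool_is_valid("10.1.8.0/22", existing))     # False: inside 10.1.0.0/16
```

Running this kind of check before assigning pools avoids the deployment-time failure that an overlapping CIDR would otherwise cause.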
virtual-wan | Scenario 365 Expressroute Private | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-365-expressroute-private.md | |
virtual-wan | Scenario Any To Any | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-any-to-any.md | |
virtual-wan | Scenario Bgp Peering Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-bgp-peering-hub.md | description: Learn about BGP peering with an Azure Virtual WAN virtual hub. Previously updated : 10/30/2023 Last updated : 10/25/2024 The virtual hub router now also exposes the ability to peer with it, thereby exc **Considerations** * You can only peer the virtual hub router with NVAs that are deployed in directly connected VNets. - * Configuring BGP peering between an on-premises NVA and the virtual hub router is not supported. - * Configuring BGP peering between an Azure Route Server and the virtual hub router is not supported. + * Configuring BGP peering between an on-premises NVA and the virtual hub router isn't supported. + * Configuring BGP peering between an Azure Route Server and the virtual hub router isn't supported. * The virtual hub router only supports 16-bit (2 bytes) ASN. * The virtual network connection that has the NVA BGP connection endpoint must always be associated and propagating to defaultRouteTable. Custom route tables aren't supported at this time. * The virtual hub router supports transit connectivity between virtual networks connected to virtual hubs. This has nothing to do with this feature for BGP peering capability as Virtual WAN already supports transit connectivity. Examples: The virtual hub router now also exposes the ability to peer with it, thereby exc * Private ASNs: 65515, 65517, 65518, 65519, 65520 * ASNs reserved by IANA: 23456, 64496-64511, 65535-65551 * While the virtual hub router exchanges BGP routes with your NVA and propagates them to your virtual network, it directly facilitates propagating routes from on-premises via the virtual hub hosted gateways (VPN gateway/ExpressRoute gateway/Managed NVA gateways).-* BGP peering is only supported with an IP address that is assigned to an interface of the NVA. Peering with loopbacks is not supported. 
+* BGP peering is only supported with an IP address that is assigned to an interface of the NVA. Peering with loopbacks isn't supported. The virtual hub router has the following limits: |
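The ASN restrictions in this entry (16-bit ASNs only, minus the Azure-reserved private ASNs and the IANA-reserved values) amount to a simple membership test. A sketch with a hypothetical `nva_asn_allowed` helper, encoding only the values quoted above:

```python
# Reserved values as listed in this entry; 65535-65551 partly exceeds
# 16 bits, but the range check below excludes those anyway.
AZURE_RESERVED = {65515, 65517, 65518, 65519, 65520}
IANA_RESERVED = {23456} | set(range(64496, 64512)) | set(range(65535, 65552))

def nva_asn_allowed(asn: int) -> bool:
    """The virtual hub router only peers with 16-bit (2-byte) ASNs that
    aren't reserved by Azure or IANA."""
    return 0 < asn < 65536 and asn not in AZURE_RESERVED | IANA_RESERVED

print(nva_asn_allowed(65010))       # True: usable private ASN
print(nva_asn_allowed(65515))       # False: reserved by Azure
print(nva_asn_allowed(4200000000))  # False: 32-bit ASN not supported
```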
virtual-wan | Scenario Isolate Vnets Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-vnets-custom.md | |
virtual-wan | Scenario Isolate Vnets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-vnets.md | |
virtual-wan | Scenario Route Between Vnets Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-route-between-vnets-firewall.md | description: Learn about routing scenarios to route traffic between VNets direct Previously updated : 02/13/2023 Last updated : 10/25/2024 |
virtual-wan | Scenario Route Through Nva | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-route-through-nva.md | description: Learn about Virtual WAN routing scenarios to route traffic through Previously updated : 02/13/2023 Last updated : 10/25/2024 In this scenario we'll use the following naming convention: * "Non-NVA VNets" for virtual networks connected to Virtual WAN that don't have an NVA or other VNets peered with them (VNet 1 and VNet 3 in the **Figure 2** further down in the article). * "Hubs" for Microsoft-managed Virtual WAN Hubs, where NVA VNets are connected to. NVA spoke VNets don't need to be connected to Virtual WAN hubs, only to NVA VNets. -The following connectivity matrix, summarizes the flows supported in this scenario: +The following connectivity matrix summarizes the flows supported in this scenario: **Connectivity matrix** The following connectivity matrix, summarizes the flows supported in this scenar | **Non-NVA VNets**| → | Over NVA VNet | Direct | Direct | Direct | | **Branches** | → | Over NVA VNet | Direct | Direct | Direct | -Each of the cells in the connectivity matrix describes how a VNet or branch (the "From" side of the flow, the row headers in the table) communicates with a destination VNet or branch (the "To" side of the flow, the column headers in italics in the table). "Direct" means that connectivity is provided natively by Virtual WAN, "Peering" means that connectivity is provided by a User-Defined Route in the VNet, "Over NVA VNet" means that the connectivity traverses the NVA deployed in the NVA VNet. Consider the following: +Each of the cells in the connectivity matrix describes how a VNet or branch (the "From" side of the flow, the row headers in the table) communicates with a destination VNet or branch (the "To" side of the flow, the column headers in italics in the table). 
"Direct" means that connectivity is provided natively by Virtual WAN, "Peering" means that connectivity is provided by a User-Defined Route in the VNet, "Over NVA VNet" means that the connectivity traverses the NVA deployed in the NVA VNet. Consider the following items: * NVA Spokes aren't managed by Virtual WAN. As a result, the mechanisms with which they'll communicate to other VNets or branches are maintained by the user. Connectivity to the NVA VNet is provided by a VNet peering, and a Default route to 0.0.0.0/0 pointing to the NVA as next hop should cover connectivity to the Internet, to other spokes, and to branches * NVA VNets knows about their own NVA spokes, but not about NVA spokes connected to other NVA VNets. For example, in the Figure 2 further down in this article, VNet 2 knows about VNet 5 and VNet 6, but not about other spokes such as VNet 7 and VNet 8. A static route is required to inject other spokes' prefixes into NVA VNets |
virtual-wan | Scenario Route Through Nvas Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-route-through-nvas-custom.md | -> Please note that for the routing scenarios below, the Virtual WAN hub and Spoke Virtual Network containing the NVA must be in the same Azure Region. +> Please note that for the following routing scenarios, the Virtual WAN hub and Spoke Virtual Network containing the NVA must be in the same Azure Region. ## Design To set up routing via NVA, consider the following steps: 1. For internet-bound traffic to go via VNet 5, you need VNets 1, 2, and 3 to directly connect via virtual network peering to VNet 5. You also need a user-defined route set up in the virtual networks for 0.0.0.0/0 and next hop 10.5.0.5. - If you do not want to connect VNets 1, 2, and 3 to VNet 5 and instead just use the NVA in VNet 4 to route 0.0.0.0/0 traffic from branches (on-premises VPN or ExpressRoute connections), go to the [alternate workflow](#alternate). + If you don't want to connect VNets 1, 2, and 3 to VNet 5 and instead just use the NVA in VNet 4 to route 0.0.0.0/0 traffic from branches (on-premises VPN or ExpressRoute connections), go to the [alternate workflow](#alternate). However, if you want VNet-to-VNet traffic to transit through the NVA, you would need to disconnect VNet 1,2,3 from the virtual hub and connect it or stack it above the NVA Spoke VNet4. In Virtual WAN, VNet-to-VNet traffic transits through the Virtual WAN hub or a Virtual WAN hub’s Azure Firewall (Secure hub). If VNets peer directly using VNet peering, then they can communicate directly bypassing the transit through the virtual hub. To set up routing via NVA, consider the following steps: * Add an aggregated route '10.2.0.0/16' with next hop as the VNet 4 connection for traffic going from VNets 1, 2, and 3 towards branches. 
In the VNet4 connection, configure a route for '10.2.0.0/16' and indicate the next hop to be the specific IP of the NVA in VNet 4. - * Add a route '0.0.0.0/0' with next hop as the VNet 4 connection. '0.0.0.0/0' is added to imply sending traffic to internet. It does not imply specific address prefixes pertaining to VNets or branches. In the VNet4 connection, configure a route for '0.0.0.0/0', and indicate the next hop to be the specific IP of the NVA in VNet 4. + * Add a route '0.0.0.0/0' with next hop as the VNet 4 connection. '0.0.0.0/0' is added to imply sending traffic to internet. It doesn't imply specific address prefixes pertaining to VNets or branches. In the VNet4 connection, configure a route for '0.0.0.0/0', and indicate the next hop to be the specific IP of the NVA in VNet 4. * **Association:** Select all **VNets 1, 2, and 3**. This implies that VNet connections 1, 2, and 3 will associate to this route table and be able to learn routes (static and dynamic via propagation) in this route table. - * **Propagation:** Connections propagate routes to route tables. Selecting VNets 1, 2, and 3 enables propagating routes from VNets 1, 2, and 3 to this route table. Make sure the option for branches (VPN/ER/P2S) is not selected. This ensures that on-premises connections cannot get to the VNets 1, 2, and 3 directly. + * **Propagation:** Connections propagate routes to route tables. Selecting VNets 1, 2, and 3 enables propagating routes from VNets 1, 2, and 3 to this route table. Make sure the option for branches (VPN/ER/P2S) is not selected. This ensures that on-premises connections can't get to the VNets 1, 2, and 3 directly. 1. Edit the default route table, **DefaultRouteTable**. To set up routing via NVA, consider the following steps: * Add an aggregated route '10.1.0.0/16' for **VNets 1, 2, and 3** with next hop as the **VNet 4 connection**. - * Add a route '0.0.0.0/0' with next hop as the **VNet 4 connection**. 
'0.0.0.0/0' is added to imply sending traffic to internet. It does not imply specific address prefixes pertaining to VNets or branches. In the prior step for the VNet4 connection, you would already have configured a route for '0.0.0.0/0', with next hop to be the specific IP of the NVA in VNet 4. + * Add a route '0.0.0.0/0' with next hop as the **VNet 4 connection**. '0.0.0.0/0' is added to imply sending traffic to internet. It doesn't imply specific address prefixes pertaining to VNets or branches. In the prior step for the VNet4 connection, you would already have configured a route for '0.0.0.0/0', with next hop to be the specific IP of the NVA in VNet 4. * **Association:** Make sure the option for branches **(VPN/ER/P2S)** is selected. This ensures that on-premises branch connections are associated to the default route table. All VPN, Azure ExpressRoute, and user VPN connections are associated only to the default route table. To set up routing via NVA, consider the following steps: * Portal users must enable 'Propagate to default route' on connections (VPN/ER/P2S/VNet) for the 0.0.0.0/0 route to take effect. * PS/CLI/REST users must set flag 'enableinternetsecurity' to true for the 0.0.0.0/0 route to take effect.-* Virtual network connection does not support 'multiple/unique' next hop IP to the 'same' network virtual appliance in a spoke VNet 'if' one of the routes with next hop IP is indicated to be public IP address or 0.0.0.0/0 (internet). -* When 0.0.0.0/0 is configured as a static route on a virtual network connection, that route is applied to all traffic, including the resources within the spoke itself. This means all traffic will be forwarded to the next hop IP address of the static route (NVA Private IP). Thus, in deployments with a 0.0.0.0/0 route with next hop NVA IP address configured on a spoke virtual network connection, to access workloads in the same virtual network as the NVA directly (i.e. 
so that traffic does not pass through the NVA), specify a /32 route on the spoke virtual network connection. For instance, if you want to access 10.1.3.1 directly, specify 10.1.3.1/32 next hop 10.1.3.1 on the spoke virtual network connection. +* Virtual network connection doesn't support 'multiple/unique' next hop IP to the 'same' network virtual appliance in a spoke VNet 'if' one of the routes with next hop IP is indicated to be public IP address or 0.0.0.0/0 (internet). +* When 0.0.0.0/0 is configured as a static route on a virtual network connection, that route is applied to all traffic, including the resources within the spoke itself. This means all traffic will be forwarded to the next hop IP address of the static route (NVA Private IP). Thus, in deployments with a 0.0.0.0/0 route with next hop NVA IP address configured on a spoke virtual network connection, to access workloads in the same virtual network as the NVA directly (i.e. so that traffic doesn't pass through the NVA), specify a /32 route on the spoke virtual network connection. For instance, if you want to access 10.1.3.1 directly, specify 10.1.3.1/32 next hop 10.1.3.1 on the spoke virtual network connection. * To simplify routing and to reduce the changes in the Virtual WAN hub route tables, we encourage using the new "BGP peering with Virtual WAN hub" option. * [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md) |
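The per-connection static route described in this scenario (all traffic to the NVA's private IP, configured on the spoke VNet connection) can be sketched as the `routingConfiguration` block of an ARM-style hub virtual network connection resource. This is an illustrative fragment, not taken from the commit; the route name and the NVA IP `10.1.4.5` are placeholders.

```json
{
  "routingConfiguration": {
    "vnetRoutes": {
      "staticRoutes": [
        {
          "name": "all-traffic-to-nva",
          "addressPrefixes": [ "0.0.0.0/0" ],
          "nextHopIpAddress": "10.1.4.5"
        }
      ]
    }
  }
}
```

A /32 exception route (for example `10.1.3.1/32` with next hop `10.1.3.1`, as in the text) would be an additional entry in the same `staticRoutes` array.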
virtual-wan | Scenario Secured Hub App Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-secured-hub-app-gateway.md | |
virtual-wan | Scenario Shared Services Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-shared-services-vnet.md | |
virtual-wan | Virtual Wan Expressroute Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-portal.md | Once the gateway is created, you can connect an [ExpressRoute circuit](../expres First, verify that your circuit's peering status is provisioned in the **ExpressRoute circuit -> Peerings** page in Portal. Then, go to the **Virtual hub -> Connectivity -> ExpressRoute** page. If you have access in your subscription to an ExpressRoute circuit, you'll see the circuit you want to use in the list of circuits. If you don't see any circuits, but have been provided with an authorization key and peer circuit URI, you can redeem and connect a circuit. See [To connect by redeeming an authorization key](#authkey). 1. Select the circuit.-2. Select **Connect circuit(s)**. -- :::image type="content" source="./media/virtual-wan-expressroute-portal/cktconnect.png" alt-text="Screenshot shows connect circuits." border="false"::: +1. Select **Connect circuit(s)**. ### <a name="authkey"></a>To connect by redeeming an authorization key Use the authorization key and circuit URI you were provided in order to connect. 1. On the ExpressRoute page, click **+Redeem authorization key**-- :::image type="content" source="./media/virtual-wan-expressroute-portal/redeem.png" alt-text="Screenshot shows the ExpressRoute for a virtual hub with Redeem authorization key selected."border="false"::: 2. On the Redeem authorization key page, fill in the values.-- :::image type="content" source="./media/virtual-wan-expressroute-portal/redeemkey2.png" alt-text="Screenshot shows redeem authorization key values." border="false"::: 3. Select **Add** to add the key. 4. View the circuit. A redeemed circuit only shows the name (without the type, provider and other information) because it is in a different subscription than that of the user. Use the authorization key and circuit URI you were provided in order to connect. 
After the circuit connection is established, the hub connection status will indicate 'this hub', implying the connection is established to the hub ExpressRoute gateway. Wait approximately 5 minutes before you test connectivity from a client behind your ExpressRoute circuit, for example, a VM in the VNet that you created earlier. - ## To change the size of a gateway If you want to change the size of your ExpressRoute gateway, locate the ExpressRoute gateway inside the hub, and select the scale units from the dropdown. Save your change. It will take approximately 30 minutes to update the hub gateway. - ## To advertise default route 0.0.0.0/0 to endpoints If you would like the Azure virtual hub to advertise the default route 0.0.0.0/0 to your ExpressRoute end points, you'll need to enable 'Propagate default route'. If you would like the Azure virtual hub to advertise the default route 0.0.0.0/0 ## To see your Virtual WAN connection from the ExpressRoute circuit blade -Navigate to the **Connections** blade for your ExpressRoute circuit to see each ExpressRoute gateway that your ExpressRoute circuit is connected to. If the gateway is in a different subscription than the circuit, then the **Peer** field will be the circuit authorization key. +Navigate to the **Connections** page for your ExpressRoute circuit to see each ExpressRoute gateway that your ExpressRoute circuit is connected to. If the gateway is in a different subscription than the circuit, then the **Peer** field will be the circuit authorization key. :::image type="content" source="./media/virtual-wan-expressroute-portal/view-expressroute-connection.png" alt-text="Screenshot shows the initial container page." lightbox="./media/virtual-wan-expressroute-portal/view-expressroute-connection.png"::: ## Enable or disable VNet to Virtual WAN traffic over ExpressRoute+ By default, VNet to Virtual WAN traffic is disabled over ExpressRoute. You can enable this connectivity by using the following steps. 1. 
In the "Edit virtual hub" blade, enable **Allow traffic from non Virtual WAN networks**. 1. In the "Virtual network gateway" blade, enable **Allow traffic from remote Virtual WAN networks.** See instructions [here.](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md#enable-or-disable-vnet-to-vnet-or-vnet-to-virtual-wan-traffic-through-expressroute) It is recommended to keep these toggles disabled and instead create a Virtual Network connection between the standalone virtual network and Virtual WAN hub. This offers better performance and lower latency, as conveyed in our [FAQ.](virtual-wan-faq.md#when-theres-an-expressroute-circuit-connected-as-a-bow-tie-to-a-virtual-wan-hub-and-a-standalone-vnet-what-is-the-path-for-the-standalone-vnet-to-reach-the-virtual-wan-hub) |
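The portal redeem-and-connect flow covered in this article can also be scripted. A minimal Azure CLI sketch, assuming a hub ExpressRoute gateway named `hub1-ergw` in resource group `rg1`; the peering resource ID and authorization key are placeholders for values you were provided:

```shell
# Connect a hub's ExpressRoute gateway to a circuit in another subscription
# by redeeming an authorization key. All names and IDs below are placeholders.
az network express-route gateway connection create \
  --resource-group rg1 \
  --gateway-name hub1-ergw \
  --name circuit-connection \
  --peering "<peer-circuit-peering-resource-id>" \
  --authorization-key "<authorization-key>"
```

This requires an authenticated Azure CLI session and an existing hub gateway; flag behavior may vary by CLI version, so check `az network express-route gateway connection create --help`.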
vpn-gateway | Azure Vpn Client Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-versions.md | -This article helps you view each of the versions of the Azure VPN Client. As new client versions become available, they're added to this article. To view the version number of an installed Azure VPN Client, launch the client and select **Help**. +This article helps you view each of the versions of the Azure VPN Client. As new client versions become available, they're added to this article. To view the version number of an installed Azure VPN Client, launch the client and select **Help**. For the list of Azure VPN Client instructions, including how to download the Azure VPN Client, see the table in [VPN Client configuration requirements](point-to-site-about.md#what-are-the-client-configuration-requirements). ## Azure VPN Client - Windows |
vpn-gateway | Vpn Gateway Howto Vnet Vnet Resource Manager Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md | Title: 'Configure a VNet-to-VNet VPN gateway connection: Azure portal' -description: Learn how to create a VPN gateway connection between VNets. +description: Learn how to create a VPN gateway connection between virtual networks. Previously updated : 12/11/2023 Last updated : 10/29/2024 # Configure a VNet-to-VNet VPN gateway connection - Azure portal -This article helps you connect virtual networks (VNets) by using the VNet-to-VNet connection type using the Azure portal. The virtual networks can be in different regions and from different subscriptions. When you connect VNets from different subscriptions, the subscriptions don't need to be associated with the same tenant. This type of configuration creates a connection between two virtual network gateways. This article doesn't apply to VNet peering. For VNet peering, see the [Virtual Network peering](../virtual-network/virtual-network-peering-overview.md) article. +This article helps you connect virtual networks by using the VNet-to-VNet connection type using the Azure portal. The virtual networks can be in different regions and from different subscriptions. When you connect virtual networks from different subscriptions, the subscriptions don't need to be associated with the same tenant. This type of configuration creates a connection between two virtual network gateways. This article doesn't apply to VNet peering. For VNet peering, see the [Virtual Network peering](../virtual-network/virtual-network-peering-overview.md) article. :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png" alt-text="VNet to VNet diagram." 
lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png"::: You can create this configuration using various tools, depending on the deployme > * [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) > * [Azure CLI](vpn-gateway-howto-vnet-vnet-cli.md) --## About connecting VNets +## About connecting virtual networks The following sections describe the different ways to connect virtual networks. ### VNet-to-VNet -Configuring a VNet-to-VNet connection is a simple way to connect VNets. When you connect a virtual network to another virtual network with a VNet-to-VNet connection type (VNet2VNet), it's similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connection types use a VPN gateway to provide a secure tunnel with IPsec/IKE and function the same way when communicating. However, they differ in the way the local network gateway is configured. +Configuring a VNet-to-VNet connection is a simple way to connect virtual networks. When you connect a virtual network to another virtual network with a VNet-to-VNet connection type (VNet2VNet), it's similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connection types use a VPN gateway to provide a secure tunnel with IPsec/IKE and function the same way when communicating. However, they differ in the way the local network gateway is configured. When you create a VNet-to-VNet connection, the local network gateway address space is automatically created and populated. If you update the address space for one VNet, the other VNet automatically routes to the updated address space. It's typically faster and easier to create a VNet-to-VNet connection than a Site-to-Site connection. However, the local network gateway isn't visible in this configuration. 
When you create a VNet-to-VNet connection, the local network gateway address spa ### Site-to-Site (IPsec) -If you're working with a complicated network configuration, you might prefer to connect your VNets by using a [Site-to-Site connection](./tutorial-site-to-site-portal.md) instead. When you follow the Site-to-Site IPsec steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. These steps allow you to specify more address spaces for the local network gateway to route traffic. If the address space for a VNet changes, you must manually update the corresponding local network gateway. +If you're working with a complicated network configuration, you might prefer to connect your virtual networks by using a [Site-to-Site connection](./tutorial-site-to-site-portal.md) instead. When you follow the Site-to-Site IPsec steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. These steps allow you to specify more address spaces for the local network gateway to route traffic. If the address space for a VNet changes, you must manually update the corresponding local network gateway. ### VNet peering -You can also connect your VNets by using VNet peering. +You can also connect your virtual networks by using VNet peering. * VNet peering doesn't use a VPN gateway and has different constraints. * [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). VNet-to-VNet communication can be combined with multi-site configurations. 
These :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-diagram.png" alt-text="VNet connections diagram."::: -This article shows you how to connect VNets by using the VNet-to-VNet connection type. When you follow these steps as an exercise, you can use the following example settings values. In the example, the virtual networks are in the same subscription, but in different resource groups. If your VNets are in different subscriptions, you can't create the connection in the portal. Use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) or [CLI](vpn-gateway-howto-vnet-vnet-cli.md) instead. For more information about VNet-to-VNet connections, see [VNet-to-VNet FAQ](#vnet-to-vnet-faq). +This article shows you how to connect virtual networks by using the VNet-to-VNet connection type. When you follow these steps as an exercise, you can use the following example settings values. In the example, the virtual networks are in the same subscription, but in different resource groups. If your virtual networks are in different subscriptions, you can't create the connection in the portal. Use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) or [CLI](vpn-gateway-howto-vnet-vnet-cli.md) instead. For more information about VNet-to-VNet connections, see [VNet-to-VNet FAQ](#vnet-to-vnet-faq). ### Example settings This article shows you how to connect VNets by using the VNet-to-VNet connection * **Connection** * **Name**: VNet1toVNet4- * **Shared key**: You can create the shared key yourself. When you create the connection between the VNets, the values must match. For this exercise, use abc123. + * **Shared key**: You can create the shared key yourself. When you create the connection between the virtual networks, the values must match. For this exercise, use abc123. 
**Values for VNet4:** This article shows you how to connect VNets by using the VNet-to-VNet connection * **Connection** * **Name**: VNet4toVNet1- * **Shared key**: You can create the shared key yourself. When you create the connection between the VNets, the values must match. For this exercise, use abc123. + * **Shared key**: You can create the shared key yourself. When you create the connection between the virtual networks, the values must match. For this exercise, use abc123. ## Create and configure VNet1 You can see the deployment status on the Overview page for your gateway. A gatew ## Create and configure VNet4 -After you've configured VNet1, create VNet4 and the VNet4 gateway by repeating the previous steps and replacing the values with VNet4 values. You don't need to wait until the virtual network gateway for VNet1 has finished creating before you configure VNet4. If you're using your own values, make sure the address spaces don't overlap with any of the VNets to which you want to connect. +After you've configured VNet1, create VNet4 and the VNet4 gateway by repeating the previous steps and replacing the values with VNet4 values. You don't need to wait until the virtual network gateway for VNet1 has finished creating before you configure VNet4. If you're using your own values, make sure the address spaces don't overlap with any of the virtual networks to which you want to connect. ## Configure your connections When the VPN gateways for both VNet1 and VNet4 have completed, you can create your virtual network gateway connections. -VNets in the same subscription can be connected using the portal, even if they are in different resource groups. However, if your VNets are in different subscriptions, you must use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) to make the connections. +Virtual networks in the same subscription can be connected using the portal, even if they are in different resource groups. 
However, if your virtual networks are in different subscriptions, you must use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) to make the connections. You can create either a bidirectional, or single direction connection. For this exercise, we'll specify a bidirectional connection. The bidirectional connection value creates two separate connections so that traffic can flow in both directions. |
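For same-subscription connections, the bidirectional VNet-to-VNet setup described above can be sketched with the Azure CLI using the article's example values (two connections, one per direction, with matching shared keys). The gateway and resource group names follow the example settings and are otherwise assumptions:

```shell
# VNet1 -> VNet4 (names follow the article's example values)
az network vpn-connection create \
  --name VNet1toVNet4 \
  --resource-group TestRG1 \
  --vnet-gateway1 VNet1GW \
  --vnet-gateway2 VNet4GW \
  --shared-key abc123

# VNet4 -> VNet1 (reverse direction; the shared key must match)
az network vpn-connection create \
  --name VNet4toVNet1 \
  --resource-group TestRG4 \
  --vnet-gateway1 VNet4GW \
  --vnet-gateway2 VNet1GW \
  --shared-key abc123
```

If the two gateways are in different subscriptions, pass full resource IDs for `--vnet-gateway2` (or use the PowerShell steps the article links to), since name-based lookup only resolves within the current subscription.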