Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Enable Authentication Python Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md | Add the following templates under the templates folder. These templates extend t {% block metadata %} {% if config.get("B2C_RESET_PASSWORD_AUTHORITY") and "AADB2C90118" in result.get("error_description") %}- <!-- See also https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows --> + <!-- See also https://learn.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows --> <meta http-equiv="refresh" content='0;{{_build_auth_code_flow(authority=config["B2C_RESET_PASSWORD_AUTHORITY"])["auth_uri"]}}'> {% endif %} |
active-directory-b2c | Enable Authentication Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md | To sign in the user, do the following: /** * For the purpose of setting an active account for UI update, we want to consider only the auth response resulting * from SUSI flow. "tfp" claim in the id token tells us the policy (NOTE: legacy policies may use "acr" instead of "tfp").- * To learn more about B2C tokens, visit https://docs.microsoft.com/en-us/azure/active-directory-b2c/tokens-overview + * To learn more about B2C tokens, visit https://learn.microsoft.com/azure/active-directory-b2c/tokens-overview */ if (response.idTokenClaims['tfp'].toUpperCase() === b2cPolicies.names.signUpSignIn.toUpperCase()) { handleResponse(response); |
active-directory-b2c | Enable Authentication Web App With Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-app-with-api.md | public void ConfigureServices(IServiceCollection services) // This lambda determines whether user consent for non-essential cookies is needed for a given request. options.CheckConsentNeeded = context => true; options.MinimumSameSitePolicy = SameSiteMode.Unspecified;- // Handling SameSite cookie according to https://docs.microsoft.com/en-us/aspnet/core/security/samesite?view=aspnetcore-3.1 + // Handling SameSite cookie according to https://learn.microsoft.com/aspnet/core/security/samesite?view=aspnetcore-3.1 options.HandleSameSiteCookieCompatibility(); }); |
active-directory-b2c | Enable Authentication Web Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-application.md | public void ConfigureServices(IServiceCollection services) // This lambda determines whether user consent for non-essential cookies is needed for a given request. options.CheckConsentNeeded = context => true; options.MinimumSameSitePolicy = SameSiteMode.Unspecified;- // Handling SameSite cookie according to https://docs.microsoft.com/en-us/aspnet/core/security/samesite?view=aspnetcore-3.1 + // Handling SameSite cookie according to https://learn.microsoft.com/aspnet/core/security/samesite?view=aspnetcore-3.1 options.HandleSameSiteCookieCompatibility(); }); |
active-directory-b2c | Identity Provider Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md | To configure settings for social or enterprise identities, where the identity of ::: zone pivot="b2c-user-flow" +## Prerequisites +++ ## Configure local account identity provider settings |
active-directory-b2c | Javascript And Page Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md | function addTermsOfUseLink() { var termsLabelText = termsOfUseLabel.innerHTML; // create a new <a> element with the same inner text- var termsOfUseUrl = 'https://docs.microsoft.com/legal/termsofuse'; + var termsOfUseUrl = 'https://learn.microsoft.com/legal/termsofuse'; var termsOfUseLink = document.createElement('a'); termsOfUseLink.setAttribute('href', termsOfUseUrl); termsOfUseLink.setAttribute('target', '_blank'); |
active-directory-b2c | View Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md | You can try this script in the [Azure Cloud Shell](overview.md). Be sure to upda ```powershell # This script requires an application registration that's granted Microsoft Graph API permission-# https://docs.microsoft.com/azure/active-directory-b2c/microsoft-graph-get-started +# https://learn.microsoft.com/azure/active-directory-b2c/microsoft-graph-get-started # Constants $ClientID = "your-client-application-id-here" # Insert your application's client ID, a GUID Here's the JSON representation of the example activity event shown earlier in th ## Next steps -You can automate other administration tasks, for example, [manage Azure AD B2C user accounts with Microsoft Graph](microsoft-graph-operations.md). +You can automate other administration tasks, for example, [manage Azure AD B2C user accounts with Microsoft Graph](microsoft-graph-operations.md). |
active-directory-domain-services | Troubleshoot Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md | The managed domain's health automatically updates itself within two hours and re ### Resolution -This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for additional troubleshooting assistance. +Azure AD DS creates additional resources to function properly, such as public IP addresses, virtual network interfaces, and a load balancer. If any of these resources are modified, the managed domain is in an unsupported state and can't be managed. For more information about these resources, see [Network resources used by Azure AD DS](network-considerations.md#network-resources-used-by-azure-ad-ds). ++This alert is generated when one of these required resources is modified and can't automatically be recovered by Azure AD DS. To resolve the alert, [open an Azure support request][azure-support] to fix the instance. ## AADDS114: Subnet invalid |
active-directory | Concept Fido2 Hardware Vendor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md | -FIDO2 security keys offer an alternative. FIDO2 security keys can replace weak credentials with strong hardware-backed public/private-key credentials which cannot be reused, replayed, or shared across services. Security keys support shared device scenarios, allowing you to carry your credential with you and safely authenticate to an Azure Active Directory joined Windows 10 device that’s part of your organization. +FIDO2 security keys offer an alternative. FIDO2 security keys can replace weak credentials with strong hardware-backed public/private-key credentials which can't be reused, replayed, or shared across services. Security keys support shared device scenarios, allowing you to carry your credential with you and safely authenticate to an Azure Active Directory joined Windows 10 device that’s part of your organization. Microsoft partners with FIDO2 security key vendors to ensure that security devices work on Windows, the Microsoft Edge browser, and online Microsoft accounts, to enable strong password-less authentication. You can become a Microsoft-compatible FIDO2 security key vendor through the following process. Microsoft doesn't commit to do go-to-market activities with the partner and will evaluate partner priority based on customer demand. -1. First, your authenticator needs to have a FIDO2 certification. We will not be able to work with providers who do not have a FIDO2 certification. To learn more about the certification, please visit this website: [https://fidoalliance.org/certification/](https://fidoalliance.org/certification/) +1. First, your authenticator needs to have a FIDO2 certification. We won't be able to work with providers who don't have a FIDO2 certification. To learn more about the certification, please visit this website: [https://fidoalliance.org/certification/](https://fidoalliance.org/certification/) 2. After you have a FIDO2 certification, please fill in your request to our form here: [https://forms.office.com/r/NfmQpuS9hF](https://forms.office.com/r/NfmQpuS9hF). Our engineering team will only test compatibility of your FIDO2 devices. We won't test security of your solutions. 3. Once we confirm a move forward to the testing phase, the process usually take about 3-6 months. The steps usually involve: - Initial discussion between Microsoft and your team. - Verify FIDO Alliance Certification or the path to certification if not complete - Receive an overview of the device from the vendor - Microsoft will share our test scripts with you. Our engineering team will be able to answer questions if you have any specific needs.- - You will complete and send all passed results to Microsoft Engineering team + - You'll complete and send all passed results to Microsoft Engineering team 4. Upon successful passing of all tests by Microsoft Engineering team, Microsoft will confirm vendor's device is listed in [the FIDO MDS](https://fidoalliance.org/metadata/). 5. Microsoft will add your FIDO2 Security Key on Azure AD backend and to our list of approved FIDO2 vendors. You can become a Microsoft-compatible FIDO2 security key vendor through the foll The following table lists partners who are Microsoft-compatible FIDO2 security key vendors. -| **Provider** | **Link** | -| | | -| AuthenTrend | [https://authentrend.com/about-us/#pg-35-3](https://authentrend.com/about-us/#pg-35-3) | -| Ensurity | [https://www.ensurity.com/contact](https://www.ensurity.com/contact) | -| Excelsecu | [https://www.excelsecu.com/productdetail/esecufido2secu.html](https://www.excelsecu.com/productdetail/esecufido2secu.html) | -| Feitian | [https://ftsafe.us/pages/microsoft](https://ftsafe.us/pages/microsoft) | -| Go-Trust ID | [https://www.gotrustid.com/](https://www.gotrustid.com/idem-key) | -| HID | [https://www.hidglobal.com/contact-us](https://www.hidglobal.com/contact-us) | -| Hypersecu | [https://www.hypersecu.com/hyperfido](https://www.hypersecu.com/hyperfido) | -| IDmelon Technologies Inc. | [https://www.idmelon.com/#idmelon](https://www.idmelon.com/#idmelon) | -| Kensington | [https://www.kensington.com/solutions/product-category/why-biometrics/](https://www.kensington.com/solutions/product-category/why-biometrics/) | -| KONA I | [https://konai.com/business/security/fido](https://konai.com/business/security/fido) | -| Nymi | [https://www.nymi.com/product](https://www.nymi.com/product) | -| OneSpan Inc. | [https://www.onespan.com/products/fido](https://www.onespan.com/products/fido) | -| Thales | [https://cpl.thalesgroup.com/access-management/authenticators/fido-devices](https://cpl.thalesgroup.com/access-management/authenticators/fido-devices) | -| Thetis | [https://thetis.io/collections/fido2](https://thetis.io/collections/fido2) | -| Token2 Switzerland | [https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key](https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key) | -| TrustKey Solutions | [https://www.trustkeysolutions.com/security-keys/](https://www.trustkeysolutions.com/security-keys/) | -| VinCSS | [https://passwordless.vincss.net](https://passwordless.vincss.net/) | -| Yubico | [https://www.yubico.com/solutions/passwordless/](https://www.yubico.com/solutions/passwordless/) | +| Provider | Biometric | USB | NFC | BLE | FIPS Certified | Contact | +||:--:|::|::|::|:--:|--| +| AuthenTrend | ![y] | ![y]| ![y]| ![y]| ![n] | https://authentrend.com/about-us/#pg-35-3 | +| Ciright | ![n] | ![n]| ![y]| ![n]| ![n] | https://www.cyberonecard.com/ | +| Crayonic | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.crayonic.com/keyvault | +| Ensurity | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.ensurity.com/contact | +| Excelsecu | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html | +| Feitian | ![y] | ![y]| ![y]| ![y]| ![y] | https://shop.ftsafe.us/pages/microsoft | +| Fortinet | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.fortinet.com/ | +| Giesecke + Devrient (G+D) | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication | +| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key | +| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us | +| Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido | +| IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon | +| Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ | +| KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido | +| NeoWave | ![n] | ![y]| ![y]| ![n]| ![n] | https://neowave.fr/en/products/fido-range/ | +| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band | +| Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ | +| OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido | +| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-fido2/ | +| Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices | +| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 | +| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key | +| TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ | +| VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net | +| Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ | ++++<!--Image references--> +[y]: ./media/fido2-compatibility/yes.png +[n]: ./media/fido2-compatibility/no.png + ## Next steps |
active-directory | Migrate Python Adal Msal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-python-adal-msal.md | def get_preexisting_rt_and_their_scopes_from_elsewhere(): # https://github.com/AzureAD/azure-activedirectory-library-for-python/blob/1.2.3/sample/device_code_sample.py#L72 # which uses a resource rather than a scope, # you need to convert your v1 resource into v2 scopes- # See https://docs.microsoft.com/azure/active-directory/azuread-dev/azure-ad-endpoint-comparison#scopes-not-resources + # See https://learn.microsoft.com/azure/active-directory/azuread-dev/azure-ad-endpoint-comparison#scopes-not-resources # You may be able to append "/.default" to your v1 resource to form a scope- # See https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#the-default-scope + # See https://learn.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#the-default-scope # Or maybe you have an app already talking to the Microsoft identity platform, # powered by some 3rd-party auth library, and persist its tokens somehow. |
active-directory | Msal Android B2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-b2c.md | String id = account.getId(); // Get the IdToken Claims // // For more information about B2C token claims, see reference documentation-// https://docs.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-tokens +// https://learn.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-tokens Map<String, ?> claims = account.getClaims(); // Get the 'preferred_username' claim through a convenience function |
active-directory | Msal Net Migration Public Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-public-client.md | var pca = PublicClientApplicationBuilder.Create("client_id") .WithBroker() .Build(); -// Add a token cache, see https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop +// Add a token cache, see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop // 2. GetAccounts var accounts = await pca.GetAccountsAsync(); private static async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublic { // If you use a CancellationToken, and call the Cancel() method on it, then this *may* be triggered // to indicate that the operation was cancelled.- // See https://docs.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads + // See https://learn.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads // for more detailed information on how C# supports cancellation in managed threads. } catch (MsalClientException ex) |
active-directory | Reference Breaking Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-breaking-changes.md | Check this article regularly to learn about: - Deprecated functionality > [!TIP]-> To be notified of updates to this page, add this URL to your RSS feed reader:<br/>`https://docs.microsoft.com/api/search/rss?search=%22Azure+Active+Directory+breaking+changes+reference%22&locale=en-us` +> To be notified of updates to this page, add this URL to your RSS feed reader:<br/>`https://learn.microsoft.com/api/search/rss?search=%22Azure+Active+Directory+breaking+changes+reference%22&locale=en-us` ## December 2021 |
active-directory | Scenario Desktop Acquire Token Device Code Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md | private static async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublic { // If you use a CancellationToken, and call the Cancel() method on it, then this *may* be triggered // to indicate that the operation was cancelled.- // See https://docs.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads + // See https://learn.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads // for more detailed information on how C# supports cancellation in managed threads. } catch (MsalClientException ex) |
active-directory | Scenario Desktop Acquire Token Wam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md | var pca = PublicClientApplicationBuilder.Create("client_id") .WithBroker() .Build(); -// Add a token cache, see https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop +// Add a token cache, see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop // 2. GetAccounts var accounts = await pca.GetAccountsAsync(); |
active-directory | Tutorial V2 Nodejs Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md | const msalConfig = { /** * With client credentials flows permissions need to be granted in the portal by a tenant administrator. * The scope is always in the format '<resource>/.default'. For more, visit:- * https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow + * https://learn.microsoft.com/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow */ const tokenRequest = { scopes: [process.env.GRAPH_ENDPOINT + '/.default'], |
active-directory | V2 App Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md | -The Microsoft identity platform supports authentication for a variety of modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](active-directory-v2-protocols.md). This article describes the types of apps that you can build by using Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-scenarios). +The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](active-directory-v2-protocols.md). This article describes the types of apps that you can build by using Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-scenarios). ## The basics https://login.microsoftonline.com/common/oauth2/v2.0/token ## Single-page apps (JavaScript) -Many modern apps have a single-page app front end written primarily in JavaScript, often with a framework like Angular, React, or Vue. The Microsoft identity platform supports these apps by using the [OpenID Connect](v2-protocols-oidc.md) protocol for authentication and either [OAuth 2.0 implicit grant flow](v2-oauth2-implicit-grant-flow.md) or the more recent [OAuth 2.0 authorization code + PKCE flow](v2-oauth2-auth-code-flow.md) for authorization (see below). +Many modern apps have a single-page app front end written primarily in JavaScript, often with a framework like Angular, React, or Vue. The Microsoft identity platform supports these apps by using the [OpenID Connect](v2-protocols-oidc.md) protocol for authentication and one of two types of authorization grants defined by OAuth 2.0. The supported grant types are either the [OAuth 2.0 implicit grant flow](v2-oauth2-implicit-grant-flow.md) or the more recent [OAuth 2.0 authorization code + PKCE flow](v2-oauth2-auth-code-flow.md) (see below). The flow diagram below demonstrates the OAuth 2.0 authorization code grant (with details around PKCE omitted), where the app receives a code from the Microsoft identity platform `authorize` endpoint, and redeems it for an access token and a refresh token using cross-site web requests. The access token expires every 24 hours, and the app must request another code using the refresh token. In addition to the access token, an `id_token` that represents the signed-in user to the client application is typically also requested through the same flow and/or a separate OpenID Connect request (not shown here). To see this scenario in action, check out the [Tutorial: Sign in users and call ### Authorization code flow vs. implicit flow -For most of the history of OAuth 2.0, the [implicit flow](v2-oauth2-implicit-grant-flow.md) was the recommended way to build single-page apps. With the removal of [third-party cookies](reference-third-party-cookies-spas.md) and [greater attention](https://tools.ietf.org/html/draft-ietf-oauth-security-topics-14) paid to security concerns around the implicit flow, we've moved to the authorization code flow for single-page apps. --To ensure compatibility of your app in Safari and other privacy-conscious browsers, we no longer recommend use of the implicit flow and instead recommend the authorization code flow. +For most of the history of OAuth 2.0, the [implicit flow](v2-oauth2-implicit-grant-flow.md) was the recommended way to build single-page apps. With the removal of [third-party cookies](reference-third-party-cookies-spas.md) and [greater attention](https://tools.ietf.org/html/draft-ietf-oauth-security-topics-14) paid to security concerns around the implicit flow, the authorization code flow for single-page apps should now be implemented to ensure compatibility of your app in Safari and other privacy-conscious browsers. The continued use of the implicit flow is not recommended. ## Web apps eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImtyaU1QZG1Cd... } ``` -Further details of different types of tokens used in the Microsoft identity platform are available in the [access token](access-tokens.md) reference and [id_token reference](id-tokens.md) +Further details of different types of tokens used in the Microsoft identity platform are available in the [access token](access-tokens.md) reference and [id_token](id-tokens.md) reference. In web server apps, the sign-in authentication flow takes these high-level steps: In web server apps, the sign-in authentication flow takes these high-level steps You can ensure the user's identity by validating the ID token with a public signing key that is received from the Microsoft identity platform. A session cookie is set, which can be used to identify the user on subsequent page requests. -To see this scenario in action, try the code samples in the [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md). +To see this scenario in action, try the code samples in [Sign in users from a Web app](scenario-web-app-sign-user-overview.md). -In addition to simple sign-in, a web server app might need to access another web service, such as a REST API. In this case, the web server app engages in a combined OpenID Connect and OAuth 2.0 flow, by using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). For more information about this scenario, read about [getting started with web apps and Web APIs](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-WebAPI-OpenIDConnect-DotNet). +In addition to simple sign-in, a web server app might need to access another web service, such as a Representational State Transfer ([REST](https://docs.microsoft.com/rest/api/azure/)) API. In this case, the web server app engages in a combined OpenID Connect and OAuth 2.0 flow, by using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). For more information about this scenario, refer to our code [sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md). ## Web APIs |
active-directory | Clean Up Stale Guest Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md | -Learn more about [how to manage inactive user accounts in Azure AD](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts). +Learn more about [how to manage inactive user accounts in Azure AD](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts). There are a few recommended patterns that are effective at cleaning up stale guest accounts: 1. Create a multi-stage review whereby guests self-attest whether they still need access. A second-stage reviewer assesses results and makes a final decision. Guests with denied access are disabled and later deleted. -2. Create a review to remove inactive external guests. Admins define inactive as period of days. They disable and later delete guests that don’t sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts). +2. Create a review to remove inactive external guests. Admins define inactive as period of days. They disable and later delete guests that don’t sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts). Use the following instructions to learn how to create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment. ## Create a multi-stage review for guests to self-attest continued access -1. Create a [dynamic group](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example, +1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example, `(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)` -2. To [create an Access Review](https://docs.microsoft.com/azure/active-directory/governance/create-access-review) +2. To [create an Access Review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**. 3. Select **New access review**. Use the following instructions to learn how to create Access Reviews that follow ## Create a review to remove inactive external guests -1. Create a [dynamic group](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example, +1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example, `(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)` -2. To [create an access review](https://docs.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**. +2. To [create an access review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**. 3. Select **New access review**. |
active-directory | Add User Without Invite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-user-without-invite.md | Last updated 08/05/2020 - |
active-directory | Add Users Administrator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md | Last updated 08/31/2022 - |
active-directory | Add Users Information Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-information-worker.md | Last updated 12/19/2018 - |
active-directory | Api Connectors Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/api-connectors-overview.md | -As a developer or IT administrator, you can use API connectors to integrate your [self-service sign-up user flows](self-service-sign-up-overview.md) with web APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can: +As a developer or IT administrator, you can use [API connectors](self-service-sign-up-add-api-connector.md#create-an-api-connector) to integrate your [self-service sign-up user flows](self-service-sign-up-overview.md) with web APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can: - [**Integrate with a custom approval workflow**](self-service-sign-up-add-approvals.md). Connect to a custom approval system for managing and limiting account creation. - [**Perform identity verification**](code-samples-self-service-sign-up.md#identity-verification). Use an identity verification service to add an extra level of security to account creation decisions. An API connector at this step in the sign-up process is invoked after the attrib ## Next steps - Learn how to [add an API connector to a user flow](self-service-sign-up-add-api-connector.md)-- Learn how to [add a custom approval system to self-service sign-up](self-service-sign-up-add-approvals.md)+- Learn how to [add a custom approval system to self-service sign-up](self-service-sign-up-add-approvals.md) |
active-directory | Auditing And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/auditing-and-reporting.md | Last updated 05/11/2020 - |
active-directory | B2b Tutorial Require Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md | Last updated 01/07/2022 - |
active-directory | Bulk Invite Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md | Last updated 02/11/2020 -+ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user. |
active-directory | Claims Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/claims-mapping.md | Last updated 04/06/2018 -+ |
active-directory | Code Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md | Last updated 03/14/2022 - async function sendInvite() { // Initialize a confidential client application. For more info, visit: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/samples/AzureIdentityExamples.md#authenticating-a-service-principal-with-a-client-secret const credential = new ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET); - // Initialize the Microsoft Graph authentication provider. For more info, visit: https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=Javascript#using--for-server-side-applications + // Initialize the Microsoft Graph authentication provider. For more info, visit: https://learn.microsoft.com/graph/sdks/choose-authentication-providers?tabs=Javascript#using--for-server-side-applications const authProvider = new TokenCredentialAuthenticationProvider(credential, { scopes: ['https://graph.microsoft.com/.default'] }); // Create MS Graph client instance. For more info, visit: https://github.com/microsoftgraph/msgraph-sdk-javascript/blob/dev/docs/CreatingClientInstance.md async function sendInvite() { sendInvitationMessage: true }; - // Execute the MS Graph command. For more information, visit: https://docs.microsoft.com/en-us/graph/api/invitation-post + // Execute the MS Graph command. For more information, visit: https://learn.microsoft.com/graph/api/invitation-post graphResponse = await client.api('/invitations') .post(invitation); |
active-directory | Configure Saas Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/configure-saas-apps.md | Last updated 05/23/2017 - |
active-directory | Direct Federation Adfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation-adfs.md | Last updated 05/13/2022 - |
active-directory | Facebook Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md | Last updated 03/02/2021 - |
active-directory | Hybrid Cloud To On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md | Last updated 11/05/2021 - |
active-directory | Hybrid On Premises To Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-on-premises-to-cloud.md | Last updated 11/03/2020 - |
active-directory | Hybrid Organizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-organizations.md | Last updated 04/26/2018 - |
active-directory | Invitation Email Elements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invitation-email-elements.md | Last updated 04/12/2021 - |
active-directory | One Time Passcode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md | Last updated 09/16/2022 - |
active-directory | Self Service Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-portal.md | Last updated 02/12/2020 - |
active-directory | Self Service Sign Up Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-overview.md | Last updated 03/02/2021 - |
active-directory | User Flow Customize Language | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-customize-language.md | |
active-directory | User Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-token.md | Last updated 02/28/2018 - |
active-directory | Auth Oidc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oidc.md | There is a need for user consent and for web sign in. * [Web sign-in with OpenID Connect in Azure Active Directory B2C](../../active-directory-b2c/openid-connect.md) -* [Secure your application by using OpenID Connect and Azure AD](/learn/modules/secure-app-with-oidc-and-azure-ad/) -+* [Secure your application by using OpenID Connect and Azure AD](/training/modules/secure-app-with-oidc-and-azure-ad/) |
active-directory | Multi Tenant User Management Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md | There are several mechanisms available for creating and managing the lifecycle o | Mechanism | Description | Best when | | - | - | - |-| [End-user-initiated](multi-tenant-user-management-scenarios.md#end-user-initiated-scenario) | Resource tenant admins delegate the ability to invite guest users to the tenant, an app, or a resource to users within the resource tenant. Users from the home tenant are invited or sign up individually. | <li>Users need improvised access to resources. <li>No automatic synchronization of user attributes is necessary.<li>Unified GAL is not needed.a | +| [End-user-initiated](multi-tenant-user-management-scenarios.md#end-user-initiated-scenario) | Resource tenant admins delegate the ability to invite guest users to the tenant, an app, or a resource to users within the resource tenant. Users from the home tenant are invited or sign up individually. | <li>Users need improvised access to resources. <li>No automatic synchronization of user attributes is necessary.<li>Unified GAL is not needed. | |[Scripted](multi-tenant-user-management-scenarios.md#scripted-scenario) | Resource tenant administrators deploy a scripted “pull” process to automate discovery and provisioning of guest users to support sharing scenarios. | <li>No more than two tenants.<li>No automatic synchronization of user attributes is necessary.<li>Users need pre-configured (not improvised) access to resources.| |[Automated](multi-tenant-user-management-scenarios.md#automated-scenario)|Resource tenant admins use an identity provisioning system to automate the provisioning and deprovisioning processes. | <li>Full identity lifecycle management with provisioning and deprovisioning must be automated.<li>Attribute syncing is required to populate the GAL details and support dynamic entitlement scenarios.<li>Users need pre-configured (not ad hoc) access to resources on “Day One”.| |
active-directory | Users Default Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md | Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Rea Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul> Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications</li></ul> | <ul><li>Read properties of registered and enterprise applications</li></ul> | <ul><li>Read properties of registered and enterprise applications Devices</li></ul> | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions-Directory | <ul><li>Read all company information<li>Read all domains<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> +Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions Subscriptions | <ul><li>Read all subscriptions<li>Enable service plan memberships</li></ul> | No permissions | No permissions Policies | <ul><li>Read all properties of policies<li>Manage all properties of owned policies</li></ul> | No permissions | No permissions |
active-directory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md | For more information, see the [Risk detection API reference documentation](/grap In June 2019, we've added these 22 new apps with Federation support to the app gallery: -[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), Proptimise OS, [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards +[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/learn.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), Proptimise OS, [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | ->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Release+notes+-+Azure+Active+Directory%22&locale=en-us` into your feed reader. +>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Release+notes+-+Azure+Active+Directory%22&locale=en-us` into your feed reader. Azure AD receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: |
active-directory | What Are Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md | Azure AD Lifecycle Workflows is a new Azure AD Identity Governance service that Workflows contain specific processes, which run automatically against users as they move through their life cycle. Workflows are made up of [Tasks](lifecycle-workflow-tasks.md) and [Execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows). -Tasks are specific actions that run automatically when a workflow is triggered. An Execution condition defines the 'Scope' of "“who” and the 'Trigger' of “when” a workflow will be performed. For example, send a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees, can be described as a workflow. It consists of: +Tasks are specific actions that run automatically when a workflow is triggered. An Execution condition defines the 'Scope' of "who" and the 'Trigger' of "when" a workflow will be performed. For example, sending a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees can be described as a workflow. It consists of: - Task: send email - When (trigger): Seven days before the NewEmployeeHireDate attribute value - Who (scope): new employees Finally, Lifecycle Workflows can even [integrate with Logic Apps](lifecycle-work Anyone who wants to modernize their identity lifecycle management process for employees, needs to ensure: - **New employee on-boarding** - That when a user joins the organization, they're ready to go on day one. They have the correct access to the information, membership to groups, and applications they need. - - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner + - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner. - **Easy to administer in my organization** - That there's a seamless process to accomplish the above tasks, that isn't overly burdensome or time consuming for Administrators. - **Robust troubleshooting/auditing/compliance** - That there's the ability to easily troubleshoot issues when they arise and that there's sufficient logging to help with this and compliance related issues. The following are key reasons to use Lifecycle workflows. - **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks. - **Centralize** your workflow process so you can easily create and manage workflows all in one location.-- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs+- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs. - **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles are reduced.-- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows-- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps+- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows. +- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps. All of the above can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result. Thus translating into, increased on-boarding and off-boarding efficiency. You can use Lifecycle workflows to address any of the following conditions. ## Next steps - [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)-- [Create a Lifecycle workflow](create-lifecycle-workflow.md)+- [Create a Lifecycle workflow](create-lifecycle-workflow.md) |
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md | Before you install Azure AD Connect, there are a few things that you need. * Review [optional sync features you can enable in Azure AD](how-to-connect-syncservice-features.md), and evaluate which features you should enable. ### On-premises Active Directory-* The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met. You may require [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for domain controllers running Windows Server 2016 or older. +* The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met. You might require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for domain controllers running Windows Server 2016 or older. * The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects. * Using on-premises forests or domains by using "dotted" (name contains a period ".") NetBIOS names *isn't supported*. * We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md). To read more about securing your Active Directory environment, see [Best practic #### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2019 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016 but since WS2016 is in extended support, you may require [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for this configuration. +- Azure AD Connect must be installed on a domain-joined Windows Server 2019 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016 but since WS2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. - The minimum .Net Framework version required is 4.6.2, and newer versions of .Net are also supported. - Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported. - The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration. - If AD FS is being deployed: - - The servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation. You may require [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for Windows Server 2016 and older. + - The servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation. You may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for Windows Server 2016 and older. - You must configure TLS/SSL certificates. For more information, see [Managing SSL/TLS protocols and cipher suites for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs) and [Managing SSL certificates in AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap). - You must configure name resolution. - It is not supported to break and analyze traffic between Azure AD Connect and Azure AD. Doing so may disrupt the service. We recommend that you harden your Azure AD Connect server to decrease the securi - Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to setup alerts to monitor changes to the trust established between your Idp and Azure AD. - Enable Multi Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent a attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to e.g. reset a user's password using Azure AD Connect they still cannot bypass the second factor.-- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: [https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch](how-to-connect-syncservice-features.md#blocksoftmatch)+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud only objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch) ### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). |
active-directory | Howto Troubleshoot Upn Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md | Windows 7 and 8.1 devices are not affected by this issue after UPN changes. **Known Issues** -Your organization may use [MAM app protection policies](https://docs.microsoft.com/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices. +Your organization may use [MAM app protection policies](https://learn.microsoft.com/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices. MAM app protection policies are currently not resilient to UPN changes. UPN changes can break the connection between existing MAM enrollments and active users in MAM-integrated applications, resulting in undefined behavior. This could leave data in an unprotected state. **Workaround** -IT admins should [issue a selective wipe](https://docs.microsoft.com/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs. +IT admins should [issue a selective wipe](https://learn.microsoft.com/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs. ## Microsoft Authenticator known issues and workarounds |
active-directory | Datawiza Azure Ad Sso Oracle Peoplesoft | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md | The scenario solution has the following components: - **Oracle PeopleSoft application**: Legacy application going to be protected by Azure AD and DAB. -Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture). +Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture). ## Prerequisites Ensure the following prerequisites are met. - An Azure AD tenant linked to the Azure subscription. - - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant) + - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant) - Docker and Docker Compose Ensure the following prerequisites are met. - User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory. - - See, [Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis). + - See, [Azure AD Connect sync: Understand and customize synchronization](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis). - An account with Azure AD and the Application administrator role - - See, [Azure AD built-in roles, all roles](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles). + - See, [Azure AD built-in roles, all roles](https://learn.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles). - An Oracle PeopleSoft environment For the Oracle PeopleSoft application to recognize the user correctly, there's a ## Enable Azure AD Multi-Factor Authentication To provide an extra level of security for sign-ins, enforce multi-factor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure-portal](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa). +portal](https://learn.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa). 1. Sign in to the Azure portal as a **Global Administrator**. To confirm Oracle PeopleSoft application access occurs correctly, a prompt appea - [Watch the video - Enable SSO/MFA for Oracle PeopleSoft with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90). 
-- [Configure Datawiza and Azure AD for secure hybrid access](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad)+- [Configure Datawiza and Azure AD for secure hybrid access](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad) -- [Configure Datawiza with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/partner-datawiza)+- [Configure Datawiza with Azure AD B2C](https://learn.microsoft.com/azure/active-directory-b2c/partner-datawiza) - [Datawiza documentation](https://docs.datawiza.com/) |
active-directory | Plan Sso Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md | The following SSO protocols are available to use: ## Next steps -- Consider completing the single sign-on training in [Enable single sign-on for applications by using Azure Active Directory](/learn/modules/enable-single-sign-on).+- Consider completing the single sign-on training in [Enable single sign-on for applications by using Azure Active Directory](/training/modules/enable-single-sign-on). |
active-directory | Review Admin Consent Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md | To review the admin consent requests and take action: ## Review admin consent requests using Microsoft Graph -To review the admin consent requests programmatically, use the [appConsentRequest resource type](/graph/api/resources/userconsentrequest) and [userConsentRequest resource type](/graph/api/resources/userconsentrequest) and their associated methods in Microsoft Graph. You cannot approve or deny consent requests using Microsoft Graph. +To review the admin consent requests programmatically, use the [appConsentRequest resource type](/graph/api/resources/appconsentrequest) and [userConsentRequest resource type](/graph/api/resources/userconsentrequest) and their associated methods in Microsoft Graph. You cannot approve or deny consent requests using Microsoft Graph. ## Next steps - [Review permissions granted to apps](manage-application-permissions.md) |
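The consent-request resource types mentioned in that row can also be read from a shell. A minimal, hedged sketch using `az rest` against Microsoft Graph; it assumes you are signed in with `az login` and that the signed-in account holds a permission such as ConsentRequest.Read.All (the JMESPath projection is illustrative only):

```azurecli-interactive
# Sketch only: list pending admin consent requests via the Microsoft Graph REST API.
# Assumes az login and an account with ConsentRequest.Read.All (or equivalent).
az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/identityGovernance/appConsent/appConsentRequests" \
    --query "value[].{app:appDisplayName, pendingScopes:pendingScopes[].displayName}"
```

As the article notes, approving or denying a request still cannot be done through Microsoft Graph; this only reads the queue.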
active-directory | What Is Application Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md | To [manage access](what-is-access-management.md) for an application, you want to You can [manage user consent settings](configure-user-consent.md) to choose whether users can allow an application or service to access user profiles and organizational data. When applications are granted access, users can sign in to applications integrated with Azure AD, and the application can access your organization's data to deliver rich data-driven experiences. -Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. For training on how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](/learn/modules/configure-admin-consent-workflow). +Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. For training on how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](/training/modules/configure-admin-consent-workflow). As an administrator, you can [grant tenant-wide admin consent](grant-admin-consent.md) to an application. Tenant-wide admin consent is necessary when an application requires permissions that regular users aren't allowed to grant, and allows organizations to implement their own review processes. Always carefully review the permissions the application is requesting before granting consent. When an application has been granted tenant-wide admin consent, all users are able to sign into the application unless it has been configured to require user assignment. ### Single sign-on -Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For training related to configuring SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](/learn/modules/enable-single-sign-on). +Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For training related to configuring SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](/training/modules/enable-single-sign-on). ### User, group, and owner assignment |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md | While developers can securely store the secrets in [Azure Key Vault](../../key-v The following video shows how you can use managed identities:</br> -> [!VIDEO https://docs.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny] +> [!VIDEO https://learn.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny] Here are some of the benefits of using managed identities: |
active-directory | How To View Applied Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md | To view the sign-in logs, use: The output of this cmdlet contains an **AppliedConditionalAccessPolicies** property that shows all the conditional access policies applied to the sign-in. -For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://docs.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0). +For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://learn.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0). The AzureAD Graph PowerShell module doesn't support viewing applied conditional access policies; only the Microsoft Graph PowerShell module returns applied conditional access policies. To confirm that you have admin access to view applied conditional access policie ## Next steps * [Sign-ins error codes reference](./concept-sign-ins.md)-* [Sign-ins report overview](concept-sign-ins.md) +* [Sign-ins report overview](concept-sign-ins.md) |
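The same sign-in data surfaced by the Get-MgAuditLogSignIn cmdlet is exposed by the Microsoft Graph REST API, so it can also be inspected from a shell with `az rest`. This is a hedged sketch, not the article's documented method; it assumes `az login` and an account with the AuditLog.Read.All and Directory.Read.All permissions:

```azurecli-interactive
# Sketch only: fetch the most recent sign-in and show which Conditional Access
# policies were applied to it, using the Microsoft Graph signIns endpoint.
az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$top=1" \
    --query "value[0].appliedConditionalAccessPolicies[].{policy:displayName, result:result}"
```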
active-directory | Fortigate Ssl Vpn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md | To complete these steps, you'll need the values you recorded earlier: | SP entity ID (`entity-id`) | Identifier (Entity ID) | | SP Single Sign-On URL (`single-sign-on-url`) | Reply URL (Assertion Consumer Service URL) | | SP Single Logout URL (`single-logout-url`) | Logout URL |-| IdP Entity ID (`idp-entity-id`) | Azure Login URL | -| IdP Single Sign-On URL (`idp-single-sign-on-url`) | Azure AD Identifier | +| IdP Entity ID (`idp-entity-id`) | Azure AD Identifier | +| IdP Single Sign-On URL (`idp-single-sign-on-url`) | Azure Login URL | | IdP Single Logout URL (`idp-single-logout-url`) | Azure Logout URL | | IdP certificate (`idp-cert`) | Base64 SAML certificate name (REMOTE_Cert_N) | | Username attribute (`user-name`) | username | To complete these steps, you'll need the values you recorded earlier: set entity-id <Identifier (Entity ID)> set single-sign-on-url <Reply URL> set single-logout-url <Logout URL>- set idp-entity-id <Azure Login URL> - set idp-single-sign-on-url <Azure AD Identifier> + set idp-entity-id <Azure AD Identifier> + set idp-single-sign-on-url <Azure Login URL> set idp-single-logout-url <Azure Logout URL> set idp-cert <Base64 SAML Certificate Name> set user-name username |
active-directory | Mural Identity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. | Name | Source Attribute| | -- | | | email | user.userprincipalname |+ | FirstName | user.givenname | + | LastName | user.surname | -1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. +1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer. -  +  1. On the **Set up MURAL Identity** section, copy the appropriate URL(s) based on your requirement. In this section, you'll enable B.Simon to use Azure single sign-on by granting a ## Configure MURAL Identity SSO -To configure single sign-on on **MURAL Identity** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [MURAL Identity support team](mailto:support@mural.co). They set this setting to have the SAML SSO connection set properly on both sides. +1. Log in to the MURAL Identity website as an administrator. ++1. Click your **name** in the bottom left corner of the dashboard and select **Company dashboard** from the list of options. ++1. Click **SSO** in the left sidebar and perform the below steps. ++  ++a. Download the **MURAL's metadata**. ++b. In the **Sign in URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal. ++c. In the **Sign in certificate**, upload the **Certificate (PEM)**, which you have downloaded from the Azure portal. ++d. Select **HTTP-POST** as the Request binding type and select **SHA256** as the Sign in algorithm type. ++e. In the **Claim mapping** section, fill the following fields. ++* Email address: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` ++* First name: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` ++* Last name: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname` ++f. Click **Test single sign-on** to test the configuration and **Save** it. ++> [!NOTE] +> For more information on how to configure the SSO at MURAL, please follow [this](https://support.mural.co/articles/6224385-mural-s-azure-ad-integration) support page. ### Create MURAL Identity test user In this section, you test your Azure AD single sign-on configuration with follow * Click on **Test this application** in Azure portal. This will redirect to MURAL Identity Sign on URL where you can initiate the login flow. -* Go to MURAL Identity Sign-on URL directly and initiate the login flow from there. +* Go to MURAL Identity Sign on URL directly and initiate the login flow from there. #### IDP initiated: * Click on **Test this application** in Azure portal and you should be automatically signed in to the MURAL Identity for which you set up the SSO. -You can also use Microsoft My Apps to test the application in any mode. When you click the MURAL Identity tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MURAL Identity for which you set up the SSO. 
For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). +You can also use Microsoft My Apps to test the application in any mode. When you click the MURAL Identity tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MURAL Identity for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ## Change log |
active-directory | Rocketreach Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rocketreach-sso-tutorial.md | + + Title: 'Tutorial: Azure AD SSO integration with RocketReach SSO' +description: Learn how to configure single sign-on between Azure Active Directory and RocketReach SSO. ++++++++ Last updated : 09/06/2022+++++# Tutorial: Azure AD SSO integration with RocketReach SSO ++In this tutorial, you'll learn how to integrate RocketReach SSO with Azure Active Directory (Azure AD). When you integrate RocketReach SSO with Azure AD, you can: ++* Control in Azure AD who has access to RocketReach SSO. +* Enable your users to be automatically signed-in to RocketReach SSO with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++## Prerequisites ++To get started, you need the following items: ++* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* RocketReach SSO single sign-on (SSO) enabled subscription. +* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. +For more information, see [Azure built-in roles](../roles/permissions-reference.md). ++## Scenario description ++In this tutorial, you configure and test Azure AD SSO in a test environment. ++* RocketReach SSO supports **SP** and **IDP** initiated SSO. +* RocketReach SSO supports **Just In Time** user provisioning. ++## Add RocketReach SSO from the gallery ++To configure the integration of RocketReach SSO into Azure AD, you need to add RocketReach SSO from the gallery to your list of managed SaaS apps. ++1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. +1. On the left navigation pane, select the **Azure Active Directory** service. +1. Navigate to **Enterprise Applications** and then select **All Applications**. +1. To add new application, select **New application**. +1. In the **Add from the gallery** section, type **RocketReach SSO** in the search box. +1. Select **RocketReach SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true). ++## Configure and test Azure AD SSO for RocketReach SSO ++Configure and test Azure AD SSO with RocketReach SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at RocketReach SSO. ++To configure and test Azure AD SSO with RocketReach SSO, perform the following steps: ++1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. + 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. + 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. +1. **[Configure RocketReach SSO](#configure-rocketreach-sso)** - to configure the single sign-on settings on application side. + 1. 
**[Create RocketReach SSO test user](#create-rocketreach-sso-test-user)** - to have a counterpart of B.Simon in RocketReach SSO that is linked to the Azure AD representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Azure AD SSO ++Follow these steps to enable Azure AD SSO in the Azure portal. ++1. In the Azure portal, on the **RocketReach SSO** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure. ++1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: ++ In the **Sign-on URL** text box, type the URL: + `https://rocketreach.co/login/sso` ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up RocketReach SSO** section, copy the appropriate URL(s) based on your requirement. ++  ++### Create an Azure AD test user ++In this section, you'll create a test user in the Azure portal called B.Simon. ++1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. +1. Select **New user** at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Name** field, enter `B.Simon`. + 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Click **Create**. ++### Assign the Azure AD test user ++In this section, you'll enable B.Simon to use Azure single sign-on by granting access to RocketReach SSO. ++1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. +1. In the applications list, select **RocketReach SSO**. +1. In the app's overview page, find the **Manage** section and select **Users and groups**. +1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. +1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. +1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. +1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure RocketReach SSO ++To configure single sign-on on **RocketReach SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [RocketReach SSO support team](mailto:support@rocketreach.co). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create RocketReach SSO test user ++In this section, a user called B.Simon is created in RocketReach SSO. RocketReach SSO supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. 
If a user doesn't already exist in RocketReach SSO, a new one is created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to RocketReach SSO Sign-on URL where you can initiate the login flow. ++* Go to RocketReach SSO Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the RocketReach SSO for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the RocketReach SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the RocketReach SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure RocketReach SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
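The "Create an Azure AD test user" step in the RocketReach SSO tutorial above can also be scripted. A minimal sketch with the Azure CLI, reusing the article's example identity `B.Simon@contoso.com`; the UPN domain must be verified in your tenant and the password below is a placeholder you choose yourself:

```azurecli-interactive
# Sketch only: create the B.Simon test user from the CLI instead of the portal.
# contoso.com is the article's example domain; replace it and the password.
az ad user create \
    --display-name "B.Simon" \
    --user-principal-name "B.Simon@contoso.com" \
    --password "<choose-a-strong-password>"
```

Assigning the user to the RocketReach SSO enterprise application is still done in the portal, as the tutorial describes.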
active-directory | Sketch Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sketch-tutorial.md | To configure and test Azure AD SSO with Sketch, perform the following steps: 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. 1. **[Configure Sketch SSO](#configure-sketch-sso)** - to configure the single sign-on settings on application side.- 1. **[Create Sketch test user](#create-sketch-test-user)** - to have a counterpart of B.Simon in Sketch that is linked to the Azure AD representation of user. 1. **[Test SSO](#test-sso)** - to verify whether the configuration works. +## Choose a shortname for your Workspace in Sketch ++Follow these steps to choose a shortname and gather information to continue the setup process in Azure AD. ++>[!Note] +> Before starting this process, make sure SSO is available in your Workspace and check that there is an SSO tab in your Workspace Admin panel. +> If you don't see the SSO tab, please reach out to customer support. +1. [Sign in to your Workspace](https://www.sketch.com/signin/) as an Admin. +1. Head to the **People & Settings** section in the sidebar. +1. Click on the **Single Sign-On** tab. +1. Click **Choose** a short name. +1. Enter a unique name; it should have fewer than 16 characters and can only include letters, numbers, or hyphens. You can edit this name later on. +1. Click **Submit**. +1. Click on the first tab **Set Up Identity Provider**. In this tab, you'll find the unique Workspace values you'll need to set up the integration with Azure AD. + 1. **EntityID:** In Azure AD, this is the `Identifier` field. + 1. **ACS URL:** In Azure AD, this is the `Reply URL` field. ++Make sure to keep these values at hand! You'll need them in the next step. Click Copy next to each value to copy it to your clipboard. + ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal. Follow these steps to enable Azure AD SSO in the Azure portal. 1. On the **Basic SAML Configuration** section, perform the following steps: - a. In the **Identifier** textbox, type a value using the following pattern: + a. In the **Identifier** textbox, use the `EntityID` field from the previous step. It looks like: `sketch-<uuid_v4>` - b. In the **Reply URL** textbox, type a URL using the following pattern: + b. In the **Reply URL** textbox, use the `ACS URL` field from the previous step. It looks like: `https://sso.sketch.com/saml/acs?id=<uuid_v4>` -1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: +1. Click **Set additional URLs** and perform the following step: In the **Sign-on URL** text box, type the URL: `https://www.sketch.com` Follow these steps to enable Azure AD SSO in the Azure portal. 1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. -  --1. On the **Set up Sketch** section, copy the appropriate URL(s) based on your requirement. 
--  +  ### Create an Azure AD test user In this section, you'll enable B.Simon to use Azure single sign-on by granting a ## Configure Sketch SSO -To configure single sign-on on **Sketch** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Sketch support team](mailto:sso-support@sketch.com). They set this setting to have the SAML SSO connection set properly on both sides. --### Create Sketch test user +Follow these steps to finish the configuration in Sketch. -In this section, a user called B.Simon is created in Sketch. Sketch supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Sketch, a new one is created after authentication. +1. In your Workspace, head to the **Set up Sketch** tab in the **Single Sign-On** window. +1. Upload the XML file you downloaded previously in the **Import XML Metadata file** section. +1. Log out. +1. Click **Sign in with SSO**. +1. Use the shortname you configured previously to proceed. ## Test SSO In this section, you test your Azure AD single sign-on configuration with follow ## Next steps -Once you configure Sketch you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). +Once you configure Sketch you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Howto Verifiable Credentials Partner Au10tix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md | Before you can continue with the steps below you need to meet the following requ ## Scenario description -When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](https://docs.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding). +When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding). |
active-directory | Howto Verifiable Credentials Partner Lexisnexis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md | You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster on ## Scenario description -Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](https://docs.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding). +Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding). :::image type="content" source="media/verified-id-partner-au10tix/vc-solution-architecture-diagram.png" alt-text="Diagram of the verifiable credential solution."::: |
advisor | Advisor Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-sovereign-clouds.md | + + Title: Sovereign cloud feature variations +description: List of feature variations and usage limitations for Advisor in sovereign clouds. + Last updated : 09/19/2022+++# Azure Advisor in sovereign clouds ++Azure sovereign clouds enable you to build and digitally transform workloads in the cloud while meeting your security, compliance, and policy requirements. ++## Azure Government (United States) ++The following Azure Advisor recommendation **features aren't currently available** in Azure Government: ++### Cost ++- (Preview) Consider App Service stamp fee reserved capacity to save over your on-demand costs. +- (Preview) Consider Azure Data Explorer reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider Azure Synapse Analytics (formerly SQL DW) reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider Blob storage reserved capacity to save on Blob v2 and Data Lake Storage Gen2 costs. +- (Preview) Consider Blob storage reserved instance to save on Blob v2 and Data Lake Storage Gen2 costs. +- (Preview) Consider Cache for Redis reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider Cosmos DB reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider Database for MariaDB reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider Database for MySQL reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider Database for PostgreSQL reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider SQL DB reserved capacity to save over your pay-as-you-go costs. +- (Preview) Consider SQL PaaS DB reserved capacity to save over your pay-as-you-go costs. +- Consider App Service stamp fee reserved instance to save over your on-demand costs. +- Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs. +- Consider Cache for Redis reserved instance to save over your pay-as-you-go costs. +- Consider Cosmos DB reserved instance to save over your pay-as-you-go costs. +- Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs. +- Consider Database for MySQL reserved instance to save over your pay-as-you-go costs. +- Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs. +- Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs. ++### Operational ++- Add Azure Monitor to your virtual machine (VM) labeled as production. +- Delete and recreate your pool using a VM size that will soon be retired. +- Enable Traffic Analytics to view insights into traffic patterns across Azure resources. +- Enforce 'Add or replace a tag on resources' using Azure Policy. +- Enforce 'Allowed locations' using Azure Policy. +- Enforce 'Allowed virtual machine SKUs' using Azure Policy. +- Enforce 'Audit VMs that don't use managed disks' using Azure Policy. +- Enforce 'Inherit a tag from the resource group' using Azure Policy. +- Update Azure Spring Cloud API Version. +- Update your outdated Azure Spring Cloud SDK to the latest version. +- Upgrade to the latest version of the Immersive Reader SDK. ++### Performance ++- Accelerated Networking may require stopping and starting the VM. +- Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency. 
+- Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency. +- Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency. +- Consider increasing the size of your NVA to address persistent high CPU. +- Distribute data in server group to distribute workload among nodes. +- More than 75% of your queries are full scan queries. +- NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency. +- Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency. +- Reads happen on most recent data. +- Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly. +- Update Attestation API Version. +- Update Key Vault SDK Version. +- Update to the latest version of your Arista VEOS product for Accelerated Networking support. +- Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support. +- Update to the latest version of your Check Point product for Accelerated Networking support. +- Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support. +- Update to the latest version of your F5 BigIp product for Accelerated Networking support. +- Update to the latest version of your NetApp product for Accelerated Networking support. +- Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support. +- Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs. +- Use SSD Disks for your production workloads. +- vSAN capacity utilization has crossed critical threshold. ++### Reliability ++- Avoid hostname override to ensure site integrity. +- Check Point Virtual Machine may lose Network Connectivity. +- Drop and recreate your HDInsight clusters to apply critical updates. +- Upgrade device client SDK to a supported version for IotHub. +- Upgrade to the latest version of the Azure Connected Machine agent. ++## Right size calculations ++The calculation for recommending that you should right-size or shut down underutilized virtual machines in Azure Government is as follows: ++- Advisor monitors your virtual machine usage for seven days and identifies low-utilization virtual machines. +- Virtual machines are considered low utilization if their CPU utilization is 5% or less and their network utilization is less than 2%, or if the current workload can be accommodated by a smaller virtual machine size. ++If you want to be more aggressive at identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per subscription basis. ++## Next steps ++For more information about Advisor recommendations, see: ++- [Introduction to Azure Advisor](./advisor-overview.md) +- [Reliability recommendations](./advisor-high-availability-recommendations.md) +- [Performance recommendations](./advisor-reference-performance-recommendations.md) +- [Cost recommendations](./advisor-reference-cost-recommendations.md) +- [Operational excellence recommendations](./advisor-reference-operational-excellence-recommendations.md) |
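The per-subscription CPU utilization rule described in the right-size calculation above can be adjusted from the command line as well. A hedged sketch with the Azure CLI; the 10 percent value is only an example, and the `az cloud set` step assumes you are targeting Azure Government as in this article:

```azurecli-interactive
# Sketch only: point the CLI at Azure Government, then change the CPU threshold
# Advisor uses to flag underutilized virtual machines for this subscription.
az cloud set --name AzureUSGovernment
az login

# Advisor accepts a small fixed set of threshold values (for example 5, 10, 15, 20).
az advisor configuration update --low-cpu-threshold 10
```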
advisor | Advisor Tag Filtering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-tag-filtering.md | You can now get Advisor recommendations and scores scoped to a workload, environ * Compare scores for workloads to optimize the critical ones first > [!TIP]-> For more information on how to use resource tags to organize and govern your Azure resources, please see the [Cloud Adoption Framework's guidance](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging) and [Build a cloud governance strategy on Azure](/learn/modules/build-cloud-governance-strategy-azure/). +> For more information on how to use resource tags to organize and govern your Azure resources, please see the [Cloud Adoption Framework's guidance](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging) and [Build a cloud governance strategy on Azure](/training/modules/build-cloud-governance-strategy-azure/). ## How to filter recommendations using tags |
aks | Concepts Sustainable Software Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md | Learn more about the features of AKS mentioned in this article: [node-sizing]: use-multiple-node-pools.md#specify-a-vm-size-for-a-node-pool [sustainability-calculator]: https://azure.microsoft.com/blog/microsoft-sustainability-calculator-helps-enterprises-analyze-the-carbon-emissions-of-their-it-infrastructure/ [system-pools]: use-system-pools.md-[principles-sse]: /learn/modules/sustainable-software-engineering-overview/ +[principles-sse]: /training/modules/sustainable-software-engineering-overview/ |
aks | Deployment Center Launcher | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-center-launcher.md | +> [!IMPORTANT] +> Deployment Center for Azure Kubernetes Service will be retired on March 31, 2023. [Learn more](/azure/aks/deployment-center-launcher#retirement) + Deployment Center in Azure DevOps simplifies setting up a robust Azure DevOps pipeline for your application. By default, Deployment Center configures an Azure DevOps pipeline to deploy your application updates to the Kubernetes cluster. You can extend the default configured Azure DevOps pipeline and also add richer capabilities: the ability to gain approval before deploying, provision additional Azure resources, run scripts, upgrade your application, and even run more validation tests. In this tutorial, you will: You can delete the related resources that you created when you don't need them a ## Next steps You can modify these build and release pipelines to meet the needs of your team. Or, you can use this CI/CD model as a template for your other pipelines.++## Retirement ++Deployment Center for Azure Kubernetes Service will be retired on March 31, 2023, in favor of [Automated deployments](/azure/aks/automated-deployments). We encourage you to switch to enjoy similar capabilities. ++#### Migration Steps ++No migration is required because the AKS Deployment Center experience does not store any information itself; it just helps users with their Day 0 getting-started experience on Azure. Moving forward, the recommended way for users to get started on CI/CD for AKS will be the [Automated deployments](/azure/aks/automated-deployments) feature. ++For existing pipelines, users will still be able to perform all operations from GitHub Actions or Azure DevOps after the retirement of this experience. Only the ability to create and view pipelines from the Azure portal will be removed. See [GitHub Actions](https://docs.github.com/en/actions) or [Azure DevOps](/azure/devops/pipelines/get-started/pipelines-get-started) to learn how to get started. ++For new application deployments to AKS, instead of using Deployment Center, users can get the same capabilities by using Automated deployments. ++#### FAQ ++1. Where can I manage my CD pipeline after this experience is deprecated? ++Post-retirement, you will not be able to view or create CD pipelines from the Azure portal's AKS blade. However, as with the current experience, you can go to GitHub Actions or the Azure DevOps portal and view or update the configured pipelines there. ++2. Will I lose my previously configured pipelines? ++No. All the created pipelines will still be available and functional in GitHub or Azure DevOps. Only the experience of creating and viewing pipelines from the Azure portal will be retired. ++3. How can I still configure CD pipelines directly through the Azure portal? ++You can use Automated deployments available in the AKS blade in the Azure portal. |
aks | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md | AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modif ## How does Time synchronization work in AKS? -AKS nodes run the "chrony" service, which pulls time from the localhost, which in turn syncs time with ntp.ubuntu.com. Containers running on pods get the time from the AKS nodes. Applications launched inside a container use time from the container of the pod. +AKS nodes run the "chrony" service, which pulls time from the localhost. Containers running on pods get the time from the AKS nodes. Applications launched inside a container use time from the container of the pod. <!-- LINKS - internal --> |
aks | Kubernetes Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-action.md | Review the following starter workflows for AKS. For more details on using starte - [Azure Kubernetes Service Kompose][aks-swf-kompose] > [!div class="nextstepaction"]-> [Learn how to create multiple pipelines on GitHub Actions with AKS](/learn/modules/aks-deployment-pipeline-github-actions) +> [Learn how to create multiple pipelines on GitHub Actions with AKS](/training/modules/aks-deployment-pipeline-github-actions) > [!div class="nextstepaction"] > [Learn about Azure Kubernetes Service](/azure/architecture/reference-architectures/containers/aks-start-here) Review the following starter workflows for AKS. For more details on using starte [azure/login]: https://github.com/Azure/login [connect-gh-azure]: /azure/developer/github/connect-from-azure?tabs=azure-cli%2Clinux [gh-azure-vote]: https://github.com/Azure-Samples/azure-voting-app-redis-[actions/checkout]: https://github.com/actions/checkout +[actions/checkout]: https://github.com/actions/checkout |
aks | Quick Windows Container Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md | To run an AKS cluster that supports node pools for Windows Server containers, yo > [!NOTE] > To ensure your cluster to operate reliably, you should run at least 2 (two) nodes in the default node pool. -Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and sets it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell). +Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell). ```azurecli-interactive echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME az aks nodepool add \ The above command creates a new node pool named *npwin* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`. -## Add a Windows Server 2022 node pool (preview) +## Add a Windows Server 2022 node pool When creating a Windows node pool, the default operating system will be Windows Server 2019. To use Windows Server 2022 nodes, you will need to specify an OS SKU type of `Windows2022`. --### Install the `aks-preview` extension --You also need the *aks-preview* Azure CLI extension version `0.5.68` or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command, or install any available updates by using the [az extension update][az-extension-update] command. --```azurecli-interactive -# Install the aks-preview extension -az extension add --name aks-preview -# Update the extension to make sure you have the latest version installed -az extension update --name aks-preview -``` --### Register the `AKSWindows2022Preview` preview feature --To use the feature, you must also enable the `AKSWindows2022Preview` feature flag on your subscription. --Register the `AKSWindows2022Preview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: --```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "AKSWindows2022Preview" -``` --It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command: --```azurecli-interactive -az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKSWindows2022Preview')].{Name:name,State:properties.state}" -``` --When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: --```azurecli-interactive -az provider register --namespace Microsoft.ContainerService -``` > [!NOTE] > Windows Server 2022 requires Kubernetes version "1.23.0" or higher. |
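As a concrete illustration of the OS SKU setting described in that row, here is a minimal sketch of adding a Windows Server 2022 node pool. The pool name `npw22` and the resource names are example values carried over from the article's earlier commands, not required names:

```azurecli-interactive
# Sketch only: add a Windows Server 2022 node pool to the existing cluster.
# Windows node pool names are limited to six characters; npw22 is an example.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --os-sku Windows2022 \
    --name npw22 \
    --node-count 1
```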
aks | Open Service Mesh Deploy Addon Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md | touch osm.aks.bicep && touch osm.aks.parameters.json Open the *osm.aks.bicep* file and copy the following example content to it. Then save the file. ```azurecli-interactive-// https://docs.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters +// https://learn.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters @minLength(3) @maxLength(63) @description('Provide a name for the AKS cluster. The only allowed characters are letters, numbers, dashes, and underscore. The first and last character must be a letter or a number.') |
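Once both files from that row exist, deploying the Bicep template follows the usual resource-group deployment pattern. A sketch, assuming an example resource group name of `myOsmAksGroup` and that `osm.aks.parameters.json` has already been filled in:

```azurecli-interactive
# Sketch only: deploy the OSM-enabled AKS cluster defined in osm.aks.bicep.
# The resource group name and location are example values.
az group create --name myOsmAksGroup --location eastus2

az deployment group create \
    --resource-group myOsmAksGroup \
    --template-file osm.aks.bicep \
    --parameters @osm.aks.parameters.json
```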
aks | Windows Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md | This article outlines some of the frequently asked questions and OS concepts for ## Which Windows operating systems are supported? -AKS uses Windows Server 2019 as the host OS version and only supports process isolation. Container images built by using other Windows Server versions are not supported. For more information, see [Windows container version compatibility][windows-container-compat]. +AKS uses Windows Server 2019 and Windows Server 2022 as the host OS version and only supports process isolation. Container images built by using other Windows Server versions are not supported. For more information, see [Windows container version compatibility][windows-container-compat]. ## Is Kubernetes different on Windows and Linux? Yes, an ingress controller that supports Windows Server containers can run on Wi ## Can my Windows Server containers use gMSA? -Group-managed service account (gMSA) support is currently available in preview. See [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)](use-group-managed-service-accounts.md) +Group-managed service account (gMSA) support is generally available for Windows on AKS. See [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster](use-group-managed-service-accounts.md) ## Can I use Azure Monitor for containers with Windows nodes and containers? |
api-management | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/plan-manage-costs.md | As you add or remove units, capacity and cost scale proportionally. For example, - Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. - Learn about API Management [capacity](api-management-capacity.md). - See steps to scale and upgrade API Management using the [Azure portal](upgrade-and-scale.md), and learn about [autoscaling](api-management-howto-autoscale.md). |
app-service | App Service Asp Net Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md | The [app containerization tool](https://azure.microsoft.com/blog/accelerate-appl ## Next steps -[Migrate an on-premises web application to Azure App Service](/learn/modules/migrate-app-service-migration-assistant/) +[Migrate an on-premises web application to Azure App Service](/training/modules/migrate-app-service-migration-assistant/) |
app-service | App Service Migration Assess Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-assess-net.md | For more information on web apps assessment, see: Next steps:-[At-scale migration of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/) +[At-scale migration of .NET web apps](/training/modules/migrate-app-service-migration-assistant/) |
app-service | App Service Migration Discover Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-discover-net.md | For more information about web apps discovery please refer to: Next steps:-[At-scale assessment of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/) +[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/) |
app-service | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md | If you want to use your own DNS server, add the following records: 1. Create a zone for `<App Service Environment-name>.appserviceenvironment.net`. 1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment.-1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment. 1. Create a zone in `<App Service Environment-name>.appserviceenvironment.net` named `scm`. 1. Create an A record in the `scm` zone that points * to the IP address used by the private endpoint of your App Service Environment. To configure DNS in Azure DNS private zones: 1. Create an Azure DNS private zone named `<App Service Environment-name>.appserviceenvironment.net`. 1. Create an A record in that zone that points * to the inbound IP address.-1. Create an A record in that zone that points @ to the inbound IP address. 1. Create an A record in that zone that points *.scm to the inbound IP address. In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps. If you're using custom domains, you need to ensure they have DNS records configured. You can follow the preceding guidance to configure DNS zones and records for a custom domain name (simply replace the default domain name with the custom domain name). The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *<appname>.scm.<asename>.appserviceenvironment.net*. |
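The Azure DNS private zone records listed in that row can also be created with the Azure CLI. A hedged sketch; the zone name, resource group, virtual network, and inbound IP address are placeholders you would replace with your App Service Environment's actual values:

```azurecli-interactive
# Sketch only: create the private zone and the wildcard A records that point at
# the App Service Environment's inbound address. All names and the IP are examples.
az network private-dns zone create \
    --resource-group myResourceGroup \
    --name my-ase.appserviceenvironment.net

az network private-dns link vnet create \
    --resource-group myResourceGroup \
    --zone-name my-ase.appserviceenvironment.net \
    --name my-ase-dns-link \
    --virtual-network my-ase-vnet \
    --registration-enabled false

az network private-dns record-set a add-record \
    --resource-group myResourceGroup \
    --zone-name my-ase.appserviceenvironment.net \
    --record-set-name "*" \
    --ipv4-address 10.0.0.4

az network private-dns record-set a add-record \
    --resource-group myResourceGroup \
    --zone-name my-ase.appserviceenvironment.net \
    --record-set-name "*.scm" \
    --ipv4-address 10.0.0.4
```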
app-service | Overview Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md | Last updated 06/23/2021 # Plan and manage costs for Azure App Service <!-- Check out the following published examples:-- [https://docs.microsoft.com/azure/cosmos-db/plan-manage-costs](../cosmos-db/plan-manage-costs.md)-- [https://docs.microsoft.com/azure/storage/common/storage-plan-manage-costs](../storage/common/storage-plan-manage-costs.md)-- [https://docs.microsoft.com/azure/machine-learning/concept-plan-manage-cost](../machine-learning/concept-plan-manage-cost.md)+- [https://learn.microsoft.com/azure/cosmos-db/plan-manage-costs](../cosmos-db/plan-manage-costs.md) +- [https://learn.microsoft.com/azure/storage/common/storage-plan-manage-costs](../storage/common/storage-plan-manage-costs.md) +- [https://learn.microsoft.com/azure/machine-learning/concept-plan-manage-cost](../machine-learning/concept-plan-manage-cost.md) --> <!-- Note for Azure service writer: Links to Cost Management articles are full URLS with the ?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn campaign suffix. Leave those URLs intact. They're used to measure traffic to Cost Management articles. You can also [export your cost data](../cost-management-billing/costs/tutorial-e - Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. <!-- Insert links to other articles that might help users save and manage costs for you service here. |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md | First, validate that the new platform update which contains Debian 11 has reache Next, create a deployment slot to test that your application works properly with Debian 11 before applying the change to production. 1. [Create a deployment slot](deploy-staging-slots.md#add-a-slot) if you do not already have one, and clone your settings from the production slot. A deployment slot will allow you to safely test changes to your application (such as upgrading to Debian 11) and swap those changes into production after review. -1. To upgrade to Debian 11 (Bullseye), create an app setting on your slot named `ORYX_DEFAULT_OS` with a value of `bullseye`. +1. To upgrade to Debian 11 (Bullseye), create an app setting on your slot named `WEBSITE_LINUX_OS_VERSION` with a value of `DEBIAN|BULLSEYE`. ```bash- az webapp config appsettings set -g MyResourceGroup -n MyUniqueApp --settings ORYX_DEFAULT_OS=bullseye + az webapp config appsettings set -g MyResourceGroup -n MyUniqueApp --settings WEBSITE_LINUX_OS_VERSION="DEBIAN|BULLSEYE" ``` 1. Deploy your application to the deployment slot using the tool of your choice (VS Code, Azure CLI, GitHub Actions, etc.) 1. Confirm your application is functioning as expected in the deployment slot.-1. [Swap your production and staging slots](deploy-staging-slots.md#swap-two-slots). This will apply the `ORYX_DEFAULT_OS=bullseye` app setting to production. +1. [Swap your production and staging slots](deploy-staging-slots.md#swap-two-slots). This will apply the `WEBSITE_LINUX_OS_VERSION=DEBIAN|BULLSEYE` app setting to production. 1. Delete the deployment slot if you are no longer using it. ##### Resources |
app-service | Tutorial Java Quarkus Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md | mvn clean package The final result will be a JAR file in the `target/` subfolder. -To deploy applications to Azure App Service, developers can use the [Maven Plugin for App Service](/learn/modules/publish-web-app-with-maven-plugin-for-azure-app-service/), [VSCode Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice), or the Azure CLI to deploy apps. Use the following command to deploy our app to the App Service: +To deploy applications to Azure App Service, developers can use the [Maven Plugin for App Service](/training/modules/publish-web-app-with-maven-plugin-for-azure-app-service/), [VSCode Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice), or the Azure CLI to deploy apps. Use the following command to deploy our app to the App Service: ```azurecli az webapp deploy \ and Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"] -> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux) +> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux) |
app-service | Tutorial Networking Isolate Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-networking-isolate-vnet.md | Because your Key Vault and Cognitive Services resources will sit behind [private az webapp config appsettings set --resource-group $groupName --name $appName --settings CS_ACCOUNT_NAME="@Microsoft.KeyVault(SecretUri=$csResourceKVUri)" CS_ACCOUNT_KEY="@Microsoft.KeyVault(SecretUri=$csKeyKVUri)" ``` - <!-- If above is not run then it takes a whole day for references to update? https://docs.microsoft.com/en-us/azure/app-service/app-service-key-vault-references#rotation --> + <!-- If above is not run then it takes a whole day for references to update? https://learn.microsoft.com/azure/app-service/app-service-key-vault-references#rotation --> > [!NOTE] > Again, you can observe the behavior change in the sample app. You can no longer load the app because it can no longer access the key vault references. The app has lost its connectivity to the key vault through the shared networking. |
app-service | Tutorial Nodejs Mongodb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md | -This article assumes you're already familiar with [Node.js development](/learn/paths/build-javascript-applications-nodejs/) and have Node and MongoDB installed locally. You'll also need an Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/nodejs/). +This article assumes you're already familiar with [Node.js development](/training/paths/build-javascript-applications-nodejs/) and have Node and MongoDB installed locally. You'll also need an Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/nodejs/). ## Sample application Most of the time taken by the two-job process is spent uploading and download ar > [JavaScript on Azure developer center](/azure/developer/javascript) > [!div class="nextstepaction"]-> [Configure Node.js app in App Service](./configure-language-nodejs.md) +> [Configure Node.js app in App Service](./configure-language-nodejs.md) |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | In this tutorial, you'll deploy a data-driven Python web app (**[Django](https:/ **To complete this tutorial, you'll need:** * An Azure account with an active subscription exists. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).-* Knowledge of Python with Flask development or [Python with Django development](/learn/paths/django-create-data-driven-websites/) +* Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/) * [Python 3.7 or higher](https://www.python.org/downloads/) installed locally. * [PostgreSQL](https://www.postgresql.org/download/) installed locally. |
app-service | Tutorial Send Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md | var jsonData = JsonSerializer.Serialize(new }); HttpResponseMessage result = await client.PostAsync(- // Requires DI configuration to access app settings. See https://docs.microsoft.com/azure/app-service/configure-language-dotnetcore#access-environment-variables + // Requires DI configuration to access app settings. See https://learn.microsoft.com/azure/app-service/configure-language-dotnetcore#access-environment-variables _configuration["LOGIC_APP_URL"], new StringContent(jsonData, Encoding.UTF8, "application/json")); |
application-gateway | Application Gateway Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md | The access log is generated only if you've enabled it on each Application Gatewa } } ```+> [!Note] +>Access logs with clientIP value 127.0.0.1 originate from an internal security process running on the application gateway instances. You can safely ignore these log entries. ### Performance log |
application-gateway | Configuration Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md | Subnet Size /24 = 255 IP addresses - 5 reserved from the platform = 250 availabl > [!TIP] > It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway) +### Virtual network permission ++Since application gateway resources are deployed within a virtual network resource, Application Gateway performs a check to verify the permission on the provided virtual network resource. This is verified during both create and manage operations. ++You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that users or Service Principals who operate application gateways have at least **Microsoft.Network/virtualNetworks/subnets/join/action** or some higher permission such as the built-in [Network contributor](../role-based-access-control/built-in-roles.md) role on the virtual network. Visit [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md) to know more on subnet permissions. ++If a [built-in](../role-based-access-control/built-in-roles.md) role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md) for this purpose. + ## Network security groups Network security groups (NSGs) are supported on Application Gateway. But there are some restrictions: |
application-gateway | Monitor Application Gateway Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md | For more information, see a list of [all platform metrics supported in Azure Mon For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics). -<!-- See https://docs.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. --> +<!-- See https://learn.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. --> Azure Application Gateway supports dimensions for some of the metrics in Azure Monitor. Each metric includes a description that explains the available dimensions specifically for that metric. Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-mon This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Application Gateway and available for query by Log Analytics. -<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com. +<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com. --> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. --> sslEnabled_s | Does the client request have SSL enabled| <!-- replace below with the proper link to your main monitoring service article --> - See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources. |
application-gateway | Monitor Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md | Resource Logs are not collected and stored until you create a diagnostic setting See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Application Gateway are listed in [Azure Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs). -<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://docs.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. --> +<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://learn.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. --> The metrics and logs you can collect are discussed in the following sections. The following tables list common and recommended alert rules for Application Gat - See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway. -- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources. |
application-gateway | Overview V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md | An Azure PowerShell script is available in the PowerShell gallery to help you mi Depending on your requirements and environment, you can create a test Application Gateway using either the Azure portal, Azure PowerShell, or Azure CLI. - [Tutorial: Create an application gateway that improves web application access](tutorial-autoscale-ps.md)-- [Learn module: Introduction to Azure Application Gateway](/learn/modules/intro-to-azure-application-gateway)+- [Learn module: Introduction to Azure Application Gateway](/training/modules/intro-to-azure-application-gateway) |
application-gateway | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md | Depending on your requirements and environment, you can create a test Applicatio - [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md) - [Quickstart: Direct web traffic with Azure Application Gateway - Azure PowerShell](quick-create-powershell.md) - [Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI](quick-create-cli.md)-- [Learn module: Introduction to Azure Application Gateway](/learn/modules/intro-to-azure-application-gateway)+- [Learn module: Introduction to Azure Application Gateway](/training/modules/intro-to-azure-application-gateway) - [How an application gateway works](how-application-gateway-works.md) - [Frequently asked questions about Azure Application Gateway](application-gateway-faq.yml) |
application-gateway | Private Link Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md | To configure Private link on an existing Application Gateway via Azure PowerShel ```azurepowershell # Disable Private Link Service Network Policies-# https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy +# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy $net =@{ Name = 'AppGW-PL-PSH' ResourceGroupName = 'AppGW-PL-PSH-RG' Set-AzApplicationGatewayFrontendIPConfig -ApplicationGateway $agw -Name "appGwPu Set-AzApplicationGateway -ApplicationGateway $agw # Disable Private Endpoint Network Policies-# https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy +# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy $net =@{ Name = 'AppGW-PL-Endpoint-PSH-VNET' ResourceGroupName = 'AppGW-PL-Endpoint-PSH-RG' To configure Private link on an existing Application Gateway via Azure CLI, the ```azurecli # Disable Private Link Service Network Policies-# https://docs.microsoft.com/en-us/azure/private-link/disable-private-endpoint-network-policy +# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy az network vnet subnet update \ --name AppGW-PL-Subnet \ --vnet-name AppGW-PL-CLI-VNET \ az network application-gateway private-link list \ # Disable Private Endpoint Network Policies-# https://docs.microsoft.com/en-us/azure/private-link/disable-private-endpoint-network-policy +# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy az network vnet subnet update \ --name MySubnet \ --vnet-name AppGW-PL-Endpoint-CLI-VNET \ |
applied-ai-services | Compose Custom Models V2 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v2-1.md | Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected s [Get started with Train with labels](label-tool.md) -> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player] +> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player] ## Create a composed model |
applied-ai-services | Label Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md | keywords: document processing In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data. -> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player] +> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player] ## Prerequisites |
applied-ai-services | How To Multiple Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-multiple-resources.md | The **getimmersivereaderlaunchparams** API endpoint should be secured behind som .then(function (response) { const token = response["token"]; const subdomain = response["subdomain"];- // Learn more about chunk usage and supported MIME types https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#chunk + // Learn more about chunk usage and supported MIME types https://learn.microsoft.com/azure/cognitive-services/immersive-reader/reference#chunk const data = { Title: $("#ir-title").text(), chunks: [{ The **getimmersivereaderlaunchparams** API endpoint should be secured behind som mimeType: "text/html" }] };- // Learn more about options https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#options + // Learn more about options https://learn.microsoft.com/azure/cognitive-services/immersive-reader/reference#options const options = { "onExit": exitCallback, "uiZIndex": 2000 The **getimmersivereaderlaunchparams** API endpoint should be secured behind som ## Next steps * Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)-* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp) +* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp) |
automation | Enable Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/enable-managed-identity.md | This Quickstart shows you how to enable managed identities for an Azure Automati 1. Set the system-assigned **Status** option to **On** and then press **Save**. When you're prompted to confirm, select **Yes**. - Your Automation account can now use the system-assigned identity, which is registered with Azure Active Directory (Azure AD) and is represented by an object ID. + Your Automation account can now use the system-assigned identity, that is registered with Azure Active Directory (Azure AD) and is represented by an object ID. :::image type="content" source="media/enable-managed-identity/system-assigned-object-id.png" alt-text="Managed identity object ID."::: |
azure-arc | Automated Integration Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md | export SPN_TENANT_ID="..." export SUBSCRIPTION_ID="..." # Optional: certain integration tests test upload to Log Analytics workspace:-# https://docs.microsoft.com/azure/azure-arc/data/upload-logs +# https://learn.microsoft.com/azure/azure-arc/data/upload-logs export WORKSPACE_ID="..." export WORKSPACE_SHARED_KEY="..." This cleans up the resource manifests deployed as part of the launcher. ## Next steps > [!div class="nextstepaction"]-> [Pre-release testing](preview-testing.md) +> [Pre-release testing](preview-testing.md) |
azure-arc | Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md | -Azure Arc-enabled data services provides you the option to connect to Azure in two different *connectivity modes*: +Azure Arc-enabled data services provide you the option to connect to Azure in two different *connectivity modes*: - Directly connected - Indirectly connected Some Azure-attached services are only available when they can be directly reache |**Feature**|**Indirectly connected**|**Directly connected**| |||| |**Automatic high availability**|Supported|Supported|-|**Self-service provisioning**|Supported<br/>Creation can be done through Azure Data Studio, the appropriate CLI, or Kubernetes native tools (helm, kubectl, oc, etc.), or using Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates. +|**Self-service provisioning**|Supported<br/>Use Azure Data Studio, the appropriate CLI, or Kubernetes native tools like Helm, `kubectl`, or `oc`, or use Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates. |**Elastic scalability**|Supported|Supported<br/>| |**Billing**|Supported<br/>Billing data is periodically exported out and sent to Azure.|Supported<br/>Billing data is automatically and continuously sent to Azure and reflected in near real time. | |**Inventory management**|Supported<br/>Inventory data is periodically exported out and sent to Azure.<br/><br/>Use client tools like Azure Data Studio, Azure Data CLI, or `kubectl` to view and manage inventory locally.|Supported<br/>Inventory data is automatically and continuously sent to Azure and reflected in near real time. As such, you can manage inventory directly from the Azure portal.| Some Azure-attached services are only available when they can be directly reache There are three connections required to services available on the Internet. These connections include: - [Microsoft Container Registry (MCR)](#microsoft-container-registry-mcr)+- [Helm chart (direct connected mode)](#helm-chart-direct-connected-mode) - [Azure Resource Manager APIs](#azure-resource-manager-apis) - [Azure monitor APIs](#azure-monitor-apis)+- [Azure Arc data processing service](#azure-arc-data-processing-service) All HTTPS connections to Azure and the Microsoft Container Registry are encrypted using SSL/TLS using officially signed and verifiable certificates. Yes None -### Helm chart used to create data controller in direct connected mode +### Helm chart (direct connected mode) -The helm chart used to provision the Azure Arc data controller bootstrapper and cluster level objects, such as custom resource definitions, cluster roles, and cluster role bindings, is pulled from an Azure Container Registry. +The Helm chart used to provision the Azure Arc data controller bootstrapper and cluster level objects, such as custom resource definitions, cluster roles, and cluster role bindings, is pulled from an Azure Container Registry. #### Connection source A computer running Azure Data Studio, or Azure CLI that is connecting to Azure. 
- `login.microsoftonline.com` - `management.azure.com`-- `san-af-eastus-prod.azurewebsites.net`-- `san-af-eastus2-prod.azurewebsites.net`-- `san-af-australiaeast-prod.azurewebsites.net`-- `san-af-centralus-prod.azurewebsites.net`-- `san-af-westus2-prod.azurewebsites.net`-- `san-af-westeurope-prod.azurewebsites.net`-- `san-af-southeastasia-prod.azurewebsites.net`-- `san-af-koreacentral-prod.azurewebsites.net`-- `san-af-northeurope-prod.azurewebsites.net`-- `san-af-westeurope-prod.azurewebsites.net`-- `san-af-uksouth-prod.azurewebsites.net`-- `san-af-francecentral-prod.azurewebsites.net` #### Protocol HTTPS Yes +To use proxy, verify that the agents meet the network requirements. See [Meet network requirements](../kubernetes/quickstart-connect-cluster.md#meet-network-requirements). + #### Authentication Azure Active Directory Azure Active Directory > For now, all browser HTTPS/443 connections to the data controller for running the command `az arcdata dc export` and Grafana and Kibana dashboards are SSL encrypted using self-signed certificates. A feature will be available in the future that will allow you to provide your own certificates for encryption of these SSL connections. Connectivity from Azure Data Studio to the Kubernetes API server uses the Kubernetes authentication and encryption that you have established. Each user that is using Azure Data Studio or CLI must have an authenticated connection to the Kubernetes API to perform many of the actions related to Azure Arc-enabled data services.++### Azure Arc data processing service ++Points to the data processing service endpoint in connection ++#### Connection target ++- `san-af-eastus-prod.azurewebsites.net` +- `san-af-eastus2-prod.azurewebsites.net` +- `san-af-australiaeast-prod.azurewebsites.net` +- `san-af-centralus-prod.azurewebsites.net` +- `san-af-westus2-prod.azurewebsites.net` +- `san-af-westeurope-prod.azurewebsites.net` +- `san-af-southeastasia-prod.azurewebsites.net` +- `san-af-koreacentral-prod.azurewebsites.net` +- `san-af-northeurope-prod.azurewebsites.net` +- `san-af-westeurope-prod.azurewebsites.net` +- `san-af-uksouth-prod.azurewebsites.net` +- `san-af-francecentral-prod.azurewebsites.net` ++#### Protocol ++HTTPS ++#### Can use proxy ++Yes ++To use proxy, verify that the agents meet the network requirements. See [Meet network requirements](../kubernetes/quickstart-connect-cluster.md#meet-network-requirements). ++#### Authentication ++None |
azure-arc | Managed Instance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-overview.md | To learn more about these capabilities, watch these introductory videos. ### Azure Arc-enabled SQL Managed Instance - indirect connected mode -> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny] +> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny] ### Azure Arc-enabled SQL Managed Instance - direct connected mode -> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny] +> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny] ## Next steps |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md | Currently, the following Azure Arc-enabled data services are available: For an introduction to how Azure Arc-enabled data services supports your hybrid work environment, see this introductory video: -> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny] +> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny] ## Always current |
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | Durable Functions is developed in collaboration with Microsoft Research. As a re The following video highlights the benefits of Durable Functions: -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player] For a more in-depth discussion of Durable Functions and the underlying technology, see the following video (it's focused on .NET, but the concepts also apply to other supported languages): -> [!VIDEO https://docs.microsoft.com/Events/dotnetConf/2018/S204/player] +> [!VIDEO https://learn.microsoft.com/Events/dotnetConf/2018/S204/player] Because Durable Functions is an advanced extension for [Azure Functions](../functions-overview.md), it isn't appropriate for all applications. For a comparison with other Azure orchestration technologies, see [Compare Azure Functions and Azure Logic Apps](../functions-compare-logic-apps-ms-flow-webjobs.md#compare-azure-functions-and-azure-logic-apps). |
azure-functions | Functions Dotnet Class Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md | As a C# developer, you may also be interested in one of the following articles: | Getting started | Concepts| Guided learning/samples | |--| -- |--| -| <ul><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md)</li><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md)</li><li>[Using command line tools](create-first-function-cli-csharp.md)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Performance considerations](functions-best-practices.md)</li><li>[Visual Studio development](functions-develop-vs.md)</li><li>[Dependency injection](functions-dotnet-dependency-injection.md)</li></ul> | <ul><li>[Create serverless applications](/learn/paths/create-serverless-applications/)</li><li>[C# samples](/samples/browse/?products=azure-functions&languages=csharp)</li></ul> | +| <ul><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md)</li><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md)</li><li>[Using command line tools](create-first-function-cli-csharp.md)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Performance considerations](functions-best-practices.md)</li><li>[Visual Studio development](functions-develop-vs.md)</li><li>[Dependency injection](functions-dotnet-dependency-injection.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[C# samples](/samples/browse/?products=azure-functions&languages=csharp)</li></ul> | Azure Functions supports C# and C# script programming languages. If you're looking for guidance on [using C# in the Azure portal](functions-create-function-app-portal.md), see [C# script (.csx) developer reference](functions-reference-csharp.md). |
azure-functions | Functions Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md | Use the following resources to get started. | | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio](./functions-create-your-first-function-visual-studio.md)<li>[Visual Studio Code](./create-first-function-vs-code-csharp.md)<li>[Command line](./create-first-function-cli-csharp.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=csharp&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=C%23) |-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).| +| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/training/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).| | **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=csharp)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [C# language reference](./functions-dotnet-class-library.md)| Use the following resources to get started. 
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-java.md)<li>[Jav) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Java) |-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/learn/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).| +| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/training/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).| | **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=java)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Java language reference](./functions-reference-java.md)| ::: zone-end Use the following resources to get started. 
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-node.md)<li>[Node.js terminal/command prompt](./create-first-function-cli-node.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript%2ctypescript&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript%2CTypeScript) |-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).| +| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/training/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/training/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/training/modules/shift-nodejs-express-apis-serverless/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).| | **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=javascript)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [JavaScript](./functions-reference-node.md) or [TypeScript](./functions-reference-node.md#typescript) language reference| ::: zone-end Use the following resources to get started. 
| | | | **Create your first function** | <li>Using [Visual Studio Code](./create-first-function-vs-code-powershell.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=powershell&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=PowerShell) |-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).| +| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/training/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/training/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/training/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).| | **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=powershell)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [PowerShell language reference](./functions-reference-powershell.md))| ::: zone-end Use the following resources to get started. 
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-python.md)<li>[Terminal/command prompt](./create-first-function-cli-python.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Python) |-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).| +| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/training/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/training/modules/create-serverless-logic-with-azure-functions/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).| | **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=python)<li>[Security](./security-concepts.md)<li>[Improve throughput performance](./python-scale-performance-reference.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Python language reference](./functions-reference-python.md)| ::: zone-end |
azure-functions | Functions Hybrid Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-hybrid-powershell.md | The following script enables PowerShell remoting, and it creates a new firewall ```powershell # For configuration of WinRM, see-# https://docs.microsoft.com/windows/win32/winrm/installation-and-configuration-for-windows-remote-management. +# https://learn.microsoft.com/windows/win32/winrm/installation-and-configuration-for-windows-remote-management. # Enable PowerShell remoting. Enable-PSRemoting -Force |
azure-functions | Functions Recover Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md | For more information about inbound rule configuration, see the "Network Security For function apps that run on Linux in a container, the `Azure Functions runtime is unreachable` error can occur as a result of problems with the container. Use the following procedure to review the container logs for errors: -1. Navigate to the Kudu endpoint for the function app, which is located at `https://scm.<FUNCTION_APP>.azurewebsites.net`, where `<FUNCTION_APP>` is the name of your app. +1. Navigate to the Kudu endpoint for the function app, which is located at `https://<FUNCTION_APP>.scm.azurewebsites.net`, where `<FUNCTION_APP>` is the name of your app. 1. Download the Docker logs .zip file and review the contents on your local computer. |
azure-functions | Functions Reference Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md | Title: Azure Functions C# script developer reference description: Understand how to develop Azure Functions using C# script. Previously updated : 12/12/2017 Last updated : 09/15/2022 # Azure Functions C# script (.csx) developer reference Last updated 12/12/2017 This article is an introduction to developing Azure Functions by using C# script (*.csx*). -Azure Functions supports C# and C# script programming languages. If you're looking for guidance on [using C# in a Visual Studio class library project](functions-develop-vs.md), see [C# developer reference](functions-dotnet-class-library.md). +Azure Functions lets you develop functions using C# in one of the following ways: ++| Type | Execution process | Code extension | Development environment | Reference | +| | - | | | | +| C# script | in-process | .csx | [Portal](functions-create-function-app-portal.md)<br/>[Core Tools](functions-run-local.md) | This article | +| C# class library | in-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md)s | [In-process C# class library functions](functions-dotnet-class-library.md) | +| C# class library (isolated process)| out-of-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated process functions](dotnet-isolated-process-guide.md) | This article assumes that you've already read the [Azure Functions developers guide](functions-reference.md). ## How .csx works -The C# script experience for Azure Functions is based on the [Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki/Introduction). Data flows into your C# function via method arguments. Argument names are specified in a `function.json` file, and there are predefined names for accessing things like the function logger and cancellation tokens. +Data flows into your C# function via method arguments. Argument names are specified in a `function.json` file, and there are predefined names for accessing things like the function logger and cancellation tokens. The *.csx* format allows you to write less "boilerplate" and focus on writing just a C# function. Instead of wrapping everything in a namespace and class, just define a `Run` method. Include any assembly references and namespaces at the beginning of the file as usual. -A function app's *.csx* files are compiled when an instance is initialized. This compilation step means things like cold start may take longer for C# script functions compared to C# class libraries. This compilation step is also why C# script functions are editable in the Azure portal, while C# class libraries are not. +A function app's *.csx* files are compiled when an instance is initialized. This compilation step means things like cold start may take longer for C# script functions compared to C# class libraries. This compilation step is also why C# script functions are editable in the Azure portal, while C# class libraries aren't. ## Folder structure -The folder structure for a C# script project looks like the following: +The folder structure for a C# script project looks like the following example: ``` FunctionsProject FunctionsProject There's a shared [host.json](functions-host-json.md) file that can be used to configure the function app. 
Each function has its own code file (.csx) and binding configuration file (function.json). -The binding extensions required in [version 2.x and later versions](functions-versions.md) of the Functions runtime are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When developing locally, you must [register binding extensions](./functions-bindings-register.md#extension-bundles). When developing functions in the Azure portal, this registration is done for you. +The binding extensions required in [version 2.x and later versions](functions-versions.md) of the Functions runtime are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When developing locally, you must [register binding extensions](./functions-bindings-register.md#extension-bundles). When you develop functions in the Azure portal, this registration is done for you. ## Binding to arguments The following assemblies are automatically added by the Azure Functions hosting * `System.Web.Http` * `System.Net.Http.Formatting` -The following assemblies may be referenced by simple-name (for example, `#r "AssemblyName"`): +The following assemblies may be referenced by simple-name, by runtime version: ++# [v2.x+](#tab/functionsv2) ++* `Newtonsoft.Json` +* `Microsoft.WindowsAzure.Storage`<sup>*</sup> ++<sup>*</sup>Removed in version 4.x of the runtime. ++# [v1.x](#tab/functionsv1) * `Newtonsoft.Json` * `Microsoft.WindowsAzure.Storage` * `Microsoft.ServiceBus` * `Microsoft.AspNet.WebHooks.Receivers` * `Microsoft.AspNet.WebHooks.Common`-* `Microsoft.Azure.NotificationHubs` +++++In code, assemblies are referenced like the following example: ++```csharp +#r "AssemblyName" +``` ## Referencing custom assemblies By default, the [supported set of Functions extension NuGet packages](functions- If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core Tools to install extensions based on bindings defined in the function.json files in your app. When using Core Tools to register extensions, make sure to use the `--csx` option. To learn more, see [Install extensions](functions-run-local.md#install-extensions). -By default, Core Tools reads the function.json files and adds the required packages to an *extensions.csproj* C# class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core Tools builds the extensions.csproj to install the required libraries. Here is an example *extensions.csproj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*: +By default, Core Tools reads the function.json files and adds the required packages to an *extensions.csproj* C# class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core Tools builds the extensions.csproj to install the required libraries. Here's an example *extensions.csproj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*: ```xml <Project Sdk="Microsoft.NET.Sdk"> By default, Core Tools reads the function.json files and adds the required packa # [v1.x](#tab/functionsv1) -Version 1.x of the Functions runtime uses a *project.json* file to define dependencies. 
Here is an example *project.json* file: +Version 1.x of the Functions runtime uses a *project.json* file to define dependencies. Here's an example *project.json* file: ```json { Extension bundles aren't supported by version 1.x. To use a custom NuGet feed, specify the feed in a *Nuget.Config* file in the function app root folder. For more information, see [Configuring NuGet behavior](/nuget/consume-packages/configuring-nuget-behavior). -If you are working on your project only in the portal, you'll need to manually create the extensions.csproj file or a Nuget.Config file directly in the site. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions). +If you're working on your project only in the portal, you'll need to manually create the extensions.csproj file or a Nuget.Config file directly in the site. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions). ## Environment variables using (var output = await binder.BindAsync<T>(new BindingTypeAttribute(...))) ``` `BindingTypeAttribute` is the .NET attribute that defines your binding and `T` is an input or output type that's-supported by that binding type. `T` cannot be an `out` parameter type (such as `out JObject`). For example, the +supported by that binding type. `T` can't be an `out` parameter type (such as `out JObject`). For example, the Mobile Apps table output binding supports [six output types](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs#L17-L22), but you can only use [ICollector\<T>](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/ICollector.cs) public static async Task Run(string input, Binder binder) defines the [Storage blob](functions-bindings-storage-blob.md) input or output binding, and [TextWriter](/dotnet/api/system.io.textwriter) is a supported output binding type. -### Multiple attribute example +### Multiple attributes example The preceding example gets the app setting for the function app's main Storage account connection string (which is `AzureWebJobsStorage`). You can specify a custom app setting to use for the Storage account by adding the [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) public static async Task Run(string input, Binder binder) } ``` -The following table lists the .NET attributes for each binding type and the packages in which they are defined. +The following table lists the .NET attributes for each binding type and the packages in which they're defined. > [!div class="mx-codeBreakAll"] > | Binding | Attribute | Add reference | |
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | As an Express.js, Node.js, or JavaScript developer, if you're new to Azure Funct | Getting started | Concepts| Guided learning | | -- | -- | -- | -| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/learn/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/learn/modules/shift-nodejs-express-apis-serverless/)</li></ul> | +| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/training/modules/shift-nodejs-express-apis-serverless/)</li></ul> | ## JavaScript function basics |
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | This type of streaming logs requires that Application Insights integration be en ## Next steps -Learn how to [develop, test, and publish Azure functions by using Azure Functions core tools](/learn/modules/develop-test-deploy-azure-functions-with-core-tools/). Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli). To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues). +Learn how to [develop, test, and publish Azure functions by using Azure Functions core tools](/training/modules/develop-test-deploy-azure-functions-with-core-tools/). Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli). To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues). <!-- LINKS --> |
azure-functions | Performance Reliability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/performance-reliability.md | All functions in your local project are deployed together as a set of files to y ### Organize functions by privilege -Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/learn/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps. +Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/training/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps. ## Scalability best practices |
azure-functions | Security Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md | Permissions are effective at the function app level. The Contributor role is req #### Organize functions by privilege -Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/learn/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps. +Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/training/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps. #### Managed identities |
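The privilege-separation advice in the two rows above pairs naturally with a small sketch. The following C# in-process example, an illustration rather than code from either article, shows one way function chaining over a queue could look: the first function app only needs rights to enqueue work, while the second app holds the privileged credentials. The queue name, connection setting, and function names are hypothetical.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class SubmitOrder
{
    // Function app A: accepts the request but holds no database credentials.
    // It only needs permission to write to the shared queue.
    [FunctionName("SubmitOrder")]
    [return: Queue("orders-to-process", Connection = "SharedQueueConnection")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Queued an order for downstream processing.");
        return req.Query["orderId"];
    }
}

public static class ProcessOrder
{
    // Function app B (deployed separately): the only app whose settings hold the
    // credentials needed to act on the order.
    [FunctionName("ProcessOrder")]
    public static void Run(
        [QueueTrigger("orders-to-process", Connection = "SharedQueueConnection")] string orderId,
        ILogger log)
    {
        log.LogInformation("Processing order {OrderId} with privileged credentials.", orderId);
    }
}
```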
azure-functions | Shift Expressjs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/shift-expressjs.md | When migrating code to a serverless architecture, refactoring Express.js endpoin - **Configuration and conventions**: A Functions app uses the _function.json_ file to define HTTP verbs, define security policies, and can configure the function's [input and output](./functions-triggers-bindings.md). By default, the folder name that contains the function files defines the endpoint name, but you can change the name via the `route` property in the [function.json](./functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint) file. > [!TIP]-> Learn more through the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/). +> Learn more through the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/training/modules/shift-nodejs-express-apis-serverless/). ## Example By defining `get` in the `methods` array, the function is available to HTTP `GET ## Next steps -- Learn more with the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/)+- Learn more with the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/training/modules/shift-nodejs-express-apis-serverless/) |
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services ### [Cognitive -For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/Speech-Service/sovereign-clouds.md). +For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/speech-service/sovereign-clouds.md). ### [Cognitive -The following Translator **features aren't currently available** in Azure Government: --- Custom Translator-- Translator Hub+For feature variations and limitations, including API endpoints, see [Translator in sovereign clouds](../cognitive-services/translator/sovereign-clouds.md). ## Analytics The following Automation **features aren't currently available** in Azure Govern ### [Azure Advisor](../advisor/index.yml) -The following Azure Advisor recommendation **features aren't currently available** in Azure Government: --- Cost- - (Preview) Consider App Service stamp fee reserved capacity to save over your on-demand costs. - - (Preview) Consider Azure Data Explorer reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider Azure Synapse Analytics (formerly SQL DW) reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider Blob storage reserved capacity to save on Blob v2 and Data Lake Storage Gen2 costs. - - (Preview) Consider Blob storage reserved instance to save on Blob v2 and Data Lake Storage Gen2 costs. - - (Preview) Consider Cache for Redis reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider Cosmos DB reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider Database for MariaDB reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider Database for MySQL reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider Database for PostgreSQL reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider SQL DB reserved capacity to save over your pay-as-you-go costs. - - (Preview) Consider SQL PaaS DB reserved capacity to save over your pay-as-you-go costs. - - Consider App Service stamp fee reserved instance to save over your on-demand costs. - - Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs. - - Consider Cache for Redis reserved instance to save over your pay-as-you-go costs. - - Consider Cosmos DB reserved instance to save over your pay-as-you-go costs. - - Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs. - - Consider Database for MySQL reserved instance to save over your pay-as-you-go costs. - - Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs. - - Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs. -- Operational- - Add Azure Monitor to your virtual machine (VM) labeled as production. - - Delete and recreate your pool using a VM size that will soon be retired. - - Enable Traffic Analytics to view insights into traffic patterns across Azure resources. - - Enforce 'Add or replace a tag on resources' using Azure Policy. - - Enforce 'Allowed locations' using Azure Policy. - - Enforce 'Allowed virtual machine SKUs' using Azure Policy. 
- - Enforce 'Audit VMs that don't use managed disks' using Azure Policy. - - Enforce 'Inherit a tag from the resource group' using Azure Policy. - - Update Azure Spring Cloud API Version. - - Update your outdated Azure Spring Cloud SDK to the latest version. - - Upgrade to the latest version of the Immersive Reader SDK. -- Performance- - Accelerated Networking may require stopping and starting the VM. - - Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency. - - Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency. - - Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency. - - Consider increasing the size of your NVA to address persistent high CPU. - - Distribute data in server group to distribute workload among nodes. - - More than 75% of your queries are full scan queries. - - NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency. - - Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency. - - Reads happen on most recent data. - - Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly. - - Update Attestation API Version. - - Update Key Vault SDK Version. - - Update to the latest version of your Arista VEOS product for Accelerated Networking support. - - Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support. - - Update to the latest version of your Check Point product for Accelerated Networking support. - - Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support. - - Update to the latest version of your F5 BigIp product for Accelerated Networking support. - - Update to the latest version of your NetApp product for Accelerated Networking support. - - Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support. - - Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs. - - Use SSD Disks for your production workloads. - - vSAN capacity utilization has crossed critical threshold. -- Reliability- - Avoid hostname override to ensure site integrity. - - Check Point Virtual Machine may lose Network Connectivity. - - Drop and recreate your HDInsight clusters to apply critical updates. - - Upgrade device client SDK to a supported version for IotHub. - - Upgrade to the latest version of the Azure Connected Machine agent. --The calculation for recommending that you should right-size or shut down underutilized virtual machines in Azure Government is as follows: --- Advisor monitors your virtual machine usage for seven days and identifies low-utilization virtual machines.-- Virtual machines are considered low utilization if their CPU utilization is 5% or less and their network utilization is less than 2%, or if the current workload can be accommodated by a smaller virtual machine size.--If you want to be more aggressive at identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per subscription basis. +For feature variations and limitations, see [Azure Advisor in sovereign clouds](../advisor/advisor-sovereign-clouds.md). ### [Azure Lighthouse](../lighthouse/index.yml) |
azure-government | Connect With Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md | Review one of the following quickstarts to set up a build for your specific type $isAzureModulePresent = Get-Module -Name Az -ListAvailable if ([String]::IsNullOrEmpty($isAzureModulePresent) -eq $true) {- Write-Output "Script requires Azure PowerShell modules to be present. Obtain Azure PowerShell from https://docs.microsoft.com//powershell/azure/install-az-ps" -Verbose + Write-Output "Script requires Azure PowerShell modules to be present. Obtain Azure PowerShell from https://learn.microsoft.com//powershell/azure/install-az-ps" -Verbose return } |
azure-government | Documentation Government Overview Wwps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md | For most of these scenarios, Microsoft and its partners offer a customer-managed ### Machine learning model training -[Artificial intelligence](/learn/modules/azure-artificial-intelligence/1-introduction-to-azure-artificial-intelligence) (AI) holds tremendous potential for governments. [Machine learning](/learn/modules/azure-artificial-intelligence/3-machine-learning) (ML) is a data science technique that allows computers to learn to use existing data, without being explicitly programmed, to forecast future behaviors, outcomes, and trends. Moreover, [ML technologies](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning) can discover patterns, anomalies, and predictions that can help governments in their missions. As technical barriers continue to fall, decision-makers face the opportunity to develop and explore transformative AI applications. There are five main vectors that can make it easier, faster, and cheaper to adopt ML: +[Artificial intelligence](/training/modules/azure-artificial-intelligence/1-introduction-to-azure-artificial-intelligence) (AI) holds tremendous potential for governments. [Machine learning](/training/modules/azure-artificial-intelligence/3-machine-learning) (ML) is a data science technique that allows computers to learn to use existing data, without being explicitly programmed, to forecast future behaviors, outcomes, and trends. Moreover, [ML technologies](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning) can discover patterns, anomalies, and predictions that can help governments in their missions. As technical barriers continue to fall, decision-makers face the opportunity to develop and explore transformative AI applications. There are five main vectors that can make it easier, faster, and cheaper to adopt ML: - Unsupervised learning - Reducing need for training data Synthetic data can exist in several forms, including text, audio, video, and hyb ### Knowledge mining -The exponential growth of unstructured data gathering in recent years has created many analytical problems for government agencies. This problem intensifies when data sets come from diverse sources such as text, audio, video, imaging, and so on. [Knowledge mining](/learn/modules/azure-artificial-intelligence/2-knowledge-mining) is the process of discovering useful knowledge from a collection of diverse data sources. This widely used data mining technique is a process that includes data preparation and selection, data cleansing, incorporation of prior knowledge on data sets, and interpretation of accurate solutions from the observed results. This process has proven to be useful for large volumes of data in different government agencies. +The exponential growth of unstructured data gathering in recent years has created many analytical problems for government agencies. This problem intensifies when data sets come from diverse sources such as text, audio, video, imaging, and so on. [Knowledge mining](/training/modules/azure-artificial-intelligence/2-knowledge-mining) is the process of discovering useful knowledge from a collection of diverse data sources. 
This widely used data mining technique is a process that includes data preparation and selection, data cleansing, incorporation of prior knowledge on data sets, and interpretation of accurate solutions from the observed results. This process has proven to be useful for large volumes of data in different government agencies. For instance, captured data from the field often includes documents, pamphlets, letters, spreadsheets, propaganda, videos, and audio files across many disparate structured and unstructured formats. Buried within the data are [actionable insights](https://www.youtube.com/watch?v=JFdF-Z7ypQo) that can enhance effective and timely response to crisis and drive decisions. The objective of knowledge mining is to enable decisions that are better, faster, and more humane by implementing proven commercial algorithm-based technologies. |
azure-government | Documentation Government Welcome | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-welcome.md | The following video provides a good introduction to Azure Government: </br> -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player] ## Compare Azure Government and global Azure |
azure-maps | About Azure Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md | The following video explains Azure Maps in depth: </br> -> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny] +> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny] ## Map controls |
azure-maps | Clustering Point Data Android Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md | When visualizing many data points on the map, data points may overlap over each </br> ->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny] +>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny] ## Prerequisites |
azure-maps | Clustering Point Data Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md | When visualizing many data points on the map, data points may overlap over each </br> ->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny] +>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny] ## Enabling clustering on a data source |
azure-maps | Data Driven Style Expressions Android Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md | This video provides an overview of data-driven styling in Azure Maps. </br> ->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny] +>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny] ## Data expressions |
azure-maps | Data Driven Style Expressions Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md | This video provides an overview of data-driven styling in the Azure Maps Web SDK </br> ->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny] +>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny] Expressions are represented as JSON arrays. The first element of an expression in the array is a string that specifies the name of the expression operator. For example, "+" or "case". The next elements (if any) are the arguments to the expression. Each argument is either a literal value (a string, number, boolean, or `null`), or another expression array. The following pseudocode defines the basic structure of an expression. |
azure-maps | How To Request Weather Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md | This video provides examples for making REST calls to Azure Maps Weather service </br> ->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny] +>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny] ## Prerequisites |
azure-maps | How To Use Spatial Io Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md | This video provides an overview of Spatial IO module in the Azure Maps Web SDK. </br> -> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny] +> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny] > [!WARNING] > Only use data and services that are from a source you trust, especially if referencing it from another domain. The spatial IO module does take steps to minimize risk; however, the safest approach is to not allow any dangerous data into your application to begin with. |
azure-maps | Map Accessibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md | Learn about accessibility in the Web SDK modules. Learn about developing accessible apps: > [!div class="nextstepaction"]-> [Accessibility in Action Digital Badge Learning Path](https://ready.azurewebsites.net/learning/track/2940) +> [Accessibility in Action Digital Badge learning path](https://ready.azurewebsites.net/learning/track/2940) Take a look at these useful accessibility tools: > [!div class="nextstepaction"] |
azure-maps | Map Add Heat Map Layer Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer-android.md | You can use heat maps in many different scenarios, including: </br> -> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny] +> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny] ## Prerequisites |
azure-maps | Map Add Heat Map Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md | You can use heat maps in many different scenarios, including: </br> ->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny] +>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny] ## Add a heat map layer |
azure-maps | Migrate From Google Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md | The table provides a high-level list of Azure Maps features, which correspond to | Distance Matrix | ✓ | | Elevation | ✓ | | Geocoding (Forward/Reverse) | ✓ |-| Geolocation | N/A | +| Geolocation | ✓ | | Nearest Roads | ✓ | | Places Search | ✓ | | Places Details | N/A – website & phone number available | |
azure-monitor | Java Standalone Profiler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md | JFR recording can be viewed and analyzed with your preferred tool, for example [ On-demand is user-triggered profiling in real time, whereas automatic profiling runs with preconfigured triggers. -Use [Profile Now](https://github.com/johnoliver/azure-docs-pr/blob/add-java-profiler-doc/articles/azure-monitor/profiler/profiler-settings.md) for the on-demand profiling option. [Profile Now](https://github.com/johnoliver/azure-docs-pr/blob/add-java-profiler-doc/articles/azure-monitor/profiler/profiler-settings.md) will immediately profile all agents attached to the Application Insights instance. - +Use [Profile Now](../profiler/profiler-settings.md) for the on-demand profiling option. [Profile Now](../profiler/profiler-settings.md) will immediately profile all agents attached to the Application Insights instance. + Automated profiling is triggered by a breach of a resource threshold. - + ### Which Java profiling triggers can I configure? Application Insights Java Agent currently supports monitoring of CPU and memory consumption. CPU threshold is configured as a percentage of all available cores on the machine. Memory is the current Tenured memory region (OldGen) occupancy against the maximum possible size of the region. - + ### What are the required prerequisites to enable Java Profiling? Review the [Pre-requisites](#prerequisites) at the top of this article. |
azure-monitor | Resource Manager App Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-app-resource.md | param type string @description('Which Azure Region to deploy the resource to. This must be a valid Azure regionId.') param regionId string -@description('See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources.') +@description('See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources.') param tagsArray object @description('Source of Azure Resource Manager deployment') resource component 'Microsoft.Insights/components@2020-02-02' = { "tagsArray": { "type": "object", "metadata": {- "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources." + "description": "See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources." } }, "requestSource": { param type string @description('Which Azure Region to deploy the resource to. This must be a valid Azure regionId.') param regionId string -@description('See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources.') +@description('See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources.') param tagsArray object @description('Source of Azure Resource Manager deployment') resource component 'Microsoft.Insights/components@2020-02-02' = { "tagsArray": { "type": "object", "metadata": {- "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources." + "description": "See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources." } }, "requestSource": { |
azure-monitor | Tutorial Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md | -This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application deployed as an Azure Web App. This implementation utilizes an SDK-based approach, an [auto-instrumentation approach](./codeless-overview.md) is also available. +This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application deployed as an Azure Web App. This implementation uses an SDK-based approach. An [auto-instrumentation approach](./codeless-overview.md) is also available. Application Insights can collect the following telemetry from your ASP.NET Core application: Application Insights can collect the following telemetry from your ASP.NET Core > * Heartbeats > * Logs -We'll use an [ASP.NET Core MVC application](/aspnet/core/tutorials/first-mvc-app) example that targets `net6.0`. You can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md). +For a sample application, we'll use an [ASP.NET Core MVC application](/aspnet/core/tutorials/first-mvc-app) that targets `net6.0`. However, you can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md). > [!NOTE] > A preview [OpenTelemetry-based .NET offering](./opentelemetry-enable.md?tabs=net) is available. [Learn more](./opentelemetry-overview.md). We'll use an [ASP.NET Core MVC application](/aspnet/core/tutorials/first-mvc-app ## Supported scenarios -The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. Support covers the following scenarios: +The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, Application Insights can collect telemetry from it. Application Insights monitoring is supported everywhere .NET Core is supported. The following scenarios are supported: * **Operating system**: Windows, Linux, or Mac * **Hosting method**: In process or out of process * **Deployment method**: Framework dependent or self-contained-* **Web server**: IIS (Internet Information Server) or Kestrel +* **Web server**: Internet Information Server (IIS) or Kestrel * **Hosting platform**: The Web Apps feature of Azure App Service, Azure VM, Docker, Azure Kubernetes Service (AKS), and so on * **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that aren't in preview * **IDE**: Visual Studio, Visual Studio Code, or command line ## Prerequisites -If you'd like to follow along with the guidance in this article, certain pre-requisites are needed. 
+To complete this tutorial, you need: * Visual Studio 2022-* Visual Studio Workloads: ASP.NET and web development, Data storage and processing, and Azure development +* The following Visual Studio workloads: + * ASP.NET and web development + * Data storage and processing + * Azure development * .NET 6.0 * Azure subscription and user account (with the ability to create and delete resources) ## Deploy Azure resources -Please follow the guidance to deploy the sample application from its [GitHub repository.](https://github.com/gitopsbook/sample-app-deployment). +Please follow the [guidance to deploy the sample application from its GitHub repository.](https://github.com/gitopsbook/sample-app-deployment). -In order to provide globally unique names to some resources, a 5 character suffix has been assigned. Please make note of this suffix for use later on in this article. +In order to provide globally unique names to resources, a six-character suffix is assigned to some resources. Please make note of this suffix for use later on in this article. - ## Create an Application Insights resource -1. In the [Azure portal](https://portal.azure.com), locate and select the **application-insights-azure-cafe** resource group. +1. In the [Azure portal](https://portal.azure.com), select the **application-insights-azure-cafe** resource group. 2. From the top toolbar menu, select **+ Create**. -  + :::image type="content" source="media/tutorial-asp-net-core/create-resource-menu.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the + Create button highlighted on the toolbar menu." lightbox="media/tutorial-asp-net-core/create-resource-menu.png"::: -3. On the **Create a resource** screen, search for and select `Application Insights` in the marketplace search textbox. +3. On the **Create a resource** screen, search for and select **Application Insights** in the marketplace search textbox. -  + <!-- The long description for search-application-insights.png: Screenshot of the Create a resource screen in the Azure portal. The screenshot shows a search for Application Insights highlighted and Application Insights displaying in the search results, which is also highlighted. --> + :::image type="content" source="media/tutorial-asp-net-core/search-application-insights.png" alt-text="Screenshot of the Create a resource screen in the Azure portal." lightbox="media/tutorial-asp-net-core/search-application-insights.png"::: 4. On the Application Insights resource overview screen, select **Create**. -  + :::image type="content" source="media/tutorial-asp-net-core/create-application-insights-overview.png" alt-text="Screenshot of the Application Insights overview screen in the Azure portal with the Create button highlighted." lightbox="media/tutorial-asp-net-core/create-application-insights-overview.png"::: -5. On the Application Insights screen **Basics** tab. Complete the form as follows, then select the **Review + create** button. Fields not specified in the table below may retain their default values. +5. On the Application Insights screen, **Basics** tab, complete the form by using the following table, then select the **Review + create** button. Fields not specified in the table below may retain their default values. | Field | Value | |-|-| | Name | Enter `azure-cafe-application-insights-{SUFFIX}`, replacing **{SUFFIX}** with the appropriate suffix value recorded earlier. | | Region | Select the same region chosen when deploying the article resources. 
|- | Log Analytics Workspace | Select `azure-cafe-log-analytics-workspace`, alternatively a new log analytics workspace can be created here. | + | Log Analytics Workspace | Select **azure-cafe-log-analytics-workspace**. Alternatively, you can create a new log analytics workspace. | -  + :::image type="content" source="media/tutorial-asp-net-core/application-insights-basics-tab.png" alt-text="Screenshot of the Basics tab of the Application Insights screen in the Azure portal with a form populated with the preceding values." lightbox="media/tutorial-asp-net-core/application-insights-basics-tab.png"::: 6. Once validation has passed, select **Create** to deploy the resource. -  + :::image type="content" source="media/tutorial-asp-net-core/application-insights-validation-passed.png" alt-text="Screenshot of the Application Insights screen in the Azure portal. The message stating validation has passed and Create button are both highlighted." lightbox="media/tutorial-asp-net-core/application-insights-validation-passed.png"::: -7. Once deployment has completed, return to the `application-insights-azure-cafe` resource group, and select the deployed Application Insights resource. +7. Once the resource is deployed, return to the `application-insights-azure-cafe` resource group, and select the Application Insights resource you deployed. -  + :::image type="content" source="media/tutorial-asp-net-core/application-insights-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-core/application-insights-resource-group.png"::: -8. On the Overview screen of the Application Insights resource, copy the **Connection String** value for use in the next section of this article. +8. On the Overview screen of the Application Insights resource, select the **Copy to clipboard** button to copy the connection string value. You will use the connection string value in the next section of this article. -  + <!-- The long description for application-insights-connection-string-overview.png: Screenshot of the Application Insights Overview screen in the Azure portal. The screenshot shows the connection string value highlighted and the Copy to clipboard button selected and highlighted. --> + :::image type="content" source="media/tutorial-asp-net-core/application-insights-connection-string-overview.png" alt-text="Screenshot of the Application Insights Overview screen in the Azure portal." lightbox="media/tutorial-asp-net-core/application-insights-connection-string-overview.png"::: ## Configure the Application Insights connection string application setting in the web App Service -1. Return to the `application-insights-azure-cafe` resource group, locate and open the **azure-cafe-web-{SUFFIX}** App Service resource. +1. Return to the `application-insights-azure-cafe` resource group and open the **azure-cafe-web-{SUFFIX}** App Service resource. -  + :::image type="content" source="media/tutorial-asp-net-core/web-app-service-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the azure-cafe-web-{SUFFIX} resource highlighted." lightbox="media/tutorial-asp-net-core/web-app-service-resource-group.png"::: -2. From the left menu, beneath the Settings header, select **Configuration**. Then, on the **Application settings** tab, select **+ New application setting** beneath the Application settings header. +2. 
From the left menu, under the Settings section, select **Configuration**. Then, on the **Application settings** tab, select **+ New application setting** beneath the Application settings header. -  + <!-- The long description for app-service-app-setting-button.png: Screenshot of the App Service resource screen in the Azure portal. The screenshot shows Configuration in the left menu under the Settings section selected and highlighted, the Application settings tab selected and highlighted, and the + New application setting toolbar button highlighted. --> + :::image type="content" source="media/tutorial-asp-net-core/app-service-app-setting-button.png" alt-text="Screenshot of the App Service resource screen in the Azure portal." lightbox="media/tutorial-asp-net-core/app-service-app-setting-button.png"::: 3. In the Add/Edit application setting blade, complete the form as follows and select **OK**. | Field | Value | |-|-| | Name | APPLICATIONINSIGHTS_CONNECTION_STRING |- | Value | Paste the Application Insights connection string obtained in the preceding section. | + | Value | Paste the Application Insights connection string value you copied in the preceding section. | -  + :::image type="content" source="media/tutorial-asp-net-core/add-edit-app-setting.png" alt-text="Screenshot of the Add/Edit application setting blade in the Azure portal with the preceding values populated in the Name and Value fields." lightbox="media/tutorial-asp-net-core/add-edit-app-setting.png"::: 4. On the App Service Configuration screen, select the **Save** button from the toolbar menu. When prompted to save the changes, select **Continue**. -  + :::image type="content" source="media/tutorial-asp-net-core/save-app-service-configuration.png" alt-text="Screenshot of the App Service Configuration screen in the Azure portal with the Save button highlighted on the toolbar menu." lightbox="media/tutorial-asp-net-core/save-app-service-configuration.png"::: ## Install the Application Insights NuGet Package We need to configure the ASP.NET Core MVC web application to send telemetry. This is accomplished using the [Application Insights for ASP.NET Core web applications NuGet package](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore). -1. With Visual Studio, open `1 - Starter Application\src\AzureCafe.sln`. +1. In Visual Studio, open `1 - Starter Application\src\AzureCafe.sln`. -2. In the Solution Explorer panel, right-click the AzureCafe project file, and select **Manage NuGet Packages**. +2. In the Visual Studio Solution Explorer panel, right-click on the AzureCafe project file and select **Manage NuGet Packages**. -  + :::image type="content" source="media/tutorial-asp-net-core/manage-nuget-packages-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Manage NuGet Packages context menu item highlighted." lightbox="media/tutorial-asp-net-core/manage-nuget-packages-menu.png"::: -3. Select the **Browse** tab, then search for and select **Microsoft.ApplicationInsights.AspNetCore**. Select **Install**, and accept the license terms. It is recommended to use the latest stable version. Find full release notes for the SDK on the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases). +3. Select the **Browse** tab and then search for and select **Microsoft.ApplicationInsights.AspNetCore**. Select **Install**, and accept the license terms. It is recommended you use the latest stable version. 
For the full release notes for the SDK, see the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases). -  + <!-- The long description for asp-net-core-install-nuget-package.png: Screenshot that shows the NuGet Package Manager user interface in Visual Studio with the Browse tab selected. Microsoft.ApplicationInsights.AspNetCore is entered in the search box, and the Microsoft.ApplicationInsights.AspNetCore package is selected from a list of results. In the right pane, the latest stable version of the Microsoft.ApplicationInsights.AspNetCore package is selected from a drop down list and the Install button is highlighted. --> + :::image type="content" source="media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png" alt-text="Screenshot of the NuGet Package Manager user interface in Visual Studio." lightbox="media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png"::: -4. Keep Visual Studio open for the next section of the article. + Keep Visual Studio open for the next section of the article. ## Enable Application Insights server-side telemetry The Application Insights for ASP.NET Core web applications NuGet package encapsulates features to enable sending server-side telemetry to the Application Insights resource in Azure. -1. From the Visual Studio Solution Explorer, locate and open the **Program.cs** file. +1. From the Visual Studio Solution Explorer, open the **Program.cs** file. -  + :::image type="content" source="media/tutorial-asp-net-core/solution-explorer-programcs.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Program.cs file highlighted." lightbox="media/tutorial-asp-net-core/solution-explorer-programcs.png"::: -2. Insert the following code prior to the `builder.Services.AddControllersWithViews()` statement. This code automatically reads the Application Insights connection string value from configuration. The `AddApplicationInsightsTelemetry` method registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container, that will then be used to fulfill [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) and [ILogger\<TCategoryName\>](/dotnet/api/microsoft.extensions.logging.iloggerprovider) implementation requests. +2. Insert the following code prior to the `builder.Services.AddControllersWithViews()` statement. This code automatically reads the Application Insights connection string value from configuration. The `AddApplicationInsightsTelemetry` method registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container that will then be used to fulfill [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) and [ILogger\<TCategoryName\>](/dotnet/api/microsoft.extensions.logging.iloggerprovider) implementation requests. ```csharp builder.Services.AddApplicationInsightsTelemetry(); ``` -  + :::image type="content" source="media/tutorial-asp-net-core/enable-server-side-telemetry.png" alt-text="Screenshot of a code window in Visual Studio with the preceding code snippet highlighted." lightbox="media/tutorial-asp-net-core/enable-server-side-telemetry.png"::: > [!TIP]- > Learn more about [configuration options in ASP.NET Core](/aspnet/core/fundamentals/configuration). + > Learn more about the [configuration options in ASP.NET Core](/aspnet/core/fundamentals/configuration). ## Enable client-side telemetry for web applications -The preceding steps are enough to help you start collecting server-side telemetry. 
This application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md). +The preceding steps are enough to help you start collecting server-side telemetry. The sample application has client-side components. Follow the next steps to start collecting [usage telemetry](./usage-overview.md). -1. In Visual Studio Solution explorer, locate and open `\Views\_ViewImports.cshtml`. Add the following code at the end of the existing file. +1. In Visual Studio Solution Explorer, open `\Views\_ViewImports.cshtml`. ++2. Add the following code at the end of the existing file. ```cshtml @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet ``` -  + :::image type="content" source="media/tutorial-asp-net-core/view-imports-injection.png" alt-text="Screenshot of the _ViewImports.cshtml file in Visual Studio with the preceding line of code highlighted." lightbox="media/tutorial-asp-net-core/view-imports-injection.png"::: -2. To properly enable client-side monitoring for your application, the JavaScript snippet must appear in the `<head>` section of each page of your application that you want to monitor. In Visual Studio Solution Explorer, locate and open `\Views\Shared\_Layout.cshtml`, insert the following code immediately preceding the closing `<\head>` tag. +3. To properly enable client-side monitoring for your application, in Visual Studio Solution Explorer, open `\Views\Shared\_Layout.cshtml` and insert the following code immediately before the closing `<\head>` tag. This JavaScript snippet must be inserted in the `<head>` section of each page of your application that you want to monitor. ```cshtml @Html.Raw(JavaScriptSnippet.FullScript) ``` -  + :::image type="content" source="media/tutorial-asp-net-core/layout-head-code.png" alt-text="Screenshot of the _Layout.cshtml file in Visual Studio with the preceding line of code highlighted within the head section of the file." lightbox="media/tutorial-asp-net-core/layout-head-code.png"::: > [!TIP]- > As an alternative to using the `FullScript`, the `ScriptBody` is available. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy: + > An alternative to using `FullScript` is `ScriptBody`. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy: ```cshtml <script> // apply custom changes to this script tag. The preceding steps are enough to help you start collecting server-side telemetr ## Enable monitoring of database queries -When investigating causes for performance degradation, it is important to include insights into database calls. Enable monitoring through configuration of the [dependency module](./asp-net-dependencies.md). Dependency monitoring, including SQL is enabled by default. The following steps can be followed to capture the full SQL query text. +When investigating causes for performance degradation, it is important to include insights into database calls. You enable monitoring by configuring the [dependency module](./asp-net-dependencies.md). Dependency monitoring, including SQL, is enabled by default. ++Follow these steps to capture the full SQL query text. > [!NOTE] > SQL text may contain sensitive data such as passwords and PII. Be careful when enabling this feature. -1. From the Visual Studio Solution Explorer, locate and open the **Program.cs** file. +1. From the Visual Studio Solution Explorer, open the **Program.cs** file. 2. At the top of the file, add the following `using` statement. 
When investigating causes for performance degradation, it is important to includ using Microsoft.ApplicationInsights.DependencyCollector; ``` -3. Immediately following the `builder.Services.AddApplicationInsightsTelemetry()` code, insert the following to enable SQL command text instrumentation. +3. To enable SQL command text instrumentation, insert the following code immediately after the `builder.Services.AddApplicationInsightsTelemetry()` code. ```csharp builder.Services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) => { module.EnableSqlCommandTextInstrumentation = true; }); ``` -  + :::image type="content" source="media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png" alt-text="Screenshot of a code window in Visual Studio with the preceding code highlighted." lightbox="media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png"::: ## Run the Azure Cafe web application -After the web application code is deployed, telemetry will flow to Application Insights. The Application Insights SDK automatically collects incoming web requests to your application. +After you deploy the web application code, telemetry will flow to Application Insights. The Application Insights SDK automatically collects incoming web requests to your application. -1. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu. +1. From the Visual Studio Solution Explorer, right-click on the **AzureCafe** project and select **Publish** from the context menu. -  + :::image type="content" source="media/tutorial-asp-net-core/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-core/web-project-publish-context-menu.png"::: 2. Select **Publish** to promote the new code to the Azure App Service. -  + :::image type="content" source="media/tutorial-asp-net-core/publish-profile.png" alt-text="Screenshot of the AzureCafe publish profile with the Publish button highlighted." lightbox="media/tutorial-asp-net-core/publish-profile.png"::: -3. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application. + When the Azure Cafe web application is successfully published, a new browser window opens to the Azure Cafe web application. -  + :::image type="content" source="media/tutorial-asp-net-core/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-core/azure-cafe-index.png"::: -4. Perform various activities in the web application to generate some telemetry. +3. To generate some telemetry, follow these steps in the web application to add a review. - 1. Select **Details** next to a Cafe to view its menu and reviews. + 1. To view a cafe's menu and reviews, select **Details** next to a cafe. -  + :::image type="content" source="media/tutorial-asp-net-core/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-core/cafe-details-button.png"::: - 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review. + 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review. 
-  + :::image type="content" source="media/tutorial-asp-net-core/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details screen in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-core/cafe-add-review-button.png"::: - 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. Once completed, select **Add review**. + 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. When finished, select **Add review**. -  + :::image type="content" source="media/tutorial-asp-net-core/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-core/create-a-review-dialog.png"::: - 4. Repeat adding reviews as desired to generate additional telemetry. + 4. If you need to generate additional telemetry, add additional reviews. ### Live metrics -[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. It might take a few minutes for telemetry to appear in the portal and analytics, but Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry like Requests, Dependencies, and Traces. +You can use [Live Metrics](./live-stream.md) to quickly verify if Application Insights monitoring is configured correctly. Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry such as Requests, Dependencies, and Traces. Note that it might take a few minutes for the telemetry to appear in the portal and analytics. -### Application map +### Viewing the application map The sample application makes calls to multiple Azure resources, including Azure SQL, Azure Blob Storage, and the Azure Language Service (for review sentiment analysis). - -Application Insights introspects incoming telemetry data and is able to generate a visual map of detected system integrations. +Application Insights introspects the incoming telemetry data and is able to generate a visual map of the system integrations it detects. 1. Access and log into the [Azure portal](https://portal.azure.com). -2. Open the sample application resource group `application-insights-azure-cafe`. +2. Open the resource group for the sample application, which is `application-insights-azure-cafe`. 3. From the list of resources, select the `azure-cafe-insights-{SUFFIX}` Application Insights resource. -4. Select **Application map** from the left menu, beneath the **Investigate** heading. Observe the generated Application map. +4. From the left menu, beneath the **Investigate** heading, select **Application map**. Observe the generated Application map. -  + :::image type="content" source="media/tutorial-asp-net-core/application-map.png" alt-text="Screenshot of the Application Insights application map in the Azure portal." lightbox="media/tutorial-asp-net-core/application-map.png"::: ### Viewing HTTP calls and database SQL command text 1. In the Azure portal, open the Application Insights resource. -2. Beneath the **Investigate** header on the left menu, select **Performance**. +2. On the left menu, beneath the **Investigate** header, select **Performance**. -3. The **Operations** tab contains details of the HTTP calls received by the application. You can also toggle between Server and Browser (client-side) views of data. +3. 
The **Operations** tab contains details of the HTTP calls received by the application. To toggle between Server and Browser (client-side) views of the data, use the Server/Browser toggle. -  + <!-- The long description for server-performance.png: Screenshot of the Application Insights Performance screen in the Azure portal. The screenshot shows the Server/Browser toggle and HTTP calls received by the application highlighted. --> + :::image type="content" source="media/tutorial-asp-net-core/server-performance.png" alt-text="Screenshot of the Performance screen in the Azure portal." lightbox="media/tutorial-asp-net-core/server-performance.png"::: 4. Select an Operation from the table, and choose to drill into a sample of the request.+ + <!-- The long description for select-operation-performance.png: Screenshot of the Application Insights Performance screen in the Azure portal. The screenshot shows a POST operation and a sample operation from the suggested list selected and highlighted and the Drill into samples button is highlighted. --> + :::image type="content" source="media/tutorial-asp-net-core/select-operation-performance.png" alt-text="Screenshot of the Application Insights Performance screen in the Azure portal with operations and sample operations listed." lightbox="media/tutorial-asp-net-core/select-operation-performance.png"::: -  --5. The End-to-end transaction displays for the selected request. In this case, a review was created including an image, thus it includes calls to Azure Storage, the Language Service (for sentiment analysis), as well as database calls into SQL Azure to persist the review. In this example, the first selected Event displays information relative to the HTTP POST call. + The end-to-end transaction displays for the selected request. In this case, a review was created, including an image, so it includes calls to Azure Storage and the Language Service (for sentiment analysis). It also includes database calls into SQL Azure to persist the review. In this example, the first selected Event displays information relative to the HTTP POST call. -  + :::image type="content" source="media/tutorial-asp-net-core/e2e-http-call.png" alt-text="Screenshot of the end-to-end transaction in the Azure portal with the HTTP Post call selected." lightbox="media/tutorial-asp-net-core/e2e-http-call.png"::: -6. Select a SQL item to review the SQL command text issued to the database. +5. Select a SQL item to review the SQL command text issued to the database. -  + :::image type="content" source="media/tutorial-asp-net-core/e2e-db-call.png" alt-text="Screenshot of the end-to-end transaction in the Azure portal with SQL command details." lightbox="media/tutorial-asp-net-core/e2e-db-call.png"::: -7. Optionally select Dependency (outgoing) requests to Azure Storage or the Language Service. +6. Optionally, select the Dependency (outgoing) requests to Azure Storage or the Language Service. -8. Return to the **Performance** screen, and select the **Dependencies** tab to investigate calls into external resources. Notice the Operations table includes calls into Sentiment Analysis, Blob Storage, and Azure SQL. +7. Return to the **Performance** screen and select the **Dependencies** tab to investigate calls into external resources. Notice the Operations table includes calls into Sentiment Analysis, Blob Storage, and Azure SQL. 
-  + :::image type="content" source="media/tutorial-asp-net-core/performance-dependencies.png" alt-text="Screenshot of the Application Insights Performance screen in the Azure portal with the Dependencies tab selected and the Operations table highlighted." lightbox="media/tutorial-asp-net-core/performance-dependencies.png"::: ## Application logging with Application Insights ### Logging overview -Application Insights is one type of [logging provider](/dotnet/core/extensions/logging-providers) available to ASP.NET Core applications that becomes available to applications when the [Application Insights for ASP.NET Core](#install-the-application-insights-nuget-package) NuGet package is installed and [server-side telemetry collection enabled](#enable-application-insights-server-side-telemetry). As a reminder, the following code in **Program.cs** registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container. +Application Insights is one type of [logging provider](/dotnet/core/extensions/logging-providers) available to ASP.NET Core applications that becomes available to applications when the [Application Insights for ASP.NET Core](#install-the-application-insights-nuget-package) NuGet package is installed and [server-side telemetry collection is enabled](#enable-application-insights-server-side-telemetry). ++As a reminder, the following code in **Program.cs** registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container. ```csharp builder.Services.AddApplicationInsightsTelemetry(); ``` -With the `ApplicationInsightsLoggerProvider` registered as the logging provider, the app is ready to log to Application Insights using either constructor injection with <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601>. +With the `ApplicationInsightsLoggerProvider` registered as the logging provider, the app is ready to log into Application Insights by using either constructor injection with <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601>. > [!NOTE]-> With default settings, the logging provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. +> By default, the logging provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. -Consider the following example controller that demonstrates the injection of ILogger which is resolved with the `ApplicationInsightsLoggerProvider` that is registered with the dependency injection container. Observe in the **Get** method that an Informational, Warning and Error message are recorded. +Consider the following example controller. It demonstrates the injection of ILogger, which is resolved with the `ApplicationInsightsLoggerProvider` that is registered with the dependency injection container. Observe in the **Get** method that an Informational, Warning, and Error message are recorded. > [!NOTE] > By default, the Information level trace will not be recorded. Only the Warning and above levels are captured. The ValuesController above is deployed with the sample application and is locate 1. Using an internet browser, open the sample application. In the address bar, append `/api/Values` and press <kbd>Enter</kbd>. 
-  + :::image type="content" source="media/tutorial-asp-net-core/values-api-url.png" alt-text="Screenshot of a browser window with /api/Values appended to the URL in the address bar." lightbox="media/tutorial-asp-net-core/values-api-url.png"::: ++2. In the [Azure portal](https://portal.azure.com), wait a few moments and then select the **azure-cafe-insights-{SUFFIX}** Application Insights resource. -2. Wait a few moments, then return to the **Application Insights** resource in the [Azure portal](https://portal.azure.com). + :::image type="content" source="media/tutorial-asp-net-core/application-insights-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-core/application-insights-resource-group.png"::: -  +3. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**. + +4. In the **Tables** pane, under the **Application Insights** tree, double-click on the **traces** table. -3. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **traces** table, located under the **Application Insights** tree. Modify the query to retrieve traces for the **Values** controller as follows, then select **Run** to filter the results. +5. Modify the query to retrieve traces for the **Values** controller as follows, then select **Run** to filter the results. ```kql traces | where operation_Name == "GET Values/Get" ``` -4. Observe the results display the logging messages present in the controller. A log severity of 2 indicates a warning level, and a log severity of 3 indicates an Error level. + The results display the logging messages present in the controller. A log severity of 2 indicates a warning level, and a log severity of 3 indicates an Error level. -5. Alternatively, the query can also be written to retrieve results based on the category of the log. By default, the category is the fully qualified name of the class where the ILogger is injected, in this case **ValuesController** (if there was a namespace associated with the class the name will be prefixed with the namespace). Re-write and run the following query to retrieve results based on category. +6. Alternatively, you can also write the query to retrieve results based on the category of the log. By default, the category is the fully qualified name of the class where the ILogger is injected. In this case, the category name is **ValuesController** (if there is a namespace associated with the class, the name will be prefixed with the namespace). Re-write and run the following query to retrieve results based on category. ```kql traces The ValuesController above is deployed with the sample application and is locate ## Control the level of logs sent to Application Insights -`ILogger` implementations have a built-in mechanism to apply [log filtering](/dotnet/core/extensions/logging#how-filtering-rules-are-applied). This filtering lets you control the logs that are sent to each registered provider, including the Application Insights provider. You can use the filtering either in configuration (using an *appsettings.json* file) or in code. For more information about log levels and guidance on appropriate use, see the [Log Level](/aspnet/core/fundamentals/logging#log-level) documentation. 
+`ILogger` implementations have a built-in mechanism to apply [log filtering](/dotnet/core/extensions/logging#how-filtering-rules-are-applied). This filtering lets you control the logs that are sent to each registered provider, including the Application Insights provider. You can use the filtering either in configuration (using an *appsettings.json* file) or in code. For more information about log levels and guidance on how to use them appropriately, see the [Log Level](/aspnet/core/fundamentals/logging#log-level) documentation. The following examples show how to apply filter rules to the `ApplicationInsightsLoggerProvider` to control the level of logs sent to Application Insights. ### Create filter rules with configuration -The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in configuration. The following section of an *appsettings.json* file sets the default log level for all providers to <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType>. The configuration for the ApplicationInsights provider specifically for categories that start with "ValuesController" override this default value with <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher. +The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in configuration. The following section of an *appsettings.json* file sets the default log level for all providers to <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType>. The configuration for the ApplicationInsights provider, specifically for categories that start with "ValuesController," overrides this default value with <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher. ```json { The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in } ``` -Deploying the sample application with the preceding code in *appsettings.json* will yield only the error trace being sent to Application Insights when interacting with the **ValuesController**. This is because the **LogLevel** for the **ValuesController** category is set to **Error**, therefore the **Warning** trace is suppressed. +Deploying the sample application with the preceding code in *appsettings.json* will yield only the error trace being sent to Application Insights when interacting with the **ValuesController**. This is because the **LogLevel** for the **ValuesController** category is set to **Error**. Therefore, the **Warning** trace is suppressed. ## Turn off logging to Application Insights -To disable logging using configuration, set all LogLevel values to "None". +To disable logging by using configuration, set all LogLevel values to "None". ```json { To disable logging using configuration, set all LogLevel values to "None". } ``` -Similarly, within code, set the default level for the `ApplicationInsightsLoggerProvider` and any subsequent log levels to **None**. +Similarly, within the code, set the default level for the `ApplicationInsightsLoggerProvider` and any subsequent log levels to **None**. ```csharp var builder = WebApplication.CreateBuilder(args); |
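The entry above describes applying `ILogger` filter rules either in configuration or in code, but only the *appsettings.json* route is quoted. The following is a minimal **Program.cs** sketch of the in-code route, not taken from the article; it assumes the `ApplicationInsightsLoggerProvider` type from the `Microsoft.Extensions.Logging.ApplicationInsights` package and mirrors the rule described above (a default of **Warning** for all providers, with **Error** and higher for the Application Insights provider on categories that start with "ValuesController").

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

var builder = WebApplication.CreateBuilder(args);

// Registers the ApplicationInsightsLoggerProvider with the dependency injection container.
builder.Services.AddApplicationInsightsTelemetry();

// Default rule for all registered logging providers: capture Warning and above.
builder.Logging.SetMinimumLevel(LogLevel.Warning);

// For the Application Insights provider only, categories that start with
// "ValuesController" must be Error or higher to be sent.
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("ValuesController", LogLevel.Error);

var app = builder.Build();
app.Run();
```

With a filter like this in place, the **Warning** trace from the **ValuesController** is suppressed and only the **Error** trace reaches Application Insights, matching the behavior described for the configuration-based rule.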
azure-monitor | Autoscale Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md | Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azu An autoscale setting has a maximum, minimum, and default value of instances. * An autoscale job always reads the associated metric to scale by, checking if it has crossed the configured threshold for scale-out or scale-in. You can view a list of metrics that autoscale can scale by at [Azure Monitor autoscaling common metrics](autoscale-common-metrics.md). * All thresholds are calculated at an instance level. For example, "scale out by one instance when average CPU > 80% when instance count is 2", means scale-out when the average CPU across all instances is greater than 80%.-* All autoscale failures are logged to the Activity Log. You can then configure an [activity log alert](../alerts/activity-log-alerts.md) so that you can be notified via email, SMS, or webhooks whenever there is an autoscale failure. -* Similarly, all successful scale actions are posted to the Activity Log. You can then configure an activity log alert so that you can be notified via email, SMS, or webhooks whenever there is a successful autoscale action. You can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting. +* All autoscale failures are logged to the Activity Log. You can then configure an [activity log alert](../alerts/activity-log-alerts.md) so that you can be notified via email, SMS, or webhooks whenever there's an autoscale failure. +* Similarly, all successful scale actions are posted to the Activity Log. You can then configure an activity log alert so that you can be notified via email, SMS, or webhooks whenever there's a successful autoscale action. You can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting. ## Autoscale best practices Use the following best practices as you use autoscale. If you have a setting that has minimum=2, maximum=2 and the current instance cou If you manually update the instance count to a value above or below the maximum, the autoscale engine automatically scales back to the minimum (if below) or the maximum (if above). For example, you set the range between 3 and 6. If you have one running instance, the autoscale engine scales to three instances on its next run. Likewise, if you manually set the scale to eight instances, on the next run autoscale will scale it back to six instances on its next run. Manual scaling is temporary unless you reset the autoscale rules as well. ### Always use a scale-out and scale-in rule combination that performs an increase and decrease-If you use only one part of the combination, autoscale will only take action in a single direction (scale out, or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. This is not optimal, ideally you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale down, so you can realize cost savings. +If you use only one part of the combination, autoscale will only take action in a single direction (scale out, or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. 
This isn't optimal; ideally, you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale down, so you can realize cost savings. -When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, it's possible that the scale-in and scale-out conditions could be met at the same time resulting in some level of flapping. For example, the following rule combination is *not* recommended because there is no scale-in rule for memory usage: +When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, it's possible that the scale-in and scale-out conditions could be met at the same time resulting in some level of flapping. For example, the following rule combination *isn't* recommended because there's no scale-in rule for memory usage: * If CPU > 90%, scale-out by 1 * If Memory > 90%, scale-out by 1 In this example, you can have a situation in which the memory usage is over 90% ### Choose the appropriate statistic for your diagnostics metric For diagnostics metrics, you can choose among *Average*, *Minimum*, *Maximum* and *Total* as a metric to scale by. The most common statistic is *Average*. -- ### Considerations for scaling threshold values for special metrics For special metrics such as Storage or Service Bus Queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric. Let's illustrate it with an example to ensure you understand the behavior better. Consider the following sequence: 1. There are two storage queue instances.-2. Messages keep coming and when you review the storage queue, the total count reads 50. You might assume that autoscale should start a scale-out action. However, note that it is still 50/2 = 25 messages per instance. So, scale-out does not occur. For the first scale-out to happen, the total message count in the storage queue should be 100. +2. Messages keep coming and when you review the storage queue, the total count reads 50. You might assume that autoscale should start a scale-out action. However, note that it's still 50/2 = 25 messages per instance. So, scale-out doesn't occur. For the first scale-out to happen, the total message count in the storage queue should be 100. 3. Next, assume that the total message count reaches 100.-4. A third storage queue instance is added due to a scale-out action. The next scale-out action will not happen until the total message count in the queue reaches 150 because 150/3 = 50. +4. A third storage queue instance is added due to a scale-out action. The next scale-out action won't happen until the total message count in the queue reaches 150 because 150/3 = 50. 5. Now the number of messages in the queue gets smaller. With three instances, the first scale-in action happens when the total messages in all queues add up to 30 because 30/3 = 10 messages per instance, which is the scale-in threshold. -### Considerations for scaling when multiple profiles are configured in an autoscale setting -In an autoscale setting, you can choose a default profile, which is always applied without any dependency on schedule or time, or you can choose a recurring profile or a profile for a fixed period with a date and time range. --When autoscale service processes them, it always checks in the following order: --1. Fixed Date profile -2. Recurring profile -3. 
Default ("Always") profile --If a profile condition is met, autoscale does not check the next profile condition below it. Autoscale only processes one profile at a time. This means if you want to also include a processing condition from a lower-tier profile, you must include those rules as well in the current profile. --Let's review using an example: --The image below shows an autoscale setting with a default profile of minimum instances = 2 and maximum instances = 10. In this example, rules are configured to scale out when the message count in the queue is greater than 10 and scale-in when the message count in the queue is less than three. So now the resource can scale between two and ten instances. --In addition, there is a recurring profile set for Monday. It is set for minimum instances = 3 and maximum instances = 10. This means on Monday, the first-time autoscale checks for this condition, if the instance count is two, it scales to the new minimum of three. As long as autoscale continues to find this profile condition matched (Monday), it only processes the CPU-based scale-out/in rules configured for this profile. At this time, it does not check for the queue length. However, if you also want the queue length condition to be checked, you should include those rules from the default profile as well in your Monday profile. --Similarly, when autoscale switches back to the default profile, it first checks if the minimum and maximum conditions are met. If the number of instances at the time is 12, it scales in to 10, the maximum allowed for the default profile. -- - ### Considerations for scaling when multiple rules are configured in a profile+ There are cases where you may have to set multiple rules in a profile. The following autoscale rules are used by the autoscale engine when multiple rules are set. On *scale-out*, autoscale runs if any rule is met. Then the follow occurs: On the other hand, if CPU is 25% and memory is 51% autoscale does **not** scale-in. In order to scale-in, CPU must be 29% and Memory 49%. ### Always select a safe default instance count-The default instance count is important because autoscale scales your service to that count when metrics are not available. Therefore, select a default instance count that's safe for your workloads. ++The default instance count is important because autoscale scales your service to that count when metrics aren't available. Therefore, select a default instance count that's safe for your workloads. ### Configure autoscale notifications+ Autoscale will post to the Activity Log if any of the following conditions occur: * Autoscale issues a scale operation. * Autoscale service successfully completes a scale action. * Autoscale service fails to take a scale action.-* Metrics are not available for autoscale service to make a scale decision. +* Metrics aren't available for autoscale service to make a scale decision. * Metrics are available (recovery) again to make a scale decision.-* Autoscale detects flapping and aborts the scale attempt. You will see a log type of `Flapping` in this situation. If you see this, consider whether your thresholds are too narrow. -* Autoscale detects flapping but is still able to successfully scale. You will see a log type of `FlappingOccurred` in this situation. If you see this, the autoscale engine has attempted to scale (e.g. from 4 instances to 2), but has determined that this would cause flapping. Instead, the autoscale engine has scaled to a different number of instances (e.g. 
using 3 instances instead of 2), which no longer causes flapping, so it has scaled to this number of instances. +* Autoscale detects flapping and aborts the scale attempt. You'll see a log type of `Flapping` in this situation. If you see this, consider whether your thresholds are too narrow. +* Autoscale detects flapping but is still able to successfully scale. You'll see a log type of `FlappingOccurred` in this situation. If you see this, the autoscale engine has attempted to scale (for example, from 4 instances to 2), but has determined that this would cause flapping. Instead, the autoscale engine has scaled to a different number of instances (for example, using 3 instances instead of 2), which no longer causes flapping, so it has scaled to this number of instances. You can also use an Activity Log alert to monitor the health of the autoscale engine. Here are examples to [create an Activity Log Alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) or to [create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert). In addition to using activity log alerts, you can also configure email or webhook notifications to get notified for scale actions via the notifications tab on the autoscale setting. ## Send data securely using TLS 1.2+ To ensure the security of data in transit to Azure Monitor, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols. -The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a deadline of [June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents cannot communicate over at least TLS 1.2 you would not be able to send data to Azure Monitor Logs. +The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a deadline of [June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents can't communicate over at least TLS 1.2 you wouldn't be able to send data to Azure Monitor Logs. We recommend you do NOT explicit set your agent to only use TLS 1.2 unless absolutely necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise you may miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards. |
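The storage queue walkthrough in this entry hinges on one piece of arithmetic: the threshold is compared against the average number of messages per current instance, not against the total queue length. The following snippet only illustrates that arithmetic (it is not part of the autoscale engine or the article), assuming the same numbers as the example above and treating the thresholds as inclusive for simplicity.

```csharp
// Illustration only: autoscale evaluates the per-instance average (total / instances),
// not the raw queue total. Thresholds are shown as inclusive here.
double PerInstance(int totalMessages, int instanceCount) =>
    (double)totalMessages / instanceCount;

bool ShouldScaleOut(int totalMessages, int instanceCount, double threshold = 50) =>
    PerInstance(totalMessages, instanceCount) >= threshold;

bool ShouldScaleIn(int totalMessages, int instanceCount, double threshold = 10) =>
    PerInstance(totalMessages, instanceCount) <= threshold;

Console.WriteLine(ShouldScaleOut(50, 2));   // False: 50 / 2 = 25 messages per instance
Console.WriteLine(ShouldScaleOut(100, 2));  // True:  100 / 2 = 50, the first scale-out
Console.WriteLine(ShouldScaleOut(100, 3));  // False: with 3 instances, 150 total is needed
Console.WriteLine(ShouldScaleIn(30, 3));    // True:  30 / 3 = 10, the scale-in threshold
```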
azure-monitor | Metrics Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md | This latest update adds a new column and reorders the metrics to be alphabetical |||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region| +|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://learn.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region| |CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error| |
azure-monitor | Cost Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md | The following query can be used to make a recommendation for the optimal pricing ```kusto // Set these parameters before running query // For Pay-As-You-Go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/.-// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage. +// You can see your per-node costs in your Azure usage and charge data. For more information, see https://learn.microsoft.com/azure/cost-management-billing/understand/download-azure-daily-usage. let PerNodePrice = 15.; // Monthly price per monitored node let PerNodeOveragePrice = 2.30; // Price per GB for data overage in the Per Node pricing tier let PerGBPrice = 2.30; // Enter the Pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/) |
azure-monitor | Log Powerbi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md | To learn more and how to configure incremental refresh, see [Power BI Datasets a After your data is sent to Power BI, you can continue to use Power BI to create reports and dashboards. -For more information, see [this guide on how to create your first Power BI model and report](/learn/modules/build-your-first-power-bi-report/). +For more information, see [this guide on how to create your first Power BI model and report](/training/modules/build-your-first-power-bi-report/). ## Excel integration |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | This article lists significant changes to Azure Monitor documentation. | Article | Description | ||| |[Log Analytics agent overview](agents/log-analytics-agent.md)|Restructured the Agents section and rewrote the Agents Overview article to reflect that Azure Monitor Agent is the primary agent for collecting monitoring data.|-|[Dependency analysis in Azure Migrate Discovery and assessment - Azure Migrate](https://docs.microsoft.com/azure/migrate/concepts-dependency-visualization)|Revamped the guidance for migrating from Log Analytics Agent to Azure Monitor Agent.| +|[Dependency analysis in Azure Migrate Discovery and assessment - Azure Migrate](https://learn.microsoft.com/azure/migrate/concepts-dependency-visualization)|Revamped the guidance for migrating from Log Analytics Agent to Azure Monitor Agent.| ### Alerts All references to unsupported versions of .NET and .NET CORE have been scrubbed | Article | Description | |:|:| | [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) | New article describing process to replace VM guest health with alert rules |-| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement | +| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement | |
azure-netapp-files | Azacsnap Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md | To troubleshoot this error: 1. Check the log file to see if the service principal has expired. The following log file example shows that the client secret keys are expired. ```output- [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://docs.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials + [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://learn.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials ``` > [!TIP] |
azure-portal | Azure Portal Quickstart Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md | You can also select **Browse our full Azure catalog** to see all Azure learning ## Next steps * Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/).-* Unlock your cloud skills with more [Learn modules](/learn/azure/). +* Unlock your cloud skills with more [Learn modules](/training/azure/). |
azure-portal | Azure Portal Safelist Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md | datalake.azure.net (Azure Data Lake Service) dev.azure.com (Azure DevOps) dev.azuresynapse.net (Azure Synapse) digitaltwins.azure.net (Azure Digital Twins)-docs.microsoft.com (Azure documentation) +learn.microsoft.com (Azure documentation) elm.iga.azure.com (Azure AD) eventhubs.azure.net (Azure Event Hubs) functions.azure.com (Azure Functions) |
azure-resource-manager | Add Template To Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md | Last updated 08/03/2022 This quickstart shows you how to integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD). -It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/). +It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/training/paths/bicep-azure-pipelines/). ## Prerequisites |
azure-resource-manager | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md | This article recommends practices to follow when developing your Bicep files. Th ### Training resources -If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/). +If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/training/modules/structure-bicep-code-collaboration/). ## Parameters |
azure-resource-manager | Bicep Functions Array | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md | Last updated 04/12/2022 # Array functions for Bicep -This article describes the Bicep functions for working with arrays. +This article describes the Bicep functions for working with arrays. The lambda functions for working with arrays can be found [here](./bicep-functions-lambda.md). ## array The output from the preceding example with the default values is: | arrayOutput | String | one | | stringOutput | String | O | +## flatten ++`flatten(arrayToFlatten)` ++Takes an array of arrays, and returns an array of sub-array elements, in the original order. Sub-arrays are only flattened once, not recursively. ++Namespace: [sys](bicep-functions.md#namespaces-for-functions). ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| arrayToFlatten |Yes |array |The array of sub-arrays to flatten.| ++### Return value ++Array ++### Example ++The following example shows how to use the flatten function. ++```bicep +param arrayToTest array = [ + ['one', 'two'] + ['three'] + ['four', 'five'] +] +output arrayOutput array = flatten(arrayToTest) +``` ++The output from the preceding example with the default values is: ++| Name | Type | Value | +| - | - | -- | +| arrayOutput | array | ['one', 'two', 'three', 'four', 'five'] | + ## indexOf `indexOf(arrayToSearch, itemToFind)` |
azure-resource-manager | Bicep Functions Lambda | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md | + + Title: Bicep functions - lambda +description: Describes the lambda functions to use in a Bicep file. +++ Last updated : 09/20/2022+++# Lambda functions for Bicep ++This article describes the lambda functions to use in Bicep. Lambda expressions (or lambda functions) are essentially blocks of code that can be passed as an argument. In Bicep, lambda expression is in this format: ++```bicep +<lambda variable> => <expression> +``` ++> [!NOTE] +> The lambda functions are only supported in Bicep CLI version 0.10.61 or newer. ++## Limitations ++Bicep lambda function has these limitations: ++- Lambda expression can only be specified directly as function arguments in these functions: [`filter()`](#filter), [`map()`](#map), [`reduce()`](#reduce), and [`sort()`](#sort). +- Using lambda variables (the temporary variables used in the lambda expressions) inside resource or module array access isn't currently supported. +- Using lambda variables inside the [`listKeys`](./bicep-functions-resource.md#list) function isn't currently supported. +- Using lambda variables inside the [reference](./bicep-functions-resource.md#reference) function isn't currently supported. ++## filter ++`filter(inputArray, lambda expression)` ++Filters an array with a custom filtering function. ++Namespace: [sys](bicep-functions.md#namespaces-for-functions). ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to filter.| +| lambda expression |Yes |expression |The lambda expression applied to each input array element. If false, the item will be filtered out of the output array.| ++### Return value ++An array. ++### Examples ++The following examples show how to use the filter function. ++```bicep +var dogs = [ + { + name: 'Evie' + age: 5 + interests: ['Ball', 'Frisbee'] + } + { + name: 'Casper' + age: 3 + interests: ['Other dogs'] + } + { + name: 'Indy' + age: 2 + interests: ['Butter'] + } + { + name: 'Kira' + age: 8 + interests: ['Rubs'] + } +] ++output oldDogs array = filter(dogs, dog => dog.age >=5) +``` ++The output from the preceding example shows the dogs that are five or older: ++| Name | Type | Value | +| - | - | -- | +| oldDogs | Array | [{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] | ++```bicep +var itemForLoop = [for item in range(0, 10): item] ++output filteredLoop array = filter(itemForLoop, i => i > 5) +output isEven array = filter(range(0, 10), i => 0 == i % 2) +``` ++The output from the preceding example: ++| Name | Type | Value | +| - | - | -- | +| filteredLoop | Array | [6, 7, 8, 9] | +| isEven | Array | [0, 2, 4, 6, 8] | ++**filterdLoop** shows the numbers in an array that are greater than 5; and **isEven** shows the even numbers in the array. ++## map ++`map(inputArray, lambda expression)` ++Applies a custom mapping function to each element of an array. ++Namespace: [sys](bicep-functions.md#namespaces-for-functions). ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to map.| +| lambda expression |Yes |expression |The lambda expression applied to each input array element, in order to generate the output array.| ++### Return value ++An array. ++### Example ++The following example shows how to use the map function. 
++```bicep +var dogs = [ + { + name: 'Evie' + age: 5 + interests: ['Ball', 'Frisbee'] + } + { + name: 'Casper' + age: 3 + interests: ['Other dogs'] + } + { + name: 'Indy' + age: 2 + interests: ['Butter'] + } + { + name: 'Kira' + age: 8 + interests: ['Rubs'] + } +] ++output dogNames array = map(dogs, dog => dog.name) +output sayHi array = map(dogs, dog => 'Hello ${dog.name}!') +output mapObject array = map(range(0, length(dogs)), i => { + i: i + dog: dogs[i].name + greeting: 'Ahoy, ${dogs[i].name}!' +}) +``` ++The output from the preceding example is: ++| Name | Type | Value | +| - | - | -- | +| dogNames | Array | ["Evie","Casper","Indy","Kira"] | +| sayHi | Array | ["Hello Evie!","Hello Casper!","Hello Indy!","Hello Kira!"] | +| mapObject | Array | [{"i":0,"dog":"Evie","greeting":"Ahoy, Evie!"},{"i":1,"dog":"Casper","greeting":"Ahoy, Casper!"},{"i":2,"dog":"Indy","greeting":"Ahoy, Indy!"},{"i":3,"dog":"Kira","greeting":"Ahoy, Kira!"}] | ++**dogNames** shows the dog names from the array of objects; **sayHi** concatenates "Hello" and each of the dog names; and **mapObject** creates another array of objects. ++## reduce ++`reduce(inputArray, initialValue, lambda expression)` ++Reduces an array with a custom reduce function. ++Namespace: [sys](bicep-functions.md#namespaces-for-functions). ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to reduce.| +| initialValue |No |any |Initial value.| +| lambda expression |Yes |expression |The lambda expression used to aggregate the current value and the next value.| ++### Return value ++Any. ++### Example ++The following examples show how to use the reduce function. ++```bicep +var dogs = [ + { + name: 'Evie' + age: 5 + interests: ['Ball', 'Frisbee'] + } + { + name: 'Casper' + age: 3 + interests: ['Other dogs'] + } + { + name: 'Indy' + age: 2 + interests: ['Butter'] + } + { + name: 'Kira' + age: 8 + interests: ['Rubs'] + } +] +var ages = map(dogs, dog => dog.age) +output totalAge int = reduce(ages, 0, (cur, prev) => cur + prev) +output totalAgeAdd1 int = reduce(ages, 1, (cur, prev) => cur + prev) +``` ++The output from the preceding example is: ++| Name | Type | Value | +| - | - | -- | +| totalAge | int | 18 | +| totalAgeAdd1 | int | 19 | ++**totalAge** sums the ages of the dogs; **totalAgeAdd1** has an initial value of 1, and adds all the dog ages to the initial values. ++```bicep +output reduceObjectUnion object = reduce([ + { foo: 123 } + { bar: 456 } + { baz: 789 } +], {}, (cur, next) => union(cur, next)) +``` ++The output from the preceding example is: ++| Name | Type | Value | +| - | - | -- | +| reduceObjectUnion | object | {"foo":123,"bar":456,"baz":789} | ++The [union](./bicep-functions-object.md#union) function returns a single object with all elements from the parameters. The function call unionizes the key value pairs of the objects into a new object. ++## sort ++`sort(inputArray, lambda expression)` ++Sorts an array with a custom sort function. ++Namespace: [sys](bicep-functions.md#namespaces-for-functions). ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to sort.| +| lambda expression |Yes |expression |The lambda expression used to compare two array elements for ordering. If true, the second element will be ordered after the first in the output array.| ++### Return value ++An array. ++### Example ++The following example shows how to use the sort function. 
++```bicep +var dogs = [ + { + name: 'Evie' + age: 5 + interests: ['Ball', 'Frisbee'] + } + { + name: 'Casper' + age: 3 + interests: ['Other dogs'] + } + { + name: 'Indy' + age: 2 + interests: ['Butter'] + } + { + name: 'Kira' + age: 8 + interests: ['Rubs'] + } +] ++output dogsByAge array = sort(dogs, (a, b) => a.age < b.age) +``` ++The output from the preceding example sorts the dog objects from the youngest to the oldest: ++| Name | Type | Value | +| - | - | -- | +| dogsByAge | Array | [{"name":"Indy","age":2,"interests":["Butter"]},{"name":"Casper","age":3,"interests":["Other dogs"]},{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] | ++## Next steps ++- See [Bicep functions - arrays](./bicep-functions-array.md) for additional array related Bicep functions. |
azure-resource-manager | Bicep Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md | The following functions are available for working with arrays. All of these func * [empty](./bicep-functions-array.md#empty) * [indexOf](./bicep-functions-array.md#indexof) * [first](./bicep-functions-array.md#first)+* [flatten](./bicep-functions-array.md#flatten) * [intersection](./bicep-functions-array.md#intersection) * [last](./bicep-functions-array.md#last) * [lastIndexOf](./bicep-functions-array.md#lastindexof) The following functions are available for loading the content from external file * [loadJsonContent](bicep-functions-files.md#loadjsoncontent) * [loadTextContent](bicep-functions-files.md#loadtextcontent) +## Lambda functions ++The following functions are available for working with lambda expressions. All of these functions are in the `sys` namespace. ++* [filter](bicep-functions-lambda.md#filter) +* [map](bicep-functions-lambda.md#map) +* [reduce](bicep-functions-lambda.md#reduce) +* [sort](bicep-functions-lambda.md#sort) ++ ## Logical functions The following function is available for working with logical conditions. This function is in the `sys` namespace. |
azure-resource-manager | Child Resource Name Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/child-resource-name-type.md | This article shows different ways you can declare a child resource. ### Training resources -If you would rather learn about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates). +If you would rather learn about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/training/modules/child-extension-bicep-templates). ## Name and type pattern |
azure-resource-manager | Conditional Resource Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md | Sometimes you need to optionally deploy a resource or module in Bicep. Use the ` ### Training resources -If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/). +If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/). ## Deploy condition output mgmtStatus string = ((!empty(logAnalytics)) ? 'Enabled monitoring for VM! ## Next steps -* Review the Learn module [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/). +* Review the Learn module [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/). * For recommendations about creating Bicep files, see [Best practices for Bicep](best-practices.md). * To create multiple instances of a resource, see [Iterative loops in Bicep](loops.md). |
azure-resource-manager | Deploy Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md | -It provides a short introduction to GitHub actions and Bicep files. If you want more detailed steps on setting up the GitHub actions and project, see [Learning path: Deploy Azure resources by using Bicep and GitHub Actions](/learn/paths/bicep-github-actions). +It provides a short introduction to GitHub actions and Bicep files. If you want more detailed steps on setting up the GitHub actions and project, see [Deploy Azure resources by using Bicep and GitHub Actions](/training/paths/bicep-github-actions). ## Prerequisites |
azure-resource-manager | Deploy To Management Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md | As your organization matures, you can deploy a Bicep file to create resources at ### Training resources -If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/). +If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/training/modules/deploy-resources-scopes-bicep/). ## Supported resources |
azure-resource-manager | Deploy To Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md | To simplify the management of resources, you can deploy resources at the level o ### Training resources -If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/). +If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/training/modules/deploy-resources-scopes-bicep/). ## Supported resources |
azure-resource-manager | Deploy To Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-tenant.md | As your organization matures, you may need to define and assign [policies](../.. ### Training resources -If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/). +If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/training/modules/deploy-resources-scopes-bicep/). ## Supported resources |
azure-resource-manager | Deploy What If | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-what-if.md | You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API ### Training resources -If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/). +If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/training/modules/arm-template-whatif/). [!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)] You can use the what-if operation through the Azure SDKs. * To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). * If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).-* For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/). +* For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/). |
azure-resource-manager | Deployment Script Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md | The deployment script resource is only available in the regions where Azure Cont ### Training resources -If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts). +If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts). ## Configure the minimum permissions After the script is tested successfully, you can use it as a deployment script i In this article, you learned how to use deployment scripts. To walk through a Learn module: > [!div class="nextstepaction"]-> [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts) +> [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts) |
azure-resource-manager | Key Vault Parameter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md | New-AzResourceGroupDeployment ` - For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). |
azure-resource-manager | Learn Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/learn-bicep.md | Ready to see how Bicep can help simplify and accelerate your deployments to Azur If you're new to Bicep, a great way to get started is by reviewing the following Learn module. You'll learn how Bicep makes it easier to define how your Azure resources should be configured and deployed in a way that's automated and repeatable. YouΓÇÖll deploy several Azure resources so you can see for yourself how Bicep works. We provide free access to Azure resources to help you practice the concepts. -[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module." role="presentation"></img>](/learn/modules/build-first-bicep-template/) +[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module." role="presentation"></img>](/training/modules/build-first-bicep-template/) -[Build your first Bicep template](/learn/modules/build-first-bicep-template/) +[Build your first Bicep template](/training/modules/build-first-bicep-template/) ## Learn more To learn even more about Bicep's features, take these learning paths: :::row::: :::column:::- [<img src="media/learn-bicep/fundamentals-bicep.svg" width="101" height="120" alt="The trophy for the Fundamentals of Bicep learning path." role="presentation"></img>](/learn/paths/fundamentals-bicep/) + [<img src="media/learn-bicep/fundamentals-bicep.svg" width="101" height="120" alt="The trophy for the Fundamentals of Bicep learning path." role="presentation"></img>](/training/paths/fundamentals-bicep/) - [Part 1: Fundamentals of Bicep](/learn/paths/fundamentals-bicep/) + [Part 1: Fundamentals of Bicep](/training/paths/fundamentals-bicep/) :::column-end::: :::column:::- [<img src="media/learn-bicep/intermediate-bicep.svg" width="101" height="120" alt="The trophy for the Intermediate Bicep learning path." role="presentation"></img>](/learn/paths/intermediate-bicep/) + [<img src="media/learn-bicep/intermediate-bicep.svg" width="101" height="120" alt="The trophy for the Intermediate Bicep learning path." role="presentation"></img>](/training/paths/intermediate-bicep/) - [Part 2: Intermediate Bicep](/learn/paths/intermediate-bicep/) + [Part 2: Intermediate Bicep](/training/paths/intermediate-bicep/) :::column-end::: :::column:::- [<img src="media/learn-bicep/advanced-bicep.svg" width="101" height="120" alt="The trophy for the Advanced Bicep learning path." role="presentation"></img>](/learn/paths/advanced-bicep/) + [<img src="media/learn-bicep/advanced-bicep.svg" width="101" height="120" alt="The trophy for the Advanced Bicep learning path." role="presentation"></img>](/training/paths/advanced-bicep/) - [Part 3: Advanced Bicep](/learn/paths/advanced-bicep/) + [Part 3: Advanced Bicep](/training/paths/advanced-bicep/) :::column-end::: :::row-end::: After that, you might be interested in adding your Bicep code to a deployment pi :::row::: :::column:::- [<img src="media/learn-bicep/bicep-azure-pipelines.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and Azure Pipelines learning path." role="presentation"></img>](/learn/paths/bicep-azure-pipelines/) + [<img src="media/learn-bicep/bicep-azure-pipelines.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and Azure Pipelines learning path." 
role="presentation"></img>](/training/paths/bicep-azure-pipelines/) - [Option 1: Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/) + [Option 1: Deploy Azure resources by using Bicep and Azure Pipelines](/training/paths/bicep-azure-pipelines/) :::column-end::: :::column:::- [<img src="media/learn-bicep/bicep-github-actions.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and GitHub Actions learning path." role="presentation"></img>](/learn/paths/bicep-github-actions/) + [<img src="media/learn-bicep/bicep-github-actions.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and GitHub Actions learning path." role="presentation"></img>](/training/paths/bicep-github-actions/) - [Option 2: Deploy Azure resources by using Bicep and GitHub Actions](/learn/paths/bicep-github-actions/) + [Option 2: Deploy Azure resources by using Bicep and GitHub Actions](/training/paths/bicep-github-actions/) :::column-end::: :::row-end::: |
azure-resource-manager | Loops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md | This article shows you how to use the `for` syntax to iterate over items in a co ### Training resources -If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/). +If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/). ## Loop syntax |
azure-resource-manager | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md | The first step in the process is to capture an initial representation of your Az :::image type="content" source="./media/migrate/migrate-bicep.png" alt-text="Diagram of the recommended workflow for migrating Azure resources to Bicep." border="false"::: -In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/learn/modules/migrate-azure-resources-bicep/). +In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/training/modules/migrate-azure-resources-bicep/). ## Phase 1: Convert |
azure-resource-manager | Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md | Bicep modules are converted into a single Azure Resource Manager template with [ ### Training resources -If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/learn/modules/create-composable-bicep-files-using-modules/). +If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/training/modules/create-composable-bicep-files-using-modules/). ## Definition syntax When used as module, you can get that output value. ## Next steps -- For a tutorial, see [Deploy Azure resources by using Bicep templates](/learn/modules/deploy-azure-resources-by-using-bicep-templates/).+- For a tutorial, see [Deploy Azure resources by using Bicep templates](/training/modules/deploy-azure-resources-by-using-bicep-templates/). - To pass a sensitive value to a module, use the [getSecret](bicep-functions-resource.md#getsecret) function. |
azure-resource-manager | Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md | For parameter best practices, see [Parameters](./best-practices.md#parameters). ### Training resources -If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters). +If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/training/modules/build-reusable-bicep-templates-parameters). ## Declaration |
azure-resource-manager | Private Module Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md | To work with module registries, you must have [Bicep CLI](./install.md) version ### Training resources -If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/learn/modules/share-bicep-modules-using-private-registries). +If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/training/modules/share-bicep-modules-using-private-registries). ## Configure private registry |
azure-resource-manager | Quickstart Create Bicep Use Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio.md | Remove-AzResourceGroup -Name exampleRG ## Next steps > [!div class="nextstepaction"]-> [Bicep in Microsoft Learn](learn-bicep.md) +> [Learn modules for Bicep](learn-bicep.md) |
azure-resource-manager | Scope Extension Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scope-extension-resources.md | This article shows how to set the scope for an extension resource type when depl ### Training resources -If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates). +If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/training/modules/child-extension-bicep-templates). ## Apply at deployment scope |
azure-resource-manager | Template Specs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md | When designing your deployment, always consider the lifecycle of the resources a ### Training resources -To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs). +To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/training/modules/arm-template-specs). ## Required permissions After creating a template spec, you can link to that template spec in a Bicep mo ## Next steps -To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs). +To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/training/modules/arm-template-specs). |
azure-resource-manager | Create Custom Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider.md | Title: Create resource provider -description: Describes how to create a resource provider and deploy its custom resource types. -+ Title: Create a custom resource provider +description: Describes how to create a custom resource provider and deploy custom resources. Previously updated : 06/24/2020- Last updated : 09/20/2022++ -# Quickstart: Create a custom provider and deploy custom resources +# Quickstart: Create a custom resource provider and deploy custom resources -In this quickstart, you create your own resource provider and deploy custom resource types for that resource provider. For more information about custom providers, see [Azure Custom Providers Preview overview](overview.md). +In this quickstart, you create a custom resource provider and deploy custom resources for that resource provider. For more information about custom providers, see [Azure Custom Resource Providers Overview](overview.md). ## Prerequisites Azure CLI examples use `az rest` for `REST` requests. For more information, see - The PowerShell commands are run locally using PowerShell 7 or later and the Azure PowerShell modules. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps). - If you don't already have a tool for `REST` operations, install the [ARMClient](https://github.com/projectkudu/ARMClient). It's an open-source command-line tool that simplifies invoking the Azure Resource Manager API.-- After the **ARMClient** is installed you can display usage information from a PowerShell command prompt by typing: `armclient.exe`. Or, go to the [ARMClient wiki](https://github.com/projectkudu/ARMClient/wiki).+- After the **ARMClient** is installed, you can display usage information from a PowerShell command prompt by typing: `armclient.exe`. Or, go to the [ARMClient wiki](https://github.com/projectkudu/ARMClient/wiki). ## Deploy custom provider -To set up the custom provider, deploy an [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/custom-providers/customprovider.json) to your Azure subscription. +To set up the custom resource provider, deploy an [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/custom-providers/customprovider.json) to your Azure subscription. -After deploying the template, your subscription has the following resources: +The template deploys the following resources to your subscription: -- Function App with the operations for the resources and actions.-- Storage Account for storing users that are created through the custom provider.-- Custom Provider that defines the custom resource types and actions. It uses the function app endpoint for sending requests.-- Custom resource from the custom provider.+- Function app with the operations for the resources and actions. +- Storage account for storing users that are created through the custom provider. +- Custom resource provider that defines the custom resource types and actions. It uses the function app endpoint for sending requests. +- Custom resource from the custom resource provider. -To deploy the custom provider, use Azure CLI, PowerShell, or the Azure portal: +To deploy the custom resource provider, use Azure CLI, PowerShell, or the Azure portal. # [Azure CLI](#tab/azure-cli) Read-Host -Prompt "Press [ENTER] to continue ..." 
-You can also deploy the solution from the Azure portal. Select the **Deploy to Azure** button to open the template in the Azure portal. +To deploy the template from the Azure portal, select the **Deploy to Azure** button. [](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-docs-json-samples%2Fmaster%2Fcustom-providers%2Fcustomprovider.json) -## View custom provider and resource +## View custom resource provider and resource -In the portal, the custom provider is a hidden resource type. To confirm that the resource provider was deployed, navigate to the resource group. Select the option to **Show hidden types**. +In the portal, the custom resource provider is a hidden resource type. To confirm that the resource provider was deployed, go to the resource group and select **Show hidden types**. - -To see the custom resource type that you deployed, use the `GET` operation on your resource type. +To see the custom resource that you deployed, use the `GET` operation on your resource type. The resource type `Microsoft.CustomProviders/resourceProviders/users` shown in the JSON response includes the resource that was created by the template. ```http GET https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users?api-version=2018-09-01-preview You receive the response: { "value": [ {- "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/santa", - "name": "santa", + "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/ana", + "name": "ana", "properties": {- "FullName": "Santa Claus", - "Location": "NorthPole", + "FullName": "Ana Bowman", + "Location": "Moon", "provisioningState": "Succeeded" },- "resourceGroup": "<rg-name>", "type": "Microsoft.CustomProviders/resourceProviders/users" } ] You receive the response: { "properties": { "provisioningState": "Succeeded",- "FullName": "Santa Claus", - "Location": "NorthPole" + "FullName": "Ana Bowman", + "Location": "Moon" },- "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/santa", - "name": "santa", + "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/ana", + "name": "ana", "type": "Microsoft.CustomProviders/resourceProviders/users" } ] You receive the response: ## Call action -Your custom provider also has an action named `ping`. The code that processes the request is implemented in the function app. The `ping` action replies with a greeting. +Your custom resource provider also has an action named `ping`. The code that processes the request is implemented in the function app. The `ping` action replies with a greeting. -To send a `ping` request, use the `POST` operation on your custom provider. +To send a `ping` request, use the `POST` operation on your action. ```http POST https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/ping?api-version=2018-09-01-preview You receive the response: -## Create a resource type +## Use PUT to create resource ++In this quickstart, the template used the resource type `Microsoft.CustomProviders/resourceProviders/users` to deploy a resource. 
You can also use a `PUT` operation to create a resource. For example, if a resource isn't deployed with the template, the `PUT` operation will create a resource. -To create the custom resource type, you can deploy the resource in a template. This approach is shown in the template you deployed in this quickstart. You can also send a `PUT` request for the resource type. +In this example, because the template already deployed a resource, the `PUT` operation creates a new resource. ```http PUT https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/<resource-name>?api-version=2018-09-01-preview You receive the response: "Location": "Earth", "provisioningState": "Succeeded" },- "resourceGroup": "<rg-name>", "type": "Microsoft.CustomProviders/resourceProviders/users" } ``` You receive the response: +You can rerun the `GET` operation from the [view custom resource provider and resource](#view-custom-resource-provider-and-resource) section to show the two resources that were created. This example shows output from the Azure CLI command. ++```json +{ + "value": [ + { + "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/ana", + "name": "ana", + "properties": { + "FullName": "Ana Bowman", + "Location": "Moon", + "provisioningState": "Succeeded" + }, + "type": "Microsoft.CustomProviders/resourceProviders/users" + }, + { + "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/testuser", + "name": "testuser", + "properties": { + "FullName": "Test User", + "Location": "Earth", + "provisioningState": "Succeeded" + }, + "type": "Microsoft.CustomProviders/resourceProviders/users" + } + ] +} +``` + ## Custom resource provider commands Use the [custom-providers](/cli/azure/custom-providers/resource-provider) commands to work with your custom resource provider. The `delete` command prompts you and deletes only the custom resource provider. az custom-providers resource-provider delete --resource-group $rgName --name $funcName ``` +## Clean up resources ++If you're finished with the resources created in this article, you can delete the resource group. When you delete a resource group, all the resources in that resource group are deleted. ++# [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az group delete --resource-group $rgName +``` ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell-interactive +Remove-AzResourceGroup -Name $rgName +``` ++++ ## Next steps For an introduction to custom providers, see the following article: |
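The `GET` and `PUT` calls shown in the custom resource provider quickstart above can also be issued from Azure PowerShell rather than raw HTTP. The following is a minimal sketch, not taken from the article: the subscription ID, resource group, provider name, and user name are placeholder values, and it assumes the Az modules are installed and you're signed in with `Connect-AzAccount`.

```azurepowershell
# Minimal sketch: call the custom resource provider endpoints with Invoke-AzRestMethod.
# All names below are placeholders - replace them with your own values.
$subId        = "00000000-0000-0000-0000-000000000000"
$rgName       = "exampleRG"
$providerName = "exampleProvider"

# List the custom "users" resources (GET).
$listPath = "/subscriptions/$subId/resourceGroups/$rgName/providers/Microsoft.CustomProviders/resourceProviders/$providerName/users?api-version=2018-09-01-preview"
(Invoke-AzRestMethod -Path $listPath -Method GET).Content

# Create another custom resource (PUT).
$putPath = "/subscriptions/$subId/resourceGroups/$rgName/providers/Microsoft.CustomProviders/resourceProviders/$providerName/users/testuser?api-version=2018-09-01-preview"
$body = @{ properties = @{ FullName = "Test User"; Location = "Earth" } } | ConvertTo-Json
Invoke-AzRestMethod -Path $putPath -Method PUT -Payload $body
```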
azure-resource-manager | Tutorial Custom Providers Function Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md | Title: Set up Azure Functions -description: This tutorial goes over how to create a function app in Azure Functions and set it up to work with Azure Custom Providers. -+description: This tutorial describes how to create a function app in Azure Functions that works with Azure Custom Providers. Previously updated : 05/06/2022 Last updated : 09/20/2022 + # Set up Azure Functions for custom providers To start this tutorial, you should first follow the tutorial [Create your first To install the Azure Table storage bindings: -1. Go to the **Integrate** tab for the HttpTrigger. +1. Go to the **Integrate** tab for the `HttpTrigger`. 1. Select **+ New Input**. 1. Select **Azure Table Storage**.-1. Install the Microsoft.Azure.WebJobs.Extensions.Storage extension if it isn't already installed. +1. Install the `Microsoft.Azure.WebJobs.Extensions.Storage` extension if it isn't already installed. 1. In the **Table parameter name** box, enter *tableStorage*. 1. In the **Table name** box, enter *myCustomResources*. 1. Select **Save** to save the updated input parameter. - ## Update RESTful HTTP methods To set up the Azure function to include the custom provider RESTful request methods: -1. Go to the **Integrate** tab for the HttpTrigger. +1. Go to the **Integrate** tab for the `HttpTrigger`. 1. Under **Selected HTTP methods**, select **GET**, **POST**, **DELETE**, and **PUT**. - ## Add Azure Resource Manager NuGet packages > [!NOTE]-> If your C# project file is missing from the project directory, you can add it manually, or it will appear after the Microsoft.Azure.WebJobs.Extensions.Storage extension is installed on the function app. +> If your C# project file is missing from the project directory, you can add it manually, or it will appear after the `Microsoft.Azure.WebJobs.Extensions.Storage` extension is installed on the function app. Next, update the C# project file to include helpful NuGet libraries. These libraries make it easier to parse incoming requests from custom providers. Follow the steps to [add extensions from the portal](../../azure-functions/functions-bindings-register.md) and update the C# project file to include the following package references: |
azure-resource-manager | Conditional Resource Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md | If you deploy a template with [complete mode](deployment-modes.md) and a resourc ## Next steps -* For a Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/). +* For a Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). * For recommendations about creating templates, see [ARM template best practices](./best-practices.md). * To create multiple instances of a resource, see [Resource iteration in ARM templates](copy-resources.md). |
azure-resource-manager | Copy Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-resources.md | You can't use a copy loop for a child resource. To create more than one instance For example, suppose you typically define a dataset as a child resource within a data factory. ```json-"resources": [ {- "type": "Microsoft.DataFactory/factories", - "name": "exampleDataFactory", - ... "resources": [ {- "type": "datasets", - "name": "exampleDataSet", - "dependsOn": [ - "exampleDataFactory" - ], + "type": "Microsoft.DataFactory/factories", + "name": "exampleDataFactory", + ... + "resources": [ + { + "type": "datasets", + "name": "exampleDataSet", + "dependsOn": [ + "exampleDataFactory" + ], + ... + } + ] ... } ]+} ``` To create more than one data set, move it outside of the data factory. The dataset must be at the same level as the data factory, but it's still a child resource of the data factory. You preserve the relationship between data set and data factory through the type and name properties. Since type can no longer be inferred from its position in the template, you must provide the fully qualified type in the format: `{resource-provider-namespace}/{parent-resource-type}/{child-resource-type}`. The following examples show common scenarios for creating more than one instance - To set dependencies on resources that are created in a copy loop, see [Define the order for deploying resources in ARM templates](./resource-dependency.md). - To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).-- For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).+- For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). - For other uses of the copy loop, see: - [Property iteration in ARM templates](copy-properties.md) - [Variable iteration in ARM templates](copy-variables.md) |
azure-resource-manager | Deploy Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-github-actions.md | When your resource group and repository are no longer needed, clean up the resou > [Create your first ARM template](./template-tutorial-create-first-template.md) > [!div class="nextstepaction"]-> [Learn module: Automate the deployment of ARM templates by using GitHub Actions](/learn/modules/deploy-templates-command-line-github-actions/) +> [Learn module: Automate the deployment of ARM templates by using GitHub Actions](/training/modules/deploy-templates-command-line-github-actions/) |
azure-resource-manager | Deploy What If | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md | You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API ### Training resources -To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif). +To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/training/modules/arm-template-whatif). [!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)] You can use the what-if operation through the Azure SDKs. - [ARM Deployment Insights](https://marketplace.visualstudio.com/items?itemName=AuthorityPartnersInc.arm-deployment-insights) extension provides an easy way to integrate the what-if operation in your Azure DevOps pipeline. - To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). - If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).-- For a Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).+- For a Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/). |
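To make the what-if operation described in the row above concrete, here's a minimal Azure PowerShell sketch. The resource group name and template file are placeholders, and it assumes the Az modules are installed and the template exists locally; the same preview is available from Azure CLI with `az deployment group what-if`.

```azurepowershell
# Minimal sketch: preview the changes a deployment would make, without deploying.
# "exampleRG" and "main.bicep" are placeholder names.
New-AzResourceGroupDeployment `
  -ResourceGroupName "exampleRG" `
  -TemplateFile "./main.bicep" `
  -WhatIf
```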
azure-resource-manager | Deployment Script Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md | The deployment script resource is only available in the regions where Azure Cont ### Training resources -To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts). +To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts). ## Configure the minimum permissions In this article, you learned how to use deployment scripts. To walk through a de > [Tutorial: Use deployment scripts in Azure Resource Manager templates](./template-tutorial-deployment-script.md) > [!div class="nextstepaction"]-> [Learn module: Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/) +> [Learn module: Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts/) |
azure-resource-manager | Key Vault Parameter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md | The following template dynamically creates the key vault ID and passes it as a p - For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). |
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md | To implement infrastructure as code for your Azure solutions, use Azure Resource To learn about how you can get started with ARM templates, see the following video. -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player] ## Why choose ARM templates? This approach means you can safely share templates that meet your organization's ## Next steps * For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md).-* To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/). +* To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/training/paths/deploy-manage-resource-manager-templates/). * For information about the properties in template files, see [Understand the structure and syntax of ARM templates](./syntax.md). * To learn about exporting templates, see [Quickstart: Create and deploy ARM templates by using the Azure portal](quickstart-create-templates-use-the-portal.md). * For answers to common questions, see [Frequently asked questions about ARM templates](./frequently-asked-questions.yml). |
azure-resource-manager | Resource Dependency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md | In the following example, a CDN endpoint explicitly depends on the CDN profile, "originHostHeader": "[reference(variables('webAppName')).hostNames[0]]", ... }+ ... +} ``` To learn more, see [reference function](template-functions-resource.md#reference). For information about assessing the deployment order and resolving dependency er ## Next steps * To go through a tutorial, see [Tutorial: Create ARM templates with dependent resources](template-tutorial-create-templates-with-dependent-resources.md).-* For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/). +* For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). * For recommendations when setting dependencies, see [ARM template best practices](./best-practices.md). * To learn about troubleshooting dependencies during deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md). * To learn about creating Azure Resource Manager templates, see [Understand the structure and syntax of ARM templates](./syntax.md). |
azure-resource-manager | Syntax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md | Last updated 07/18/2022 This article describes the structure of an Azure Resource Manager template (ARM template). It presents the different sections of a template and the properties that are available in those sections. -This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/). +This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/training/paths/deploy-manage-resource-manager-templates/). > [!TIP] > Bicep is a new language that offers the same capabilities as ARM templates but with a syntax that's easier to use. If you're considering infrastructure as code options, we recommend looking at Bicep. |
azure-resource-manager | Template Specs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md | When designing your deployment, always consider the lifecycle of the resources a ### Training resources -To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs). +To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/training/modules/arm-template-specs). > [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Azure Resource Manager template specs in Bicep](../bicep/template-specs.md). |
azure-resource-manager | Template Test Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-test-cases.md | The following example **passes** because `expressionEvaluationOptions` uses `inn ## Next steps - To learn about running the test toolkit, see [Use ARM template test toolkit](test-toolkit.md).-- For a Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).+- For a Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/). - To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md). |
azure-resource-manager | Template Tutorial Create First Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md | This tutorial introduces you to Azure Resource Manager templates (ARM templates) This tutorial is the first of a series. As you progress through the series, you modify the starting template, step by step, until you explore all of the core parts of an ARM template. These elements are the building blocks for more complex templates. We hope by the end of the series you're confident in creating your own templates and ready to automate your deployments with templates. -If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of [Learn modules](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates). +If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of [Learn modules](/training), see [Deploy and manage resources in Azure by using JSON ARM templates](/training/paths/deploy-manage-resource-manager-templates). If you don't have a Microsoft Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. |
azure-resource-manager | Template Tutorial Create Multiple Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-multiple-instances.md | This tutorial covers the following tasks: If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. -For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/). +For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). ## Prerequisites |
azure-resource-manager | Template Tutorial Create Templates With Dependent Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-templates-with-dependent-resources.md | This tutorial covers the following tasks: If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. -For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/). +For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). ## Prerequisites |
azure-resource-manager | Template Tutorial Deployment Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md | This tutorial covers the following tasks: > * Debug the failed script > * Clean up resources -For a Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/). +For a Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts/). ## Prerequisites |
azure-resource-manager | Template Tutorial Use Conditions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-conditions.md | This tutorial only covers a basic scenario of using conditions. For more informa * [Template function: If](./template-functions-logical.md#if). * [Comparison functions for ARM templates](./template-functions-comparison.md) -For a Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/). +For a Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. |
azure-resource-manager | Template Tutorial Use Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-key-vault.md | This tutorial covers the following tasks: If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. -For a Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/). +For a Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/). ## Prerequisites |
azure-resource-manager | Test Toolkit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md | The toolkit contains four sets of tests: ### Training resources -To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test). +To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/training/modules/arm-template-test). ## Install on Windows The next example shows how to run the tests. - To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md).-- For a Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).+- For a Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/training/modules/arm-template-test/). |
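As a companion to the test toolkit row above, running the toolkit locally looks roughly like the following sketch. It assumes the arm-ttk toolkit has already been downloaded and extracted to an `arm-ttk` folder in the current directory, and the template folder name is a placeholder.

```azurepowershell
# Minimal sketch: import the ARM template test toolkit (arm-ttk) and run its tests
# against a folder containing azuredeploy.json. Paths below are placeholders.
Import-Module .\arm-ttk\arm-ttk.psd1
Test-AzTemplate -TemplatePath .\my-template-folder
```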
azure-signalr | Signalr Concept Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md | -> Learn to use SignalR and Azure Functions together in the interactive tutorial [Enable automatic updates in a web application using Azure Functions and SignalR Service](/learn/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr). +> Learn to use SignalR and Azure Functions together in the interactive tutorial [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr). ## Integrate real-time communications with Azure services In this article, you got an overview of how to use Azure Functions with SignalR For full details on how to use Azure Functions and SignalR Service together visit the following resources: * [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md)-* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/learn/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr) +* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr) Follow one of these quickstarts to learn more. * [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md)-* [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md) +* [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md) |
azure-signalr | Signalr Howto Troubleshoot Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-guide.md | public class ThreadPoolStarvationDetector : EventListener protected override void OnEventWritten(EventWrittenEventArgs eventData) {- // See: https://docs.microsoft.com/en-us/dotnet/framework/performance/thread-pool-etw-events#threadpoolworkerthreadadjustmentadjustment + // See: https://learn.microsoft.com/dotnet/framework/performance/thread-pool-etw-events#threadpoolworkerthreadadjustmentadjustment if (eventData.EventId == EventIdForThreadPoolWorkerThreadAdjustmentAdjustment && eventData.Payload[3] as uint? == ReasonForStarvation) { |
azure-sql-edge | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/overview.md | Azure SQL Edge is an optimized relational database engine geared for IoT and IoT Azure SQL Edge is built on the latest versions of the [SQL Server Database Engine](/sql/sql-server/sql-server-technical-documentation), which provides industry-leading performance, security and query processing capabilities. Since Azure SQL Edge is built on the same engine as [SQL Server](/sql/sql-server/sql-server-technical-documentation) and [Azure SQL](/azure/azure-sql/index), it provides the same Transact-SQL (T-SQL) programming surface area that makes development of applications or solutions easier and faster, and makes application portability between IoT Edge devices, data centers and the cloud straightforward. What is Azure SQL Edge video on Channel 9:-> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/What-is-Azure-SQL-Edge/player] +> [!VIDEO https://learn.microsoft.com/shows/Data-Exposed/What-is-Azure-SQL-Edge/player] ## Deployment Models |
azure-sql-edge | Tutorial Renewable Energy Demo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-renewable-energy-demo.md | This Azure SQL Edge demo is based on Contoso Renewable Energy, a wind turbine This demo walks you through resolving an alert raised because wind turbulence is detected at the device. You will train a model and deploy it to SQL DB Edge that will correct the detected wind wake and ultimately optimize power output. Azure SQL Edge - Renewable Energy demo video on Channel 9:-> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player] +> [!VIDEO https://learn.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player] ## Setting up the demo on your local computer Git will be used to copy all files from the demo to your local computer. 2. Open a command prompt and navigate to a folder where the repo should be downloaded. 3. Issue the command `git clone https://github.com/microsoft/sql-server-samples.git`. 4. Navigate to **'sql-server-samples\samples\demos\azure-sql-edge-demos\Wind Turbine Demo'** in the location where the repository is cloned.-5. Follow the instructions in README.md to set up the demo environment and execute the demo. +5. Follow the instructions in README.md to set up the demo environment and execute the demo. |
azure-video-analyzer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/release-notes.md | ->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Azure+Video+Analyzer+on+IoT+Edge+release+notes%22&locale=en-us` into your RSS feed reader. +>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Azure+Video+Analyzer+on+IoT+Edge+release+notes%22&locale=en-us` into your RSS feed reader. This article provides you with information about: |
azure-video-indexer | Monitor Video Indexer Data Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md | Azure Video Indexer currently does not support any monitoring on metrics. <!--**OPTION 1 EXAMPLE** -<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/platform/metrics-supported, which is auto generated from underlying systems. Not all metrics are published depending on whether your product group wants them to be. If the metric is published, but descriptions are wrong of missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing metrics that you and the PM know are available, both of you contact azmondocs@microsoft.com. +<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/platform/metrics-supported, which is auto generated from underlying systems. Not all metrics are published depending on whether your product group wants them to be. If the metric is published, but descriptions are wrong of missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing metrics that you and the PM know are available, both of you contact azmondocs@microsoft.com. --> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. --> Azure Video Indexer does not have any metrics that contain dimensions. Azure Video Indexer has the following dimensions associated with its metrics. -<!-- See https://docs.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. --> +<!-- See https://learn.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. --> <!--**--EXAMPLE format when you have dimensions** For reference, see a list of [all resource logs category types supported in Azur <!--**OPTION 1 EXAMPLE** -<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/platform/resource-logs-categories, which is auto generated from the REST API. Not all resource log types metrics are published depending on whether your product group wants them to be. If the resource log is published, but category display names are wrong or missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing resource logs that you and the PM know are available, both of you contact azmondocs@microsoft.com. +<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/platform/resource-logs-categories, which is auto generated from the REST API. Not all resource log types metrics are published depending on whether your product group wants them to be. If the resource log is published, but category display names are wrong or missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing resource logs that you and the PM know are available, both of you contact azmondocs@microsoft.com. --> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. 
--> This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az <!--**OPTION 1 EXAMPLE** -<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com. +<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com. --> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. --> The following schemas are in use by Azure Video Indexer <!-- replace below with the proper link to your main monitoring service article --> - See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources. |
azure-video-indexer | Monitor Video Indexer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md | See [Create diagnostic setting to collect platform logs and metrics in Azure](/a :::image type="content" source="./media/monitor/toc-diagnostics-save.png" alt-text="Screenshot of diagnostic settings." lightbox="./media/monitor/toc-diagnostics-save.png"::: :::image type="content" source="./media/monitor/diagnostics-settings-destination.png" alt-text="Screenshot of where to send lots." lightbox="./media/monitor/diagnostics-settings-destination.png":::-<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://docs.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. --> +<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://learn.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. --> The metrics and logs you can collect are discussed in the following sections. |
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | ->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us` into your RSS feed reader. +>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us` into your RSS feed reader. To stay up-to-date with the most recent Azure Video Indexer developments, this article provides you with information about: With the ARM-based [paid (unlimited)](accounts-overview.md) account you are able - [Azure role-based access control (RBAC)](../role-based-access-control/overview.md). - Managed Identity to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs). - Scale and automate your [deployment with ARM-template](deploy-with-arm-template.md), [bicep](deploy-with-bicep.md) or terraform. +- [Create logic apps connector for ARM-based accounts](logic-apps-connector-arm-accounts.md). To create an ARM-based account, see [create an account](create-account-portal.md). Now supporting source languages for STT (speech-to-text), translation, and searc For more information, see [supported languages](language-support.md). +### Expanded the supported languages in LID and MLID through the API ++We expand the list of the languages to be supported in LID (language identification) and MLID (multi language Identification) using APIs. ++For more information, see [supported languages](language-support.md). + ### Configure confidence level in a person model with an API Use the [Patch person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Patch-Person-Model) API to configure the confidence level for face recognition within a person model. |
backup | Automation Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/automation-backup.md | Once you assign an Azure Policy to a scope, all VMs that meet your criteria are The following video illustrates how Azure Policy works for backup: <br><br> -> [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player] +> [!VIDEO https://learn.microsoft.com/shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player] ### Export backup-operational data For more information on how to set up this runbook, see [Automatic retry of fail The following video provides an end-to-end walk-through of the scenario: <br><br> - > [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player] + > [!VIDEO https://learn.microsoft.com/shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player] ## Additional resources |
backup | Microsoft Azure Recovery Services Powershell All | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md | $WC = New-Object System.Net.WebClient $WC.DownloadFile($MarsAURL,'C:\downloads\MARSAgentInstaller.EXE') C:\Downloads\MARSAgentInstaller.EXE /q -MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://docs.microsoft.com/azure/backup/backup-client-automation#installation-options +MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://learn.microsoft.com/azure/backup/backup-client-automation#installation-options # Registering Windows Server or Windows client machine to a Recovery Services Vault $CredsPath = "C:\downloads" Set-OBMachineSetting -NoThrottle # Encryption settings $PassPhrase = ConvertTo-SecureString -String "Complex!123_STRING" -AsPlainText -Force Set-OBMachineSetting -EncryptionPassPhrase $PassPhrase -SecurityPin "<generatedPIN>" #NOTE: You must generate a security pin by selecting Generate, under Settings > Properties > Security PIN in the Recovery Services vault section of the Azure portal. -# See: https://docs.microsoft.com/rest/api/backup/securitypins/get -# See: https://docs.microsoft.com/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0 +# See: https://learn.microsoft.com/rest/api/backup/securitypins/get +# See: https://learn.microsoft.com/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0 # Back up files and folders $NewPolicy = New-OBPolicy Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force ## Next steps -[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent. +[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent. |
backup | Transport Layer Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/transport-layer-security.md | Title: Transport Layer Security in Azure Backup description: Learn how to enable Azure Backup to use the encryption protocol Transport Layer Security (TLS) to keep data secure when being transferred over a network. Previously updated : 11/01/2020 Last updated : 09/20/2022 # Transport Layer Security in Azure Backup The following registry keys configure .NET Framework to support strong cryptogra "SchUseStrongCrypto" = dword:00000001 ``` +## Azure TLS certificate changes ++Azure TLS/SSL endpoints now contain updated certificates chaining up to new root CAs. Ensure that the following changes include the updated root CAs. [Learn more](../security/fundamentals/tls-certificate-changes.md#what-changed) about the possible impacts on your applications. ++Earlier, most of the TLS certificates used by Azure services chained up to the following Root CA: ++Common name of CA | Thumbprint (SHA1) +--- | --- +[Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | d4de20d05e66fc53fe1a50882c78db2852cae474 ++Now, TLS certificates used by Azure services chain up to one of the following Root CAs: ++Common name of CA | Thumbprint (SHA1) +--- | --- +[DigiCert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | df3c24f9bfd666761b268073fe06d1cc8d4f82a4 +[DigiCert Global Root CA](https://cacerts.digicert.com/DigiCertGlobalRootCA.crt) | a8985d3a65e5e5c4b2d7d66d40c6dd2fb19c5436 +[Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt)| d4de20d05e66fc53fe1a50882c78db2852cae474 +[D-TRUST Root Class 3 CA 2 2009](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt) | 58e8abb0361533fb80f79b1b6d29d3ff8d5f00f0 +[Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 73a5e64a3bff8316ff0edccc618a906e4eae4d74 +[Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 999a64c37ff47d9fab95f14769891460eec4c3c5 + ## Frequently asked questions ### Why enable TLS 1.2? The highest protocol version supported by both the client and server is negotiat For improved security from protocol downgrade attacks, Azure Backup is beginning to disable TLS versions older than 1.2 in a phased manner. This is part of a long-term shift across services to disallow legacy protocol and cipher suite connections. Azure Backup services and components fully support TLS 1.2. However, Windows versions lacking required updates or certain customized configurations can still prevent TLS 1.2 protocols being offered. This can cause failures including but not limited to one or more of the following: - Backup and restore operations may fail.-- Backup components connections failures with error 10054 (An existing connection was forcibly closed by the remote host).+- Backup component connection failures with error 10054 (An existing connection was forcibly closed by the remote host). - Services related to Azure Backup won't stop or start as usual. ## Additional resources |
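For completeness, TLS 1.2 can also be opted into at the application level in a legacy .NET Framework client. The sketch below is illustrative only and isn't taken from the article; the registry keys described above remain the recommended machine-wide approach.

```csharp
using System;
using System.Net;

class EnableTls12
{
    static void Main()
    {
        // Add TLS 1.2 to the protocols this process is willing to offer.
        // The registry-based configuration is preferred because it applies
        // machine-wide; this is only an app-level fallback for older clients.
        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;

        Console.WriteLine($"Enabled protocols: {ServicePointManager.SecurityProtocol}");
    }
}
```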
bastion | Bastion Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md | For frequently asked questions, see the Bastion [FAQ](bastion-faq.md). * [Quickstart: Deploy Bastion using default settings](quickstart-host-portal.md). * [Tutorial: Deploy Bastion using specified settings](tutorial-create-host-portal.md).-* [Learn module: Introduction to Azure Bastion](/learn/modules/intro-to-azure-bastion/). +* [Learn module: Introduction to Azure Bastion](/training/modules/intro-to-azure-bastion/). * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure. |
batch | Batch Aad Auth Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth-management.md | Your client application uses the application ID (also referred to as the client // Specify the unique identifier (the "Client ID") for your application. This is required so that your // native client application (i.e. this sample) can access the Microsoft Graph API. For information // about registering an application in Azure Active Directory, please see "Register an application with the Microsoft identity platform" here:-// https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app +// https://learn.microsoft.com/azure/active-directory/develop/quickstart-register-app private const string ClientId = "<application-id>"; ``` Also copy the redirect URI that you specified during the registration process. The redirect URI specified in your code must match the redirect URI that you provided when you registered the application. |
batch | Batch Pools Without Public Ip Addresses Classic Retirement Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md | In late 2021, we launched a simplified compute node communication model for Azur [Simplified Compute Node Communication Pools without Public IPs](./simplified-node-communication-pool-no-public-ip.md) requires using simplified compute node communication. It provides customers with enhanced security for their workload environments on network isolation and data exfiltration to Azure Batch accounts. Its key benefits include: * Allow creating simplified node communication pool without public IP addresses.-* Support Batch private pool using a new private endpoint (sub-resource nodeManagement) for Azure Batch account. +* Support Batch private pool using a new private endpoint (sub-resource: **nodeManagement**) for Azure Batch account. * Simplified private link DNS zone for Batch account private endpoints: changed from **privatelink.\<region>.batch.azure.com** to **privatelink.batch.azure.com**. * Mutable public network access for Batch accounts. * Firewall support for Batch account public endpoints: configure IP address network rules to restrict public network access with Batch accounts. ## Migration steps -Batch pool without public IP addresses (classic) will retire on **31/2023 and will be updated to simplified compute node communication pools without public IPs. For existing pools that use the previous preview version of Batch pool without public IP addresses (classic), it's only possible to migrate pools created in a virtual network. To migrate the pool, follow the opt-in process for simplified compute node communication: +Batch pool without public IP addresses (classic) will retire on **31 March 2023** and will be updated to simplified compute node communication pools without public IPs. For existing pools that use the previous preview version of Batch pool without public IP addresses (classic), it's only possible to migrate pools created in a virtual network. To migrate the pool, follow the opt-in process for simplified compute node communication: 1. Opt in to [use simplified compute node communication](./simplified-compute-node-communication.md#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication). Batch pool without public IP addresses (classic) will retire on **31/2023 and wi * How can I connect to my pool nodes for troubleshooting? - Similar to Batch pools without public IP addresses (classic). As there is no public IP address for the Batch pool, users will need to connect their pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use other remote connectivity solutions like [Azure Bastion](../bastion/bastion-overview.md). + Similar to Batch pools without public IP addresses (classic). As there's no public IP address for the Batch pool, users will need to connect their pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use other remote connectivity solutions like [Azure Bastion](../bastion/bastion-overview.md). * Will there be any change to how my workloads are downloaded from Azure Storage? Batch pool without public IP addresses (classic) will retire on **31/2023 and wi * What if I don't migrate to simplified compute node communication pools without public IPs? 
- After **31 March 2023**, we will stop supporting Batch pool without public IP addresses. The functionality of the existing pool in that configuration may break, such as scale out operations, or may be actively scaled down to zero at any point in time after that date. + After **31 March 2023**, we'll stop supporting Batch pool without public IP addresses. The functionality of the existing pool in that configuration may break, such as scale-out operations, or may be actively scaled down to zero at any point in time after that date. ## Next steps -For more information, refer to [Simplified compute node communication](./simplified-compute-node-communication.md). +For more information, see [Simplified compute node communication](./simplified-compute-node-communication.md). |
batch | Job Pool Lifetime Statistics Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md | Last updated 08/15/2022 The Azure Batch service currently supports API for Job/Pool to retrieve lifetime statistics. The API is used to get lifetime statistics for all the Pools/Jobs in the specified batch account or for a specified Pool/Job. The API collects the statistical data from when the Batch account was created until the last time updated or entire lifetime of the specified Job/Pool. Job/Pool lifetime statistics API is helpful for customers to analyze and evaluate their usage. -To make the statistical data available for customers, the Batch service allocates batch pools and schedule jobs with an in-house MapReduce implementation to perform background periodic roll-up of statistics. The aggregation is performed for all accounts/pools/jobs in each region, no matter if customer needs or queries the stats for their account/pool/job. The operating cost includes eleven VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load. +To make the statistical data available for customers, the Batch service allocates batch pools and schedule jobs with an in-house MapReduce implementation to perform background periodic roll-up of statistics. The aggregation is performed for all accounts/pools/jobs in each region, no matter if customer needs or queries the stats for their account/pool/job. The operating cost includes 11 VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load. The MapReduce aggregation logic was implemented with legacy code, and no new features are being added or improvised due to technical challenges with legacy code. Still, the legacy code and its hosting repo need to be updated frequently to accommodate ever growing load in production and to meet security/compliance requirements. In addition, since the API is featured to provide lifetime statistics, the data is growing and demands more storage and performance issues, even though most customers aren't using the API. Batch service currently eats up all the compute and storage usage charges associated with MapReduce pools and jobs. -The purpose of the API is designed and maintained to serve the customer in troubleshooting. However, not many customers use it in real life, and the customers are interested in extracting the details for not more than a month. Now more advanced ways of log/job/pool data can be collected and used on a need basis using Azure portal logs, Alerts, Log export, and other methods. Therefore, we are retire Job/Pool Lifetime. +The purpose of the API is designed and maintained to serve the customer in troubleshooting. However, not many customers use it in real life, and the customers are interested in extracting the details for not more than a month. Now more advanced ways of log/job/pool data can be collected and used on a need basis using Azure portal logs, Alerts, Log export, and other methods. Therefore, we're retiring the Job/Pool Lifetime. Job/Pool Lifetime Statistics API will be retired on **30 April 2023**. Once complete, the API will no longer work and will return an appropriate HTTP response error code back to the client. Job/Pool Lifetime Statistics API will be retired on **30 April 2023**. 
Once comp * Is there an alternate way to view logs of Pool/Jobs? - Azure portal has various options to enable the logs, namely system logs, diagnostic logs. Refer [Monitor Batch Solutions](./monitoring-overview.md) for more information. + Azure portal has various options to enable the logs, namely system logs, diagnostic logs. See [Monitor Batch Solutions](./monitoring-overview.md) for more information. * Can customers extract logs to their system if the API doesn't exist? - Azure portal log feature allows every customer to extract the output and error logs to their workspace. Refer [Monitor with Application Insights](./monitor-application-insights.md) for more information. + Azure portal log feature allows every customer to extract the output and error logs to their workspace. See [Monitor with Application Insights](./monitor-application-insights.md) for more information. ## Next steps -For more information, refer to [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md). +For more information, see [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md). |
batch | Low Priority Vms Retirement Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/low-priority-vms-retirement-migration-guide.md | Azure Batch offers Low priority and Spot virtual machines (VMs). The virtual mac Low priority VMs enable the customer to take advantage of unutilized capacity. The amount of available unutilized capacity can vary based on size, region, time of day, and more. At any point in time when Azure needs the capacity back, we'll evict low-priority VMs. Therefore, the low-priority offering is excellent for flexible workloads, like large processing jobs, dev/test environments, demos, and proofs of concept. In addition, low-priority VMs can easily be deployed through our virtual machine scale set offering. -Low priority VMs are a deprecated feature, and it will never become Generally Available (GA). Spot VMs are the official preemptible offering from the Compute platform, and is generally available. Therefore, we'll retire Low Priority VMs on **30 September 2025**. After that, we'll stop supporting Low priority VMs. The existing Low priority pools may no longer work or be provisioned. +Low priority VMs are a deprecated feature, and it will never become Generally Available (GA). Spot VMs are the official preemptible offering from the Compute platform, and are generally available. Therefore, we'll retire Low Priority VMs on **30 September 2025**. After that, we'll stop supporting Low priority VMs. The existing Low priority pools may no longer work or be provisioned. ## Retirement alternative The other key difference is that Azure Spot pricing is variable and based on the When it comes to eviction, you have two policy options to choose between: -* Stop/Deallocate (default) – when evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This is ideal for cases where the state is stored on disks. +* Stop/Deallocate (default) – when evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This is ideal for cases where the state is stored on disks. * Delete – when evicted, the VM and underlying disks are deleted. While similar in idea, there are a few key differences between these two purchasing options: While similar in idea, there are a few key differences between these two purchas ## Migration steps -Customers in User Subscription mode have the option to include Spot VMs using the following the steps below: +Customers in User Subscription mode can include Spot VMs by following the steps below: 1. In the Azure portal, select the Batch account and view the existing pool or create a new pool. 2. Under **Scale**, users can choose 'Target dedicated nodes' or 'Target Spot/low-priority nodes.' -  +  3. Navigate to the existing Pool and select 'Scale' to update the number of Spot nodes required based on the job scheduled. 4. Click **Save**. Customers in Batch Managed mode must recreate the Batch account, pool, and jobs * How to create a new Batch account /job/pool? - Refer to the quick start [link](./batch-account-create-portal.md) on creating a new Batch account/pool/task. + See the quick start [link](./batch-account-create-portal.md) on creating a new Batch account/pool/task. * Are Spot VMs available in Batch Managed mode? Customers in Batch Managed mode must recreate the Batch account, pool, and jobs * What is the pricing and eviction policy of Spot VMs? Can I view pricing history and eviction rates? 
- Refer to [Spot VMs](../virtual-machines/spot-vms.md) for more information on using Spot VMs. Yes, you can see historical pricing and eviction rates per size in a region in the portal. + See [Spot VMs](../virtual-machines/spot-vms.md) for more information on using Spot VMs. Yes, you can see historical pricing and eviction rates per size in a region in the portal. ## Next steps |
cdn | Cdn Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-overview.md | For a complete list of features that each Azure CDN product supports, see [Compa - To get started with CDN, see [Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md). - Manage your CDN endpoints through the [Microsoft Azure portal](https://portal.azure.com) or with [PowerShell](cdn-manage-powershell.md). - Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md).-- [Learn module: Introduction to Azure Content Delivery Network (CDN)](/learn/modules/intro-to-azure-content-delivery-network).+- [Learn module: Introduction to Azure Content Delivery Network (CDN)](/training/modules/intro-to-azure-content-delivery-network). |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | +## September 2022 Guest OS ++>[!NOTE] ++>The September Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the September Guest OS. This list is subject to change. ++| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | +| | | | | | +| Rel 22-09 | [5017315] | Latest Cumulative Update(LCU) | 6.48 | Sep 13, 2022 | +| Rel 22-09 | [5016618] | IE Cumulative Updates | 2.128, 3.115, 4.108 | Aug 9, 2022 | +| Rel 22-09 | [5017316] | Latest Cumulative Update(LCU) | 7.16 | Sep 13, 2022 | +| Rel 22-09 | [5017305] | Latest Cumulative Update(LCU) | 5.72 | Sep 13, 2022 | +| Rel 22-09 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.48 | May 10, 2022 | +| Rel 22-09 | [5017397] | Servicing Stack Update | 2.128 | Sep 13, 2022 | +| Rel 22-09 | [5017361] | September '22 Rollup | 2.128 | Sep 13, 2022 | +| Rel 22-09 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.128 | Sep 13, 2022 | +| Rel 22-09 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.128 | May 10, 2022 | +| Rel 22-09 | [5016263] | Servicing Stack Update | 3.115 | July 12, 2022 | +| Rel 22-09 | [5017370] | September '22 Rollup | 3.115 | Sep 13, 2022 | +| Rel 22-09 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.115 | Sep 13, 2022 | +| Rel 22-09 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.115 | May 10, 2022 | +| Rel 22-09 | [5017398] | Servicing Stack Update | 4.108 | Sep 13, 2022 | +| Rel 22-09 | [5017367] | Monthly Rollup | 4.108 | Sep 13, 2022 | +| Rel 22-09 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.108 | Jun 14, 2022 | +| Rel 22-09 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.108 | May 10, 2022 | +| Rel 22-09 | [4578013] | OOB Standalone Security Update | 4.108 | Aug 19, 2020 | +| Rel 22-09 | [5017396] | Servicing Stack Update | 5.72 | Sep 13, 2022 | +| Rel 22-09 | [4494175] | Microcode | 5.72 | Sep 1, 2020 | +| Rel 22-09 | 5015896 | Servicing Stack Update | 6.48 | Sep 1, 2020 | +| Rel 22-09 | [5013626] | .NET Framework 4.8 Security and Quality Rollup LKG | 6.48 | May 10, 2022 | ++[5017315]: https://support.microsoft.com/kb/5017315 +[5016618]: https://support.microsoft.com/kb/5016618 +[5017316]: https://support.microsoft.com/kb/5017316 +[5017305]: https://support.microsoft.com/kb/5017305 +[5013641]: https://support.microsoft.com/kb/5013641 +[5017397]: https://support.microsoft.com/kb/5017397 +[5017361]: https://support.microsoft.com/kb/5017361 +[5013637]: https://support.microsoft.com/kb/5013637 +[5013644]: https://support.microsoft.com/kb/5013644 +[5016263]: https://support.microsoft.com/kb/5016263 +[5017370]: https://support.microsoft.com/kb/5017370 +[5013635]: https://support.microsoft.com/kb/5013635 +[5013642]: https://support.microsoft.com/kb/5013642 +[5017398]: https://support.microsoft.com/kb/5017398 +[5017367]: https://support.microsoft.com/kb/5017367 +[5013638]: https://support.microsoft.com/kb/5013638 +[5013643]: https://support.microsoft.com/kb/5013643 +[4578013]: https://support.microsoft.com/kb/4578013 +[5017396]: 
https://support.microsoft.com/kb/5017396 +[4494175]: https://support.microsoft.com/kb/4494175 +[5015896]: https://support.microsoft.com/kb/5015896 +[5013626]: https://support.microsoft.com/kb/5013626 + ## August 2022 Guest OS |
cloud-shell | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md | You can access the Cloud Shell in three ways:  -- **Code snippets**: In Microsoft [technical documentation](/) and [training resources](/learn), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:+- **Code snippets**: In Microsoft [technical documentation](/) and [training resources](/training), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets: ```azurecli-interactive az account show |
cloud-shell | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md | Known resolutions for troubleshooting issues in Azure Cloud Shell include: ### Disabling Cloud Shell in a locked down network environment -- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `docs.microsoft.com`. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding `shell.azure.us`.+- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `learn.microsoft.com`. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding `shell.azure.us`. - **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but will not successfully connect to the service. ### Storage Dialog - Error: 403 RequestDisallowedByPolicy |
cognitive-services | Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/go.md | Responses from the Bing Web Search API are returned as JSON. This sample respons ```go Microsoft Cognitive Services || https://www.microsoft.com/cognitive-services Cognitive Services | Microsoft Azure || https://azure.microsoft.com/services/cognitive-services/-What is Microsoft Cognitive Services? | Microsoft Docs || https://docs.microsoft.com/azure/cognitive-services/Welcome +What is Microsoft Cognitive Services? | Microsoft Docs || https://learn.microsoft.com/azure/cognitive-services/Welcome Microsoft Cognitive Toolkit || https://www.microsoft.com/en-us/cognitive-toolkit/ Microsoft Customers || https://customers.microsoft.com/en-us/search?sq=%22Microsoft%20Cognitive%20Services%22&ff=&p=0&so=story_publish_date%20desc Microsoft Enterprise Services - Microsoft Enterprise || https://enterprise.microsoft.com/en-us/services/ |
cognitive-services | Overview Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md | This documentation contains the following types of articles: * The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions. For a more structured approach, follow a Learn module for Face.-* [Detect and analyze faces with the Face service](/learn/modules/detect-analyze-faces/) +* [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/) ## Example use cases |
cognitive-services | Overview Image Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md | This documentation contains the following types of articles: * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. For a more structured approach, follow a Learn module for Image Analysis.-* [Analyze images with the Computer Vision service](/learn/modules/analyze-images-computer-vision/) +* [Analyze images with the Computer Vision service](/training/modules/analyze-images-computer-vision/) ## Image Analysis features |
cognitive-services | Overview Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md | This documentation contains the following types of articles: * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. --> For a more structured approach, follow a Learn module for OCR.-* [Read Text in Images and Documents with the Computer Vision Service](/learn/modules/read-text-images-documents-with-computer-vision-service/) +* [Read Text in Images and Documents with the Computer Vision Service](/training/modules/read-text-images-documents-with-computer-vision-service/) ## Read API |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md | This documentation contains the following article types: * [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions. For a more structured approach, follow a Learn module for Content Moderator.-* [Introduction to Content Moderator](/learn/modules/intro-to-content-moderator/) -* [Classify and moderate text with Azure Content Moderator](/learn/modules/classify-and-moderate-text-with-azure-content-moderator/) +* [Introduction to Content Moderator](/training/modules/intro-to-content-moderator/) +* [Classify and moderate text with Azure Content Moderator](/training/modules/classify-and-moderate-text-with-azure-content-moderator/) ## Where it's used |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md | This documentation contains the following types of articles: <!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.--> For a more structured approach, follow a Learn module for Custom Vision:-* [Classify images with the Custom Vision service](/learn/modules/classify-images-custom-vision/) -* [Classify endangered bird species with Custom Vision](/learn/modules/cv-classify-bird-species/) +* [Classify images with the Custom Vision service](/training/modules/classify-images-custom-vision/) +* [Classify endangered bird species with Custom Vision](/training/modules/cv-classify-bird-species/) ## How it works |
cognitive-services | Reference Markdown Format | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-markdown-format.md | A new line between 2 sentences.|`\n\n`|`How can I create a bot with \n\n QnA Mak |Italics |`*text*`|`How do I create a bot with *QnA Maker*?`|| |Strong (bold)|`**text**`|`How do I create a bot with **QnA Maker**?`|| |URL for link|`[text](https://www.my.com)`|`How do I create a bot with [QnA Maker](https://www.qnamaker.ai)?`||-|*URL for public image|``|`How can I create a bot with `|| +|*URL for public image|``|`How can I create a bot with `|| |Strikethrough|`~~text~~`|`some ~~questoins~~ questions need to be asked`|| |Bold and italics|`***text***`|`How can I create a ***QnA Maker*** bot?`|| |Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**QnA Maker**](https://www.qnamaker.ai)?`|| |
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/whats-new.md | Learn what's new with QnA Maker. * New version of QnA Maker launched in free Public Preview. Read more [here](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575). -> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player] +> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player] * Simplified resource creation * End to End region support * Deep learnt ranking model |
cognitive-services | Audio Processing Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/audio-processing-speech-sdk.md | |
cognitive-services | Get Started Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md | |
cognitive-services | Get Started Speech Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-translation.md | |
cognitive-services | Get Started Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md | |
cognitive-services | How To Custom Commands Deploy Cicd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-deploy-cicd.md | The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands] | - | | -- | | SourceAppId | ID of the DEV application | | TargetAppId | ID of the PROD application |- | SubscriptionKey | Subscription key used for both applications | + | SubscriptionKey | The key used for both applications | | Culture | Culture of the applications (i.e. en-us) | > [!div class="mx-imgBorder"] The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands] ``` | Arguments | Description | | - | | -- |- | region | region of the application, i.e. westus2. | - | subscriptionkey | subscription key of your speech resource. | + | region | Your Speech resource region. For example: `westus2` | + | subscriptionkey | Your Speech resource key. | | appid | the Custom Commands' application ID you want to export. | 1. Push these changes to your repository. The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands] | Variable | Description | | - | | -- | | TargetAppId | ID of the PROD application |- | SubscriptionKey | Subscription key used for both applications | + | SubscriptionKey | The key used for both applications | | Culture | Culture of the applications (i.e. en-us) | 1. Click "Run" and then click in the "Job" running. |
cognitive-services | How To Custom Commands Setup Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md | Add the code-behind source as follows: 1. Add the following code to the method body of `InitializeDialogServiceConnector` ```csharp- // This code creates the `DialogServiceConnector` with your subscription information. - // create a DialogServiceConfig by providing a Custom Commands application id and Cognitive Services subscription key - // the RecoLanguage property is optional (default en-US); note that only en-US is supported in Preview + // This code creates the `DialogServiceConnector` with your resource information. + // create a DialogServiceConfig by providing a Custom Commands application id and Speech resource key + // The RecoLanguage property is optional (default en-US); note that only en-US is supported in Preview const string speechCommandsApplicationId = "YourApplicationId"; // Your application id- const string speechSubscriptionKey = "YourSpeechSubscriptionKey"; // Your subscription key - const string region = "YourServiceRegion"; // The subscription service region. + const string speechSubscriptionKey = "YourSpeechSubscriptionKey"; // Your Speech resource key + const string region = "YourServiceRegion"; // The Speech resource region. var speechCommandsConfig = CustomCommandsConfig.FromSubscription(speechCommandsApplicationId, speechSubscriptionKey, region); speechCommandsConfig.SetProperty(PropertyId.SpeechServiceConnection_RecoLanguage, "en-us"); connector = new DialogServiceConnector(speechCommandsConfig); ``` -1. Replace the strings `YourApplicationId`, `YourSpeechSubscriptionKey`, and `YourServiceRegion` with your own values for your app, speech subscription, and [region](regions.md) +1. Replace the strings `YourApplicationId`, `YourSpeechSubscriptionKey`, and `YourServiceRegion` with your own values for your app, speech key, and [region](regions.md) 1. Append the following code snippet to the end of the method body of `InitializeDialogServiceConnector` |
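After `InitializeDialogServiceConnector` creates the connector, the surrounding sample typically wires up event handlers and starts a listening turn. The following is a rough sketch of that pattern, assuming a `DialogServiceConnector` built as shown above; the handler bodies and helper name are hypothetical.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech.Dialog;

// Hypothetical helper: wires up a couple of handlers on the connector created
// above, connects, and runs a single listening turn from the default microphone.
static async Task RunListeningTurnAsync(DialogServiceConnector connector)
{
    connector.Recognized += (sender, e) =>
        Console.WriteLine($"Recognized: {e.Result.Text}");

    connector.ActivityReceived += (sender, e) =>
        Console.WriteLine($"Activity received: {e.Activity}"); // JSON payload from the Custom Commands app

    await connector.ConnectAsync();
    await connector.ListenOnceAsync();
}
```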
cognitive-services | How To Custom Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md | -A Speech service subscription is required before you can use Custom Neural Voice. Follow these instructions to create a Speech service subscription in Azure. If you don't have an Azure account, you can sign up for a new one. +A Speech resource is required before you can use Custom Neural Voice. Follow these instructions to create a Speech resource in Azure. If you don't have an Azure account, you can sign up for a new one. -Once you've created an Azure account and a Speech service subscription, you'll need to sign in to Speech Studio and connect your subscription. +Once you've created an Azure account and a Speech resource, you'll need to sign in to Speech Studio and connect your subscription. -1. Get your Speech service subscription key from the Azure portal. +1. Get your Speech resource key from the Azure portal. 1. Sign in to [Speech Studio](https://aka.ms/speechstudio), and then select **Custom Voice**. 1. Select your subscription and create a speech project. 1. If you want to switch to another Speech subscription, select the **cog** icon at the top. |
cognitive-services | How To Deploy And Use Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md | You can suspend and resume your endpoint if you don't use it all the time. When You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update. > [!NOTE]->- Standard subscription (S0) users can create up to 50 endpoints, each with its own custom neural voice. ->- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to pass through the authentication of the text-to-speech service. +>- You can create up to 50 endpoints with a standard (S0) Speech resource, each with its own custom neural voice. +>- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text-to-speech service. After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code. The application settings that you use as REST API [request parameters](#request- :::image type="content" source="./media/custom-voice/cnv-endpoint-app-settings-zoom.png" alt-text="Screenshot of custom endpoint app settings in Speech Studio." lightbox="./media/custom-voice/cnv-endpoint-app-settings-full.png"::: -* The **Endpoint key** shows the subscription key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header. +* The **Endpoint key** shows the Speech resource key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header. * The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`. * The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter. The possible `status` property values are: ##### Get endpoint example -For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters). +For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters). 
HTTP example: ```HTTP GET api/texttospeech/v3.0/endpoints/<YourEndpointId> HTTP/1.1-Ocp-Apim-Subscription-Key: YourSubscriptionKey -Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com +Ocp-Apim-Subscription-Key: YourResourceKey +Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com ``` cURL example: ```Console-curl -v -X GET "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey >" +curl -v -X GET "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourResourceKey >" ``` Response header example: Use the [get endpoint](#get-endpoint) operation to poll and track the status pro ##### Suspend endpoint example -For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters). +For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters). HTTP example: ```HTTP POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend HTTP/1.1-Ocp-Apim-Subscription-Key: YourSubscriptionKey -Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com +Ocp-Apim-Subscription-Key: YourResourceKey +Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com Content-Type: application/json Content-Length: 0 ``` Content-Length: 0 cURL example: ```Console-curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey >" -H "content-type: application/json" -H "content-length: 0" +curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourResourceKey >" -H "content-type: application/json" -H "content-length: 0" ``` Response header example: Use the [get endpoint](#get-endpoint) operation to poll and track the status pro ##### Resume endpoint example -For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters). +For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters). HTTP example: ```HTTP POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume HTTP/1.1-Ocp-Apim-Subscription-Key: YourSubscriptionKey -Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com +Ocp-Apim-Subscription-Key: YourResourceKey +Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com Content-Type: application/json Content-Length: 0 ``` Content-Length: 0 cURL example: ```Console-curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey >" -H "content-type: application/json" -H "content-length: 0" +curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourResourceKey >" -H "content-type: application/json" -H "content-length: 0" ``` Response header example: For more information, see [response headers](#response-headers). ##### Request parameters -You use these request parameters with calls to the REST API. 
See [application settings](#application-settings) for information about where to get your region, endpoint ID, and subscription key in Speech Studio. +You use these request parameters with calls to the REST API. See [application settings](#application-settings) for information about where to get your region, endpoint ID, and Speech resource key in Speech Studio. | Name | Location | Required | Type | Description | | | | -- | | |-| `YourServiceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. | +| `YourResourceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. | | `YourEndpointId` | Path | `True` | string | The identifier of the endpoint. |-| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The subscription key the endpoint is associated with. | +| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The Speech resource key the endpoint is associated with. | ##### Response headers The HTTP status code for each response indicates success or common errors. | 200 | OK | The request was successful. | | 202 | Accepted | The request has been accepted and is being processed. | | 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |-| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. | -| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. | +| 401 | Unauthorized | The request isn't authorized. Check to make sure your Speech resource key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. | +| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your Speech resource. | | 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. | ## Use your custom voice The difference between Custom voice sample codes and [Text-to-speech quickstart ::: zone pivot="programming-language-csharp" ```csharp-var speechConfig = SpeechConfig.FromSubscription(YourSubscriptionKey, YourServiceRegion); +var speechConfig = SpeechConfig.FromSubscription(YourResourceKey, YourResourceRegion); speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName"; speechConfig.EndpointId = "YourEndpointId"; ``` speechConfig.EndpointId = "YourEndpointId"; ::: zone pivot="programming-language-cpp" ```cpp-auto speechConfig = SpeechConfig::FromSubscription(YourSubscriptionKey, YourServiceRegion); +auto speechConfig = SpeechConfig::FromSubscription(YourResourceKey, YourResourceRegion); speechConfig->SetSpeechSynthesisVoiceName("YourCustomVoiceName"); speechConfig->SetEndpointId("YourEndpointId"); ``` speechConfig->SetEndpointId("YourEndpointId"); ::: zone pivot="programming-language-java" ```java-SpeechConfig speechConfig = SpeechConfig.fromSubscription(YourSubscriptionKey, YourServiceRegion); +SpeechConfig speechConfig = SpeechConfig.fromSubscription(YourResourceKey, YourResourceRegion); speechConfig.setSpeechSynthesisVoiceName("YourCustomVoiceName"); speechConfig.setEndpointId("YourEndpointId"); ``` |
cognitive-services | How To Recognize Intents From Speech Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-intents-from-speech-csharp.md | Next, you add code to the project. [!code-csharp[Intent recognition by using a microphone](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#intentRecognitionWithMicrophone)] -1. Replace the placeholders in this method with your LUIS subscription key, region, and app ID as follows. +1. Replace the placeholders in this method with your LUIS resource key, region, and app ID as follows. | Placeholder | Replace with | | -- | |- | `YourLanguageUnderstandingSubscriptionKey` | Your LUIS key. Again, you must get this item from your Azure dashboard. You can find it on your app's **Azure Resources** page (under **Manage**) in the [LUIS portal](https://www.luis.ai/home). | - | `YourLanguageUnderstandingServiceRegion` | The short identifier for the region your LUIS subscription is in, such as `westus` for West US. See [Regions](regions.md). | + | `YourLanguageUnderstandingSubscriptionKey` | Your LUIS resource key. Again, you must get this item from your Azure dashboard. You can find it on your app's **Azure Resources** page (under **Manage**) in the [LUIS portal](https://www.luis.ai/home). | + | `YourLanguageUnderstandingServiceRegion` | The short identifier for the region your LUIS resource is in, such as `westus` for West US. See [Regions](regions.md). | | `YourLanguageUnderstandingAppId` | The LUIS app ID. You can find it on your app's **Settings** page in the [LUIS portal](https://www.luis.ai/home). | With these changes made, you can build (**Control+Shift+B**) and run (**F5**) the application. When you're prompted, try saying "Turn off the lights" into your PC's microphone. The application displays the result in the console window. The following sections include a discussion of the code. ## Create an intent recognizer -First, you need to create a speech configuration from your LUIS prediction key and region. You can use speech configurations to create recognizers for the various capabilities of the Speech SDK. The speech configuration has multiple ways to specify the subscription you want to use; here, we use `FromSubscription`, which takes the subscription key and region. +First, you need to create a speech configuration from your LUIS prediction key and region. You can use speech configurations to create recognizers for the various capabilities of the Speech SDK. The speech configuration has multiple ways to specify the resource you want to use; here, we use `FromSubscription`, which takes the resource key and region. > [!NOTE]-> Use the key and region of your LUIS subscription, not a Speech service subscription. +> Use the key and region of your LUIS resource, not a Speech resource. -Next, create an intent recognizer using `new IntentRecognizer(config)`. Since the configuration already knows which subscription to use, you don't need to specify the subscription key again when creating the recognizer. +Next, create an intent recognizer using `new IntentRecognizer(config)`. Since the configuration already knows which resource to use, you don't need to specify the key again when creating the recognizer. ## Import a LUIS model and add intents |
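Putting those pieces together, the pattern the article walks through looks roughly like the sketch below. It's a condensed illustration: the intent names are hypothetical, and the placeholders correspond to the LUIS values described above.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

class IntentRecognitionSketch
{
    static async Task Main()
    {
        // Use the LUIS resource key and region here, not a Speech resource key.
        var config = SpeechConfig.FromSubscription("YourLanguageUnderstandingSubscriptionKey",
                                                   "YourLanguageUnderstandingServiceRegion");

        using var recognizer = new IntentRecognizer(config);

        // Import the LUIS app and register the intents to listen for (names are examples only).
        var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");
        recognizer.AddIntent(model, "HomeAutomation.TurnOff", "off");
        recognizer.AddIntent(model, "HomeAutomation.TurnOn", "on");

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Recognized \"{result.Text}\" with intent id \"{result.IntentId}\"");
    }
}
```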
cognitive-services | How To Recognize Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-speech.md | |
cognitive-services | How To Speech Synthesis Viseme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md | |
cognitive-services | How To Speech Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis.md | |
cognitive-services | How To Windows Voice Assistants Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md | For a complete voice assistant experience, the application will need a dialog se These are the requirements to create a basic dialog service using Direct Line Speech. -- **Speech resource:** A subscription for Cognitive Speech Services for speech-to-text and text-to-speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).+- **Speech resource:** A resource for Cognitive Speech Services for speech-to-text and text-to-speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). - **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot". ## Try out the sample app -With your Speech Services subscription key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials. +With your Speech resource key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials. ## Create your own voice assistant for Windows |
cognitive-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md | |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md | |
cognitive-services | Long Audio Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md | get_voices() Replace the following values: -* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). +* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). * Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). You'll see output that looks like this: submit_synthesis() Replace the following values: -* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). +* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). * Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). * Replace `<input_file_path>` with the path to the text file you've prepared for text-to-speech. * Replace `<locale>` with the desired output locale. For more information, see [language support](language-support.md?tabs=stt-tts). The following table details the HTTP response codes and messages from the REST A | API | HTTP status code | Description | Solution | |--||-|-|-| Create | 400 | The voice synthesis is not enabled in this region. | Change the speech subscription key with a supported region. | -| | 400 | Only the **Standard** speech subscription for this region is valid. | Change the speech subscription key to the "Standard" pricing tier. | +| Create | 400 | The voice synthesis is not enabled in this region. | Change the speech resource key with a supported region. | +| | 400 | Only the **Standard** speech resource for this region is valid. | Change the speech resource key to the "Standard" pricing tier. | | | 400 | Exceed the 20,000 request limit for the Azure account. Remove some requests before submitting new ones. | The server will keep up to 20,000 requests for each Azure account. Delete some requests before submitting new ones. | | | 400 | This model cannot be used in the voice synthesis: {modelID}. | Make sure the {modelID}'s state is correct. | | | 400 | The region for the request does not match the region for the model: {modelID}. | Make sure the {modelID}'s region match with the request's region. | |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md | |
cognitive-services | Quickstart Custom Commands Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md | -At this time, Custom Commands supports speech subscriptions created in regions that have [voice assistant capabilities](./regions.md#voice-assistants). +At this time, Custom Commands supports speech resources created in regions that have [voice assistant capabilities](./regions.md#voice-assistants). ## Prerequisites At this time, Custom Commands supports speech subscriptions created in regions t 1. In a web browser, go to [Speech Studio](https://aka.ms/speechstudio/customcommands). 1. Enter your credentials to sign in to the portal. - The default view is your list of Speech subscriptions. + The default view is your list of Speech resources. > [!NOTE]- > If you don't see the select subscription page, you can navigate there by choosing "Speech resources" from the settings menu on the top bar. + > If you don't see the select resource page, you can navigate there by choosing "Resource" from the settings menu on the top bar. -1. Select your Speech subscription, and then select **Go to Studio**. +1. Select your Speech resource, and then select **Go to Studio**. 1. Select **Custom Commands**. - The default view is a list of the Custom Commands applications you have under your selected subscription. + The default view is a list of the Custom Commands applications you have under your selected resource. ## Import an existing application as a new Custom Commands project |
cognitive-services | Setup Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md | |
cognitive-services | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md | |
cognitive-services | Resiliency And Recovery Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md | -The Speech service is [available in various regions](./regions.md). Service subscription keys are tied to a single region. When you acquire a key, you select a specific region, where your data, model and deployments reside. +The Speech service is [available in various regions](./regions.md). Speech resource keys are tied to a single region. When you acquire a key, you select a specific region, where your data, model and deployments reside. Datasets for customer-created data assets, such as customized speech models, custom voice fonts and speaker recognition voice profiles, are also **available only within the service-deployed region**. Such assets are: These assets are backed up regularly and automatically by the repositories thems ## How to monitor service availability -If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a service subscription. +If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a Speech resource. Follow these steps to configure your client to monitor for errors: Follow these steps to configure your client to monitor for errors: 4. Each region has its own STS token service. For the primary region and any backup regions your client configuration file needs to know the: - Regional Speech service endpoints- - [Regional subscription key and the region code](./rest-speech-to-text.md) + - [Regional key and the region code](./rest-speech-to-text.md) 5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here's sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965). |
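The entry above links to a C# sample that switches regions when errors persist. A minimal sketch of the same idea in Python, assuming the `azure-cognitiveservices-speech` package and two hypothetical Speech resources in different regions, looks like this:

```python
import azure.cognitiveservices.speech as speechsdk

# Hypothetical primary and backup Speech resources; keys and regions are placeholders.
resources = [
    {"key": "<primary-key>", "region": "westus"},
    {"key": "<backup-key>", "region": "eastus"},
]

def recognize_with_failover(audio_file: str):
    for resource in resources:
        config = speechsdk.SpeechConfig(subscription=resource["key"], region=resource["region"])
        audio = speechsdk.audio.AudioConfig(filename=audio_file)
        recognizer = speechsdk.SpeechRecognizer(speech_config=config, audio_config=audio)
        result = recognizer.recognize_once()
        if result.reason != speechsdk.ResultReason.Canceled:
            return result  # recognized (or no match); no need to fail over
        details = result.cancellation_details
        # Connection timeouts and service errors surface as cancellations; try the next region.
        print(f"Region {resource['region']} failed: {details.error_details}")
    return None
```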
cognitive-services | Rest Speech To Text Short | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md | The endpoint for the REST API for short audio has this format: https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1 ``` -Replace `<REGION_IDENTIFIER>` with the identifier that matches the [region](regions.md) of your subscription. +Replace `<REGION_IDENTIFIER>` with the identifier that matches the [region](regions.md) of your Speech resource. > [!NOTE] > You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`. This table lists required and optional headers for speech-to-text requests: |Header| Description | Required or optional | ||-||-| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. | +| `Ocp-Apim-Subscription-Key` | Your resource key for the Speech service. | Either this header or `Authorization` is required. | | `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. | | `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results. These scores assess the pronunciation quality of speech input, with indicators like accuracy, fluency, and completeness. <br><br>This parameter is a Base64-encoded JSON that contains multiple detailed parameters. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters). | Optional | | `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required | The following sample includes the host name and required headers. It's important POST speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1 Accept: application/json;text/xml Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000-Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY +Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY Host: westus.stt.speech.microsoft.com Transfer-Encoding: chunked Expect: 100-continue The HTTP status code for each response indicates success or common errors. | 100 | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (This code is used with chunked transfer.) | | 200 | OK | The request was successful. The response body is a JSON object. | | 400 | Bad request | The language code wasn't provided, the language isn't supported, or the audio file is invalid (for example). |-| 401 | Unauthorized | A subscription key or an authorization token is invalid in the specified region, or an endpoint is invalid. | -| 403 | Forbidden | A subscription key or authorization token is missing. | +| 401 | Unauthorized | A resource key or an authorization token is invalid in the specified region, or an endpoint is invalid. | +| 403 | Forbidden | A resource key or authorization token is missing. 
| ### Chunked transfer request.Method = "POST"; request.ProtocolVersion = HttpVersion.Version11; request.Host = host; request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";-request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_SUBSCRIPTION_KEY"; +request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_RESOURCE_KEY"; request.AllowWriteStreamBuffering = false; using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read)) |
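For reference, the request documented in the entry above (regional endpoint, required `language` parameter, and `Ocp-Apim-Subscription-Key` header) can be sketched with the Python `requests` library; the key, region, and WAV file name below are placeholders.

```python
import requests

region = "westus"                    # region of your Speech resource
resource_key = "YOUR_RESOURCE_KEY"   # Speech resource key
url = (f"https://{region}.stt.speech.microsoft.com"
       "/speech/recognition/conversation/cognitiveservices/v1")

headers = {
    "Ocp-Apim-Subscription-Key": resource_key,
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    "Accept": "application/json",
}

with open("sample.wav", "rb") as audio:
    # The language parameter is required; omitting it produces a 4xx error.
    response = requests.post(
        url,
        params={"language": "en-US", "format": "detailed"},
        headers=headers,
        data=audio,
    )

print(response.status_code)  # 401/403 indicate key or header problems, as listed above
print(response.json())
```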
cognitive-services | Rest Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md | -The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A subscription key for the endpoint or region that you plan to use is required. Here are links to more information: +The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required. Here are links to more information: - For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts). - For information about regional availability, see [Speech service supported regions](regions.md#speech-service). This table lists required and optional headers for text-to-speech requests: | Header | Description | Required or optional | |--|-||-| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. | +| `Ocp-Apim-Subscription-Key` | Your Speech resource key. | Either this header or `Authorization` is required. | | `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. | ### Request body This request requires only an authorization header: GET /cognitiveservices/voices/list HTTP/1.1 Host: westus.tts.speech.microsoft.com-Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY +Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY ``` ### Sample response The HTTP status code for each response indicates success or common errors. ||-|--| | 200 | OK | The request was successful. | | 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |-| 401 | Unauthorized | The request is not authorized. Make sure your subscription key or token is valid and in the correct region. | -| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your subscription. | +| 401 | Unauthorized | The request is not authorized. Make sure your resource key or token is valid and in the correct region. | +| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. | | 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. | The `v1` endpoint allows you to convert text to speech by using [Speech Synthesi ### Regions and endpoints -These regions are supported for text-to-speech through the REST API. Be sure to select the endpoint that matches your subscription region. +These regions are supported for text-to-speech through the REST API. Be sure to select the endpoint that matches your Speech resource region. [!INCLUDE [](includes/cognitive-services-speech-service-endpoints-text-to-speech.md)] The HTTP status code for each response indicates success or common errors: ||-|--| | 200 | OK | The request was successful. The response body is an audio file. | | 400 | Bad request | A required parameter is missing, empty, or null. 
Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |-| 401 | Unauthorized | The request is not authorized. Make sure your subscription key or token is valid and in the correct region. | +| 401 | Unauthorized | The request is not authorized. Make sure your Speech resource key or token is valid and in the correct region. | | 415 | Unsupported media type | It's possible that the wrong `Content-Type` value was provided. `Content-Type` should be set to `application/ssml+xml`. |-| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your subscription. | +| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. | | 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. | If the HTTP status is `200 OK`, the body of the response contains an audio file in the requested format. This file can be played as it's transferred, saved to a buffer, or saved to a file. |
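As a companion to the voices-list request shown in the entry above, here is a minimal Python `requests` sketch; the region and key are placeholders, and the `ShortName` and `Locale` fields are assumed from a typical voices-list response.

```python
import requests

region = "westus"
resource_key = "YOUR_RESOURCE_KEY"
url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"

response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": resource_key})
response.raise_for_status()  # a 401 here usually means a wrong key or region

# The body is a JSON array of voice descriptions; print a few entries.
for voice in response.json()[:5]:
    print(voice.get("ShortName"), voice.get("Locale"))
```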
cognitive-services | Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md | Available to US government entities and their partners only. See more informatio - Text-to-speech - Standard voice - Neural voice- - Speech translator + - Speech translation - **Unsupported features:** - Custom Voice - **Supported languages:** Speech Services REST API endpoints in Azure Government have the following format | REST API type / operation | Endpoint format | |--|--| | Access token | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/sts/v1.0/issueToken`-| [Speech-to-text REST API v3.0](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` | +| [Speech-to-text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` | | [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` | | [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` | Speech Services REST API endpoints in Azure China have the following format: | REST API type / operation | Endpoint format | |--|--| | Access token | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/sts/v1.0/issueToken`-| [Speech-to-text REST API v3.0](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` | +| [Speech-to-text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` | | [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` | | [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` | For [Speech SDK](speech-sdk.md) in sovereign clouds you need to use "from host / # [C#](#tab/c-sharp) ```csharp-var config = SpeechConfig.FromHost(azCnHost, subscriptionKey); +var config = SpeechConfig.FromHost("azCnHost", subscriptionKey); ``` # [C++](#tab/cpp) ```cpp-auto config = SpeechConfig::FromHost(azCnHost, subscriptionKey); +auto config = SpeechConfig::FromHost("azCnHost", subscriptionKey); ``` # [Java](#tab/java) ```java-SpeechConfig config = SpeechConfig.fromHost(azCnHost, subscriptionKey); +SpeechConfig config = SpeechConfig.fromHost("azCnHost", subscriptionKey); ``` # [Python](#tab/python) ```python import azure.cognitiveservices.speech as speechsdk-speech_config = speechsdk.SpeechConfig(host=azCnHost, subscription=subscriptionKey) +speech_config = speechsdk.SpeechConfig(host="azCnHost", subscription=subscriptionKey) ``` # [Objective-C](#tab/objective-c) ```objectivec-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:azCnHost subscription:subscriptionKey]; +SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:"azCnHost" subscription:subscriptionKey]; ``` *** |
cognitive-services | Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md | |
cognitive-services | Speech Services Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md | A Speech resource with a custom domain name and a private endpoint turned on use > A Speech resource without private endpoints that uses a custom domain name also has a special way of interacting with Speech Services. > This way differs from the scenario of a Speech resource that uses a private endpoint. > This is important to consider because you may decide to remove private endpoints later.-> See _Adjust an application to use a Speech resource without private endpoints_ later in this article. +> See [Adjust an application to use a Speech resource without private endpoints](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints) later in this article. ### Speech resource with a custom domain name and a private endpoint: Usage with the REST APIs The detailed description of the special endpoints and how their URL should be tr Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the Text-to-speech REST API. Usage of the Speech-to-text REST API for short audio is fully equivalent. > [!NOTE]-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) +> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) > > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token. Follow these steps to modify your code: 1. Modify how you create the instance of `SpeechConfig`. Most likely, your application is using something like this: ```csharp- var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion); + var config = SpeechConfig.FromSubscription(speechKey, azureRegion); ``` This won't work for a private-endpoint-enabled Speech resource because of the host name and URL changes that we described in the previous sections. If you try to run your existing application without any modifications by using the key of a private-endpoint-enabled resource, you'll get an authentication error (401). To make it work, modify how you instantiate the `SpeechConfig` class and use "from endpoint"/"with endpoint" initialization. Suppose we have the following two variables defined:- - `subscriptionKey` contains the key of the private-endpoint-enabled Speech resource. + - `speechKey` contains the key of the private-endpoint-enabled Speech resource. - `endPoint` contains the full *modified* endpoint URL (using the type required by the corresponding programming language). 
In our example, this variable should contain: ``` wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US Follow these steps to modify your code: Create a `SpeechConfig` instance: ```csharp- var config = SpeechConfig.FromEndpoint(endPoint, subscriptionKey); + var config = SpeechConfig.FromEndpoint(endPoint, speechKey); ``` ```cpp- auto config = SpeechConfig::FromEndpoint(endPoint, subscriptionKey); + auto config = SpeechConfig::FromEndpoint(endPoint, speechKey); ``` ```java- SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, subscriptionKey); + SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, speechKey); ``` ```python import azure.cognitiveservices.speech as speechsdk- speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=subscriptionKey) + speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=speechKey) ``` ```objectivec- SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:subscriptionKey]; + SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:speechKey]; ``` > [!TIP] Speech-to-text REST API v3.0 usage is fully equivalent to the case of [private-e In this case, usage of the Speech-to-text REST API for short audio and usage of the Text-to-speech REST API have no differences from the general case, with one exception. (See the following note.) You should use both APIs as described in the [speech-to-text REST API for short audio](rest-speech-to-text-short.md) and [Text-to-speech REST API](rest-text-to-speech.md) documentation. > [!NOTE]-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) +> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a Speech resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) > > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token. However, if you try to run the same application after having all private endpoin You need to roll back your application to the standard instantiation of `SpeechConfig` in the style of the following code: ```csharp-var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion); +var config = SpeechConfig.FromSubscription(speechKey, azureRegion); ``` [!INCLUDE [](includes/speech-vnet-service-enpoints-private-endpoints-simultaneously.md)] |
cognitive-services | Speech Ssml Phonetic Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md | |
cognitive-services | Speech Synthesis Markup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md | |
cognitive-services | Speech Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md | |
cognitive-services | Spx Basics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md | This article assumes that you have working knowledge of the Command Prompt windo [!INCLUDE [](includes/spx-setup.md)] -## Create a subscription configuration +## Create a resource configuration # [Terminal](#tab/terminal) -To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). +To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). -To configure your subscription key and region identifier, run the following commands: +To configure your resource key and region identifier, run the following commands: ```console-spx config @key --set SUBSCRIPTION-KEY -spx config @region --set REGION +spx config @key --set SPEECH-KEY +spx config @region --set SPEECH-REGION ``` The key and region are stored for future Speech CLI commands. To view the current configuration, run the following commands: spx config @region --clear # [PowerShell](#tab/powershell) -To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). +To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). -To configure your subscription key and region identifier, run the following commands in PowerShell: +To configure your Speech resource key and region identifier, run the following commands in PowerShell: ```powershell-spx --% config @key --set SUBSCRIPTION-KEY -spx --% config @region --set REGION +spx --% config @key --set SPEECH-KEY +spx --% config @region --set SPEECH-REGION ``` The key and region are stored for future SPX commands. To view the current configuration, run the following commands: |
cognitive-services | Spx Batch Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-batch-operations.md | |
cognitive-services | Spx Output Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-output-options.md | |
cognitive-services | Spx Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-overview.md | |
cognitive-services | Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md | |
cognitive-services | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/troubleshooting.md | This article provides information to help you solve issues you might encounter w You might have the wrong endpoint for your region or service. Check the URI to make sure it's correct. -Also, there might be a problem with your subscription key or authorization token. For more information, see the next section. +Also, there might be a problem with your Speech resource key or authorization token. For more information, see the next section. ## Error: HTTP 403 Forbidden or HTTP 401 Unauthorized This error often is caused by authentication issues. Connection requests without a valid `Ocp-Apim-Subscription-Key` or `Authorization` header are rejected with a status of 403 or 401. -* If you're using a subscription key for authentication, you might see the error because: +* If you're using a resource key for authentication, you might see the error because: - - The subscription key is missing or invalid - - You have exceeded your subscription's usage quota + - The key is missing or invalid + - You have exceeded your resource's usage quota * If you're using an authorization token for authentication, you might see the error because: - The authorization token is invalid - The authorization token is expired -### Validate your subscription key +### Validate your resource key -You can verify that you have a valid subscription key by running one of the following commands. +You can verify that you have a valid resource key by running one of the following commands. > [!NOTE]-> Replace `YOUR_SUBSCRIPTION_KEY` and `YOUR_REGION` with your own subscription key and associated region. +> Replace `YOUR_RESOURCE_KEY` and `YOUR_REGION` with your own resource key and associated region. * PowerShell You can verify that you have a valid subscription key by running one of the foll $FetchTokenHeader = @{ 'Content-type'='application/x-www-form-urlencoded' 'Content-Length'= '0'- 'Ocp-Apim-Subscription-Key' = 'YOUR_SUBSCRIPTION_KEY' + 'Ocp-Apim-Subscription-Key' = 'YOUR_RESOURCE_KEY' } $OAuthToken = Invoke-RestMethod -Method POST -Uri https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken -Headers $FetchTokenHeader $OAuthToken You can verify that you have a valid subscription key by running one of the foll * cURL ```- curl -v -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY" -H "Content-type: application/x-www-form-urlencoded" -H "Content-Length: 0" + curl -v -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY" -H "Content-type: application/x-www-form-urlencoded" -H "Content-Length: 0" ``` -If you entered a valid subscription key, the command returns an authorization token, otherwise an error is returned. +If you entered a valid resource key, the command returns an authorization token, otherwise an error is returned. ### Validate an authorization token |
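The entry above validates a resource key with PowerShell and cURL; an equivalent sketch using the Python `requests` library (same endpoint and headers, placeholder key and region) is:

```python
import requests

region = "YOUR_REGION"
resource_key = "YOUR_RESOURCE_KEY"
url = f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"

response = requests.post(url, headers={
    "Ocp-Apim-Subscription-Key": resource_key,
    "Content-Type": "application/x-www-form-urlencoded",
    "Content-Length": "0",
})

if response.ok:
    print("Key is valid; token starts with:", response.text[:40])
else:
    # A 401/403 here points to a missing or invalid key, or an exhausted quota.
    print("Validation failed:", response.status_code, response.text)
```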
cognitive-services | Tutorial Voice Enable Your Bot Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md | If you get an error message in your main app window, use this table to identify | Message | What should you do? | |-|-|-|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct subscription key (or authorization token) and region name| On the **Settings** page of the app, make sure that you entered the subscription key and its region correctly. | +|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct resource key (or authorization token) and region name| On the **Settings** page of the app, make sure that you entered the key and its region correctly. | |Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: We could not connect to the bot before sending a message | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.| |Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1002. Error details: The server returned status code '503' when status code '101' was expected | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) box and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|-|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your subscription key doesn't support neural voices. See [neural voices](./regions.md#speech-service) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).| +|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your resource key doesn't support neural voices. See [neural voices](./regions.md#speech-service) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).| If the actions in the table don't address your problem, see [Voice assistants: Frequently asked questions](faq-voice-assistants.yml). If you still can't resolve your problem after following all the steps in this tutorial, please enter a new issue on the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues). 
To learn more about what's returned in the JSON output, see the [fields in the a ### View client source code for calls to the Speech SDK The Windows Voice Assistant Client uses the NuGet package [Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech/), which contains the Speech SDK. A good place to start reviewing the sample code is the method `InitSpeechConnector()` in the file [VoiceAssistantClient\MainWindow.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs), which creates these two Speech SDK objects:-- [DialogServiceConfig](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconfig): For configuration settings like subscription key and its region.+- [DialogServiceConfig](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconfig): For configuration settings like resource key and its region. - [DialogServiceConnector](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.-ctor): To manage the channel connection and client subscription events for handling recognized speech and bot responses. ## Add custom keyword activation |
cognitive-services | Get Started With Document Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md | The following headers are included with each Document Translation API request: "inputs": [ { "source": {- "sourceUrl": "https://myblob.blob.core.windows.net/source", + "sourceUrl": "https://myblob.blob.core.windows.net/source" }, "targets": [ { payload= { "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS", "storageSource": "AzureBlob", "language": "en"- } }, "targets": [ { |
cognitive-services | Quickstart Translator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md | The core operation of the Translator service is translating text. In this quicks > [!TIP] >- > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module. + > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module. 1. Open Visual Studio. You can use any text editor to write Go applications. We recommend using the lat > [!TIP] >-> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module. +> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module. 1. If you haven't done so already, [download and install Go](https://go.dev/doc/install). After a successful call, you should see the following response: > [!TIP] >- > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module. + > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module. 1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`. After a successful call, you should see the following response: > [!TIP] >- > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module. + > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module. 1. Open a terminal window and use pip to install the Requests library and uuid0 package: |
cognitive-services | Translator Text Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md | To call the Translator service via the [REST API](reference/rest-api-guide.md), > [!TIP] >- > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module. + > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module. 1. Open Visual Studio. You can use any text editor to write Go applications. We recommend using the lat > [!TIP] >-> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module. +> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module. 1. If you haven't done so already, [download and install Go](https://go.dev/doc/install). You can use any text editor to write Go applications. We recommend using the lat > [!TIP] >- > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module. + > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module. 1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-text-app`. You can use any text editor to write Go applications. We recommend using the lat > [!TIP] >- > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module. + > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module. 1. Open a terminal window and use pip to install the Requests library and uuid0 package: |
cognitive-services | Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md | No, the autoscale feature is not available to free tier subscriptions. - [Plan and Manage costs for Azure Cognitive Services](./plan-manage-costs.md). - [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. |
cognitive-services | Tutorial Visual Search Crop Area Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-crop-area-results.md | This image is cropped by creating an `ImageInfo` object from the crop area, and ```csharp CropArea CropArea = new CropArea(top: (float)0.01, bottom: (float)0.30, left: (float)0.01, right: (float)0.20);-string imageURL = "https://docs.microsoft.com/azure/cognitive-services/bing-visual-search/media/ms_srleaders.jpg"; +string imageURL = "https://learn.microsoft.com/azure/cognitive-services/bing-visual-search/media/ms_srleaders.jpg"; ImageInfo imageInfo = new ImageInfo(cropArea: CropArea, url: imageURL); VisualSearchRequest visualSearchRequest = new VisualSearchRequest(imageInfo: imageInfo); Getting the actual image URLs requires a cast that reads an `ActionType` as `Ima > [Create a Visual Search single-page web app](tutorial-bing-visual-search-single-page-app.md) ## See also-> [What is the Bing Visual Search API?](./overview.md) +> [What is the Bing Visual Search API?](./overview.md) |
cognitive-services | Cognitive Services Environment Variables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-environment-variables.md | class Program # [C++](#tab/cpp) -For more information, see <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>. +For more information, see <a href="/cpp/c-runtime-library/reference/getenv-s-wgetenv-s" target="_blank">`getenv_s`</a> and <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv`</a>. ```cpp+#include <iostream> #include <stdlib.h> +std::string getEnvironmentVariable(const char* name); + int main() { // Get the named env var, and assign it to the value variable- auto value = - getenv("ENVIRONMENT_VARIABLE_KEY"); + auto value = getEnvironmentVariable("ENVIRONMENT_VARIABLE_KEY"); +} ++std::string getEnvironmentVariable(const char* name) +{ +#if defined(_MSC_VER) + size_t requiredSize = 0; + (void)getenv_s(&requiredSize, nullptr, 0, name); + if (requiredSize == 0) + { + return ""; + } + auto buffer = std::make_unique<char[]>(requiredSize); + (void)getenv_s(&requiredSize, buffer.get(), requiredSize, name); + return buffer.get(); +#else + auto value = getenv(name); + return value ? value : ""; +#endif } ``` |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md | After you've had a chance to get started with the Language service, try our tuto * [Extract key phrases from text stored in Power BI](key-phrase-extraction/tutorials/integrate-power-bi.md) * [Use Power Automate to sort information in Microsoft Excel](named-entity-recognition/tutorials/extract-excel-information.md) -* [Use Flask to translate text, analyze sentiment, and synthesize speech](/learn/modules/python-flask-build-ai-web-app/) +* [Use Flask to translate text, analyze sentiment, and synthesize speech](/training/modules/python-flask-build-ai-web-app/) * [Use Cognitive Services in canvas apps](/powerapps/maker/canvas-apps/cognitive-services-api?context=/azure/cognitive-services/language-service/context/context) * [Create a FAQ Bot](question-answering/tutorials/bot-service.md) |
cognitive-services | Authoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md | curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applicat "value": [ { "displayName": "source1",- "sourceUri": "https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview", + "sourceUri": "https://learn.microsoft.com/azure/cognitive-services/qnamaker/overview/overview", "sourceKind": "url", "lastUpdatedDateTime": "2021-05-01T15:13:22Z" }, curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applic "op": "add", "value":{ "id": 1,- "answer": "The latest question answering docs are on https://docs.microsoft.com", + "answer": "The latest question answering docs are on https://learn.microsoft.com", "source": "source5", "questions": [ "Where do I find docs for question answering?" |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md | Text Analytics for health extracts and labels relevant medical information from [!INCLUDE [Text Analytics for health](includes/features.md)] -> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player] +> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player] ## Get started with Text analytics for health |
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md | Azure Cognitive Service for Language is updated on an ongoing basis. To stay up- ## September 2022 +* [Conversational language understanding](./conversational-language-understanding/overview.md) is available in the following regions: + * Central India + * Switzerland North + * West US 2 * Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German Italian, Portuguese and Hebrew. These languages are available when using a docker container to deploy the API service. - * The Azure.AI.TextAnalytics client library v5.2.0 are generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md). This release includes the following updates: |
cognitive-services | Concepts Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md | JSON objects can include nested JSON objects and simple property/values. An arra } ``` +## Inference Explainability +Personalizer can help you to understand which features are the most and least influential when determining the best action. When enabled, inference explainability includes feature scores from the underlying model into the Rank API response, so your application receives this information at the time of inference. +Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions. ++Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: "IsInferenceExplainabilityEnabled": true. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP) ++```JSON +{ +  "rewardWaitTime": "PT10M", +  "defaultReward": 0, +  "rewardAggregation": "earliest", +  "explorationPercentage": 0.2, +  "modelExportFrequency": "PT5M", +  "logMirrorEnabled": true, +  "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature", +  "logRetentionDays": 7, +  "lastConfigurationEditDate": "0001-01-01T00:00:00Z", +  "learningMode": "Online", +  "isAutoOptimizationEnabled": true, +  "autoOptimizationFrequency": "P7D", +  "autoOptimizationStartDate": "2019-01-19T00:00:00Z", +  "isInferenceExplainabilityEnabled": true +} +``` ++### How to interpret feature scores? +Enabling inference explainability will add a collection to the JSON response from the Rank API called *inferenceExplanation*. This contains a list of feature names and values that were submitted in the Rank request, along with feature scores learned by Personalizer's underlying model. The feature scores provide you with insight on how influential each feature was in the model choosing the action. ++```JSON ++{ +  "ranking": [ +    { +      "id": "EntertainmentArticle", +      "probability": 0.8 +    }, +    { +      "id": "SportsArticle", +      "probability": 0 +    }, +    { +      "id": "NewsArticle", +      "probability": 0.2 +    } +  ], +  "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10", +  "rewardActionId": "EntertainmentArticle", +  "inferenceExplanation": [ +    { +      "id": "EntertainmentArticle", +      "features": [ +        { +          "name": "user.profileType", +          "score": 3.0 +        }, +        { +          "name": "user.latLong", +          "score": -4.3 +        }, +        { +          "name": "user.profileType^user.latLong", +          "score": 12.1 +        } +      ] +    } +  ] +} +``` ++Recall that Personalizer will either return the _best action_ as determined by the model or an _exploratory action_ chosen by the exploration policy. 
The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](https://docs.microsoft.com/azure/cognitive-services/personalizer/concepts-exploration). ++For the best actions returned by Personalizer, the feature scores can provide general insight where: +* Larger positive scores provide more support for the model choosing the best action. +* Larger negative scores provide more support for the model not choosing the best action. +* Scores close to zero have a small effect on the decision to choose the best action. ++### Important considerations for Inference Explainability +* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements. Future versions of Inference Explainability will mitigate this issue. +* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated. ++ ## Next steps [Reinforcement learning](concepts-reinforcement-learning.md) |
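Building on the Rank response shown in the entry above, a small illustrative Python helper can pull out and sort the feature scores for the returned best action; `rank_response` is assumed to be the parsed JSON body of a Rank call with inference explainability enabled.

```python
def summarize_feature_scores(rank_response: dict) -> None:
    """Print the feature scores behind the best action, most influential first."""
    best_action = rank_response.get("rewardActionId")
    for entry in rank_response.get("inferenceExplanation", []):
        if entry.get("id") != best_action:
            continue
        features = sorted(entry.get("features", []), key=lambda f: f["score"], reverse=True)
        print(f"Best action: {best_action}")
        for feature in features:
            # Large positive scores support the choice; large negative scores argue against it.
            print(f"  {feature['name']}: {feature['score']:+.1f}")
```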
cognitive-services | Concepts Reinforcement Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-reinforcement-learning.md | The current version of Personalizer uses **contextual bandits**, an approach to The _decision memory_, the model that has been trained to capture the best possible decision, given a context, uses a set of linear models. These have repeatedly shown business results and are a proven approach, partially because they can learn from the real world very rapidly without needing multi-pass training, and partially because they can complement supervised learning models and deep neural network models. -The explore/exploit traffic allocation is made randomly following the percentage set for exploration, and the default algorithm for exploration is epsilon-greedy. +The explore / best action traffic allocation is made randomly following the percentage set for exploration, and the default algorithm for exploration is epsilon-greedy. ### History of Contextual Bandits Personalizer currently uses [Vowpal Wabbit](https://github.com/VowpalWabbit/vowp ## Next steps -[Offline evaluation](concepts-offline-evaluation.md) +[Offline evaluation](concepts-offline-evaluation.md) |
cognitive-services | Responsible Guidance Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-guidance-integration.md | When you get ready to integrate and responsibly use AI-powered products or featu - **User Study**: Any consent or disclosure recommendations should be framed in a user study. Evaluate the first and continuous-use experience with a representative sample of the community to validate that the design choices lead to effective disclosure. Conduct user research with 10-20 community members (affected stakeholders) to evaluate their comprehension of the information and to determine if their expectations are met. -- **Transparency**: Consider providing users with information about how the content was personalized. For example, you can give your users a button labeled Why These Suggestions? that shows which top features of the user and actions played a role in producing the Personalizer results. +- **Transparency & Explainability:** Consider enabling and using Personalizer's [inference explainability](https://learn.microsoft.com/azure/cognitive-services/personalizer/concepts-features?branch=main#inference-explainability) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases. - **Adversarial use**: consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place. |
cognitive-services | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/plan-manage-costs.md | You can also [export your cost data](../cost-management-billing/costs/tutorial-e - Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. |
communication-services | Identity Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/identity-model.md | The properties of an access token are: * Expiration. * Scopes. -An access token is always valid for 24 hours. After it expires, the access token is invalidated and can't be used to access any primitive. +An access token is valid for a period of time between 1 and 24 hours. After it expires, the access token is invalidated and can't be used to access any primitive. +To generate a token with a custom validity, specify the desired validity period when generating the token. If no custom validity is specified, the token will be valid for 24 hours. +We recommend using short lifetime tokens for one-off meetings and longer lifetime tokens for agents using the application for longer periods of time. An identity needs a way to request a new access token from a server-side service. The *scope* parameter defines a nonempty set of primitives that can be used. Azure Communication Services supports the following scopes for access tokens. |
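As an illustration of the custom validity period described above, here is a hedged sketch using the Azure Communication Services Identity client library for Python; the package name and the `token_expires_in` option are assumptions based on that description, so check the SDK reference for the exact signature.

```python
from datetime import timedelta

# Assumed package and client names; verify against the Identity SDK documentation.
from azure.communication.identity import CommunicationIdentityClient

client = CommunicationIdentityClient.from_connection_string("<connection-string>")
user = client.create_user()

# Request a short-lived token (1 hour) instead of relying on the 24-hour default.
token_response = client.get_token(user, scopes=["chat"], token_expires_in=timedelta(hours=1))
print(token_response.token[:20], token_response.expires_on)
```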
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | In this article, you will learn which capabilities are supported for Teams exter | | Use typing indicators | ✔️ | | | Read receipt | ❌ | | | File sharing | ❌ |-| | Reply to chat message | ❌ | +| | Reply to specific chat message | ❌ | | | React to chat message | ❌ | | Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ | |
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/overview.md | You can create an identity and access token for Teams external users on Azure po With a valid identity, access token, and Teams meeting URL, you can use [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/story/composites-call-with-chat-jointeamsmeeting--join-teams-meeting) to join Teams meeting without any code. ->[!VIDEO https://www.youtube.com/embed/chMHVHLFcao] +>[!VIDEO https://www.youtube.com/embed/FF1LS516Bjw] ### Single-click deployment The following table show supported use cases for Teams external user with Azure Any licensed Teams users can schedule Teams meetings and share the invite with external users. External users can join the Teams meeting experience via existing Teams desktop, mobile, and web clients without additional charge. External users joining via Azure Communication Services SDKs will pay [standard Azure Communication Services consumption](https://azure.microsoft.com/pricing/details/communication-services/) for audio, video, and chat. There's no additional fee for the interoperability capability itself. - ## Next steps - [Authenticate as Teams external user](../../../quickstarts/identity/access-token-teams-external-users.md) |
communication-services | Subscribe Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md | Copy the following code snippet and paste into source file: **Program.cs** using Azure.Storage.Queues; using Azure.Messaging.EventGrid; -// For more detailed tutorials on storage queues, see: https://docs.microsoft.com/azure/storage/queues/storage-tutorial-queues +// For more detailed tutorials on storage queues, see: https://learn.microsoft.com/azure/storage/queues/storage-tutorial-queues var queueClient = new QueueClient("<Storage Account Connection String>", "router-events"); |
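As a follow-on to the `QueueClient` snippet quoted above, here is a hedged sketch of draining the `router-events` queue and parsing each message with the Event Grid SDK. The base64 decoding reflects the common storage-queue delivery format and is an assumption; skip it if your subscription delivers plain JSON.

```csharp
using System;
using System.Text;
using Azure.Messaging.EventGrid;
using Azure.Storage.Queues;

var queueClient = new QueueClient("<Storage Account Connection String>", "router-events");

// Receive a small batch of queued events and print their Job Router event types.
var messages = (await queueClient.ReceiveMessagesAsync(maxMessages: 10)).Value;
foreach (var message in messages)
{
    // Event Grid typically writes base64-encoded payloads to storage queues (assumption).
    var json = Encoding.UTF8.GetString(Convert.FromBase64String(message.MessageText));
    var routerEvent = EventGridEvent.Parse(BinaryData.FromString(json));

    Console.WriteLine($"{routerEvent.EventType}: {routerEvent.Subject}");

    // Delete each message after it has been handled so it isn't processed again.
    await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
}
```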
communication-services | Learn Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/learn-modules.md | +- [Introduction to Communication Services](/training/modules/intro-azure-communication-services/) +- [Send an SMS message from a C# console application with Azure Communication Services](/training/modules/communication-service-send-sms-console-app/) +- [Create a voice calling web app with Azure Communication Services](/training/modules/communication-services-voice-calling-web-app) |
communication-services | File Sharing Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md | const uploadFileToAzureBlob = async (fileUpload: FileUploadManager) => { const fileExtension = file.name.split('.').pop(); // Following is an example of calling an Azure Function to handle file upload- // The https://docs.microsoft.com/en-us/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload + // The https://learn.microsoft.com/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload // tutorial uses 'username' parameter to specify the storage container name. // Note that the container in the tutorial is private by default. To get default downloads working in // this sample, you need to change the container's access level to Public via Azure Portal. |
connectors | Connect Common Data Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md | For technical information based on the connector's Swagger description, such as * A [Dataverse Data Service environment and database](/power-platform/admin/environments-overview), which is a space where your organization stores, manages, and shares business data in a Dataverse database. For more information, review the following resources: - * [Learn: Create and manage Dataverse environments](/learn/modules/create-manage-environments/) + * [Learn: Create and manage Dataverse environments](/training/modules/create-manage-environments/) * [Power Platform - Environments overview](/power-platform/admin/environments-overview) |
container-apps | Background Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md | QUEUE_CONNECTION_STRING=`az storage account show-connection-string -g $RESOURCE_ # [Azure PowerShell](#tab/azure-powershell) -Here we use Azure CLI as there isn't an equivalent PowerShell cmdlet to get the connection string for the storage account queue. - ```azurepowershell $QueueConnectionString = (Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAcctName).Context.ConnectionString ```-<!-- -- $QueueConnectionString = (az storage account show-connection-string -g $ResourceGroupName --name $StorageAcctName --query connectionString --out json) -replace '"','' > |
container-apps | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md | As you begin to design the network around your container app, refer to [Plan vir :::image type="content" source="media/networking/azure-container-apps-virtual-network.png" alt-text="Diagram of how Azure Container Apps environments use an existing V NET, or you can provide your own."::: <!---https://docs.microsoft.com/azure/azure-functions/functions-networking-options +https://learn.microsoft.com/azure/azure-functions/functions-networking-options https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-container-apps-virtual-network-integration/ba-p/3096932 --> When you deploy an internal or an external environment into your own network, a ## Next steps - [Deploy with an external environment](vnet-custom.md)-- [Deploy with an internal environment](vnet-custom-internal.md)+- [Deploy with an internal environment](vnet-custom-internal.md) |
container-apps | Revisions Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md | az containerapp revision show \ # [PowerShell](#tab/powershell) ```azurecli+az containerapp revision show ` --name <APPLICATION_NAME> ` --revision <REVISION_NAME> ` --resource-group <RESOURCE_GROUP_NAME> az containerapp revision activate \ # [PowerShell](#tab/powershell) -```poweshell +```azurecli az containerapp revision activate ` --revision <REVISION_NAME> ` --resource-group <RESOURCE_GROUP_NAME> |
cosmos-db | Account Databases Containers Items | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/account-databases-containers-items.md | Azure Cosmos items support the following operations. You can use any of the Azur Learn how to manage your Azure Cosmos account and other concepts: -* To learn more, see the [Azure Cosmos DB SQL API](/learn/modules/intro-to-azure-cosmos-db-core-api/) training module. +* To learn more, see the [Azure Cosmos DB SQL API](/training/modules/intro-to-azure-cosmos-db-core-api/) training module. * [How-to manage your Azure Cosmos DB account](how-to-manage-database-account.md) * [Global distribution](distribute-data-globally.md) * [Consistency levels](consistency-levels.md) |
cosmos-db | Analytical Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md | To learn more, see the following docs: * [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) -* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/) +* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/training/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/) * [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) |
cosmos-db | Choose Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md | Based on your workload, you must choose the API that fits your requirement. The ## Core(SQL) API -This API stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on SQL API accounts. Azure Cosmos DB SQL API accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB SQL API](/learn/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](sql-query-getting-started.md) article. +This API stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on SQL API accounts. Azure Cosmos DB SQL API accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB SQL API](/training/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](sql-query-getting-started.md) article. If you are migrating from other databases such as Oracle, DynamoDB, HBase etc. and if you want to use the modernized technologies to build your apps, SQL API is the recommended option. SQL API supports analytics and offers performance isolation between operational and analytical workloads. |
cosmos-db | Configure Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md | Azure Synapse Link is available for Azure Cosmos DB SQL API or for Azure Cosmos * [Query the analytical store using Azure Synapse serverless SQL pool](#query-analytical-store-sql-on-demand) * [Use Azure Synapse serverless SQL pool to analyze and visualize data in Power BI](#analyze-with-powerbi) -You can also checkout the training module on how to [configure Azure Synapse Link for Azure Cosmos DB.](/learn/modules/configure-azure-synapse-link-with-azure-cosmos-db/) +You can also checkout the training module on how to [configure Azure Synapse Link for Azure Cosmos DB.](/training/modules/configure-azure-synapse-link-with-azure-cosmos-db/) ## <a id="enable-synapse-link"></a>Enable Azure Synapse Link for Azure Cosmos DB accounts You can find samples to get started with Azure Synapse Link on [GitHub](https:// To learn more, see the following docs: -* Checkout the training module on how to [configure Azure Synapse Link for Azure Cosmos DB.](/learn/modules/configure-azure-synapse-link-with-azure-cosmos-db/) +* Checkout the training module on how to [configure Azure Synapse Link for Azure Cosmos DB.](/training/modules/configure-azure-synapse-link-with-azure-cosmos-db/) * [Azure Cosmos DB analytical store overview.](analytical-store-introduction.md) |
cosmos-db | How To Provision Throughput Mongodb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-provision-throughput-mongodb.md | mongoClient = new MongoClient(mongoClientSettings); mongoDatabase = mongoClient.GetDatabase("testdb"); // Change the collection name, throughput value then update via MongoDB extension commands-// https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-custom-commands#update-collection +// https://learn.microsoft.com/azure/cosmos-db/mongodb-custom-commands#update-collection var result = mongoDatabase.RunCommand<BsonDocument>(@"{customAction: ""UpdateCollection"", collection: ""testcollection"", offerThroughput: 400}"); ``` See the following articles to learn about throughput provisioning in Azure Cosmo * [Request units and throughput in Azure Cosmos DB](../request-units.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) - * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md) + * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md) |
cosmos-db | Partitioning Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md | Some things to consider when selecting the *item ID* as the partition key includ * Learn about [global distribution in Azure Cosmos DB](distribute-data-globally.md). * Learn how to [provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md). * Learn how to [provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).-* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/learn/modules/model-partition-data-azure-cosmos-db/) +* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
cosmos-db | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/plan-manage-costs.md | See the following articles to learn more on how pricing works in Azure Cosmos DB * Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). * Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). * Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-* Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. +* Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. |
cosmos-db | Change Feed Processor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-processor.md | You can share the lease container across multiple [deployment units](#deployment The change feed processor can be hosted in any platform that supports long running processes or tasks: -* A continuous running [Azure WebJob](/learn/modules/run-web-app-background-task-with-webjobs/). +* A continuous running [Azure WebJob](/training/modules/run-web-app-background-task-with-webjobs/). * A process in an [Azure Virtual Machine](/azure/architecture/best-practices/background-jobs#azure-virtual-machines). * A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service). * A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions). |
cosmos-db | Create Sql Api Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md | In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro Watch this video for a complete walkthrough of the content in this article. -> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player] +> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player] ## Prerequisites |
cosmos-db | How To Dotnet Query Items | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-query-items.md | The [Container.GetItemLinqQueryable<>](/dotnet/api/microsoft.azure.cosmos.contai Now that you've queried multiple items, try one of our end-to-end tutorials with the SQL API. > [!div class="nextstepaction"]-> [Build an app that queries and adds data to Azure Cosmos DB SQL API](/learn/modules/build-dotnet-app-cosmos-db-sql-api/) +> [Build an app that queries and adds data to Azure Cosmos DB SQL API](/training/modules/build-dotnet-app-cosmos-db-sql-api/) |
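To illustrate the `Container.GetItemLinqQueryable<>` pattern referenced in that row, here is a minimal sketch. It assumes an existing `Container` and a hypothetical `Product` item type; `ToFeedIterator` is the extension method from the `Microsoft.Azure.Cosmos.Linq` namespace.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Linq;

// Hypothetical item shape; property names match the JSON stored in the container.
public record Product(string id, string category, decimal price);

public static class ProductQueries
{
    // Translates the LINQ expression into a Cosmos DB SQL query and streams the results page by page.
    public static async Task PrintByCategoryAsync(Container container, string category)
    {
        FeedIterator<Product> iterator = container
            .GetItemLinqQueryable<Product>()
            .Where(p => p.category == category)
            .ToFeedIterator();

        while (iterator.HasMoreResults)
        {
            foreach (Product product in await iterator.ReadNextAsync())
            {
                Console.WriteLine($"{product.id}: {product.price}");
            }
        }
    }
}
```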
cosmos-db | Kafka Connector Sink | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-sink.md | curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> ## Confirm data written to Cosmos DB -Sign into the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account. Check that the three records from the "hotels" topic are created in your account. +Sign into the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. Check that the three records from the "hotels" topic are created in your account. ## Cleanup |
cosmos-db | Kafka Connector Source | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-source.md | curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file> ## Insert document into Azure Cosmos DB -1. Sign into the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account. +1. Sign into the [Azure portal](https://portal.azure.com/learn.learn.microsoft.com) and navigate to your Azure Cosmos DB account. 1. Open the **Data Explore** tab and select **Databases** 1. Open the "kafkaconnect" database and "kafka" container you created earlier. 1. To create a new JSON document, in the SQL API pane, expand "kafka" container, select **Items**, then select **New Item** in the toolbar. The Azure Cosmos DB source connector converts JSON document to schema and suppor ## Next steps -* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md) +* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md) |
cosmos-db | Migrate Relational To Cosmos Db Sql Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-relational-to-cosmos-db-sql-api.md | def writeOrder(orderid): df = spark.read.json(sc.parallelize([orderjsondata])) #write the dataframe (this will be a single order record with merged many-to-one order details) to cosmos db using spark the connector- #https://docs.microsoft.com/azure/cosmos-db/spark-connector + #https://learn.microsoft.com/azure/cosmos-db/spark-connector df.write.format("com.microsoft.azure.cosmosdb.spark").mode("append").options(**writeConfig).save() ``` |
cosmos-db | Modeling Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md | Just as there's no single way to represent a piece of data on a screen, there's * To learn how to model and partition data on Azure Cosmos DB using a real-world example, refer to [ Data Modeling and Partitioning - a Real-World Example](how-to-model-partition-example.md). -* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/learn/modules/model-partition-data-azure-cosmos-db/) +* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/) * Configure and use [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md). |
cosmos-db | Troubleshoot Not Found | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-not-found.md | string containerRid = selfLinkSegments[3]; Container containerByRid = this.cosmosClient.GetContainer(databaseRid, containerRid); // Invalid characters are listed here.-// https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.resource.id#remarks +// https://learn.microsoft.com/dotnet/api/microsoft.azure.documents.resource.id#remarks FeedIterator<JObject> invalidItemsIterator = this.Container.GetItemQueryIterator<JObject>( @"select * from t where CONTAINS(t.id, ""/"") or CONTAINS(t.id, ""#"") or CONTAINS(t.id, ""?"") or CONTAINS(t.id, ""\\"") "); while (invalidItemsIterator.HasMoreResults) |
cosmos-db | Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md | To learn more, see the following docs: * [Azure Cosmos DB analytical store overview](analytical-store-introduction.md) -* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/) +* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/training/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/) * [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) |
cosmos-db | How To Use Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-php.md | catch(ServiceException $e){ $error_message = $e->getMessage(); // Handle exception based on error codes and messages. // Error codes and messages can be found here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes } ``` try{ catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); } try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. 
// Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; try { catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes + // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes $code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />"; |
cost-management-billing | Assign Roles Azure Service Principals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md | tags: billing Previously updated : 07/22/2022 Last updated : 09/20/2022 Later in this article, you'll give permission to the Azure AD app to act by usin | Role | Actions allowed | Role definition ID | | | | |-| EnrollmentReader | Can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e | +| EnrollmentReader | Enrollment readers can view data at the enrollment, department, and account scopes. The data contains charges for all of the subscriptions under the scopes, including across tenants. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e | | EA purchaser | Purchase reservation orders and view reservation transactions. It has all the permissions of EnrollmentReader, which will in turn have all the permissions of DepartmentReader. It can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | da6647fb-7651-49ee-be91-c43c4877f0c4 | | DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a | | SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 | |
cost-management-billing | Change Credit Card | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md | tags: billing Previously updated : 03/11/2022 Last updated : 09/20/2022 In the Azure portal, you can change your default payment method to a new credit - For a Microsoft Online Service Program (pay-as-you-go) account, you must be an [Account Administrator](add-change-subscription-administrator.md#whoisaa). - For a Microsoft Customer Agreement, you must have the correct [MCA permissions](understand-mca-roles.md) to make these changes. -If you want to a delete credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md). +If you want to a delete credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md). The supported payment methods for Microsoft Azure are credit cards, debit cards, and check wire transfer. To get approved to pay by check wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md). >[!NOTE]+> Azure doesn't support virtual or prepaid cards. > Credit cards are accepted and debit cards are accepted by most countries or regions. > - Hong Kong and Brazil only support credit cards. > - India supports debit and credit cards through Visa and Mastercard. |
cost-management-billing | Prepay Hana Large Instances Reserved Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity.md | armclient post /providers/Microsoft.Capacity/calculatePrice?api-version=2019-04- 'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111', 'term': 'P1Y', 'quantity': '1',- 'billingplan': 'Monthly' + 'billingplan': 'Monthly', 'displayName': 'testreservation_S224om', 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111'], 'appliedScopeType': 'Single', The following example response resembles what you get returned. Note the value y ### Make your purchase -Make your purchase using the returned `quoteId` and the `reservationOrderId` that you got from the preceding [Get the reservation order and price](#get-the-reservation-order-and-price) section. +Make your purchase using the returned `reservationOrderId` that you got from the preceding [Get the reservation order and price](#get-the-reservation-order-and-price) section. Here's an example request: armclient put /providers/Microsoft.Capacity/reservationOrders/22222222-2222-2222 'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111', 'term': 'P1Y', 'quantity': '1',- 'billingplan': 'Monthly' + 'billingplan': 'Monthly', 'displayName': ' testreservation_S224om', 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123'], 'appliedScopeType': 'Single', 'instanceFlexibility': 'NotSupported',- 'renew': true, - 'quoteId': 'd0fd3a890795' + 'renew': true } }" ``` location. You can also go to https://aka.ms/corequotaincrease to learn about quo ## Next steps - Learn about [How to call Azure REST APIs with Postman and cURL](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman).-- See [SKUs for SAP HANA on Azure (Large Instances)](../../virtual-machines/workloads/sap/hana-available-skus.md) for the available SKU list and regions.+- See [SKUs for SAP HANA on Azure (Large Instances)](../../virtual-machines/workloads/sap/hana-available-skus.md) for the available SKU list and regions. |
cost-management-billing | Save Compute Costs Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md | For more information, see [Self-service exchanges and refunds for Azure Reservat - **Azure Cache for Redis** - Only the compute costs are included with a reservation. A reservation doesn't cover networking or storage charges associated with the Redis cache instances. - **Azure Dedicated Host** - Only the compute costs are included with the Dedicated host. - **Azure Disk Storage reservations** - A reservation only covers premium SSDs of P30 size or greater. It doesn't cover any other disk types or sizes smaller than P30.+- **Azure Backup Storage reserved capacity** - A capacity reservation lowers storage costs of backup data in a Recovery Services Vault. Software plans: |
data-factory | Ci Cd Github Troubleshoot Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md | CI/CD release pipeline failing with the following error: 2020-07-06T09:50:50.8771655Z ##[error]Details: 2020-07-06T09:50:50.8772837Z ##[error]DataFactoryPropertyUpdateNotSupported: Updating property type is not supported. 2020-07-06T09:50:50.8774148Z ##[error]DataFactoryPropertyUpdateNotSupported: Updating property type is not supported.-2020-07-06T09:50:50.8775530Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment#troubleshooting +2020-07-06T09:50:50.8775530Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment#troubleshooting 2020-07-06T09:50:50.8776801Z ##[error]Task failed while creating or updating the template deployment. ``` If you are using old default parameterization template, new way to include globa Default parameterization template should include all values from global parameter list. #### Resolution-Use updated [default parameterization template.](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there. +* Use updated [default parameterization template.](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there. +* Update the template parameter names in CI/CD pipeline if you are already overriding the template parameters (for global parameters). ### Error code: InvalidTemplate |
data-factory | Connector Amazon Simple Storage Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-simple-storage-service.md | Format specific settings are located in the documentation for that format. For m In source transformation, you can read from a container, folder, or individual file in Amazon S3. Use the **Source options** tab to manage how the files are read. **Wildcard paths:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern. Wildcard examples: First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you want to read. Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels. |
data-factory | Connector Azure Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md | Format specific settings are located in the documentation for that format. For m In source transformation, you can read from a container, folder, or individual file in Azure Blob Storage. Use the **Source options** tab to manage how the files are read. **Wildcard paths:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern. Wildcard examples: First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you want to read. Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels. |
data-factory | Connector Azure Data Lake Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md | Format specific settings are located in the documentation for that format. For m In the source transformation, you can read from a container, folder, or individual file in Azure Data Lake Storage Gen2. The **Source options** tab lets you manage how the files get read. **Wildcard path:** Using a wildcard pattern will instruct ADF to loop through each matching folder and file in a single Source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the + sign that appears when hovering over your existing wildcard pattern. Wildcard examples: First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you wish to read. Use the Partition Root Path setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that ADF will add the resolved partitions found in each of your folder levels. |
data-factory | Connector Azure Data Lake Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md | Format-specific settings are located in the documentation for that format. For m In the source transformation, you can read from a container, folder, or individual file in Azure Data Lake Storage Gen1. The **Source options** tab lets you manage how the files get read. **Wildcard path:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single Source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the + sign that appears when hovering over your existing wildcard pattern. Wildcard examples: First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you wish to read. Use the Partition Root Path setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels. |
data-factory | Connector Azure Sql Data Warehouse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md | By default, a data flow run will fail on the first error it gets. You can choose **Report success on error:** If enabled, the data flow will be marked as a success even if error rows are found. ## Lookup activity properties |
data-factory | Connector Azure Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md | |
data-factory | Connector Sap Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md | + + Title: Transform data from an SAP ODP source with the SAP CDC connector in Azure Data Factory or Azure Synapse Analytics ++description: Learn how to transform data from an SAP ODP source to supported sink data stores by using mapping data flows in Azure Data Factory or Azure Synapse Analytics. ++++++ Last updated : 09/05/2022+++# Transform data from an SAP ODP source using the SAP CDC connector in Azure Data Factory or Azure Synapse Analytics +++This article outlines how to use mapping data flow to transform data from an SAP ODP source using the SAP CDC connector. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md). For an introduction to transforming data with Azure Data Factory and Azure Synapse analytics, read [mapping data flow](concepts-data-flow-overview.md). ++>[!TIP] +>To learn the overall support on SAP data integration scenario, see [SAP data integration using Azure Data Factory whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf) with detailed introduction on each SAP connector, comparsion and guidance. ++## Supported capabilities ++This SAP CDC connector is supported for the following capabilities: ++| Supported capabilities|IR | +|| --| +|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|②| ++<small>*① Azure integration runtime ② Self-hosted integration runtime*</small> ++This SAP CDC connector leverages the SAP ODP framework to extract data from SAP source systems. For an introduction to the architecture of the solution, read [Introduction and architecture to SAP change data capture (CDC)](sap-change-data-capture-introduction-architecture.md) in our [SAP knowledge center](industry-sap-overview.md). ++The SAP ODP framework is contained in most SAP NetWeaver based systems, including SAP ECC, SAP S/4HANA, SAP BW, SAP BW/4HANA, SAP LT Replication Server (SLT), except very old ones. For prerequisites and minimum required releases, see [Prerequisites and configuration](sap-change-data-capture-prerequisites-configuration.md#sap-system-requirements). ++The SAP CDC connector supports basic authentication or Secure Network Communications (SNC), if SNC is configured. ++## Prerequisites ++To use this SAP CDC connector, you need to: ++- Set up a self-hosted integration runtime (version 3.17 or later). For more information, see [Create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md). ++- Download the 64-bit [SAP Connector for Microsoft .NET 3.0](https://support.sap.com/en/product/connectors/msnet.html) from SAP's website, and install it on the self-hosted integration runtime machine. During installation, make sure you select the **Install Assemblies to GAC** option in the **Optional setup steps** window. 
++ :::image type="content" source="./media/connector-sap-business-warehouse-open-hub/install-sap-dotnet-connector.png" alt-text="Screenshot showing installation of SAP Connector for .NET."::: ++- The SAP user who's being used in the SAP table connector must have the permissions described in [User Configuration](sap-change-data-capture-prerequisites-configuration.md#set-up-the-sap-user): +++## Get started +++## Create a linked service for the SAP CDC connector using UI ++Follow the steps described in [Prepare the SAP CDC linked service](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-a-linked-service) to create a linked service for the SAP CDC connector in the Azure portal UI. ++## Dataset properties ++To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset) ++## Transform data with the SAP CDC connector ++SAP CDC datasets can be used as source in mapping data flow. Since the raw SAP ODP change feed is difficult to interpret and to correctly update to a sink, mapping data flow takes care of this by evaluating technical attributes provided by the ODP framework (e.g., ODQ_CHANGEMODE) automatically. This allows users to concentrate on the required transformation logic without having to bother with the internals of the SAP ODP change feed, the right order of changes, etc. ++### Mapping data flow properties ++To create a mapping data flow using the SAP CDC connector as a source, complete the following steps: ++1. In ADF Studio, go to the **Data flows** section of the **Author** hub, select the **…** button to drop down the **Data flow actions** menu, and select the **New data flow** item. Turn on debug mode by using the **Data flow debug** button in the top bar of data flow canvas. ++ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-data-flow-debug.png" alt-text="Screenshot of the data flow debug button in mapping data flow."::: ++1. In the mapping data flow editor, select **Add Source**. ++ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-add-source.png" alt-text="Screenshot of add source in mapping data flow."::: ++1. On the tab **Source settings** select a prepared SAP CDC dataset or select the **New** button to create a new one. Alternatively, you can also select **Inline** in the **Source type** property and continue without defining an explicit dataset. ++ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-select-dataset.png" alt-text="Screenshot of the select dataset option in source settings of mapping data flow source."::: ++1. On the tab **Source options** select the option **Full on every run** if you want to load full snapshots on every execution of your mapping data flow, or **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system. In this case, the first run of your pipeline will do a delta initialization, which means it will return a current full data snapshot and create an ODP delta subscription in the source system so that with subsequent runs, the SAP source system will return incremental changes since the previous run only. In case of incremental loads it is required to specify the keys of the ODP source object in the **Key columns** property. 
++ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-run-mode.png" alt-text="Screenshot of the run mode property in source options of mapping data flow source."::: ++ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-key-columns.png" alt-text="Screenshot of the key columns selection in source options of mapping data flow source."::: ++1. For details on the tabs **Projection**, **Optimize** and **Inspect**, please follow [mapping data flow](concepts-data-flow-overview.md). |
data-factory | Connector Troubleshoot Azure Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-blob-storage.md | This article provides suggestions to troubleshoot common problems with the Azure ## Error code: FIPSModeIsNotSupport -- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://docs.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.`+- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://learn.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.` - **Cause**: Then FIPS policy is enabled on the VM where the self-hosted integration runtime was installed. |
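To show the shape of the fix that error message points to, here is a sketch of the relevant fragment of `diawp.exe.config` on the self-hosted integration runtime machine, assuming the standard .NET Framework runtime configuration schema; verify against your integration runtime version before editing.

```xml
<configuration>
  <runtime>
    <!-- Lets the Blob connector's MD5 usage work even when the OS-level FIPS policy is enabled
         (sketch only; not an authoritative copy of the integration runtime's config file). -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
```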
data-factory | Create Azure Ssis Integration Runtime Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-powershell.md | $ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell| $VnetId = "[your virtual network resource ID or leave it empty]" # REQUIRED if you use Azure SQL Database server configured with a private endpoint/IP firewall rule/virtual network service endpoint or Azure SQL Managed Instance that joins a virtual network to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR. We recommend Azure Resource Manager virtual network, because classic virtual network will be deprecated soon. $SubnetName = "[your subnet name or leave it empty]" # WARNING: Use the same subnet as the one used for Azure SQL Database server configured with a virtual network service endpoint or a different subnet from the one used for Azure SQL Managed Instance that joins a virtual network $SubnetId = $VnetId + '/subnets/' + $SubnetName -# Virtual network injection method: Standard or Express. For comparison, see https://docs.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration. +# Virtual network injection method: Standard or Express. For comparison, see https://learn.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration. $VnetInjectionMethod = "Standard" # Standard by default, whereas Express lets you use the express virtual network injection method # Public IP address info: OPTIONAL to provide two standard static public IP addresses with DNS name under the same subscription and in the same region as your virtual network $FirstPublicIP = "[your first public IP address resource ID or leave it empty]" $SSISDBServerEndpoint = "[your Azure SQL Database server name.database.windows.n # Authentication info: SQL or Azure AD $SSISDBServerAdminUserName = "[your server admin username for SQL authentication or leave it empty for Azure AD authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication or leave it empty for Azure AD authentication]"-# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server. +# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server. $SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for Azure SQL Database server or leave it empty for managed instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access $ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell| $VnetId = "[your virtual network resource ID or leave it empty]" # REQUIRED if you use Azure SQL Database server configured with a private endpoint/IP firewall rule/virtual network service endpoint or Azure SQL Managed Instance that joins a virtual network to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR. We recommend Azure Resource Manager virtual network, because classic virtual network will be deprecated soon. 
$SubnetName = "[your subnet name or leave it empty]" # WARNING: Use the same subnet as the one used for Azure SQL Database server configured with a virtual network service endpoint or a different subnet from the one used for Azure SQL Managed Instance that joins a virtual network $SubnetId = $VnetId + '/subnets/' + $SubnetName -# Virtual network injection method: Standard or Express. For comparison, see https://docs.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration. +# Virtual network injection method: Standard or Express. For comparison, see https://learn.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration. $VnetInjectionMethod = "Standard" # Standard by default, whereas Express lets you use the express virtual network injection method # Public IP address info: OPTIONAL to provide two standard static public IP addresses with DNS name under the same subscription and in the same region as your virtual network $FirstPublicIP = "[your first public IP address resource ID or leave it empty]" $SSISDBServerEndpoint = "[your Azure SQL Database server name.database.windows.n # Authentication info: SQL or Azure AD $SSISDBServerAdminUserName = "[your server admin username for SQL authentication or leave it empty for Azure AD authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication or leave it empty for Azure AD authentication]"-# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server. +# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server. $SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for Azure SQL Database server or leave it empty for managed instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access |
data-factory | Create Shared Self Hosted Integration Runtime Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md | You can reuse an existing self-hosted integration runtime infrastructure that yo To see an introduction and demonstration of this feature, watch the following 12-minute video: -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player] ### Terminology |
data-factory | Data Flow Pivot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-pivot.md | In the section labeled **Value**, you can enter specific row values to be pivote For each unique pivot key value that becomes a column, generate an aggregated row value for each group. You can create multiple columns per pivot key. Each pivot column must contain at least one [aggregate function](data-flow-aggregate-functions.md). -**Column name pattern:** Select how to format the column name of each pivot column. The outputted column name will be a combination of the pivot key value, column prefix and optional prefix, suffice, middle characters. +**Column name pattern:** Select how to format the column name of each pivot column. The outputted column name will be a combination of the pivot key value, column prefix and optional prefix, suffix, middle characters. **Column arrangement:** If you generate more than one pivot column per pivot key, choose how you want the columns to be ordered. |
data-factory | Iterative Development Debugging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/iterative-development-debugging.md | Azure Data Factory and Synapse Analytics supports iterative development and debu For an eight-minute introduction and demonstration of this feature, watch the following video: -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player] ## Debugging a pipeline |
data-factory | Join Azure Ssis Integration Runtime Virtual Network Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-powershell.md | $AzureSSISName = "[your Azure-SSIS IR name]" $VnetId = "[your virtual network resource ID or leave it empty]" # REQUIRED if you use Azure SQL Database server configured with a private endpoint/IP firewall rule/virtual network service endpoint or Azure SQL Managed Instance that joins a virtual network to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR. We recommend Azure Resource Manager virtual network, because classic virtual network will be deprecated soon. $SubnetName = "[your subnet name or leave it empty]" # WARNING: Use the same subnet as the one used for Azure SQL Database server configured with a virtual network service endpoint or a different subnet from the one used for Azure SQL Managed Instance that joins a virtual network $SubnetId = $VnetId + '/subnets/' + $SubnetName -# Virtual network injection method: Standard or Express. For comparison, see https://docs.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration. +# Virtual network injection method: Standard or Express. For comparison, see https://learn.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration. $VnetInjectionMethod = "Standard" # Standard by default, whereas Express lets you use the express virtual network injection method # Public IP address info: OPTIONAL to provide two standard static public IP addresses with DNS name under the same subscription and in the same region as your virtual network $FirstPublicIP = "[your first public IP address resource ID or leave it empty]" |
data-factory | Monitor Visually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-visually.md | You can raise alerts on supported metrics in Data Factory. Select **Monitor** > For a seven-minute introduction and demonstration of this feature, watch the following video: -> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player] +> [!VIDEO https://learn.microsoft.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player] ### Create alerts |
data-factory | Parameterize Linked Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md | You can use the UI in the Azure portal or a programming interface to parameteriz For a seven-minute introduction and demonstration of this feature, watch the following video: -> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player] +> [!VIDEO https://learn.microsoft.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player] ## Supported linked service types |
data-factory | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md | You can also [export your cost data](../cost-management-billing/costs/tutorial-e - Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. - [Azure Data Factory pricing page](https://azure.microsoft.com/pricing/details/data-factory/ssis/) - [Understanding Azure Data Factory through examples](./pricing-concepts.md) - [Azure Data Factory pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=data-factory) |
data-factory | Quickstart Create Data Factory Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-portal.md | This quickstart describes how to use the Azure Data Factory UI to create and mon ### Video Watching this video helps you understand the Data Factory UI: ->[!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player] +>[!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player] ## Create a data factory |
data-factory | Sap Change Data Capture Data Partitioning Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-partitioning-template.md | - Title: Auto-generate a pipeline by using the SAP data partitioning template- -description: Learn how to use the SAP data partitioning template for SAP change data capture (CDC) (preview) extraction in Azure Data Factory. ---- Previously updated : 06/01/2022----# Auto-generate a pipeline by using the SAP data partitioning template ---Learn how to use the SAP data partitioning template to auto-generate a pipeline as part of your SAP change data capture (CDC) solution (preview). Then, use the pipeline in Azure Data Factory to partition SAP CDC extracted data. --## Create a data partitioning pipeline from a template --To auto-generate an Azure Data Factory pipeline by using the SAP data partitioning template: --1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **Pipeline from template**. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-pipeline-from-template.png" alt-text="Screenshot of the Azure Data Factory resources tab, with Pipeline from template highlighted."::: --1. Select the **Partition SAP data to extract and load into Azure Data Lake Store Gen2 in parallel** template, and then select **Continue**. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-selection.png" alt-text="Screenshot of the template gallery, with the SAP data partitioning template highlighted."::: --1. Create new or use existing [linked services](sap-change-data-capture-prepare-linked-service-source-dataset.md) for SAP ODP (preview), Azure Data Lake Storage Gen2, and Azure Synapse Analytics. Use the linked services as inputs in the SAP data partitioning template. -- Under **Inputs**, for the SAP ODP linked service, in **Connect via integration runtime**, select your self-hosted integration runtime. For the Data Lake Storage Gen2 linked service, in **Connect via integration runtime**, select **AutoResolveIntegrationRuntime**. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-configuration.png" alt-text="Screenshot of the SAP data partitioning template configuration page, with the Inputs section highlighted."::: --1. Select **Use this template** to auto-generate an SAP data partitioning pipeline that can run multiple Data Factory copy activities to extract multiple partitions in parallel. -- Data Factory copy activities run on a self-hosted integration runtime to concurrently extract full raw data from your SAP system and load it into Data Lake Storage Gen2 as CSV files. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>\<your pipeline run timestamp\>* folder path. Be sure that **Extraction mode** for the Data Factory copy activity is set to **Full**. -- To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Azure Synapse Analytics instance in the same region. --1. Assign your SAP data extraction context, data source object names, and an array of partitions. Define each element as an array of row selection conditions that serve as runtime parameter values for the SAP data partitioning pipeline. -- For the `selectionRangeList` parameter, enter your array of partitions. 
Define each partition as an array of row selection conditions. For example, here's an array of three partitions, where the first partition includes only rows where the value in the **CUSTOMERID** column is between **1** and **1000000** (the first million customers), the second partition includes only rows where the value in the **CUSTOMERID** column is between **1000001** and **2000000** (the second million customers), and the third partition includes the rest of the customers: -- `[[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1","high":"1000000"}],[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1000001","high":"2000000"}],[{"fieldName":"CUSTOMERID","sign":"E","option":"BT","low":"1","high":"2000000"}]]` -- The three partitions are extracted by using three Data Factory copy activities that run in parallel. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-partition-extraction-configuration.png" alt-text="Screenshot of the pipeline configuration for the SAP data partitioning template with the parameters section highlighted."::: --1. Select **Save all** and run the SAP data partitioning pipeline. --## Next steps --[Auto-generate a pipeline by using the SAP data replication template](sap-change-data-capture-data-replication-template.md) |
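For reference, here is a hedged sketch of how the partition array above could be supplied at run time with PowerShell instead of the Studio UI. The pipeline, resource group, and data factory names are placeholders, and the template-generated pipeline also expects the extraction context and object-name parameters, which are omitted here for brevity.

```powershell
# Sketch only: pass the three-partition selectionRangeList to the template-generated pipeline.
# "SapDataPartitioningPipeline" is a placeholder; use the name of your generated pipeline.
@'
{
  "selectionRangeList": [
    [ { "fieldName": "CUSTOMERID", "sign": "I", "option": "BT", "low": "1", "high": "1000000" } ],
    [ { "fieldName": "CUSTOMERID", "sign": "I", "option": "BT", "low": "1000001", "high": "2000000" } ],
    [ { "fieldName": "CUSTOMERID", "sign": "E", "option": "BT", "low": "1", "high": "2000000" } ]
  ]
}
'@ | Set-Content -Path .\partition-parameters.json

Invoke-AzDataFactoryV2Pipeline -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName `
    -PipelineName "SapDataPartitioningPipeline" `
    -ParameterFile .\partition-parameters.json
```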
data-factory | Sap Change Data Capture Data Replication Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-replication-template.md | - Title: Auto-generate a pipeline by using the SAP data replication template- -description: Learn how to use the SAP data replication template for SAP change data capture (CDC) (preview) extraction in Azure Data Factory. ---- Previously updated : 06/01/2022----# Auto-generate a pipeline by using the SAP data replication template ---Learn how to use the SAP data replication template to auto-generate a pipeline as part of your SAP change data capture (CDC) solution (preview). Then, use the pipeline in Azure Data Factory for SAP CDC extraction in your datasets. --## Create a data replication pipeline from a template --To auto-generate an Azure Data Factory pipeline by using the SAP data replication template: --1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **Pipeline from template**. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-pipeline.png" alt-text="Screenshot that shows creating a new pipeline in the Author hub."::: --1. Select the **Replicate SAP data to Azure Synapse Analytics and persist raw data in Azure Data Lake Storage Gen2** template, and then select **Continue**. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template.png" alt-text="Screenshot of the template gallery, with the SAP data replication template highlighted."::: --1. Create new or use existing [linked services](sap-change-data-capture-prepare-linked-service-source-dataset.md) for SAP ODP (preview), Azure Data Lake Storage Gen2, and Azure Synapse Analytics. Use the linked services as inputs in the SAP data replication template. -- Under **Inputs**, for the SAP ODP linked service, in **Connect via integration runtime**, select your self-hosted integration runtime. For the Data Lake Storage Gen2 and Azure Synapse Analytics linked services, in **Connect via integration runtime**, select **AutoResolveIntegrationRuntime**. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template-configuration.png" alt-text="Screenshot of the configuration page for the SAP data replication template."::: --1. Select **Use this template** to auto-generate an SAP data replication pipeline that contains Azure Data Factory copy activities and data flow activities. -- The Data Factory copy activity runs on the self-hosted integration runtime to extract raw data (full and deltas) from the SAP system. The copy activity loads the raw data into Data Lake Storage Gen2 as a persisted CSV file. Historical changes are archived and preserved. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>\<your pipeline run timestamp\>* folder path. Be sure that **Extraction mode** for the Data Factory copy activity is set to **Delta**. The **Subscriber process** property of copy activity is parameterized. -- The Data Factory data flow activity runs on the Azure integration runtime to transform the raw data and merge all changes into Azure Synapse Analytics. The process replicates the SAP data. 
-- To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Azure Synapse Analytics instance in the same region. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-architecture.png" alt-text="Shows a diagram of the architecture of the SAP data replication scenario."::: --1. Assign your SAP data extraction context, data source object, key column names, subscriber process names, and Synapse SQL schema and table names as runtime parameter values for the SAP data replication pipeline. -- For the `keyColumns` parameter, enter your key column names as an array of strings, like `[“CUSTOMERID”]/[“keyColumn1”, “keyColumn2”, “keyColumn3”, … ]`. Include up to 10 key column names. The Data Factory data flow activity uses key columns in raw SAP data to identify changed rows. A changed row is a row that is created, deleted, or changed. -- For the `subscriberProcess` parameter, enter a unique name for **Subscriber process** in the Data Factory copy activity. For example, you might name it `<your pipeline name>\<your copy activity name>`. You can rename it to start a new Operational Delta Queue subscription in SAP systems. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-pipeline-parameters.png" alt-text="Screenshot of the SAP data replication pipeline with the parameters section highlighted."::: --1. Select **Save all** and run the SAP data replication pipeline. --## Create a data delta replication pipeline from a template --If you want to replicate SAP data to Data Lake Storage Gen2 in delta format, complete the steps that are detailed in the preceding section, but instead use the **Replicate SAP data to Azure Data Lake Store Gen2 in Delta format and persist raw data in CSV format** template. --Like in the data replication template, in a data delta pipeline, the Data Factory copy activity runs on the self-hosted integration runtime to extract raw data (full and deltas) from the SAP system. The copy activity loads the raw data into Data Lake Storage Gen2 as a persisted CSV file. Historical changes are archived and preserved. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>\<your pipeline run timestamp\>* folder path. The **Extraction mode** property of the copy activity is set to **Delta**. The **Subscriber process** property of copy activity is parameterized. --The Data Factory data flow activity runs on the Azure integration runtime to transform the raw data and merge all changes into Data Lake Storage Gen2 as an open source Delta Lake or Lakehouse table. The process replicates the SAP data. --The table is stored in the *saptimetravel* container in the *\<your SAP table or object name\>* folder that has the *\*delta\*log* subfolder and Parquet files. You can [query the table by using an Azure Synapse Analytics serverless SQL pool](../synapse-analytics/sql/query-delta-lake-format.md). You also can use the Delta Lake Time Travel feature with an Azure Synapse Analytics serverless Apache Spark pool. For more information, see [Create a serverless Apache Spark pool in Azure Synapse Analytics by using web tools](../synapse-analytics/quickstart-apache-spark-notebook.md) and [Read older versions of data by using Time Travel](../synapse-analytics/spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel). 
--To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Delta Lake or Lakehouse instances in the same region. --## Next steps --[Manage your SAP CDC solution](sap-change-data-capture-management.md) |
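As a rough illustration of the parameter shapes described above (key columns as an array of strings, a unique subscriber process name), the following sketch starts the template-generated replication pipeline from PowerShell. All names are placeholders, and the pipeline's remaining runtime parameters (extraction context, object, schema, and table names) are omitted.

```powershell
# Sketch only: start the template-generated replication pipeline with two of its runtime parameters.
$runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName `
    -PipelineName "SapDataReplicationPipeline" `
    -Parameter @{
        keyColumns        = @("CUSTOMERID")                        # up to 10 key column names
        subscriberProcess = "SapDataReplicationPipeline_CopyData"  # unique ODQ subscriber process
    }

# Check the run status afterwards.
Get-AzDataFactoryV2PipelineRun -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName -PipelineRunId $runId
```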
data-factory | Sap Change Data Capture Debug Shir Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-debug-shir-logs.md | Title: Debug copy activity in your SAP CDC solution (preview) by sending logs + Title: Debug SAP CDC connector (preview) by sending logs -description: Learn how to debug issues with the Azure Data Factory copy activity for your SAP change data capture (CDC) solution (preview) by sending self-hosted integration runtime logs to Microsoft. +description: Learn how to debug issues with the Azure Data Factory SAP CDC (change data capture) connector (preview) by sending self-hosted integration runtime logs to Microsoft. Last updated 06/01/2022 -# Debug copy activity by sending self-hosted integration runtime logs +# Debug the SAP CDC connector by sending self-hosted integration runtime logs [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -If you want Microsoft to debug Azure Data Factory copy activity issues in your SAP change data capture (CDC) solution (preview), send us your self-hosted integration runtime logs, and then contact us. +If you want Microsoft to debug Azure Data Factory issues with your SAP CDC connector (preview), send us your self-hosted integration runtime logs, and then contact us. ## Send logs to Microsoft After you've uploaded and sent your self-hosted integration runtime logs, contac ## Next steps -[Auto-generate a pipeline by using the SAP data partitioning template](sap-change-data-capture-data-partitioning-template.md) +[SAP CDC (Change Data Capture) Connector](connector-sap-change-data-capture.md) |
data-factory | Sap Change Data Capture Introduction Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md | Title: Overview and architecture of the SAP CDC solution (preview) + Title: Overview and architecture of the SAP CDC capabilities (preview) -description: Learn about the SAP change data capture (CDC) solution (preview) in Azure Data Factory and understand its architecture. +description: Learn about the SAP change data capture (CDC) capabilities (preview) in Azure Data Factory and understand its architecture. Last updated 06/01/2022 -# Overview and architecture of the SAP CDC solution (preview) +# Overview and architecture of the SAP CDC capabilities (preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -Learn about the SAP change data capture (CDC) solution (preview) in Azure Data Factory and understand its architecture. +Learn about the SAP change data capture (CDC) capabilities (preview) in Azure Data Factory and understand the architecture. Azure Data Factory is an ETL and ELT data integration platform as a service (PaaS). For SAP data integration, Data Factory currently offers six general availability connectors: The SAP connectors in Data Factory extract SAP source data only in batches. Each You can keep your copy of SAP data fresh and up-to-date by frequently extracting the full dataset, but this approach is expensive and inefficient. You also can use a manual, limited workaround to extract mostly new or updated records. In a process called *watermarking*, extraction requires using a timestamp column, monotonously increasing values, and continuously tracking the highest value since the last extraction. But some tables don't have a column that you can use for watermarking. This process also doesn't identify a deleted record as a change in the dataset. -## The SAP CDC solution +## SAP CDC capabilities -Microsoft customers indicate that they need a connector that can extract only the delta between two sets of data. In data, a *delta* is any change in a dataset that's the result of an update, insert, or deletion in the dataset. A delta extraction connector uses the [SAP change data capture (CDC) feature](https://help.sap.com/docs/SAP_DATA_SERVICES/ec06fadc50b64b6184f835e4f0e1f52f/1752bddf523c45f18ce305ac3bcd7e08.html?q=change%20data%20capture) that exists in most SAP systems to determine the delta in a dataset. The SAP CDC solution in Data Factory uses the SAP Operational Data Provisioning (ODP) framework to replicate the delta in an SAP source dataset. +Microsoft customers indicate that they need a connector that can extract only the delta between two sets of data. In data, a *delta* is any change in a dataset that's the result of an update, insert, or deletion in the dataset. A delta extraction connector uses the [SAP change data capture (CDC) feature](https://help.sap.com/docs/SAP_DATA_SERVICES/ec06fadc50b64b6184f835e4f0e1f52f/1752bddf523c45f18ce305ac3bcd7e08.html?q=change%20data%20capture) that exists in most SAP systems to determine the delta in a dataset. The SAP CDC capabilities in Data Factory use the SAP Operational Data Provisioning (ODP) framework to replicate the delta in an SAP source dataset. -This article provides a high-level architecture of the SAP CDC solution in Azure Data Factory. Get more information about the SAP CDC solution: +This article provides a high-level architecture of the SAP CDC capabilities in Azure Data Factory. 
Get more information about the SAP CDC capabilities: - [Prerequisites and setup](sap-change-data-capture-prerequisites-configuration.md) - [Set up a self-hosted integration runtime](sap-change-data-capture-shir-preparation.md) - [Set up a linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md)-- [Use the SAP data extraction template](sap-change-data-capture-data-replication-template.md)-- [Use the SAP data partition template](sap-change-data-capture-data-partitioning-template.md) - [Manage your solution](sap-change-data-capture-management.md) -## How to use the SAP CDC solution +## How to use the SAP CDC capabilities -The SAP CDC solution is a connector that you access through an SAP ODP (preview) linked service, an SAP ODP (preview) source dataset, and the SAP data replication template or the SAP data partitioning template. Choose your template when you set up a new pipeline in Azure Data Factory Studio. To access preview templates, you must [enable the preview experience in Azure Data Factory Studio](how-to-manage-studio-preview-exp.md#how-to-enabledisable-preview-experience). +At the core of the SAP CDC capabilities is the new SAP CDC connector (preview). It can connect to all SAP systems that support ODP. This includes SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. It doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC connector extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view. -The SAP CDC solution connects to all SAP systems that support ODP, including SAP R/3, SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. The solution doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC solution extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view. +Use the SAP CDC connector with Data Factory features like mapping data flow activities, and tumbling window triggers for a low-latency SAP CDC replication solution in a self-managed pipeline. -Use the SAP CDC solution with Data Factory features like copy activities and data flow activities, pipeline templates, and tumbling window triggers for a low-latency SAP CDC replication solution in a self-managed pipeline. --## The SAP CDC solution architecture +## The SAP CDC architecture The SAP CDC solution in Azure Data Factory is a connector between SAP and Azure. The SAP side includes the SAP ODP connector that invokes the ODP API over standard Remote Function Call (RFC) modules to extract full and delta raw SAP data. -The Azure side includes the Data Factory copy activity that loads the raw SAP data into a storage destination like Azure Blob Storage or Azure Data Lake Storage Gen2. The data is saved in CSV or Parquet format, essentially archiving or preserving all historical changes. 
--The Azure side also might include a Data Factory data flow activity that transforms the raw SAP data, merges all changes, and loads the results in a destination like Azure SQL Database or Azure Synapse Analytics, essentially replicating the SAP data. The Data Factory data flow activity also can load the results in Data Lake Storage Gen2 in delta format. You can use the open source Delta Lake Time Travel feature to produce snapshots of SAP data for a specific period. --In Azure Data Factory Studio, the SAP template that you use to auto-generate a Data Factory pipeline connects SAP with Azure. You can run the pipeline frequently by using a Data Factory tumbling window trigger to replicate SAP data in Azure with low latency and without using watermarking. +The Azure side includes the Data Factory mapping data flow that can transform and load the SAP data into any data sink supported by mapping data flows. This includes storage destinations like Azure Data Lake Storage Gen2 or databases like Azure SQL Database or Azure Synapse Analytics. The Data Factory data flow activity also can load the results in Data Lake Storage Gen2 in delta format. You can use the Delta Lake Time Travel feature to produce snapshots of SAP data for a specific period. You can run your pipeline and mapping data flows frequently by using a Data Factory tumbling window trigger to replicate SAP data in Azure with low latency and without using watermarking. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP CDC solution."::: -To get started, create a Data Factory copy activity by using an SAP ODP linked service, an SAP ODP source dataset, and an SAP data replication template or SAP data partitioning template. The copy activity runs on a self-hosted integration runtime that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to the SLT. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime. +To get started, create a Data Factory SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity in which you use the SAP CDC source dataset. To extract the data from SAP, a self-hosted integration runtime is required that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to your SLT server. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime. 
-The SAP CDC solution uses ODP to extract various data source types, including: +The SAP CDC connector uses the SAP ODP framework to extract various data source types, including: -- SAP extractors, originally built to extract data from ECC and load it into BW-- ABAP CDS views, the new data extraction standard for S/4HANA-- InfoProviders and InfoObjects datasets in BW and BW/4HANA-- SAP application tables, when you use an SLT replication server as a proxy+- SAP extractors, originally built to extract data from SAP ECC and load it into SAP BW +- ABAP CDS views, the new data extraction standard for SAP S/4HANA +- InfoProviders and InfoObjects datasets in SAP BW and SAP BW/4HANA +- SAP application tables, when you use an SAP LT replication server (SLT) as a proxy -In this process, the SAP data sources are *providers*. The providers run on SAP systems to produce either full or incremental data in an operational delta queue (ODQ). The Data Factory copy activity is a *subscriber* of the ODQ. The copy activity consumes the ODQ through the SAP CDC solution in the Data Factory pipeline. +In this process, the SAP data sources are *providers*. The providers run on SAP systems to produce either full or incremental data in an operational delta queue (ODQ). The Data Factory mapping data flow source is a *subscriber* of the ODQ. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-shir-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP ODP framework through a self-hosted integration runtime."::: |
data-factory | Sap Change Data Capture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md | Title: Manage your SAP CDC solution (preview) + Title: Manage your SAP CDC (preview) ETL process description: Learn how to manage your SAP change data capture (CDC) solution (preview) in Azure Data Factory. Last updated 06/01/2022 -# Manage your SAP CDC solution (preview) +# Manage your SAP CDC (preview) ETL process [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -After you create a pipeline in Azure Data Factory as part of your SAP change data capture (CDC) solution (preview), it's important to manage the solution. +After you create a pipeline in Azure Data Factory using the SAP CDC connector (preview), it's important to manage the solution. ## Run an SAP data replication pipeline on a recurring schedule To run an SAP data replication pipeline on a recurring schedule with a specified :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-tumbling-window-trigger.png" alt-text="Screenshot of the Edit trigger window with values highlighted to configure the tumbling window trigger."::: -## Recover a failed SAP data replication pipeline run --If an SAP data replication pipeline run fails, a subsequent run that's scheduled via a tumbling window trigger is suspended while it waits on the dependency. ---To recover a failed SAP data replication pipeline run: --1. Fix the issues that caused the pipeline run failure. --1. Switch the **Extraction mode** property of the copy activity to **Recovery**. --1. Manually run the SAP data replication pipeline. --1. If the recovery run finishes successfully, change the **Extraction mode** property of the copy activity to **Delta**. --1. Next to the failed run of the tumbling window trigger, select **Rerun**. - ## Monitor data extractions on SAP systems To monitor data extractions on SAP systems: To monitor data extractions on SAP systems: :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-logon-tool.png" alt-text="Screenshot of the SAP Logon Tool."::: -1. In **Subscriber**, enter the value for the **Subscriber name** property of your SAP ODP (preview) linked service. In the **Request Selection** dropdown, select **All** to show all data extractions that use the linked service. +1. In **Subscriber**, enter the value for the **Subscriber name** property of your SAP CDC (preview) linked service. In the **Request Selection** dropdown, select **All** to show all data extractions that use the linked service. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-monitor-delta-queues.png" alt-text="Screenshot of the SAP ODQMON tool with all data extractions for a specific subscriber."::: - You can see all registered subscriber processes in the operational delta queue (ODQ). Subscriber processes represent data extractions from Azure Data Factory copy activities that use your SAP ODP linked service. For each ODQ subscription, you can look at details to see all full and delta extractions. For each extraction, you can see individual data packages that were consumed. + You can see all registered subscriber processes in the operational delta queue (ODQ). Subscriber processes represent data extractions from Azure Data Factory copy activities that use your SAP CDC linked service. For each ODQ subscription, you can look at details to see all full and delta extractions. 
For each extraction, you can see individual data packages that were consumed. 1. When Data Factory copy activities that extract SAP data are no longer needed, you should delete their ODQ subscriptions. When you delete ODQ subscriptions, SAP systems can stop tracking their subscription states and remove the unconsumed data packages from the ODQ. To delete an ODQ subscription, select the subscription and select the Delete icon. To monitor data extractions on SAP systems: ## Troubleshoot delta changes -The SAP CDC solution in Data Factory reads delta changes from the SAP ODP framework. The deltas are recorded in ODQ tables. +The SAP CDC connector in Data Factory reads delta changes from the SAP ODP framework. The deltas are recorded in ODQ tables. In scenarios in which data movement works (copy activities finish without errors), but data isn't delivered correctly (no data at all, or maybe just a subset of the expected data), you should first investigate whether the number of records provided on the SAP side match the number of rows transferred by Data Factory. If they match, the issue isn't related to Data Factory, but probably comes from an incorrect or missing configuration on the SAP side. Based on the timestamp in the first row, find the line that corresponds to the c ## Current limitations -Here are current limitations of the SAP CDC solution in Data Factory: +Here are current limitations of the SAP CDC connector in Data Factory: - You can't reset or delete ODQ subscriptions in Data Factory. - You can't use SAP hierarchies with the solution. |
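The recurring schedule mentioned above is driven by a tumbling window trigger. Where the UI steps aren't an option, a trigger can also be deployed from a JSON definition with PowerShell. The sketch below assumes an hourly window and placeholder trigger, pipeline, factory, and resource group names; if the schema differs, export the authoritative trigger JSON from Azure Data Factory Studio instead.

```powershell
# Sketch only: deploy and start an hourly tumbling window trigger for the replication pipeline.
@'
{
  "name": "SapCdcHourlyTrigger",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Hour",
      "interval": 1,
      "startTime": "2022-06-01T00:00:00Z",
      "maxConcurrency": 1
    },
    "pipeline": {
      "pipelineReference": { "type": "PipelineReference", "referenceName": "SapDataReplicationPipeline" }
    }
  }
}
'@ | Set-Content -Path .\SapCdcHourlyTrigger.json

Set-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName `
    -Name "SapCdcHourlyTrigger" -DefinitionFile .\SapCdcHourlyTrigger.json

Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName `
    -Name "SapCdcHourlyTrigger" -Force
```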
data-factory | Sap Change Data Capture Prepare Linked Service Source Dataset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md | Title: Set up a linked service and dataset for the SAP CDC solution (preview) + Title: Set up a linked service and dataset for the SAP CDC connector (preview) -description: Learn how to set up a linked service and source dataset to use with the SAP change data capture (CDC) solution (preview) in Azure Data Factory. +description: Learn how to set up a linked service and source dataset to use with the SAP CDC (change data capture) connector (preview) in Azure Data Factory. Last updated 06/01/2022 -# Set up a linked service and source dataset for your SAP CDC solution (preview) +# Set up a linked service and source dataset for the SAP CDC connector (preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -Learn how to set up the linked service and source dataset for your SAP change data capture (CDC) solution (preview) in Azure Data Factory. +Learn how to set up the linked service and source dataset for the SAP CDC connector (preview) in Azure Data Factory. ## Set up a linked service -To set up an SAP ODP (preview) linked service for your SAP CDC solution: +To set up an SAP CDC (preview) linked service: 1. In Azure Data Factory Studio, go to the Manage hub of your data factory. In the menu under **Connections**, select **Linked services**. Select **New** to create a new linked service. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-linked-service.png" alt-text="Screenshot of the Manage hub in Azure Data Factory Studio, with the New linked service button highlighted."::: -1. In **New linked service**, search for **SAP**. Select **SAP ODP (Preview)**, and then select **Continue**. +1. In **New linked service**, search for **SAP**. Select **SAP CDC (Preview)**, and then select **Continue**. - :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection, with SAP ODP (Preview) selected."::: + :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection, with SAP CDC (Preview) selected."::: 1. Set the linked service properties. Many of the properties are similar to SAP Table linked service properties. For more information, see [Linked service properties](connector-sap-table.md?tabs=data-factory#linked-service-properties). 1. In **Name**, enter a unique name for the linked service. 1. In **Connect via integration runtime**, select your self-hosted integration runtime. 1. In **Server name**, enter the mapped server name for your SAP system.- 1. In **Subscriber name**, enter a unique name to register and identify this Data Factory connection as a subscriber that consumes data packages that are produced in the Operational Delta Queue (ODQ) by your SAP system. For example, you might name it `<your data factory -name>_<your linked service name>`. + 1. In **Subscriber name**, enter a unique name to register and identify this Data Factory connection as a subscriber that consumes data packages that are produced in the Operational Delta Queue (ODQ) by your SAP system. For example, you might name it `<your data factory -name>_<your linked service name>`. Make sure to only use upper case letters. 
- When you use delta extraction mode in SAP, the combination of subscriber name (maintained in the linked service) and subscriber process must be unique for every copy activity that reads from the same ODP source object. A unique name ensures that the ODP framework can distinguish between copy activities and provide the correct delta. + Make sure you assign a unique subscriber name to every linked service connecting to the same SAP system. This will make monitoring and trouble shooting on SAP side much easier. - :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-configuration.png" alt-text="Screenshot of the SAP ODP linked service configuration."::: + :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-configuration.png" alt-text="Screenshot of the SAP CDC linked service configuration."::: 1. Select **Test connection**, and then select **Create**. -## Create a copy activity +## Set up the source dataset -To create a Data Factory copy activity that uses an SAP ODP (preview) data source, complete the steps in the following sections. --### Set up the source dataset --1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **New pipeline**. +1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Datasets** > **Dataset Actions**, select **New dataset**. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-pipeline.png" alt-text="Screenshot that shows creating a new pipeline in the Data Factory Studio Author hub."::: -1. In **Activities**, select the **Move & transform** dropdown. Select the **Copy data** activity and drag it to the canvas of the new pipeline. Select the **Source** tab of the Data Factory copy activity, and then select **New** to create a new source dataset. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-data-source-new.png" alt-text="Screenshot of the Copy data activity Source configuration."::: +1. In **New dataset**, search for **SAP**. Select **SAP CDC (Preview)**, and then select **Continue**. -1. In **New dataset**, search for **SAP**. Select **SAP ODP (Preview)**, and then select **Continue**. + :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP CDC (Preview) dataset type in the New dataset dialog."::: - :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP ODP (Preview) dataset type in the New dataset dialog."::: +1. In **Set properties**, enter a name for the SAP CDC linked service data source. In **Linked service**, select the dropdown and select **New**. -1. In **Set properties**, enter a name for the SAP ODP linked service data source. In **Linked service**, select the dropdown and select **New**. --1. Select your SAP ODP linked service for the new source dataset and set the rest of the properties for the linked service: +1. Select your SAP CDC linked service for the new source dataset and set the rest of the properties for the linked service: 1. In **Connect via integration runtime**, select your self-hosted integration runtime. - 1. In **Context**, select the context of the ODP data extraction. Here are some examples: + 1. In **ODP context**, select the context of the ODP data extraction. 
Here are some examples: - To extract ABAP CDS views from S/4HANA, select **ABAP_CDS**. - To extract InfoProviders or InfoObjects from SAP BW or BW/4HANA, select **BW**. To create a Data Factory copy activity that uses an SAP ODP (preview) data sourc If you want to extract SAP application tables, but you don't want to use SAP Landscape Transformation Replication Server (SLT) as a proxy, you can create SAP extractors by using the RSO2 transaction code or Core Data Services (CDS) views with the tables. Then, extract the tables directly from your SAP source systems by using either an **SAPI** or an **ABAP_CDS** context. - 1. For **Object name**, under the selected data extraction context, select the name of the data source object to extract. If you connect to your SAP source system by using SLT as a proxy, the **Preview data** feature currently isn't supported. + 1. For **ODP name**, under the selected data extraction context, select the name of the data source object to extract. If you connect to your SAP source system by using SLT as a proxy, the **Preview data** feature currently isn't supported. To enter the selections directly, select the **Edit** checkbox. - :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP ODP (Preview) dataset configuration page."::: --1. Select **OK** to create your new SAP ODP source dataset. --1. In the Data Factory copy activity, in **Extraction mode**, select one of the following options: -- - **Full**: Always extracts the current snapshot of the selected data source object. This option doesn't register the Data Factory copy activity as its delta subscriber that consumes data changes produced in the ODQ by your SAP system. - **Delta**: Initially extracts the current snapshot of the selected data source object. This option registers the Data Factory copy activity as its delta subscriber and then extracts new data changes produced in the ODQ by your SAP system since the last extraction. - **Recovery**: Repeats the last extraction that was part of a failed pipeline run. --1. In **Subscriber process**, enter a unique name to register and identify this Data Factory copy activity as a delta subscriber of the selected data source object. Your SAP system manages its subscription state to keep track of data changes that are produced in the ODQ and consumed in consecutive extractions. You don't need to manually watermark data changes. For example, you might name the subscriber process `<your pipeline name>_<your copy activity name>`. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-configuration.png" alt-text="Screenshot of the SAP CDC source configuration in a Data Factory copy activity."::: --1. If you want to extract data from only some columns or rows, you can use the column projection or row selection features: -- 1. In **Projection**, select **Refresh** to load the dropdown selections with column names of the selected data source object. -- If you want to include only a few columns in your data extraction, select the checkboxes for those columns. If you want to exclude only a few columns from your data extraction, select the **Select all** checkbox first, and then clear the checkboxes for columns you want to exclude. If no column is selected, all columns are extracted. -- To enter the selections directly, select the **Edit** checkbox. 
-- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-projection-configuration.png" alt-text="Screenshot of the SAP CDC source configuration with the Projection, Selection, and Additional columns sections highlighted."::: -- 1. In **Selection**, select **New** to add a new row selection condition that contains arguments. -- 1. In **Field name**, select **Refresh** to load the dropdown selections with column names of the selected data source object. You also can enter the column names manually. - 1. In **Sign**, select **Inclusive** or **Exclusive** to include or exclude rows that meet the selection condition in your data extraction. - 1. In **Option**, select **EQ**, **CP**, or **BT** to apply the following row selection conditions: -- - **EQ**: True if the value in the **Field name** column is equal to the value of the **Low** argument. - - **CP**: True if the value in the **Field name** column contains a pattern that's specified in the value of the **Low** argument. - - **BT**: True if the value in the **Field name** column is between the values of the **Low** and **High** arguments. -- To ensure that your row selection conditions can be applied to the selected data source object, see SAP documentation or support notes for the data source object. -- The following table shows example row selection conditions and their respective arguments: -- | Row selection condition | Field name | Sign | Option | Low | High | - ||||||| - | Include only rows in which the value in the **COUNTRY** column is **CHINA** | **COUNTRY** | **Inclusive** | **EQ** | **CHINA** | | - | Exclude only rows in which the value in the **COUNTRY** column is **CHINA** | **COUNTRY** | **Exclusive** | **EQ** | **CHINA** | | - | Include only rows in which the value in the **FIRSTNAME** column contains the **JO\*** pattern | **FIRSTNAME** | **Inclusive** | **CP** | **JO\*** | | - | Include only rows in which the value in the **CUSTOMERID** column is between **1** and **999999** | **CUSTOMERID** | **Inclusive** | **BT** | **1** | **999999** | - - :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-selection-additional-columns.png" alt-text="Screenshot of the SAP ODP source configuration for a copy activity with the Selection and Additional columns sections highlighted."::: -- Row selections are especially useful to divide large data sets into multiple partitions. You can extract each partition by using a single copy activity. You can perform full extractions by using multiple copy activities running in parallel. These copy activities in turn invoke parallel processes on your SAP system to produce separate data packages in the ODQ. Parallel processes in each copy activity can consume packages and increase throughput significantly. --### Set up the source sink --- In the Data Factory copy activity, select the **Sink** tab. Select an existing sink dataset or create a new one for a data store like Azure Blob Storage or Azure Data Lake Storage Gen2.-- To increase throughput, you can enable the Data Factory copy activity to concurrently extract data packages that your SAP system produces in the ODQ. You can enforce all extraction processes to immediately write them to the sink in parallel. For example, if you use Data Lake Storage Gen2 as a sink, in **File path** for the sink dataset, leave **File name** empty. All extracted data packages will be written as separate files. 
-- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-staging-dataset.png" alt-text="Screenshot of the staging dataset configuration for the solution."::: --### Configure copy activity settings --1. To increase throughput, in the Data Factory copy activity, select the **Settings** tab. Set **Degree of copy parallelism** to concurrently extract data packages that your SAP system produces in the ODQ. -- If you use Azure Blob Storage or Data Lake Storage Gen2 as the sink, the maximum number of effective parallel extractions you can set is four or five per self-hosted integration runtime machine. You can install a self-hosted integration runtime as a cluster of up to four machines. For more information, see [High availability and scalability](create-self-hosted-integration-runtime.md?tabs=data-factory#high-availability-and-scalability). -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-settings-parallelism.png" alt-text="Screenshot of a Copy activity with the Degree of parallelism setting highlighted."::: --1. To fine-tune parallel extractions, adjust the maximum size of data packages that are produced in the ODQ. The default size is 50 MB. 3 GB of an SAP table or object are extracted into 60 files of raw SAP data in Data Lake Storage Gen2. Lowering the maximum size to 15 MB might increase throughput, but more (200) files are produced. To lower the maximum size, in the pipeline navigation menu, select **Code**. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-configuration.png" alt-text="Screenshot of a pipeline with the Code configuration button highlighted."::: -- Then, in the JSON file, edit `maxPackageSize` to lower the maximum size. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-1.png" alt-text="Screenshot of the code configuration for a pipeline with the maxPackageSize setting highlighted."::: --1. If you set **Extraction mode** in the Data Factory copy activity to **Delta**, your initial or subsequent extractions consume full data or new data changes produced in the ODQ by your SAP system since the last extraction. -- For each extraction, you can skip the actual data production, consumption, or transfer, and instead directly initialize or advance your delta subscription state. This option is especially useful if you want to perform full and delta extractions by using separate copy activities by using different partitions. To set up full and delta extractions by using separate copy activities with different partitions, in the pipeline navigation menu, select **Code**. In the JSON file, add the `deltaExtensionNoData` property and set it to `true`. To resume extracting data, remove that property or set it to `false`. -- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-2.png" alt-text="Screenshot of the code configuration for a pipeline with the deltaExtensionNoData property highlighted."::: --1. Select **Save all**, and then select **Debug** to run your new pipeline that contains the Data Factory copy activity with the SAP ODP source dataset. --To illustrate the results of full and delta extractions from consecutively running your new pipeline, here's an example of a simple table in SAP ECC: ---Here's the raw SAP data from an initial or full extraction in CSV format in Data Lake Storage Gen2: ---The file contains the system columns **ODQ_CHANGEMODE**, **ODQ_ENTITYCNTR**, and **SEQUENCENUMBER**. 
The Data Factory data flow activity uses these columns to merge data changes when it replicates SAP data. + :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP CDC (Preview) dataset configuration page."::: -The **ODQ_CHANGEMODE** column marks the type of change for each row or record: **C** (created), **U** (updated), or **D** (deleted). The initial run of your pipeline in *delta* extraction mode always induces a full load that marks all rows as **C** (created). +1. Select **OK** to create your new SAP CDC source dataset. -The following example shows the delta extraction in CSV format in Data Lake Storage Gen2 after three rows of the custom table in SAP ECC are created, updated, and deleted: +## Set up a mapping data flow using the SAP CDC dataset as a source +To set up a mapping data flow using the SAP CDC dataset as a source, follow [Transform data with the SAP CDC connector](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector) ## Next steps |
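The linked service and dataset steps above are UI-driven. Once the objects have been authored (or their JSON definitions exported from Azure Data Factory Studio or your Git repository), the same objects can be redeployed with PowerShell. This is only a sketch: the file and object names are placeholders, not part of the article.

```powershell
# Sketch only: redeploy a linked service and dataset whose JSON definitions were exported from Studio.
Set-AzDataFactoryV2LinkedService -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName `
    -Name "SapCdcLinkedService" -DefinitionFile .\SapCdcLinkedService.json

Set-AzDataFactoryV2Dataset -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName `
    -Name "SapCdcSourceDataset" -DefinitionFile .\SapCdcSourceDataset.json
```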
data-factory | Sap Change Data Capture Prerequisites Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md | Title: Prerequisites and setup for the SAP CDC solution (preview) + Title: Prerequisites and setup for the SAP CDC connector (preview) -description: Learn about the prerequisites and setup for the SAP change data capture (CDC) solution (preview) in Azure Data Factory. +description: Learn about the prerequisites and setup for the SAP CDC connector (preview) in Azure Data Factory. Last updated 06/01/2022 -# Prerequisites and setup for the SAP CDC solution (preview) +# Prerequisites and setup for the SAP CDC connector (preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -Learn about the prerequisites for the SAP change data capture (CDC) solution (preview) in Azure Data Factory and how to set up the solution in Azure Data Factory Studio. +Learn about the prerequisites for the SAP CDC connector (preview) in Azure Data Factory and how to set up the solution in Azure Data Factory Studio. ## Prerequisites -To preview the SAP CDC solution in Azure Data Factory, be able to complete these prerequisites: +To preview the SAP CDC capabilities in Azure Data Factory, be able to complete these prerequisites: - In Azure Data Factory Studio, [enable the preview experience](how-to-manage-studio-preview-exp.md#how-to-enabledisable-preview-experience). - Set up SAP systems to use the [SAP Operational Data Provisioning (ODP) framework](https://help.sap.com/docs/SAP_LANDSCAPE_TRANSFORMATION_REPLICATION_SERVER/007c373fcacb4003b990c6fac29a26e4/b6e26f56fbdec259e10000000a441470.html?q=SAP%20Operational%20Data%20Provisioning%20%28ODP%29%20framework).-- Be familiar with Data Factory concepts like integration runtimes, linked services, datasets, activities, data flows, pipelines, templates, and triggers.+- Be familiar with Data Factory concepts like integration runtimes, linked services, datasets, activities, data flows, pipelines, and triggers. - Set up a self-hosted integration runtime to use for the connector.-- Set up an SAP ODP (preview) linked service.-- Set up the Data Factory copy activity with an SAP ODP (preview) source dataset.+- Set up an SAP CDC (preview) linked service. +- Set up the Data Factory copy activity with an SAP CDC (preview) source dataset. - Debug Data Factory copy activity issues by sending self-hosted integration runtime logs to Microsoft.-- Auto-generate a Data Factory pipeline by using the SAP data partitioning template.-- Auto-generate a Data Factory pipeline by using the SAP data replication template. - Be able to run an SAP data replication pipeline frequently. - Be able to recover a failed SAP data replication pipeline run. - Be familiar with monitoring data extractions on SAP systems. To set up your SAP systems to use the SAP ODP framework, follow the guidelines t ### SAP system requirements -The ODP framework is available by default in most recent software releases of most SAP systems, including SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. To ensure that your SAP systems have ODP, see the following SAP documentation or support notes. Even though the guidance primarily refers to SAP BW and SAP DS as subscribers or consumers in data extraction via ODP, the guidance also applies to Data Factory as a subscriber or consumer. +The ODP framework is part of many SAP systems, including SAP ECC and SAP S/4HANA. It is also contained in SAP BW and SAP BW/4HANA. 
To ensure that your SAP releases have ODP, see the following SAP documentation or support notes. Even though the guidance primarily refers to SAP BW and SAP Data Services, the information also applies to Data Factory. - To support ODP, run your SAP systems on SAP NetWeaver 7.0 SPS 24 or later. For more information, see [Transferring Data from SAP Source Systems via ODP (Extractors)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/327833022dcf42159a5bec552663dc51.html). - To support SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) full extractions via ODP, run your SAP systems on NetWeaver 7.4 SPS 08 or later. To support SAP ABAP CDS delta extractions, run your SAP systems on NetWeaver 7.5 SPS 05 or later. For more information, see [Transferring Data from SAP Systems via ODP (ABAP CDS Views)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/af11a5cb6d2e4d4f90d344f58fa0fb1d.html). |
data-factory | Sap Change Data Capture Shir Preparation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md | Title: Set up a self-hosted integration runtime for the SAP CDC solution (preview) + Title: Set up a self-hosted integration runtime for the SAP CDC connector (preview) description: Learn how to create and set up a self-hosted integration runtime for your SAP change data capture (CDC) solution (preview) in Azure Data Factory. Last updated 06/01/2022 -# Set up a self-hosted integration runtime for the SAP CDC solution (preview) +# Set up a self-hosted integration runtime for the SAP CDC connector (preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -Learn how to create and set up a self-hosted integration runtime for the SAP change data capture (CDC) solution (preview) in Azure Data Factory. +Learn how to create and set up a self-hosted integration runtime for the SAP CDC connector (preview) in Azure Data Factory. -To prepare a self-hosted integration runtime to use with the SAP ODP (preview) linked service and the SAP data extraction template or the SAP data partition template, complete the steps that are described in the following sections. +To prepare a self-hosted integration runtime to use with the SAP CDC connector (preview), complete the steps that are described in the following sections. ## Create and set up a self-hosted integration runtime |
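Creating the self-hosted integration runtime resource itself can also be scripted. The sketch below registers the logical IR in the data factory and retrieves the authentication key you paste into the integration runtime installer on the on-premises machine or VM; the resource group, factory, and IR names are placeholders.

```powershell
# Sketch only: create the logical self-hosted IR and fetch its authentication keys.
Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName `
    -Name "SapCdcSelfHostedIR" `
    -Type SelfHosted `
    -Description "Self-hosted IR for the SAP CDC connector (preview)"

# Use one of the returned keys to register the IR node during installation on your machine or VM.
Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName -Name "SapCdcSelfHostedIR"
```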
data-factory | Transform Data Databricks Jar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-jar.md | The Azure Databricks Jar Activity in a [pipeline](concepts-pipelines-activities. For an eleven-minute introduction and demonstration of this feature, watch the following video: -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player] ## Add a Jar activity for Azure Databricks to a pipeline with UI |
data-factory | Transform Data Databricks Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-python.md | The Azure Databricks Python Activity in a [pipeline](concepts-pipelines-activiti For an eleven-minute introduction and demonstration of this feature, watch the following video: -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player] ## Add a Python activity for Azure Databricks to a pipeline with UI |
data-factory | Transform Data Machine Learning Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-machine-learning-service.md | Run your Azure Machine Learning pipelines as a step in your Azure Data Factory a The below video features a six-minute introduction and demonstration of this feature. -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player] ## Create a Machine Learning Execute Pipeline activity with UI |
data-factory | Transform Data Using Databricks Notebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-databricks-notebook.md | If you don't have an Azure subscription, create a [free account](https://azure.m For an eleven-minute introduction and demonstration of this feature, watch the following video: -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player] ## Prerequisites |
data-factory | Tumbling Window Trigger Dependency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md | In order to build a dependency chain and make sure that a trigger is executed on For a demonstration on how to create dependent pipelines using tumbling window trigger, watch the following video: -> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player] +> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player] ## Create a dependency in the UI |
data-factory | Tutorial Deploy Ssis Packages Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md | $ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell| $SSISDBServerEndpoint = "[your server name.database.windows.net or managed instance name.public.DNS prefix.database.windows.net,3342 or leave it empty if you're not using SSISDB]" # WARNING: If you use SSISDB, please ensure that there is no existing SSISDB on your database server, so we can prepare and manage one on your behalf $SSISDBServerAdminUserName = "[your server admin username for SQL authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication]"-# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server +# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server $SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for SQL Database or leave it empty for SQL Managed Instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access $ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell| $SSISDBServerEndpoint = "[your server name.database.windows.net or managed instance name.public.DNS prefix.database.windows.net,3342 or leave it empty if you're not using SSISDB]" # WARNING: If you want to use SSISDB, ensure that there is no existing SSISDB on your database server, so we can prepare and manage one on your behalf $SSISDBServerAdminUserName = "[your server admin username for SQL authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication]"-# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server +# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server $SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for SQL Database or leave it empty for SQL Managed Instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access In this tutorial, you learned how to: To learn about customizing your Azure-SSIS Integration Runtime, see the following article: > [!div class="nextstepaction"]->[Customize your Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md) +>[Customize your Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md) |
databox-online | Azure Stack Edge Gpu Deploy Iot Edge Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md | Use these steps to verify that your IoT Edge runtime is running. To troubleshoot your IoT Edge device configuration, see [Troubleshoot your IoT Edge device](../iot-edge/troubleshoot.md?view=iotedge-2020-11&tabs=linux&preserve-view=true). - <!-- Cannot get the link to render properly for version at https://docs.microsoft.com/azure/iot-edge/troubleshoot?view=iotedge-2020-11 --> + <!-- Cannot get the link to render properly for version at https://learn.microsoft.com/azure/iot-edge/troubleshoot?view=iotedge-2020-11 --> ## Update the IoT Edge runtime |
databox-online | Azure Stack Edge Gpu Troubleshoot Virtual Machine Gpu Extension Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md | If the installation failed during the package download, that error indicates the 1. Enable compute on a port that's connected to the Internet. For guidance, see [Create GPU VMs](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms). -1. Deallocate the VM by stopping the VM in the portal. To stop the VM, go to **Virtual machines** > **Overview**, and select the VM. Then, on the VM properties page, select **Stop**.<!--Follow-up (formatting): Create an include file for stopping a VM. Use it here and in prerequisites for "Use the Azure portal to manage network interfaces on the VMs" (https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal#prerequisites).--> +1. Deallocate the VM by stopping the VM in the portal. To stop the VM, go to **Virtual machines** > **Overview**, and select the VM. Then, on the VM properties page, select **Stop**.<!--Follow-up (formatting): Create an include file for stopping a VM. Use it here and in prerequisites for "Use the Azure portal to manage network interfaces on the VMs" (https://learn.microsoft.com/azure/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal#prerequisites).--> 1. Create a new VM. |
ddos-protection | Ddos Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md | For frequently asked questions, see the [DDoS Protection FAQ](ddos-faq.yml). ## Next steps * [Quickstart: Create a DDoS Protection Plan](manage-ddos-protection.md)-* [Learn module: Introduction to Azure DDoS Protection](/learn/modules/introduction-azure-ddos-protection/) +* [Learn module: Introduction to Azure DDoS Protection](/training/modules/introduction-azure-ddos-protection/) |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | description: A description of what's new and changed in Microsoft Defender for C Previously updated : 08/31/2022 Last updated : 09/20/2022+ # What's new in Microsoft Defender for Cloud? To learn about *planned* changes that are coming soon to Defender for Cloud, see - [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities) - [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent)+- [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) ### Suppress alerts based on Container and Kubernetes entities FIM is now available in a new version based on Azure Monitor Agent (AMA), which Learn more about [File Integrity Monitoring with the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md). +### Legacy Assessments APIs deprecation ++The following APIs are deprecated: ++- Security Tasks +- Security Statuses +- Security Summaries ++These three APIs exposed old formats of assessments and are replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs is also available in the new APIs. + ## August 2022 Updates in August include: |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 08/10/2022 Last updated : 09/20/2022 # Important upcoming changes to Microsoft Defender for Cloud If you're looking for the latest release notes, you'll find them in the [What's |--|--| | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 | | [Removing security alerts for machines reporting to cross tenant Log Analytics workspaces](#removing-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces) | September 2022 |-| [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) | September 2022 | - ### Multiple changes to identity recommendations With this change, alerts on machines connected to Log Analytics workspace in a d If you want to continue receiving the alerts in Defender for Cloud, connect the Log Analytics agent of the relevant machines to the workspace in the same tenant as the machine. -### Legacy Assessments APIs deprecation --The following APIs are set to be deprecated: --- Security Tasks-- Security Statuses-- Security Summaries--These three APIs exposed old formats of assessments and will be replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs will also be available in the new APIs. - ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md) |
defender-for-cloud | Workflow Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md | In this article, you learned about creating Logic Apps, automating their executi For related material, see: -- [The Learn module on how to use workflow automation to automate a security response](/learn/modules/resolve-threats-with-azure-security-center/)+- [The Learn module on how to use workflow automation to automate a security response](/training/modules/resolve-threats-with-azure-security-center/) - [Security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md) - [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [About Azure Logic Apps](../logic-apps/logic-apps-overview.md) |
dev-box | How To Manage Network Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-network-connection.md | + + Title: How to manage network connections ++description: This article describes how to create, delete, attach and remove Microsoft Dev Box network connections. ++++ Last updated : 04/15/2022++++<!-- Intent: As a dev infrastructure manager, I want to be able to manage network connections so that I can enable dev boxes to connect to my existing networks and deploy them in the desired region. --> +# Manage network connections +Network connections allow dev boxes to connect to existing virtual networks, and determine the region into which dev boxes are deployed. ++When planning network connectivity for your dev boxes, you must: +- Ensure you have sufficient permissions to create and configure network connections. +- Ensure you have at least one virtual network (VNet) and subnet available for your dev boxes. +- Identify the region or location closest to your dev box users. Deploying dev boxes into a region close to the users provides them with a better experience. +- Determine whether dev boxes should connect to your existing networks using an Azure Active Directory (Azure AD) join, or a Hybrid Azure AD join. +## Permissions +To manage a network connection, you need the following permissions: ++|Action|Permission required| +|--|--| +|Create and configure VNet and subnet|Network Contributor permissions on an existing virtual network (owner or contributor) or permission to create a new virtual network and subnet.| +|Create or delete network connection|Owner or Contributor permissions on an Azure Subscription or a specific resource group.| +|Add or remove network connection |Write permission on the dev center.| ++## Create a virtual network and subnet +To create a network connection, you need an existing VNet and subnet. If you don't have a VNet and subnet available, use the following steps to create them: ++1. Sign in to the [Azure portal](https://portal.azure.com). + +1. In the search box, enter *Virtual Network*, and then select **Virtual Network** from the search results. ++1. On the Virtual Network page, select **Create**. ++1. On the Create virtual network page, enter or select this information on the **Basics** tab: ++ | Setting | Value | + | - | -- | + | Subscription | Select your subscription. | + | Resource group | Select an existing resource group, or to create a new one: </br> Select **Create new**. </br> Enter *rg-name*. </br> Select **OK**. | + | Name | Enter *VNet-name*. | + | Region | Select the region for the VNet and dev boxes. | ++ :::image type="content" source="./media/how-to-manage-network-connection/example-basics-tab.png" alt-text="Screenshot of creating a virtual network in Azure portal." border="true"::: ++ > [!Important] + > The region you select for the VNet is where the dev boxes will be deployed. ++1. On the **IP Addresses** tab, accept the default settings. ++1. On the **Security** tab, accept the default settings. ++1. On the **Review + create** tab, review the settings. ++1. Select **Create**. ++ +## Allow access to Dev Box endpoints from your network +Network ingress and egress can be controlled using a firewall, network security groups, and even Microsoft Defender. ++If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).
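The VNet and subnet from the portal steps above can also be created with Azure PowerShell. The following is a minimal sketch, assuming the placeholder names used in the portal example (*rg-name*, *VNet-name*) and an example address space; adjust all values to your environment:

```azurepowershell
# Assumed placeholder values - replace the resource group, region, and address prefixes with your own.
# The -Location value determines the region where dev boxes that use this VNet are deployed.
$subnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.4.0.0/24"
New-AzVirtualNetwork -Name "VNet-name" -ResourceGroupName "rg-name" -Location "eastus" `
  -AddressPrefix "10.4.0.0/16" -Subnet $subnet
```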
++## Plan a network connection +The following steps show you how to create and configure a network connection in Microsoft Dev Box. +### Types of Azure Active Directory Join +The Dev Box service requires a configured and working Azure AD join or Hybrid AD join, which defines how dev boxes join your domain and access resources. ++If your organization uses Azure AD, you can use an Azure AD join, sometimes called a native Azure AD join. Dev box users sign into Azure AD joined dev boxes using their Azure AD account and access resources based on the permissions assigned to that account. Azure AD join enables access to cloud-based and on-premises apps and resources. ++If your organization has an on-premises Active Directory implementation, you can still benefit from some of the functionality provided by Azure AD by using hybrid Azure AD joined dev boxes. These dev boxes are joined to your on-premises Active Directory and registered with Azure Active Directory. Hybrid Azure AD joined dev boxes require network line of sight to your on-premises domain controllers periodically. Without this connection, devices become unusable. ++You can learn more about each type of join and how to plan for them here: +- [Plan your hybrid Azure Active Directory join deployment](/azure/active-directory/devices/hybrid-azuread-join-plan) +- [Plan your Azure Active Directory join deployment](/azure/active-directory/devices/azureadjoin-plan) ++### Create a network connection +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. In the search box, type *Network connections* and then select **Network connections** from the list. ++1. On the **Network Connections** page, select **+Create**. + :::image type="content" source="./media/how-to-manage-network-connection/network-connections-empty.png" alt-text="Screenshot showing the Network Connections page with Create highlighted."::: ++1. Follow the steps on the appropriate tab to create your network connection. 
+ #### [**Azure AD join**](#tab/AzureADJoin/) ++ On the **Create a network connection** page, on the **Basics** tab, enter the following values: ++ |Name|Value| + |-|-| + |**Domain join type**|Select **Azure active directory join**.| + |**Subscription**|Select the subscription in which you want to create the network connection.| + |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.| + |**Name**|Enter a descriptive name for your network connection.| + |**Virtual network**|Select the virtual network you want the network connection to use.| + |**Subnet**|Select the subnet you want the network connection to use.| ++ :::image type="content" source="./media/how-to-manage-network-connection/create-native-network-connection-full-blank.png" alt-text="Screenshot showing the create network connection basics tab with Azure Active Directory join highlighted."::: ++ #### [**Hybrid Azure AD join**](#tab/HybridAzureADJoin/) ++ On the **Create a network connection** page, on the **Basics** tab, enter the following values: ++ |Name|Value| + |-|-| + |**Domain join type**|Select **Hybrid Azure active directory join**.| + |**Subscription**|Select the subscription in which you want to create the network connection.| + |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.| + |**Name**|Enter a descriptive name for your network connection.| + |**Virtual network**|Select the virtual network you want the network connection to use.| + |**Subnet**|Select the subnet you want the network connection to use.| + |**AD DNS domain name**| The DNS name of the Active Directory domain that you want to use for connecting and provisioning Cloud PCs. For example, corp.contoso.com. | + |**Organizational unit**| An organizational unit (OU) is a container within an Active Directory domain, which can hold users, groups, and computers. | + |**AD username UPN**| The username, in user principal name (UPN) format, that you want to use for connecting the Cloud PCs to your Active Directory domain. For example, svcDomainJoin@corp.contoso.com. This service account must have permission to join computers to the domain and, if set, the target OU. | + |**AD domain password**| The password for the user specified above. | ++ :::image type="content" source="./media/how-to-manage-network-connection/create-hybrid-network-connection-full-blank.png" alt-text="Screenshot showing the create network connection basics tab with Hybrid Azure Active Directory join highlighted."::: ++ ++Use the following steps to finish creating your network connection, for both Azure AD join and Hybrid Azure AD join: + 1. Select **Review + Create**. ++ 1. On the **Review** tab, select **Create**. ++ 1. When the deployment is complete, select **Go to resource**. You'll see the Network Connection overview page. + ++## Attach network connection to dev center +You need to attach a network connection to a dev center before it can be used in projects to create dev box pools. ++1. In the [Azure portal](https://portal.azure.com), in the search box, type *Dev centers* and then select **Dev centers** from the list. ++1. Select the dev center you created and select **Networking**. + +1. Select **+ Add**. + +1. In the **Add network connection** pane, select the network connection you created earlier, and then select **Add**. 
+ + :::image type="content" source="./media/how-to-manage-network-connection/add-network-connection.png" alt-text="Screenshot showing the Add network connection pane."::: ++After creation, several health checks are run on the network. You can view the status of the checks on the resource overview page. Network connections that pass all the health checks can be added to a dev center and used in the creation of dev box pools. The dev boxes within the dev box pools will be created and domain joined in the location of the VNet assigned to the network connection. +++To resolve any errors, refer to the [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection). +++## Remove a network connection from a dev center +You can remove a network connection from a dev center if you no longer want it to be used to connect to network resources. Network connections can't be removed if they are in use by one or more dev box pools. ++1. In the [Azure portal](https://portal.azure.com), in the search box, type *Dev centers* and then select **Dev centers** from the list. ++1. Select the dev center you created and select **Networking**. + +1. Select the network connection you want to remove and then select **Remove**. ++ :::image type="content" source="./media/how-to-manage-network-connection/remove-network-connection.png" alt-text="Screenshot showing the network connection page with Remove highlighted."::: ++1. Read the warning message, and then select **Ok**. ++The network connection will no longer be available for use in the dev center. ++## Next steps ++<!-- [Manage a dev center](./how-to-manage-dev-center.md) --> +- [Quickstart: Configure a Microsoft Dev Box Project](./quickstart-configure-dev-box-project.md) |
devops-project | Azure Devops Project Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-sql-database.md | To learn more about the CI/CD pipeline, see: ## Videos -> [!VIDEO https://docs.microsoft.com/Events/Build/2018/BRK3308/player] +> [!VIDEO https://learn.microsoft.com/Events/Build/2018/BRK3308/player] |
devops-project | Retirement And Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/retirement-and-migration.md | + + Title: Retirement of DevOps Starter for Azure | Microsoft Docs +description: Retirement of Azure DevOps Starter and Migration ++documentationcenter: '' +++editor: +ms.assetid: +++ na + Last updated : 09/16/2022++++# Retirement of DevOps Starter ++Azure DevOps Starter will be retired March 31, 2023. The corresponding REST APIs for [Microsoft.DevOps](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/devops/resource-manager/Microsoft.DevOps/) and [Microsoft.VisualStudio/accounts/projects](/rest/api/visualstudio/projects) resources will be retired as well. +Customers are encouraged to use [Azure Developer CLI](/azure/developer/azure-developer-cli/overview?tabs=nodejs) instead. ++## Azure Developer CLI ++The replacement [Azure Developer CLI (azd)](/azure/developer/azure-developer-cli/overview?tabs=nodejs) is a developer command-line tool for building cloud apps. It provides commands that map to key stages in your workflow: code, build, deploy, monitor, repeat. You can use the Azure Developer CLI to create, provision, and deploy a new application in a single step. ++## Comparison between DevOps Starter and Azure Developer CLI ++| DevOps Starter | Azure Developer CLI | +| | - | +| Deploy to Azure with few clicks | A single step to deploy to Azure | +| Configures code, deployment, monitoring | Configures code, deployment, monitoring | +| Provides sample application to get started | Provides sample applications to get started | +| Allows user's repo to be deployed | Allows user's repo to be deployed | +| UI-based experience in Azure portal | CLI-based experience | ++## Migration ++There is no migration required because DevOps Starter does not store any information; it just helps users with their Day 0 getting started experience on Azure. Moving forward, the recommended way for users to get started on Azure will be [Azure Developer CLI](/azure/developer/azure-developer-cli/overview?tabs=nodejs). +++1. For choosing language, framework and target service, choose an appropriate [template](https://github.com/search?q=org:azure-samples%20topic:azd-templates) from the azd repo and run the command `azd up --template \<template-name\>` ++2. For provisioning Azure service resources, run the command `azd provision` ++3. For creating CI/CD pipelines, run the command `azd pipeline config` ++4. For application insights monitoring, run the command `azd monitor` ++For existing application deployments, **DevOps Starter does not store any information itself** and users can use the following to get the same information: ++1. Azure resource details in Azure portal - In the Azure portal, visit the resource page for which you had configured DevOps Starter. ++2. To see pipeline and deployment information, go to the corresponding GitHub Actions workflow or Azure pipeline to view runs and deployments. ++3. To see monitoring details in Application insights, go to application insights for your Azure resource and look at the monitoring charts. ++## FAQ ++### What is the difference between DevOps Starter and Azure Developer CLI? ++Both are tools that enable quick setup of application deployment to Azure and configure a CI/CD pipeline for it. They enable users to quickly get started with Azure. ++Azure Developer CLI provides more developer-friendly commands in contrast to the UI wizard for DevOps Starter. This also means better clarity with config-as-code.
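As a rough sketch of that config-as-code flow, the migration commands listed above run in sequence from any shell; the template name below is only a placeholder to substitute with one from the azd templates list:

```powershell
# Illustrative sequence only - pick a real template name from the Azure-Samples azd-templates search above.
azd up --template <template-name>   # scaffold the app, provision Azure resources, and deploy in one step
azd provision                       # (re)provision the Azure resources for the app
azd pipeline config                 # configure a CI/CD pipeline for the repo
azd monitor                         # open monitoring (Application Insights) for the deployed app
```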
++### Will I lose my application or the Azure resources if I am not able to access DevOps starter? ++No. Application code, deployments, and Azure resources that host the application will still be available. DevOps Starter does not store any of these resources. ++### Will I lose the CI/CD pipeline that I created using DevOps Starter? ++No. You can still manage CI/CD pipelines in GitHub Actions or Azure Pipelines. + |
devtest | How To Manage Reliability Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-manage-reliability-performance.md | How SRE and DevOps differ is still under discussion in the field. Some broadly a If you want to learn more about the practice of SRE, check out these links: -- [SRE in Context](/learn/modules/intro-to-site-reliability-engineering/3-sre-in-context) -- [Key SRE Principles and Practices: virtuous cycles](/learn/modules/intro-to-site-reliability-engineering/4-key-principles-1-virtuous-cycles) -- [Key SRE Principles and Practices: The human side of SRE](/learn/modules/intro-to-site-reliability-engineering/5-key-principles-2-human-side-of-sre) -- [Getting Started with SRE](/learn/modules/intro-to-site-reliability-engineering/6-getting-started) +- [SRE in Context](/training/modules/intro-to-site-reliability-engineering/3-sre-in-context) +- [Key SRE Principles and Practices: virtuous cycles](/training/modules/intro-to-site-reliability-engineering/4-key-principles-1-virtuous-cycles) +- [Key SRE Principles and Practices: The human side of SRE](/training/modules/intro-to-site-reliability-engineering/5-key-principles-2-human-side-of-sre) +- [Getting Started with SRE](/training/modules/intro-to-site-reliability-engineering/6-getting-started) ## Service Level Agreements |
dns | Dns Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md | For more information, see [Overview of Azure DNS alias records](dns-alias.md). * For frequently asked questions about Azure DNS, see the [Azure DNS FAQ](dns-faq.yml). -* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns). +* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns). |
dns | Dns Private Resolver Get Started Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md | description: In this quickstart, you learn how to create and manage your first p Previously updated : 09/16/2022 Last updated : 09/20/2022 $targetDNS2 = New-AzDnsResolverTargetDnsServerObject -IPAddress 192.168.1.3 -Por $targetDNS3 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.0.0.4 -Port 53 $targetDNS4 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.5.5.5 -Port 53 $forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Internal" -DomainName "internal.contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1,$targetDNS2)-$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "AzurePrivate" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS3 +$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "AzurePrivate" -DomainName "azure.contoso.com" -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS3 $forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Wildcard" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS4 ``` |
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | Azure DNS Private Resolver is available in the following regions: - West US 3 - East US - North Central US-- Central US EUAP-- East US 2 EUAP - West Central US - East US 2 - West Europe Outbound endpoints have the following limitations: ### Ruleset restrictions - Rulesets can have no more than 25 rules in Public Preview.-- Rulesets can't be linked across different subscriptions in Public Preview. ### Other restrictions Outbound endpoints have the following limitations: * Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md) * Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers. * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns). +* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns). |
dns | Private Dns Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md | Title: What is Azure Private DNS? description: In this article, get started with an overview of the private DNS hosting service on Microsoft Azure. -+ Previously updated : 04/09/2021- Last updated : 09/20/2022+ #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service. Azure Private DNS has the following limitations: * A specific virtual network can be linked to only one private zone if automatic registration of VM DNS records is enabled. You can however link multiple virtual networks to a single DNS zone. * Reverse DNS works only for private IP space in the linked virtual network * Reverse DNS for a private IP address in linked virtual network will return `internal.cloudapp.net` as the default suffix for the virtual machine. For virtual networks that are linked to a private zone with autoregistration enabled, reverse DNS for a private IP address returns two FQDNs: one with default the suffix `internal.cloudapp.net` and another with the private zone suffix.-* Conditional forwarding isn't currently natively supported. To enable resolution between Azure and on-premises networks, see [Name resolution for VMs and role instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md). +* Conditional forwarding is supported using [Azure DNS Private Resolver](dns-private-resolver-overview.md). To enable resolution between Azure and on-premises networks, see [Name resolution for VMs and role instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md). ## Pricing For pricing information, see [Azure DNS Pricing](https://azure.microsoft.com/pri * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure. -* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns). +* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns). |
dns | Private Resolver Endpoints Rulesets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md | A query for `secure.store.azure.contoso.com` will match the **AzurePrivate** rul * Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md) * Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers. * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns). +* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns). |
dns | Private Resolver Hybrid Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md | The path for this query is: client's default DNS resolver (10.100.0.2) > on-prem * Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md). * Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md) * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns). +* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns). |
dns | Find Unhealthy Dns Records | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md | The following Azure PowerShell script finds unhealthy DNS records in Azure DNS. ```azurepowershell-interactive <#- 1. Install Pre requisites Az PowerShell modules (https://docs.microsoft.com/powershell/azure/install-az-ps?view=azps-5.7.0) + 1. Install Pre requisites Az PowerShell modules (https://learn.microsoft.com/powershell/azure/install-az-ps?view=azps-5.7.0) 2. From PowerShell prompt navigate to folder where the script is saved and run the following command .\ Get-AzDNSUnhealthyRecords.ps1 -SubscriptionId <subscription id> -ZoneName <zonename> Replace subscription id with subscription id of interest. This script uses the following commands to create the deployment. Each item in t ## Next steps -For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). +For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). |
dns | Tutorial Dns Private Resolver Failover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-dns-private-resolver-failover.md | You can now demonstrate that DNS resolution works when one of the connections is * Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md). * Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers. * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns). -+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns). |
education-hub | Azure Students Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/azure-students-program.md | To get detailed terms of use for Azure for Students, see the [offer terms](https - [Get help with login errors](troubleshoot-login.md) - [Download software (Azure for Students)](download-software.md) - [Azure for Students Starter overview](azure-students-starter-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)+- [Microsoft Learn training](/training/) |
education-hub | Azure Students Starter Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/azure-students-starter-program.md | any time to a pay-as-you-go subscription to get access to all Azure services, us - [Get help with login errors](troubleshoot-login.md) - [Download software (Azure for Students Starter)](download-software.md) - [Azure for Students program](azure-students-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)+- [Microsoft Learn training](/training/) |
education-hub | Download Software | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/download-software.md | Have your students follow this procedure to download the software developer tool - [Get help with login errors](troubleshoot-login.md) - [Azure for Students](azure-students-program.md) - [Azure for Students Starter](azure-students-starter-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)+- [Microsoft Learn training](/training/) - [Frequently asked questions](./program-faq.yml) |
education-hub | Set Up Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/set-up-access.md | Portal](https://azureforeducation.microsoft.com/account/Subscriptions). Once app ## For students, faculty, and administrators Students access Azure dev tools through the [Education Hub](https://aka.ms/devtoolsforteaching). -Students and faculty alike can get access to all the software download benefits through the Education Hub. The Education Hub is built within the Azure portal and it provides your students easy access to the entire catalog of software, as well as access to the entire [Microsoft Learn](/learn/) catalog. +Students and faculty alike can get access to all the software download benefits through the Education Hub. The Education Hub is built within the Azure portal and it provides your students easy access to the entire catalog of software, as well as access to the entire [Microsoft Learn training](/training/) catalog. ## Next steps - [Manage student accounts](manage-students.md) |
energy-data-services | Concepts Csv Parser Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-csv-parser-ingestion.md | + + Title: Microsoft Energy Data Services Preview CSV parser ingestion workflow concept #Required; page title is displayed in search results. Include the brand. +description: Learn how to use CSV parser ingestion. #Required; article description that is displayed in search results. ++++ Last updated : 08/18/2022++++# CSV parser ingestion concepts ++One of the simplest generic data formats that are supported by the Microsoft Energy Data Services Preview ingestion process is the "comma separated values" format, which is called the CSV format. The CSV format is processed through a CSV Parser DAG definition. ++CSV Parser DAG implements an ELT approach to data loading, that is, data is loaded after it's extracted. Customers can use CSV Parser DAG to load data that doesn't match the [OSDU™](https://osduforum.org) canonical schema. Customers need to create and register a custom schema using the schema service matching the format of the CSV file. +++## What does CSV ingestion do? ++* **Schema validation** - ensures the CSV file conforms to the schema. +* **Type conversion** - ensures that the type of a field is as defined and converts it to the defined type if otherwise. +* **ID generation** - used to upload into the storage service. Because the ID generation logic is idempotent, it helps avoid duplicate data on the platform in scenarios where the ingestion failed halfway. +* **Reference handling** - enables customers to refer to actual data on the platform and access it. +* **Persistence** - persists each row after validations by calling the storage service API. Once persisted, the data is available for consumption through the search and storage service APIs. ++## CSV Parser ingestion functionality ++The CSV parser ingestion currently supports the following functionality as a one-step DAG: ++- CSV file is parsed as per the schema (one row in CSV = 1 record ingested into the data platform) +- CSV file contents match the contents of the provided schema. + - **Success**: validate the schema vs. the header of the CSV file and the values of the first n rows. Use the schema for all downstream tasks to build the metadata. + - **Fail**: log the error(s) in the schema validation, proceed with ingestion if errors are non-breaking +- Convert all characters to UTF8, and gracefully handle/replace characters that can't be converted to UTF8. +- Unique data identity for an object in the Data Platform - CSV Ingestion generates a Unique Identifier (ID) for each record by combining source, entity type and a base64 encoded string formed by concatenating natural key(s) in the data. In case the schema used for CSV Ingestion doesn't contain any natural keys, the storage service will generate random IDs for every record +- Typecast to JSON-supported data types: + - **Number** - Typecast integers, doubles, floats, etc. as described in the schema to "number". Some common spatial formats, such as Degrees/Minutes/Seconds (DMS) or Easting/Northing, should be typecast to "String." Special handling of these string formats is done in the Spatial Data Handling Task. + - **Date** - Typecast dates as described in the schema to a date, doing a date format conversion to ISO8601TZ format (for fully qualified dates).
Some date fragments (such as years) can't be easily converted to this format and should be typecast to a number instead; textual date representations, for example, "July", should be typecast to string. + - **Others** - All other encountered attributes should be typecast as string. +- Stores a batch of records in the context of a particular ingestion job. Fragments/outputs from the previous steps are collected into a batch, and formatted in a way that is compatible with the Storage Service with the appropriate additional information, such as the ACLs, legal tags, etc. +- Supports frame of reference handling: + - **Unit** - converting declared frame of reference information into the appropriate persistable reference as per the Unit Service. This information is stored in the meta[] block. + - **CRS** - the CRS Frame of Reference (FoR) information should be included in the schema of the data, including the source CRS (either geographic or projected), and if projected, the CRS info and persistable reference (if provided in schema) information is stored in the meta[] block. +- Creates relationships as declared in the source schema. +- Supports publishing status of ingested/failed records on GSM article ++## CSV parser ingestion components ++* **File service** - Facilitates management of files on the data platform. Uploading, secure discovery, and downloading of files are capabilities provided by the file service. +* **Schema service** - Facilitates management of schemas on the data platform. Creating, fetching, and searching for schemas are capabilities provided by the schema service. +* **Storage Service** - JSON object store that facilitates storage of metadata information for domain entities. Also raises storage events when records are saved using the storage service. +* **Unit Service** - Facilitates management and conversion of units. +* **Workflow service** - Facilitates management of workflows on the data platform. Wrapper over the workflow engine that abstracts many technical nuances of the workflow engine from consumers. +* **Airflow engine** - Heart of the ingestion framework. Actual workflow orchestrator. +* **DAGs** - Based on the Directed Acyclic Graph concept, these are workflows that are authored, orchestrated, managed and monitored by the workflow engine. ++## CSV ingestion components diagram +++## CSV ingestion sequence diagram +++## CSV parser ingestion workflow ++### Prerequisites ++* To trigger APIs, the user must have the below access and a valid authorization token + * Access to + * Access to Workflow service. + * Following is a list of service level groups that you need access to in order to register and execute a DAG using the workflow service. + * "service.workflow.creator" + * "service.workflow.viewer" + * "service.workflow.admin" ++### Steps to execute a DAG using Workflow Service ++* **Create schema** - Definition of the kind of records that will be created as an outcome of the ingestion workflow. The schema is uploaded through the schema service. The schema needs to be registered using the schema service. +* **Uploading the file** - Use the file service to upload a file. The file service provides a signed URL, which enables the customers to upload the data without credential requirements. +* **Create Metadata record for the file** - Use the file service to create metadata. The metadata enables discovery of the file and secure downloads. It also provides a mechanism to provide information associated with the file that is needed during the processing of the file.
+* The file ID created is provided to the CSV parser, which takes care of downloading the file, ingesting the file, and ingesting the records with the help of the workflow service. The customers also need to register the workflow; the CSV parser DAG is already deployed in Airflow. +* **Trigger the Workflow service** - To trigger the workflow, the customer needs to provide the file ID, the kind of the file, and the data partition ID. Once the workflow is triggered, the customer gets a run ID. +The workflow service provides an API to monitor the status of each workflow run. Once the CSV parser run is completed, data is ingested into the OSDU™ Data Platform, and can be searched through the search service. ++OSDU™ is a trademark of The Open Group. ++## Next steps +Advance to the CSV parser tutorial and learn how to perform a CSV parser ingestion. +> [!div class="nextstepaction"] +> [Tutorial: Sample steps to perform a CSV parser ingestion](tutorial-csv-ingestion.md) |
energy-data-services | Concepts Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-ddms.md | + + Title: Domain data management services concepts #Required; page title is displayed in search results. Include the brand. +description: Learn how to use Domain Data Management Services #Required; article description that is displayed in search results. ++++ Last updated : 08/18/2022++++# Domain data management service concepts ++**Domain Data Management Service (DDMS)** is a platform component that extends the [OSDU™](https://osduforum.org) core data platform with a domain-specific model and optimizations. DDMS is a mechanism of a platform extension that: ++* delivers optimized handling of data for each (non-overlapping) "domain." +* single vertical discipline or business area, for example, Petrophysics, Geophysics, Seismic +* a functional aspect of one or more vertical disciplines or business areas, for example, Earth Model +* delivers high performance capabilities not supported by OSDU™ generic normal APIs. +* can help achieve the extension of OSDU™ scope to new business areas. +* may be developed in a distributed manner with separate resources/sponsors. ++The OSDU™ Technical Standard defines the following OSDU™ application types: ++| Application Type | Description | +| | -- | +| OSDU™ Embedded Applications | An application developed and managed within the OSDU™ Open-Source community that is built on and deployed as part of the OSDU™ Data Platform distribution. | +| ISV Extension Applications | An application, developed and managed in the marketplace, that is NOT part of the OSDU™ Data Platform distributions, and when selected is deployed within the OSDU™ Data Platform as an add-on | +| ISV Third Party Applications | An application, developed and managed in the marketplace, that integrates with the OSDU™ Data Platform, and runs outside the OSDU™ Data Platform | +++| Characteristics | Embedded | Extension | Third Party | +| -- | - | | | +| Developed, managed, and deployed by | The OSDU™ Data Platform | ISV | ISV | +| Software License | Apache 2 | ISV | ISV | +| Mandatory as part of an OSDU™ distribution | Yes | No | No | +| Replaceable | Yes, with preservation of behavior | Yes | Yes | +| Architecture Compliance | The OSDU™ Standard | The OSDU™ Standard | ISV | +| Examples | OS CRS <br /> Wellbore DDMS | ESRI CRS <br /> Petrel DS | Petrel | +++## Who did we build this for? ++**IT Developers** build systems to connect data to domain applications (internal and external - for example, Petrel), which enables data managers to deliver projects to geoscientists. The DDMS suite on Microsoft Energy Data Services helps automate these workflows and eliminates time spent managing updates. ++**Geoscientists** use domain applications for key Exploration and Production workflows such as Seismic interpretation and Well tie analysis. While these users won't directly interact with the DDMS, their expectations for data performance and accessibility will drive requirements for the DDMS in the Foundation Tier. Azure will enable geoscientists to stream cross domain data instantly in OSDU™ compatible applications (for example, Petrel) connected to Microsoft Energy Data Services. ++**Data managers** spend a significant amount of time fulfilling requests for data retrieval and delivery. The Seismic, Wellbore, and Petrel Data Services enable them to discover and manage data in one place while tracking version changes as derivatives are created.
++## Platform landscape ++Microsoft Energy Data Services is an OSDU™ compatible product, meaning that its landscape and release model are dependent on OSDU™. ++Currently, OSDU™ certification and release process are not fully defined yet and this topic should be defined as a part of the Microsoft Energy Data Services Foundation Architecture. ++OSDU™ R3 M8 is the base for the scope of the Microsoft Energy Data Services Foundation Private Preview - as a latest stable, tested version of the platform. ++## Learn more: OSDU™ DDMS community principles ++[OSDU™ community DDMS Overview](https://community.opengroup.org/osdu/documentation/-/wikis/OSDU™-(C)/Design-and-Implementation/Domain-&-Data-Management-Services#ddms-requirements) provides an extensive overview of DDMS motivation and community requirements from a user, technical, and business perspective. These principles are extended to Microsoft Energy Data Services. ++## DDMS requirements ++A DDMS meets the following requirements, further classified into capability, architectural, operational and openness/extensibility requirements: ++|**#** | **Description** | **Business rationale** | **Principle** | +||||| +| 1 | Data can be ingested with low friction | Need to seamlessly integrate with systems of record, to start with the industry standards | Capability | +| 2 | New data is available in workflows with minimal latency | Deliver new data in context of the end-user workflow - seamlessly and fast. | Capability | +| 3 | Domain data and services are highly usable | The business anticipates a large set of use-cases where domain data is used in various workflows. Need to make the consumption simple and efficient | Capability | +| 4 | Scalable performance for E&P workflows | E&P data has specific access requirements, way beyond standard cloud storage. Scalable E&P data requires E&P workflow experience and insights | Capability | +| 5 | Data is available for visual analytics and discovery (Viz/BI) | Deliver minimum set of visualization capabilities on the data | Capability | +| 6 | One source of truth for data | Drive towards reduction of duplication | Capability | +| 7 | Data is secured, and access governed | Securely stored and managed | Architectural | +| 8 | All data is preserved and immutable | Ability to associate data to milestones and have data/workflow traceable across the ecosystem | Architectural | +| 9 | Data is globally identifiable | No risk of overwriting or creating non-unique relationships between data and activities | Architectural | +| 10 | Data lineage is tracked | Required for auditability, re-creation of the workflow, and learning from work previously done | Architectural | +| 11 | Data is discoverable | Possible to find and consume back ingested data | Architectural | +| 12 | Provisioning | Efficient provisioning of the DDMS and auto integration with the Data Ecosystem | Operational | +| 13 | Business Continuity | Deliver on industry expectation for business continuity (RPO, RTO, SLA) | Operational | +| 14 | Cost | Cost efficient delivery of data | Operational | +| 15 | Auditability | Deliver required forensics to support cyber security incident investigations | Operational | +| 16 | Accessibility | Deliver technology | Operational | +| 17 | Domain-Centric Data APIs | | Openness and Extensibility | +| 18 | Workflow composability and customizations | | Openness and Extensibility | +| 19 | Data-Centric Extensibility | | Openness and Extensibility | ++OSDU™ is a trademark of The Open Group.
++## Next steps +Advance to the Seismic DDMS sdutil tutorial to learn how to use sdutil to load seismic data into the seismic store. +> [!div class="nextstepaction"] +> [Tutorial: Seismic store sdutil](tutorial-seismic-ddms-sdutil.md) |
energy-data-services | Concepts Entitlements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md | + + Title: Microsoft Energy Data Services Preview entitlement concepts #Required; page title is displayed in search results. Include the brand. +description: This article describes the various concepts regarding the entitlement services in Microsoft Energy Data Services Preview #Required; article description that is displayed in search results. ++++ Last updated : 08/19/2022++++# Entitlement service ++Access management is a critical function for any service or resource. The entitlement service helps you manage who has access to your Microsoft Energy Data Service instance, what they can do with it, and what services they have access to. +++## Groups ++The entitlements service of Microsoft Energy Data Services allows you to create groups, and an entitlement group defines permissions on services/data sources for your Microsoft Energy Data Services instance. Users added by you to that group obtain the associated permissions. ++The main motivation for the entitlements service is data authorization, but the functionality enables three use cases: ++- **Data groups** used for data authorization (for example, data.welldb.viewers, data.welldb.owners) +- **Service groups** used for service authorization (for example, service.storage.user, service.storage.admin) +- **User groups** used for hierarchical grouping of user and service identities (for example, users.datalake.viewers, users.datalake.editors) ++## Users ++For each group, you can either add a user as an OWNER or a MEMBER. The only difference is that an OWNER of a group can manage the members of that group. +> [!NOTE] +> Do not delete the OWNER of a group unless there is another OWNER to manage the users. ++## Group naming ++All group identifiers (emails) will be of the form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention has been adopted such that the group's name should start with the word "data." for data groups; "service." for service groups; and "users." for user groups. An exception is when a data partition is provisioned. When a data partition is created, so is a corresponding group: users (for example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created). ++## Permissions/roles ++The OSDU™ Data Ecosystem user groups provide an abstraction from permission and user management and--without a user creating their own groups--the following user groups exist by default: ++- **users.datalake.viewers**: viewer level authorization for OSDU Data Ecosystem services. +- **users.datalake.editors**: editor level authorization for OSDU Data Ecosystem services and authorization to create the data using the OSDU™ Data Ecosystem storage service. +- **users.datalake.admins**: admin level authorization for OSDU Data Ecosystem services. ++A full list of all API endpoints for entitlements can be found in [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api). We have provided a few illustrations below. Depending on the resources you have, you need to use the entitlements service in different ways than what is shown below.
See [Entitlement permissions](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#permissions) for the endpoints and the corresponding minimum level of permissions required. ++> [!NOTE] +> The OSDU documentation refers to V1 endpoints, but the scripts noted in this documentation refer to V2 endpoints, which work and have been successfully validated. ++OSDU™ is a trademark of The Open Group. ++## Next steps +<!-- Add a context sentence for the following links --> +> [!div class="nextstepaction"] +> [How to manage users](how-to-manage-users.md) |
energy-data-services | Concepts Index And Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-index-and-search.md | + + Title: Microsoft Energy Data Services Preview - index and search workflow concepts #Required; page title is displayed in search results. Include the brand. +description: Learn how to use indexing and search workflows #Required; article description that is displayed in search results. ++++ Last updated : 08/23/2022+++#Customer intent: As a developer, I want to understand indexing and search workflows so that I could search for ingested data in the platform. ++# Microsoft Energy Data Services Preview indexing and search workflows ++All data and associated metadata ingested into the platform are indexed to enable search. The metadata is accessible to ensure awareness even when the data isn't available. +++## Indexer Service ++The `Indexer Service` provides a mechanism for indexing documents that contain structured and unstructured data. ++> [!NOTE] +> This service is not a public service and only meant to be called internally by other core platform services. + +### Indexing workflow ++The below diagram illustrates the Indexing workflow: +++When a customer loads data into the platform, the associated metadata is ingested using the `Storage service`. The `Storage service` provides a set of APIs to manage the entire metadata lifecycle such as ingestion (persistence), modification, deletion, versioning, retrieval, and data schema management. Each storage metadata record created by the `Storage service` contains a *kind* parameter that refers to an underlying *schema*. This schema determines the attributes that will be indexed by the `Indexer service`. + +When the `Storage service` creates a metadata record, it raises a *recordChangedMessages* event that is collected in the Azure Service Bus (message queue). The `Indexer queue` service pulls the message from the Azure Service Bus, performs basic validation and sends it over to the `Indexer service`. If there are any failures in sending the messages to the `Indexer service`, the `Indexer queue` service retries sending the message up to a maximum allowed configurable retry count. If the retry attempts fail, a negative acknowledgment is sent to the Azure Service Bus, which then archives the message. ++When the *recordChangedMessages* event is received by the `Indexer Service`, it fetches the required schemas from the schema cache or through the `Schema service` APIs. The `Indexer Service` then creates a new index within Elasticsearch (if not already present), and then sends a bulk query to create or update the records as needed. If the response from Elasticsearch is a failure response of type *service unavailable* or *request timed out*, then the `Indexer Service` creates *recordChangedMessages* for these failed record IDs and puts the message in the Azure Service Bus. These messages will again be pulled by the `Indexer Queue` service and will follow the same flow as before. + ++For more information, see [Indexer service OSDU™ documentation](https://community.opengroup.org/osdu/platform/system/indexer-service/-/blob/release/0.15/docs/tutorial/IndexerService.md) provides information on indexer service ++## Search workflow + +`Search service` provides a mechanism for discovering indexed metadata documents. The Search API supports full-text search on string fields, range queries on date, numeric, or string field, etc. along with geo-spatial searches. 
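To make the search workflow concrete, here's a minimal, hedged sketch of a full-text query. It assumes the `Search service` is exposed at the OSDU path `/api/search/v2/query` (not confirmed by this article) and uses placeholder values for the URL, data partition, kind, query, and token.

```bash
# Hypothetical full-text search across a well-known kind; adjust kind, query, and limit as needed.
curl --location --request POST '<url>/api/search/v2/query' \
  --header 'data-partition-id: <data-partition>' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer {{TOKEN}}' \
  --data-raw '{
    "kind": "osdu:wks:master-data--Well:1.0.0",
    "query": "data.FacilityName:\"<well-name>\"",
    "limit": 10
  }'
```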
++For a detailed tutorial on the `Search service`, refer to the [Search service OSDU™ documentation](https://community.opengroup.org/osdu/platform/system/search-service/-/blob/release/0.15/docs/tutorial/SearchService.md). + ++## Reindex workflow +The Reindex API allows users to reindex a kind without re-ingesting the records via the Storage API; a sample call is sketched after this section. For detailed information, refer to the +[Reindex OSDU™ documentation](https://community.opengroup.org/osdu/platform/system/indexer-service/-/blob/release/0.15/docs/tutorial/IndexerService.md#reindex). ++OSDU™ is a trademark of The Open Group. ++## Next steps +<!-- Add a context sentence for the following links --> +> [!div class="nextstepaction"] +> [Domain data management service concepts](concepts-ddms.md) |
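As referenced in the reindex workflow above, the following is a hedged sketch of a reindex request for a single kind. The `/api/indexer/v2/reindex` path and the `force_clean` parameter are assumptions based on the OSDU Indexer service and may differ in your deployment; check the linked documentation for the exact contract.

```bash
# Hypothetical reindex request for one kind; path and query parameter are assumptions.
curl --location --request POST '<url>/api/indexer/v2/reindex?force_clean=false' \
  --header 'data-partition-id: <data-partition>' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer {{TOKEN}}' \
  --data-raw '{
    "kind": "osdu:wks:master-data--Well:1.0.0"
  }'
```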
energy-data-services | Concepts Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md | + + Title: Microsoft Energy Data Services Preview manifest ingestion concepts #Required; page title is displayed in search results. Include the brand. +description: This article describes manifest ingestion concepts #Required; article description that is displayed in search results. ++++ Last updated : 08/18/2022++++# Manifest-based ingestion concepts ++Manifest-based file ingestion provides end-users and systems a robust mechanism for loading metadata in Microsoft Energy Data Services Preview instance. A manifest is a JSON document that has a pre-determined structure for capturing entities that conform to the [OSDU™](https://osduforum.org/) Well-known Schema (WKS) definitions. ++Manifest-based file ingestion doesn't understand the contents of the file or doesn't parse the file. It just creates a metadata record for the file and makes it searchable. It doesn't infer or does anything on top of the file. +++## Understanding the manifest ++The manifest schema has containers for the following entities ++* **ReferenceData** (*zero or more*) - A set of permissible values to be used by other (master or transaction) data fields. Examples include *Unit of Measure (feet)*, *Currency*, etc. +* **MasterData** (*zero or more*) - A single source of basic business data used across multiple systems, applications, and/or process. Examples include *Wells* and *Wellbores* +* **WorkProduct (WP)** (*one - must be present if loading WorkProductComponents*) - A session boundary or collection (project, study) encompasses a set of entities that need to be processed together. As an example, you can take the ingestion of one or more log collections. +* **WorkProductComponents (WPC)** (*zero or more - must be present if loading datasets*) - A typed, smallest, independently usable unit of business data content transferred as part of a Work Product (a collection of things ingested together). Each Work Product Component (WPC) typically uses reference data, belongs to some master data, and maintains a reference to datasets. Example: *Well Logs, Faults, Documents* +* **Datasets** (*zero or more - must be present if loading WorkProduct and WorkProductComponent records*) - Each Work Product Component (WPC) consists of one or more data containers known as datasets. ++## Manifest-based file ingestion workflow steps ++1. A manifest is submitted to the Workflow Service using the manifest ingestion workflow name (for example, "Osdu_ingest") +2. Once the request is validated and the user authorization is complete, the workflow service will load and initiate the manifest ingestion workflow. +3. The first step is to check the syntax of the manifest. + 1. Retrieve the **kind** property of the manifest + 2. Retrieve the **schema definition** from the Schema service for the manifest kind + 3. Validate that the manifest is syntactically correct according to the manifest schema definitions. + 4. For each Reference data, Master data, Work Product, Work Product Component, and Dataset, do the following activities: + 1. Retrieve the **kind** property. + 2. Retrieve the **schema definition** from the Schema service for the kind + 3. Validate that the entity is syntactically correct according to the schema definition and submits the manifest to the Workflow Service + 4. Validate that mandatory attributes exist in the manifest + 5. 
Validate that all property values follow the patterns defined in the schemas + 6. Validate that no extra properties are present in the manifest + 5. Any entity that doesn't pass the syntax check is rejected +4. The content is checked for a series of validation rules + 1. Validation of referential integrity between Work Product Components and Datasets + 1. There are no orphan Datasets defined in the WP (each Dataset belongs to a WPC) + 2. Each Dataset defined in the WPC is described in the WP Dataset block + 3. Each WPC is linked to at least + 2. Validation that referenced parent data exists + 3. Validation that Dataset file paths aren't empty +5. Process the contents into storage + 1. Write each valid entity into the data platform via the Storage API + 2. Capture the ID generated to update surrogate-keys where surrogate-keys are used +6. Workflow exits ++## Manifest ingestion components ++* **Workflow Service** is a wrapper service on top of the Airflow workflow engine, which orchestrates the ingestion workflow. Airflow is the chosen workflow engine by the [OSDU™](https://osduforum.org/) community to orchestrate and run ingestion workflows. Airflow isn't directly exposed to clients, instead its features are accessed through the workflow service. +* **File Service** is used to upload files, file collections, and other types of source data to the data platform. +* **Storage Service** is used to save the manifest records into the data platform. +* **Airflow engine** is the workflow engine that executes DAGs (Directed Acyclic Graphs). +* **Schema Service** stores schemas used in the data platform. Schemas are being referenced during the Manifest-based file ingestion. +* **Entitlements Service** manages access groups. This service is used during the ingestion for verification of ingestion permissions. This service is also used during the metadata record retrieval for validation of "read" writes. +* **Search Service** is used to perform referential integrity check during the manifest ingestion process. ++## Manifest ingestion workflow sequence +++OSDU™ is a trademark of The Open Group. ++## Next steps +Advance to the manifest ingestion tutorial and learn how to perform a manifest-based file ingestion +> [!div class="nextstepaction"] +> [Tutorial: Sample steps to perform a manifest-based file ingestion](tutorial-manifest-ingestion.md) |
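To make the workflow steps above concrete, here's a hedged sketch of how a manifest might be submitted to the Workflow service (step 1). The endpoint follows the `workflowRun` pattern used in the conversion how-to articles and the `Osdu_ingest` DAG name comes from the example above, but the `executionContext` fields (`Payload`, `manifest`) are assumptions modeled on the OSDU manifest ingestion tutorial and may differ in your deployment.

```bash
# Hypothetical manifest submission to the Workflow service; the executionContext shape is an assumption.
curl --location --request POST '<url>/api/workflow/v1/workflow/Osdu_ingest/workflowRun' \
  --header 'data-partition-id: <data-partition>' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer {{TOKEN}}' \
  --data-raw '{
    "executionContext": {
      "Payload": {
        "AppKey": "test-app",
        "data-partition-id": "<data-partition>"
      },
      "manifest": {
        "kind": "osdu:wks:Manifest:1.0.0",
        "ReferenceData": [],
        "MasterData": [],
        "Data": {}
      }
    }
  }'
```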
energy-data-services | How To Add More Data Partitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md | + + Title: How to manage partitions +description: This is a how-to article on managing data partitions using the Microsoft Energy Data Services Preview instance UI. ++++ Last updated : 07/05/2022++++# How to manage data partitions? +++This article describes how you can add data partitions to an existing Microsoft Energy Data Services (MEDS) instance. The concept of "data partitions" in MEDS comes from [OSDU™](https://osduforum.org/), where a single deployment can contain multiple partitions. ++Each partition provides the highest level of data isolation within a single deployment. All access rights are governed at a partition level. Data is separated in a way that allows the partition's life cycle and deployment to be handled independently. (See [Partition Service](https://community.opengroup.org/osdu/platform/home/-/issues/31) in OSDU™.) ++> [!NOTE] +> You can create a maximum of five data partitions in one MEDS instance. Currently, in line with the data partition capabilities that are available in OSDU™, you can only create data partitions; you can't delete or rename existing data partitions. +++## Create a data partition ++1. **Open the "Data Partitions" menu item from the left panel of the MEDS overview page.** ++ [Screenshot: data partitions in the MEDS overview page](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png#lightbox) ++2. **Select "Create"** ++ The page shows a table of all data partitions in your MEDS instance with the status of each data partition next to it. Selecting the "Create" option at the top opens a right pane for the next steps. ++ [Screenshot: starting data partition creation](media/how-to-add-more-data-partitions/start-create-data-partition.png#lightbox) ++3. **Choose a name for your data partition** ++ Each data partition name needs to be 1-10 characters long and a combination of lowercase letters, numbers, and hyphens only (a quick local check of this rule is sketched after this section). The data partition name will be prepended with the name of the MEDS instance. Choose a name for your data partition and select Create. As soon as you select Create, the deployment of the underlying data partition resources, such as Azure Cosmos DB and storage accounts, starts. ++ >[!NOTE] + >It generally takes 15-20 minutes to create a data partition. ++ [Screenshot: data partition name validation](media/how-to-add-more-data-partitions/create-data-partition-name-validation.png#lightbox) ++ If the deployment is successful, the status changes to "Created successfully", with or without selecting "Refresh" at the top. ++ [Screenshot: data partition creation progress](media/how-to-add-more-data-partitions/create-progress.png#lightbox) ++## Delete a failed data partition ++The data partition deployment triggered in the previous process might fail in some cases due to issues such as quota limits being reached, transient ARM template deployment issues, data seeding failures, or failures in connecting to the underlying AKS clusters. ++The status of such data partitions shows as "Creation Failed". You can delete these deployments using the "Delete" button that appears next to each failed data partition deployment. This deletion cleans up any records created in the backend. You can retry creating the data partitions later. +++[Screenshot: deleting failed data partitions](media/how-to-add-more-data-partitions/delete-failed-instances.png#lightbox) ++OSDU™ is a trademark of The Open Group. ++## Next steps ++You can start loading data into your new data partitions. ++> [!div class="nextstepaction"] +> [Load data using manifest ingestion](tutorial-manifest-ingestion.md) |
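As referenced in step 3 above, the naming rule can be checked locally before you submit the form. This is a small bash sketch of the stated constraint (1-10 characters; lowercase letters, numbers, and hyphens only), not an official validator; the example name is hypothetical.

```bash
# Local sanity check of the data partition naming rule described above (not an official validator).
partition_name="part-01"
if [[ "$partition_name" =~ ^[a-z0-9-]{1,10}$ ]]; then
  echo "'$partition_name' matches the stated naming rule"
else
  echo "'$partition_name' violates the stated naming rule"
fi
```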
energy-data-services | How To Convert Segy To Ovds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md | + + Title: Microsoft Energy Data Services Preview - How to convert a segy to ovds file #Required; page title is displayed in search results. Include the brand. +description: This article explains how to convert a SGY file to oVDS file format #Required; article description that is displayed in search results. ++++ Last updated : 08/18/2022++++# How to convert a SEG-Y file to oVDS? ++Seismic data stored in the industry standard SEG-Y format can be converted to Open VDS (oVDS) format for use in applications via the Seismic DMS. ++[OSDU™ SEG-Y to oVDS conversation](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/tree/release/0.15) +++## Prerequisites ++### Postman ++* Download and install [Postman](https://www.postman.com/) desktop app. +* Import the [oVDS Conversions.postman_collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M9/Azure-M9/Services/DDMS/oVDS_Conversions.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly +* Microsoft Energy Data Services Preview instance is created already +* Clone the **sdutil** repo as shown below: + ```markdown + git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git ++ git checkout azure/stable + ``` ++## Step by step guide ++1. Check if VDS is registered with the workflow service or not: ++ ```markdown + curl --location --request GET '<url>/api/workflow/v1/workflow/' + --header 'Data-Partition-Id: <datapartition>' + --header 'Content-Type: application/json' + --header 'Authorization: Bearer {{TOKEN}} + ``` ++ You should see VDS converter DAG in the list. IF NOT in the response list then REPORT the issue to Azure Team ++2. Open **sdutil** and edit the `config.yaml` at the root + Update `config` to: ++ ```yaml + seistore: + service: '{"azure": {"azureEnv":{"url": "<url>/seistore-svc/api/v3", "appkey": ""}}}' + url: '<url>/seistore-svc/api/v3' + cloud_provider: azure + env: glab + auth-mode: JWT Token + ssl_verify: false + auth_provider: + azure: '{ + "provider": "azure", + "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token", + "scope_end":"/.default openid profile offline_access", + "redirect_uri":"http://localhost:8080", + "login_grant_type": "refresh_token", + "refresh_token": "<RefreshToken acquired earlier>" + }' + azure: + empty: none + ``` ++ > [!NOTE] + > See [Generate a refresh token](how-to-generate-refresh-token.md) on how to generate a refresh token. If you continue to follow other "how-to" documentation, you'll use this refresh token again. Once you've generated the token, store it in a place where you'll be able to access it in the future. ++3. Run **sdutil** to see if it's working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. ++ > [!NOTE] + > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`. 
++4. Upload the seismic file: ++ ```bash + python sdutil cp source.segy sd://<datapartition>/<subproject>/destination.segy + ``` ++5. Fetch the idtoken from sdutil for the uploaded file: ++ ```bash + python sdutil auth idtoken + ``` ++6. Trigger the DAG through `POSTMAN` or using the call below: ++ ```bash + curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \ + --header 'data-partition-id: <datapartition>' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --data-raw '{ + "executionContext": { + "vds_url": "sd://<datapartition>/<subproject>", + "persistent_id": "<filename>", + "id_token": "<token>", + "segy_url": "sd://<datapartition>/<subproject>/<filename>.segy" + } + }' + ``` ++7. Let the DAG run to the completed state. You can check the status using the workflow status call (a sample status check is sketched after this section). ++8. Verify whether the converted files are present at the location specified in the DAG trigger: ++ ```bash + python sdutil ls sd://<datapartition>/<subproject>/ + ``` ++9. If you would like to download and inspect your VDS files, don't use the `cp` command as it will not work. The VDS conversion results in multiple files, therefore the `cp` command won't be able to download all of them in one command. Use either the [SEGYExport](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYExport/README.html) or [VDSCopy](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/VDSCopy/README.html) tool instead. These tools use a series of REST calls accessing a [naming scheme](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/connection.html) to retrieve information about all the resulting VDS files. ++OSDU™ is a trademark of The Open Group. ++## Next steps +<!-- Add a context sentence for the following links --> +> [!div class="nextstepaction"] +> [How to convert a SEG-Y to ZGY file](how-to-convert-segy-to-zgy.md) + |
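As referenced in step 7 above, the workflow status can be checked with the same `workflowRun` pattern shown later in the SEG-Y to ZGY article; the `<run-id>` placeholder is the run ID returned by the trigger call in step 6.

```bash
# Check the status of the conversion run triggered in step 6; <run-id> comes from that call's response.
curl --location --request GET '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun/<run-id>' \
  --header 'data-partition-id: <datapartition>' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer {{TOKEN}}'
```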
energy-data-services | How To Convert Segy To Zgy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md | + + Title: Microsoft Energy Data Service - How to convert segy to zgy file #Required; page title is displayed in search results. Include the brand. +description: This article describes how to convert a SEG-Y file to a ZGY file #Required; article description that is displayed in search results. ++++ Last updated : 08/18/2022++++# How to convert a SEG-Y file to ZGY? ++Seismic data stored in industry standard SEG-Y format can be converted to ZGY for use in applications such as Petrel via the Seismic DMS. See here for [ZGY Conversion FAQ's](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion#faq) and more background can be found in the OSDU™ community here: [SEG-Y to ZGY conversation](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/azure/m10-master) +++## Prerequisites ++### Postman ++* Download and install [Postman](https://www.postman.com/) desktop app. +* Import the [oZGY Conversions.postman_collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M9/Azure-M9/Services/DDMS/oZGY%20Conversions.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly +* Microsoft Energy Data Services Preview instance is created already +* Clone the **sdutil** repo as shown below: + ```markdown + git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git ++ git checkout azure/stable + ``` +* The [jq command](https://stedolan.github.io/jq/download/), using your favorite tool on your favorite OS. + +## Step by Step guide ++1. The user needs to be part of the `users.datalake.admins` group and user needs to generate a valid refresh token. See [How to generate a refresh token](how-to-generate-refresh-token.md) for further instructions. If you continue to follow other "how-to" documentation, you'll use this refresh token again. Once you've generated the token, store it in a place where you'll be able to access it in the future. If it isn't present, add the group for the member ID. In this case, use the app ID you have been using for everything as the `user-email`. ++ > [!NOTE] + > `data-partition-id` should be in the format `<instance-name>-<data-partition-name>` in both the header and the url, and will be for any following command that requires `data-partition-id`. ++ ```bash + curl --location --request POST "<url>/api/entitlements/v2/groups/users.datalake.admins@<data-partition>.<domain>.com/members" \ + --header 'Content-Type: application/json' \ + --header 'data-partition-id: <data-partition>' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --data-raw '{ + "email" : "<user-email>", + "role" : "MEMBER" + } + ``` ++ You can also add the user to this group by using the entitlements API and assigning the required group ID. In order to check the entitlements groups for a user, perform the command [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). In order to get all the groups available, do the following command: ++ ```bash + curl --location --request GET "<url>/api/entitlements/v2/groups/" \ + --header 'data-partition-id: <data-partition>' \ + --header 'Authorization: Bearer {{TOKEN}}' + ``` ++2. 
Check if ZGY is registered with the workflow service or not: ++ ```bash + curl --location --request GET '<url>/api/workflow/v1/workflow/' \ + --header 'Data-Partition-Id: <data-partition>' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer {{TOKEN}}' + ``` ++ You should see ZGY converter DAG in the list. IF NOT in the response list then REPORT the issue to Azure Team ++3. Register Data partition to Seismic: ++ ```bash + curl --location --request POST '<url>/seistore-svc/api/v3/tenant/<data-partition>' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --header 'Content-Type: application/json' \ + --data-raw '{ + "esd": "{{data-partition}}.{{domain}}.com", + "gcpid": "{{data-partition}}", + "default_acl": "users.datalake.admins@{{data-partition}}.{{domain}}.com"}' + ``` ++4. Create Legal tag ++ ```bash + curl --location --request POST '<url>/api/legal/v1/legaltags' \ + --header 'Content-Type: application/json' \ + --header 'data-partition-id: <data-partition>' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --data-raw '{ + "name": "<tag-name>", + "description": "Legal Tag added for Seismic", + "properties": { + "contractId": "123456", + "countryOfOrigin": [ + "US", + "CA" + ], + "dataType": "Public Domain Data", + "exportClassification": "EAR99", + "originator": "Schlumberger", + "personalData": "No Personal Data", + "securityClassification": "Private", + "expirationDate": "2025-12-25" + } + }' + ``` ++5. Create Subproject. Use your previously created entitlements groups that you would like to add as ACLs (Access Control List) admins and viewers. If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users?](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). Data access isolation achieved with this dedicated ACL (access control list) per object within a given data partition. You may have many subprojects within a data partition, so this command allows you to provide access to a specific subproject without providing access to an entire data partition. Data partition entitlements don't necessarily translate to the subprojects within it, so it's important to be explicit about the ACLs for each subproject, regardless of what data partition it is in. ++ > [!NOTE] + > Later in this tutorial, you'll need at least one `owner` and at least one `viewer`. These user groups will look like `data.default.owners` and `data.default.viewers`. Make sure to include one of each in your list of `acls` in the request below. ++ ```bash + curl --location --request POST '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition>/subproject/<subproject>' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --header 'Content-Type: text/plain' \ + --data-raw '{ + "admin": "test@email", + "storage_class": "MULTI_REGIONAL", + "storage_location": "US", + "acls": { + "admins": [ + "<user-group>@<data-partition>.<domain>.com", + "<user-group>@<data-partition>.<domain>.com" + ], + "owners": [ + "<user-group>@<data-partition>.<domain>.com" + ], + "viewers": [ + "<user-group>@<data-partition>.<domain>.com" + ] + } + }' + ``` ++ The following request is an example of the create subproject request: ++ ```bash + curl --location --request POST 'https://<instance>.energy.azure.com/seistore-svc/api/v3/subproject/tenant/<instance>-<data-partition-name>/subproject/subproject1' \ + --header 'Authorization: Bearer eyJ...' 
\ + --header 'Content-Type: text/plain' \ + --data-raw '{ + "admin": "test@email", + "storage_class": "MULTI_REGIONAL", + "storage_location": "US", + "acls": { + "admins": [ + "service.seistore.p4d.tenant01.subproject01.admin@slb.p4d.cloud.slb-ds.com", + "service.seistore.p4d.tenant01.subproject01.editor@slb.p4d.cloud.slb-ds.com" + ], + "owners": [ + "data.default.owners@slb.p4d.cloud.slb-ds.com" + ], + "viewers": [ + "service.seistore.p4d.tenant01.subproject01.viewer@slb.p4d.cloud.slb-ds.com" + ] + } + }' + ``` ++6. Patch Subproject with the legal tag you created above: ++ ```bash + curl --location --request PATCH '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition>/subproject/<subproject-name>' \ + --header 'ltag: <Tag-name-above>' \ + --header 'recursive: true' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --header 'Content-Type: text/plain' \ + --data-raw '{ + "admin": "test@email", + "storage_class": "MULTI_REGIONAL", + "storage_location": "US", + "acls": { + "admins": [ + "<user-group>@<data-partition>.<domain>.com", + "<user-group>@<data-partition>.<domain>.com" + ], + "viewers": [ + "<user-group>@<data-partition>.<domain>.com" + ] + } + }' + ``` ++ > [!NOTE] + > Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`. ++7. Open the [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable) codebase and edit the `config.yaml` at the root. Update this config to: ++ ```yaml + seistore: + service: '{"azure": {"azureEnv":{"url": "<url>/seistore-svc/api/v3", "appkey": ""}}}' + url: '<url>/seistore-svc/api/v3' + cloud_provider: azure + env: glab + auth-mode: JWT Token + ssl_verify: false + auth_provider: + azure: '{ + "provider": "azure", + "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token", + "scope_end":"/.default openid profile offline_access", + "redirect_uri":"http://localhost:8080", + "login_grant_type": "refresh_token", + "refresh_token": "<RefreshToken acquired earlier>" + }' + azure: + empty: none + ``` ++ > [!NOTE] + > See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future. ++8. Run the following commands using **sdutil** to see its working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](/tutorials/tutorial-seismic-ddms-sdutil.md). ++ > [!NOTE] + > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`. ++ ```bash + python sdutil config init + python sdutil auth login + python sdutil ls sd://<data-partition>/<subproject>/ + ``` ++9. Upload your seismic file to your Seismic Store. 
Here's an example with a SEGY-format file called `source.segy`: ++ ```bash + python sdutil cp source.segy sd://<data-partition>/<subproject>/destination.segy + ``` ++ If you would like to use a test file we supply instead, download [this file](https://community.opengroup.org/osdu/platform/testing/-/tree/master/Postman%20Collection/40_CICD_OpenVDS) to your local machine then run the following command: +++ ```bash + python sdutil cp ST10010ZC11_PZ_PSDM_KIRCH_FULL_T.MIG_FIN.POST_STACK.3D.JS-017536.segy sd://<data-partition>/<subproject>/destination.segy + ``` ++ The sample records were meant to be similar to real-world data so a significant part of their content isn't directly related to conversion. This file is large and will take up about 1 GB of space. ++10. Create the manifest file (otherwise known as the records file) ++ ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, where the dataset definitions come from, visit [their website](https://www.equinor.com/en/what-we-do/digitalisation-in-our-dna/volve-field-data-village-download.html). Complete the following steps in order to create the manifest file: ++ * Clone the [repo](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/) and navigate to the folder doc/sample-records/volve + * Edit the values in the `prepare-records.sh` bash script: ++ * `DATA_PARTITION_ID=<your-partition-id>` + * `ACL_OWNER=data.default.owners@<your-partition-id>.<your-tenant>.com` + * `ACL_VIEWER=data.default.viewers@<your-partition-id>.<your-tenant>.com` + * `LEGAL_TAG=<legal-tag-created-above>` ++ > [!NOTE] + > Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`. + * The output will be a JSON array with all objects and will be saved in the `all_records.json` file. + * Save the `filecollection_segy_id` and the `work_product_id` values in that JSON file to use in the conversion step. That way the converter knows where to look for this contents of your `all_records.json`. ++11. Insert the contents of your `all_records.json` file in storage for work-product, seismic trace data, seismic grid, and file collection (that is, copy and paste the contents of that file to the `--data-raw` field in the following command): ++ ```bash + curl --location --request PUT '<url>/api/storage/v2/records' \ + --header 'Content-Type: application/json' \ + --header 'data-partition-id: <data-partition>' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --data-raw '[ + { + ... + "kind": "osdu:wks:work-product--WorkProduct:1.0.0", + ... + }, + { + ... + "kind": "osdu:wks:work-product-component--SeismicTraceData:1.0.0" + ... + }, + { + ... + "kind": "osdu:wks:work-product-component--SeismicBinGrid:1.0.0", + ... + }, + { + ... + "kind": "osdu:wks:dataset--FileCollection.SEGY:1.0.0", + ... + } + ] + ' + ``` ++12. Trigger the ZGY Conversion DAG to convert your data using the values you had saved above. 
Your call will look like this: ++ ```bash + curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \ + --header 'data-partition-id: <data-partition>' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer {{TOKEN}}' \ + --data-raw '{ + "executionContext": { + "data_partition_id": "<data-partition>", + "sd_svc_api_key": "test-sd-svc", + "storage_svc_api_key": "test-storage-svc", + "filecollection_segy_id": "<data-partition>:dataset--FileCollection.SEGY:<guid>", + "work_product_id": "<data-partition>:work-product--WorkProduct:<guid>" + } + }' + ``` ++13. Let the DAG run to the `succeeded` state. You can check the status using the workflow status call; you'll get the run ID in the response to the call above: ++ ```bash + curl --location --request GET '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun/<run-id>' \ + --header 'Data-Partition-Id: <data-partition>' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer {{TOKEN}}' + ``` ++14. You can see if the converted file is present using the following command: ++ ```bash + python sdutil ls sd://<data-partition>/<subproject> + ``` ++15. You can download and inspect the file using the [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable) `cp` command: ++ ```bash + python sdutil cp sd://<data-partition>/<subproject>/<filename.zgy> <local/destination/path> + ``` +OSDU™ is a trademark of The Open Group. ++## Next steps +<!-- Add a context sentence for the following links --> +> [!div class="nextstepaction"] +> [How to convert SEG-Y to oVDS](how-to-convert-segy-to-ovds.md) |
energy-data-services | How To Generate Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md | + + Title: How to generate a refresh token for Microsoft Energy Data Services #Required; page title is displayed in search results. Include the brand. +description: This article describes how to generate a refresh token #Required; article description that is displayed in search results. ++++ Last updated : 08/25/2022++++# OAuth 2.0 authorization ++The following are the basic steps to use the OAuth 2.0 authorization code grant flow to get a refresh token from the Microsoft identity platform endpoint: ++ 1. Register your app with Azure AD. + 2. Get authorization. + 3. Get a refresh token. + ++## 1. Register your app with Azure AD +To use the Microsoft Energy Data Services Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. ++To configure an app to use the OAuth 2.0 authorization code grant flow, save the following values when registering the app: ++- The `Directory (tenant) ID`, which will be used in place of `{Tenant ID}`. +- The `application (client) ID` assigned by the app registration portal, which will be used instead of `client_id`. +- A `client (application) secret`, either a password or a public/private key pair (certificate). The client secret isn't required for native apps. This secret will be used instead of `{AppReg Secret}` later. +- A `redirect URI (or reply URL)` for your app to receive responses from Azure AD. + +> [!NOTE] +> If no redirect URIs are specified, add a platform, select "Web", add `http://localhost:8080`, and select Save. +++For steps on how to configure a |
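As a preview of the "Get authorization" and "Get a refresh token" steps, the following hedged sketch shows the shape of the authorization request and the code-for-token exchange against the Microsoft identity platform v2.0 endpoints, using the values saved during app registration. The scope value and the local redirect URI are assumptions based on this article; a refresh token is only returned when the requested scope includes `offline_access`.

```bash
# Step 2 (get authorization): open this URL in a browser and sign in; the authorization code is
# returned to the redirect URI configured for the app (http://localhost:8080 in this article).
# https://login.microsoftonline.com/{Tenant ID}/oauth2/v2.0/authorize?client_id=<client_id>&response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A8080&response_mode=query&scope=<scope>

# Step 3 (get a refresh token): exchange the authorization code for tokens.
# The response includes a refresh_token when the scope includes offline_access.
curl --location --request POST 'https://login.microsoftonline.com/{Tenant ID}/oauth2/v2.0/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'client_id=<client_id>' \
  --data-urlencode 'client_secret=<AppReg Secret>' \
  --data-urlencode 'grant_type=authorization_code' \
  --data-urlencode 'code=<authorization-code>' \
  --data-urlencode 'redirect_uri=http://localhost:8080' \
  --data-urlencode 'scope=<scope>'
```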