Updates from: 09/21/2022 01:20:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
Add the following templates under the templates folder. These templates extend t
{% block metadata %} {% if config.get("B2C_RESET_PASSWORD_AUTHORITY") and "AADB2C90118" in result.get("error_description") %}
- <!-- See also https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows -->
+ <!-- See also https://learn.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows -->
<meta http-equiv="refresh" content='0;{{_build_auth_code_flow(authority=config["B2C_RESET_PASSWORD_AUTHORITY"])["auth_uri"]}}'> {% endif %}
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md
To sign in the user, do the following:
/** * For the purpose of setting an active account for UI update, we want to consider only the auth response resulting * from SUSI flow. "tfp" claim in the id token tells us the policy (NOTE: legacy policies may use "acr" instead of "tfp").
- * To learn more about B2C tokens, visit https://docs.microsoft.com/en-us/azure/active-directory-b2c/tokens-overview
+ * To learn more about B2C tokens, visit https://learn.microsoft.com/azure/active-directory-b2c/tokens-overview
*/ if (response.idTokenClaims['tfp'].toUpperCase() === b2cPolicies.names.signUpSignIn.toUpperCase()) { handleResponse(response);
active-directory-b2c Enable Authentication Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-app-with-api.md
public void ConfigureServices(IServiceCollection services)
// This lambda determines whether user consent for non-essential cookies is needed for a given request. options.CheckConsentNeeded = context => true; options.MinimumSameSitePolicy = SameSiteMode.Unspecified;
- // Handling SameSite cookie according to https://docs.microsoft.com/en-us/aspnet/core/security/samesite?view=aspnetcore-3.1
+ // Handling SameSite cookie according to https://learn.microsoft.com/aspnet/core/security/samesite?view=aspnetcore-3.1
options.HandleSameSiteCookieCompatibility(); });
active-directory-b2c Enable Authentication Web Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-application.md
public void ConfigureServices(IServiceCollection services)
// This lambda determines whether user consent for non-essential cookies is needed for a given request. options.CheckConsentNeeded = context => true; options.MinimumSameSitePolicy = SameSiteMode.Unspecified;
- // Handling SameSite cookie according to https://docs.microsoft.com/en-us/aspnet/core/security/samesite?view=aspnetcore-3.1
+ // Handling SameSite cookie according to https://learn.microsoft.com/aspnet/core/security/samesite?view=aspnetcore-3.1
options.HandleSameSiteCookieCompatibility(); });
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
Previously updated : 09/16/2021 Last updated : 09/02/2022 zone_pivot_groups: b2c-policy-type
To configure settings for social or enterprise identities, where the identity of
::: zone pivot="b2c-user-flow"
+## Prerequisites
+++ ## Configure local account identity provider settings
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
function addTermsOfUseLink() {
var termsLabelText = termsOfUseLabel.innerHTML; // create a new <a> element with the same inner text
- var termsOfUseUrl = 'https://docs.microsoft.com/legal/termsofuse';
+ var termsOfUseUrl = 'https://learn.microsoft.com/legal/termsofuse';
var termsOfUseLink = document.createElement('a'); termsOfUseLink.setAttribute('href', termsOfUseUrl); termsOfUseLink.setAttribute('target', '_blank');
active-directory-b2c View Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md
You can try this script in the [Azure Cloud Shell](overview.md). Be sure to upda
```powershell # This script requires an application registration that's granted Microsoft Graph API permission
-# https://docs.microsoft.com/azure/active-directory-b2c/microsoft-graph-get-started
+# https://learn.microsoft.com/azure/active-directory-b2c/microsoft-graph-get-started
# Constants $ClientID = "your-client-application-id-here" # Insert your application's client ID, a GUID
Here's the JSON representation of the example activity event shown earlier in th
## Next steps
-You can automate other administration tasks, for example, [manage Azure AD B2C user accounts with Microsoft Graph](microsoft-graph-operations.md).
+You can automate other administration tasks, for example, [manage Azure AD B2C user accounts with Microsoft Graph](microsoft-graph-operations.md).
active-directory-domain-services Troubleshoot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md
Previously updated : 08/17/2022 Last updated : 09/20/2022
The managed domain's health automatically updates itself within two hours and re
### Resolution
-This error is unrecoverable. To resolve the alert, [delete your existing managed domain](delete-aadds.md) and recreate it. If you have trouble deleting the managed domain, [open an Azure support request][azure-support] for additional troubleshooting assistance.
+Azure AD DS creates additional resources to function properly, such as public IP addresses, virtual network interfaces, and a load balancer. If any of these resources are modified, the managed domain is in an unsupported state and can't be managed. For more information about these resources, see [Network resources used by Azure AD DS](network-considerations.md#network-resources-used-by-azure-ad-ds).
+
+This alert is generated when one of these required resources is modified and can't automatically be recovered by Azure AD DS. To resolve the alert, [open an Azure support request][azure-support] to fix the instance.
## AADDS114: Subnet invalid
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
Most hacking related breaches use either stolen or weak passwords. Often, IT will enforce stronger password complexity or frequent password changes to reduce the risk of a security incident. However, this increases help desk costs and leads to poor user experiences as users are required to memorize or store new, complex passwords.
-FIDO2 security keys offer an alternative. FIDO2 security keys can replace weak credentials with strong hardware-backed public/private-key credentials which cannot be reused, replayed, or shared across services. Security keys support shared device scenarios, allowing you to carry your credential with you and safely authenticate to an Azure Active Directory joined Windows 10 device that's part of your organization.
+FIDO2 security keys offer an alternative. FIDO2 security keys can replace weak credentials with strong hardware-backed public/private-key credentials which can't be reused, replayed, or shared across services. Security keys support shared device scenarios, allowing you to carry your credential with you and safely authenticate to an Azure Active Directory joined Windows 10 device that's part of your organization.
Microsoft partners with FIDO2 security key vendors to ensure that security devices work on Windows, the Microsoft Edge browser, and online Microsoft accounts, to enable strong password-less authentication. You can become a Microsoft-compatible FIDO2 security key vendor through the following process. Microsoft doesn't commit to do go-to-market activities with the partner and will evaluate partner priority based on customer demand.
-1. First, your authenticator needs to have a FIDO2 certification. We will not be able to work with providers who do not have a FIDO2 certification. To learn more about the certification, please visit this website: [https://fidoalliance.org/certification/](https://fidoalliance.org/certification/)
+1. First, your authenticator needs to have a FIDO2 certification. We won't be able to work with providers who don't have a FIDO2 certification. To learn more about the certification, please visit this website: [https://fidoalliance.org/certification/](https://fidoalliance.org/certification/)
2. After you have a FIDO2 certification, please fill in your request to our form here: [https://forms.office.com/r/NfmQpuS9hF](https://forms.office.com/r/NfmQpuS9hF). Our engineering team will only test compatibility of your FIDO2 devices. We won't test security of your solutions. 3. Once we confirm a move forward to the testing phase, the process usually takes about 3-6 months. The steps usually involve: - Initial discussion between Microsoft and your team. - Verify FIDO Alliance Certification or the path to certification if not complete - Receive an overview of the device from the vendor - Microsoft will share our test scripts with you. Our engineering team will be able to answer questions if you have any specific needs.
- - You will complete and send all passed results to Microsoft Engineering team
+ - You'll complete and send all passed results to Microsoft Engineering team
4. Upon successful passing of all tests by Microsoft Engineering team, Microsoft will confirm vendor's device is listed in [the FIDO MDS](https://fidoalliance.org/metadata/). 5. Microsoft will add your FIDO2 Security Key on Azure AD backend and to our list of approved FIDO2 vendors.
You can become a Microsoft-compatible FIDO2 security key vendor through the foll
The following table lists partners who are Microsoft-compatible FIDO2 security key vendors.
-| **Provider** | **Link** |
-| | |
-| AuthenTrend | [https://authentrend.com/about-us/#pg-35-3](https://authentrend.com/about-us/#pg-35-3) |
-| Ensurity | [https://www.ensurity.com/contact](https://www.ensurity.com/contact) |
-| Excelsecu | [https://www.excelsecu.com/productdetail/esecufido2secu.html](https://www.excelsecu.com/productdetail/esecufido2secu.html) |
-| Feitian | [https://ftsafe.us/pages/microsoft](https://ftsafe.us/pages/microsoft) |
-| Go-Trust ID | [https://www.gotrustid.com/](https://www.gotrustid.com/idem-key) |
-| HID | [https://www.hidglobal.com/contact-us](https://www.hidglobal.com/contact-us) |
-| Hypersecu | [https://www.hypersecu.com/hyperfido](https://www.hypersecu.com/hyperfido) |
-| IDmelon Technologies Inc. | [https://www.idmelon.com/#idmelon](https://www.idmelon.com/#idmelon) |
-| Kensington | [https://www.kensington.com/solutions/product-category/why-biometrics/](https://www.kensington.com/solutions/product-category/why-biometrics/) |
-| KONA I | [https://konai.com/business/security/fido](https://konai.com/business/security/fido) |
-| Nymi | [https://www.nymi.com/product](https://www.nymi.com/product) |
-| OneSpan Inc. | [https://www.onespan.com/products/fido](https://www.onespan.com/products/fido) |
-| Thales | [https://cpl.thalesgroup.com/access-management/authenticators/fido-devices](https://cpl.thalesgroup.com/access-management/authenticators/fido-devices) |
-| Thetis | [https://thetis.io/collections/fido2](https://thetis.io/collections/fido2) |
-| Token2 Switzerland | [https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key](https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key) |
-| TrustKey Solutions | [https://www.trustkeysolutions.com/security-keys/](https://www.trustkeysolutions.com/security-keys/) |
-| VinCSS | [https://passwordless.vincss.net](https://passwordless.vincss.net/) |
-| Yubico | [https://www.yubico.com/solutions/passwordless/](https://www.yubico.com/solutions/passwordless/) |
+| Provider | Biometric | USB | NFC | BLE | FIPS Certified | Contact |
+|---|:--:|:--:|:--:|:--:|:--:|---|
+| AuthenTrend | ![y] | ![y]| ![y]| ![y]| ![n] | https://authentrend.com/about-us/#pg-35-3 |
+| Ciright | ![n] | ![n]| ![y]| ![n]| ![n] | https://www.cyberonecard.com/ |
+| Crayonic | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.crayonic.com/keyvault |
+| Ensurity | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.ensurity.com/contact |
+| Excelsecu | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html |
+| Feitian | ![y] | ![y]| ![y]| ![y]| ![y] | https://shop.ftsafe.us/pages/microsoft |
+| Fortinet | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.fortinet.com/ |
+| Giesecke + Devrient (G+D) | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication |
+| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key |
+| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us |
+| Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |
+| IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon |
+| Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ |
+| KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |
+| NeoWave | ![n] | ![y]| ![y]| ![n]| ![n] | https://neowave.fr/en/products/fido-range/ |
+| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band |
+| Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ |
+| OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
+| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-fido2/ |
+| Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
+| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
+| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
+| TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ |
+| VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |
+| Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ |
+++
+<!--Image references-->
+[y]: ./media/fido2-compatibility/yes.png
+[n]: ./media/fido2-compatibility/no.png
+ ## Next steps
active-directory Migrate Python Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-python-adal-msal.md
def get_preexisting_rt_and_their_scopes_from_elsewhere():
# https://github.com/AzureAD/azure-activedirectory-library-for-python/blob/1.2.3/sample/device_code_sample.py#L72 # which uses a resource rather than a scope, # you need to convert your v1 resource into v2 scopes
- # See https://docs.microsoft.com/azure/active-directory/azuread-dev/azure-ad-endpoint-comparison#scopes-not-resources
+ # See https://learn.microsoft.com/azure/active-directory/azuread-dev/azure-ad-endpoint-comparison#scopes-not-resources
# You may be able to append "/.default" to your v1 resource to form a scope
- # See https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#the-default-scope
+ # See https://learn.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#the-default-scope
# Or maybe you have an app already talking to the Microsoft identity platform, # powered by some 3rd-party auth library, and persist its tokens somehow.
active-directory Msal Android B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-b2c.md
String id = account.getId();
// Get the IdToken Claims // // For more information about B2C token claims, see reference documentation
-// https://docs.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-tokens
+// https://learn.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-tokens
Map<String, ?> claims = account.getClaims(); // Get the 'preferred_username' claim through a convenience function
active-directory Msal Net Migration Public Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-public-client.md
var pca = PublicClientApplicationBuilder.Create("client_id")
.WithBroker() .Build();
-// Add a token cache, see https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
+// Add a token cache, see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
// 2. GetAccounts var accounts = await pca.GetAccountsAsync();
private static async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublic
{ // If you use a CancellationToken, and call the Cancel() method on it, then this *may* be triggered // to indicate that the operation was cancelled.
- // See https://docs.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads
+ // See https://learn.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads
// for more detailed information on how C# supports cancellation in managed threads. } catch (MsalClientException ex)
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-breaking-changes.md
Check this article regularly to learn about:
- Deprecated functionality > [!TIP]
-> To be notified of updates to this page, add this URL to your RSS feed reader:<br/>`https://docs.microsoft.com/api/search/rss?search=%22Azure+Active+Directory+breaking+changes+reference%22&locale=en-us`
+> To be notified of updates to this page, add this URL to your RSS feed reader:<br/>`https://learn.microsoft.com/api/search/rss?search=%22Azure+Active+Directory+breaking+changes+reference%22&locale=en-us`
## December 2021
active-directory Scenario Desktop Acquire Token Device Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
private static async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublic
{ // If you use a CancellationToken, and call the Cancel() method on it, then this *may* be triggered // to indicate that the operation was cancelled.
- // See https://docs.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads
+ // See https://learn.microsoft.com/dotnet/standard/threading/cancellation-in-managed-threads
// for more detailed information on how C# supports cancellation in managed threads. } catch (MsalClientException ex)
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
var pca = PublicClientApplicationBuilder.Create("client_id")
.WithBroker() .Build();
-// Add a token cache, see https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
+// Add a token cache, see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
// 2. GetAccounts var accounts = await pca.GetAccountsAsync();
active-directory Tutorial V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md
const msalConfig = {
/** * With client credentials flows permissions need to be granted in the portal by a tenant administrator. * The scope is always in the format '<resource>/.default'. For more, visit:
- * https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow
+ * https://learn.microsoft.com/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow
*/ const tokenRequest = { scopes: [process.env.GRAPH_ENDPOINT + '/.default'],
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
Previously updated : 11/13/2020 Last updated : 09/09/2022
# Application types for the Microsoft identity platform
-The Microsoft identity platform supports authentication for a variety of modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](active-directory-v2-protocols.md). This article describes the types of apps that you can build by using Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-scenarios).
+The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](active-directory-v2-protocols.md). This article describes the types of apps that you can build by using Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-scenarios).
## The basics
https://login.microsoftonline.com/common/oauth2/v2.0/token
## Single-page apps (JavaScript)
-Many modern apps have a single-page app front end written primarily in JavaScript, often with a framework like Angular, React, or Vue. The Microsoft identity platform supports these apps by using the [OpenID Connect](v2-protocols-oidc.md) protocol for authentication and either [OAuth 2.0 implicit grant flow](v2-oauth2-implicit-grant-flow.md) or the more recent [OAuth 2.0 authorization code + PKCE flow](v2-oauth2-auth-code-flow.md) for authorization (see below).
+Many modern apps have a single-page app front end written primarily in JavaScript, often with a framework like Angular, React, or Vue. The Microsoft identity platform supports these apps by using the [OpenID Connect](v2-protocols-oidc.md) protocol for authentication and one of two types of authorization grants defined by OAuth 2.0. The supported grant types are either the [OAuth 2.0 implicit grant flow](v2-oauth2-implicit-grant-flow.md) or the more recent [OAuth 2.0 authorization code + PKCE flow](v2-oauth2-auth-code-flow.md) (see below).
The flow diagram below demonstrates the OAuth 2.0 authorization code grant (with details around PKCE omitted), where the app receives a code from the Microsoft identity platform `authorize` endpoint, and redeems it for an access token and a refresh token using cross-site web requests. The access token expires every 24 hours, and the app must request another code using the refresh token. In addition to the access token, an `id_token` that represents the signed-in user to the client application is typically also requested through the same flow and/or a separate OpenID Connect request (not shown here).
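As a rough sketch of the flow described above (not code from the linked article), a SPA using the `@azure/msal-browser` library drives the authorization code + PKCE flow roughly like this; the client ID, authority, redirect URI, and scopes are placeholders.

```typescript
// Minimal sketch of sign-in plus silent token renewal in a SPA with @azure/msal-browser.
// All IDs and URLs below are placeholders, not values from the article.
import { PublicClientApplication, InteractionRequiredAuthError } from "@azure/msal-browser";

const pca = new PublicClientApplication({
  auth: {
    clientId: "YOUR_CLIENT_ID",                            // placeholder app registration ID
    authority: "https://login.microsoftonline.com/common", // or a tenant-specific/B2C authority
    redirectUri: "http://localhost:3000",                  // must match a registered SPA redirect URI
  },
});

export async function getAccessToken(scopes: string[]): Promise<string> {
  // On @azure/msal-browser v3+, call `await pca.initialize()` before any other API.
  // Complete the redirect round-trip if the page is loading after the authorize step.
  await pca.handleRedirectPromise();

  const account = pca.getAllAccounts()[0];
  if (!account) {
    // No signed-in user yet: start the authorization code + PKCE flow via redirect.
    await pca.loginRedirect({ scopes });
    throw new Error("Redirecting to sign in");
  }

  try {
    // Silent renewal uses the cached refresh token rather than third-party cookies.
    const result = await pca.acquireTokenSilent({ scopes, account });
    return result.accessToken;
  } catch (e) {
    if (e instanceof InteractionRequiredAuthError) {
      await pca.acquireTokenRedirect({ scopes, account });
    }
    throw e;
  }
}
```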
To see this scenario in action, check out the [Tutorial: Sign in users and call
### Authorization code flow vs. implicit flow
-For most of the history of OAuth 2.0, the [implicit flow](v2-oauth2-implicit-grant-flow.md) was the recommended way to build single-page apps. With the removal of [third-party cookies](reference-third-party-cookies-spas.md) and [greater attention](https://tools.ietf.org/html/draft-ietf-oauth-security-topics-14) paid to security concerns around the implicit flow, we've moved to the authorization code flow for single-page apps.
-
-To ensure compatibility of your app in Safari and other privacy-conscious browsers, we no longer recommend use of the implicit flow and instead recommend the authorization code flow.
+For most of the history of OAuth 2.0, the [implicit flow](v2-oauth2-implicit-grant-flow.md) was the recommended way to build single-page apps. With the removal of [third-party cookies](reference-third-party-cookies-spas.md) and [greater attention](https://tools.ietf.org/html/draft-ietf-oauth-security-topics-14) paid to security concerns around the implicit flow, the authorization code flow for single-page apps should now be implemented to ensure compatibility of your app in Safari and other privacy-conscious browsers. The continued use of the implicit flow is not recommended.
## Web apps
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImtyaU1QZG1Cd...
} ```
-Further details of different types of tokens used in the Microsoft identity platform are available in the [access token](access-tokens.md) reference and [id_token reference](id-tokens.md)
+Further details of different types of tokens used in the Microsoft identity platform are available in the [access token](access-tokens.md) reference and [id_token](id-tokens.md) reference.
In web server apps, the sign-in authentication flow takes these high-level steps:
In web server apps, the sign-in authentication flow takes these high-level steps
You can ensure the user's identity by validating the ID token with a public signing key that is received from the Microsoft identity platform. A session cookie is set, which can be used to identify the user on subsequent page requests.
-To see this scenario in action, try the code samples in the [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md).
+To see this scenario in action, try the code samples in [Sign in users from a Web app](scenario-web-app-sign-user-overview.md).
-In addition to simple sign-in, a web server app might need to access another web service, such as a REST API. In this case, the web server app engages in a combined OpenID Connect and OAuth 2.0 flow, by using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). For more information about this scenario, read about [getting started with web apps and Web APIs](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-WebAPI-OpenIDConnect-DotNet).
+In addition to simple sign-in, a web server app might need to access another web service, such as a Representational State Transfer ([REST](https://docs.microsoft.com/rest/api/azure/)) API. In this case, the web server app engages in a combined OpenID Connect and OAuth 2.0 flow, by using the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). For more information about this scenario, refer to our code [sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md).
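For orientation only, and not taken from the linked sample, the combined sign-in plus API-access flow for a confidential web app looks roughly like this with `@azure/msal-node` and Express; every ID, secret, URL, and scope below is a placeholder.

```typescript
// Minimal sketch of the combined OpenID Connect sign-in + OAuth 2.0 authorization code flow
// in a confidential web app. All IDs, secrets, URLs, and scopes are placeholders.
import express from "express";
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "YOUR_CLIENT_ID",
    authority: "https://login.microsoftonline.com/YOUR_TENANT_ID",
    clientSecret: "YOUR_CLIENT_SECRET",
  },
});

const REDIRECT_URI = "http://localhost:3000/auth/callback";
const SCOPES = ["User.Read"]; // e.g., a Microsoft Graph or custom web API scope

const app = express();

// Step 1: send the browser to the authorize endpoint.
app.get("/auth/signin", async (_req, res) => {
  const authUrl = await cca.getAuthCodeUrl({ scopes: SCOPES, redirectUri: REDIRECT_URI });
  res.redirect(authUrl);
});

// Step 2: redeem the returned code for an ID token (sign-in) and an access token (API calls).
app.get("/auth/callback", async (req, res) => {
  const result = await cca.acquireTokenByCode({
    code: req.query.code as string,
    scopes: SCOPES,
    redirectUri: REDIRECT_URI,
  });
  // result.idTokenClaims identifies the signed-in user; set your own session cookie here.
  // result.accessToken can be sent as a Bearer token to the downstream web API.
  res.send(`Signed in as ${result?.account?.username ?? "unknown user"}`);
});

app.listen(3000);
```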
## Web APIs
active-directory Clean Up Stale Guest Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md
As users collaborate with external partners, it's possible that many guest accounts get created in Azure Active Directory (Azure AD) tenants over time. When collaboration ends and the users no longer access your tenant, the guest accounts may become stale. Admins can use Access Reviews to automatically review inactive guest users and block them from signing in, and later, delete them from the directory.
-Learn more about [how to manage inactive user accounts in Azure AD](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
+Learn more about [how to manage inactive user accounts in Azure AD](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
There are a few recommended patterns that are effective at cleaning up stale guest accounts: 1. Create a multi-stage review whereby guests self-attest whether they still need access. A second-stage reviewer assesses results and makes a final decision. Guests with denied access are disabled and later deleted.
-2. Create a review to remove inactive external guests. Admins define inactive as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
+2. Create a review to remove inactive external guests. Admins define inactive as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](https://learn.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts).
Use the following instructions to learn how to create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment. ## Create a multi-stage review for guests to self-attest continued access
-1. Create a [dynamic group](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an Access Review](https://docs.microsoft.com/azure/active-directory/governance/create-access-review)
+2. To [create an Access Review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review)
for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**. 3. Select **New access review**.
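As a minimal sketch beyond what the article above covers, and assuming an app registration granted the Group.ReadWrite.All application permission, the dynamic group from step 1 could also be created through Microsoft Graph; the tenant ID, client ID, secret, and group names below are placeholders.

```typescript
// Minimal sketch: create the dynamic guest group from step 1 via Microsoft Graph.
// Assumes Group.ReadWrite.All and Node 18+ (or an isomorphic-fetch import) for the Graph client.
import { Client } from "@microsoft/microsoft-graph-client";
import { TokenCredentialAuthenticationProvider } from "@microsoft/microsoft-graph-client/authProviders/azureTokenCredentials";
import { ClientSecretCredential } from "@azure/identity";

const credential = new ClientSecretCredential("TENANT_ID", "CLIENT_ID", "CLIENT_SECRET");
const authProvider = new TokenCredentialAuthenticationProvider(credential, {
  scopes: ["https://graph.microsoft.com/.default"],
});
const client = Client.initWithMiddleware({ authProvider });

async function createGuestReviewGroup(): Promise<void> {
  // POST /groups with a dynamic membership rule matching the expression shown in step 1.
  const group = await client.api("/groups").post({
    displayName: "Guests under access review",
    mailEnabled: false,
    mailNickname: "guests-under-access-review",
    securityEnabled: true,
    groupTypes: ["DynamicMembership"],
    membershipRule:
      '(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)',
    membershipRuleProcessingState: "On",
  });
  console.log(`Created group ${group.id}`);
}

createGuestReviewGroup().catch(console.error);
```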
Use the following instructions to learn how to create Access Reviews that follow
## Create a review to remove inactive external guests
-1. Create a [dynamic group](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+1. Create a [dynamic group](https://learn.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
`(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
-2. To [create an access review](https://docs.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
+2. To [create an access review](https://learn.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
3. Select **New access review**.
active-directory Add User Without Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-user-without-invite.md
Last updated 08/05/2020
-
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
Last updated 08/31/2022
-
active-directory Add Users Information Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-information-worker.md
Last updated 12/19/2018
-
active-directory Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/api-connectors-overview.md
# Use API connectors to customize and extend self-service sign-up ## Overview
-As a developer or IT administrator, you can use API connectors to integrate your [self-service sign-up user flows](self-service-sign-up-overview.md) with web APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can:
+As a developer or IT administrator, you can use [API connectors](self-service-sign-up-add-api-connector.md#create-an-api-connector) to integrate your [self-service sign-up user flows](self-service-sign-up-overview.md) with web APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can:
- [**Integrate with a custom approval workflow**](self-service-sign-up-add-approvals.md). Connect to a custom approval system for managing and limiting account creation. - [**Perform identity verification**](code-samples-self-service-sign-up.md#identity-verification). Use an identity verification service to add an extra level of security to account creation decisions.
An API connector at this step in the sign-up process is invoked after the attrib
## Next steps - Learn how to [add an API connector to a user flow](self-service-sign-up-add-api-connector.md)-- Learn how to [add a custom approval system to self-service sign-up](self-service-sign-up-add-approvals.md)
+- Learn how to [add a custom approval system to self-service sign-up](self-service-sign-up-add-approvals.md)
active-directory Auditing And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/auditing-and-reporting.md
Last updated 05/11/2020
-
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
Last updated 01/07/2022
-
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Last updated 02/11/2020
-+ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
active-directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/claims-mapping.md
Last updated 04/06/2018
-+
active-directory Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md
Last updated 03/14/2022
-
async function sendInvite() {
// Initialize a confidential client application. For more info, visit: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/samples/AzureIdentityExamples.md#authenticating-a-service-principal-with-a-client-secret const credential = new ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET);
- // Initialize the Microsoft Graph authentication provider. For more info, visit: https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=Javascript#using--for-server-side-applications
+ // Initialize the Microsoft Graph authentication provider. For more info, visit: https://learn.microsoft.com/graph/sdks/choose-authentication-providers?tabs=Javascript#using--for-server-side-applications
const authProvider = new TokenCredentialAuthenticationProvider(credential, { scopes: ['https://graph.microsoft.com/.default'] }); // Create MS Graph client instance. For more info, visit: https://github.com/microsoftgraph/msgraph-sdk-javascript/blob/dev/docs/CreatingClientInstance.md
async function sendInvite() {
sendInvitationMessage: true };
- // Execute the MS Graph command. For more information, visit: https://docs.microsoft.com/en-us/graph/api/invitation-post
+ // Execute the MS Graph command. For more information, visit: https://learn.microsoft.com/graph/api/invitation-post
graphResponse = await client.api('/invitations') .post(invitation);
active-directory Configure Saas Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/configure-saas-apps.md
Last updated 05/23/2017
-
active-directory Direct Federation Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation-adfs.md
Last updated 05/13/2022
-
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
Last updated 03/02/2021
-
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
Last updated 11/05/2021
-
active-directory Hybrid On Premises To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-on-premises-to-cloud.md
Last updated 11/03/2020
-
active-directory Hybrid Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-organizations.md
Last updated 04/26/2018
-
active-directory Invitation Email Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invitation-email-elements.md
Last updated 04/12/2021
-
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
Last updated 09/16/2022
-
active-directory Self Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-portal.md
Last updated 02/12/2020
-
active-directory Self Service Sign Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-overview.md
Last updated 03/02/2021
-
active-directory User Flow Customize Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-customize-language.md
Last updated 03/02/2021 -
active-directory User Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-token.md
Last updated 02/28/2018
-
active-directory Auth Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oidc.md
There is a need for user consent and for web sign in.
* [Web sign-in with OpenID Connect in Azure Active Directory B2C](../../active-directory-b2c/openid-connect.md)
-* [Secure your application by using OpenID Connect and Azure AD](/learn/modules/secure-app-with-oidc-and-azure-ad/)
-
+* [Secure your application by using OpenID Connect and Azure AD](/training/modules/secure-app-with-oidc-and-azure-ad/)
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
There are several mechanisms available for creating and managing the lifecycle o
| Mechanism | Description | Best when | | - | - | - |
-| [End-user-initiated](multi-tenant-user-management-scenarios.md#end-user-initiated-scenario) | Resource tenant admins delegate the ability to invite guest users to the tenant, an app, or a resource to users within the resource tenant. Users from the home tenant are invited or sign up individually. | <li>Users need improvised access to resources. <li>No automatic synchronization of user attributes is necessary.<li>Unified GAL is not needed.a |
+| [End-user-initiated](multi-tenant-user-management-scenarios.md#end-user-initiated-scenario) | Resource tenant admins delegate the ability to invite guest users to the tenant, an app, or a resource to users within the resource tenant. Users from the home tenant are invited or sign up individually. | <li>Users need improvised access to resources. <li>No automatic synchronization of user attributes is necessary.<li>Unified GAL is not needed. |
|[Scripted](multi-tenant-user-management-scenarios.md#scripted-scenario) | Resource tenant administrators deploy a scripted "pull" process to automate discovery and provisioning of guest users to support sharing scenarios. | <li>No more than two tenants.<li>No automatic synchronization of user attributes is necessary.<li>Users need pre-configured (not improvised) access to resources.| |[Automated](multi-tenant-user-management-scenarios.md#automated-scenario)|Resource tenant admins use an identity provisioning system to automate the provisioning and deprovisioning processes. | <li>Full identity lifecycle management with provisioning and deprovisioning must be automated.<li>Attribute syncing is required to populate the GAL details and support dynamic entitlement scenarios.<li>Users need pre-configured (not ad hoc) access to resources on "Day One".|
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Rea
Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul> Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications</li></ul> | <ul><li>Read properties of registered and enterprise applications</li></ul> | <ul><li>Read properties of registered and enterprise applications Devices</li></ul> | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions
-Directory | <ul><li>Read all company information<li>Read all domains<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul>
+Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul>
Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions Subscriptions | <ul><li>Read all subscriptions<li>Enable service plan memberships</li></ul> | No permissions | No permissions Policies | <ul><li>Read all properties of policies<li>Manage all properties of owned policies</li></ul> | No permissions | No permissions
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information, see the [Risk detection API reference documentation](/grap
In June 2019, we've added these 22 new apps with Federation support to the app gallery:
-[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), Proptimise OS, [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
+[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/learn.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), Proptimise OS, [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
# What's new in Azure Active Directory?
->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Release+notes+-+Azure+Active+Directory%22&locale=en-us` into your ![RSS feed reader icon](./media/whats-new/feed-icon-16x16.png) feed reader.
+>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Release+notes+-+Azure+Active+Directory%22&locale=en-us` into your ![RSS feed reader icon](./media/whats-new/feed-icon-16x16.png) feed reader.
Azure AD receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
active-directory What Are Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md
Azure AD Lifecycle Workflows is a new Azure AD Identity Governance service that
Workflows contain specific processes, which run automatically against users as they move through their life cycle. Workflows are made up of [Tasks](lifecycle-workflow-tasks.md) and [Execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows).
-Tasks are specific actions that run automatically when a workflow is triggered. An Execution condition defines the 'Scope' of "“who” and the 'Trigger' of “when” a workflow will be performed. For example, send a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees, can be described as a workflow. It consists of:
+Tasks are specific actions that run automatically when a workflow is triggered. An Execution condition defines the 'Scope' of "who" and the 'Trigger' of "when" a workflow will be performed. For example, sending a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees can be described as a workflow. It consists of:
- Task: send email - When (trigger): Seven days before the NewEmployeeHireDate attribute value - Who (scope): new employees
Finally, Lifecycle Workflows can even [integrate with Logic Apps](lifecycle-work
Anyone who wants to modernize their identity lifecycle management process for employees, needs to ensure: - **New employee on-boarding** - That when a user joins the organization, they're ready to go on day one. They have the correct access to the information, membership to groups, and applications they need.
- - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner
+ - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner.
- **Easy to administer in my organization** - That there's a seamless process to accomplish the above tasks, that isn't overly burdensome or time consuming for Administrators. - **Robust troubleshooting/auditing/compliance** - That there's the ability to easily troubleshoot issues when they arise and that there's sufficient logging to help with this and compliance related issues. The following are key reasons to use Lifecycle workflows. - **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks. - **Centralize** your workflow process so you can easily create and manage workflows all in one location.-- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs
+- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs.
- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles are reduced.-- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows-- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps
+- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows.
+- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps.
All of the above can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result. Thus translating into increased on-boarding and off-boarding efficiency.
You can use Lifecycle workflows to address any of the following conditions.
## Next steps - [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)-- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
+- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
Before you install Azure AD Connect, there are a few things that you need.
* Review [optional sync features you can enable in Azure AD](how-to-connect-syncservice-features.md), and evaluate which features you should enable. ### On-premises Active Directory
-* The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met. You may require [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for domain controllers running Windows Server 2016 or older.
+* The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met. You might require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for domain controllers running Windows Server 2016 or older.
* The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects. * Using on-premises forests or domains by using "dotted" (name contains a period ".") NetBIOS names *isn't supported*. * We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md).
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2019 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016 but since WS2016 is in extended support, you may require [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for this configuration.
+- Azure AD Connect must be installed on a domain-joined Windows Server 2019 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016 but since WS2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration.
- The minimum .Net Framework version required is 4.6.2, and newer versions of .Net are also supported. - Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported. - The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration. - If AD FS is being deployed:
- - The servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation. You may require [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for Windows Server 2016 and older.
+ - The servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation. You may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for Windows Server 2016 and older.
- You must configure TLS/SSL certificates. For more information, see [Managing SSL/TLS protocols and cipher suites for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs) and [Managing SSL certificates in AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap). - You must configure name resolution. - It is not supported to break and analyze traffic between Azure AD Connect and Azure AD. Doing so may disrupt the service.
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to setup alerts to monitor changes to the trust established between your Idp and Azure AD. - Enable Multi Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent a attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to e.g. reset a user's password using Azure AD Connect they still cannot bypass the second factor.-- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: [https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch](how-to-connect-syncservice-features.md#blocksoftmatch)
+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud-only objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch).
### SQL Server used by Azure AD Connect

* Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Windows 7 and 8.1 devices are not affected by this issue after UPN changes.
**Known Issues**
-Your organization may use [MAM app protection policies](https://docs.microsoft.com/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
+Your organization may use [MAM app protection policies](https://learn.microsoft.com/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
MAM app protection policies are currently not resilient to UPN changes. UPN changes can break the connection between existing MAM enrollments and active users in MAM-integrated applications, resulting in undefined behavior. This could leave data in an unprotected state.

**Workaround**
-IT admins should [issue a selective wipe](https://docs.microsoft.com/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
+IT admins should [issue a selective wipe](https://learn.microsoft.com/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
## Microsoft Authenticator known issues and workarounds
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
The scenario solution has the following components:
- **Oracle PeopleSoft application**: Legacy application going to be protected by Azure AD and DAB.
-Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication architecture](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
## Prerequisites
Ensure the following prerequisites are met.
- An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+ - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
- Docker and Docker Compose
Ensure the following prerequisites are met.
- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
- - See, [Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+ - See, [Azure AD Connect sync: Understand and customize synchronization](https://learn.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis).
- An account with Azure AD and the Application administrator role
- - See, [Azure AD built-in roles, all roles](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles).
+ - See, [Azure AD built-in roles, all roles](https://learn.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles).
- An Oracle PeopleSoft environment
For the Oracle PeopleSoft application to recognize the user correctly, there's a
## Enable Azure AD Multi-Factor Authentication

To provide an extra level of security for sign-ins, enforce multi-factor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure
-portal](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+portal](https://learn.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa).
1. Sign in to the Azure portal as a **Global Administrator**.
To confirm Oracle PeopleSoft application access occurs correctly, a prompt appea
- [Watch the video - Enable SSO/MFA for Oracle PeopleSoft with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90). -- [Configure Datawiza and Azure AD for secure hybrid access](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+- [Configure Datawiza and Azure AD for secure hybrid access](https://learn.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad)
-- [Configure Datawiza with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/partner-datawiza)
+- [Configure Datawiza with Azure AD B2C](https://learn.microsoft.com/azure/active-directory-b2c/partner-datawiza)
- [Datawiza documentation](https://docs.datawiza.com/)
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md
The following SSO protocols are available to use:
## Next steps -- Consider completing the single sign-on training in [Enable single sign-on for applications by using Azure Active Directory](/learn/modules/enable-single-sign-on).
+- Consider completing the single sign-on training in [Enable single sign-on for applications by using Azure Active Directory](/training/modules/enable-single-sign-on).
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
To review the admin consent requests and take action:
## Review admin consent requests using Microsoft Graph
-To review the admin consent requests programmatically, use the [appConsentRequest resource type](/graph/api/resources/userconsentrequest) and [userConsentRequest resource type](/graph/api/resources/userconsentrequest) and their associated methods in Microsoft Graph. You cannot approve or deny consent requests using Microsoft Graph.
+To review the admin consent requests programmatically, use the [appConsentRequest resource type](/graph/api/resources/appconsentrequest) and [userConsentRequest resource type](/graph/api/resources/userconsentrequest) and their associated methods in Microsoft Graph. You cannot approve or deny consent requests using Microsoft Graph.
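
As a rough illustration (assuming you're signed in to the Azure CLI with an account that has permission to read consent requests, and treating `<appConsentRequest-id>` as a placeholder), you can list the pending requests with `az rest`:

```azurecli-interactive
# List app consent requests; each one aggregates the user consent requests for a single app.
az rest --method get \
    --url "https://graph.microsoft.com/v1.0/identityGovernance/appConsent/appConsentRequests"

# List the user consent requests attached to one app consent request.
az rest --method get \
    --url "https://graph.microsoft.com/v1.0/identityGovernance/appConsent/appConsentRequests/<appConsentRequest-id>/userConsentRequests"
```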
## Next steps - [Review permissions granted to apps](manage-application-permissions.md)
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
To [manage access](what-is-access-management.md) for an application, you want to
You can [manage user consent settings](configure-user-consent.md) to choose whether users can allow an application or service to access user profiles and organizational data. When applications are granted access, users can sign in to applications integrated with Azure AD, and the application can access your organization's data to deliver rich data-driven experiences.
-Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. For training on how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](/learn/modules/configure-admin-consent-workflow).
+Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. For training on how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](/training/modules/configure-admin-consent-workflow).
As an administrator, you can [grant tenant-wide admin consent](grant-admin-consent.md) to an application. Tenant-wide admin consent is necessary when an application requires permissions that regular users aren't allowed to grant, and allows organizations to implement their own review processes. Always carefully review the permissions the application is requesting before granting consent. When an application has been granted tenant-wide admin consent, all users are able to sign into the application unless it has been configured to require user assignment.

### Single sign-on
-Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For training related to configuring SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](/learn/modules/enable-single-sign-on).
+Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For training related to configuring SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](/training/modules/enable-single-sign-on).
### User, group, and owner assignment
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
While developers can securely store the secrets in [Azure Key Vault](../../key-v
The following video shows how you can use managed identities:</br>
-> [!VIDEO https://docs.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
Here are some of the benefits of using managed identities:
active-directory How To View Applied Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md
To view the sign-in logs, use:
The output of this cmdlet contains an **AppliedConditionalAccessPolicies** property that shows all the conditional access policies applied to the sign-in.
-For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://docs.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0).
+For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://learn.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0).
The AzureAD Graph PowerShell module doesn't support viewing applied conditional access policies; only the Microsoft Graph PowerShell module returns applied conditional access policies.
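
If you prefer to call the Microsoft Graph REST API directly, a minimal sketch using `az rest` (assuming your signed-in account is permitted to read the sign-in logs) can surface the same property; the JMESPath query shown is illustrative only:

```azurecli-interactive
# Get the most recent sign-in event and show only its applied conditional access policies.
az rest --method get \
    --url 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=1' \
    --query "value[0].appliedConditionalAccessPolicies"
```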
To confirm that you have admin access to view applied conditional access policie
## Next steps * [Sign-ins error codes reference](./concept-sign-ins.md)
-* [Sign-ins report overview](concept-sign-ins.md)
+* [Sign-ins report overview](concept-sign-ins.md)
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Previously updated : 09/08/2022 Last updated : 09/19/2022
To complete these steps, you'll need the values you recorded earlier:
| SP entity ID (`entity-id`) | Identifier (Entity ID) |
| SP Single Sign-On URL (`single-sign-on-url`) | Reply URL (Assertion Consumer Service URL) |
| SP Single Logout URL (`single-logout-url`) | Logout URL |
-| IdP Entity ID (`idp-entity-id`) | Azure Login URL |
-| IdP Single Sign-On URL (`idp-single-sign-on-url`) | Azure AD Identifier |
+| IdP Entity ID (`idp-entity-id`) | Azure AD Identifier |
+| IdP Single Sign-On URL (`idp-single-sign-on-url`) | Azure Login URL |
| IdP Single Logout URL (`idp-single-logout-url`) | Azure Logout URL |
| IdP certificate (`idp-cert`) | Base64 SAML certificate name (REMOTE_Cert_N) |
| Username attribute (`user-name`) | username |
To complete these steps, you'll need the values you recorded earlier:
    set entity-id <Identifier (Entity ID)>
    set single-sign-on-url <Reply URL>
    set single-logout-url <Logout URL>
- set idp-entity-id <Azure Login URL>
- set idp-single-sign-on-url <Azure AD Identifier>
+ set idp-entity-id <Azure AD Identifier>
+ set idp-single-sign-on-url <Azure Login URL>
    set idp-single-logout-url <Azure Logout URL>
    set idp-cert <Base64 SAML Certificate Name>
    set user-name username
active-directory Mural Identity Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-tutorial.md
Previously updated : 12/10/2021 Last updated : 09/19/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
| Name | Source Attribute |
| -- | -- |
| email | user.userprincipalname |
+ | FirstName | user.givenname |
+ | LastName | user.surname |
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/certificate-base64-download.png)
1. On the **Set up MURAL Identity** section, copy the appropriate URL(s) based on your requirement.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure MURAL Identity SSO
-To configure single sign-on on **MURAL Identity** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [MURAL Identity support team](mailto:support@mural.co). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Log in to the MURAL Identity website as an administrator.
+
+1. Click your **name** in the bottom left corner of the dashboard and select **Company dashboard** from the list of options.
+
+1. Click **SSO** in the left sidebar and perform the below steps.
+
+ ![Screenshot of showing the configuration for MURAL.](./media/mural-identity-tutorial/settings.png)
+
+a. Download the **MURAL's metadata**.
+
+b. In the **Sign in URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal.
+
+c. In the **Sign in certificate** field, upload the **Certificate (PEM)**, which you have downloaded from the Azure portal.
+
+d. Select **HTTP-POST** as the Request binding type and select **SHA256** as the Sign in algorithm type.
+
+e. In the **Claim mapping** section, fill the following fields.
+
+* Email address: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`
+
+* First name: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname`
+
+* Last name: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname`
+
+f. Click **Test single sign-on** to test the configuration and **Save** it.
+
+> [!NOTE]
+> For more information on how to configure the SSO at MURAL, please follow [this](https://support.mural.co/articles/6224385-mural-s-azure-ad-integration) support page.
### Create MURAL Identity test user
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal. This will redirect to MURAL Identity Sign on URL where you can initiate the login flow.
-* Go to MURAL Identity Sign-on URL directly and initiate the login flow from there.
+* Go to MURAL Identity Sign on URL directly and initiate the login flow from there.
#### IDP initiated: * Click on **Test this application** in Azure portal and you should be automatically signed in to the MURAL Identity for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the MURAL Identity tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MURAL Identity for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the MURAL Identity tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MURAL Identity for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Change log
active-directory Rocketreach Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rocketreach-sso-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with RocketReach SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and RocketReach SSO.
++++++++ Last updated : 09/06/2022++++
+# Tutorial: Azure AD SSO integration with RocketReach SSO
+
+In this tutorial, you'll learn how to integrate RocketReach SSO with Azure Active Directory (Azure AD). When you integrate RocketReach SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to RocketReach SSO.
+* Enable your users to be automatically signed-in to RocketReach SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* RocketReach SSO single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* RocketReach SSO supports **SP** and **IDP** initiated SSO.
+* RocketReach SSO supports **Just In Time** user provisioning.
+
+## Add RocketReach SSO from the gallery
+
+To configure the integration of RocketReach SSO into Azure AD, you need to add RocketReach SSO from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **RocketReach SSO** in the search box.
+1. Select **RocketReach SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
+
+## Configure and test Azure AD SSO for RocketReach SSO
+
+Configure and test Azure AD SSO with RocketReach SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at RocketReach SSO.
+
+To configure and test Azure AD SSO with RocketReach SSO, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure RocketReach SSO](#configure-rocketreach-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create RocketReach SSO test user](#create-rocketreach-sso-test-user)** - to have a counterpart of B.Simon in RocketReach SSO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **RocketReach SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://rocketreach.co/login/sso`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up RocketReach SSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to RocketReach SSO.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **RocketReach SSO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure RocketReach SSO
+
+To configure single sign-on on **RocketReach SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [RocketReach SSO support team](mailto:support@rocketreach.co). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create RocketReach SSO test user
+
+In this section, a user called B.Simon is created in RocketReach SSO. RocketReach SSO supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in RocketReach SSO, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to RocketReach SSO Sign-on URL where you can initiate the login flow.
+
+* Go to RocketReach SSO Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the RocketReach SSO for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the RocketReach SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the RocketReach SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure RocketReach SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Sketch Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sketch-tutorial.md
Previously updated : 08/22/2022 Last updated : 09/13/2022
To configure and test Azure AD SSO with Sketch, perform the following steps:
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. 1. **[Configure Sketch SSO](#configure-sketch-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Sketch test user](#create-sketch-test-user)** - to have a counterpart of B.Simon in Sketch that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+## Choose a shortname for your Workspace in Sketch
+
+Follow these steps to choose a shortname and gather information to continue the setup process in Azure AD.
+
+>[!Note]
+> Before starting this process, make sure SSO is available in your Workspace by checking that there is an SSO tab in your Workspace Admin panel.
+> If you don't see the SSO tab, please reach out to customer support.
+1. [Sign in to your Workspace](https://www.sketch.com/signin/) as an Admin.
+1. Head to the **People & Settings** section in the sidebar.
+1. Click on the **Single Sign-On** tab.
+1. Click **Choose** a short name.
+1. Enter a unique name. It should be fewer than 16 characters and can only include letters, numbers, or hyphens. You can edit this name later on.
+1. Click **Submit**.
+1. Click on the first tab **Set Up Identity Provider**. In this tab, you'll find the unique Workspace values you'll need to set up the integration with Azure AD.
+ 1. **EntityID:** In Azure AD, this is the `Identifier` field.
+ 1. **ACS URL:** In Azure AD, this is the `Reply URL` field.
+
+Make sure to keep these values at hand! You'll need them in the next step. Click Copy next to each value to copy it to your clipboard.
+ ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** textbox, type a value using the following pattern:
+ a. In the **Identifier** textbox, use the `EntityID` field from the previous step. It looks like:
`sketch-<uuid_v4>`
- b. In the **Reply URL** textbox, type a URL using the following pattern:
+ b. In the **Reply URL** textbox, use the `ACS URL` field from the previous step. It looks like:
`https://sso.sketch.com/saml/acs?id=<uuid_v4>`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+1. Click **Set additional URLs** and perform the following step:
In the **Sign-on URL** text box, type the URL: `https://www.sketch.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-
-1. On the **Set up Sketch** section, copy the appropriate URL(s) based on your requirement.
-
- ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Sketch SSO
-To configure single sign-on on **Sketch** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Sketch support team](mailto:sso-support@sketch.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create Sketch test user
+Follow these steps to finish the configuration in Sketch.
-In this section, a user called B.Simon is created in Sketch. Sketch supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Sketch, a new one is created after authentication.
+1. In your Workspace, head to the **Set up Sketch** tab in the **Single Sign-On** window.
+1. Upload the XML file you downloaded previously in the **Import XML Metadata file** section.
+1. Log out.
+1. Click **Sign in with SSO**.
+1. Use the shortname you configured previously to proceed.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Sketch you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Sketch you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Before you can continue with the steps below you need to meet the following requ
## Scenario description
-When onboarding users you can remove the need for error prone manual onboarding steps by using Verified ID with A10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](https://docs.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster on
## Scenario description
-Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](https://docs.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
+Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](https://learn.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
:::image type="content" source="media/verified-id-partner-au10tix/vc-solution-architecture-diagram.png" alt-text="Diagram of the verifiable credential solution.":::
advisor Advisor Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-sovereign-clouds.md
+
+ Title: Sovereign cloud feature variations
+description: List of feature variations and usage limitations for Advisor in sovereign clouds.
+ Last updated : 09/19/2022++
+# Azure Advisor in sovereign clouds
+
+Azure sovereign clouds enable you to build and digitally transform workloads in the cloud while meeting your security, compliance, and policy requirements.
+
+## Azure Government (United States)
+
+The following Azure Advisor recommendation **features aren't currently available** in Azure Government:
+
+### Cost
+
+- (Preview) Consider App Service stamp fee reserved capacity to save over your on-demand costs.
+- (Preview) Consider Azure Data Explorer reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider Azure Synapse Analytics (formerly SQL DW) reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider Blob storage reserved capacity to save on Blob v2 and Data Lake Storage Gen2 costs.
+- (Preview) Consider Blob storage reserved instance to save on Blob v2 and Data Lake Storage Gen2 costs.
+- (Preview) Consider Cache for Redis reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider Cosmos DB reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider Database for MariaDB reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider Database for MySQL reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider Database for PostgreSQL reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider SQL DB reserved capacity to save over your pay-as-you-go costs.
+- (Preview) Consider SQL PaaS DB reserved capacity to save over your pay-as-you-go costs.
+- Consider App Service stamp fee reserved instance to save over your on-demand costs.
+- Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs.
+- Consider Cache for Redis reserved instance to save over your pay-as-you-go costs.
+- Consider Cosmos DB reserved instance to save over your pay-as-you-go costs.
+- Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs.
+- Consider Database for MySQL reserved instance to save over your pay-as-you-go costs.
+- Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs.
+- Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs.
+
+### Operational
+
+- Add Azure Monitor to your virtual machine (VM) labeled as production.
+- Delete and recreate your pool using a VM size that will soon be retired.
+- Enable Traffic Analytics to view insights into traffic patterns across Azure resources.
+- Enforce 'Add or replace a tag on resources' using Azure Policy.
+- Enforce 'Allowed locations' using Azure Policy.
+- Enforce 'Allowed virtual machine SKUs' using Azure Policy.
+- Enforce 'Audit VMs that don't use managed disks' using Azure Policy.
+- Enforce 'Inherit a tag from the resource group' using Azure Policy.
+- Update Azure Spring Cloud API Version.
+- Update your outdated Azure Spring Cloud SDK to the latest version.
+- Upgrade to the latest version of the Immersive Reader SDK.
+
+### Performance
+
+- Accelerated Networking may require stopping and starting the VM.
+- Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.
+- Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.
+- Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.
+- Consider increasing the size of your NVA to address persistent high CPU.
+- Distribute data in server group to distribute workload among nodes.
+- More than 75% of your queries are full scan queries.
+- NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.
+- Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.
+- Reads happen on most recent data.
+- Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly.
+- Update Attestation API Version.
+- Update Key Vault SDK Version.
+- Update to the latest version of your Arista VEOS product for Accelerated Networking support.
+- Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.
+- Update to the latest version of your Check Point product for Accelerated Networking support.
+- Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.
+- Update to the latest version of your F5 BigIp product for Accelerated Networking support.
+- Update to the latest version of your NetApp product for Accelerated Networking support.
+- Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.
+- Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs.
+- Use SSD Disks for your production workloads.
+- vSAN capacity utilization has crossed critical threshold.
+
+### Reliability
+
+- Avoid hostname override to ensure site integrity.
+- Check Point Virtual Machine may lose Network Connectivity.
+- Drop and recreate your HDInsight clusters to apply critical updates.
+- Upgrade device client SDK to a supported version for IotHub.
+- Upgrade to the latest version of the Azure Connected Machine agent.
+
+## Right size calculations
+
+The calculation for recommending that you should right-size or shut down underutilized virtual machines in Azure Government is as follows:
+
+- Advisor monitors your virtual machine usage for seven days and identifies low-utilization virtual machines.
+- Virtual machines are considered low utilization if their CPU utilization is 5% or less and their network utilization is less than 2%, or if the current workload can be accommodated by a smaller virtual machine size.
+
+If you want to be more aggressive at identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per subscription basis.
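
For example, assuming the Azure CLI Advisor commands behave the same way in Azure Government as in the public cloud, a minimal sketch of raising the threshold looks like this:

```azurecli-interactive
# Treat VMs averaging 10% CPU or less (instead of the default 5%) as low utilization.
az advisor configuration update --low-cpu-threshold 10
```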
+
+## Next steps
+
+For more information about Advisor recommendations, see:
+
+- [Introduction to Azure Advisor](./advisor-overview.md)
+- [Reliability recommendations](./advisor-high-availability-recommendations.md)
+- [Performance recommendations](./advisor-reference-performance-recommendations.md)
+- [Cost recommendations](./advisor-reference-cost-recommendations.md)
+- [Operational excellence recommendations](./advisor-reference-operational-excellence-recommendations.md)
advisor Advisor Tag Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-tag-filtering.md
You can now get Advisor recommendations and scores scoped to a workload, environ
* Compare scores for workloads to optimize the critical ones first > [!TIP]
-> For more information on how to use resource tags to organize and govern your Azure resources, please see the [Cloud Adoption FrameworkΓÇÖs guidance](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging) and [Build a cloud governance strategy on Azure](/learn/modules/build-cloud-governance-strategy-azure/).
+> For more information on how to use resource tags to organize and govern your Azure resources, please see the [Cloud Adoption Framework's guidance](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging) and [Build a cloud governance strategy on Azure](/training/modules/build-cloud-governance-strategy-azure/).
## How to filter recommendations using tags
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Learn more about the features of AKS mentioned in this article:
[node-sizing]: use-multiple-node-pools.md#specify-a-vm-size-for-a-node-pool [sustainability-calculator]: https://azure.microsoft.com/blog/microsoft-sustainability-calculator-helps-enterprises-analyze-the-carbon-emissions-of-their-it-infrastructure/ [system-pools]: use-system-pools.md
-[principles-sse]: /learn/modules/sustainable-software-engineering-overview/
+[principles-sse]: /training/modules/sustainable-software-engineering-overview/
aks Deployment Center Launcher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-center-launcher.md
# Deployment Center for Azure Kubernetes
+> [!IMPORTANT]
+> Deployment Center for Azure Kubernetes Service will be retired on March 31, 2023. [Learn more](/azure/aks/deployment-center-launcher#retirement)
+ Deployment Center in Azure DevOps simplifies setting up a robust Azure DevOps pipeline for your application. By default, Deployment Center configures an Azure DevOps pipeline to deploy your application updates to the Kubernetes cluster. You can extend the default configured Azure DevOps pipeline and also add richer capabilities: the ability to gain approval before deploying, provision additional Azure resources, run scripts, upgrade your application, and even run more validation tests. In this tutorial, you will:
You can delete the related resources that you created when you don't need them a
## Next steps

You can modify these build and release pipelines to meet the needs of your team. Or, you can use this CI/CD model as a template for your other pipelines.
+## Retirement
+
+Deployment Center for Azure Kubernetes will be retired on March 31, 2023 in favor of [Automated deployments](/azure/aks/automated-deployments). We encourage you to switch to Automated deployments to enjoy similar capabilities.
+
+#### Migration Steps
+
+No migration is required because the AKS Deployment Center experience doesn't store any information itself; it only helps users with their day-0 getting-started experience on Azure. Moving forward, the recommended way to get started with CI/CD for AKS is the [Automated deployments](/azure/aks/automated-deployments) feature.
+
+For existing pipelines, users will still be able to perform all operations from GitHub Actions or Azure DevOps after the retirement of this experience. Only the ability to create and view pipelines from Azure portal will be removed. See [GitHub Actions](https://docs.github.com/en/actions) or [Azure DevOps](/azure/devops/pipelines/get-started/pipelines-get-started) to learn how to get started.
+
+For new application deployments to AKS, you can get the same capabilities by using Automated deployments instead of Deployment Center.
+
+#### FAQ
+
+1. Where can I manage my CD pipeline after this experience is deprecated?
+
+Post retirement, you will not be able to view or create CD pipelines from Azure portal's AKS blade. However, as with the current experience, you can go to GitHub Actions or Azure DevOps portal and view or update the configured pipelines there.
+
+2. Will I lose my earlier configured pipelines?
+
+No. All the created pipelines will still be available and functional in GitHub or Azure DevOps. Only the experience of creating and viewing pipelines from Azure portal will be retired.
+
+3. How can I still configure CD pipelines directly through Azure portal?
+
+You can use Automated deployments available in the AKS blade in Azure portal.
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modif
## How does time synchronization work in AKS?
-AKS nodes run the "chrony" service which pulls time from the localhost, which in turn sync time with ntp.ubuntu.com. Containers running on pods get the time from the AKS nodes. Applications launched inside a container use time from the container of the pod.
+AKS nodes run the "chrony" service which pulls time from the localhost. Containers running on pods get the time from the AKS nodes. Applications launched inside a container use time from the container of the pod.
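
If you want to verify this on a cluster, one hedged approach (assuming an Ubuntu-based node pool where the `chronyc` client is present on the host) is to use a node debugging pod:

```bash
# Open an interactive debugging pod on one of your nodes (replace <node-name> with a real node name).
# The node's filesystem is mounted at /host inside the debugging pod.
kubectl debug node/<node-name> -it --image=ubuntu

# Then, inside the debugging pod's shell:
chroot /host
chronyc tracking    # reports chrony's current time source and offset
```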
<!-- LINKS - internal -->
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-action.md
Review the following starter workflows for AKS. For more details on using starte
- [Azure Kubernetes Service Kompose][aks-swf-kompose] > [!div class="nextstepaction"]
-> [Learn how to create multiple pipelines on GitHub Actions with AKS](/learn/modules/aks-deployment-pipeline-github-actions)
+> [Learn how to create multiple pipelines on GitHub Actions with AKS](/training/modules/aks-deployment-pipeline-github-actions)
> [!div class="nextstepaction"] > [Learn about Azure Kubernetes Service](/azure/architecture/reference-architectures/containers/aks-start-here)
Review the following starter workflows for AKS. For more details on using starte
[azure/login]: https://github.com/Azure/login [connect-gh-azure]: /azure/developer/github/connect-from-azure?tabs=azure-cli%2Clinux [gh-azure-vote]: https://github.com/Azure-Samples/azure-voting-app-redis
-[actions/checkout]: https://github.com/actions/checkout
+[actions/checkout]: https://github.com/actions/checkout
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
To run an AKS cluster that supports node pools for Windows Server containers, yo
> [!NOTE]
> To ensure that your cluster operates reliably, you should run at least two nodes in the default node pool.
-Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and sets it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell).
+Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell).
```azurecli-interactive echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
az aks nodepool add \
The above command creates a new node pool named *npwin* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
-## Add a Windows Server 2022 node pool (preview)
+## Add a Windows Server 2022 node pool
When creating a Windows node pool, the default operating system is Windows Server 2019. To use Windows Server 2022 nodes, you need to specify an OS SKU type of `Windows2022`; a minimal example follows the note below.
-### Install the `aks-preview` extension
-
-You also need the *aks-preview* Azure CLI extension version `0.5.68` or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command, or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `AKSWindows2022Preview` preview feature
-
-To use the feature, you must also enable the `AKSWindows2022Preview` feature flag on your subscription.
-
-Register the `AKSWindows2022Preview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKSWindows2022Preview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKSWindows2022Preview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
> [!NOTE]
> Windows Server 2022 requires Kubernetes version "1.23.0" or higher.
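
As a minimal sketch (the resource group, cluster, and node pool names are placeholders, and your cluster must run Kubernetes 1.23.0 or later), adding a Windows Server 2022 node pool might look like this:

```azurecli-interactive
# Add a Windows Server 2022 node pool to an existing AKS cluster.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npw22 \
    --os-type Windows \
    --os-sku Windows2022 \
    --node-count 1
```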
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
touch osm.aks.bicep && touch osm.aks.parameters.json
Open the *osm.aks.bicep* file and copy the following example content to it. Then save the file. ```azurecli-interactive
-// https://docs.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters
+// https://learn.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters
@minLength(3)
@maxLength(63)
@description('Provide a name for the AKS cluster. The only allowed characters are letters, numbers, dashes, and underscore. The first and last character must be a letter or a number.')
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
This article outlines some of the frequently asked questions and OS concepts for
## Which Windows operating systems are supported?
-AKS uses Windows Server 2019 as the host OS version and only supports process isolation. Container images built by using other Windows Server versions are not supported. For more information, see [Windows container version compatibility][windows-container-compat].
+AKS uses Windows Server 2019 and Windows Server 2022 as the host OS version and only supports process isolation. Container images built by using other Windows Server versions are not supported. For more information, see [Windows container version compatibility][windows-container-compat].
## Is Kubernetes different on Windows and Linux?
Yes, an ingress controller that supports Windows Server containers can run on Wi
## Can my Windows Server containers use gMSA?
-Group-managed service account (gMSA) support is currently available in preview. See [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)](use-group-managed-service-accounts.md)
+Group-managed service account (gMSA) support is generally available for Windows on AKS. See [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster](use-group-managed-service-accounts.md)
## Can I use Azure Monitor for containers with Windows nodes and containers?
api-management Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/plan-manage-costs.md
As you add or remove units, capacity and cost scale proportionally. For example,
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
- Learn about API Management [capacity](api-management-capacity.md). - See steps to scale and upgrade API Management using the [Azure portal](upgrade-and-scale.md), and learn about [autoscaling](api-management-howto-autoscale.md).
app-service App Service Asp Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md
The [app containerization tool](https://azure.microsoft.com/blog/accelerate-appl
## Next steps
-[Migrate an on-premises web application to Azure App Service](/learn/modules/migrate-app-service-migration-assistant/)
+[Migrate an on-premises web application to Azure App Service](/training/modules/migrate-app-service-migration-assistant/)
app-service App Service Migration Assess Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-assess-net.md
For more information on web apps assessment, see:
Next steps:
-[At-scale migration of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/)
+[At-scale migration of .NET web apps](/training/modules/migrate-app-service-migration-assistant/)
app-service App Service Migration Discover Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-migration-discover-net.md
For more information about web apps discovery please refer to:
Next steps:
-[At-scale assessment of .NET web apps](/learn/modules/migrate-app-service-migration-assistant/)
+[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/)
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
If you want to use your own DNS server, add the following records:
1. Create a zone for `<App Service Environment-name>.appserviceenvironment.net`. 1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment.
-1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment.
1. Create a zone in `<App Service Environment-name>.appserviceenvironment.net` named `scm`. 1. Create an A record in the `scm` zone that points * to the IP address used by the private endpoint of your App Service Environment.
To configure DNS in Azure DNS private zones (see the Azure CLI sketch after these steps):
1. Create an Azure DNS private zone named `<App Service Environment-name>.appserviceenvironment.net`. 1. Create an A record in that zone that points * to the inbound IP address.
-1. Create an A record in that zone that points @ to the inbound IP address.
1. Create an A record in that zone that points *.scm to the inbound IP address. In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps. If you're using custom domains, you need to ensure they have DNS records configured. You can follow the preceding guidance to configure DNS zones and records for a custom domain name (simply replace the default domain name with the custom domain name). The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
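A minimal Azure CLI sketch of the Azure DNS private zone steps above (the resource group, zone name, virtual network, and IP address values are placeholders):

```azurecli
# Create the private zone and link it to the virtual network (names and IPs are placeholders).
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name my-ase.appserviceenvironment.net

az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name my-ase.appserviceenvironment.net \
  --name my-ase-link \
  --virtual-network myVNet \
  --registration-enabled false

# Wildcard A records that point * and *.scm to the inbound IP address.
az network private-dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name my-ase.appserviceenvironment.net \
  --record-set-name "*" \
  --ipv4-address 10.0.0.10

az network private-dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name my-ase.appserviceenvironment.net \
  --record-set-name "*.scm" \
  --ipv4-address 10.0.0.10
```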
app-service Overview Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md
Last updated 06/23/2021
# Plan and manage costs for Azure App Service <!-- Check out the following published examples:-- [https://docs.microsoft.com/azure/cosmos-db/plan-manage-costs](../cosmos-db/plan-manage-costs.md)-- [https://docs.microsoft.com/azure/storage/common/storage-plan-manage-costs](../storage/common/storage-plan-manage-costs.md)-- [https://docs.microsoft.com/azure/machine-learning/concept-plan-manage-cost](../machine-learning/concept-plan-manage-cost.md)
+- [https://learn.microsoft.com/azure/cosmos-db/plan-manage-costs](../cosmos-db/plan-manage-costs.md)
+- [https://learn.microsoft.com/azure/storage/common/storage-plan-manage-costs](../storage/common/storage-plan-manage-costs.md)
+- [https://learn.microsoft.com/azure/machine-learning/concept-plan-manage-cost](../machine-learning/concept-plan-manage-cost.md)
--> <!-- Note for Azure service writer: Links to Cost Management articles are full URLS with the ?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn campaign suffix. Leave those URLs intact. They're used to measure traffic to Cost Management articles.
You can also [export your cost data](../cost-management-billing/costs/tutorial-e
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
<!-- Insert links to other articles that might help users save and manage costs for you service here.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
First, validate that the new platform update which contains Debian 11 has reache
Next, create a deployment slot to test that your application works properly with Debian 11 before applying the change to production. 1. [Create a deployment slot](deploy-staging-slots.md#add-a-slot) if you do not already have one, and clone your settings from the production slot. A deployment slot will allow you to safely test changes to your application (such as upgrading to Debian 11) and swap those changes into production after review.
-1. To upgrade to Debian 11 (Bullseye), create an app setting on your slot named `ORYX_DEFAULT_OS` with a value of `bullseye`.
+1. To upgrade to Debian 11 (Bullseye), create an app setting on your slot named `WEBSITE_LINUX_OS_VERSION` with a value of `DEBIAN|BULLSEYE`.
```bash
- az webapp config appsettings set -g MyResourceGroup -n MyUniqueApp --settings ORYX_DEFAULT_OS=bullseye
+ az webapp config appsettings set -g MyResourceGroup -n MyUniqueApp --settings WEBSITE_LINUX_OS_VERSION="DEBIAN|BULLSEYE"
``` 1. Deploy your application to the deployment slot using the tool of your choice (VS Code, Azure CLI, GitHub Actions, etc.) 1. Confirm your application is functioning as expected in the deployment slot.
-1. [Swap your production and staging slots](deploy-staging-slots.md#swap-two-slots). This will apply the `ORYX_DEFAULT_OS=bullseye` app setting to production.
+1. [Swap your production and staging slots](deploy-staging-slots.md#swap-two-slots). This will apply the `WEBSITE_LINUX_OS_VERSION=DEBIAN|BULLSEYE` app setting to production.
1. Delete the deployment slot if you are no longer using it. ##### Resources
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
mvn clean package
The final result will be a JAR file in the `target/` subfolder.
-To deploy applications to Azure App Service, developers can use the [Maven Plugin for App Service](/learn/modules/publish-web-app-with-maven-plugin-for-azure-app-service/), [VSCode Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice), or the Azure CLI to deploy apps. Use the following command to deploy our app to the App Service:
+To deploy applications to Azure App Service, developers can use the [Maven Plugin for App Service](/training/modules/publish-web-app-with-maven-plugin-for-azure-app-service/), the [VSCode Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice), or the Azure CLI. Use the following command to deploy the app to App Service:
```azurecli az webapp deploy \
and
Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
+> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
app-service Tutorial Networking Isolate Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-networking-isolate-vnet.md
Because your Key Vault and Cognitive Services resources will sit behind [private
az webapp config appsettings set --resource-group $groupName --name $appName --settings CS_ACCOUNT_NAME="@Microsoft.KeyVault(SecretUri=$csResourceKVUri)" CS_ACCOUNT_KEY="@Microsoft.KeyVault(SecretUri=$csKeyKVUri)" ```
- <!-- If above is not run then it takes a whole day for references to update? https://docs.microsoft.com/en-us/azure/app-service/app-service-key-vault-references#rotation -->
+ <!-- If above is not run then it takes a whole day for references to update? https://learn.microsoft.com/azure/app-service/app-service-key-vault-references#rotation -->
> [!NOTE] > Again, you can observe the behavior change in the sample app. You can no longer load the app because it can no longer access the key vault references. The app has lost its connectivity to the key vault through the shared networking.
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
:::image type="content" source="./media/tutorial-nodejs-mongodb-app/app-diagram.png" alt-text="A diagram showing how the Express.js app will be deployed to Azure App Service and the MongoDB data will be hosted inside of Azure Cosmos DB." lightbox="./media/tutorial-nodejs-mongodb-app/app-diagram-large.png":::
-This article assumes you're already familiar with [Node.js development](/learn/paths/build-javascript-applications-nodejs/) and have Node and MongoDB installed locally. You'll also need an Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/nodejs/).
+This article assumes you're already familiar with [Node.js development](/training/paths/build-javascript-applications-nodejs/) and have Node and MongoDB installed locally. You'll also need an Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/nodejs/).
## Sample application
Most of the time taken by the two-job process is spent uploading and download ar
> [JavaScript on Azure developer center](/azure/developer/javascript) > [!div class="nextstepaction"]
-> [Configure Node.js app in App Service](./configure-language-nodejs.md)
+> [Configure Node.js app in App Service](./configure-language-nodejs.md)
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
In this tutorial, you'll deploy a data-driven Python web app (**[Django](https:/
**To complete this tutorial, you'll need:** * An Azure account with an active subscription exists. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
-* Knowledge of Python with Flask development or [Python with Django development](/learn/paths/django-create-data-driven-websites/)
+* Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/)
* [Python 3.7 or higher](https://www.python.org/downloads/) installed locally. * [PostgreSQL](https://www.postgresql.org/download/) installed locally.
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md
var jsonData = JsonSerializer.Serialize(new
}); HttpResponseMessage result = await client.PostAsync(
- // Requires DI configuration to access app settings. See https://docs.microsoft.com/azure/app-service/configure-language-dotnetcore#access-environment-variables
+ // Requires DI configuration to access app settings. See https://learn.microsoft.com/azure/app-service/configure-language-dotnetcore#access-environment-variables
_configuration["LOGIC_APP_URL"], new StringContent(jsonData, Encoding.UTF8, "application/json"));
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
The access log is generated only if you've enabled it on each Application Gatewa
} } ```
+> [!NOTE]
+> Access logs with a clientIP value of 127.0.0.1 originate from an internal security process running on the Application Gateway instances. You can safely ignore these log entries.
### Performance log
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 available
> [!TIP] > It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway)
+### Virtual network permission
+
+Because Application Gateway resources are deployed within a virtual network, the service verifies permissions on the provided virtual network resource during both create and manage operations.
+
+You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) assignments to verify that users or service principals who operate application gateways have at least the **Microsoft.Network/virtualNetworks/subnets/join/action** permission, or a higher-level permission such as the built-in [Network contributor](../role-based-access-control/built-in-roles.md) role, on the virtual network. See [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md) to learn more about subnet permissions.
+
+If a [built-in](../role-based-access-control/built-in-roles.md) role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md) for this purpose.
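For example, a minimal Azure CLI sketch (the principal ID, resource group, and virtual network names are placeholders) that assigns the built-in Network Contributor role at the virtual network scope:

```azurecli
# Look up the virtual network resource ID (names are placeholders).
vnetId=$(az network vnet show \
  --resource-group myResourceGroup \
  --name myVNet \
  --query id --output tsv)

# Grant the built-in Network Contributor role on the virtual network.
az role assignment create \
  --assignee "<user-or-service-principal-object-id>" \
  --role "Network Contributor" \
  --scope "$vnetId"
```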
+ ## Network security groups Network security groups (NSGs) are supported on Application Gateway. But there are some restrictions:
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md
For more information, see a list of [all platform metrics supported in Azure Mon
For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
-<!-- See https://docs.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
+<!-- See https://learn.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
Azure Application Gateway supports dimensions for some of the metrics in Azure Monitor. Each metric includes a description that explains the available dimensions specifically for that metric.
Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-mon
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Application Gateway and available for query by Log Analytics.
-<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com.
--> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
sslEnabled_s | Does the client request have SSL enabled|
<!-- replace below with the proper link to your main monitoring service article --> - See [Monitoring Azure Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md
Resource Logs are not collected and stored until you create a diagnostic setting
See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Application Gateway are listed in [Azure Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs).
-<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://docs.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. -->
+<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://learn.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. -->
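In the spirit of the optional note above, a minimal Azure CLI sketch (the setting name and resource IDs are placeholders) that creates a diagnostic setting sending the access log to a Log Analytics workspace:

```azurecli
# Send the Application Gateway access log to a Log Analytics workspace (IDs are placeholders).
az monitor diagnostic-settings create \
  --name appgw-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace" \
  --logs '[{"category": "ApplicationGatewayAccessLog", "enabled": true}]'
```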
The metrics and logs you can collect are discussed in the following sections.
The following tables list common and recommended alert rules for Application Gat
- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway. -- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
An Azure PowerShell script is available in the PowerShell gallery to help you mi
Depending on your requirements and environment, you can create a test Application Gateway using either the Azure portal, Azure PowerShell, or Azure CLI. - [Tutorial: Create an application gateway that improves web application access](tutorial-autoscale-ps.md)-- [Learn module: Introduction to Azure Application Gateway](/learn/modules/intro-to-azure-application-gateway)
+- [Learn module: Introduction to Azure Application Gateway](/training/modules/intro-to-azure-application-gateway)
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md
Depending on your requirements and environment, you can create a test Applicatio
- [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md) - [Quickstart: Direct web traffic with Azure Application Gateway - Azure PowerShell](quick-create-powershell.md) - [Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI](quick-create-cli.md)-- [Learn module: Introduction to Azure Application Gateway](/learn/modules/intro-to-azure-application-gateway)
+- [Learn module: Introduction to Azure Application Gateway](/training/modules/intro-to-azure-application-gateway)
- [How an application gateway works](how-application-gateway-works.md) - [Frequently asked questions about Azure Application Gateway](application-gateway-faq.yml)
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
To configure Private link on an existing Application Gateway via Azure PowerShel
```azurepowershell # Disable Private Link Service Network Policies
-# https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
+# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
$net =@{ Name = 'AppGW-PL-PSH' ResourceGroupName = 'AppGW-PL-PSH-RG'
Set-AzApplicationGatewayFrontendIPConfig -ApplicationGateway $agw -Name "appGwPu
Set-AzApplicationGateway -ApplicationGateway $agw # Disable Private Endpoint Network Policies
-# https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
+# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
$net =@{ Name = 'AppGW-PL-Endpoint-PSH-VNET' ResourceGroupName = 'AppGW-PL-Endpoint-PSH-RG'
To configure Private link on an existing Application Gateway via Azure CLI, the
```azurecli # Disable Private Link Service Network Policies
-# https://docs.microsoft.com/en-us/azure/private-link/disable-private-endpoint-network-policy
+# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
az network vnet subnet update \ --name AppGW-PL-Subnet \ --vnet-name AppGW-PL-CLI-VNET \
az network application-gateway private-link list \
# Disable Private Endpoint Network Policies
-# https://docs.microsoft.com/en-us/azure/private-link/disable-private-endpoint-network-policy
+# https://learn.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
az network vnet subnet update \ --name MySubnet \ --vnet-name AppGW-PL-Endpoint-CLI-VNET \
applied-ai-services Compose Custom Models V2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v2-1.md
Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected s
[Get started with Train with labels](label-tool.md)
-> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
## Create a composed model
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
-> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
## Prerequisites
applied-ai-services How To Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-multiple-resources.md
The **getimmersivereaderlaunchparams** API endpoint should be secured behind som
.then(function (response) { const token = response["token"]; const subdomain = response["subdomain"];
- // Learn more about chunk usage and supported MIME types https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#chunk
+ // Learn more about chunk usage and supported MIME types https://learn.microsoft.com/azure/cognitive-services/immersive-reader/reference#chunk
const data = { Title: $("#ir-title").text(), chunks: [{
The **getimmersivereaderlaunchparams** API endpoint should be secured behind som
mimeType: "text/html" }] };
- // Learn more about options https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#options
+ // Learn more about options https://learn.microsoft.com/azure/cognitive-services/immersive-reader/reference#options
const options = { "onExit": exitCallback, "uiZIndex": 2000
The **getimmersivereaderlaunchparams** API endpoint should be secured behind som
## Next steps * Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
-* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp)
+* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp)
automation Enable Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/enable-managed-identity.md
This Quickstart shows you how to enable managed identities for an Azure Automati
1. Set the system-assigned **Status** option to **On** and then press **Save**. When you're prompted to confirm, select **Yes**.
- Your Automation account can now use the system-assigned identity, which is registered with Azure Active Directory (Azure AD) and is represented by an object ID.
+   Your Automation account can now use the system-assigned identity, which is registered with Azure Active Directory (Azure AD) and represented by an object ID.
:::image type="content" source="media/enable-managed-identity/system-assigned-object-id.png" alt-text="Managed identity object ID.":::
azure-arc Automated Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md
export SPN_TENANT_ID="..."
export SUBSCRIPTION_ID="..." # Optional: certain integration tests test upload to Log Analytics workspace:
-# https://docs.microsoft.com/azure/azure-arc/data/upload-logs
+# https://learn.microsoft.com/azure/azure-arc/data/upload-logs
export WORKSPACE_ID="..." export WORKSPACE_SHARED_KEY="..."
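# Optional sketch: look up the two values above with the Azure CLI instead of pasting them in
# (the resource group and workspace names are placeholders).
export WORKSPACE_ID="$(az monitor log-analytics workspace show --resource-group myResourceGroup --workspace-name myWorkspace --query customerId --output tsv)"
export WORKSPACE_SHARED_KEY="$(az monitor log-analytics workspace get-shared-keys --resource-group myResourceGroup --workspace-name myWorkspace --query primarySharedKey --output tsv)"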
This cleans up the resource manifests deployed as part of the launcher.
## Next steps > [!div class="nextstepaction"]
-> [Pre-release testing](preview-testing.md)
+> [Pre-release testing](preview-testing.md)
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md
There are multiple options for the degree of connectivity from your Azure Arc-enabled data services environment to Azure. As your requirements vary based on business policy, government regulation, or the availability of network connectivity to Azure, you can choose from the following connectivity modes.
-Azure Arc-enabled data services provides you the option to connect to Azure in two different *connectivity modes*:
+Azure Arc-enabled data services provide the option to connect to Azure in two different *connectivity modes*:
- Directly connected - Indirectly connected
Some Azure-attached services are only available when they can be directly reache
|**Feature**|**Indirectly connected**|**Directly connected**| |||| |**Automatic high availability**|Supported|Supported|
-|**Self-service provisioning**|Supported<br/>Creation can be done through Azure Data Studio, the appropriate CLI, or Kubernetes native tools (helm, kubectl, oc, etc.), or using Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates.
+|**Self-service provisioning**|Supported<br/>Use Azure Data Studio, the appropriate CLI, or Kubernetes native tools like Helm, `kubectl`, or `oc`, or use Azure Arc-enabled Kubernetes GitOps provisioning.|Supported<br/>In addition to the indirectly connected mode creation options, you can also create through the Azure portal, Azure Resource Manager APIs, the Azure CLI, or ARM templates.
|**Elastic scalability**|Supported|Supported<br/>| |**Billing**|Supported<br/>Billing data is periodically exported out and sent to Azure.|Supported<br/>Billing data is automatically and continuously sent to Azure and reflected in near real time. | |**Inventory management**|Supported<br/>Inventory data is periodically exported out and sent to Azure.<br/><br/>Use client tools like Azure Data Studio, Azure Data CLI, or `kubectl` to view and manage inventory locally.|Supported<br/>Inventory data is automatically and continuously sent to Azure and reflected in near real time. As such, you can manage inventory directly from the Azure portal.|
Some Azure-attached services are only available when they can be directly reache
Connections to the following services available on the internet are required: - [Microsoft Container Registry (MCR)](#microsoft-container-registry-mcr)
+- [Helm chart (direct connected mode)](#helm-chart-direct-connected-mode)
- [Azure Resource Manager APIs](#azure-resource-manager-apis) - [Azure monitor APIs](#azure-monitor-apis)
+- [Azure Arc data processing service](#azure-arc-data-processing-service)
All HTTPS connections to Azure and the Microsoft Container Registry are encrypted using SSL/TLS using officially signed and verifiable certificates.
Yes
None
-### Helm chart used to create data controller in direct connected mode
+### Helm chart (direct connected mode)
-The helm chart used to provision the Azure Arc data controller bootstrapper and cluster level objects, such as custom resource definitions, cluster roles, and cluster role bindings, is pulled from an Azure Container Registry.
+The Helm chart used to provision the Azure Arc data controller bootstrapper and cluster-level objects, such as custom resource definitions, cluster roles, and cluster role bindings, is pulled from an Azure Container Registry.
#### Connection source
A computer running Azure Data Studio or the Azure CLI that is connecting to Azure.
- `login.microsoftonline.com` - `management.azure.com`-- `san-af-eastus-prod.azurewebsites.net`-- `san-af-eastus2-prod.azurewebsites.net`-- `san-af-australiaeast-prod.azurewebsites.net`-- `san-af-centralus-prod.azurewebsites.net`-- `san-af-westus2-prod.azurewebsites.net`-- `san-af-westeurope-prod.azurewebsites.net`-- `san-af-southeastasia-prod.azurewebsites.net`-- `san-af-koreacentral-prod.azurewebsites.net`-- `san-af-northeurope-prod.azurewebsites.net`-- `san-af-westeurope-prod.azurewebsites.net`-- `san-af-uksouth-prod.azurewebsites.net`-- `san-af-francecentral-prod.azurewebsites.net` #### Protocol
HTTPS
Yes
+To use a proxy, verify that the agents meet the network requirements. See [Meet network requirements](../kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
+ #### Authentication Azure Active Directory
Azure Active Directory
> For now, all browser HTTPS/443 connections to the data controller for running the command `az arcdata dc export` and Grafana and Kibana dashboards are SSL encrypted using self-signed certificates. A feature will be available in the future that will allow you to provide your own certificates for encryption of these SSL connections. Connectivity from Azure Data Studio to the Kubernetes API server uses the Kubernetes authentication and encryption that you have established. Each user that is using Azure Data Studio or CLI must have an authenticated connection to the Kubernetes API to perform many of the actions related to Azure Arc-enabled data services.+
+### Azure Arc data processing service
+
+The connection targets the Azure Arc data processing service endpoints listed below. (A reachability check sketch follows at the end of this section.)
+
+#### Connection target
+
+- `san-af-eastus-prod.azurewebsites.net`
+- `san-af-eastus2-prod.azurewebsites.net`
+- `san-af-australiaeast-prod.azurewebsites.net`
+- `san-af-centralus-prod.azurewebsites.net`
+- `san-af-westus2-prod.azurewebsites.net`
+- `san-af-westeurope-prod.azurewebsites.net`
+- `san-af-southeastasia-prod.azurewebsites.net`
+- `san-af-koreacentral-prod.azurewebsites.net`
+- `san-af-northeurope-prod.azurewebsites.net`
+- `san-af-westeurope-prod.azurewebsites.net`
+- `san-af-uksouth-prod.azurewebsites.net`
+- `san-af-francecentral-prod.azurewebsites.net`
+
+#### Protocol
+
+HTTPS
+
+#### Can use proxy
+
+Yes
+
+To use a proxy, verify that the agents meet the network requirements. See [Meet network requirements](../kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
+
+#### Authentication
+
+None
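A minimal bash sketch (assumes `curl` is available; only two of the regional endpoints are shown) to confirm outbound HTTPS reachability to these endpoints from your network:

```bash
# Check outbound HTTPS (443) reachability to a subset of the regional endpoints.
for host in san-af-eastus-prod.azurewebsites.net san-af-westeurope-prod.azurewebsites.net; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://${host}")
  echo "${host}: HTTP ${code}"
done
```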
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-overview.md
To learn more about these capabilities, watch these introductory videos.
### Azure Arc-enabled SQL Managed Instance - indirect connected mode
-> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
### Azure Arc-enabled SQL Managed Instance - direct connected mode
-> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
## Next steps
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
Currently, the following Azure Arc-enabled data services are available:
For an introduction to how Azure Arc-enabled data services supports your hybrid work environment, see this introductory video:
-> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
## Always current
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is developed in collaboration with Microsoft Research. As a re
The following video highlights the benefits of Durable Functions:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player]
For a more in-depth discussion of Durable Functions and the underlying technology, see the following video (it's focused on .NET, but the concepts also apply to other supported languages):
-> [!VIDEO https://docs.microsoft.com/Events/dotnetConf/2018/S204/player]
+> [!VIDEO https://learn.microsoft.com/Events/dotnetConf/2018/S204/player]
Because Durable Functions is an advanced extension for [Azure Functions](../functions-overview.md), it isn't appropriate for all applications. For a comparison with other Azure orchestration technologies, see [Compare Azure Functions and Azure Logic Apps](../functions-compare-logic-apps-ms-flow-webjobs.md#compare-azure-functions-and-azure-logic-apps).
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
As a C# developer, you may also be interested in one of the following articles:
| Getting started | Concepts| Guided learning/samples | |--| -- |--|
-| <ul><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md)</li><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md)</li><li>[Using command line tools](create-first-function-cli-csharp.md)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li><li>[Visual Studio development](functions-develop-vs.md)</li><li>[Dependency injection](functions-dotnet-dependency-injection.md)</li></ul> | <ul><li>[Create serverless applications](/learn/paths/create-serverless-applications/)</li><li>[C# samples](/samples/browse/?products=azure-functions&languages=csharp)</li></ul> |
+| <ul><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md)</li><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md)</li><li>[Using command line tools](create-first-function-cli-csharp.md)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li><li>[Visual Studio development](functions-develop-vs.md)</li><li>[Dependency injection](functions-dotnet-dependency-injection.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[C# samples](/samples/browse/?products=azure-functions&languages=csharp)</li></ul> |
Azure Functions supports C# and C# script programming languages. If you're looking for guidance on [using C# in the Azure portal](functions-create-function-app-portal.md), see [C# script (.csx) developer reference](functions-reference-csharp.md).
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio](./functions-create-your-first-function-visual-studio.md)<li>[Visual Studio Code](./create-first-function-vs-code-csharp.md)<li>[Command line](./create-first-function-cli-csharp.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=csharp&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=C%23) |
-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/training/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=csharp)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [C# language reference](./functions-dotnet-class-library.md)|
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-java.md)<li>[Jav) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Java) |
-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/learn/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/training/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=java)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Java language reference](./functions-reference-java.md)| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-node.md)<li>[Node.js terminal/command prompt](./create-first-function-cli-node.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript%2ctypescript&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript%2CTypeScript) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/training/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/training/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/training/modules/shift-nodejs-express-apis-serverless/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=javascript)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [JavaScript](./functions-reference-node.md) or [TypeScript](./functions-reference-node.md#typescript) language reference| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | <li>Using [Visual Studio Code](./create-first-function-vs-code-powershell.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=powershell&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=PowerShell) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/training/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/training/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/training/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=powershell)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [PowerShell language reference](./functions-reference-powershell.md)| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-python.md)<li>[Terminal/command prompt](./create-first-function-cli-python.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Python) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/training/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/training/modules/create-serverless-logic-with-azure-functions/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=python)<li>[Security](./security-concepts.md)<li>[Improve throughput performance](./python-scale-performance-reference.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Python language reference](./functions-reference-python.md)| ::: zone-end
azure-functions Functions Hybrid Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-hybrid-powershell.md
The following script enables PowerShell remoting, and it creates a new firewall
```powershell # For configuration of WinRM, see
-# https://docs.microsoft.com/windows/win32/winrm/installation-and-configuration-for-windows-remote-management.
+# https://learn.microsoft.com/windows/win32/winrm/installation-and-configuration-for-windows-remote-management.
# Enable PowerShell remoting. Enable-PSRemoting -Force
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md
For more information about inbound rule configuration, see the "Network Security
For function apps that run on Linux in a container, the `Azure Functions runtime is unreachable` error can occur as a result of problems with the container. Use the following procedure to review the container logs for errors:
-1. Navigate to the Kudu endpoint for the function app, which is located at `https://scm.<FUNCTION_APP>.azurewebsites.net`, where `<FUNCTION_APP>` is the name of your app.
+1. Navigate to the Kudu endpoint for the function app, which is located at `https://<FUNCTION_APP>.scm.azurewebsites.net`, where `<FUNCTION_APP>` is the name of your app.
1. Download the Docker logs .zip file and review the contents on your local computer. (You can also pull these logs with the Azure CLI, as shown in the sketch that follows.)
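Alternatively, a minimal Azure CLI sketch (the resource group and app names are placeholders) that downloads the same container log archive:

```azurecli
# Download the container (Docker) log files as a .zip archive (names are placeholders).
az webapp log download \
  --resource-group myResourceGroup \
  --name myFunctionApp \
  --log-file logs.zip
```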
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Title: Azure Functions C# script developer reference
description: Understand how to develop Azure Functions using C# script. Previously updated : 12/12/2017 Last updated : 09/15/2022 # Azure Functions C# script (.csx) developer reference
Last updated 12/12/2017
This article is an introduction to developing Azure Functions by using C# script (*.csx*).
-Azure Functions supports C# and C# script programming languages. If you're looking for guidance on [using C# in a Visual Studio class library project](functions-develop-vs.md), see [C# developer reference](functions-dotnet-class-library.md).
+Azure Functions lets you develop functions using C# in one of the following ways:
+
+| Type | Execution process | Code extension | Development environment | Reference |
+| | - | | | |
+| C# script | in-process | .csx | [Portal](functions-create-function-app-portal.md)<br/>[Core Tools](functions-run-local.md) | This article |
+| C# class library | in-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [In-process C# class library functions](functions-dotnet-class-library.md) |
+| C# class library (isolated process)| out-of-process | .cs | [Visual Studio](functions-develop-vs.md)<br/>[Visual Studio Code](functions-develop-vs-code.md)<br />[Core Tools](functions-run-local.md) | [.NET isolated process functions](dotnet-isolated-process-guide.md) |
This article assumes that you've already read the [Azure Functions developers guide](functions-reference.md). ## How .csx works
-The C# script experience for Azure Functions is based on the [Azure WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki/Introduction). Data flows into your C# function via method arguments. Argument names are specified in a `function.json` file, and there are predefined names for accessing things like the function logger and cancellation tokens.
+Data flows into your C# function via method arguments. Argument names are specified in a `function.json` file, and there are predefined names for accessing things like the function logger and cancellation tokens.
The *.csx* format allows you to write less "boilerplate" and focus on writing just a C# function. Instead of wrapping everything in a namespace and class, just define a `Run` method. Include any assembly references and namespaces at the beginning of the file as usual.
-A function app's *.csx* files are compiled when an instance is initialized. This compilation step means things like cold start may take longer for C# script functions compared to C# class libraries. This compilation step is also why C# script functions are editable in the Azure portal, while C# class libraries are not.
+A function app's *.csx* files are compiled when an instance is initialized. This compilation step means things like cold start may take longer for C# script functions compared to C# class libraries. This compilation step is also why C# script functions are editable in the Azure portal, while C# class libraries aren't.
## Folder structure
-The folder structure for a C# script project looks like the following:
+The folder structure for a C# script project looks like the following example:
``` FunctionsProject
FunctionsProject
There's a shared [host.json](functions-host-json.md) file that can be used to configure the function app. Each function has its own code file (.csx) and binding configuration file (function.json).
-The binding extensions required in [version 2.x and later versions](functions-versions.md) of the Functions runtime are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When developing locally, you must [register binding extensions](./functions-bindings-register.md#extension-bundles). When developing functions in the Azure portal, this registration is done for you.
+The binding extensions required in [version 2.x and later versions](functions-versions.md) of the Functions runtime are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When developing locally, you must [register binding extensions](./functions-bindings-register.md#extension-bundles). When you develop functions in the Azure portal, this registration is done for you.
## Binding to arguments
The following assemblies are automatically added by the Azure Functions hosting
* `System.Web.Http` * `System.Net.Http.Formatting`
-The following assemblies may be referenced by simple-name (for example, `#r "AssemblyName"`):
+The following assemblies may be referenced by simple name, depending on the runtime version:
+
+# [v2.x+](#tab/functionsv2)
+
+* `Newtonsoft.Json`
+* `Microsoft.WindowsAzure.Storage`<sup>*</sup>
+
+<sup>*</sup>Removed in version 4.x of the runtime.
+
+# [v1.x](#tab/functionsv1)
* `Newtonsoft.Json` * `Microsoft.WindowsAzure.Storage` * `Microsoft.ServiceBus` * `Microsoft.AspNet.WebHooks.Receivers` * `Microsoft.AspNet.WebHooks.Common`
-* `Microsoft.Azure.NotificationHubs`
++++
+In code, assemblies are referenced like the following example:
+
+```csharp
+#r "AssemblyName"
+```
## Referencing custom assemblies
By default, the [supported set of Functions extension NuGet packages](functions-
If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core Tools to install extensions based on bindings defined in the function.json files in your app. When using Core Tools to register extensions, make sure to use the `--csx` option. To learn more, see [Install extensions](functions-run-local.md#install-extensions).
-By default, Core Tools reads the function.json files and adds the required packages to an *extensions.csproj* C# class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core Tools builds the extensions.csproj to install the required libraries. Here is an example *extensions.csproj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*:
+By default, Core Tools reads the function.json files and adds the required packages to an *extensions.csproj* C# class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core Tools builds the extensions.csproj to install the required libraries. Here's an example *extensions.csproj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*:
```xml <Project Sdk="Microsoft.NET.Sdk">
By default, Core Tools reads the function.json files and adds the required packa
# [v1.x](#tab/functionsv1)
-Version 1.x of the Functions runtime uses a *project.json* file to define dependencies. Here is an example *project.json* file:
+Version 1.x of the Functions runtime uses a *project.json* file to define dependencies. Here's an example *project.json* file:
```json {
Extension bundles aren't supported by version 1.x.
To use a custom NuGet feed, specify the feed in a *Nuget.Config* file in the function app root folder. For more information, see [Configuring NuGet behavior](/nuget/consume-packages/configuring-nuget-behavior).
-If you are working on your project only in the portal, you'll need to manually create the extensions.csproj file or a Nuget.Config file directly in the site. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions).
+If you're working on your project only in the portal, you'll need to manually create the extensions.csproj file or a Nuget.Config file directly in the site. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions).
## Environment variables
using (var output = await binder.BindAsync<T>(new BindingTypeAttribute(...)))
``` `BindingTypeAttribute` is the .NET attribute that defines your binding and `T` is an input or output type that's
-supported by that binding type. `T` cannot be an `out` parameter type (such as `out JObject`). For example, the
+supported by that binding type. `T` can't be an `out` parameter type (such as `out JObject`). For example, the
Mobile Apps table output binding supports [six output types](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs#L17-L22), but you can only use [ICollector\<T>](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/ICollector.cs)
public static async Task Run(string input, Binder binder)
defines the [Storage blob](functions-bindings-storage-blob.md) input or output binding, and [TextWriter](/dotnet/api/system.io.textwriter) is a supported output binding type.
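As a hedged sketch of that pattern (the blob path `samples-output/log.txt` is hypothetical), an imperative binding in a *run.csx* file could look like this:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static async Task Run(string input, Binder binder)
{
    // Bind to a blob at runtime. BlobAttribute defines the Storage blob binding,
    // and TextWriter is one of the output types the blob binding supports.
    using (var writer = await binder.BindAsync<TextWriter>(
        new BlobAttribute("samples-output/log.txt", FileAccess.Write)))
    {
        await writer.WriteLineAsync(input);
    }
}
```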
-### Multiple attribute example
+### Multiple attributes example
The preceding example gets the app setting for the function app's main Storage account connection string (which is `AzureWebJobsStorage`). You can specify a custom app setting to use for the Storage account by adding the [StorageAccountAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs)
public static async Task Run(string input, Binder binder)
} ```
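Building on that, a sketch with multiple attributes might look like the following; the app setting name `MyStorageAccount` and the blob path are assumptions for illustration only.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static async Task Run(string input, Binder binder)
{
    // The first attribute defines the blob binding; StorageAccountAttribute points it
    // at a custom connection app setting instead of the default AzureWebJobsStorage.
    var attributes = new Attribute[]
    {
        new BlobAttribute("samples-output/log.txt", FileAccess.Write),
        new StorageAccountAttribute("MyStorageAccount")
    };

    using (var writer = await binder.BindAsync<TextWriter>(attributes))
    {
        await writer.WriteLineAsync(input);
    }
}
```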
-The following table lists the .NET attributes for each binding type and the packages in which they are defined.
+The following table lists the .NET attributes for each binding type and the packages in which they're defined.
> [!div class="mx-codeBreakAll"]
> | Binding | Attribute | Add reference |
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
As an Express.js, Node.js, or JavaScript developer, if you're new to Azure Funct
| Getting started | Concepts | Guided learning |
| -- | -- | -- |
-| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance&nbsp; considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/learn/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/learn/modules/shift-nodejs-express-apis-serverless/)</li></ul> |
+| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance&nbsp; considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/training/modules/shift-nodejs-express-apis-serverless/)</li></ul> |
## JavaScript function basics
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
This type of streaming logs requires that Application Insights integration be en
## Next steps
-Learn how to [develop, test, and publish Azure functions by using Azure Functions core tools](/learn/modules/develop-test-deploy-azure-functions-with-core-tools/). Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli). To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues).
+Learn how to [develop, test, and publish Azure functions by using Azure Functions core tools](/training/modules/develop-test-deploy-azure-functions-with-core-tools/). Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli). To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues).
<!-- LINKS -->
azure-functions Performance Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/performance-reliability.md
All functions in your local project are deployed together as a set of files to y
### Organize functions by privilege
-Connection strings and other credentials stored in application settings gives all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/learn/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
+Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/training/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
## Scalability best practices
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
Permissions are effective at the function app level. The Contributor role is req
#### Organize functions by privilege
-Connection strings and other credentials stored in application settings gives all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/learn/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
+Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/training/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
#### Managed identities
azure-functions Shift Expressjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/shift-expressjs.md
When migrating code to a serverless architecture, refactoring Express.js endpoin
- **Configuration and conventions**: A Functions app uses the _function.json_ file to define HTTP verbs and security policies, and to configure the function's [input and output](./functions-triggers-bindings.md). By default, the name of the folder that contains the function files defines the endpoint name, but you can change the name via the `route` property in the [function.json](./functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint) file.

> [!TIP]
-> Learn more through the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/).
+> Learn more through the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/training/modules/shift-nodejs-express-apis-serverless/).
## Example
By defining `get` in the `methods` array, the function is available to HTTP `GET
## Next steps -- Learn more with the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/)
+- Learn more with the interactive tutorial [Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/training/modules/shift-nodejs-express-apis-serverless/)
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 07/14/2022 Last updated : 09/20/2022 # Compare Azure Government and global Azure
Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services
### [Cognitive
-For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/Speech-Service/sovereign-clouds.md).
+For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/speech-service/sovereign-clouds.md).
### [Cognitive
-The following Translator **features aren't currently available** in Azure Government:
--- Custom Translator-- Translator Hub
+For feature variations and limitations, including API endpoints, see [Translator in sovereign clouds](../cognitive-services/translator/sovereign-clouds.md).
## Analytics
The following Automation **features aren't currently available** in Azure Govern
### [Azure Advisor](../advisor/index.yml)
-The following Azure Advisor recommendation **features aren't currently available** in Azure Government:
--- Cost
- - (Preview) Consider App Service stamp fee reserved capacity to save over your on-demand costs.
- - (Preview) Consider Azure Data Explorer reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider Azure Synapse Analytics (formerly SQL DW) reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider Blob storage reserved capacity to save on Blob v2 and Data Lake Storage Gen2 costs.
- - (Preview) Consider Blob storage reserved instance to save on Blob v2 and Data Lake Storage Gen2 costs.
- - (Preview) Consider Cache for Redis reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider Cosmos DB reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider Database for MariaDB reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider Database for MySQL reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider Database for PostgreSQL reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider SQL DB reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider SQL PaaS DB reserved capacity to save over your pay-as-you-go costs.
- - Consider App Service stamp fee reserved instance to save over your on-demand costs.
- - Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs.
- - Consider Cache for Redis reserved instance to save over your pay-as-you-go costs.
- - Consider Cosmos DB reserved instance to save over your pay-as-you-go costs.
- - Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs.
- - Consider Database for MySQL reserved instance to save over your pay-as-you-go costs.
- - Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs.
- - Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs.
-- Operational
- - Add Azure Monitor to your virtual machine (VM) labeled as production.
- - Delete and recreate your pool using a VM size that will soon be retired.
- - Enable Traffic Analytics to view insights into traffic patterns across Azure resources.
- - Enforce 'Add or replace a tag on resources' using Azure Policy.
- - Enforce 'Allowed locations' using Azure Policy.
- - Enforce 'Allowed virtual machine SKUs' using Azure Policy.
- - Enforce 'Audit VMs that don't use managed disks' using Azure Policy.
- - Enforce 'Inherit a tag from the resource group' using Azure Policy.
- - Update Azure Spring Cloud API Version.
- - Update your outdated Azure Spring Cloud SDK to the latest version.
- - Upgrade to the latest version of the Immersive Reader SDK.
-- Performance
- - Accelerated Networking may require stopping and starting the VM.
- - Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.
- - Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.
- - Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.
- - Consider increasing the size of your NVA to address persistent high CPU.
- - Distribute data in server group to distribute workload among nodes.
- - More than 75% of your queries are full scan queries.
- - NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.
- - Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.
- - Reads happen on most recent data.
- - Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly.
- - Update Attestation API Version.
- - Update Key Vault SDK Version.
- - Update to the latest version of your Arista VEOS product for Accelerated Networking support.
- - Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.
- - Update to the latest version of your Check Point product for Accelerated Networking support.
- - Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.
- - Update to the latest version of your F5 BigIp product for Accelerated Networking support.
- - Update to the latest version of your NetApp product for Accelerated Networking support.
- - Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.
- - Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs.
- - Use SSD Disks for your production workloads.
- - vSAN capacity utilization has crossed critical threshold.
-- Reliability
- - Avoid hostname override to ensure site integrity.
- - Check Point Virtual Machine may lose Network Connectivity.
- - Drop and recreate your HDInsight clusters to apply critical updates.
- - Upgrade device client SDK to a supported version for IotHub.
- - Upgrade to the latest version of the Azure Connected Machine agent.
-
-The calculation for recommending that you should right-size or shut down underutilized virtual machines in Azure Government is as follows:
--- Advisor monitors your virtual machine usage for seven days and identifies low-utilization virtual machines.-- Virtual machines are considered low utilization if their CPU utilization is 5% or less and their network utilization is less than 2%, or if the current workload can be accommodated by a smaller virtual machine size.-
-If you want to be more aggressive at identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per subscription basis.
+For feature variations and limitations, see [Azure Advisor in sovereign clouds](../advisor/advisor-sovereign-clouds.md).
### [Azure Lighthouse](../lighthouse/index.yml)
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
Review one of the following quickstarts to set up a build for your specific type
$isAzureModulePresent = Get-Module -Name Az -ListAvailable if ([String]::IsNullOrEmpty($isAzureModulePresent) -eq $true) {
- Write-Output "Script requires Azure PowerShell modules to be present. Obtain Azure PowerShell from https://docs.microsoft.com//powershell/azure/install-az-ps" -Verbose
+ Write-Output "Script requires Azure PowerShell modules to be present. Obtain Azure PowerShell from https://learn.microsoft.com//powershell/azure/install-az-ps" -Verbose
return }
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
For most of these scenarios, Microsoft and its partners offer a customer-managed
### Machine learning model training
-[Artificial intelligence](/learn/modules/azure-artificial-intelligence/1-introduction-to-azure-artificial-intelligence) (AI) holds tremendous potential for governments. [Machine learning](/learn/modules/azure-artificial-intelligence/3-machine-learning) (ML) is a data science technique that allows computers to learn to use existing data, without being explicitly programmed, to forecast future behaviors, outcomes, and trends. Moreover, [ML technologies](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning) can discover patterns, anomalies, and predictions that can help governments in their missions. As technical barriers continue to fall, decision-makers face the opportunity to develop and explore transformative AI applications. There are five main vectors that can make it easier, faster, and cheaper to adopt ML:
+[Artificial intelligence](/training/modules/azure-artificial-intelligence/1-introduction-to-azure-artificial-intelligence) (AI) holds tremendous potential for governments. [Machine learning](/training/modules/azure-artificial-intelligence/3-machine-learning) (ML) is a data science technique that allows computers to learn to use existing data, without being explicitly programmed, to forecast future behaviors, outcomes, and trends. Moreover, [ML technologies](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning) can discover patterns, anomalies, and predictions that can help governments in their missions. As technical barriers continue to fall, decision-makers face the opportunity to develop and explore transformative AI applications. There are five main vectors that can make it easier, faster, and cheaper to adopt ML:
- Unsupervised learning - Reducing need for training data
Synthetic data can exist in several forms, including text, audio, video, and hyb
### Knowledge mining
-The exponential growth of unstructured data gathering in recent years has created many analytical problems for government agencies. This problem intensifies when data sets come from diverse sources such as text, audio, video, imaging, and so on. [Knowledge mining](/learn/modules/azure-artificial-intelligence/2-knowledge-mining) is the process of discovering useful knowledge from a collection of diverse data sources. This widely used data mining technique is a process that includes data preparation and selection, data cleansing, incorporation of prior knowledge on data sets, and interpretation of accurate solutions from the observed results. This process has proven to be useful for large volumes of data in different government agencies.
+The exponential growth of unstructured data gathering in recent years has created many analytical problems for government agencies. This problem intensifies when data sets come from diverse sources such as text, audio, video, imaging, and so on. [Knowledge mining](/training/modules/azure-artificial-intelligence/2-knowledge-mining) is the process of discovering useful knowledge from a collection of diverse data sources. This widely used data mining technique is a process that includes data preparation and selection, data cleansing, incorporation of prior knowledge on data sets, and interpretation of accurate solutions from the observed results. This process has proven to be useful for large volumes of data in different government agencies.
For instance, captured data from the field often includes documents, pamphlets, letters, spreadsheets, propaganda, videos, and audio files across many disparate structured and unstructured formats. Buried within the data are [actionable insights](https://www.youtube.com/watch?v=JFdF-Z7ypQo) that can enhance effective and timely response to crisis and drive decisions. The objective of knowledge mining is to enable decisions that are better, faster, and more humane by implementing proven commercial algorithm-based technologies.
azure-government Documentation Government Welcome https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-welcome.md
The following video provides a good introduction to Azure Government:
</br>
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player]
## Compare Azure Government and global Azure
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
The following video explains Azure Maps in depth:
</br>
-> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny]
## Map controls
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md
When visualizing many data points on the map, data points may overlap over each
</br>
->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
## Prerequisites
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
When visualizing many data points on the map, data points may overlap over each
</br>
->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
## Enabling clustering on a data source
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md
This video provides an overview of data-driven styling in Azure Maps.
</br>
->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
+>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
## Data expressions
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md
This video provides an overview of data-driven styling in the Azure Maps Web SDK
</br>
->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
+>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
Expressions are represented as JSON arrays. The first element of an expression in the array is a string that specifies the name of the expression operator. For example, "+" or "case". The next elements (if any) are the arguments to the expression. Each argument is either a literal value (a string, number, boolean, or `null`), or another expression array. The following pseudocode defines the basic structure of an expression.
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md
This video provides examples for making REST calls to Azure Maps Weather service
</br>
->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny]
+>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny]
## Prerequisites
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
This video provides an overview of Spatial IO module in the Azure Maps Web SDK.
</br>
-> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny]
> [!WARNING]
> Only use data and services that are from a source you trust, especially if referencing it from another domain. The spatial IO module does take steps to minimize risk; however, the safest approach is to not allow any dangerous data into your application to begin with.
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Learn about accessibility in the Web SDK modules.
Learn about developing accessible apps: > [!div class="nextstepaction"]
-> [Accessibility in Action Digital Badge Learning Path](https://ready.azurewebsites.net/learning/track/2940)
+> [Accessibility in Action Digital Badge learning path](https://ready.azurewebsites.net/learning/track/2940)
Take a look at these useful accessibility tools: > [!div class="nextstepaction"]
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer-android.md
You can use heat maps in many different scenarios, including:
</br>
-> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
## Prerequisites
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
You can use heat maps in many different scenarios, including:
</br>
->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
## Add a heat map layer
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
The table provides a high-level list of Azure Maps features, which correspond to
| Distance Matrix | ✓ |
| Elevation | ✓ |
| Geocoding (Forward/Reverse) | ✓ |
-| Geolocation | N/A |
+| Geolocation | ✓ |
| Nearest Roads | ✓ |
| Places Search | ✓ |
| Places Details | N/A – website & phone number available |
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
JFR recording can be viewed and analyzed with your preferred tool, for example [
On-demand profiling is triggered by the user in real time, whereas automatic profiling runs on preconfigured triggers.
-Use [Profile Now](https://github.com/johnoliver/azure-docs-pr/blob/add-java-profiler-doc/articles/azure-monitor/profiler/profiler-settings.md) for the on-demand profiling option. [Profile Now](https://github.com/johnoliver/azure-docs-pr/blob/add-java-profiler-doc/articles/azure-monitor/profiler/profiler-settings.md) will immediately profile all agents attached to the Application Insights instance.
-
+Use [Profile Now](../profiler/profiler-settings.md) for the on-demand profiling option. [Profile Now](../profiler/profiler-settings.md) will immediately profile all agents attached to the Application Insights instance.
+ Automated profiling is triggered by a breach in a resource threshold.
-
+ ### Which Java profiling triggers can I configure?

Application Insights Java Agent currently supports monitoring of CPU and memory consumption. The CPU threshold is configured as a percentage of all available cores on the machine. The memory threshold is the current occupancy of the Tenured memory region (OldGen), measured against the maximum possible size of the region.
-
+ ### What are the required prerequisites to enable Java Profiling?

Review the [Pre-requisites](#prerequisites) at the top of this article.
azure-monitor Resource Manager App Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-app-resource.md
param type string
@description('Which Azure Region to deploy the resource to. This must be a valid Azure regionId.') param regionId string
-@description('See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources.')
+@description('See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources.')
param tagsArray object @description('Source of Azure Resource Manager deployment')
resource component 'Microsoft.Insights/components@2020-02-02' = {
"tagsArray": { "type": "object", "metadata": {
- "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources."
+ "description": "See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources."
} }, "requestSource": {
param type string
@description('Which Azure Region to deploy the resource to. This must be a valid Azure regionId.') param regionId string
-@description('See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources.')
+@description('See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources.')
param tagsArray object @description('Source of Azure Resource Manager deployment')
resource component 'Microsoft.Insights/components@2020-02-02' = {
"tagsArray": { "type": "object", "metadata": {
- "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources."
+ "description": "See documentation on tags: https://learn.microsoft.com/azure/azure-resource-manager/management/tag-resources."
} }, "requestSource": {
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
# Enable Application Insights for ASP.NET Core applications
-This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application deployed as an Azure Web App. This implementation utilizes an SDK-based approach, an [auto-instrumentation approach](./codeless-overview.md) is also available.
+This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application deployed as an Azure Web App. This implementation uses an SDK-based approach. An [auto-instrumentation approach](./codeless-overview.md) is also available.
Application Insights can collect the following telemetry from your ASP.NET Core application:
Application Insights can collect the following telemetry from your ASP.NET Core
> * Heartbeats
> * Logs
-We'll use an [ASP.NET Core MVC application](/aspnet/core/tutorials/first-mvc-app) example that targets `net6.0`. You can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md).
+For a sample application, we'll use an [ASP.NET Core MVC application](/aspnet/core/tutorials/first-mvc-app) that targets `net6.0`. However, you can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md).
> [!NOTE] > A preview [OpenTelemetry-based .NET offering](./opentelemetry-enable.md?tabs=net) is available. [Learn more](./opentelemetry-overview.md).
We'll use an [ASP.NET Core MVC application](/aspnet/core/tutorials/first-mvc-app
## Supported scenarios
-The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. Support covers the following scenarios:
+The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, Application Insights can collect telemetry from it. Application Insights monitoring is supported everywhere .NET Core is supported. The following scenarios are supported:
* **Operating system**: Windows, Linux, or Mac
* **Hosting method**: In process or out of process
* **Deployment method**: Framework dependent or self-contained
-* **Web server**: IIS (Internet Information Server) or Kestrel
+* **Web server**: Internet Information Server (IIS) or Kestrel
* **Hosting platform**: The Web Apps feature of Azure App Service, Azure VM, Docker, Azure Kubernetes Service (AKS), and so on
* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that aren't in preview
* **IDE**: Visual Studio, Visual Studio Code, or command line

## Prerequisites
-If you'd like to follow along with the guidance in this article, certain pre-requisites are needed.
+To complete this tutorial, you need:
* Visual Studio 2022
-* Visual Studio Workloads: ASP.NET and web development, Data storage and processing, and Azure development
+* The following Visual Studio workloads:
+ * ASP.NET and web development
+ * Data storage and processing
+ * Azure development
* .NET 6.0
* Azure subscription and user account (with the ability to create and delete resources)

## Deploy Azure resources
-Please follow the guidance to deploy the sample application from its [GitHub repository.](https://github.com/gitopsbook/sample-app-deployment).
+Follow the [guidance to deploy the sample application from its GitHub repository](https://github.com/gitopsbook/sample-app-deployment).
-In order to provide globally unique names to some resources, a 5 character suffix has been assigned. Please make note of this suffix for use later on in this article.
+To provide globally unique names, a six-character suffix is assigned to some resources. Make a note of this suffix for use later in this article.
-![The deployed Azure resource listing displays with the 5 character suffix highlighted.](./media/tutorial-asp-net-core/naming-suffix.png "Record the 5 character suffix")
## Create an Application Insights resource
-1. In the [Azure portal](https://portal.azure.com), locate and select the **application-insights-azure-cafe** resource group.
+1. In the [Azure portal](https://portal.azure.com), select the **application-insights-azure-cafe** resource group.
2. From the top toolbar menu, select **+ Create**.
- ![The resource group application-insights-azure-cafe displays with the + Create button highlighted on the toolbar menu.](./media/tutorial-asp-net-core/create-resource-menu.png "Create new resource")
+ :::image type="content" source="media/tutorial-asp-net-core/create-resource-menu.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the + Create button highlighted on the toolbar menu." lightbox="media/tutorial-asp-net-core/create-resource-menu.png":::
-3. On the **Create a resource** screen, search for and select `Application Insights` in the marketplace search textbox.
+3. On the **Create a resource** screen, search for and select **Application Insights** in the marketplace search textbox.
- ![The Create a resource screen displays with Application Insights entered into the search box and Application Insights highlighted from the search results.](./media/tutorial-asp-net-core/search-application-insights.png "Search for Application Insights")
+ <!-- The long description for search-application-insights.png: Screenshot of the Create a resource screen in the Azure portal. The screenshot shows a search for Application Insights highlighted and Application Insights displaying in the search results, which is also highlighted. -->
+ :::image type="content" source="media/tutorial-asp-net-core/search-application-insights.png" alt-text="Screenshot of the Create a resource screen in the Azure portal." lightbox="media/tutorial-asp-net-core/search-application-insights.png":::
4. On the Application Insights resource overview screen, select **Create**.
- ![The Application Insights overview screen displays with the Create button highlighted.](./media/tutorial-asp-net-core/create-application-insights-overview.png "Create Application Insights resource")
+ :::image type="content" source="media/tutorial-asp-net-core/create-application-insights-overview.png" alt-text="Screenshot of the Application Insights overview screen in the Azure portal with the Create button highlighted." lightbox="media/tutorial-asp-net-core/create-application-insights-overview.png":::
-5. On the Application Insights screen **Basics** tab. Complete the form as follows, then select the **Review + create** button. Fields not specified in the table below may retain their default values.
+5. On the **Basics** tab of the Application Insights screen, complete the form by using the following table, and then select the **Review + create** button. Fields not specified in the table below may retain their default values.

| Field | Value |
|-|-|
| Name | Enter `azure-cafe-application-insights-{SUFFIX}`, replacing **{SUFFIX}** with the appropriate suffix value recorded earlier. |
| Region | Select the same region chosen when deploying the article resources. |
- | Log Analytics Workspace | Select `azure-cafe-log-analytics-workspace`, alternatively a new log analytics workspace can be created here. |
+ | Log Analytics Workspace | Select **azure-cafe-log-analytics-workspace**. Alternatively, you can create a new log analytics workspace. |
- ![The Application Insights Basics tab displays with a form populated with the preceding values.](./media/tutorial-asp-net-core/application-insights-basics-tab.png "Application Insights Basics tab")
+ :::image type="content" source="media/tutorial-asp-net-core/application-insights-basics-tab.png" alt-text="Screenshot of the Basics tab of the Application Insights screen in the Azure portal with a form populated with the preceding values." lightbox="media/tutorial-asp-net-core/application-insights-basics-tab.png":::
6. Once validation has passed, select **Create** to deploy the resource.
- ![The Application Insights validation screen displays indicating Validation passed and the Create button is highlighted.](./media/tutorial-asp-net-core/application-insights-validation-passed.png "Validation passed")
+ :::image type="content" source="media/tutorial-asp-net-core/application-insights-validation-passed.png" alt-text="Screenshot of the Application Insights screen in the Azure portal. The message stating validation has passed and Create button are both highlighted." lightbox="media/tutorial-asp-net-core/application-insights-validation-passed.png":::
-7. Once deployment has completed, return to the `application-insights-azure-cafe` resource group, and select the deployed Application Insights resource.
+7. Once the resource is deployed, return to the `application-insights-azure-cafe` resource group, and select the Application Insights resource you deployed.
- ![The Azure Cafe resource group displays with the Application Insights resource highlighted.](./media/tutorial-asp-net-core/application-insights-resource-group.png "Application Insights")
+ :::image type="content" source="media/tutorial-asp-net-core/application-insights-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-core/application-insights-resource-group.png":::
-8. On the Overview screen of the Application Insights resource, copy the **Connection String** value for use in the next section of this article.
+8. On the Overview screen of the Application Insights resource, select the **Copy to clipboard** button to copy the connection string value. You will use the connection string value in the next section of this article.
- ![The Application Insights Overview screen displays with the Connection String value highlighted and the Copy button selected.](./media/tutorial-asp-net-core/application-insights-connection-string-overview.png "Copy Connection String value")
+ <!-- The long description for application-insights-connection-string-overview.png: Screenshot of the Application Insights Overview screen in the Azure portal. The screenshot shows the connection string value highlighted and the Copy to clipboard button selected and highlighted. -->
+ :::image type="content" source="media/tutorial-asp-net-core/application-insights-connection-string-overview.png" alt-text="Screenshot of the Application Insights Overview screen in the Azure portal." lightbox="media/tutorial-asp-net-core/application-insights-connection-string-overview.png":::
## Configure the Application Insights connection string application setting in the web App Service
-1. Return to the `application-insights-azure-cafe` resource group, locate and open the **azure-cafe-web-{SUFFIX}** App Service resource.
+1. Return to the `application-insights-azure-cafe` resource group and open the **azure-cafe-web-{SUFFIX}** App Service resource.
- ![The Azure Cafe resource group displays with the azure-cafe-web-{SUFFIX} resource highlighted.](./media/tutorial-asp-net-core/web-app-service-resource-group.png "Web App Service")
+ :::image type="content" source="media/tutorial-asp-net-core/web-app-service-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the azure-cafe-web-{SUFFIX} resource highlighted." lightbox="media/tutorial-asp-net-core/web-app-service-resource-group.png":::
-2. From the left menu, beneath the Settings header, select **Configuration**. Then, on the **Application settings** tab, select **+ New application setting** beneath the Application settings header.
+2. From the left menu, under the Settings section, select **Configuration**. Then, on the **Application settings** tab, select **+ New application setting** beneath the Application settings header.
- ![The App Service resource screen displays with the Configuration item selected from the left menu and the + New application setting toolbar button highlighted.](./media/tutorial-asp-net-core/app-service-app-setting-button.png "Create New application setting")
+ <!-- The long description for app-service-app-setting-button.png: Screenshot of the App Service resource screen in the Azure portal. The screenshot shows Configuration in the left menu under the Settings section selected and highlighted, the Application settings tab selected and highlighted, and the + New application setting toolbar button highlighted. -->
+ :::image type="content" source="media/tutorial-asp-net-core/app-service-app-setting-button.png" alt-text="Screenshot of the App Service resource screen in the Azure portal." lightbox="media/tutorial-asp-net-core/app-service-app-setting-button.png":::
3. In the Add/Edit application setting blade, complete the form as follows and select **OK**.

| Field | Value |
|-|-|
| Name | APPLICATIONINSIGHTS_CONNECTION_STRING |
- | Value | Paste the Application Insights connection string obtained in the preceding section. |
+ | Value | Paste the Application Insights connection string value you copied in the preceding section. |
- ![The Add/Edit application setting blade displays populated with the preceding values.](./media/tutorial-asp-net-core/add-edit-app-setting.png "Add/Edit application setting")
+ :::image type="content" source="media/tutorial-asp-net-core/add-edit-app-setting.png" alt-text="Screenshot of the Add/Edit application setting blade in the Azure portal with the preceding values populated in the Name and Value fields." lightbox="media/tutorial-asp-net-core/add-edit-app-setting.png":::
4. On the App Service Configuration screen, select the **Save** button from the toolbar menu. When prompted to save the changes, select **Continue**.
- ![The App Service Configuration screen displays with the Save button highlighted on the toolbar menu.](./media/tutorial-asp-net-core/save-app-service-configuration.png "Save the App Service Configuration")
+ :::image type="content" source="media/tutorial-asp-net-core/save-app-service-configuration.png" alt-text="Screenshot of the App Service Configuration screen in the Azure portal with the Save button highlighted on the toolbar menu." lightbox="media/tutorial-asp-net-core/save-app-service-configuration.png":::
## Install the Application Insights NuGet Package

We need to configure the ASP.NET Core MVC web application to send telemetry. This is accomplished using the [Application Insights for ASP.NET Core web applications NuGet package](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore).
-1. With Visual Studio, open `1 - Starter Application\src\AzureCafe.sln`.
+1. In Visual Studio, open `1 - Starter Application\src\AzureCafe.sln`.
-2. In the Solution Explorer panel, right-click the AzureCafe project file, and select **Manage NuGet Packages**.
+2. In the Visual Studio Solution Explorer panel, right-click on the AzureCafe project file and select **Manage NuGet Packages**.
- ![The Solution Explorer displays with Manage NuGet Packages selected from the context menu.](./media/tutorial-asp-net-core/manage-nuget-packages-menu.png "Manage NuGet Packages")
+ :::image type="content" source="media/tutorial-asp-net-core/manage-nuget-packages-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Manage NuGet Packages context menu item highlighted." lightbox="media/tutorial-asp-net-core/manage-nuget-packages-menu.png":::
-3. Select the **Browse** tab, then search for and select **Microsoft.ApplicationInsights.AspNetCore**. Select **Install**, and accept the license terms. It is recommended to use the latest stable version. Find full release notes for the SDK on the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases).
+3. Select the **Browse** tab and then search for and select **Microsoft.ApplicationInsights.AspNetCore**. Select **Install**, and accept the license terms. It is recommended you use the latest stable version. For the full release notes for the SDK, see the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases).
- ![The NuGet tab displays with the Browse tab selected and Microsoft.ApplicationInsights.AspNetCore is entered in the search box. The Microsoft.ApplicationInsights.AspNetCore package is selected from a list of results. In the right pane, the latest stable version is selected from a drop down list and the Install button is highlighted.](./media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png "Install NuGet Package")
+ <!-- The long description for asp-net-core-install-nuget-package.png: Screenshot that shows the NuGet Package Manager user interface in Visual Studio with the Browse tab selected. Microsoft.ApplicationInsights.AspNetCore is entered in the search box, and the Microsoft.ApplicationInsights.AspNetCore package is selected from a list of results. In the right pane, the latest stable version of the Microsoft.ApplicationInsights.AspNetCore package is selected from a drop down list and the Install button is highlighted. -->
+ :::image type="content" source="media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png" alt-text="Screenshot of the NuGet Package Manager user interface in Visual Studio." lightbox="media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png":::
-4. Keep Visual Studio open for the next section of the article.
+ Keep Visual Studio open for the next section of the article.
## Enable Application Insights server-side telemetry

The Application Insights for ASP.NET Core web applications NuGet package encapsulates features to enable sending server-side telemetry to the Application Insights resource in Azure.
-1. From the Visual Studio Solution Explorer, locate and open the **Program.cs** file.
+1. From the Visual Studio Solution Explorer, open the **Program.cs** file.
- ![The Visual Studio Solution Explorer displays with the Program.cs highlighted.](./media/tutorial-asp-net-core/solution-explorer-programcs.png "Program.cs")
+ :::image type="content" source="media/tutorial-asp-net-core/solution-explorer-programcs.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Program.cs file highlighted." lightbox="media/tutorial-asp-net-core/solution-explorer-programcs.png":::
-2. Insert the following code prior to the `builder.Services.AddControllersWithViews()` statement. This code automatically reads the Application Insights connection string value from configuration. The `AddApplicationInsightsTelemetry` method registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container, that will then be used to fulfill [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) and [ILogger\<TCategoryName\>](/dotnet/api/microsoft.extensions.logging.iloggerprovider) implementation requests.
+2. Insert the following code prior to the `builder.Services.AddControllersWithViews()` statement. This code automatically reads the Application Insights connection string value from configuration. The `AddApplicationInsightsTelemetry` method registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container that will then be used to fulfill [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) and [ILogger\<TCategoryName\>](/dotnet/api/microsoft.extensions.logging.iloggerprovider) implementation requests.
```csharp
builder.Services.AddApplicationInsightsTelemetry();
```
- ![A code window displays with the preceding code snippet highlighted.](./media/tutorial-asp-net-core/enable-server-side-telemetry.png "Enable server-side telemetry")
+ :::image type="content" source="media/tutorial-asp-net-core/enable-server-side-telemetry.png" alt-text="Screenshot of a code window in Visual Studio with the preceding code snippet highlighted." lightbox="media/tutorial-asp-net-core/enable-server-side-telemetry.png":::
> [!TIP]
- > Learn more about [configuration options in ASP.NET Core](/aspnet/core/fundamentals/configuration).
+ > Learn more about the [configuration options in ASP.NET Core](/aspnet/core/fundamentals/configuration).
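As an optional illustration of what this registration enables (the controller and message below are hypothetical, not part of the Azure Cafe sample), any class can now request an `ILogger<T>` from dependency injection, and its entries are sent to Application Insights as trace telemetry:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    // ILogger<HomeController> is resolved by the built-in dependency injection container,
    // which now routes log entries through the ApplicationInsightsLoggerProvider.
    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        _logger.LogInformation("Home page requested at {RequestTime}", DateTime.UtcNow);
        return View();
    }
}
```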
## Enable client-side telemetry for web applications
-The preceding steps are enough to help you start collecting server-side telemetry. This application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md).
+The preceding steps are enough to help you start collecting server-side telemetry. The sample application has client-side components. Follow the next steps to start collecting [usage telemetry](./usage-overview.md).
-1. In Visual Studio Solution explorer, locate and open `\Views\_ViewImports.cshtml`. Add the following code at the end of the existing file.
+1. In Visual Studio Solution Explorer, open `\Views\_ViewImports.cshtml`.
+
+2. Add the following code at the end of the existing file.
```cshtml
@inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
```
- ![The _ViewImports.cshtml file displays with the preceding line of code highlighted.](./media/tutorial-asp-net-core/view-imports-injection.png "JavaScriptSnippet injection")
+ :::image type="content" source="media/tutorial-asp-net-core/view-imports-injection.png" alt-text="Screenshot of the _ViewImports.cshtml file in Visual Studio with the preceding line of code highlighted." lightbox="media/tutorial-asp-net-core/view-imports-injection.png":::
-2. To properly enable client-side monitoring for your application, the JavaScript snippet must appear in the `<head>` section of each page of your application that you want to monitor. In Visual Studio Solution Explorer, locate and open `\Views\Shared\_Layout.cshtml`, insert the following code immediately preceding the closing `<\head>` tag.
+3. To properly enable client-side monitoring for your application, in Visual Studio Solution Explorer, open `\Views\Shared\_Layout.cshtml` and insert the following code immediately before the closing `</head>` tag. This JavaScript snippet must be inserted in the `<head>` section of each page of your application that you want to monitor.
```cshtml
@Html.Raw(JavaScriptSnippet.FullScript)
```
- ![The _Layout.cshtml file displays with the preceding line of code highlighted within the head section of the page.](./media/tutorial-asp-net-core/layout-head-code.png "The head section of _Layout.cshtml")
+ :::image type="content" source="media/tutorial-asp-net-core/layout-head-code.png" alt-text="Screenshot of the _Layout.cshtml file in Visual Studio with the preceding line of code highlighted within the head section of the file." lightbox="media/tutorial-asp-net-core/layout-head-code.png":::
> [!TIP]
- > As an alternative to using the `FullScript`, the `ScriptBody` is available. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy:
+ > An alternative to using `FullScript` is `ScriptBody`. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy:
```cshtml <script> // apply custom changes to this script tag.
The preceding steps are enough to help you start collecting server-side telemetr
## Enable monitoring of database queries
-When investigating causes for performance degradation, it is important to include insights into database calls. Enable monitoring through configuration of the [dependency module](./asp-net-dependencies.md). Dependency monitoring, including SQL is enabled by default. The following steps can be followed to capture the full SQL query text.
+When investigating causes for performance degradation, it is important to include insights into database calls. You enable monitoring by configuring the [dependency module](./asp-net-dependencies.md). Dependency monitoring, including SQL, is enabled by default.
+
+Follow these steps to capture the full SQL query text.
> [!NOTE] > SQL text may contain sensitive data such as passwords and PII. Be careful when enabling this feature.
-1. From the Visual Studio Solution Explorer, locate and open the **Program.cs** file.
+1. From the Visual Studio Solution Explorer, open the **Program.cs** file.
2. At the top of the file, add the following `using` statement.
When investigating causes for performance degradation, it is important to includ
using Microsoft.ApplicationInsights.DependencyCollector; ```
-3. Immediately following the `builder.Services.AddApplicationInsightsTelemetry()` code, insert the following to enable SQL command text instrumentation.
+3. To enable SQL command text instrumentation, insert the following code immediately after the `builder.Services.AddApplicationInsightsTelemetry()` code.
```csharp builder.Services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) => { module.EnableSqlCommandTextInstrumentation = true; }); ```
- ![A code window displays with the preceding code highlighted.](./media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png "Enable SQL command text instrumentation")
+ :::image type="content" source="media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png" alt-text="Screenshot of a code window in Visual Studio with the preceding code highlighted." lightbox="media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png":::
## Run the Azure Cafe web application
-After the web application code is deployed, telemetry will flow to Application Insights. The Application Insights SDK automatically collects incoming web requests to your application.
+After you deploy the web application code, telemetry will flow to Application Insights. The Application Insights SDK automatically collects incoming web requests to your application.
-1. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
+1. From the Visual Studio Solution Explorer, right-click on the **AzureCafe** project and select **Publish** from the context menu.
- ![The Visual Studio Solution Explorer displays with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-core/web-project-publish-context-menu.png "Publish Web App")
+ :::image type="content" source="media/tutorial-asp-net-core/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-core/web-project-publish-context-menu.png":::
2. Select **Publish** to promote the new code to the Azure App Service.
- ![The AzureCafe publish profile displays with the Publish button highlighted.](./media/tutorial-asp-net-core/publish-profile.png "Publish profile")
+ :::image type="content" source="media/tutorial-asp-net-core/publish-profile.png" alt-text="Screenshot of the AzureCafe publish profile with the Publish button highlighted." lightbox="media/tutorial-asp-net-core/publish-profile.png":::
-3. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+ When the Azure Cafe web application is successfully published, a new browser window opens to the application.
- ![The Azure Cafe web application displays.](./media/tutorial-asp-net-core/azure-cafe-index.png "Azure Cafe web application")
+ :::image type="content" source="media/tutorial-asp-net-core/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-core/azure-cafe-index.png":::
-4. Perform various activities in the web application to generate some telemetry.
+3. To generate some telemetry, follow these steps in the web application to add a review.
- 1. Select **Details** next to a Cafe to view its menu and reviews.
+ 1. To view a cafe's menu and reviews, select **Details** next to a cafe.
- ![A portion of the Azure Cafe list displays with the Details button highlighted.](./media/tutorial-asp-net-core/cafe-details-button.png "Azure Cafe Details")
+ :::image type="content" source="media/tutorial-asp-net-core/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-core/cafe-details-button.png":::
- 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+ 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review.
- ![The Cafe details screen displays with the Add review button highlighted.](./media/tutorial-asp-net-core/cafe-add-review-button.png "Add review")
+ :::image type="content" source="media/tutorial-asp-net-core/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details screen in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-core/cafe-add-review-button.png":::
- 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. Once completed, select **Add review**.
+ 3. On the Create a review dialog, enter a name, rating, and comments, and upload a photo for the review. When finished, select **Add review**.
- ![The Create a review dialog displays.](./media/tutorial-asp-net-core/create-a-review-dialog.png "Create a review")
+ :::image type="content" source="media/tutorial-asp-net-core/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-core/create-a-review-dialog.png":::
- 4. Repeat adding reviews as desired to generate additional telemetry.
+ 4. If you need to generate more telemetry, add more reviews.
### Live metrics
-[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. It might take a few minutes for telemetry to appear in the portal and analytics, but Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry like Requests, Dependencies, and Traces.
+You can use [Live Metrics](./live-stream.md) to quickly verify if Application Insights monitoring is configured correctly. Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry such as Requests, Dependencies, and Traces. Note that it might take a few minutes for the telemetry to appear in the portal and analytics.
-### Application map
+### Viewing the application map
The sample application makes calls to multiple Azure resources, including Azure SQL, Azure Blob Storage, and the Azure Language Service (for review sentiment analysis).
-![The Azure Cafe sample application architecture displays.](./media/tutorial-asp-net-core/azure-cafe-app-insights.png "Azure Cafe sample application architecture")
-Application Insights introspects incoming telemetry data and is able to generate a visual map of detected system integrations.
+Application Insights introspects the incoming telemetry data and is able to generate a visual map of the system integrations it detects.
1. Access and log into the [Azure portal](https://portal.azure.com).
-2. Open the sample application resource group `application-insights-azure-cafe`.
+2. Open the resource group for the sample application, which is `application-insights-azure-cafe`.
3. From the list of resources, select the `azure-cafe-insights-{SUFFIX}` Application Insights resource.
-4. Select **Application map** from the left menu, beneath the **Investigate** heading. Observe the generated Application map.
+4. From the left menu, beneath the **Investigate** heading, select **Application map**. Observe the generated Application map.
- ![The Application Insights application map displays.](./media/tutorial-asp-net-core/application-map.png "Application map")
+ :::image type="content" source="media/tutorial-asp-net-core/application-map.png" alt-text="Screenshot of the Application Insights application map in the Azure portal." lightbox="media/tutorial-asp-net-core/application-map.png":::
### Viewing HTTP calls and database SQL command text 1. In the Azure portal, open the Application Insights resource.
-2. Beneath the **Investigate** header on the left menu, select **Performance**.
+2. On the left menu, beneath the **Investigate** header, select **Performance**.
-3. The **Operations** tab contains details of the HTTP calls received by the application. You can also toggle between Server and Browser (client-side) views of data.
+3. The **Operations** tab contains details of the HTTP calls received by the application. To toggle between Server and Browser (client-side) views of the data, use the Server/Browser toggle.
- ![The Performance screen of Application Insights displays with the toggle between Server and Browser highlighted along with the list of HTTP calls received by the application.](./media/tutorial-asp-net-core/server-performance.png "Server performance HTTP calls")
+ <!-- The long description for server-performance.png: Screenshot of the Application Insights Performance screen in the Azure portal. The screenshot shows the Server/Browser toggle and HTTP calls received by the application highlighted. -->
+ :::image type="content" source="media/tutorial-asp-net-core/server-performance.png" alt-text="Screenshot of the Performance screen in the Azure portal." lightbox="media/tutorial-asp-net-core/server-performance.png":::
4. Select an Operation from the table, and choose to drill into a sample of the request.
+
+ <!-- The long description for select-operation-performance.png: Screenshot of the Application Insights Performance screen in the Azure portal. The screenshot shows a POST operation and a sample operation from the suggested list selected and highlighted and the Drill into samples button is highlighted. -->
+ :::image type="content" source="media/tutorial-asp-net-core/select-operation-performance.png" alt-text="Screenshot of the Application Insights Performance screen in the Azure portal with operations and sample operations listed." lightbox="media/tutorial-asp-net-core/select-operation-performance.png":::
- ![The Performance screen displays with a POST operation selected, the Drill into samples button is highlighted and a sample is selected from the suggested list.](./media/tutorial-asp-net-core/select-operation-performance.png "Drill into an operation")
-
-5. The End-to-end transaction displays for the selected request. In this case, a review was created including an image, thus it includes calls to Azure Storage, the Language Service (for sentiment analysis), as well as database calls into SQL Azure to persist the review. In this example, the first selected Event displays information relative to the HTTP POST call.
+ The end-to-end transaction displays for the selected request. In this case, a review was created, including an image, so it includes calls to Azure Storage and the Language Service (for sentiment analysis). It also includes database calls into SQL Azure to persist the review. In this example, the first selected Event displays information related to the HTTP POST call.
- ![The End-to-end transaction displays with the HTTP Post call selected.](./media/tutorial-asp-net-core/e2e-http-call.png "HTTP POST details")
+ :::image type="content" source="media/tutorial-asp-net-core/e2e-http-call.png" alt-text="Screenshot of the end-to-end transaction in the Azure portal with the HTTP Post call selected." lightbox="media/tutorial-asp-net-core/e2e-http-call.png":::
-6. Select a SQL item to review the SQL command text issued to the database.
+5. Select a SQL item to review the SQL command text issued to the database.
- ![The End-to-end transaction displays with SQL command details.](./media/tutorial-asp-net-core/e2e-db-call.png "SQL Command text details")
+ :::image type="content" source="media/tutorial-asp-net-core/e2e-db-call.png" alt-text="Screenshot of the end-to-end transaction in the Azure portal with SQL command details." lightbox="media/tutorial-asp-net-core/e2e-db-call.png":::
-7. Optionally select Dependency (outgoing) requests to Azure Storage or the Language Service.
+6. Optionally, select the Dependency (outgoing) requests to Azure Storage or the Language Service.
-8. Return to the **Performance** screen, and select the **Dependencies** tab to investigate calls into external resources. Notice the Operations table includes calls into Sentiment Analysis, Blob Storage, and Azure SQL.
+7. Return to the **Performance** screen and select the **Dependencies** tab to investigate calls into external resources. Notice the Operations table includes calls into Sentiment Analysis, Blob Storage, and Azure SQL.
- ![The Performance screen displays with the Dependencies tab selected and the Operations table highlighted.](./media/tutorial-asp-net-core/performance-dependencies.png "Dependency Operations")
+ :::image type="content" source="media/tutorial-asp-net-core/performance-dependencies.png" alt-text="Screenshot of the Application Insights Performance screen in the Azure portal with the Dependencies tab selected and the Operations table highlighted." lightbox="media/tutorial-asp-net-core/performance-dependencies.png":::
## Application logging with Application Insights ### Logging overview
-Application Insights is one type of [logging provider](/dotnet/core/extensions/logging-providers) available to ASP.NET Core applications that becomes available to applications when the [Application Insights for ASP.NET Core](#install-the-application-insights-nuget-package) NuGet package is installed and [server-side telemetry collection enabled](#enable-application-insights-server-side-telemetry). As a reminder, the following code in **Program.cs** registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container.
+Application Insights is one type of [logging provider](/dotnet/core/extensions/logging-providers) for ASP.NET Core applications. It becomes available when the [Application Insights for ASP.NET Core](#install-the-application-insights-nuget-package) NuGet package is installed and [server-side telemetry collection is enabled](#enable-application-insights-server-side-telemetry).
+
+As a reminder, the following code in **Program.cs** registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container.
```csharp builder.Services.AddApplicationInsightsTelemetry(); ```
-With the `ApplicationInsightsLoggerProvider` registered as the logging provider, the app is ready to log to Application Insights using either constructor injection with <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601>.
+With the `ApplicationInsightsLoggerProvider` registered as the logging provider, the app is ready to log to Application Insights by using either constructor injection with <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601>.
> [!NOTE]
-> With default settings, the logging provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater.
+> By default, the logging provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater.
-Consider the following example controller that demonstrates the injection of ILogger which is resolved with the `ApplicationInsightsLoggerProvider` that is registered with the dependency injection container. Observe in the **Get** method that an Informational, Warning and Error message are recorded.
+Consider the following example controller. It demonstrates the injection of ILogger, which is resolved with the `ApplicationInsightsLoggerProvider` that is registered with the dependency injection container. Observe in the **Get** method that Informational, Warning, and Error messages are recorded.
> [!NOTE] > By default, the Information level trace will not be recorded. Only the Warning and above levels are captured.
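The controller code itself isn't included in this change excerpt. As a rough sketch only (the exact implementation in the sample repository may differ), a controller matching the description above could look like the following:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
    // Resolved through the ApplicationInsightsLoggerProvider that is registered
    // with the dependency injection container.
    private readonly ILogger<ValuesController> _logger;

    public ValuesController(ILogger<ValuesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        // With default settings, only Warning and above are sent to Application Insights.
        _logger.LogInformation("An example of an Informational trace.");
        _logger.LogWarning("An example of a Warning trace.");
        _logger.LogError("An example of an Error level message.");

        return new[] { "value1", "value2" };
    }
}
```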
The ValuesController above is deployed with the sample application and is locate
1. Using an internet browser, open the sample application. In the address bar, append `/api/Values` and press <kbd>Enter</kbd>.
- ![A browser window displays with /api/Values appended to the URL in the address bar.](media/tutorial-asp-net-core/values-api-url.png "Values API URL")
+ :::image type="content" source="media/tutorial-asp-net-core/values-api-url.png" alt-text="Screenshot of a browser window with /api/Values appended to the URL in the address bar." lightbox="media/tutorial-asp-net-core/values-api-url.png":::
+
+2. In the [Azure portal](https://portal.azure.com), wait a few moments and then select the **azure-cafe-insights-{SUFFIX}** Application Insights resource.
-2. Wait a few moments, then return to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+ :::image type="content" source="media/tutorial-asp-net-core/application-insights-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-core/application-insights-resource-group.png":::
- ![A resource group displays with the Application Insights resource highlighted.](./media/tutorial-asp-net-core/application-insights-resource-group.png "Resource Group")
+3. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**.
+
+4. In the **Tables** pane, under the **Application Insights** tree, double-click on the **traces** table.
-3. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **traces** table, located under the **Application Insights** tree. Modify the query to retrieve traces for the **Values** controller as follows, then select **Run** to filter the results.
+5. Modify the query to retrieve traces for the **Values** controller as follows, then select **Run** to filter the results.
```kql traces | where operation_Name == "GET Values/Get" ```
-4. Observe the results display the logging messages present in the controller. A log severity of 2 indicates a warning level, and a log severity of 3 indicates an Error level.
+ The results display the logging messages present in the controller. A log severity of 2 indicates a warning level, and a log severity of 3 indicates an Error level.
-5. Alternatively, the query can also be written to retrieve results based on the category of the log. By default, the category is the fully qualified name of the class where the ILogger is injected, in this case **ValuesController** (if there was a namespace associated with the class the name will be prefixed with the namespace). Re-write and run the following query to retrieve results based on category.
+6. Alternatively, you can write the query to retrieve results based on the category of the log. By default, the category is the fully qualified name of the class where the ILogger is injected. In this case, the category name is **ValuesController** (if there is a namespace associated with the class, the name will be prefixed with the namespace). Rewrite and run the following query to retrieve results based on category.
```kql traces
The ValuesController above is deployed with the sample application and is locate
## Control the level of logs sent to Application Insights
-`ILogger` implementations have a built-in mechanism to apply [log filtering](/dotnet/core/extensions/logging#how-filtering-rules-are-applied). This filtering lets you control the logs that are sent to each registered provider, including the Application Insights provider. You can use the filtering either in configuration (using an *appsettings.json* file) or in code. For more information about log levels and guidance on appropriate use, see the [Log Level](/aspnet/core/fundamentals/logging#log-level) documentation.
+`ILogger` implementations have a built-in mechanism to apply [log filtering](/dotnet/core/extensions/logging#how-filtering-rules-are-applied). This filtering lets you control the logs that are sent to each registered provider, including the Application Insights provider. You can use the filtering either in configuration (using an *appsettings.json* file) or in code. For more information about log levels and guidance on how to use them appropriately, see the [Log Level](/aspnet/core/fundamentals/logging#log-level) documentation.
The following examples show how to apply filter rules to the `ApplicationInsightsLoggerProvider` to control the level of logs sent to Application Insights. ### Create filter rules with configuration
-The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in configuration. The following section of an *appsettings.json* file sets the default log level for all providers to <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType>. The configuration for the ApplicationInsights provider specifically for categories that start with "ValuesController" override this default value with <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher.
+The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in configuration. The following section of an *appsettings.json* file sets the default log level for all providers to <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType>. The configuration for the ApplicationInsights provider, specifically for categories that start with "ValuesController," overrides this default value with <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher.
```json {
The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in
} ```
-Deploying the sample application with the preceding code in *appsettings.json* will yield only the error trace being sent to Application Insights when interacting with the **ValuesController**. This is because the **LogLevel** for the **ValuesController** category is set to **Error**, therefore the **Warning** trace is suppressed.
+Deploying the sample application with the preceding code in *appsettings.json* will yield only the error trace being sent to Application Insights when interacting with the **ValuesController**. This is because the **LogLevel** for the **ValuesController** category is set to **Error**. Therefore, the **Warning** trace is suppressed.
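The equivalent code-based filter isn't shown in this excerpt. As a minimal sketch (assuming the same category and level as the configuration above), the same rule could be expressed in **Program.cs** like this:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApplicationInsightsTelemetry();

// Default minimum level for all registered providers.
builder.Logging.SetMinimumLevel(LogLevel.Warning);

// For the Application Insights provider only, send Error and higher for
// categories that start with "ValuesController".
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("ValuesController", LogLevel.Error);
```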
## Turn off logging to Application Insights
-To disable logging using configuration, set all LogLevel values to "None".
+To disable logging by using configuration, set all LogLevel values to "None".
```json {
To disable logging using configuration, set all LogLevel values to "None".
} ```
-Similarly, within code, set the default level for the `ApplicationInsightsLoggerProvider` and any subsequent log levels to **None**.
+Similarly, within the code, set the default level for the `ApplicationInsightsLoggerProvider` and any subsequent log levels to **None**.
```csharp var builder = WebApplication.CreateBuilder(args);
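The rest of that snippet is truncated in this excerpt. A minimal sketch of the code-based approach (category names assumed for illustration) might look like the following:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApplicationInsightsTelemetry();

// Send nothing to Application Insights: the empty string is the default category,
// and any more specific categories can be set to LogLevel.None as well.
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.None);
builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("ValuesController", LogLevel.None);
```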
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azu
An autoscale setting has a maximum, minimum, and default value of instances. * An autoscale job always reads the associated metric to scale by, checking if it has crossed the configured threshold for scale-out or scale-in. You can view a list of metrics that autoscale can scale by at [Azure Monitor autoscaling common metrics](autoscale-common-metrics.md). * All thresholds are calculated at an instance level. For example, "scale out by one instance when average CPU > 80% when instance count is 2", means scale-out when the average CPU across all instances is greater than 80%.
-* All autoscale failures are logged to the Activity Log. You can then configure an [activity log alert](../alerts/activity-log-alerts.md) so that you can be notified via email, SMS, or webhooks whenever there is an autoscale failure.
-* Similarly, all successful scale actions are posted to the Activity Log. You can then configure an activity log alert so that you can be notified via email, SMS, or webhooks whenever there is a successful autoscale action. You can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting.
+* All autoscale failures are logged to the Activity Log. You can then configure an [activity log alert](../alerts/activity-log-alerts.md) so that you can be notified via email, SMS, or webhooks whenever there's an autoscale failure.
+* Similarly, all successful scale actions are posted to the Activity Log. You can then configure an activity log alert so that you can be notified via email, SMS, or webhooks whenever there's a successful autoscale action. You can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting.
## Autoscale best practices Use the following best practices as you use autoscale.
If you have a setting that has minimum=2, maximum=2 and the current instance cou
If you manually update the instance count to a value above or below the maximum, the autoscale engine automatically scales back to the minimum (if below) or the maximum (if above). For example, you set the range between 3 and 6. If you have one running instance, the autoscale engine scales to three instances on its next run. Likewise, if you manually set the scale to eight instances, autoscale will scale it back to six instances on its next run. Manual scaling is temporary unless you reset the autoscale rules as well. ### Always use a scale-out and scale-in rule combination that performs an increase and decrease
-If you use only one part of the combination, autoscale will only take action in a single direction (scale out, or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. This is not optimal, ideally you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale down, so you can realize cost savings.
+If you use only one part of the combination, autoscale will only take action in a single direction (scale out or in) until it reaches the maximum or minimum instance count, as defined in the profile. This isn't optimal. Ideally, you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale down, so you can realize cost savings.
-When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, it's possible that the scale-in and scale-out conditions could be met at the same time resulting in some level of flapping. For example, the following rule combination is *not* recommended because there is no scale-in rule for memory usage:
+When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, it's possible that the scale-in and scale-out conditions could be met at the same time, resulting in some level of flapping. For example, the following rule combination *isn't* recommended because there's no scale-in rule for memory usage:
* If CPU > 90%, scale-out by 1 * If Memory > 90%, scale-out by 1
In this example, you can have a situation in which the memory usage is over 90%
### Choose the appropriate statistic for your diagnostics metric For diagnostics metrics, you can choose among *Average*, *Minimum*, *Maximum* and *Total* as a metric to scale by. The most common statistic is *Average*. -- ### Considerations for scaling threshold values for special metrics For special metrics such as Storage or Service Bus Queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric.
Let's illustrate it with an example to ensure you understand the behavior better
Consider the following sequence: 1. There are two storage queue instances.
-2. Messages keep coming and when you review the storage queue, the total count reads 50. You might assume that autoscale should start a scale-out action. However, note that it is still 50/2 = 25 messages per instance. So, scale-out does not occur. For the first scale-out to happen, the total message count in the storage queue should be 100.
+2. Messages keep coming and when you review the storage queue, the total count reads 50. You might assume that autoscale should start a scale-out action. However, note that it's still 50/2 = 25 messages per instance. So, scale-out doesn't occur. For the first scale-out to happen, the total message count in the storage queue should be 100.
3. Next, assume that the total message count reaches 100.
-4. A third storage queue instance is added due to a scale-out action. The next scale-out action will not happen until the total message count in the queue reaches 150 because 150/3 = 50.
+4. A third storage queue instance is added due to a scale-out action. The next scale-out action won't happen until the total message count in the queue reaches 150 because 150/3 = 50.
5. Now the number of messages in the queue gets smaller. With three instances, the first scale-in action happens when the total messages in all queues add up to 30 because 30/3 = 10 messages per instance, which is the scale-in threshold.
-### Considerations for scaling when multiple profiles are configured in an autoscale setting
-In an autoscale setting, you can choose a default profile, which is always applied without any dependency on schedule or time, or you can choose a recurring profile or a profile for a fixed period with a date and time range.
-
-When autoscale service processes them, it always checks in the following order:
-
-1. Fixed Date profile
-2. Recurring profile
-3. Default ("Always") profile
-
-If a profile condition is met, autoscale does not check the next profile condition below it. Autoscale only processes one profile at a time. This means if you want to also include a processing condition from a lower-tier profile, you must include those rules as well in the current profile.
-
-Let's review using an example:
-
-The image below shows an autoscale setting with a default profile of minimum instances = 2 and maximum instances = 10. In this example, rules are configured to scale out when the message count in the queue is greater than 10 and scale-in when the message count in the queue is less than three. So now the resource can scale between two and ten instances.
-
-In addition, there is a recurring profile set for Monday. It is set for minimum instances = 3 and maximum instances = 10. This means on Monday, the first-time autoscale checks for this condition, if the instance count is two, it scales to the new minimum of three. As long as autoscale continues to find this profile condition matched (Monday), it only processes the CPU-based scale-out/in rules configured for this profile. At this time, it does not check for the queue length. However, if you also want the queue length condition to be checked, you should include those rules from the default profile as well in your Monday profile.
-
-Similarly, when autoscale switches back to the default profile, it first checks if the minimum and maximum conditions are met. If the number of instances at the time is 12, it scales in to 10, the maximum allowed for the default profile.
-
-![autoscale settings](./media/autoscale-best-practices/insights-autoscale-best-practices-2.png)
- ### Considerations for scaling when multiple rules are configured in a profile+ There are cases where you may have to set multiple rules in a profile. The following autoscale rules are used by the autoscale engine when multiple rules are set. On *scale-out*, autoscale runs if any rule is met.
Then the following occurs:
On the other hand, if CPU is 25% and memory is 51%, autoscale does **not** scale in. In order to scale in, CPU must be 29% and Memory 49%. ### Always select a safe default instance count
-The default instance count is important because autoscale scales your service to that count when metrics are not available. Therefore, select a default instance count that's safe for your workloads.
+
+The default instance count is important because autoscale scales your service to that count when metrics aren't available. Therefore, select a default instance count that's safe for your workloads.
### Configure autoscale notifications+ Autoscale will post to the Activity Log if any of the following conditions occur: * Autoscale issues a scale operation. * Autoscale service successfully completes a scale action. * Autoscale service fails to take a scale action.
-* Metrics are not available for autoscale service to make a scale decision.
+* Metrics aren't available for autoscale service to make a scale decision.
* Metrics are available (recovery) again to make a scale decision.
-* Autoscale detects flapping and aborts the scale attempt. You will see a log type of `Flapping` in this situation. If you see this, consider whether your thresholds are too narrow.
-* Autoscale detects flapping but is still able to successfully scale. You will see a log type of `FlappingOccurred` in this situation. If you see this, the autoscale engine has attempted to scale (e.g. from 4 instances to 2), but has determined that this would cause flapping. Instead, the autoscale engine has scaled to a different number of instances (e.g. using 3 instances instead of 2), which no longer causes flapping, so it has scaled to this number of instances.
+* Autoscale detects flapping and aborts the scale attempt. You'll see a log type of `Flapping` in this situation. If you see this, consider whether your thresholds are too narrow.
+* Autoscale detects flapping but is still able to successfully scale. You'll see a log type of `FlappingOccurred` in this situation. If you see this, the autoscale engine has attempted to scale (for example, from 4 instances to 2), but has determined that this would cause flapping. Instead, the autoscale engine has scaled to a different number of instances (for example, using 3 instances instead of 2), which no longer causes flapping, so it has scaled to this number of instances.
You can also use an Activity Log alert to monitor the health of the autoscale engine. Here are examples to [create an Activity Log Alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) or to [create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert). In addition to using activity log alerts, you can also configure email or webhook notifications to get notified for scale actions via the notifications tab on the autoscale setting. ## Send data securely using TLS 1.2+ To ensure the security of data in transit to Azure Monitor, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
-The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a deadline of [June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents cannot communicate over at least TLS 1.2 you would not be able to send data to Azure Monitor Logs.
+The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a deadline of [June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents can't communicate over at least TLS 1.2, you won't be able to send data to Azure Monitor Logs.
We recommend you do NOT explicitly set your agent to only use TLS 1.2 unless absolutely necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise you may miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://learn.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The following query can be used to make a recommendation for the optimal pricing
```kusto // Set these parameters before running query // For Pay-As-You-Go (per-GB) and commitment tier pricing details, see https://azure.microsoft.com/pricing/details/monitor/.
-// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage.
+// You can see your per-node costs in your Azure usage and charge data. For more information, see https://learn.microsoft.com/azure/cost-management-billing/understand/download-azure-daily-usage.
let PerNodePrice = 15.; // Monthly price per monitored node let PerNodeOveragePrice = 2.30; // Price per GB for data overage in the Per Node pricing tier let PerGBPrice = 2.30; // Enter the Pay-as-you-go price for your workspace's region (from https://azure.microsoft.com/pricing/details/monitor/)
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
To learn more and how to configure incremental refresh, see [Power BI Datasets a
After your data is sent to Power BI, you can continue to use Power BI to create reports and dashboards.
-For more information, see [this guide on how to create your first Power BI model and report](/learn/modules/build-your-first-power-bi-report/).
+For more information, see [this guide on how to create your first Power BI model and report](/training/modules/build-your-first-power-bi-report/).
## Excel integration
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
| Article | Description | ||| |[Log Analytics agent overview](agents/log-analytics-agent.md)|Restructured the Agents section and rewrote the Agents Overview article to reflect that Azure Monitor Agent is the primary agent for collecting monitoring data.|
-|[Dependency analysis in Azure Migrate Discovery and assessment - Azure Migrate](https://docs.microsoft.com/azure/migrate/concepts-dependency-visualization)|Revamped the guidance for migrating from Log Analytics Agent to Azure Monitor Agent.|
+|[Dependency analysis in Azure Migrate Discovery and assessment - Azure Migrate](https://learn.microsoft.com/azure/migrate/concepts-dependency-visualization)|Revamped the guidance for migrating from Log Analytics Agent to Azure Monitor Agent.|
### Alerts
All references to unsupported versions of .NET and .NET CORE have been scrubbed
| Article | Description | |:|:| | [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) | New article describing process to replace VM guest health with alert rules |
-| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement |
+| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement |
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md
To troubleshoot this error:
1. Check the log file to see if the service principal has expired. The following log file example shows that the client secret keys are expired. ```output
- [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://docs.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials
+ [19/Nov/2020:18:41:10 +13:00] DEBUG: [PID:0020257:StorageANF:659] [1] Innerexception: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://learn.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials
``` > [!TIP]
azure-portal Azure Portal Quickstart Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md
You can also select **Browse our full Azure catalog** to see all Azure learning
## Next steps * Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/).
-* Unlock your cloud skills with more [Learn modules]](/learn/azure/).
+* Unlock your cloud skills with more [Learn modules](/training/azure/).
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
datalake.azure.net (Azure Data Lake Service)
dev.azure.com (Azure DevOps) dev.azuresynapse.net (Azure Synapse) digitaltwins.azure.net (Azure Digital Twins)
-docs.microsoft.com (Azure documentation)
+learn.microsoft.com (Azure documentation)
elm.iga.azure.com (Azure AD) eventhubs.azure.net (Azure Event Hubs) functions.azure.com (Azure Functions)
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Last updated 08/03/2022
This quickstart shows you how to integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD).
-It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/).
+It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/training/paths/bicep-azure-pipelines/).
## Prerequisites
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md
This article recommends practices to follow when developing your Bicep files. Th
### Training resources
-If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/).
+If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/training/modules/structure-bicep-code-collaboration/).
## Parameters
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md
Last updated 04/12/2022
# Array functions for Bicep
-This article describes the Bicep functions for working with arrays.
+This article describes the Bicep functions for working with arrays. For the lambda functions that work with arrays, see [Lambda functions for Bicep](./bicep-functions-lambda.md).
## array
The output from the preceding example with the default values is:
| arrayOutput | String | one | | stringOutput | String | O |
+## flatten
+
+`flatten(arrayToFlatten)`
+
+Takes an array of arrays, and returns an array of sub-array elements, in the original order. Sub-arrays are only flattened once, not recursively.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| arrayToFlatten |Yes |array |The array of sub-arrays to flatten.|
+
+### Return value
+
+Array
+
+### Example
+
+The following example shows how to use the flatten function.
+
+```bicep
+param arrayToTest array = [
+ ['one', 'two']
+ ['three']
+ ['four', 'five']
+]
+output arrayOutput array = flatten(arrayToTest)
+```
+
+The output from the preceding example with the default values is:
+
+| Name | Type | Value |
+| - | - | -- |
+| arrayOutput | array | ['one', 'two', 'three', 'four', 'five'] |
+ ## indexOf `indexOf(arrayToSearch, itemToFind)`
azure-resource-manager Bicep Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md
+
+ Title: Bicep functions - lambda
+description: Describes the lambda functions to use in a Bicep file.
+++ Last updated : 09/20/2022++
+# Lambda functions for Bicep
+
+This article describes the lambda functions to use in Bicep. Lambda expressions (or lambda functions) are essentially blocks of code that can be passed as an argument. In Bicep, a lambda expression has this format:
+
+```bicep
+<lambda variable> => <expression>
+```
+
+> [!NOTE]
+> The lambda functions are only supported in Bicep CLI version 0.10.61 or newer.
+
+## Limitations
+
+Bicep lambda functions have these limitations:
+
+- Lambda expressions can only be specified directly as function arguments in these functions: [`filter()`](#filter), [`map()`](#map), [`reduce()`](#reduce), and [`sort()`](#sort).
+- Using lambda variables (the temporary variables used in the lambda expressions) inside resource or module array access isn't currently supported.
+- Using lambda variables inside the [`listKeys`](./bicep-functions-resource.md#list) function isn't currently supported.
+- Using lambda variables inside the [reference](./bicep-functions-resource.md#reference) function isn't currently supported.
+
+## filter
+
+`filter(inputArray, lambda expression)`
+
+Filters an array with a custom filtering function.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| inputArray |Yes |array |The array to filter.|
+| lambda expression |Yes |expression |The lambda expression applied to each input array element. If it evaluates to false, the item is filtered out of the output array.|
+
+### Return value
+
+An array.
+
+### Examples
+
+The following examples show how to use the filter function.
+
+```bicep
+var dogs = [
+ {
+ name: 'Evie'
+ age: 5
+ interests: ['Ball', 'Frisbee']
+ }
+ {
+ name: 'Casper'
+ age: 3
+ interests: ['Other dogs']
+ }
+ {
+ name: 'Indy'
+ age: 2
+ interests: ['Butter']
+ }
+ {
+ name: 'Kira'
+ age: 8
+ interests: ['Rubs']
+ }
+]
+
+output oldDogs array = filter(dogs, dog => dog.age >=5)
+```
+
+The output from the preceding example shows the dogs that are five or older:
+
+| Name | Type | Value |
+| - | - | -- |
+| oldDogs | Array | [{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] |
+
+```bicep
+var itemForLoop = [for item in range(0, 10): item]
+
+output filteredLoop array = filter(itemForLoop, i => i > 5)
+output isEven array = filter(range(0, 10), i => 0 == i % 2)
+```
+
+The output from the preceding example:
+
+| Name | Type | Value |
+| - | - | -- |
+| filteredLoop | Array | [6, 7, 8, 9] |
+| isEven | Array | [0, 2, 4, 6, 8] |
+
+**filteredLoop** shows the numbers in the array that are greater than 5, and **isEven** shows the even numbers in the array.
+
+## map
+
+`map(inputArray, lambda expression)`
+
+Applies a custom mapping function to each element of an array.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| inputArray |Yes |array |The array to map.|
+| lambda expression |Yes |expression |The lambda expression applied to each input array element, in order to generate the output array.|
+
+### Return value
+
+An array.
+
+### Example
+
+The following example shows how to use the map function.
+
+```bicep
+var dogs = [
+ {
+ name: 'Evie'
+ age: 5
+ interests: ['Ball', 'Frisbee']
+ }
+ {
+ name: 'Casper'
+ age: 3
+ interests: ['Other dogs']
+ }
+ {
+ name: 'Indy'
+ age: 2
+ interests: ['Butter']
+ }
+ {
+ name: 'Kira'
+ age: 8
+ interests: ['Rubs']
+ }
+]
+
+output dogNames array = map(dogs, dog => dog.name)
+output sayHi array = map(dogs, dog => 'Hello ${dog.name}!')
+output mapObject array = map(range(0, length(dogs)), i => {
+ i: i
+ dog: dogs[i].name
+ greeting: 'Ahoy, ${dogs[i].name}!'
+})
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| dogNames | Array | ["Evie","Casper","Indy","Kira"] |
+| sayHi | Array | ["Hello Evie!","Hello Casper!","Hello Indy!","Hello Kira!"] |
+| mapObject | Array | [{"i":0,"dog":"Evie","greeting":"Ahoy, Evie!"},{"i":1,"dog":"Casper","greeting":"Ahoy, Casper!"},{"i":2,"dog":"Indy","greeting":"Ahoy, Indy!"},{"i":3,"dog":"Kira","greeting":"Ahoy, Kira!"}] |
+
+**dogNames** shows the dog names from the array of objects; **sayHi** concatenates "Hello" and each of the dog names; and **mapObject** creates another array of objects.
+
+## reduce
+
+`reduce(inputArray, initialValue, lambda expression)`
+
+Reduces an array with a custom reduce function.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| inputArray |Yes |array |The array to reduce.|
+| initialValue |No |any |Initial value.|
+| lambda expression |Yes |expression |The lambda expression used to aggregate the current value and the next value.|
+
+### Return value
+
+Any.
+
+### Example
+
+The following examples show how to use the reduce function.
+
+```bicep
+var dogs = [
+ {
+ name: 'Evie'
+ age: 5
+ interests: ['Ball', 'Frisbee']
+ }
+ {
+ name: 'Casper'
+ age: 3
+ interests: ['Other dogs']
+ }
+ {
+ name: 'Indy'
+ age: 2
+ interests: ['Butter']
+ }
+ {
+ name: 'Kira'
+ age: 8
+ interests: ['Rubs']
+ }
+]
+var ages = map(dogs, dog => dog.age)
+output totalAge int = reduce(ages, 0, (cur, prev) => cur + prev)
+output totalAgeAdd1 int = reduce(ages, 1, (cur, prev) => cur + prev)
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| totalAge | int | 18 |
+| totalAgeAdd1 | int | 19 |
+
+**totalAge** sums the ages of the dogs; **totalAgeAdd1** has an initial value of 1, and adds all the dog ages to the initial value.
+
+```bicep
+output reduceObjectUnion object = reduce([
+ { foo: 123 }
+ { bar: 456 }
+ { baz: 789 }
+], {}, (cur, next) => union(cur, next))
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| reduceObjectUnion | object | {"foo":123,"bar":456,"baz":789} |
+
+The [union](./bicep-functions-object.md#union) function returns a single object with all elements from the parameters. The function call merges the key-value pairs of the objects into a new object.
+
+## sort
+
+`sort(inputArray, lambda expression)`
+
+Sorts an array with a custom sort function.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| inputArray |Yes |array |The array to sort.|
+| lambda expression |Yes |expression |The lambda expression used to compare two array elements for ordering. If true, the second element will be ordered after the first in the output array.|
+
+### Return value
+
+An array.
+
+### Example
+
+The following example shows how to use the sort function.
+
+```bicep
+var dogs = [
+ {
+ name: 'Evie'
+ age: 5
+ interests: ['Ball', 'Frisbee']
+ }
+ {
+ name: 'Casper'
+ age: 3
+ interests: ['Other dogs']
+ }
+ {
+ name: 'Indy'
+ age: 2
+ interests: ['Butter']
+ }
+ {
+ name: 'Kira'
+ age: 8
+ interests: ['Rubs']
+ }
+]
+
+output dogsByAge array = sort(dogs, (a, b) => a.age < b.age)
+```
+
+The output from the preceding example sorts the dog objects from the youngest to the oldest:
+
+| Name | Type | Value |
+| - | - | -- |
+| dogsByAge | Array | [{"name":"Indy","age":2,"interests":["Butter"]},{"name":"Casper","age":3,"interests":["Other dogs"]},{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] |
+
+## Next steps
+
+- See [Bicep functions - arrays](./bicep-functions-array.md) for more array-related Bicep functions.
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md
The following functions are available for working with arrays. All of these func
* [empty](./bicep-functions-array.md#empty) * [indexOf](./bicep-functions-array.md#indexof) * [first](./bicep-functions-array.md#first)
+* [flatten](./bicep-functions-array.md#flatten)
* [intersection](./bicep-functions-array.md#intersection) * [last](./bicep-functions-array.md#last) * [lastIndexOf](./bicep-functions-array.md#lastindexof)
The following functions are available for loading the content from external file
* [loadJsonContent](bicep-functions-files.md#loadjsoncontent) * [loadTextContent](bicep-functions-files.md#loadtextcontent)
+## Lambda functions
+
+The following functions are available for working with lambda expressions. All of these functions are in the `sys` namespace.
+
+* [filter](bicep-functions-lambda.md#filter)
+* [map](bicep-functions-lambda.md#map)
+* [reduce](bicep-functions-lambda.md#reduce)
+* [sort](bicep-functions-lambda.md#sort)
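+
+These functions can be composed. The following one-line sketch is illustrative only (it isn't taken from the linked articles); it doubles the even numbers produced by `range`:
+
+```bicep
+// Hypothetical example: keep even numbers from 0-9, then double each one.
+output evenDoubled array = map(filter(range(0, 10), i => i % 2 == 0), i => i * 2)
+```
+
+Here `filter` keeps the even numbers and `map` doubles each of them, so **evenDoubled** evaluates to `[0, 4, 8, 12, 16]`.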
++ ## Logical functions The following function is available for working with logical conditions. This function is in the `sys` namespace.
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/child-resource-name-type.md
This article show different ways you can declare a child resource.
### Training resources
-If you would rather learn about about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates).
+If you would rather learn about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/training/modules/child-extension-bicep-templates).
## Name and type pattern
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
Sometimes you need to optionally deploy a resource or module in Bicep. Use the `
### Training resources
-If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
+If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/).
## Deploy condition
output mgmtStatus string = ((!empty(logAnalytics)) ? 'Enabled monitoring for VM!
## Next steps
-* Review the Learn module [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
+* Review the Learn module [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/).
* For recommendations about creating Bicep files, see [Best practices for Bicep](best-practices.md). * To create multiple instances of a resource, see [Iterative loops in Bicep](loops.md).
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md
In this quickstart, you use the [GitHub Actions for Azure Resource Manager deployment](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying a Bicep file to Azure.
-It provides a short introduction to GitHub actions and Bicep files. If you want more detailed steps on setting up the GitHub actions and project, see [Learning path: Deploy Azure resources by using Bicep and GitHub Actions](/learn/paths/bicep-github-actions).
+It provides a short introduction to GitHub actions and Bicep files. If you want more detailed steps on setting up the GitHub actions and project, see [Deploy Azure resources by using Bicep and GitHub Actions](/training/paths/bicep-github-actions).
## Prerequisites
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md
As your organization matures, you can deploy a Bicep file to create resources at
### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/training/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md
To simplify the management of resources, you can deploy resources at the level o
### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/training/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-tenant.md
As your organization matures, you may need to define and assign [policies](../..
### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/training/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-what-if.md
You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API
### Training resources
-If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/).
+If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/training/modules/arm-template-whatif/).
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
You can use the what-if operation through the Azure SDKs.
* To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). * If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).
-* For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+* For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/).
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
The deployment script resource is only available in the regions where Azure Cont
### Training resources
-If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts).
+If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
After the script is tested successfully, you can use it as a deployment script i
In this article, you learned how to use deployment scripts. To walk through a Learn module: > [!div class="nextstepaction"]
-> [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts)
+> [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts)
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md
New-AzResourceGroupDeployment `
- For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/learn-bicep.md
Ready to see how Bicep can help simplify and accelerate your deployments to Azur
If you're new to Bicep, a great way to get started is by reviewing the following Learn module. You'll learn how Bicep makes it easier to define how your Azure resources should be configured and deployed in a way that's automated and repeatable. You'll deploy several Azure resources so you can see for yourself how Bicep works. We provide free access to Azure resources to help you practice the concepts.
-[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module." role="presentation"></img>](/learn/modules/build-first-bicep-template/)
+[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module." role="presentation"></img>](/training/modules/build-first-bicep-template/)
-[Build your first Bicep template](/learn/modules/build-first-bicep-template/)
+[Build your first Bicep template](/training/modules/build-first-bicep-template/)
## Learn more
To learn even more about Bicep's features, take these learning paths:
:::row::: :::column:::
- [<img src="media/learn-bicep/fundamentals-bicep.svg" width="101" height="120" alt="The trophy for the Fundamentals of Bicep learning path." role="presentation"></img>](/learn/paths/fundamentals-bicep/)
+ [<img src="media/learn-bicep/fundamentals-bicep.svg" width="101" height="120" alt="The trophy for the Fundamentals of Bicep learning path." role="presentation"></img>](/training/paths/fundamentals-bicep/)
- [Part 1: Fundamentals of Bicep](/learn/paths/fundamentals-bicep/)
+ [Part 1: Fundamentals of Bicep](/training/paths/fundamentals-bicep/)
:::column-end::: :::column:::
- [<img src="media/learn-bicep/intermediate-bicep.svg" width="101" height="120" alt="The trophy for the Intermediate Bicep learning path." role="presentation"></img>](/learn/paths/intermediate-bicep/)
+ [<img src="media/learn-bicep/intermediate-bicep.svg" width="101" height="120" alt="The trophy for the Intermediate Bicep learning path." role="presentation"></img>](/training/paths/intermediate-bicep/)
- [Part 2: Intermediate Bicep](/learn/paths/intermediate-bicep/)
+ [Part 2: Intermediate Bicep](/training/paths/intermediate-bicep/)
:::column-end::: :::column:::
- [<img src="media/learn-bicep/advanced-bicep.svg" width="101" height="120" alt="The trophy for the Advanced Bicep learning path." role="presentation"></img>](/learn/paths/advanced-bicep/)
+ [<img src="media/learn-bicep/advanced-bicep.svg" width="101" height="120" alt="The trophy for the Advanced Bicep learning path." role="presentation"></img>](/training/paths/advanced-bicep/)
- [Part 3: Advanced Bicep](/learn/paths/advanced-bicep/)
+ [Part 3: Advanced Bicep](/training/paths/advanced-bicep/)
:::column-end::: :::row-end:::
After that, you might be interested in adding your Bicep code to a deployment pi
:::row::: :::column:::
- [<img src="media/learn-bicep/bicep-azure-pipelines.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and Azure Pipelines learning path." role="presentation"></img>](/learn/paths/bicep-azure-pipelines/)
+ [<img src="media/learn-bicep/bicep-azure-pipelines.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and Azure Pipelines learning path." role="presentation"></img>](/training/paths/bicep-azure-pipelines/)
- [Option 1: Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/)
+ [Option 1: Deploy Azure resources by using Bicep and Azure Pipelines](/training/paths/bicep-azure-pipelines/)
:::column-end::: :::column:::
- [<img src="media/learn-bicep/bicep-github-actions.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and GitHub Actions learning path." role="presentation"></img>](/learn/paths/bicep-github-actions/)
+ [<img src="media/learn-bicep/bicep-github-actions.svg" width="101" height="120" alt="The trophy for the Deploy Azure resources using Bicep and GitHub Actions learning path." role="presentation"></img>](/training/paths/bicep-github-actions/)
- [Option 2: Deploy Azure resources by using Bicep and GitHub Actions](/learn/paths/bicep-github-actions/)
+ [Option 2: Deploy Azure resources by using Bicep and GitHub Actions](/training/paths/bicep-github-actions/)
:::column-end::: :::row-end:::
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
This article shows you how to use the `for` syntax to iterate over items in a co
### Training resources
-If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
+If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/training/modules/build-flexible-bicep-templates-conditions-loops/).
## Loop syntax
azure-resource-manager Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md
The first step in the process is to capture an initial representation of your Az
:::image type="content" source="./media/migrate/migrate-bicep.png" alt-text="Diagram of the recommended workflow for migrating Azure resources to Bicep." border="false":::
-In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/learn/modules/migrate-azure-resources-bicep/).
+In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/training/modules/migrate-azure-resources-bicep/).
## Phase 1: Convert
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
Bicep modules are converted into a single Azure Resource Manager template with [
### Training resources
-If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/learn/modules/create-composable-bicep-files-using-modules/).
+If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/training/modules/create-composable-bicep-files-using-modules/).
## Definition syntax
When used as module, you can get that output value.
## Next steps -- For a tutorial, see [Deploy Azure resources by using Bicep templates](/learn/modules/deploy-azure-resources-by-using-bicep-templates/).
+- For a tutorial, see [Deploy Azure resources by using Bicep templates](/training/modules/deploy-azure-resources-by-using-bicep-templates/).
- To pass a sensitive value to a module, use the [getSecret](bicep-functions-resource.md#getsecret) function.
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
For parameter best practices, see [Parameters](./best-practices.md#parameters).
### Training resources
-If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters).
+If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/training/modules/build-reusable-bicep-templates-parameters).
## Declaration
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
To work with module registries, you must have [Bicep CLI](./install.md) version
### Training resources
-If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/learn/modules/share-bicep-modules-using-private-registries).
+If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/training/modules/share-bicep-modules-using-private-registries).
## Configure private registry
azure-resource-manager Quickstart Create Bicep Use Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio.md
Remove-AzResourceGroup -Name exampleRG
## Next steps > [!div class="nextstepaction"]
-> [Bicep in Microsoft Learn](learn-bicep.md)
+> [Learn modules for Bicep](learn-bicep.md)
azure-resource-manager Scope Extension Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scope-extension-resources.md
This article shows how to set the scope for an extension resource type when depl
### Training resources
-If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates).
+If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/training/modules/child-extension-bicep-templates).
## Apply at deployment scope
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
When designing your deployment, always consider the lifecycle of the resources a
### Training resources
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/training/modules/arm-template-specs).
## Required permissions
After creating a template spec, you can link to that template spec in a Bicep mo
## Next steps
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/training/modules/arm-template-specs).
azure-resource-manager Create Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider.md
Title: Create resource provider
-description: Describes how to create a resource provider and deploy its custom resource types.
-
+ Title: Create a custom resource provider
+description: Describes how to create a custom resource provider and deploy custom resources.
Previously updated : 06/24/2020- Last updated : 09/20/2022++
-# Quickstart: Create a custom provider and deploy custom resources
+# Quickstart: Create a custom resource provider and deploy custom resources
-In this quickstart, you create your own resource provider and deploy custom resource types for that resource provider. For more information about custom providers, see [Azure Custom Providers Preview overview](overview.md).
+In this quickstart, you create a custom resource provider and deploy custom resources for that resource provider. For more information about custom providers, see [Azure Custom Resource Providers Overview](overview.md).
## Prerequisites
Azure CLI examples use `az rest` for `REST` requests. For more information, see
- The PowerShell commands are run locally using PowerShell 7 or later and the Azure PowerShell modules. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps). - If you don't already have a tool for `REST` operations, install the [ARMClient](https://github.com/projectkudu/ARMClient). It's an open-source command-line tool that simplifies invoking the Azure Resource Manager API.-- After the **ARMClient** is installed you can display usage information from a PowerShell command prompt by typing: `armclient.exe`. Or, go to the [ARMClient wiki](https://github.com/projectkudu/ARMClient/wiki).
+- After the **ARMClient** is installed, you can display usage information from a PowerShell command prompt by typing: `armclient.exe`. Or, go to the [ARMClient wiki](https://github.com/projectkudu/ARMClient/wiki).
## Deploy custom provider
-To set up the custom provider, deploy an [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/custom-providers/customprovider.json) to your Azure subscription.
+To set up the custom resource provider, deploy an [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/custom-providers/customprovider.json) to your Azure subscription.
-After deploying the template, your subscription has the following resources:
+The template deploys the following resources to your subscription:
-- Function App with the operations for the resources and actions.-- Storage Account for storing users that are created through the custom provider.-- Custom Provider that defines the custom resource types and actions. It uses the function app endpoint for sending requests.-- Custom resource from the custom provider.
+- Function app with the operations for the resources and actions.
+- Storage account for storing users that are created through the custom provider.
+- Custom resource provider that defines the custom resource types and actions. It uses the function app endpoint for sending requests.
+- Custom resource from the custom resource provider.
-To deploy the custom provider, use Azure CLI, PowerShell, or the Azure portal:
+To deploy the custom resource provider, use Azure CLI, PowerShell, or the Azure portal.
# [Azure CLI](#tab/azure-cli)
Read-Host -Prompt "Press [ENTER] to continue ..."
-You can also deploy the solution from the Azure portal. Select the **Deploy to Azure** button to open the template in the Azure portal.
+To deploy the template from the Azure portal, select the **Deploy to Azure** button.
[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-docs-json-samples%2Fmaster%2Fcustom-providers%2Fcustomprovider.json)
-## View custom provider and resource
+## View custom resource provider and resource
-In the portal, the custom provider is a hidden resource type. To confirm that the resource provider was deployed, navigate to the resource group. Select the option to **Show hidden types**.
+In the portal, the custom resource provider is a hidden resource type. To confirm that the resource provider was deployed, go to the resource group and select **Show hidden types**.
-![Show hidden resource types](./media/create-custom-provider/show-hidden.png)
-To see the custom resource type that you deployed, use the `GET` operation on your resource type.
+To see the custom resource that you deployed, use the `GET` operation on your resource type. The resource type `Microsoft.CustomProviders/resourceProviders/users` shown in the JSON response includes the resource that was created by the template.
```http GET https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users?api-version=2018-09-01-preview
You receive the response:
{ "value": [ {
- "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/santa",
- "name": "santa",
+ "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/ana",
+ "name": "ana",
"properties": {
- "FullName": "Santa Claus",
- "Location": "NorthPole",
+ "FullName": "Ana Bowman",
+ "Location": "Moon",
"provisioningState": "Succeeded" },
- "resourceGroup": "<rg-name>",
"type": "Microsoft.CustomProviders/resourceProviders/users" } ]
You receive the response:
{ "properties": { "provisioningState": "Succeeded",
- "FullName": "Santa Claus",
- "Location": "NorthPole"
+ "FullName": "Ana Bowman",
+ "Location": "Moon"
},
- "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/santa",
- "name": "santa",
+ "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/ana",
+ "name": "ana",
"type": "Microsoft.CustomProviders/resourceProviders/users" } ]
You receive the response:
## Call action
-Your custom provider also has an action named `ping`. The code that processes the request is implemented in the function app. The `ping` action replies with a greeting.
+Your custom resource provider also has an action named `ping`. The code that processes the request is implemented in the function app. The `ping` action replies with a greeting.
-To send a `ping` request, use the `POST` operation on your custom provider.
+To send a `ping` request, use the `POST` operation on your action.
```http POST https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/ping?api-version=2018-09-01-preview
You receive the response:
-## Create a resource type
+## Use PUT to create resource
+
+In this quickstart, the template used the resource type `Microsoft.CustomProviders/resourceProviders/users` to deploy a resource. You can also use a `PUT` operation to create a resource of that type, even when the resource isn't defined in the template.
-To create the custom resource type, you can deploy the resource in a template. This approach is shown in the template you deployed in this quickstart. You can also send a `PUT` request for the resource type.
+In this example, the template already deployed one resource, so the `PUT` operation creates an additional resource.
```http PUT https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/<resource-name>?api-version=2018-09-01-preview
You receive the response:
"Location": "Earth", "provisioningState": "Succeeded" },
- "resourceGroup": "<rg-name>",
"type": "Microsoft.CustomProviders/resourceProviders/users" } ```
You receive the response:
+You can rerun the `GET` operation from the [view custom resource provider and resource](#view-custom-resource-provider-and-resource) section to show the two resources that were created. This example shows output from the Azure CLI command.
+
+```json
+{
+ "value": [
+ {
+ "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/ana",
+ "name": "ana",
+ "properties": {
+ "FullName": "Ana Bowman",
+ "Location": "Moon",
+ "provisioningState": "Succeeded"
+ },
+ "type": "Microsoft.CustomProviders/resourceProviders/users"
+ },
+ {
+ "id": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.CustomProviders/resourceProviders/<provider-name>/users/testuser",
+ "name": "testuser",
+ "properties": {
+ "FullName": "Test User",
+ "Location": "Earth",
+ "provisioningState": "Succeeded"
+ },
+ "type": "Microsoft.CustomProviders/resourceProviders/users"
+ }
+ ]
+}
+```
+ ## Custom resource provider commands Use the [custom-providers](/cli/azure/custom-providers/resource-provider) commands to work with your custom resource provider.
The `delete` command prompts you and deletes only the custom resource provider.
az custom-providers resource-provider delete --resource-group $rgName --name $funcName ```
+## Clean up resources
+
+If you're finished with the resources created in this article, you can delete the resource group. When you delete a resource group, all the resources in that resource group are deleted.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az group delete --resource-group $rgName
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name $rgName
+```
++++ ## Next steps For an introduction to custom providers, see the following article:
azure-resource-manager Tutorial Custom Providers Function Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md
Title: Set up Azure Functions
-description: This tutorial goes over how to create a function app in Azure Functions and set it up to work with Azure Custom Providers.
-
+description: This tutorial describes how to create a function app in Azure Functions that works with Azure Custom Providers.
Previously updated : 05/06/2022 Last updated : 09/20/2022 + # Set up Azure Functions for custom providers
To start this tutorial, you should first follow the tutorial [Create your first
To install the Azure Table storage bindings:
-1. Go to the **Integrate** tab for the HttpTrigger.
+1. Go to the **Integrate** tab for the `HttpTrigger`.
1. Select **+ New Input**. 1. Select **Azure Table Storage**.
-1. Install the Microsoft.Azure.WebJobs.Extensions.Storage extension if it isn't already installed.
+1. Install the `Microsoft.Azure.WebJobs.Extensions.Storage` extension if it isn't already installed.
1. In the **Table parameter name** box, enter *tableStorage*. 1. In the **Table name** box, enter *myCustomResources*. 1. Select **Save** to save the updated input parameter.
-![Custom provider overview showing table bindings](./media/create-custom-provider/azure-functions-table-bindings.png)
## Update RESTful HTTP methods To set up the Azure function to include the custom provider RESTful request methods:
-1. Go to the **Integrate** tab for the HttpTrigger.
+1. Go to the **Integrate** tab for the `HttpTrigger`.
1. Under **Selected HTTP methods**, select **GET**, **POST**, **DELETE**, and **PUT**.
-![Custom provider overview showing HTTP methods](./media/create-custom-provider/azure-functions-http-methods.png)
## Add Azure Resource Manager NuGet packages > [!NOTE]
-> If your C# project file is missing from the project directory, you can add it manually, or it will appear after the Microsoft.Azure.WebJobs.Extensions.Storage extension is installed on the function app.
+> If your C# project file is missing from the project directory, you can add it manually, or it will appear after the `Microsoft.Azure.WebJobs.Extensions.Storage` extension is installed on the function app.
Next, update the C# project file to include helpful NuGet libraries. These libraries make it easier to parse incoming requests from custom providers. Follow the steps to [add extensions from the portal](../../azure-functions/functions-bindings-register.md) and update the C# project file to include the following package references:
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md
If you deploy a template with [complete mode](deployment-modes.md) and a resourc
## Next steps
-* For a Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+* For a Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
* For recommendations about creating templates, see [ARM template best practices](./best-practices.md). * To create multiple instances of a resource, see [Resource iteration in ARM templates](copy-resources.md).
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-resources.md
You can't use a copy loop for a child resource. To create more than one instance
For example, suppose you typically define a dataset as a child resource within a data factory. ```json
-"resources": [
{
- "type": "Microsoft.DataFactory/factories",
- "name": "exampleDataFactory",
- ...
"resources": [ {
- "type": "datasets",
- "name": "exampleDataSet",
- "dependsOn": [
- "exampleDataFactory"
- ],
+ "type": "Microsoft.DataFactory/factories",
+ "name": "exampleDataFactory",
+ ...
+ "resources": [
+ {
+ "type": "datasets",
+ "name": "exampleDataSet",
+ "dependsOn": [
+ "exampleDataFactory"
+ ],
+ ...
+ }
+ ]
... } ]
+}
``` To create more than one data set, move it outside of the data factory. The dataset must be at the same level as the data factory, but it's still a child resource of the data factory. You preserve the relationship between data set and data factory through the type and name properties. Since type can no longer be inferred from its position in the template, you must provide the fully qualified type in the format: `{resource-provider-namespace}/{parent-resource-type}/{child-resource-type}`.
The following examples show common scenarios for creating more than one instance
- To set dependencies on resources that are created in a copy loop, see [Define the order for deploying resources in ARM templates](./resource-dependency.md). - To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).-- For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
- For other uses of the copy loop, see: - [Property iteration in ARM templates](copy-properties.md) - [Variable iteration in ARM templates](copy-variables.md)
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-github-actions.md
When your resource group and repository are no longer needed, clean up the resou
> [Create your first ARM template](./template-tutorial-create-first-template.md) > [!div class="nextstepaction"]
-> [Learn module: Automate the deployment of ARM templates by using GitHub Actions](/learn/modules/deploy-templates-command-line-github-actions/)
+> [Learn module: Automate the deployment of ARM templates by using GitHub Actions](/training/modules/deploy-templates-command-line-github-actions/)
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md
You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API
### Training resources
-To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif).
+To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/training/modules/arm-template-whatif).
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
You can use the what-if operation through the Azure SDKs.
- [ARM Deployment Insights](https://marketplace.visualstudio.com/items?itemName=AuthorityPartnersInc.arm-deployment-insights) extension provides an easy way to integrate the what-if operation in your Azure DevOps pipeline. - To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). - If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).-- For a Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/).
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
The deployment script resource is only available in the regions where Azure Cont
### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts).
+To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
In this article, you learned how to use deployment scripts. To walk through a de
> [Tutorial: Use deployment scripts in Azure Resource Manager templates](./template-tutorial-deployment-script.md) > [!div class="nextstepaction"]
-> [Learn module: Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/)
+> [Learn module: Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts/)
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md
The following template dynamically creates the key vault ID and passes it as a p
- For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md
To implement infrastructure as code for your Azure solutions, use Azure Resource
To learn about how you can get started with ARM templates, see the following video.
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
## Why choose ARM templates?
This approach means you can safely share templates that meet your organization's
## Next steps * For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md).
-* To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
+* To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/training/paths/deploy-manage-resource-manager-templates/).
* For information about the properties in template files, see [Understand the structure and syntax of ARM templates](./syntax.md). * To learn about exporting templates, see [Quickstart: Create and deploy ARM templates by using the Azure portal](quickstart-create-templates-use-the-portal.md). * For answers to common questions, see [Frequently asked questions about ARM templates](./frequently-asked-questions.yml).
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
In the following example, a CDN endpoint explicitly depends on the CDN profile,
"originHostHeader": "[reference(variables('webAppName')).hostNames[0]]", ... }
+ ...
+}
``` To learn more, see [reference function](template-functions-resource.md#reference).
For information about assessing the deployment order and resolving dependency er
## Next steps * To go through a tutorial, see [Tutorial: Create ARM templates with dependent resources](template-tutorial-create-templates-with-dependent-resources.md).
-* For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+* For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
* For recommendations when setting dependencies, see [ARM template best practices](./best-practices.md). * To learn about troubleshooting dependencies during deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md). * To learn about creating Azure Resource Manager templates, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Last updated 07/18/2022
This article describes the structure of an Azure Resource Manager template (ARM template). It presents the different sections of a template and the properties that are available in those sections.
-This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
+This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/training/paths/deploy-manage-resource-manager-templates/).
> [!TIP] > Bicep is a new language that offers the same capabilities as ARM templates but with a syntax that's easier to use. If you're considering infrastructure as code options, we recommend looking at Bicep.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
When designing your deployment, always consider the lifecycle of the resources a
### Training resources
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/training/modules/arm-template-specs).
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Azure Resource Manager template specs in Bicep](../bicep/template-specs.md).
azure-resource-manager Template Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-test-cases.md
The following example **passes** because `expressionEvaluationOptions` uses `inn
## Next steps - To learn about running the test toolkit, see [Use ARM template test toolkit](test-toolkit.md).-- For a Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/training/modules/arm-template-test/).
- To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md).
azure-resource-manager Template Tutorial Create First Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md
This tutorial introduces you to Azure Resource Manager templates (ARM templates)
This tutorial is the first of a series. As you progress through the series, you modify the starting template, step by step, until you explore all of the core parts of an ARM template. These elements are the building blocks for more complex templates. We hope by the end of the series you're confident in creating your own templates and ready to automate your deployments with templates.
-If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of [Learn modules](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates).
+If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of [Learn modules](/training), see [Deploy and manage resources in Azure by using JSON ARM templates](/training/paths/deploy-manage-resource-manager-templates).
If you don't have a Microsoft Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Template Tutorial Create Multiple Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-multiple-instances.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Template Tutorial Create Templates With Dependent Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-templates-with-dependent-resources.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Template Tutorial Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
This tutorial covers the following tasks:
> * Debug the failed script > * Clean up resources
-For a Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/).
+For a Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts/).
## Prerequisites
azure-resource-manager Template Tutorial Use Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-conditions.md
This tutorial only covers a basic scenario of using conditions. For more informa
* [Template function: If](./template-functions-logical.md#if). * [Comparison functions for ARM templates](./template-functions-comparison.md)
-For a Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Template Tutorial Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-key-vault.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/training/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
The toolkit contains four sets of tests:
### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test).
+To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/training/modules/arm-template-test).
## Install on Windows
The next example shows how to run the tests.
- To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md).-- For a Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/training/modules/arm-template-test/).
azure-signalr Signalr Concept Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md
Because Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure, it's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment. > [!NOTE]
-> Learn to use SignalR and Azure Functions together in the interactive tutorial [Enable automatic updates in a web application using Azure Functions and SignalR Service](/learn/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
+> Learn to use SignalR and Azure Functions together in the interactive tutorial [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
## Integrate real-time communications with Azure services
In this article, you got an overview of how to use Azure Functions with SignalR
For full details on how to use Azure Functions and SignalR Service together visit the following resources: * [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md)
-* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/learn/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
+* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
Follow one of these quickstarts to learn more. * [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md)
-* [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
+* [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
azure-signalr Signalr Howto Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-guide.md
public class ThreadPoolStarvationDetector : EventListener
protected override void OnEventWritten(EventWrittenEventArgs eventData) {
- // See: https://docs.microsoft.com/en-us/dotnet/framework/performance/thread-pool-etw-events#threadpoolworkerthreadadjustmentadjustment
+ // See: https://learn.microsoft.com/dotnet/framework/performance/thread-pool-etw-events#threadpoolworkerthreadadjustmentadjustment
if (eventData.EventId == EventIdForThreadPoolWorkerThreadAdjustmentAdjustment && eventData.Payload[3] as uint? == ReasonForStarvation) {
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/overview.md
Azure SQL Edge is an optimized relational database engine geared for IoT and IoT
Azure SQL Edge is built on the latest versions of the [SQL Server Database Engine](/sql/sql-server/sql-server-technical-documentation), which provides industry-leading performance, security, and query processing capabilities. Because Azure SQL Edge is built on the same engine as [SQL Server](/sql/sql-server/sql-server-technical-documentation) and [Azure SQL](/azure/azure-sql/index), it provides the same Transact-SQL (T-SQL) programming surface area, which makes developing applications or solutions easier and faster and makes application portability between IoT Edge devices, data centers, and the cloud straightforward.

What is Azure SQL Edge video on Channel 9:
-> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/What-is-Azure-SQL-Edge/player]
+> [!VIDEO https://learn.microsoft.com/shows/Data-Exposed/What-is-Azure-SQL-Edge/player]
## Deployment Models
azure-sql-edge Tutorial Renewable Energy Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-renewable-energy-demo.md
This Azure SQL Edge demo is based on a Contoso Renewable Energy, a wind turbine
This demo walks you through resolving an alert raised because wind turbulence was detected at the device. You'll train a model and deploy it to SQL DB Edge to correct the detected wind wake and ultimately optimize power output.

Azure SQL Edge - Renewable Energy demo video on Channel 9:
-> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
+> [!VIDEO https://learn.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
## Setting up the demo on your local computer

Git will be used to copy all files from the demo to your local computer.
Git will be used to copy all files from the demo to your local computer.
2. Open a command prompt and navigate to a folder where the repo should be downloaded.
3. Issue the command `git clone https://github.com/microsoft/sql-server-samples.git`.
4. Navigate to **'sql-server-samples\samples\demos\azure-sql-edge-demos\Wind Turbine Demo'** in the location where the repository is cloned.
-5. Follow the instructions in README.md to set up the demo environment and execute the demo.
+5. Follow the instructions in README.md to set up the demo environment and execute the demo.
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/release-notes.md
# Azure Video Analyzer release notes
->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Azure+Video+Analyzer+on+IoT+Edge+release+notes%22&locale=en-us` into your RSS feed reader.
+>Get notified when this page is updated by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Azure+Video+Analyzer+on+IoT+Edge+release+notes%22&locale=en-us` into your RSS feed reader.
This article provides you with information about:
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Azure Video Indexer currently does not support any monitoring on metrics.
<!--**OPTION 1 EXAMPLE**
-<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/platform/metrics-supported, which is auto generated from underlying systems. Not all metrics are published depending on whether your product group wants them to be. If the metric is published, but descriptions are wrong of missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing metrics that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/platform/metrics-supported, which is auto generated from underlying systems. Not all metrics are published depending on whether your product group wants them to be. If the metric is published, but descriptions are wrong or missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing metrics that you and the PM know are available, both of you contact azmondocs@microsoft.com.
--> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
Azure Video Indexer does not have any metrics that contain dimensions.
Azure Video Indexer has the following dimensions associated with its metrics.
-<!-- See https://docs.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
+<!-- See https://learn.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
<!--**--EXAMPLE format when you have dimensions**
For reference, see a list of [all resource logs category types supported in Azur
<!--**OPTION 1 EXAMPLE**
-<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/platform/resource-logs-categories, which is auto generated from the REST API. Not all resource log types metrics are published depending on whether your product group wants them to be. If the resource log is published, but category display names are wrong or missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing resource logs that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/platform/resource-logs-categories, which is auto generated from the REST API. Not all resource log types metrics are published depending on whether your product group wants them to be. If the resource log is published, but category display names are wrong or missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing resource logs that you and the PM know are available, both of you contact azmondocs@microsoft.com.
--> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az
<!--**OPTION 1 EXAMPLE**
-<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com.
--> <!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
The following schemas are in use by Azure Video Indexer
<!-- replace below with the proper link to your main monitoring service article -->
- See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
See [Create diagnostic setting to collect platform logs and metrics in Azure](/a
:::image type="content" source="./media/monitor/toc-diagnostics-save.png" alt-text="Screenshot of diagnostic settings." lightbox="./media/monitor/toc-diagnostics-save.png":::

:::image type="content" source="./media/monitor/diagnostics-settings-destination.png" alt-text="Screenshot of where to send logs." lightbox="./media/monitor/diagnostics-settings-destination.png":::
-<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://docs.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. -->
+<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://learn.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. -->
The metrics and logs you can collect are discussed in the following sections.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
# Azure Video Indexer release notes
->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us` into your RSS feed reader.
+>Get notified when this page is updated by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us` into your RSS feed reader.
To stay up-to-date with the most recent Azure Video Indexer developments, this article provides you with information about:
With the ARM-based [paid (unlimited)](accounts-overview.md) account you are able
- [Azure role-based access control (RBAC)](../role-based-access-control/overview.md).
- Managed Identity to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs).
- Scale and automate your [deployment with ARM-template](deploy-with-arm-template.md), [bicep](deploy-with-bicep.md) or terraform.
+- [Create logic apps connector for ARM-based accounts](logic-apps-connector-arm-accounts.md).
To create an ARM-based account, see [create an account](create-account-portal.md).
Now supporting source languages for STT (speech-to-text), translation, and searc
For more information, see [supported languages](language-support.md).
+### Expanded the supported languages in LID and MLID through the API
+
+We expanded the list of languages supported in LID (language identification) and MLID (multi-language identification) through the API.
+
+For more information, see [supported languages](language-support.md).
+
### Configure confidence level in a person model with an API

Use the [Patch person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Patch-Person-Model) API to configure the confidence level for face recognition within a person model.
backup Automation Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/automation-backup.md
Once you assign an Azure Policy to a scope, all VMs that meet your criteria are
The following video illustrates how Azure Policy works for backup: <br><br>
-> [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
+> [!VIDEO https://learn.microsoft.com/shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
### Export backup-operational data
For more information on how to set up this runbook, see [Automatic retry of fail
The following video provides an end-to-end walk-through of the scenario: <br><br>
- > [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
+ > [!VIDEO https://learn.microsoft.com/shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
## Additional resources
backup Microsoft Azure Recovery Services Powershell All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md
$WC = New-Object System.Net.WebClient
$WC.DownloadFile($MarsAURL,'C:\downloads\MARSAgentInstaller.EXE')
C:\Downloads\MARSAgentInstaller.EXE /q
-MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://docs.microsoft.com/azure/backup/backup-client-automation#installation-options
+MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://learn.microsoft.com/azure/backup/backup-client-automation#installation-options
# Registering Windows Server or Windows client machine to a Recovery Services Vault
$CredsPath = "C:\downloads"
Set-OBMachineSetting -NoThrottle
# Encryption settings
$PassPhrase = ConvertTo-SecureString -String "Complex!123_STRING" -AsPlainText -Force
Set-OBMachineSetting -EncryptionPassPhrase $PassPhrase -SecurityPin "<generatedPIN>" #NOTE: You must generate a security pin by selecting Generate, under Settings > Properties > Security PIN in the Recovery Services vault section of the Azure portal.
-# See: https://docs.microsoft.com/rest/api/backup/securitypins/get
-# See: https://docs.microsoft.com/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0
+# See: https://learn.microsoft.com/rest/api/backup/securitypins/get
+# See: https://learn.microsoft.com/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0
# Back up files and folders
$NewPolicy = New-OBPolicy
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
## Next steps
-[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent.
+[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent.
backup Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/transport-layer-security.md
Title: Transport Layer Security in Azure Backup description: Learn how to enable Azure Backup to use the encryption protocol Transport Layer Security (TLS) to keep data secure when being transferred over a network. Previously updated : 11/01/2020 Last updated : 09/20/2022 # Transport Layer Security in Azure Backup
The following registry keys configure .NET Framework to support strong cryptogra
"SchUseStrongCrypto" = dword:00000001 ```
+## Azure TLS certificate changes
+
+Azure TLS/SSL endpoints now contain updated certificates that chain up to new root CAs. Make sure that your applications trust the updated root CAs listed below. [Learn more](../security/fundamentals/tls-certificate-changes.md#what-changed) about the possible impacts on your applications.
+
+Previously, most TLS certificates used by Azure services chained up to the following root CA:
+
+Common name of CA | Thumbprint (SHA1)
+--- | ---
+[Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | d4de20d05e66fc53fe1a50882c78db2852cae474
+
+Now, TLS certificates used by Azure services chain up to one of the following root CAs:
+
+Common name of CA | Thumbprint (SHA1)
+--- | ---
+[DigiCert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | df3c24f9bfd666761b268073fe06d1cc8d4f82a4
+[DigiCert Global Root CA](https://cacerts.digicert.com/DigiCertGlobalRootCA.crt) | a8985d3a65e5e5c4b2d7d66d40c6dd2fb19c5436
+[Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt)| d4de20d05e66fc53fe1a50882c78db2852cae474
+[D-TRUST Root Class 3 CA 2 2009](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt) | 58e8abb0361533fb80f79b1b6d29d3ff8d5f00f0
+[Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 73a5e64a3bff8316ff0edccc618a906e4eae4d74
+[Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 999a64c37ff47d9fab95f14769891460eec4c3c5
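If you want to confirm which of these roots a given service certificate actually chains to (for example, while verifying that your environment trusts the new CAs), the chain can be inspected programmatically. A minimal sketch, assuming a certificate exported to a local file (the file name is hypothetical) and using the SHA1 thumbprints from the table above:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography.X509Certificates;

class RootCaCheck
{
    // SHA1 thumbprints of the root CAs listed above (uppercase, no spaces).
    private static readonly string[] ExpectedRoots =
    {
        "DF3C24F9BFD666761B268073FE06D1CC8D4F82A4", // DigiCert Global Root G2
        "A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436", // DigiCert Global Root CA
        "D4DE20D05E66FC53FE1A50882C78DB2852CAE474", // Baltimore CyberTrust Root
        "58E8ABB0361533FB80F79B1B6D29D3FF8D5F00F0", // D-TRUST Root Class 3 CA 2 2009
        "73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74", // Microsoft RSA Root Certificate Authority 2017
        "999A64C37FF47D9FAB95F14769891460EEC4C3C5"  // Microsoft ECC Root Certificate Authority 2017
    };

    static void Main()
    {
        // Hypothetical file name; export the server certificate you want to verify.
        var cert = new X509Certificate2("server.cer");

        using (var chain = new X509Chain())
        {
            chain.Build(cert);

            // The last element of the built chain is the root certificate.
            var root = chain.ChainElements[chain.ChainElements.Count - 1].Certificate;
            var trusted = ExpectedRoots.Contains(root.Thumbprint, StringComparer.OrdinalIgnoreCase);
            Console.WriteLine($"Root: {root.Subject} ({root.Thumbprint}), expected root: {trusted}");
        }
    }
}
```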
+
## Frequently asked questions

### Why enable TLS 1.2?
The highest protocol version supported by both the client and server is negotiat
For improved security from protocol downgrade attacks, Azure Backup is beginning to disable TLS versions older than 1.2 in a phased manner. This is part of a long-term shift across services to disallow legacy protocol and cipher suite connections. Azure Backup services and components fully support TLS 1.2. However, Windows versions lacking required updates or certain customized configurations can still prevent TLS 1.2 protocols from being offered. This can cause failures, including but not limited to one or more of the following:

- Backup and restore operations may fail.
-- Backup components connections failures with error 10054 (An existing connection was forcibly closed by the remote host).
+- Backup component connections fail with error 10054 (An existing connection was forcibly closed by the remote host).
- Services related to Azure Backup won't stop or start as usual.

## Additional resources
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
For frequently asked questions, see the Bastion [FAQ](bastion-faq.md).
* [Quickstart: Deploy Bastion using default settings](quickstart-host-portal.md). * [Tutorial: Deploy Bastion using specified settings](tutorial-create-host-portal.md).
-* [Learn module: Introduction to Azure Bastion](/learn/modules/intro-to-azure-bastion/).
+* [Learn module: Introduction to Azure Bastion](/training/modules/intro-to-azure-bastion/).
* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
batch Batch Aad Auth Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth-management.md
Your client application uses the application ID (also referred to as the client
// Specify the unique identifier (the "Client ID") for your application. This is required so that your
// native client application (i.e. this sample) can access the Microsoft Graph API. For information
// about registering an application in Azure Active Directory, please see "Register an application with the Microsoft identity platform" here:
-// https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app
+// https://learn.microsoft.com/azure/active-directory/develop/quickstart-register-app
private const string ClientId = "<application-id>";
```

Also copy the redirect URI that you specified during the registration process. The redirect URI specified in your code must match the redirect URI that you provided when you registered the application.
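The sample's `ClientId` constant is consumed by its authentication flow, which this digest doesn't show. As a rough illustration only (using the MSAL `Microsoft.Identity.Client` NuGet package, which may differ from the library the sample itself uses), acquiring a token interactively with the registered application ID and redirect URI might look like the following; the redirect URI and scope are placeholders/assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class Program
{
    // Values from your app registration; placeholders only.
    private const string ClientId = "<application-id>";
    private const string RedirectUri = "http://localhost";

    static async Task Main()
    {
        var app = PublicClientApplicationBuilder
            .Create(ClientId)
            .WithRedirectUri(RedirectUri)
            .Build();

        // The scope mirrors the Microsoft Graph API mentioned in the comment above;
        // adjust it for the API your application actually calls.
        var result = await app.AcquireTokenInteractive(
                new[] { "https://graph.microsoft.com/.default" })
            .ExecuteAsync();

        Console.WriteLine($"Token acquired; expires on {result.ExpiresOn}");
    }
}
```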
batch Batch Pools Without Public Ip Addresses Classic Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md
In late 2021, we launched a simplified compute node communication model for Azur
[Simplified Compute Node Communication Pools without Public IPs](./simplified-node-communication-pool-no-public-ip.md) requires using simplified compute node communication. It provides customers with enhanced security for their workload environments on network isolation and data exfiltration to Azure Batch accounts. Its key benefits include:

* Allow creating simplified node communication pool without public IP addresses.
-* Support Batch private pool using a new private endpoint (sub-resource nodeManagement) for Azure Batch account.
+* Support Batch private pool using a new private endpoint (sub-resource: **nodeManagement**) for Azure Batch account.
* Simplified private link DNS zone for Batch account private endpoints: changed from **privatelink.\<region>.batch.azure.com** to **privatelink.batch.azure.com**. * Mutable public network access for Batch accounts. * Firewall support for Batch account public endpoints: configure IP address network rules to restrict public network access with Batch accounts. ## Migration steps
-Batch pool without public IP addresses (classic) will retire on **31/2023 and will be updated to simplified compute node communication pools without public IPs. For existing pools that use the previous preview version of Batch pool without public IP addresses (classic), it's only possible to migrate pools created in a virtual network. To migrate the pool, follow the opt-in process for simplified compute node communication:
+Batch pool without public IP addresses (classic) will retire on **31 March 2023** and will be updated to simplified compute node communication pools without public IPs. For existing pools that use the previous preview version of Batch pool without public IP addresses (classic), it's only possible to migrate pools created in a virtual network. To migrate the pool, follow the opt-in process for simplified compute node communication:
1. Opt in to [use simplified compute node communication](./simplified-compute-node-communication.md#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication).
Batch pool without public IP addresses (classic) will retire on **31/2023 and wi
* How can I connect to my pool nodes for troubleshooting?
- Similar to Batch pools without public IP addresses (classic). As there is no public IP address for the Batch pool, users will need to connect their pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use other remote connectivity solutions like [Azure Bastion](../bastion/bastion-overview.md).
+ Similar to Batch pools without public IP addresses (classic). As there's no public IP address for the Batch pool, users will need to connect their pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use other remote connectivity solutions like [Azure Bastion](../bastion/bastion-overview.md).
* Will there be any change to how my workloads are downloaded from Azure Storage?
Batch pool without public IP addresses (classic) will retire on **31/2023 and wi
* What if I don't migrate to simplified compute node communication pools without public IPs?
- After **31 March 2023**, we will stop supporting Batch pool without public IP addresses. The functionality of the existing pool in that configuration may break, such as scale out operations, or may be actively scaled down to zero at any point in time after that date.
+ After **31 March 2023**, we'll stop supporting Batch pool without public IP addresses. The functionality of the existing pool in that configuration may break, such as scale-out operations, or may be actively scaled down to zero at any point in time after that date.
## Next steps
-For more information, refer to [Simplified compute node communication](./simplified-compute-node-communication.md).
+For more information, see [Simplified compute node communication](./simplified-compute-node-communication.md).
batch Job Pool Lifetime Statistics Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md
Last updated 08/15/2022
The Azure Batch service currently supports API for Job/Pool to retrieve lifetime statistics. The API is used to get lifetime statistics for all the Pools/Jobs in the specified batch account or for a specified Pool/Job. The API collects the statistical data from when the Batch account was created until the last time updated or entire lifetime of the specified Job/Pool. Job/Pool lifetime statistics API is helpful for customers to analyze and evaluate their usage.
-To make the statistical data available for customers, the Batch service allocates batch pools and schedule jobs with an in-house MapReduce implementation to perform background periodic roll-up of statistics. The aggregation is performed for all accounts/pools/jobs in each region, no matter if customer needs or queries the stats for their account/pool/job. The operating cost includes eleven VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load.
+To make the statistical data available for customers, the Batch service allocates Batch pools and schedules jobs with an in-house MapReduce implementation to perform background periodic roll-up of statistics. The aggregation is performed for all accounts/pools/jobs in each region, regardless of whether the customer needs or queries the stats for their account/pool/job. The operating cost includes 11 VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load.
The MapReduce aggregation logic was implemented with legacy code, and no new features are being added or improved due to technical challenges with the legacy code. Still, the legacy code and its hosting repo need to be updated frequently to accommodate the ever-growing load in production and to meet security/compliance requirements. In addition, because the API provides lifetime statistics, the data keeps growing and demands more storage, causing performance issues, even though most customers aren't using the API. The Batch service currently absorbs all the compute and storage usage charges associated with MapReduce pools and jobs.
-The purpose of the API is designed and maintained to serve the customer in troubleshooting. However, not many customers use it in real life, and the customers are interested in extracting the details for not more than a month. Now more advanced ways of log/job/pool data can be collected and used on a need basis using Azure portal logs, Alerts, Log export, and other methods. Therefore, we are retire Job/Pool Lifetime.
+The API was designed and maintained to help customers with troubleshooting. However, few customers use it in practice, and those who do typically extract details for no more than a month. More advanced ways to collect and use log/job/pool data on demand are now available through Azure portal logs, alerts, log export, and other methods. Therefore, we're retiring the Job/Pool Lifetime Statistics API.
Job/Pool Lifetime Statistics API will be retired on **30 April 2023**. Once complete, the API will no longer work and will return an appropriate HTTP response error code back to the client.
Job/Pool Lifetime Statistics API will be retired on **30 April 2023**. Once comp
* Is there an alternate way to view logs of Pool/Jobs?
- Azure portal has various options to enable the logs, namely system logs, diagnostic logs. Refer [Monitor Batch Solutions](./monitoring-overview.md) for more information.
+ Azure portal has various options to enable the logs, namely system logs, diagnostic logs. See [Monitor Batch Solutions](./monitoring-overview.md) for more information.
* Can customers extract logs to their system if the API doesn't exist?
- Azure portal log feature allows every customer to extract the output and error logs to their workspace. Refer [Monitor with Application Insights](./monitor-application-insights.md) for more information.
+ Azure portal log feature allows every customer to extract the output and error logs to their workspace. See [Monitor with Application Insights](./monitor-application-insights.md) for more information.
## Next steps
-For more information, refer to [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md).
+For more information, see [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md).
batch Low Priority Vms Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/low-priority-vms-retirement-migration-guide.md
Azure Batch offers Low priority and Spot virtual machines (VMs). The virtual mac
Low priority VMs enable the customer to take advantage of unutilized capacity. The amount of available unutilized capacity can vary based on size, region, time of day, and more. At any point in time when Azure needs the capacity back, we'll evict low-priority VMs. Therefore, the low-priority offering is excellent for flexible workloads, like large processing jobs, dev/test environments, demos, and proofs of concept. In addition, low-priority VMs can easily be deployed through our virtual machine scale set offering.
-Low priority VMs are a deprecated feature, and it will never become Generally Available (GA). Spot VMs are the official preemptible offering from the Compute platform, and is generally available. Therefore, we'll retire Low Priority VMs on **30 September 2025**. After that, we'll stop supporting Low priority VMs. The existing Low priority pools may no longer work or be provisioned.
+Low priority VMs are a deprecated feature and will never become generally available (GA). Spot VMs are the official preemptible offering from the compute platform, and are generally available. Therefore, we'll retire Low Priority VMs on **30 September 2025**. After that, we'll stop supporting Low priority VMs. The existing Low priority pools may no longer work or be provisioned.
## Retirement alternative
The other key difference is that Azure Spot pricing is variable and based on the
When it comes to eviction, you have two policy options to choose between:
-* Stop/Deallocate (default) – when evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This is ideal for cases where the state is stored on disks.
+* Stop/Deallocate (default) – when evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This is ideal for cases where the state is stored on disks.
+* Delete – when evicted, the VM and underlying disks are deleted.

While similar in idea, there are a few key differences between these two purchasing options:
While similar in idea, there are a few key differences between these two purchas
## Migration steps
-Customers in User Subscription mode have the option to include Spot VMs using the following the steps below:
+Customers in User Subscription mode can include Spot VMs by using the following steps:
1. In the Azure portal, select the Batch account and view the existing pool or create a new pool.
2. Under **Scale**, users can choose 'Target dedicated nodes' or 'Target Spot/low-priority nodes.'
- ![Scale Target Nodes](../batch/media/certificates/lowpriorityvms-scale-target-nodes.png)
+ ![Scale Target Nodes](../batch/media/certificates/low-priority-vms-scale-target-nodes.png)
3. Navigate to the existing Pool and select 'Scale' to update the number of Spot nodes required based on the job scheduled.
4. Click **Save**.
Customers in Batch Managed mode must recreate the Batch account, pool, and jobs
* How to create a new Batch account /job/pool?
- Refer to the quick start [link](./batch-account-create-portal.md) on creating a new Batch account/pool/task.
+ See the quick start [link](./batch-account-create-portal.md) on creating a new Batch account/pool/task.
* Are Spot VMs available in Batch Managed mode?
Customers in Batch Managed mode must recreate the Batch account, pool, and jobs
* What is the pricing and eviction policy of Spot VMs? Can I view pricing history and eviction rates?
- Refer to [Spot VMs](../virtual-machines/spot-vms.md) for more information on using Spot VMs. Yes, you can see historical pricing and eviction rates per size in a region in the portal.
+ See [Spot VMs](../virtual-machines/spot-vms.md) for more information on using Spot VMs. Yes, you can see historical pricing and eviction rates per size in a region in the portal.
## Next steps
cdn Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-overview.md
For a complete list of features that each Azure CDN product supports, see [Compa
- To get started with CDN, see [Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md).
- Manage your CDN endpoints through the [Microsoft Azure portal](https://portal.azure.com) or with [PowerShell](cdn-manage-powershell.md).
- Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md).
-- [Learn module: Introduction to Azure Content Delivery Network (CDN)](/learn/modules/intro-to-azure-content-delivery-network).
+- [Learn module: Introduction to Azure Content Delivery Network (CDN)](/training/modules/intro-to-azure-content-delivery-network).
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 9/2/2022 Last updated : 9/19/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## September 2022 Guest OS
+
+>[!NOTE]
+
+>The September Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the September Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-09 | [5017315] | Latest Cumulative Update(LCU) | 6.48 | Sep 13, 2022 |
+| Rel 22-09 | [5016618] | IE Cumulative Updates | 2.128, 3.115, 4.108 | Aug 9, 2022 |
+| Rel 22-09 | [5017316] | Latest Cumulative Update(LCU) | 7.16 | Sep 13, 2022 |
+| Rel 22-09 | [5017305] | Latest Cumulative Update(LCU) | 5.72 | Sep 13, 2022 |
+| Rel 22-09 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.48 | May 10, 2022 |
+| Rel 22-09 | [5017397] | Servicing Stack Update | 2.128 | Sep 13, 2022 |
+| Rel 22-09 | [5017361] | September '22 Rollup | 2.128 | Sep 13, 2022 |
+| Rel 22-09 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.128 | Sep 13, 2022 |
+| Rel 22-09 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.128 | May 10, 2022 |
+| Rel 22-09 | [5016263] | Servicing Stack Update | 3.115 | July 12, 2022 |
+| Rel 22-09 | [5017370] | September '22 Rollup | 3.115 | Sep 13, 2022 |
+| Rel 22-09 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.115 | Sep 13, 2022 |
+| Rel 22-09 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.115 | May 10, 2022 |
+| Rel 22-09 | [5017398] | Servicing Stack Update | 4.108 | Sep 13, 2022 |
+| Rel 22-09 | [5017367] | Monthly Rollup | 4.108 | Sep 13, 2022 |
+| Rel 22-09 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.108 | Jun 14, 2022 |
+| Rel 22-09 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.108 | May 10, 2022 |
+| Rel 22-09 | [4578013] | OOB Standalone Security Update | 4.108 | Aug 19, 2020 |
+| Rel 22-09 | [5017396] | Servicing Stack Update | 5.72 | Sep 13, 2022 |
+| Rel 22-09 | [4494175] | Microcode | 5.72 | Sep 1, 2020 |
+| Rel 22-09 | [5015896] | Servicing Stack Update | 6.48 | Sep 1, 2020 |
+| Rel 22-09 | [5013626] | .NET Framework 4.8 Security and Quality Rollup LKG | 6.48 | May 10, 2022 |
+
+[5017315]: https://support.microsoft.com/kb/5017315
+[5016618]: https://support.microsoft.com/kb/5016618
+[5017316]: https://support.microsoft.com/kb/5017316
+[5017305]: https://support.microsoft.com/kb/5017305
+[5013641]: https://support.microsoft.com/kb/5013641
+[5017397]: https://support.microsoft.com/kb/5017397
+[5017361]: https://support.microsoft.com/kb/5017361
+[5013637]: https://support.microsoft.com/kb/5013637
+[5013644]: https://support.microsoft.com/kb/5013644
+[5016263]: https://support.microsoft.com/kb/5016263
+[5017370]: https://support.microsoft.com/kb/5017370
+[5013635]: https://support.microsoft.com/kb/5013635
+[5013642]: https://support.microsoft.com/kb/5013642
+[5017398]: https://support.microsoft.com/kb/5017398
+[5017367]: https://support.microsoft.com/kb/5017367
+[5013638]: https://support.microsoft.com/kb/5013638
+[5013643]: https://support.microsoft.com/kb/5013643
+[4578013]: https://support.microsoft.com/kb/4578013
+[5017396]: https://support.microsoft.com/kb/5017396
+[4494175]: https://support.microsoft.com/kb/4494175
+[5015896]: https://support.microsoft.com/kb/5015896
+[5013626]: https://support.microsoft.com/kb/5013626
+
## August 2022 Guest OS
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md
You can access the Cloud Shell in three ways:
![Icon to launch the Cloud Shell from the Azure portal](media/overview/portal-launch-icon.png)

-- **Code snippets**: In Microsoft [technical documentation](/) and [training resources](/learn), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
+- **Code snippets**: In Microsoft [technical documentation](/) and [training resources](/training), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
```azurecli-interactive
az account show
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
Known resolutions for troubleshooting issues in Azure Cloud Shell include:
### Disabling Cloud Shell in a locked down network environment

-- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `docs.microsoft.com`. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding `shell.azure.us`.
+- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `learn.microsoft.com`. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding `shell.azure.us`.
- **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but will not successfully connect to the service.

### Storage Dialog - Error: 403 RequestDisallowedByPolicy
cognitive-services Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/go.md
Responses from the Bing Web Search API are returned as JSON. This sample respons
```go
Microsoft Cognitive Services || https://www.microsoft.com/cognitive-services
Cognitive Services | Microsoft Azure || https://azure.microsoft.com/services/cognitive-services/
-What is Microsoft Cognitive Services? | Microsoft Docs || https://docs.microsoft.com/azure/cognitive-services/Welcome
+What is Microsoft Cognitive Services? | Microsoft Docs || https://learn.microsoft.com/azure/cognitive-services/Welcome
Microsoft Cognitive Toolkit || https://www.microsoft.com/en-us/cognitive-toolkit/
Microsoft Customers || https://customers.microsoft.com/en-us/search?sq=%22Microsoft%20Cognitive%20Services%22&ff=&p=0&so=story_publish_date%20desc
Microsoft Enterprise Services - Microsoft Enterprise || https://enterprise.microsoft.com/en-us/services/
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
This documentation contains the following types of articles:
* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions. For a more structured approach, follow a Learn module for Face.
-* [Detect and analyze faces with the Face service](/learn/modules/detect-analyze-faces/)
+* [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/)
## Example use cases
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
This documentation contains the following types of articles:
* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. For a more structured approach, follow a Learn module for Image Analysis.
-* [Analyze images with the Computer Vision service](/learn/modules/analyze-images-computer-vision/)
+* [Analyze images with the Computer Vision service](/training/modules/analyze-images-computer-vision/)
## Image Analysis features
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
This documentation contains the following types of articles:
* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. --> For a more structured approach, follow a Learn module for OCR.
-* [Read Text in Images and Documents with the Computer Vision Service](/learn/modules/read-text-images-documents-with-computer-vision-service/)
+* [Read Text in Images and Documents with the Computer Vision Service](/training/modules/read-text-images-documents-with-computer-vision-service/)
## Read API
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md
This documentation contains the following article types:
* [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions. For a more structured approach, follow a Learn module for Content Moderator.
-* [Introduction to Content Moderator](/learn/modules/intro-to-content-moderator/)
-* [Classify and moderate text with Azure Content Moderator](/learn/modules/classify-and-moderate-text-with-azure-content-moderator/)
+* [Introduction to Content Moderator](/training/modules/intro-to-content-moderator/)
+* [Classify and moderate text with Azure Content Moderator](/training/modules/classify-and-moderate-text-with-azure-content-moderator/)
## Where it's used
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
This documentation contains the following types of articles:
<!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.--> For a more structured approach, follow a Learn module for Custom Vision:
-* [Classify images with the Custom Vision service](/learn/modules/classify-images-custom-vision/)
-* [Classify endangered bird species with Custom Vision](/learn/modules/cv-classify-bird-species/)
+* [Classify images with the Custom Vision service](/training/modules/classify-images-custom-vision/)
+* [Classify endangered bird species with Custom Vision](/training/modules/cv-classify-bird-species/)
## How it works
cognitive-services Reference Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-markdown-format.md
A new line between 2 sentences.|`\n\n`|`How can I create a bot with \n\n QnA Mak
|Italics |`*text*`|`How do I create a bot with *QnA Maker*?`|![format with italics](./media/qnamaker-concepts-datasources/format-italics.png)|
|Strong (bold)|`**text**`|`How do I create a bot with **QnA Maker**?`|![format with strong marking for bold](./media/qnamaker-concepts-datasources/format-strong.png)|
|URL for link|`[text](https://www.my.com)`|`How do I create a bot with [QnA Maker](https://www.qnamaker.ai)?`|![format for URL (hyperlink)](./media/qnamaker-concepts-datasources/format-url.png)|
-|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![QnAMaker](https://review.docs.microsoft.com/azure/cognitive-services/qnamaker/media/qnamaker-how-to-key-management/qnamaker-resource-list.png)`|![format for public image URL](./media/qnamaker-concepts-datasources/format-image-url.png)|
+|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![QnAMaker](https://review.learn.microsoft.com/azure/cognitive-services/qnamaker/media/qnamaker-how-to-key-management/qnamaker-resource-list.png)`|![format for public image URL](./media/qnamaker-concepts-datasources/format-image-url.png)|
|Strikethrough|`~~text~~`|`some ~~questoins~~ questions need to be asked`|![format for strikethrough](./media/qnamaker-concepts-datasources/format-strikethrough.png)|
|Bold and italics|`***text***`|`How can I create a ***QnA Maker*** bot?`|![format for bold and italics](./media/qnamaker-concepts-datasources/format-bold-italics.png)|
|Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**QnA Maker**](https://www.qnamaker.ai)?`|![format for bold URL](./media/qnamaker-concepts-datasources/format-bold-url.png)|
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/whats-new.md
Learn what's new with QnA Maker.
* New version of QnA Maker launched in free Public Preview. Read more [here](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575).
-> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
+> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
* Simplified resource creation
* End to End region support
* Deep learnt ranking model
cognitive-services Audio Processing Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/audio-processing-speech-sdk.md
Previously updated : 01/31/2022 Last updated : 09/16/2022 ms.devlang: cpp, csharp, java
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
Previously updated : 06/13/2022 Last updated : 09/16/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-translation.md
Previously updated : 06/13/2022 Last updated : 09/16/2022 zone_pivot_groups: programming-languages-speech-services keywords: speech translation
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
Previously updated : 06/13/2022 Last updated : 09/16/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services How To Custom Commands Deploy Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-deploy-cicd.md
The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands]
| - | | -- |
| SourceAppId | ID of the DEV application |
| TargetAppId | ID of the PROD application |
- | SubscriptionKey | Subscription key used for both applications |
+ | SubscriptionKey | The key used for both applications |
| Culture | Culture of the applications (i.e. en-us) |

> [!div class="mx-imgBorder"]
The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands]
```
| Arguments | Description |
| - | | -- |
- | region | region of the application, i.e. westus2. |
- | subscriptionkey | subscription key of your speech resource. |
+ | region | Your Speech resource region. For example: `westus2` |
+ | subscriptionkey | Your Speech resource key. |
| appid | the Custom Commands' application ID you want to export. |

1. Push these changes to your repository.
The scripts are hosted at [Cognitive Services Voice Assistant - Custom Commands]
| Variable | Description |
| - | | -- |
| TargetAppId | ID of the PROD application |
- | SubscriptionKey | Subscription key used for both applications |
+ | SubscriptionKey | The key used for both applications |
| Culture | Culture of the applications (i.e. en-us) |

1. Click "Run", and then click the running "Job".
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
Add the code-behind source as follows:
1. Add the following code to the method body of `InitializeDialogServiceConnector`

```csharp
- // This code creates the `DialogServiceConnector` with your subscription information.
- // create a DialogServiceConfig by providing a Custom Commands application id and Cognitive Services subscription key
- // the RecoLanguage property is optional (default en-US); note that only en-US is supported in Preview
+ // This code creates the `DialogServiceConnector` with your resource information.
+ // create a DialogServiceConfig by providing a Custom Commands application id and Speech resource key
+ // The RecoLanguage property is optional (default en-US); note that only en-US is supported in Preview
const string speechCommandsApplicationId = "YourApplicationId"; // Your application id
- const string speechSubscriptionKey = "YourSpeechSubscriptionKey"; // Your subscription key
- const string region = "YourServiceRegion"; // The subscription service region.
+ const string speechSubscriptionKey = "YourSpeechSubscriptionKey"; // Your Speech resource key
+ const string region = "YourServiceRegion"; // The Speech resource region.
var speechCommandsConfig = CustomCommandsConfig.FromSubscription(speechCommandsApplicationId, speechSubscriptionKey, region);
speechCommandsConfig.SetProperty(PropertyId.SpeechServiceConnection_RecoLanguage, "en-us");
connector = new DialogServiceConnector(speechCommandsConfig);
```
-1. Replace the strings `YourApplicationId`, `YourSpeechSubscriptionKey`, and `YourServiceRegion` with your own values for your app, speech subscription, and [region](regions.md)
+1. Replace the strings `YourApplicationId`, `YourSpeechSubscriptionKey`, and `YourServiceRegion` with your own values for your app, speech key, and [region](regions.md)
1. Append the following code snippet to the end of the method body of `InitializeDialogServiceConnector`
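The snippet that this step appends isn't reproduced in the digest. As an illustrative sketch only (not the article's code), wiring up the connector's event handlers with the Speech SDK's `Microsoft.CognitiveServices.Speech.Dialog` types might look like the following, logging with `Debug.WriteLine` in place of whatever UI-update helper the sample actually uses:

```csharp
// Illustrative only; appended inside InitializeDialogServiceConnector after the
// `connector` field is created above. Requires `using System.Diagnostics;`.
connector.ActivityReceived += (sender, activityReceivedEventArgs) =>
{
    // Activities arrive from your Custom Commands application as JSON strings.
    Debug.WriteLine($"Activity received (hasAudio={activityReceivedEventArgs.HasAudio}): {activityReceivedEventArgs.Activity}");
};

connector.Canceled += (sender, canceledEventArgs) =>
{
    // Surface connection and service errors.
    Debug.WriteLine($"Canceled ({canceledEventArgs.Reason}): {canceledEventArgs.ErrorDetails}");
};

connector.Recognizing += (sender, recognitionEventArgs) =>
{
    Debug.WriteLine($"Recognizing: {recognitionEventArgs.Result.Text}");
};

connector.Recognized += (sender, recognitionEventArgs) =>
{
    Debug.WriteLine($"Recognized: {recognitionEventArgs.Result.Text}");
};

connector.SessionStarted += (sender, sessionEventArgs) =>
{
    Debug.WriteLine($"Session started: {sessionEventArgs.SessionId}");
};

connector.SessionStopped += (sender, sessionEventArgs) =>
{
    Debug.WriteLine($"Session stopped: {sessionEventArgs.SessionId}");
};
```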
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
## Set up your Azure account
-A Speech service subscription is required before you can use Custom Neural Voice. Follow these instructions to create a Speech service subscription in Azure. If you don't have an Azure account, you can sign up for a new one.
+A Speech resource is required before you can use Custom Neural Voice. Follow these instructions to create a Speech resource in Azure. If you don't have an Azure account, you can sign up for a new one.
-Once you've created an Azure account and a Speech service subscription, you'll need to sign in to Speech Studio and connect your subscription.
+Once you've created an Azure account and a Speech resource, you'll need to sign in to Speech Studio and connect your subscription.
-1. Get your Speech service subscription key from the Azure portal.
+1. Get your Speech resource key from the Azure portal.
1. Sign in to [Speech Studio](https://aka.ms/speechstudio), and then select **Custom Voice**.
1. Select your subscription and create a speech project.
1. If you want to switch to another Speech subscription, select the **cog** icon at the top.
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
You can suspend and resume your endpoint if you don't use it all the time. When
You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.

> [!NOTE]
->- Standard subscription (S0) users can create up to 50 endpoints, each with its own custom neural voice.
->- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to pass through the authentication of the text-to-speech service.
+>- You can create up to 50 endpoints with a standard (S0) Speech resource, each with its own custom neural voice.
+>- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text-to-speech service.
After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
The application settings that you use as REST API [request parameters](#request-
:::image type="content" source="./media/custom-voice/cnv-endpoint-app-settings-zoom.png" alt-text="Screenshot of custom endpoint app settings in Speech Studio." lightbox="./media/custom-voice/cnv-endpoint-app-settings-full.png":::
-* The **Endpoint key** shows the subscription key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
+* The **Endpoint key** shows the Speech resource key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
* The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`.
* The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
The possible `status` property values are:
##### Get endpoint example
-For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
HTTP example:

```HTTP
GET api/texttospeech/v3.0/endpoints/<YourEndpointId> HTTP/1.1
-Ocp-Apim-Subscription-Key: YourSubscriptionKey
-Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+Ocp-Apim-Subscription-Key: YourResourceKey
+Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
```

cURL example:

```Console
-curl -v -X GET "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey >"
+curl -v -X GET "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourResourceKey >"
```

Response header example:
Use the [get endpoint](#get-endpoint) operation to poll and track the status pro
##### Suspend endpoint example
-For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
HTTP example: ```HTTP POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend HTTP/1.1
-Ocp-Apim-Subscription-Key: YourSubscriptionKey
-Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+Ocp-Apim-Subscription-Key: YourResourceKey
+Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
Content-Type: application/json Content-Length: 0 ```
Content-Length: 0
cURL example: ```Console
-curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey >" -H "content-type: application/json" -H "content-length: 0"
+curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourResourceKey >" -H "content-type: application/json" -H "content-length: 0"
``` Response header example:
Use the [get endpoint](#get-endpoint) operation to poll and track the status pro
##### Resume endpoint example
-For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
HTTP example: ```HTTP POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume HTTP/1.1
-Ocp-Apim-Subscription-Key: YourSubscriptionKey
-Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+Ocp-Apim-Subscription-Key: YourResourceKey
+Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
Content-Type: application/json Content-Length: 0 ```
Content-Length: 0
cURL example: ```Console
-curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey >" -H "content-type: application/json" -H "content-length: 0"
+curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourResourceKey >" -H "content-type: application/json" -H "content-length: 0"
``` Response header example:
For more information, see [response headers](#response-headers).
##### Request parameters
-You use these request parameters with calls to the REST API. See [application settings](#application-settings) for information about where to get your region, endpoint ID, and subscription key in Speech Studio.
+You use these request parameters with calls to the REST API. See [application settings](#application-settings) for information about where to get your region, endpoint ID, and Speech resource key in Speech Studio.
| Name | Location | Required | Type | Description |
| ---- | -------- | -------- | ---- | ----------- |
-| `YourServiceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. |
+| `YourResourceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. |
| `YourEndpointId` | Path | `True` | string | The identifier of the endpoint. |
-| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The subscription key the endpoint is associated with. |
+| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The Speech resource key the endpoint is associated with. |
##### Response headers
The HTTP status code for each response indicates success or common errors.
| 200 | OK | The request was successful. |
| 202 | Accepted | The request has been accepted and is being processed. |
| 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |
-| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. |
-| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your Speech resource key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. |
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your Speech resource. |
| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
## Use your custom voice
The difference between Custom voice sample codes and [Text-to-speech quickstart
::: zone pivot="programming-language-csharp" ```csharp
-var speechConfig = SpeechConfig.FromSubscription(YourSubscriptionKey, YourServiceRegion);
+var speechConfig = SpeechConfig.FromSubscription(YourResourceKey, YourResourceRegion);
speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName"; speechConfig.EndpointId = "YourEndpointId"; ```
speechConfig.EndpointId = "YourEndpointId";
::: zone pivot="programming-language-cpp" ```cpp
-auto speechConfig = SpeechConfig::FromSubscription(YourSubscriptionKey, YourServiceRegion);
+auto speechConfig = SpeechConfig::FromSubscription(YourResourceKey, YourResourceRegion);
speechConfig->SetSpeechSynthesisVoiceName("YourCustomVoiceName"); speechConfig->SetEndpointId("YourEndpointId"); ```
speechConfig->SetEndpointId("YourEndpointId");
::: zone pivot="programming-language-java" ```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription(YourSubscriptionKey, YourServiceRegion);
+SpeechConfig speechConfig = SpeechConfig.fromSubscription(YourResourceKey, YourResourceRegion);
speechConfig.setSpeechSynthesisVoiceName("YourCustomVoiceName"); speechConfig.setEndpointId("YourEndpointId"); ```
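As a brief follow-up (a sketch only, assuming the C# configuration above and an async context), the configured custom voice can then be used through the SDK's `SpeechSynthesizer`:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

// Inside an async method, with speechConfig built as shown above.
using var synthesizer = new SpeechSynthesizer(speechConfig);
var result = await synthesizer.SpeakTextAsync("Text to synthesize with your custom neural voice.");
if (result.Reason == ResultReason.SynthesizingAudioCompleted)
{
    Console.WriteLine("Speech synthesized to the default speaker.");
}
```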
cognitive-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-intents-from-speech-csharp.md
Next, you add code to the project.
[!code-csharp[Intent recognition by using a microphone](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#intentRecognitionWithMicrophone)]
-1. Replace the placeholders in this method with your LUIS subscription key, region, and app ID as follows.
+1. Replace the placeholders in this method with your LUIS resource key, region, and app ID as follows.
| Placeholder | Replace with |
| ----------- | ------------ |
- | `YourLanguageUnderstandingSubscriptionKey` | Your LUIS key. Again, you must get this item from your Azure dashboard. You can find it on your app's **Azure Resources** page (under **Manage**) in the [LUIS portal](https://www.luis.ai/home). |
- | `YourLanguageUnderstandingServiceRegion` | The short identifier for the region your LUIS subscription is in, such as `westus` for West US. See [Regions](regions.md). |
+ | `YourLanguageUnderstandingSubscriptionKey` | Your LUIS resource key. Again, you must get this item from your Azure dashboard. You can find it on your app's **Azure Resources** page (under **Manage**) in the [LUIS portal](https://www.luis.ai/home). |
+ | `YourLanguageUnderstandingServiceRegion` | The short identifier for the region your LUIS resource is in, such as `westus` for West US. See [Regions](regions.md). |
| `YourLanguageUnderstandingAppId` | The LUIS app ID. You can find it on your app's **Settings** page in the [LUIS portal](https://www.luis.ai/home). |
With these changes made, you can build (**Control+Shift+B**) and run (**F5**) the application. When you're prompted, try saying "Turn off the lights" into your PC's microphone. The application displays the result in the console window.
The following sections include a discussion of the code.
## Create an intent recognizer
-First, you need to create a speech configuration from your LUIS prediction key and region. You can use speech configurations to create recognizers for the various capabilities of the Speech SDK. The speech configuration has multiple ways to specify the subscription you want to use; here, we use `FromSubscription`, which takes the subscription key and region.
+First, you need to create a speech configuration from your LUIS prediction key and region. You can use speech configurations to create recognizers for the various capabilities of the Speech SDK. The speech configuration has multiple ways to specify the resource you want to use; here, we use `FromSubscription`, which takes the resource key and region.
> [!NOTE]
-> Use the key and region of your LUIS subscription, not a Speech service subscription.
+> Use the key and region of your LUIS resource, not a Speech resource.
-Next, create an intent recognizer using `new IntentRecognizer(config)`. Since the configuration already knows which subscription to use, you don't need to specify the subscription key again when creating the recognizer.
+Next, create an intent recognizer using `new IntentRecognizer(config)`. Since the configuration already knows which resource to use, you don't need to specify the key again when creating the recognizer.
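A minimal C# sketch of that pattern (placeholders follow the table above; this is not the article's full sample):

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

// Use the LUIS resource key and region here, not a Speech resource key.
var config = SpeechConfig.FromSubscription(
    "YourLanguageUnderstandingSubscriptionKey",
    "YourLanguageUnderstandingServiceRegion");

// The configuration already carries the key and region, so no further credentials are needed.
using var recognizer = new IntentRecognizer(config);
```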
## Import a LUIS model and add intents
cognitive-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-speech.md
Previously updated : 04/24/2022 Last updated : 09/16/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python zone_pivot_groups: programming-languages-speech-services
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Previously updated : 01/23/2022 Last updated : 09/16/2022 ms.devlang: cpp, csharp, java, javascript, python
cognitive-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis.md
Previously updated : 03/14/2022 Last updated : 09/16/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md
For a complete voice assistant experience, the application will need a dialog se
These are the requirements to create a basic dialog service using Direct Line Speech.
-- **Speech resource:** A subscription for Cognitive Speech Services for speech-to-text and text-to-speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
+- **Speech resource:** A resource for Cognitive Speech Services for speech-to-text and text-to-speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
- **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot".
## Try out the sample app
-With your Speech Services subscription key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials.
+With your Speech resource key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials.
## Create your own voice assistant for Windows
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 06/21/2022 Last updated : 09/16/2022 zone_pivot_groups: programming-languages-speech-services-nomore-variant
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 08/25/2022 Last updated : 09/16/2022
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
Previously updated : 01/24/2022 Last updated : 09/16/2022
get_voices()
Replace the following values:
-* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
+* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
* Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
You'll see output that looks like this:
submit_synthesis()
Replace the following values:
-* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
+* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
* Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
* Replace `<input_file_path>` with the path to the text file you've prepared for text-to-speech.
* Replace `<locale>` with the desired output locale. For more information, see [language support](language-support.md?tabs=stt-tts).
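For orientation, here's a rough C# equivalent of the `get_voices()` call using the key and region placeholders described above. This is a sketch under assumptions: the URL path mirrors the article's Python sample and should be verified against the current Long Audio API reference.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LongAudioVoicesExample
{
    static async Task Main()
    {
        var key = "<your_key>";   // Speech resource key from the resource's Overview tab
        var region = "<region>";  // Region where the Speech resource was created, for example eastus

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Assumed voices-list path for the Long Audio API; confirm it against the article's sample.
        var url = $"https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/voices";
        Console.WriteLine(await client.GetStringAsync(url));
    }
}
```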
The following table details the HTTP response codes and messages from the REST A
| API | HTTP status code | Description | Solution |
| --- | ---------------- | ----------- | -------- |
-| Create | 400 | The voice synthesis is not enabled in this region. | Change the speech subscription key with a supported region. |
-| | 400 | Only the **Standard** speech subscription for this region is valid. | Change the speech subscription key to the "Standard" pricing tier. |
+| Create | 400 | The voice synthesis is not enabled in this region. | Change the speech resource key with a supported region. |
+| | 400 | Only the **Standard** speech resource for this region is valid. | Change the speech resource key to the "Standard" pricing tier. |
| | 400 | Exceed the 20,000 request limit for the Azure account. Remove some requests before submitting new ones. | The server will keep up to 20,000 requests for each Azure account. Delete some requests before submitting new ones. |
| | 400 | This model cannot be used in the voice synthesis: {modelID}. | Make sure the {modelID}'s state is correct. |
| | 400 | The region for the request does not match the region for the model: {modelID}. | Make sure the {modelID}'s region matches the request's region. |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
Previously updated : 04/21/2022 Last updated : 09/16/2022
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
In this quickstart, you create and test a basic Custom Commands application using Speech Studio. You will also be able to access this application from a Windows client app.
## Region Availability
-At this time, Custom Commands supports speech subscriptions created in regions that have [voice assistant capabilities](./regions.md#voice-assistants).
+At this time, Custom Commands supports speech resources created in regions that have [voice assistant capabilities](./regions.md#voice-assistants).
## Prerequisites
At this time, Custom Commands supports speech subscriptions created in regions t
1. In a web browser, go to [Speech Studio](https://aka.ms/speechstudio/customcommands).
1. Enter your credentials to sign in to the portal.
- The default view is your list of Speech subscriptions.
+ The default view is your list of Speech resources.
> [!NOTE]
- > If you don't see the select subscription page, you can navigate there by choosing "Speech resources" from the settings menu on the top bar.
+ > If you don't see the select resource page, you can navigate there by choosing "Resource" from the settings menu on the top bar.
-1. Select your Speech subscription, and then select **Go to Studio**.
+1. Select your Speech resource, and then select **Go to Studio**.
1. Select **Custom Commands**.
- The default view is a list of the Custom Commands applications you have under your selected subscription.
+ The default view is a list of the Custom Commands applications you have under your selected resource.
## Import an existing application as a new Custom Commands project
cognitive-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md
Previously updated : 06/10/2022 Last updated : 09/16/2022 zone_pivot_groups: programming-languages-speech-sdk
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
Previously updated : 07/27/2022 Last updated : 09/16/2022
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
# Back up and recover speech customer resources
-The Speech service is [available in various regions](./regions.md). Service subscription keys are tied to a single region. When you acquire a key, you select a specific region, where your data, model and deployments reside.
+The Speech service is [available in various regions](./regions.md). Speech resource keys are tied to a single region. When you acquire a key, you select a specific region, where your data, model and deployments reside.
Datasets for customer-created data assets, such as customized speech models, custom voice fonts and speaker recognition voice profiles, are also **available only within the service-deployed region**. Such assets are:
These assets are backed up regularly and automatically by the repositories thems
## How to monitor service availability
-If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a service subscription.
+If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a Speech resource.
Follow these steps to configure your client to monitor for errors:
Follow these steps to configure your client to monitor for errors:
4. Each region has its own STS token service. For the primary region and any backup regions, your client configuration file needs to know the:
   - Regional Speech service endpoints
- - [Regional subscription key and the region code](./rest-speech-to-text.md)
+ - [Regional key and the region code](./rest-speech-to-text.md)
5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here's sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
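The linked GitHub sample is the complete version; the following is only a compressed C# sketch of the idea, with hypothetical primary and backup regions, using the SDK's cancellation details to decide when to switch.

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Hypothetical values; replace with your own Speech resources and regions.
var primaryConfig = SpeechConfig.FromSubscription("YourPrimaryResourceKey", "westus");
var backupConfig = SpeechConfig.FromSubscription("YourBackupResourceKey", "eastus");

using var recognizer = new SpeechRecognizer(primaryConfig, AudioConfig.FromDefaultMicrophoneInput());
recognizer.Canceled += (s, e) =>
{
    // Connection timeouts and failures are candidates for redirecting to the backup region.
    if (e.Reason == CancellationReason.Error &&
        (e.ErrorCode == CancellationErrorCode.ConnectionFailure ||
         e.ErrorCode == CancellationErrorCode.ServiceTimeout))
    {
        // Recreate the recognizer from backupConfig and retry the request here.
    }
};
```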
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
The endpoint for the REST API for short audio has this format:
https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1 ```
-Replace `<REGION_IDENTIFIER>` with the identifier that matches the [region](regions.md) of your subscription.
+Replace `<REGION_IDENTIFIER>` with the identifier that matches the [region](regions.md) of your Speech resource.
> [!NOTE] > You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
This table lists required and optional headers for speech-to-text requests:
|Header| Description | Required or optional |
|------|-------------|----------------------|
-| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. |
+| `Ocp-Apim-Subscription-Key` | Your resource key for the Speech service. | Either this header or `Authorization` is required. |
| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
| `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results. These scores assess the pronunciation quality of speech input, with indicators like accuracy, fluency, and completeness. <br><br>This parameter is a Base64-encoded JSON that contains multiple detailed parameters. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters). | Optional |
| `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required |
The following sample includes the host name and required headers. It's important
POST speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
Accept: application/json;text/xml
Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000
-Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY
+Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY
Host: westus.stt.speech.microsoft.com
Transfer-Encoding: chunked
Expect: 100-continue
The HTTP status code for each response indicates success or common errors.
| 100 | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (This code is used with chunked transfer.) |
| 200 | OK | The request was successful. The response body is a JSON object. |
| 400 | Bad request | The language code wasn't provided, the language isn't supported, or the audio file is invalid (for example). |
-| 401 | Unauthorized | A subscription key or an authorization token is invalid in the specified region, or an endpoint is invalid. |
-| 403 | Forbidden | A subscription key or authorization token is missing. |
+| 401 | Unauthorized | A resource key or an authorization token is invalid in the specified region, or an endpoint is invalid. |
+| 403 | Forbidden | A resource key or authorization token is missing. |
### Chunked transfer
request.Method = "POST";
request.ProtocolVersion = HttpVersion.Version11; request.Host = host; request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";
-request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_SUBSCRIPTION_KEY";
+request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_RESOURCE_KEY";
request.AllowWriteStreamBuffering = false; using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
The Speech service allows you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region by using a REST API. In this article, you'll learn about authorization options, query options, how to structure a request, and how to interpret a response.
-The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A subscription key for the endpoint or region that you plan to use is required. Here are links to more information:
+The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required. Here are links to more information:
- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=stt-tts).
- For information about regional availability, see [Speech service supported regions](regions.md#speech-service).
This table lists required and optional headers for text-to-speech requests:
| Header | Description | Required or optional |
|--------|-------------|----------------------|
-| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. |
+| `Ocp-Apim-Subscription-Key` | Your Speech resource key. | Either this header or `Authorization` is required. |
| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
### Request body
This request requires only an authorization header:
GET /cognitiveservices/voices/list HTTP/1.1 Host: westus.tts.speech.microsoft.com
-Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY
+Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY
``` ### Sample response
The HTTP status code for each response indicates success or common errors.
||-|--|
| 200 | OK | The request was successful. |
| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
-| 401 | Unauthorized | The request is not authorized. Make sure your subscription key or token is valid and in the correct region. |
-| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your subscription. |
+| 401 | Unauthorized | The request is not authorized. Make sure your resource key or token is valid and in the correct region. |
+| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. |
| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
The `v1` endpoint allows you to convert text to speech by using [Speech Synthesi
### Regions and endpoints
-These regions are supported for text-to-speech through the REST API. Be sure to select the endpoint that matches your subscription region.
+These regions are supported for text-to-speech through the REST API. Be sure to select the endpoint that matches your Speech resource region.
[!INCLUDE [](includes/cognitive-services-speech-service-endpoints-text-to-speech.md)]
The HTTP status code for each response indicates success or common errors:
||-|--|
| 200 | OK | The request was successful. The response body is an audio file. |
| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
-| 401 | Unauthorized | The request is not authorized. Make sure your subscription key or token is valid and in the correct region. |
+| 401 | Unauthorized | The request is not authorized. Make sure your Speech resource key or token is valid and in the correct region. |
| 415 | Unsupported media type | It's possible that the wrong `Content-Type` value was provided. `Content-Type` should be set to `application/ssml+xml`. |
-| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your subscription. |
+| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. |
| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
If the HTTP status is `200 OK`, the body of the response contains an audio file in the requested format. This file can be played as it's transferred, saved to a buffer, or saved to a file.
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md
Available to US government entities and their partners only. See more informatio
- Text-to-speech
  - Standard voice
  - Neural voice
- - Speech translator
+ - Speech translation
- **Unsupported features:**
  - Custom Voice
- **Supported languages:**
Speech Services REST API endpoints in Azure Government have the following format
| REST API type / operation | Endpoint format |
|--|--|
| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/sts/v1.0/issueToken` |
-| [Speech-to-text REST API v3.0](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` |
+| [Speech-to-text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` |
| [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` |
| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` |
Speech Services REST API endpoints in Azure China have the following format:
| REST API type / operation | Endpoint format |
|--|--|
| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/sts/v1.0/issueToken` |
-| [Speech-to-text REST API v3.0](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
+| [Speech-to-text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
| [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` |
For [Speech SDK](speech-sdk.md) in sovereign clouds you need to use "from host /
# [C#](#tab/c-sharp) ```csharp
-var config = SpeechConfig.FromHost(azCnHost, subscriptionKey);
+var config = SpeechConfig.FromHost("azCnHost", subscriptionKey);
``` # [C++](#tab/cpp) ```cpp
-auto config = SpeechConfig::FromHost(azCnHost, subscriptionKey);
+auto config = SpeechConfig::FromHost("azCnHost", subscriptionKey);
``` # [Java](#tab/java) ```java
-SpeechConfig config = SpeechConfig.fromHost(azCnHost, subscriptionKey);
+SpeechConfig config = SpeechConfig.fromHost("azCnHost", subscriptionKey);
``` # [Python](#tab/python) ```python import azure.cognitiveservices.speech as speechsdk
-speech_config = speechsdk.SpeechConfig(host=azCnHost, subscription=subscriptionKey)
+speech_config = speechsdk.SpeechConfig(host="azCnHost", subscription=subscriptionKey)
``` # [Objective-C](#tab/objective-c) ```objectivec
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:azCnHost subscription:subscriptionKey];
+SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:@"azCnHost" subscription:subscriptionKey];
``` ***
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md
Previously updated : 06/14/2022 Last updated : 09/16/2022
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md
A Speech resource with a custom domain name and a private endpoint turned on use
> A Speech resource without private endpoints that uses a custom domain name also has a special way of interacting with Speech Services. > This way differs from the scenario of a Speech resource that uses a private endpoint. > This is important to consider because you may decide to remove private endpoints later.
-> See _Adjust an application to use a Speech resource without private endpoints_ later in this article.
+> See [Adjust an application to use a Speech resource without private endpoints](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints) later in this article.
### Speech resource with a custom domain name and a private endpoint: Usage with the REST APIs
The detailed description of the special endpoints and how their URL should be tr
Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the Text-to-speech REST API. Usage of the Speech-to-text REST API for short audio is fully equivalent. > [!NOTE]
-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
+> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
> > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
Follow these steps to modify your code:
1. Modify how you create the instance of `SpeechConfig`. Most likely, your application is using something like this: ```csharp
- var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion);
+ var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
``` This won't work for a private-endpoint-enabled Speech resource because of the host name and URL changes that we described in the previous sections. If you try to run your existing application without any modifications by using the key of a private-endpoint-enabled resource, you'll get an authentication error (401). To make it work, modify how you instantiate the `SpeechConfig` class and use "from endpoint"/"with endpoint" initialization. Suppose we have the following two variables defined:
- - `subscriptionKey` contains the key of the private-endpoint-enabled Speech resource.
+ - `speechKey` contains the key of the private-endpoint-enabled Speech resource.
- `endPoint` contains the full *modified* endpoint URL (using the type required by the corresponding programming language). In our example, this variable should contain: ``` wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
Follow these steps to modify your code:
Create a `SpeechConfig` instance: ```csharp
- var config = SpeechConfig.FromEndpoint(endPoint, subscriptionKey);
+ var config = SpeechConfig.FromEndpoint(endPoint, speechKey);
``` ```cpp
- auto config = SpeechConfig::FromEndpoint(endPoint, subscriptionKey);
+ auto config = SpeechConfig::FromEndpoint(endPoint, speechKey);
``` ```java
- SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, subscriptionKey);
+ SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, speechKey);
``` ```python import azure.cognitiveservices.speech as speechsdk
- speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=subscriptionKey)
+ speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=speechKey)
``` ```objectivec
- SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:subscriptionKey];
+ SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:speechKey];
``` > [!TIP]
Speech-to-text REST API v3.0 usage is fully equivalent to the case of [private-e
In this case, usage of the Speech-to-text REST API for short audio and usage of the Text-to-speech REST API have no differences from the general case, with one exception. (See the following note.) You should use both APIs as described in the [speech-to-text REST API for short audio](rest-speech-to-text-short.md) and [Text-to-speech REST API](rest-text-to-speech.md) documentation. > [!NOTE]
-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
+> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a Speech resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
> > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
However, if you try to run the same application after having all private endpoin
You need to roll back your application to the standard instantiation of `SpeechConfig` in the style of the following code: ```csharp
-var config = SpeechConfig.FromSubscription(subscriptionKey, azureRegion);
+var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
``` [!INCLUDE [](includes/speech-vnet-service-enpoints-private-endpoints-simultaneously.md)]
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Previously updated : 02/17/2022 Last updated : 09/16/2022
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Previously updated : 03/23/2022 Last updated : 09/16/2022 ms.devlang: cpp, csharp, java, javascript, objective-c, python
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
Previously updated : 06/13/2022 Last updated : 09/16/2022 keywords: speech translation
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
Previously updated : 01/16/2022 Last updated : 09/16/2022
This article assumes that you have working knowledge of the Command Prompt windo
[!INCLUDE [](includes/spx-setup.md)]
-## Create a subscription configuration
+## Create a resource configuration
# [Terminal](#tab/terminal)
-To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
+To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-To configure your subscription key and region identifier, run the following commands:
+To configure your resource key and region identifier, run the following commands:
```console
-spx config @key --set SUBSCRIPTION-KEY
-spx config @region --set REGION
+spx config @key --set SPEECH-KEY
+spx config @region --set SPEECH-REGION
``` The key and region are stored for future Speech CLI commands. To view the current configuration, run the following commands:
spx config @region --clear
# [PowerShell](#tab/powershell)
-To get started, you need an Azure subscription key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
+To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-To configure your subscription key and region identifier, run the following commands in PowerShell:
+To configure your Speech resource key and region identifier, run the following commands in PowerShell:
```powershell
-spx --% config @key --set SUBSCRIPTION-KEY
-spx --% config @region --set REGION
+spx --% config @key --set SPEECH-KEY
+spx --% config @region --set SPEECH-REGION
``` The key and region are stored for future SPX commands. To view the current configuration, run the following commands:
cognitive-services Spx Batch Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-batch-operations.md
Previously updated : 01/13/2021 Last updated : 09/16/2022
cognitive-services Spx Output Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-output-options.md
Previously updated : 05/01/2022 Last updated : 09/16/2022
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-overview.md
Previously updated : 01/16/2022 Last updated : 09/16/2022
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Previously updated : 06/13/2022 Last updated : 09/16/2022 keywords: text to speech
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/troubleshooting.md
This article provides information to help you solve issues you might encounter w
You might have the wrong endpoint for your region or service. Check the URI to make sure it's correct.
-Also, there might be a problem with your subscription key or authorization token. For more information, see the next section.
+Also, there might be a problem with your Speech resource key or authorization token. For more information, see the next section.
## Error: HTTP 403 Forbidden or HTTP 401 Unauthorized
This error often is caused by authentication issues. Connection requests without a valid `Ocp-Apim-Subscription-Key` or `Authorization` header are rejected with a status of 403 or 401.
-* If you're using a subscription key for authentication, you might see the error because:
+* If you're using a resource key for authentication, you might see the error because:
- - The subscription key is missing or invalid
- - You have exceeded your subscription's usage quota
+ - The key is missing or invalid
+ - You have exceeded your resource's usage quota
* If you're using an authorization token for authentication, you might see the error because:
  - The authorization token is invalid
  - The authorization token is expired
-### Validate your subscription key
+### Validate your resource key
-You can verify that you have a valid subscription key by running one of the following commands.
+You can verify that you have a valid resource key by running one of the following commands.
> [!NOTE]
-> Replace `YOUR_SUBSCRIPTION_KEY` and `YOUR_REGION` with your own subscription key and associated region.
+> Replace `YOUR_RESOURCE_KEY` and `YOUR_REGION` with your own resource key and associated region.
* PowerShell
You can verify that you have a valid subscription key by running one of the foll
$FetchTokenHeader = @{
  'Content-type'='application/x-www-form-urlencoded'
  'Content-Length'= '0'
- 'Ocp-Apim-Subscription-Key' = 'YOUR_SUBSCRIPTION_KEY'
+ 'Ocp-Apim-Subscription-Key' = 'YOUR_RESOURCE_KEY'
}
$OAuthToken = Invoke-RestMethod -Method POST -Uri https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken -Headers $FetchTokenHeader
$OAuthToken
You can verify that you have a valid subscription key by running one of the foll
* cURL ```
- curl -v -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY" -H "Content-type: application/x-www-form-urlencoded" -H "Content-Length: 0"
+ curl -v -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY" -H "Content-type: application/x-www-form-urlencoded" -H "Content-Length: 0"
```
-If you entered a valid subscription key, the command returns an authorization token, otherwise an error is returned.
+If you entered a valid resource key, the command returns an authorization token, otherwise an error is returned.
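The same check can be done from code; here's a minimal C# sketch using the endpoint and header shown in the commands above (region and key are placeholders).

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ValidateResourceKey
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_RESOURCE_KEY");

        // A valid key returns an authorization token; an invalid key returns an error response.
        var response = await client.PostAsync(
            "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken",
            new StringContent(string.Empty, Encoding.UTF8, "application/x-www-form-urlencoded"));

        Console.WriteLine(response.IsSuccessStatusCode
            ? await response.Content.ReadAsStringAsync()
            : $"Error: {(int)response.StatusCode} {response.ReasonPhrase}");
    }
}
```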
### Validate an authorization token
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
If you get an error message in your main app window, use this table to identify
| Message | What should you do? |
|---------|---------------------|
-|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct subscription key (or authorization token) and region name| On the **Settings** page of the app, make sure that you entered the subscription key and its region correctly. |
+|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct resource key (or authorization token) and region name| On the **Settings** page of the app, make sure that you entered the key and its region correctly. |
|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: We could not connect to the bot before sending a message | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1002. Error details: The server returned status code '503' when status code '101' was expected | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
-|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your subscription key doesn't support neural voices. See [neural voices](./regions.md#speech-service) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).|
+|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your resource key doesn't support neural voices. See [neural voices](./regions.md#speech-service) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).|
If the actions in the table don't address your problem, see [Voice assistants: Frequently asked questions](faq-voice-assistants.yml). If you still can't resolve your problem after following all the steps in this tutorial, please enter a new issue on the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues).
To learn more about what's returned in the JSON output, see the [fields in the a
### View client source code for calls to the Speech SDK
The Windows Voice Assistant Client uses the NuGet package [Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech/), which contains the Speech SDK. A good place to start reviewing the sample code is the method `InitSpeechConnector()` in the file [VoiceAssistantClient\MainWindow.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs), which creates these two Speech SDK objects:
-- [DialogServiceConfig](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconfig): For configuration settings like subscription key and its region.
+- [DialogServiceConfig](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconfig): For configuration settings like resource key and its region.
- [DialogServiceConnector](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.-ctor): To manage the channel connection and client subscription events for handling recognized speech and bot responses.
## Add custom keyword activation
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 07/28/2022 Last updated : 09/20/2022 recommendations: false ms.devlang: csharp, golang, java, javascript, python
The following headers are included with each Document Translation API request:
"inputs": [ { "source": {
- "sourceUrl": "https://myblob.blob.core.windows.net/source",
+ "sourceUrl": "https://myblob.blob.core.windows.net/source"
}, "targets": [ {
payload= {
"sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS", "storageSource": "AzureBlob", "language": "en"
- }
}, "targets": [ {
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
The core operation of the Translator service is translating text. In this quicks
> [!TIP] >
- > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module.
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module.
1. Open Visual Studio.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
-> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module.
+> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
After a successful call, you should see the following response:
> [!TIP] >
- > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module.
+ > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module.
1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`.
After a successful call, you should see the following response:
> [!TIP] >
- > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module.
+ > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module.
1. Open a terminal window and use pip to install the Requests library and uuid0 package:
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
To call the Translator service via the [REST API](reference/rest-api-guide.md),
> [!TIP] >
- > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module.
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module.
1. Open Visual Studio.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
-> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module.
+> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
- > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module.
+ > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module.
1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-text-app`.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
- > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module.
+ > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module.
1. Open a terminal window and use pip to install the Requests library and uuid0 package:
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
No, the autoscale feature is not available to free tier subscriptions.
- [Plan and Manage costs for Azure Cognitive Services](./plan-manage-costs.md).
- [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
cognitive-services Tutorial Visual Search Crop Area Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-crop-area-results.md
This image is cropped by creating an `ImageInfo` object from the crop area, and
```csharp CropArea CropArea = new CropArea(top: (float)0.01, bottom: (float)0.30, left: (float)0.01, right: (float)0.20);
-string imageURL = "https://docs.microsoft.com/azure/cognitive-services/bing-visual-search/media/ms_srleaders.jpg";
+string imageURL = "https://learn.microsoft.com/azure/cognitive-services/bing-visual-search/media/ms_srleaders.jpg";
ImageInfo imageInfo = new ImageInfo(cropArea: CropArea, url: imageURL); VisualSearchRequest visualSearchRequest = new VisualSearchRequest(imageInfo: imageInfo);
Getting the actual image URLs requires a cast that reads an `ActionType` as `Ima
> [Create a Visual Search single-page web app](tutorial-bing-visual-search-single-page-app.md) ## See also
-> [What is the Bing Visual Search API?](./overview.md)
+> [What is the Bing Visual Search API?](./overview.md)
cognitive-services Cognitive Services Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-environment-variables.md
class Program
# [C++](#tab/cpp)
-For more information, see <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>.
+For more information, see <a href="/cpp/c-runtime-library/reference/getenv-s-wgetenv-s" target="_blank">`getenv_s`</a> and <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv`</a>.
```cpp
+#include <iostream>
+#include <memory>
+#include <string>
#include <stdlib.h>
+std::string getEnvironmentVariable(const char* name);
+ int main() { // Get the named env var, and assign it to the value variable
- auto value =
- getenv("ENVIRONMENT_VARIABLE_KEY");
+ auto value = getEnvironmentVariable("ENVIRONMENT_VARIABLE_KEY");
+}
+
+std::string getEnvironmentVariable(const char* name)
+{
+#if defined(_MSC_VER)
+ size_t requiredSize = 0;
+ (void)getenv_s(&requiredSize, nullptr, 0, name);
+ if (requiredSize == 0)
+ {
+ return "";
+ }
+ auto buffer = std::make_unique<char[]>(requiredSize);
+ (void)getenv_s(&requiredSize, buffer.get(), requiredSize, name);
+ return buffer.get();
+#else
+ auto value = getenv(name);
+ return value ? value : "";
+#endif
} ```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
After you've had a chance to get started with the Language service, try our tuto
* [Extract key phrases from text stored in Power BI](key-phrase-extraction/tutorials/integrate-power-bi.md) * [Use Power Automate to sort information in Microsoft Excel](named-entity-recognition/tutorials/extract-excel-information.md)
-* [Use Flask to translate text, analyze sentiment, and synthesize speech](/learn/modules/python-flask-build-ai-web-app/)
+* [Use Flask to translate text, analyze sentiment, and synthesize speech](/training/modules/python-flask-build-ai-web-app/)
* [Use Cognitive Services in canvas apps](/powerapps/maker/canvas-apps/cognitive-services-api?context=/azure/cognitive-services/language-service/context/context) * [Create a FAQ Bot](question-answering/tutorials/bot-service.md)
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applicat
"value": [ { "displayName": "source1",
- "sourceUri": "https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview",
+ "sourceUri": "https://learn.microsoft.com/azure/cognitive-services/qnamaker/overview/overview",
"sourceKind": "url", "lastUpdatedDateTime": "2021-05-01T15:13:22Z" },
curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applic
"op": "add", "value":{ "id": 1,
- "answer": "The latest question answering docs are on https://docs.microsoft.com",
+ "answer": "The latest question answering docs are on https://learn.microsoft.com",
"source": "source5", "questions": [ "Where do I find docs for question answering?"
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Text Analytics for health extracts and labels relevant medical information from
[!INCLUDE [Text Analytics for health](includes/features.md)]
-> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
## Get started with Text analytics for health
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 08/25/2022 Last updated : 09/19/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## September 2022
+* [Conversational language understanding](./conversational-language-understanding/overview.md) is available in the following regions:
+ * Central India
+ * Switzerland North
+ * West US 2
* Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German, Italian, Portuguese, and Hebrew. These languages are available when using a Docker container to deploy the API service. - * The Azure.AI.TextAnalytics client library v5.2.0 is generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md). This release includes the following updates:
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
JSON objects can include nested JSON objects and simple property/values. An arra
} ```
+## Inference Explainability
+Personalizer can help you understand which features are the most and least influential when determining the best action. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
+Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
+
+Setting the IsInferenceExplainabilityEnabled flag in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the entry "IsInferenceExplainabilityEnabled": true. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
+
+```JSON
+{
+ "rewardWaitTime": "PT10M",
+ "defaultReward": 0,
+ "rewardAggregation": "earliest",
+ "explorationPercentage": 0.2,
+ "modelExportFrequency": "PT5M",
+ "logMirrorEnabled": true,
+ "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature",
+ "logRetentionDays": 7,
+ "lastConfigurationEditDate": "0001-01-01T00:00:00Z",
+ "learningMode": "Online",
+ "isAutoOptimizationEnabled": true,
+ "autoOptimizationFrequency": "P7D",
+ "autoOptimizationStartDate": "2019-01-19T00:00:00Z",
+"isInferenceExplainabilityEnabled": true
+}
+```
+
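To make the update step concrete, here is a minimal Python sketch that fetches the current configuration, adds the flag, and writes the configuration back. The endpoint path, API version, and verb are assumptions based on the Service Configuration APIs linked above; verify them against that reference, and replace the placeholder endpoint and key.

```python
# Minimal sketch: enable inference explainability on a Personalizer resource.
# The configurations/service route and api-version are assumptions taken from
# the linked Service Configuration APIs; verify them before relying on this.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"  # placeholder
CONFIG_URL = f"{ENDPOINT}/personalizer/v1.1-preview.1/configurations/service"
HEADERS = {"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"}

# Fetch the current configuration so no existing settings are lost.
config = requests.get(CONFIG_URL, headers=HEADERS).json()

# Add the flag and send the full configuration back.
config["isInferenceExplainabilityEnabled"] = True
response = requests.put(CONFIG_URL, headers=HEADERS, json=config)
response.raise_for_status()
print("Enabled:", response.json().get("isInferenceExplainabilityEnabled"))
```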
+### How to interpret feature scores?
+Enabling inference explainability will add a collection to the JSON response from the Rank API called *inferenceExplanation*. This contains a list of feature names and values that were submitted in the Rank request, along with feature scores learned by Personalizer's underlying model. The feature scores provide insight into how influential each feature was in the model choosing the action.
+
+```JSON
+
+{
+ "ranking": [
+ {
+ "id": "EntertainmentArticle",
+ "probability": 0.8
+ },
+ {
+ "id": "SportsArticle",
+ "probability": 0
+ },
+ {
+ "id": "NewsArticle",
+ "probability": 0.2
+ }
+ ],
+ "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
+ "rewardActionId": "EntertainmentArticle",
+ "inferenceExplanation": [
+ {
+ "idΓÇ¥: "EntertainmentArticle",
+ "features": [
+ {
+ "name": "user.profileType",
+ "score": 3.0
+ },
+ {
+ "name": "user.latLong",
+ "score": -4.3
+ },
+ {
+ "name": "user.profileType^user.latLong",
+ "score" : 12.1
+ },
+ ]
+ ]
+}
+```
+
+Recall that Personalizer will either return the _best action_ as determined by the model or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take; therefore, **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](https://docs.microsoft.com/azure/cognitive-services/personalizer/concepts-exploration).
+
+For the best actions returned by Personalizer, the feature scores can provide general insight where:
+* Larger positive scores provide more support for the model choosing the best action.
+* Larger negative scores provide more support for the model not choosing the best action.
+* Scores close to zero have a small effect on the decision to choose the best action.
+
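As a hedged illustration of reading these scores, the sketch below walks a Rank response shaped like the JSON above and prints the features for the returned best action, ordered by the magnitude of their scores. The `rank_response` variable and `summarize_explanation` helper are hypothetical names introduced here for illustration.

```python
# Sketch: inspect feature scores from a Rank response that has
# inferenceExplanation enabled. `rank_response` is assumed to be the parsed
# JSON body shown above (a plain dict), not a call to any specific SDK.

def summarize_explanation(rank_response: dict) -> None:
    best_action = rank_response.get("rewardActionId")
    for entry in rank_response.get("inferenceExplanation", []):
        if entry.get("id") != best_action:
            continue
        # Sort by absolute score so the most influential features come first.
        features = sorted(entry.get("features", []),
                          key=lambda f: abs(f.get("score", 0.0)),
                          reverse=True)
        print(f"Feature scores for best action '{best_action}':")
        for feature in features:
            direction = "supports" if feature["score"] > 0 else "works against"
            print(f"  {feature['name']}: {feature['score']:+.1f} "
                  f"({direction} choosing this action)")

# Example usage with the response shown above:
# summarize_explanation(rank_response)
```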
+### Important considerations for Inference Explainability
+* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements. Future versions of Inference Explainability will mitigate this issue.
+* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
++ ## Next steps [Reinforcement learning](concepts-reinforcement-learning.md)
cognitive-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-reinforcement-learning.md
The current version of Personalizer uses **contextual bandits**, an approach to
The _decision memory_, the model that has been trained to capture the best possible decision, given a context, uses a set of linear models. These have repeatedly shown business results and are a proven approach, partially because they can learn from the real world very rapidly without needing multi-pass training, and partially because they can complement supervised learning models and deep neural network models.
-The explore/exploit traffic allocation is made randomly following the percentage set for exploration, and the default algorithm for exploration is epsilon-greedy.
+The explore / best action traffic allocation is made randomly following the percentage set for exploration, and the default algorithm for exploration is epsilon-greedy.
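For readers unfamiliar with epsilon-greedy, the following generic sketch shows the idea behind this traffic split; it is an illustration of the algorithm in general, not Personalizer's internal implementation.

```python
# Generic epsilon-greedy selection sketch, not Personalizer's internal code.
import random

def epsilon_greedy(actions, best_action, exploration_percentage=0.2):
    """With probability epsilon, explore a random action; otherwise exploit."""
    if random.random() < exploration_percentage:
        return random.choice(actions)   # explore: any candidate action
    return best_action                  # exploit: the model's current best action

# Example: roughly 20% of traffic explores among three articles.
chosen = epsilon_greedy(["EntertainmentArticle", "SportsArticle", "NewsArticle"],
                        best_action="EntertainmentArticle")
```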
### History of Contextual Bandits
Personalizer currently uses [Vowpal Wabbit](https://github.com/VowpalWabbit/vowp
## Next steps
-[Offline evaluation](concepts-offline-evaluation.md)
+[Offline evaluation](concepts-offline-evaluation.md)
cognitive-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-guidance-integration.md
When you get ready to integrate and responsibly use AI-powered products or featu
- **User Study**: Any consent or disclosure recommendations should be framed in a user study. Evaluate the first and continuous-use experience with a representative sample of the community to validate that the design choices lead to effective disclosure. Conduct user research with 10-20 community members (affected stakeholders) to evaluate their comprehension of the information and to determine if their expectations are met. -- **Transparency**: Consider providing users with information about how the content was personalized. For example, you can give your users a button labeled Why These Suggestions? that shows which top features of the user and actions played a role in producing the Personalizer results.
+- **Transparency & Explainability:** Consider enabling and using Personalizer's [inference explainability](https://learn.microsoft.com/azure/cognitive-services/personalizer/concepts-features?branch=main#inference-explainability) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases.
- **Adversarial use**: consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
cognitive-services Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/plan-manage-costs.md
You can also [export your cost data](../cost-management-billing/costs/tutorial-e
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
communication-services Identity Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/identity-model.md
The properties of an access token are:
* Expiration. * Scopes.
-An access token is always valid for 24 hours. After it expires, the access token is invalidated and can't be used to access any primitive.
+An access token is valid for a period of time between 1 and 24 hours. After it expires, the access token is invalidated and can't be used to access any primitive.
+To generate a token with a custom validity, specify the desired validity period when generating the token. If no custom validity is specified, the token will be valid for 24 hours.
+We recommend using short-lived tokens for one-off meetings and longer-lived tokens for agents that use the application for extended periods.
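As a sketch of how a custom validity period might be requested with the Python identity SDK, assuming the `token_expires_in` parameter exposed by the custom token expiration feature (confirm the exact parameter name and allowed range for your SDK version):

```python
# Hedged sketch: issue a short-lived chat token with the Python identity SDK.
# token_expires_in is assumed from the custom token expiration feature;
# the connection string below is a placeholder.
from datetime import timedelta
from azure.communication.identity import (
    CommunicationIdentityClient,
    CommunicationTokenScope,
)

client = CommunicationIdentityClient.from_connection_string("<connection-string>")
user = client.create_user()

# One-off meeting: request the minimum allowed lifetime (1 hour).
short_lived = client.get_token(user, scopes=[CommunicationTokenScope.CHAT],
                               token_expires_in=timedelta(hours=1))

# Long-running agent: omit the parameter to get the 24-hour default.
default_lifetime = client.get_token(user, scopes=[CommunicationTokenScope.CHAT])
print(short_lived.expires_on, default_lifetime.expires_on)
```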
An identity needs a way to request a new access token from a server-side service. The *scope* parameter defines a nonempty set of primitives that can be used. Azure Communication Services supports the following scopes for access tokens.
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Use typing indicators | ✔️ | | | Read receipt | ❌ | | | File sharing | ❌ |
-| | Reply to chat message | ❌ |
+| | Reply to specific chat message | ❌ |
| | React to chat message | ❌ | | Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ |
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/overview.md
You can create an identity and access token for Teams external users on Azure po
With a valid identity, access token, and Teams meeting URL, you can use [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/story/composites-call-with-chat-jointeamsmeeting--join-teams-meeting) to join Teams meeting without any code.
->[!VIDEO https://www.youtube.com/embed/chMHVHLFcao]
+>[!VIDEO https://www.youtube.com/embed/FF1LS516Bjw]
### Single-click deployment
The following table show supported use cases for Teams external user with Azure
Any licensed Teams users can schedule Teams meetings and share the invite with external users. External users can join the Teams meeting experience via existing Teams desktop, mobile, and web clients without additional charge. External users joining via Azure Communication Services SDKs will pay [standard Azure Communication Services consumption](https://azure.microsoft.com/pricing/details/communication-services/) for audio, video, and chat. There's no additional fee for the interoperability capability itself. - ## Next steps - [Authenticate as Teams external user](../../../quickstarts/identity/access-token-teams-external-users.md)
communication-services Subscribe Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md
Copy the following code snippet and paste into source file: **Program.cs**
using Azure.Storage.Queues; using Azure.Messaging.EventGrid;
-// For more detailed tutorials on storage queues, see: https://docs.microsoft.com/azure/storage/queues/storage-tutorial-queues
+// For more detailed tutorials on storage queues, see: https://learn.microsoft.com/azure/storage/queues/storage-tutorial-queues
var queueClient = new QueueClient("<Storage Account Connection String>", "router-events");
communication-services Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/learn-modules.md
If you're looking for more guided experiences that teach you how to use Azure Communication Services then we have several Learn modules at your disposal. These modules provide a more structured experience of learning by providing a step by step guide to learning particular topics. Check them out, we'd love to know what you think. -- [Introduction to Communication Services](/learn/modules/intro-azure-communication-services/)-- [Send an SMS message from a C# console application with Azure Communication Services](/learn/modules/communication-service-send-sms-console-app/)-- [Create a voice calling web app with Azure Communication Services](/learn/modules/communication-services-voice-calling-web-app)
+- [Introduction to Communication Services](/training/modules/intro-azure-communication-services/)
+- [Send an SMS message from a C# console application with Azure Communication Services](/training/modules/communication-service-send-sms-console-app/)
+- [Create a voice calling web app with Azure Communication Services](/training/modules/communication-services-voice-calling-web-app)
communication-services File Sharing Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md
const uploadFileToAzureBlob = async (fileUpload: FileUploadManager) => {
const fileExtension = file.name.split('.').pop(); // Following is an example of calling an Azure Function to handle file upload
- // The https://docs.microsoft.com/en-us/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload
+ // The https://learn.microsoft.com/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload
// tutorial uses 'username' parameter to specify the storage container name. // Note that the container in the tutorial is private by default. To get default downloads working in // this sample, you need to change the container's access level to Public via Azure Portal.
connectors Connect Common Data Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md
For technical information based on the connector's Swagger description, such as
* A [Dataverse Data Service environment and database](/power-platform/admin/environments-overview), which is a space where your organization stores, manages, and shares business data in a Dataverse database. For more information, review the following resources:
- * [Learn: Create and manage Dataverse environments](/learn/modules/create-manage-environments/)
+ * [Learn: Create and manage Dataverse environments](/training/modules/create-manage-environments/)
* [Power Platform - Environments overview](/power-platform/admin/environments-overview)
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
QUEUE_CONNECTION_STRING=`az storage account show-connection-string -g $RESOURCE_
# [Azure PowerShell](#tab/azure-powershell)
-Here we use Azure CLI as there isn't an equivalent PowerShell cmdlet to get the connection string for the storage account queue.
- ```azurepowershell $QueueConnectionString = (Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAcctName).Context.ConnectionString ```
-<!--
-
- $QueueConnectionString = (az storage account show-connection-string -g $ResourceGroupName --name $StorageAcctName --query connectionString --out json) -replace '"',''
>
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
As you begin to design the network around your container app, refer to [Plan vir
:::image type="content" source="media/networking/azure-container-apps-virtual-network.png" alt-text="Diagram of how Azure Container Apps environments use an existing V NET, or you can provide your own."::: <!--
-https://docs.microsoft.com/azure/azure-functions/functions-networking-options
+https://learn.microsoft.com/azure/azure-functions/functions-networking-options
https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-container-apps-virtual-network-integration/ba-p/3096932 -->
When you deploy an internal or an external environment into your own network, a
## Next steps - [Deploy with an external environment](vnet-custom.md)-- [Deploy with an internal environment](vnet-custom-internal.md)
+- [Deploy with an internal environment](vnet-custom-internal.md)
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
az containerapp revision show \
# [PowerShell](#tab/powershell) ```azurecli
+az containerapp revision show `
--name <APPLICATION_NAME> ` --revision <REVISION_NAME> ` --resource-group <RESOURCE_GROUP_NAME>
az containerapp revision activate \
# [PowerShell](#tab/powershell)
-```poweshell
+```azurecli
az containerapp revision activate ` --revision <REVISION_NAME> ` --resource-group <RESOURCE_GROUP_NAME>
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/account-databases-containers-items.md
Azure Cosmos items support the following operations. You can use any of the Azur
Learn how to manage your Azure Cosmos account and other concepts:
-* To learn more, see the [Azure Cosmos DB SQL API](/learn/modules/intro-to-azure-cosmos-db-core-api/) training module.
+* To learn more, see the [Azure Cosmos DB SQL API](/training/modules/intro-to-azure-cosmos-db-core-api/) training module.
* [How-to manage your Azure Cosmos DB account](how-to-manage-database-account.md) * [Global distribution](distribute-data-globally.md) * [Consistency levels](consistency-levels.md)
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
To learn more, see the following docs:
* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md)
-* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
+* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/training/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
* [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md)
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Based on your workload, you must choose the API that fits your requirement. The
## Core(SQL) API
-This API stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on SQL API accounts. Azure Cosmos DB SQL API accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB SQL API](/learn/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](sql-query-getting-started.md) article.
+This API stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on SQL API accounts. Azure Cosmos DB SQL API accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB SQL API](/training/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](sql-query-getting-started.md) article.
If you are migrating from other databases such as Oracle, DynamoDB, or HBase, and if you want to use modernized technologies to build your apps, SQL API is the recommended option. SQL API supports analytics and offers performance isolation between operational and analytical workloads.
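As a small illustration of the SQL-over-JSON querying mentioned above, the following Python sketch runs a parameterized SQL query with the `azure-cosmos` client library; the account endpoint, key, database, and container names are placeholders.

```python
# Hedged sketch: query JSON items with SQL syntax using the azure-cosmos SDK.
# Endpoint, key, database, and container names below are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(url="https://<account>.documents.azure.com:443/",
                      credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# SQL syntax over JSON documents: filter and project properties.
query = "SELECT c.id, c.category FROM c WHERE c.category = @category"
items = container.query_items(
    query=query,
    parameters=[{"name": "@category", "value": "personal"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)
```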
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
Azure Synapse Link is available for Azure Cosmos DB SQL API or for Azure Cosmos
* [Query the analytical store using Azure Synapse serverless SQL pool](#query-analytical-store-sql-on-demand) * [Use Azure Synapse serverless SQL pool to analyze and visualize data in Power BI](#analyze-with-powerbi)
-You can also checkout the training module on how to [configure Azure Synapse Link for Azure Cosmos DB.](/learn/modules/configure-azure-synapse-link-with-azure-cosmos-db/)
+You can also check out the training module on how to [configure Azure Synapse Link for Azure Cosmos DB](/training/modules/configure-azure-synapse-link-with-azure-cosmos-db/).
## <a id="enable-synapse-link"></a>Enable Azure Synapse Link for Azure Cosmos DB accounts
You can find samples to get started with Azure Synapse Link on [GitHub](https://
To learn more, see the following docs:
-* Checkout the training module on how to [configure Azure Synapse Link for Azure Cosmos DB.](/learn/modules/configure-azure-synapse-link-with-azure-cosmos-db/)
+* Check out the training module on how to [configure Azure Synapse Link for Azure Cosmos DB](/training/modules/configure-azure-synapse-link-with-azure-cosmos-db/).
* [Azure Cosmos DB analytical store overview.](analytical-store-introduction.md)
cosmos-db How To Provision Throughput Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-provision-throughput-mongodb.md
mongoClient = new MongoClient(mongoClientSettings);
mongoDatabase = mongoClient.GetDatabase("testdb"); // Change the collection name, throughput value then update via MongoDB extension commands
-// https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-custom-commands#update-collection
+// https://learn.microsoft.com/azure/cosmos-db/mongodb-custom-commands#update-collection
var result = mongoDatabase.RunCommand<BsonDocument>(@"{customAction: ""UpdateCollection"", collection: ""testcollection"", offerThroughput: 400}"); ```
See the following articles to learn about throughput provisioning in Azure Cosmo
* [Request units and throughput in Azure Cosmos DB](../request-units.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Some things to consider when selecting the *item ID* as the partition key includ
* Learn about [global distribution in Azure Cosmos DB](distribute-data-globally.md). * Learn how to [provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md). * Learn how to [provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).
-* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/learn/modules/model-partition-data-azure-cosmos-db/)
+* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/plan-manage-costs.md
See the following articles to learn more on how pricing works in Azure Cosmos DB
* Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). * Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). * Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-* Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+* Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-processor.md
You can share the lease container across multiple [deployment units](#deployment
The change feed processor can be hosted in any platform that supports long running processes or tasks:
-* A continuous running [Azure WebJob](/learn/modules/run-web-app-background-task-with-webjobs/).
+* A continuous running [Azure WebJob](/training/modules/run-web-app-background-task-with-webjobs/).
* A process in an [Azure Virtual Machine](/azure/architecture/best-practices/background-jobs#azure-virtual-machines). * A background job in [Azure Kubernetes Service](/azure/architecture/best-practices/background-jobs#azure-kubernetes-service). * A serverless function in [Azure Functions](/azure/architecture/best-practices/background-jobs#azure-functions).
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
Watch this video for a complete walkthrough of the content in this article.
-> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
## Prerequisites
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-query-items.md
The [Container.GetItemLinqQueryable<>](/dotnet/api/microsoft.azure.cosmos.contai
Now that you've queried multiple items, try one of our end-to-end tutorials with the SQL API. > [!div class="nextstepaction"]
-> [Build an app that queries and adds data to Azure Cosmos DB SQL API](/learn/modules/build-dotnet-app-cosmos-db-sql-api/)
+> [Build an app that queries and adds data to Azure Cosmos DB SQL API](/training/modules/build-dotnet-app-cosmos-db-sql-api/)
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-sink.md
curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file>
## Confirm data written to Cosmos DB
-Sign into the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account. Check that the three records from the "hotels" topic are created in your account.
+Sign into the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. Check that the three records from the "hotels" topic are created in your account.
## Cleanup
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-source.md
curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file>
## Insert document into Azure Cosmos DB
-1. Sign into the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account.
+1. Sign into the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account.
1. Open the **Data Explore** tab and select **Databases** 1. Open the "kafkaconnect" database and "kafka" container you created earlier. 1. To create a new JSON document, in the SQL API pane, expand "kafka" container, select **Items**, then select **New Item** in the toolbar.
The Azure Cosmos DB source connector converts JSON document to schema and suppor
## Next steps
-* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
+* Kafka Connect for Azure Cosmos DB [sink connector](kafka-connector-sink.md)
cosmos-db Migrate Relational To Cosmos Db Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-relational-to-cosmos-db-sql-api.md
def writeOrder(orderid):
df = spark.read.json(sc.parallelize([orderjsondata])) #write the dataframe (this will be a single order record with merged many-to-one order details) to cosmos db using spark the connector
- #https://docs.microsoft.com/azure/cosmos-db/spark-connector
+ #https://learn.microsoft.com/azure/cosmos-db/spark-connector
df.write.format("com.microsoft.azure.cosmosdb.spark").mode("append").options(**writeConfig).save() ```
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Just as there's no single way to represent a piece of data on a screen, there's
* To learn how to model and partition data on Azure Cosmos DB using a real-world example, refer to [ Data Modeling and Partitioning - a Real-World Example](how-to-model-partition-example.md).
-* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/learn/modules/model-partition-data-azure-cosmos-db/)
+* See the training module on how to [Model and partition your data in Azure Cosmos DB.](/training/modules/model-partition-data-azure-cosmos-db/)
* Configure and use [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md).
cosmos-db Troubleshoot Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-not-found.md
string containerRid = selfLinkSegments[3];
Container containerByRid = this.cosmosClient.GetContainer(databaseRid, containerRid); // Invalid characters are listed here.
-// https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.resource.id#remarks
+// https://learn.microsoft.com/dotnet/api/microsoft.azure.documents.resource.id#remarks
FeedIterator<JObject> invalidItemsIterator = this.Container.GetItemQueryIterator<JObject>( @"select * from t where CONTAINS(t.id, ""/"") or CONTAINS(t.id, ""#"") or CONTAINS(t.id, ""?"") or CONTAINS(t.id, ""\\"") "); while (invalidItemsIterator.HasMoreResults)
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
To learn more, see the following docs:
* [Azure Cosmos DB analytical store overview](analytical-store-introduction.md)
-* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
+* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/training/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
* [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md)
cosmos-db How To Use Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-php.md
catch(ServiceException $e){
$error_message = $e->getMessage(); // Handle exception based on error codes and messages. // Error codes and messages can be found here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
} ```
try{
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); }
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Table-Service-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
tags: billing
Previously updated : 07/22/2022 Last updated : 09/20/2022
Later in this article, you'll give permission to the Azure AD app to act by usin
| Role | Actions allowed | Role definition ID | | | | |
-| EnrollmentReader | Can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e |
+| EnrollmentReader | Enrollment readers can view data at the enrollment, department, and account scopes. The data contains charges for all of the subscriptions under the scopes, including across tenants. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e |
| EA purchaser | Purchase reservation orders and view reservation transactions. It has all the permissions of EnrollmentReader, which will in turn have all the permissions of DepartmentReader. It can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | da6647fb-7651-49ee-be91-c43c4877f0c4 | | DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a | | SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 |
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
tags: billing
Previously updated : 03/11/2022 Last updated : 09/20/2022
In the Azure portal, you can change your default payment method to a new credit
- For a Microsoft Online Service Program (pay-as-you-go) account, you must be an [Account Administrator](add-change-subscription-administrator.md#whoisaa). - For a Microsoft Customer Agreement, you must have the correct [MCA permissions](understand-mca-roles.md) to make these changes.
-If you want to delete a credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
+If you want to delete a credit card, see [Delete an Azure billing payment method](delete-azure-payment-method.md).
The supported payment methods for Microsoft Azure are credit cards, debit cards, and check wire transfer. To get approved to pay by check wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md). >[!NOTE]
+> Azure doesn't support virtual or prepaid cards.
> Credit and debit cards are accepted in most countries or regions. > - Hong Kong and Brazil only support credit cards. > - India supports debit and credit cards through Visa and Mastercard.
cost-management-billing Prepay Hana Large Instances Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity.md
armclient post /providers/Microsoft.Capacity/calculatePrice?api-version=2019-04-
'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111', 'term': 'P1Y', 'quantity': '1',
- 'billingplan': 'Monthly'
+ 'billingplan': 'Monthly',
'displayName': 'testreservation_S224om', 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111'], 'appliedScopeType': 'Single',
The following example response resembles what you get returned. Note the value y
### Make your purchase
-Make your purchase using the returned `quoteId` and the `reservationOrderId` that you got from the preceding [Get the reservation order and price](#get-the-reservation-order-and-price) section.
+Make your purchase using the returned `reservationOrderId` that you got from the preceding [Get the reservation order and price](#get-the-reservation-order-and-price) section.
Here's an example request:
armclient put /providers/Microsoft.Capacity/reservationOrders/22222222-2222-2222
'billingScopeId': '/subscriptions/11111111-1111-1111-111111111111', 'term': 'P1Y', 'quantity': '1',
- 'billingplan': 'Monthly'
+ 'billingplan': 'Monthly',
'displayName': ' testreservation_S224om', 'appliedScopes': ['/subscriptions/11111111-1111-1111-111111111111/resourcegroups/123'], 'appliedScopeType': 'Single', 'instanceFlexibility': 'NotSupported',
- 'renew': true,
- 'quoteId': 'd0fd3a890795'
+ 'renew': true
} }" ```
location. You can also go to https://aka.ms/corequotaincrease to learn about quo
## Next steps - Learn about [How to call Azure REST APIs with Postman and cURL](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman).-- See [SKUs for SAP HANA on Azure (Large Instances)](../../virtual-machines/workloads/sap/hana-available-skus.md) for the available SKU list and regions.
+- See [SKUs for SAP HANA on Azure (Large Instances)](../../virtual-machines/workloads/sap/hana-available-skus.md) for the available SKU list and regions.
cost-management-billing Save Compute Costs Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md
For more information, see [Self-service exchanges and refunds for Azure Reservat
- **Azure Cache for Redis** - Only the compute costs are included with a reservation. A reservation doesn't cover networking or storage charges associated with the Redis cache instances. - **Azure Dedicated Host** - Only the compute costs are included with the Dedicated host. - **Azure Disk Storage reservations** - A reservation only covers premium SSDs of P30 size or greater. It doesn't cover any other disk types or sizes smaller than P30.
+- **Azure Backup Storage reserved capacity** - A capacity reservation lowers storage costs of backup data in a Recovery Services Vault.
Software plans:
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 08/26/2022 Last updated : 09/19/2022 # Troubleshoot CI-CD, Azure DevOps, and GitHub issues in Azure Data Factory and Synapse Analytics
CI/CD release pipeline failing with the following error:
2020-07-06T09:50:50.8771655Z ##[error]Details: 2020-07-06T09:50:50.8772837Z ##[error]DataFactoryPropertyUpdateNotSupported: Updating property type is not supported. 2020-07-06T09:50:50.8774148Z ##[error]DataFactoryPropertyUpdateNotSupported: Updating property type is not supported.
-2020-07-06T09:50:50.8775530Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment#troubleshooting
+2020-07-06T09:50:50.8775530Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment#troubleshooting
2020-07-06T09:50:50.8776801Z ##[error]Task failed while creating or updating the template deployment. ```
If you are using old default parameterization template, new way to include globa
Default parameterization template should include all values from global parameter list. #### Resolution
-Use updated [default parameterization template.](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list. You also have to update the deployment task in the **release pipeline** if you are already overriding the template parameters there.
+* Use the updated [default parameterization template](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as a one-time migration to the new method of including global parameters. This template references all values in the global parameter list. You also have to update the deployment task in the **release pipeline** if you're already overriding the template parameters there.
+* Update the template parameter names in your CI/CD pipeline if you're already overriding the template parameters (for global parameters).
### Error code: InvalidTemplate
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-simple-storage-service.md
Previously updated : 06/29/2022 Last updated : 09/01/2022 # Copy and transform data in Amazon Simple Storage Service using Azure Data Factory or Azure Synapse Analytics
Format specific settings are located in the documentation for that format. For m
In source transformation, you can read from a container, folder, or individual file in Amazon S3. Use the **Source options** tab to manage how the files are read. **Wildcard paths:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern.
Wildcard examples:
First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you want to read. Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 08/24/2022 Last updated : 09/01/2022 # Copy and transform data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics
Format specific settings are located in the documentation for that format. For m
In source transformation, you can read from a container, folder, or individual file in Azure Blob Storage. Use the **Source options** tab to manage how the files are read. **Wildcard paths:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern.
Wildcard examples:
First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you want to read. Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 08/15/2022 Last updated : 09/01/2022 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
Format specific settings are located in the documentation for that format. For m
In the source transformation, you can read from a container, folder, or individual file in Azure Data Lake Storage Gen2. The **Source options** tab lets you manage how the files get read. **Wildcard path:** Using a wildcard pattern will instruct ADF to loop through each matching folder and file in a single Source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the + sign that appears when hovering over your existing wildcard pattern.
Wildcard examples:
First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you wish to read. Use the Partition Root Path setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that ADF will add the resolved partitions found in each of your folder levels.
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md
Previously updated : 07/04/2022 Last updated : 09/01/2022 # Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory or Azure Synapse Analytics
Format-specific settings are located in the documentation for that format. For m
In the source transformation, you can read from a container, folder, or individual file in Azure Data Lake Storage Gen1. The **Source options** tab lets you manage how the files get read. **Wildcard path:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single Source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the + sign that appears when hovering over your existing wildcard pattern.
Wildcard examples:
First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you wish to read. Use the Partition Root Path setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
By default, a data flow run will fail on the first error it gets. You can choose
**Report success on error:** If enabled, the data flow will be marked as a success even if error rows are found. ## Lookup activity properties
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Previously updated : 08/10/2022 Last updated : 09/02/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
+
+ Title: Transform data from an SAP ODP source with the SAP CDC connector in Azure Data Factory or Azure Synapse Analytics
+
+description: Learn how to transform data from an SAP ODP source to supported sink data stores by using mapping data flows in Azure Data Factory or Azure Synapse Analytics.
++++++ Last updated : 09/05/2022++
+# Transform data from an SAP ODP source using the SAP CDC connector in Azure Data Factory or Azure Synapse Analytics
++
+This article outlines how to use mapping data flows to transform data from an SAP ODP source by using the SAP CDC connector. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md). For an introduction to transforming data with Azure Data Factory and Azure Synapse Analytics, read [mapping data flow](concepts-data-flow-overview.md).
+
+>[!TIP]
+>To learn about the overall support for the SAP data integration scenario, see the [SAP data integration using Azure Data Factory whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf), which provides a detailed introduction to each SAP connector, a comparison between them, and guidance.
+
+## Supported capabilities
+
+This SAP CDC connector is supported for the following capabilities:
+
+| Supported capabilities|IR |
+|---|---|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+The SAP CDC connector uses the SAP ODP framework to extract data from SAP source systems. For an introduction to the architecture of the solution, read [Introduction and architecture to SAP change data capture (CDC)](sap-change-data-capture-introduction-architecture.md) in our [SAP knowledge center](industry-sap-overview.md).
+
+The SAP ODP framework is included in most SAP NetWeaver-based systems, including SAP ECC, SAP S/4HANA, SAP BW, SAP BW/4HANA, and SAP LT Replication Server (SLT), except for very old releases. For prerequisites and minimum required releases, see [Prerequisites and configuration](sap-change-data-capture-prerequisites-configuration.md#sap-system-requirements).
+
+The SAP CDC connector supports basic authentication or Secure Network Communications (SNC), if SNC is configured.
+
+## Prerequisites
+
+To use this SAP CDC connector, you need to:
+
+- Set up a self-hosted integration runtime (version 3.17 or later). For more information, see [Create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md).
+
+- Download the 64-bit [SAP Connector for Microsoft .NET 3.0](https://support.sap.com/en/product/connectors/msnet.html) from SAP's website, and install it on the self-hosted integration runtime machine. During installation, make sure you select the **Install Assemblies to GAC** option in the **Optional setup steps** window.
+
+ :::image type="content" source="./media/connector-sap-business-warehouse-open-hub/install-sap-dotnet-connector.png" alt-text="Screenshot showing installation of SAP Connector for .NET.":::
+
+- The SAP user that the SAP CDC connector uses must have the permissions described in [User Configuration](sap-change-data-capture-prerequisites-configuration.md#set-up-the-sap-user).
++
+## Get started
++
+## Create a linked service for the SAP CDC connector using UI
+
+Follow the steps described in [Prepare the SAP CDC linked service](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-a-linked-service) to create a linked service for the SAP CDC connector in the Azure portal UI.
+
+## Dataset properties
+
+To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md#set-up-the-source-dataset).
+
+## Transform data with the SAP CDC connector
+
+SAP CDC datasets can be used as a source in mapping data flows. Because the raw SAP ODP change feed is difficult to interpret and to apply correctly to a sink, mapping data flow handles this automatically by evaluating technical attributes provided by the ODP framework (for example, ODQ_CHANGEMODE). This lets you concentrate on the required transformation logic without having to deal with the internals of the SAP ODP change feed, such as the correct ordering of changes.
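To illustrate the kind of work the data flow does for you, here is a minimal Python sketch that replays a hypothetical change feed into a target keyed by the source object's key column. The attribute name and change-mode values are illustrative only; the actual technical attributes and their encoding come from the ODP framework, and mapping data flow evaluates them internally.

```python
# Hypothetical raw change feed: each row carries the key column plus a
# technical change-mode attribute (names and values are made up here).
change_feed = [
    {"CUSTOMERID": 1, "NAME": "Contoso",     "CHANGE_MODE": "C"},  # created
    {"CUSTOMERID": 2, "NAME": "Fabrikam",    "CHANGE_MODE": "C"},  # created
    {"CUSTOMERID": 1, "NAME": "Contoso Ltd", "CHANGE_MODE": "U"},  # updated
    {"CUSTOMERID": 2, "NAME": "Fabrikam",    "CHANGE_MODE": "D"},  # deleted
]

def apply_changes(feed, key):
    """Replay the feed in order so the target reflects the latest state per key."""
    target = {}
    for row in feed:
        if row["CHANGE_MODE"] == "D":
            target.pop(row[key], None)
        else:
            target[row[key]] = {k: v for k, v in row.items() if k != "CHANGE_MODE"}
    return target

print(apply_changes(change_feed, "CUSTOMERID"))
# {1: {'CUSTOMERID': 1, 'NAME': 'Contoso Ltd'}}
```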
+
+### Mapping data flow properties
+
+To create a mapping data flow using the SAP CDC connector as a source, complete the following steps:
+
+1. In ADF Studio, go to the **Data flows** section of the **Author** hub, select the **…** button to drop down the **Data flow actions** menu, and select the **New data flow** item. Turn on debug mode by using the **Data flow debug** button in the top bar of data flow canvas.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-data-flow-debug.png" alt-text="Screenshot of the data flow debug button in mapping data flow.":::
+
+1. In the mapping data flow editor, select **Add Source**.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-add-source.png" alt-text="Screenshot of add source in mapping data flow.":::
+
+1. On the **Source settings** tab, select a prepared SAP CDC dataset, or select the **New** button to create a new one. Alternatively, you can select **Inline** in the **Source type** property and continue without defining an explicit dataset.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-select-dataset.png" alt-text="Screenshot of the select dataset option in source settings of mapping data flow source.":::
+
+1. On the **Source options** tab, select **Full on every run** if you want to load full snapshots on every execution of your mapping data flow, or **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system. In the latter case, the first run of your pipeline performs a delta initialization: it returns a current full data snapshot and creates an ODP delta subscription in the source system, so that subsequent runs return only the incremental changes since the previous run. For incremental loads, you must specify the keys of the ODP source object in the **Key columns** property.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-run-mode.png" alt-text="Screenshot of the run mode property in source options of mapping data flow source.":::
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-key-columns.png" alt-text="Screenshot of the key columns selection in source options of mapping data flow source.":::
+
+1. For details on the **Projection**, **Optimize**, and **Inspect** tabs, see [mapping data flow](concepts-data-flow-overview.md).
data-factory Connector Troubleshoot Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-blob-storage.md
This article provides suggestions to troubleshoot common problems with the Azure
## Error code: FIPSModeIsNotSupport -- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://docs.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.`
+- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://learn.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.`
- **Cause**: The FIPS policy is enabled on the VM where the self-hosted integration runtime was installed.
data-factory Create Azure Ssis Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-powershell.md
$ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell|
$VnetId = "[your virtual network resource ID or leave it empty]" # REQUIRED if you use Azure SQL Database server configured with a private endpoint/IP firewall rule/virtual network service endpoint or Azure SQL Managed Instance that joins a virtual network to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR. We recommend Azure Resource Manager virtual network, because classic virtual network will be deprecated soon. $SubnetName = "[your subnet name or leave it empty]" # WARNING: Use the same subnet as the one used for Azure SQL Database server configured with a virtual network service endpoint or a different subnet from the one used for Azure SQL Managed Instance that joins a virtual network $SubnetId = $VnetId + '/subnets/' + $SubnetName
-# Virtual network injection method: Standard or Express. For comparison, see https://docs.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.
+# Virtual network injection method: Standard or Express. For comparison, see https://learn.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.
$VnetInjectionMethod = "Standard" # Standard by default, whereas Express lets you use the express virtual network injection method # Public IP address info: OPTIONAL to provide two standard static public IP addresses with DNS name under the same subscription and in the same region as your virtual network $FirstPublicIP = "[your first public IP address resource ID or leave it empty]"
$SSISDBServerEndpoint = "[your Azure SQL Database server name.database.windows.n
# Authentication info: SQL or Azure AD $SSISDBServerAdminUserName = "[your server admin username for SQL authentication or leave it empty for Azure AD authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication or leave it empty for Azure AD authentication]"
-# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server.
+# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server.
$SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for Azure SQL Database server or leave it empty for managed instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access
$ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell|
$VnetId = "[your virtual network resource ID or leave it empty]" # REQUIRED if you use Azure SQL Database server configured with a private endpoint/IP firewall rule/virtual network service endpoint or Azure SQL Managed Instance that joins a virtual network to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR. We recommend Azure Resource Manager virtual network, because classic virtual network will be deprecated soon. $SubnetName = "[your subnet name or leave it empty]" # WARNING: Use the same subnet as the one used for Azure SQL Database server configured with a virtual network service endpoint or a different subnet from the one used for Azure SQL Managed Instance that joins a virtual network $SubnetId = $VnetId + '/subnets/' + $SubnetName
-# Virtual network injection method: Standard or Express. For comparison, see https://docs.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.
+# Virtual network injection method: Standard or Express. For comparison, see https://learn.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.
$VnetInjectionMethod = "Standard" # Standard by default, whereas Express lets you use the express virtual network injection method # Public IP address info: OPTIONAL to provide two standard static public IP addresses with DNS name under the same subscription and in the same region as your virtual network $FirstPublicIP = "[your first public IP address resource ID or leave it empty]"
$SSISDBServerEndpoint = "[your Azure SQL Database server name.database.windows.n
# Authentication info: SQL or Azure AD $SSISDBServerAdminUserName = "[your server admin username for SQL authentication or leave it empty for Azure AD authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication or leave it empty for Azure AD authentication]"
-# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server.
+# For the basic pricing tier, specify "Basic," not "B." For standard, premium, and elastic pool tiers, specify "S0," "S1," "S2," "S3," etc. See https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server.
$SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for Azure SQL Database server or leave it empty for managed instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
You can reuse an existing self-hosted integration runtime infrastructure that yo
To see an introduction and demonstration of this feature, watch the following 12-minute video:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
### Terminology
data-factory Data Flow Pivot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-pivot.md
In the section labeled **Value**, you can enter specific row values to be pivote
For each unique pivot key value that becomes a column, generate an aggregated row value for each group. You can create multiple columns per pivot key. Each pivot column must contain at least one [aggregate function](data-flow-aggregate-functions.md).
-**Column name pattern:** Select how to format the column name of each pivot column. The outputted column name will be a combination of the pivot key value, column prefix and optional prefix, suffice, middle characters.
+**Column name pattern:** Select how to format the column name of each pivot column. The output column name is a combination of the pivot key value, the column prefix, and any optional prefix, suffix, or middle characters.
**Column arrangement:** If you generate more than one pivot column per pivot key, choose how you want the columns to be ordered.
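As a rough analogy for how pivot key values become aggregated columns, here is a small pandas sketch. The data, column names, and naming pattern are made up; this is not the data flow implementation, only an illustration of the pivot idea.

```python
import pandas as pd

# Hypothetical input: one row per (region, quarter) with a sales figure.
df = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "sales":   [100, 150, 80, 120],
})

# Each unique pivot key value (Q1, Q2) becomes a column, with one aggregated
# value per group, which is why each pivot column needs an aggregate function.
pivoted = pd.pivot_table(df, index="region", columns="quarter",
                         values="sales", aggfunc="sum")

# A simple column name pattern: prefix + pivot key value.
pivoted.columns = [f"sales_{key}" for key in pivoted.columns]
print(pivoted)
```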
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/iterative-development-debugging.md
Azure Data Factory and Synapse Analytics support iterative development and debu
For an eight-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player]
## Debugging a pipeline
data-factory Join Azure Ssis Integration Runtime Virtual Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-powershell.md
$AzureSSISName = "[your Azure-SSIS IR name]"
$VnetId = "[your virtual network resource ID or leave it empty]" # REQUIRED if you use Azure SQL Database server configured with a private endpoint/IP firewall rule/virtual network service endpoint or Azure SQL Managed Instance that joins a virtual network to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR. We recommend Azure Resource Manager virtual network, because classic virtual network will be deprecated soon. $SubnetName = "[your subnet name or leave it empty]" # WARNING: Use the same subnet as the one used for Azure SQL Database server configured with a virtual network service endpoint or a different subnet from the one used for Azure SQL Managed Instance that joins a virtual network $SubnetId = $VnetId + '/subnets/' + $SubnetName
-# Virtual network injection method: Standard or Express. For comparison, see https://docs.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.
+# Virtual network injection method: Standard or Express. For comparison, see https://learn.microsoft.com/azure/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.
$VnetInjectionMethod = "Standard" # Standard by default, whereas Express lets you use the express virtual network injection method # Public IP address info: OPTIONAL to provide two standard static public IP addresses with DNS name under the same subscription and in the same region as your virtual network $FirstPublicIP = "[your first public IP address resource ID or leave it empty]"
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-visually.md
You can raise alerts on supported metrics in Data Factory. Select **Monitor** >
For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player]
+> [!VIDEO https://learn.microsoft.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player]
### Create alerts
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
You can use the UI in the Azure portal or a programming interface to parameteriz
For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player]
+> [!VIDEO https://learn.microsoft.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player]
## Supported linked service types
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
You can also [export your cost data](../cost-management-billing/costs/tutorial-e
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
- [Azure Data Factory pricing page](https://azure.microsoft.com/pricing/details/data-factory/ssis/) - [Understanding Azure Data Factory through examples](./pricing-concepts.md) - [Azure Data Factory pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=data-factory)
data-factory Quickstart Create Data Factory Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-portal.md
This quickstart describes how to use the Azure Data Factory UI to create and mon
### Video Watching this video helps you understand the Data Factory UI:
->[!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player]
+>[!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player]
## Create a data factory
data-factory Sap Change Data Capture Data Partitioning Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-partitioning-template.md
- Title: Auto-generate a pipeline by using the SAP data partitioning template-
-description: Learn how to use the SAP data partitioning template for SAP change data capture (CDC) (preview) extraction in Azure Data Factory.
---- Previously updated : 06/01/2022---
-# Auto-generate a pipeline by using the SAP data partitioning template
--
-Learn how to use the SAP data partitioning template to auto-generate a pipeline as part of your SAP change data capture (CDC) solution (preview). Then, use the pipeline in Azure Data Factory to partition SAP CDC extracted data.
-
-## Create a data partitioning pipeline from a template
-
-To auto-generate an Azure Data Factory pipeline by using the SAP data partitioning template:
-
-1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **Pipeline from template**.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-pipeline-from-template.png" alt-text="Screenshot of the Azure Data Factory resources tab, with Pipeline from template highlighted.":::
-
-1. Select the **Partition SAP data to extract and load into Azure Data Lake Store Gen2 in parallel** template, and then select **Continue**.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-selection.png" alt-text="Screenshot of the template gallery, with the SAP data partitioning template highlighted.":::
-
-1. Create new or use existing [linked services](sap-change-data-capture-prepare-linked-service-source-dataset.md) for SAP ODP (preview), Azure Data Lake Storage Gen2, and Azure Synapse Analytics. Use the linked services as inputs in the SAP data partitioning template.
-
- Under **Inputs**, for the SAP ODP linked service, in **Connect via integration runtime**, select your self-hosted integration runtime. For the Data Lake Storage Gen2 linked service, in **Connect via integration runtime**, select **AutoResolveIntegrationRuntime**.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-configuration.png" alt-text="Screenshot of the SAP data partitioning template configuration page, with the Inputs section highlighted.":::
-
-1. Select **Use this template** to auto-generate an SAP data partitioning pipeline that can run multiple Data Factory copy activities to extract multiple partitions in parallel.
-
- Data Factory copy activities run on a self-hosted integration runtime to concurrently extract full raw data from your SAP system and load it into Data Lake Storage Gen2 as CSV files. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>\<your pipeline run timestamp\>* folder path. Be sure that **Extraction mode** for the Data Factory copy activity is set to **Full**.
-
- To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Azure Synapse Analytics instance in the same region.
-
-1. Assign your SAP data extraction context, data source object names, and an array of partitions. Define each element as an array of row selection conditions that serve as runtime parameter values for the SAP data partitioning pipeline.
-
- For the `selectionRangeList` parameter, enter your array of partitions. Define each partition as an array of row selection conditions. For example, here's an array of three partitions, where the first partition includes only rows where the value in the **CUSTOMERID** column is between **1** and **1000000** (the first million customers), the second partition includes only rows where the value in the **CUSTOMERID** column is between **1000001** and **2000000** (the second million customers), and the third partition includes the rest of the customers:
-
- `[[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1","high":"1000000"}],[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1000001","high":"2000000"}],[{"fieldName":"CUSTOMERID","sign":"E","option":"BT","low":"1","high":"2000000"}]]`
-
- The three partitions are extracted by using three Data Factory copy activities that run in parallel.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-partition-extraction-configuration.png" alt-text="Screenshot of the pipeline configuration for the SAP data partitioning template with the parameters section highlighted.":::
-
-1. Select **Save all** and run the SAP data partitioning pipeline.
-
-## Next steps
-
-[Auto-generate a pipeline by using the SAP data replication template](sap-change-data-capture-data-replication-template.md)
data-factory Sap Change Data Capture Data Replication Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-replication-template.md
- Title: Auto-generate a pipeline by using the SAP data replication template-
-description: Learn how to use the SAP data replication template for SAP change data capture (CDC) (preview) extraction in Azure Data Factory.
---- Previously updated : 06/01/2022---
-# Auto-generate a pipeline by using the SAP data replication template
--
-Learn how to use the SAP data replication template to auto-generate a pipeline as part of your SAP change data capture (CDC) solution (preview). Then, use the pipeline in Azure Data Factory for SAP CDC extraction in your datasets.
-
-## Create a data replication pipeline from a template
-
-To auto-generate an Azure Data Factory pipeline by using the SAP data replication template:
-
-1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **Pipeline from template**.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-pipeline.png" alt-text="Screenshot that shows creating a new pipeline in the Author hub.":::
-
-1. Select the **Replicate SAP data to Azure Synapse Analytics and persist raw data in Azure Data Lake Storage Gen2** template, and then select **Continue**.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template.png" alt-text="Screenshot of the template gallery, with the SAP data replication template highlighted.":::
-
-1. Create new or use existing [linked services](sap-change-data-capture-prepare-linked-service-source-dataset.md) for SAP ODP (preview), Azure Data Lake Storage Gen2, and Azure Synapse Analytics. Use the linked services as inputs in the SAP data replication template.
-
- Under **Inputs**, for the SAP ODP linked service, in **Connect via integration runtime**, select your self-hosted integration runtime. For the Data Lake Storage Gen2 and Azure Synapse Analytics linked services, in **Connect via integration runtime**, select **AutoResolveIntegrationRuntime**.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template-configuration.png" alt-text="Screenshot of the configuration page for the SAP data replication template.":::
-
-1. Select **Use this template** to auto-generate an SAP data replication pipeline that contains Azure Data Factory copy activities and data flow activities.
-
- The Data Factory copy activity runs on the self-hosted integration runtime to extract raw data (full and deltas) from the SAP system. The copy activity loads the raw data into Data Lake Storage Gen2 as a persisted CSV file. Historical changes are archived and preserved. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>\<your pipeline run timestamp\>* folder path. Be sure that **Extraction mode** for the Data Factory copy activity is set to **Delta**. The **Subscriber process** property of copy activity is parameterized.
-
- The Data Factory data flow activity runs on the Azure integration runtime to transform the raw data and merge all changes into Azure Synapse Analytics. The process replicates the SAP data.
-
- To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Azure Synapse Analytics instance in the same region.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-architecture.png" alt-text="Shows a diagram of the architecture of the SAP data replication scenario.":::
-
-1. Assign your SAP data extraction context, data source object, key column names, subscriber process names, and Synapse SQL schema and table names as runtime parameter values for the SAP data replication pipeline.
-
- For the `keyColumns` parameter, enter your key column names as an array of strings, like `[“CUSTOMERID”]/[“keyColumn1”, “keyColumn2”, “keyColumn3”, … ]`. Include up to 10 key column names. The Data Factory data flow activity uses key columns in raw SAP data to identify changed rows. A changed row is a row that is created, deleted, or changed.
-
- For the `subscriberProcess` parameter, enter a unique name for **Subscriber process** in the Data Factory copy activity. For example, you might name it `<your pipeline name>\<your copy activity name>`. You can rename it to start a new Operational Delta Queue subscription in SAP systems.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-pipeline-parameters.png" alt-text="Screenshot of the SAP data replication pipeline with the parameters section highlighted.":::
-
-1. Select **Save all** and run the SAP data replication pipeline.
-
-## Create a data delta replication pipeline from a template
-
-If you want to replicate SAP data to Data Lake Storage Gen2 in delta format, complete the steps that are detailed in the preceding section, but instead use the **Replicate SAP data to Azure Data Lake Store Gen2 in Delta format and persist raw data in CSV format** template.
-
-Like in the data replication template, in a data delta pipeline, the Data Factory copy activity runs on the self-hosted integration runtime to extract raw data (full and deltas) from the SAP system. The copy activity loads the raw data into Data Lake Storage Gen2 as a persisted CSV file. Historical changes are archived and preserved. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>\<your pipeline run timestamp\>* folder path. The **Extraction mode** property of the copy activity is set to **Delta**. The **Subscriber process** property of copy activity is parameterized.
-
-The Data Factory data flow activity runs on the Azure integration runtime to transform the raw data and merge all changes into Data Lake Storage Gen2 as an open source Delta Lake or Lakehouse table. The process replicates the SAP data.
-
-The table is stored in the *saptimetravel* container in the *\<your SAP table or object name\>* folder that has the *\*delta\*log* subfolder and Parquet files. You can [query the table by using an Azure Synapse Analytics serverless SQL pool](../synapse-analytics/sql/query-delta-lake-format.md). You also can use the Delta Lake Time Travel feature with an Azure Synapse Analytics serverless Apache Spark pool. For more information, see [Create a serverless Apache Spark pool in Azure Synapse Analytics by using web tools](../synapse-analytics/quickstart-apache-spark-notebook.md) and [Read older versions of data by using Time Travel](../synapse-analytics/spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
-
-To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Delta Lake or Lakehouse instances in the same region.
-
-## Next steps
-
-[Manage your SAP CDC solution](sap-change-data-capture-management.md)
data-factory Sap Change Data Capture Debug Shir Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-debug-shir-logs.md
Title: Debug copy activity in your SAP CDC solution (preview) by sending logs
+ Title: Debug SAP CDC connector (preview) by sending logs
-description: Learn how to debug issues with the Azure Data Factory copy activity for your SAP change data capture (CDC) solution (preview) by sending self-hosted integration runtime logs to Microsoft.
+description: Learn how to debug issues with the Azure Data Factory SAP CDC (change data capture) connector (preview) by sending self-hosted integration runtime logs to Microsoft.
Last updated 06/01/2022
-# Debug copy activity by sending self-hosted integration runtime logs
+# Debug the SAP CDC connector by sending self-hosted integration runtime logs
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-If you want Microsoft to debug Azure Data Factory copy activity issues in your SAP change data capture (CDC) solution (preview), send us your self-hosted integration runtime logs, and then contact us.
+If you want Microsoft to debug Azure Data Factory issues with your SAP CDC connector (preview), send us your self-hosted integration runtime logs, and then contact us.
## Send logs to Microsoft
After you've uploaded and sent your self-hosted integration runtime logs, contac
## Next steps
-[Auto-generate a pipeline by using the SAP data partitioning template](sap-change-data-capture-data-partitioning-template.md)
+[SAP CDC (Change Data Capture) Connector](connector-sap-change-data-capture.md)
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
Title: Overview and architecture of the SAP CDC solution (preview)
+ Title: Overview and architecture of the SAP CDC capabilities (preview)
-description: Learn about the SAP change data capture (CDC) solution (preview) in Azure Data Factory and understand its architecture.
+description: Learn about the SAP change data capture (CDC) capabilities (preview) in Azure Data Factory and understand its architecture.
Last updated 06/01/2022
-# Overview and architecture of the SAP CDC solution (preview)
+# Overview and architecture of the SAP CDC capabilities (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn about the SAP change data capture (CDC) solution (preview) in Azure Data Factory and understand its architecture.
+Learn about the SAP change data capture (CDC) capabilities (preview) in Azure Data Factory and understand the architecture.
Azure Data Factory is an ETL and ELT data integration platform as a service (PaaS). For SAP data integration, Data Factory currently offers six general availability connectors:
The SAP connectors in Data Factory extract SAP source data only in batches. Each
You can keep your copy of SAP data fresh and up-to-date by frequently extracting the full dataset, but this approach is expensive and inefficient. You also can use a manual, limited workaround to extract mostly new or updated records. In a process called *watermarking*, extraction requires a timestamp column or monotonically increasing values, and continuous tracking of the highest value since the last extraction. But some tables don't have a column that you can use for watermarking. This process also doesn't identify a deleted record as a change in the dataset.
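For context, the watermarking workaround typically looks like the following minimal Python/SQLite sketch; the table, column names, and timestamp values are hypothetical. Note that rows deleted from the source never appear in a query like this, which is exactly the limitation called out above.

```python
import sqlite3

# Minimal watermark pattern: remember the highest change timestamp extracted so
# far and pull only newer rows on each run. All names and values are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, changed_at TEXT)")
conn.execute("CREATE TABLE watermark (value TEXT)")
conn.execute("INSERT INTO watermark VALUES ('1970-01-01T00:00:00')")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2022-09-01T10:00:00"), (2, "2022-09-02T09:30:00")])

last = conn.execute("SELECT value FROM watermark").fetchone()[0]
rows = conn.execute(
    "SELECT id, changed_at FROM orders WHERE changed_at > ? ORDER BY changed_at",
    (last,),
).fetchall()

if rows:
    # Advance the watermark to the highest value seen in this run.
    conn.execute("UPDATE watermark SET value = ?", (rows[-1][1],))
print(rows)  # deleted source rows simply never show up here
```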
-## The SAP CDC solution
+## SAP CDC capabilities
-Microsoft customers indicate that they need a connector that can extract only the delta between two sets of data. In data, a *delta* is any change in a dataset that's the result of an update, insert, or deletion in the dataset. A delta extraction connector uses the [SAP change data capture (CDC) feature](https://help.sap.com/docs/SAP_DATA_SERVICES/ec06fadc50b64b6184f835e4f0e1f52f/1752bddf523c45f18ce305ac3bcd7e08.html?q=change%20data%20capture) that exists in most SAP systems to determine the delta in a dataset. The SAP CDC solution in Data Factory uses the SAP Operational Data Provisioning (ODP) framework to replicate the delta in an SAP source dataset.
+Microsoft customers indicate that they need a connector that can extract only the delta between two sets of data. In data, a *delta* is any change in a dataset that's the result of an update, insert, or deletion in the dataset. A delta extraction connector uses the [SAP change data capture (CDC) feature](https://help.sap.com/docs/SAP_DATA_SERVICES/ec06fadc50b64b6184f835e4f0e1f52f/1752bddf523c45f18ce305ac3bcd7e08.html?q=change%20data%20capture) that exists in most SAP systems to determine the delta in a dataset. The SAP CDC capabilities in Data Factory use the SAP Operational Data Provisioning (ODP) framework to replicate the delta in an SAP source dataset.
-This article provides a high-level architecture of the SAP CDC solution in Azure Data Factory. Get more information about the SAP CDC solution:
+This article provides a high-level architecture of the SAP CDC capabilities in Azure Data Factory. Get more information about the SAP CDC capabilities:
- [Prerequisites and setup](sap-change-data-capture-prerequisites-configuration.md) - [Set up a self-hosted integration runtime](sap-change-data-capture-shir-preparation.md) - [Set up a linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md)-- [Use the SAP data extraction template](sap-change-data-capture-data-replication-template.md)-- [Use the SAP data partition template](sap-change-data-capture-data-partitioning-template.md) - [Manage your solution](sap-change-data-capture-management.md)
-## How to use the SAP CDC solution
+## How to use the SAP CDC capabilities
-The SAP CDC solution is a connector that you access through an SAP ODP (preview) linked service, an SAP ODP (preview) source dataset, and the SAP data replication template or the SAP data partitioning template. Choose your template when you set up a new pipeline in Azure Data Factory Studio. To access preview templates, you must [enable the preview experience in Azure Data Factory Studio](how-to-manage-studio-preview-exp.md#how-to-enabledisable-preview-experience).
+At the core of the SAP CDC capabilities is the new SAP CDC connector (preview). It can connect to all SAP systems that support ODP. This includes SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. It doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC connector extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view.
-The SAP CDC solution connects to all SAP systems that support ODP, including SAP R/3, SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. The solution doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC solution extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view.
+Use the SAP CDC connector with Data Factory features like mapping data flow activities and tumbling window triggers for a low-latency SAP CDC replication solution in a self-managed pipeline.
-Use the SAP CDC solution with Data Factory features like copy activities and data flow activities, pipeline templates, and tumbling window triggers for a low-latency SAP CDC replication solution in a self-managed pipeline.
-
-## The SAP CDC solution architecture
+## The SAP CDC architecture
The SAP CDC solution in Azure Data Factory is a connector between SAP and Azure. The SAP side includes the SAP ODP connector that invokes the ODP API over standard Remote Function Call (RFC) modules to extract full and delta raw SAP data.
-The Azure side includes the Data Factory copy activity that loads the raw SAP data into a storage destination like Azure Blob Storage or Azure Data Lake Storage Gen2. The data is saved in CSV or Parquet format, essentially archiving or preserving all historical changes.
-
-The Azure side also might include a Data Factory data flow activity that transforms the raw SAP data, merges all changes, and loads the results in a destination like Azure SQL Database or Azure Synapse Analytics, essentially replicating the SAP data. The Data Factory data flow activity also can load the results in Data Lake Storage Gen2 in delta format. You can use the open source Delta Lake Time Travel feature to produce snapshots of SAP data for a specific period.
-
-In Azure Data Factory Studio, the SAP template that you use to auto-generate a Data Factory pipeline connects SAP with Azure. You can run the pipeline frequently by using a Data Factory tumbling window trigger to replicate SAP data in Azure with low latency and without using watermarking.
+The Azure side includes the Data Factory mapping data flow that can transform and load the SAP data into any data sink supported by mapping data flows. This includes storage destinations like Azure Data Lake Storage Gen2 or databases like Azure SQL Database or Azure Synapse Analytics. The Data Factory data flow activity also can load the results in Data Lake Storage Gen2 in delta format. You can use the Delta Lake Time Travel feature to produce snapshots of SAP data for a specific period. You can run your pipeline and mapping data flows frequently by using a Data Factory tumbling window trigger to replicate SAP data in Azure with low latency and without using watermarking.
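For example, once the data flow has written the SAP data in delta format, you can read a snapshot for a specific point in time with Delta Lake Time Travel. This is a minimal PySpark sketch; the storage path is hypothetical, and it assumes a Spark environment that is already configured for Delta Lake, such as a Synapse Apache Spark pool.

```python
from pyspark.sql import SparkSession

# Assumes the cluster is already configured for Delta Lake (true for Synapse
# Spark pools); the container and folder below are hypothetical.
spark = SparkSession.builder.getOrCreate()
path = "abfss://saptimetravel@<yourstorageaccount>.dfs.core.windows.net/CUSTOMERS"

# Latest state of the replicated SAP data.
latest = spark.read.format("delta").load(path)

# Snapshot as of an earlier point in time, using Delta Lake Time Travel.
snapshot = (spark.read.format("delta")
            .option("timestampAsOf", "2022-09-01")
            .load(path))
```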
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP CDC solution.":::
-To get started, create a Data Factory copy activity by using an SAP ODP linked service, an SAP ODP source dataset, and an SAP data replication template or SAP data partitioning template. The copy activity runs on a self-hosted integration runtime that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to the SLT. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime.
+To get started, create a Data Factory SAP CDC linked service, an SAP CDC source dataset, and a pipeline with a mapping data flow activity that uses the SAP CDC source dataset. To extract the data from SAP, you need a self-hosted integration runtime, which you install on an on-premises computer or on a virtual machine (VM) that has a line of sight to your SAP source systems and to your SLT server. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime.
-The SAP CDC solution uses ODP to extract various data source types, including:
+The SAP CDC connector uses the SAP ODP framework to extract various data source types, including:
-- SAP extractors, originally built to extract data from ECC and load it into BW-- ABAP CDS views, the new data extraction standard for S/4HANA-- InfoProviders and InfoObjects datasets in BW and BW/4HANA-- SAP application tables, when you use an SLT replication server as a proxy
+- SAP extractors, originally built to extract data from SAP ECC and load it into SAP BW
+- ABAP CDS views, the new data extraction standard for SAP S/4HANA
+- InfoProviders and InfoObjects datasets in SAP BW and SAP BW/4HANA
+- SAP application tables, when you use an SAP LT replication server (SLT) as a proxy
-In this process, the SAP data sources are *providers*. The providers run on SAP systems to produce either full or incremental data in an operational delta queue (ODQ). The Data Factory copy activity is a *subscriber* of the ODQ. The copy activity consumes the ODQ through the SAP CDC solution in the Data Factory pipeline.
+In this process, the SAP data sources are *providers*. The providers run on SAP systems to produce either full or incremental data in an operational delta queue (ODQ). The Data Factory mapping data flow source is a *subscriber* of the ODQ.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-shir-architecture-diagram.png" border="false" alt-text="Diagram of the architecture of the SAP ODP framework through a self-hosted integration runtime.":::
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
Title: Manage your SAP CDC solution (preview)
+ Title: Manage your SAP CDC (preview) ETL process
description: Learn how to manage your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
Last updated 06/01/2022
-# Manage your SAP CDC solution (preview)
+# Manage your SAP CDC (preview) ETL process
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-After you create a pipeline in Azure Data Factory as part of your SAP change data capture (CDC) solution (preview), it's important to manage the solution.
+After you create a pipeline in Azure Data Factory using the SAP CDC connector (preview), it's important to manage the solution.
## Run an SAP data replication pipeline on a recurring schedule
To run an SAP data replication pipeline on a recurring schedule with a specified
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-tumbling-window-trigger.png" alt-text="Screenshot of the Edit trigger window with values highlighted to configure the tumbling window trigger.":::
-## Recover a failed SAP data replication pipeline run
-
-If an SAP data replication pipeline run fails, a subsequent run that's scheduled via a tumbling window trigger is suspended while it waits on the dependency.
--
-To recover a failed SAP data replication pipeline run:
-
-1. Fix the issues that caused the pipeline run failure.
-
-1. Switch the **Extraction mode** property of the copy activity to **Recovery**.
-
-1. Manually run the SAP data replication pipeline.
-
-1. If the recovery run finishes successfully, change the **Extraction mode** property of the copy activity to **Delta**.
-
-1. Next to the failed run of the tumbling window trigger, select **Rerun**.
- ## Monitor data extractions on SAP systems To monitor data extractions on SAP systems:
To monitor data extractions on SAP systems:
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-logon-tool.png" alt-text="Screenshot of the SAP Logon Tool.":::
-1. In **Subscriber**, enter the value for the **Subscriber name** property of your SAP ODP (preview) linked service. In the **Request Selection** dropdown, select **All** to show all data extractions that use the linked service.
+1. In **Subscriber**, enter the value for the **Subscriber name** property of your SAP CDC (preview) linked service. In the **Request Selection** dropdown, select **All** to show all data extractions that use the linked service.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-monitor-delta-queues.png" alt-text="Screenshot of the SAP ODQMON tool with all data extractions for a specific subscriber.":::
- You can see all registered subscriber processes in the operational delta queue (ODQ). Subscriber processes represent data extractions from Azure Data Factory copy activities that use your SAP ODP linked service. For each ODQ subscription, you can look at details to see all full and delta extractions. For each extraction, you can see individual data packages that were consumed.
+ You can see all registered subscriber processes in the operational delta queue (ODQ). Subscriber processes represent data extractions from Azure Data Factory copy activities that use your SAP CDC linked service. For each ODQ subscription, you can look at details to see all full and delta extractions. For each extraction, you can see individual data packages that were consumed.
1. When Data Factory copy activities that extract SAP data are no longer needed, you should delete their ODQ subscriptions. When you delete ODQ subscriptions, SAP systems can stop tracking their subscription states and remove the unconsumed data packages from the ODQ. To delete an ODQ subscription, select the subscription and select the Delete icon.
To monitor data extractions on SAP systems:
## Troubleshoot delta changes
-The SAP CDC solution in Data Factory reads delta changes from the SAP ODP framework. The deltas are recorded in ODQ tables.
+The SAP CDC connector in Data Factory reads delta changes from the SAP ODP framework. The deltas are recorded in ODQ tables.
In scenarios in which data movement works (copy activities finish without errors), but data isn't delivered correctly (no data at all, or maybe just a subset of the expected data), you should first investigate whether the number of records provided on the SAP side matches the number of rows transferred by Data Factory. If they match, the issue isn't related to Data Factory, but probably comes from an incorrect or missing configuration on the SAP side.
Based on the timestamp in the first row, find the line that corresponds to the c
## Current limitations
-Here are current limitations of the SAP CDC solution in Data Factory:
+Here are current limitations of the SAP CDC connector in Data Factory:
- You can't reset or delete ODQ subscriptions in Data Factory. - You can't use SAP hierarchies with the solution.
data-factory Sap Change Data Capture Prepare Linked Service Source Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md
Title: Set up a linked service and dataset for the SAP CDC solution (preview)
+ Title: Set up a linked service and dataset for the SAP CDC connector (preview)
-description: Learn how to set up a linked service and source dataset to use with the SAP change data capture (CDC) solution (preview) in Azure Data Factory.
+description: Learn how to set up a linked service and source dataset to use with the SAP CDC (change data capture) connector (preview) in Azure Data Factory.
Last updated 06/01/2022
-# Set up a linked service and source dataset for your SAP CDC solution (preview)
+# Set up a linked service and source dataset for the SAP CDC connector (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn how to set up the linked service and source dataset for your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
+Learn how to set up the linked service and source dataset for the SAP CDC connector (preview) in Azure Data Factory.
## Set up a linked service
-To set up an SAP ODP (preview) linked service for your SAP CDC solution:
+To set up an SAP CDC (preview) linked service:
1. In Azure Data Factory Studio, go to the Manage hub of your data factory. In the menu under **Connections**, select **Linked services**. Select **New** to create a new linked service. :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-linked-service.png" alt-text="Screenshot of the Manage hub in Azure Data Factory Studio, with the New linked service button highlighted.":::
-1. In **New linked service**, search for **SAP**. Select **SAP ODP (Preview)**, and then select **Continue**.
+1. In **New linked service**, search for **SAP**. Select **SAP CDC (Preview)**, and then select **Continue**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection, with SAP ODP (Preview) selected.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection, with SAP CDC (Preview) selected.":::
1. Set the linked service properties. Many of the properties are similar to SAP Table linked service properties. For more information, see [Linked service properties](connector-sap-table.md?tabs=data-factory#linked-service-properties). 1. In **Name**, enter a unique name for the linked service. 1. In **Connect via integration runtime**, select your self-hosted integration runtime. 1. In **Server name**, enter the mapped server name for your SAP system.
- 1. In **Subscriber name**, enter a unique name to register and identify this Data Factory connection as a subscriber that consumes data packages that are produced in the Operational Delta Queue (ODQ) by your SAP system. For example, you might name it `<your data factory -name>_<your linked service name>`.
+ 1. In **Subscriber name**, enter a unique name to register and identify this Data Factory connection as a subscriber that consumes data packages that are produced in the Operational Delta Queue (ODQ) by your SAP system. For example, you might name it `<your data factory name>_<your linked service name>`. Make sure to use only uppercase letters.
- When you use delta extraction mode in SAP, the combination of subscriber name (maintained in the linked service) and subscriber process must be unique for every copy activity that reads from the same ODP source object. A unique name ensures that the ODP framework can distinguish between copy activities and provide the correct delta.
+ Make sure you assign a unique subscriber name to every linked service that connects to the same SAP system. A unique name makes monitoring and troubleshooting on the SAP side much easier.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-configuration.png" alt-text="Screenshot of the SAP ODP linked service configuration.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-configuration.png" alt-text="Screenshot of the SAP CDC linked service configuration.":::
1. Select **Test connection**, and then select **Create**.
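
If you prefer to manage the linked service as code, the following Azure PowerShell sketch creates it from a JSON definition. The resource names are placeholders, and the `typeProperties` shown (including `subscriberName`) are assumptions modeled on the similar SAP Table linked service; verify the exact schema in the SAP CDC connector reference before relying on it.

```powershell
# Minimal sketch, not the authoritative schema: create or update an SAP CDC linked service
# from a JSON definition file by using the Az.DataFactory module.
$definition = @'
{
  "name": "SapCdcLinkedService",
  "properties": {
    "type": "SapOdp",
    "connectVia": { "referenceName": "MySelfHostedIR", "type": "IntegrationRuntimeReference" },
    "typeProperties": {
      "server": "sapserver01",
      "systemNumber": "00",
      "clientId": "100",
      "userName": "SAPUSER",
      "password": { "type": "SecureString", "value": "<password>" },
      "subscriberName": "MYDATAFACTORY_SAPCDC"
    }
  }
}
'@
Set-Content -Path .\SapCdcLinkedService.json -Value $definition

# Publish the definition to the data factory (resource group, factory, and file names are placeholders).
Set-AzDataFactoryV2LinkedService -ResourceGroupName 'myresourcegroup' -DataFactoryName 'mydatafactory' `
  -Name 'SapCdcLinkedService' -DefinitionFile .\SapCdcLinkedService.json
```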
-## Create a copy activity
+## Set up the source dataset
-To create a Data Factory copy activity that uses an SAP ODP (preview) data source, complete the steps in the following sections.
-
-### Set up the source dataset
-
-1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **New pipeline**.
+1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Datasets** > **Dataset Actions**, select **New dataset**.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-pipeline.png" alt-text="Screenshot that shows creating a new pipeline in the Data Factory Studio Author hub.":::
-1. In **Activities**, select the **Move & transform** dropdown. Select the **Copy data** activity and drag it to the canvas of the new pipeline. Select the **Source** tab of the Data Factory copy activity, and then select **New** to create a new source dataset.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-data-source-new.png" alt-text="Screenshot of the Copy data activity Source configuration.":::
+1. In **New dataset**, search for **SAP**. Select **SAP CDC (Preview)**, and then select **Continue**.
-1. In **New dataset**, search for **SAP**. Select **SAP ODP (Preview)**, and then select **Continue**.
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP CDC (Preview) dataset type in the New dataset dialog.":::
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP ODP (Preview) dataset type in the New dataset dialog.":::
+1. In **Set properties**, enter a name for the SAP CDC linked service data source. In **Linked service**, select the dropdown and select **New**.
-1. In **Set properties**, enter a name for the SAP ODP linked service data source. In **Linked service**, select the dropdown and select **New**.
-
-1. Select your SAP ODP linked service for the new source dataset and set the rest of the properties for the linked service:
+1. Select your SAP CDC linked service for the new source dataset and set the rest of the properties for the linked service:
1. In **Connect via integration runtime**, select your self-hosted integration runtime.
- 1. In **Context**, select the context of the ODP data extraction. Here are some examples:
+ 1. In **ODP context**, select the context of the ODP data extraction. Here are some examples:
- To extract ABAP CDS views from S/4HANA, select **ABAP_CDS**. - To extract InfoProviders or InfoObjects from SAP BW or BW/4HANA, select **BW**.
To create a Data Factory copy activity that uses an SAP ODP (preview) data sourc
   If you want to extract SAP application tables, but you don't want to use SAP Landscape Transformation Replication Server (SLT) as a proxy, you can create SAP extractors by using the RSO2 transaction code or Core Data Services (CDS) views with the tables. Then, extract the tables directly from your SAP source systems by using either an **SAPI** or an **ABAP_CDS** context.
- 1. For **Object name**, under the selected data extraction context, select the name of the data source object to extract. If you connect to your SAP source system by using SLT as a proxy, the **Preview data** feature currently isn't supported.
+ 1. For **ODP name**, under the selected data extraction context, select the name of the data source object to extract. If you connect to your SAP source system by using SLT as a proxy, the **Preview data** feature currently isn't supported.
To enter the selections directly, select the **Edit** checkbox.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP ODP (Preview) dataset configuration page.":::
-
-1. Select **OK** to create your new SAP ODP source dataset.
-
-1. In the Data Factory copy activity, in **Extraction mode**, select one of the following options:
-
- - **Full**: Always extracts the current snapshot of the selected data source object. This option doesn't register the Data Factory copy activity as its delta subscriber that consumes data changes produced in the ODQ by your SAP system.
- - **Delta**: Initially extracts the current snapshot of the selected data source object. This option registers the Data Factory copy activity as its delta subscriber and then extracts new data changes produced in the ODQ by your SAP system since the last extraction.
- - **Recovery**: Repeats the last extraction that was part of a failed pipeline run.
-
-1. In **Subscriber process**, enter a unique name to register and identify this Data Factory copy activity as a delta subscriber of the selected data source object. Your SAP system manages its subscription state to keep track of data changes that are produced in the ODQ and consumed in consecutive extractions. You don't need to manually watermark data changes. For example, you might name the subscriber process `<your pipeline name>_<your copy activity name>`.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-configuration.png" alt-text="Screenshot of the SAP CDC source configuration in a Data Factory copy activity.":::
-
-1. If you want to extract data from only some columns or rows, you can use the column projection or row selection features:
-
- 1. In **Projection**, select **Refresh** to load the dropdown selections with column names of the selected data source object.
-
- If you want to include only a few columns in your data extraction, select the checkboxes for those columns. If you want to exclude only a few columns from your data extraction, select the **Select all** checkbox first, and then clear the checkboxes for columns you want to exclude. If no column is selected, all columns are extracted.
-
- To enter the selections directly, select the **Edit** checkbox.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-projection-configuration.png" alt-text="Screenshot of the SAP CDC source configuration with the Projection, Selection, and Additional columns sections highlighted.":::
-
- 1. In **Selection**, select **New** to add a new row selection condition that contains arguments.
-
- 1. In **Field name**, select **Refresh** to load the dropdown selections with column names of the selected data source object. You also can enter the column names manually.
- 1. In **Sign**, select **Inclusive** or **Exclusive** to include or exclude rows that meet the selection condition in your data extraction.
- 1. In **Option**, select **EQ**, **CP**, or **BT** to apply the following row selection conditions:
-
- - **EQ**: True if the value in the **Field name** column is equal to the value of the **Low** argument.
- - **CP**: True if the value in the **Field name** column contains a pattern that's specified in the value of the **Low** argument.
- - **BT**: True if the value in the **Field name** column is between the values of the **Low** and **High** arguments.
-
- To ensure that your row selection conditions can be applied to the selected data source object, see SAP documentation or support notes for the data source object.
-
- The following table shows example row selection conditions and their respective arguments:
-
- | Row selection condition | Field name | Sign | Option | Low | High |
- |||||||
- | Include only rows in which the value in the **COUNTRY** column is **CHINA** | **COUNTRY** | **Inclusive** | **EQ** | **CHINA** | |
- | Exclude only rows in which the value in the **COUNTRY** column is **CHINA** | **COUNTRY** | **Exclusive** | **EQ** | **CHINA** | |
- | Include only rows in which the value in the **FIRSTNAME** column contains the **JO\*** pattern | **FIRSTNAME** | **Inclusive** | **CP** | **JO\*** | |
- | Include only rows in which the value in the **CUSTOMERID** column is between **1** and **999999** | **CUSTOMERID** | **Inclusive** | **BT** | **1** | **999999** |
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-selection-additional-columns.png" alt-text="Screenshot of the SAP ODP source configuration for a copy activity with the Selection and Additional columns sections highlighted.":::
-
- Row selections are especially useful to divide large data sets into multiple partitions. You can extract each partition by using a single copy activity. You can perform full extractions by using multiple copy activities running in parallel. These copy activities in turn invoke parallel processes on your SAP system to produce separate data packages in the ODQ. Parallel processes in each copy activity can consume packages and increase throughput significantly.
-
-### Set up the source sink
--- In the Data Factory copy activity, select the **Sink** tab. Select an existing sink dataset or create a new one for a data store like Azure Blob Storage or Azure Data Lake Storage Gen2.-
- To increase throughput, you can enable the Data Factory copy activity to concurrently extract data packages that your SAP system produces in the ODQ. You can enforce all extraction processes to immediately write them to the sink in parallel. For example, if you use Data Lake Storage Gen2 as a sink, in **File path** for the sink dataset, leave **File name** empty. All extracted data packages will be written as separate files.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-staging-dataset.png" alt-text="Screenshot of the staging dataset configuration for the solution.":::
-
-### Configure copy activity settings
-
-1. To increase throughput, in the Data Factory copy activity, select the **Settings** tab. Set **Degree of copy parallelism** to concurrently extract data packages that your SAP system produces in the ODQ.
-
- If you use Azure Blob Storage or Data Lake Storage Gen2 as the sink, the maximum number of effective parallel extractions you can set is four or five per self-hosted integration runtime machine. You can install a self-hosted integration runtime as a cluster of up to four machines. For more information, see [High availability and scalability](create-self-hosted-integration-runtime.md?tabs=data-factory#high-availability-and-scalability).
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-settings-parallelism.png" alt-text="Screenshot of a Copy activity with the Degree of parallelism setting highlighted.":::
-
-1. To fine-tune parallel extractions, adjust the maximum size of data packages that are produced in the ODQ. The default size is 50 MB. 3 GB of an SAP table or object are extracted into 60 files of raw SAP data in Data Lake Storage Gen2. Lowering the maximum size to 15 MB might increase throughput, but more (200) files are produced. To lower the maximum size, in the pipeline navigation menu, select **Code**.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-configuration.png" alt-text="Screenshot of a pipeline with the Code configuration button highlighted.":::
-
- Then, in the JSON file, edit `maxPackageSize` to lower the maximum size.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-1.png" alt-text="Screenshot of the code configuration for a pipeline with the maxPackageSize setting highlighted.":::
-
-1. If you set **Extraction mode** in the Data Factory copy activity to **Delta**, your initial or subsequent extractions consume full data or new data changes produced in the ODQ by your SAP system since the last extraction.
-
- For each extraction, you can skip the actual data production, consumption, or transfer, and instead directly initialize or advance your delta subscription state. This option is especially useful if you want to perform full and delta extractions by using separate copy activities by using different partitions. To set up full and delta extractions by using separate copy activities with different partitions, in the pipeline navigation menu, select **Code**. In the JSON file, add the `deltaExtensionNoData` property and set it to `true`. To resume extracting data, remove that property or set it to `false`.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-2.png" alt-text="Screenshot of the code configuration for a pipeline with the deltaExtensionNoData property highlighted.":::
-
-1. Select **Save all**, and then select **Debug** to run your new pipeline that contains the Data Factory copy activity with the SAP ODP source dataset.
-
-To illustrate the results of full and delta extractions from consecutively running your new pipeline, here's an example of a simple table in SAP ECC:
--
-Here's the raw SAP data from an initial or full extraction in CSV format in Data Lake Storage Gen2:
--
-The file contains the system columns **ODQ_CHANGEMODE**, **ODQ_ENTITYCNTR**, and **SEQUENCENUMBER**. The Data Factory data flow activity uses these columns to merge data changes when it replicates SAP data.
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP CDC (Preview) dataset configuration page.":::
-The **ODQ_CHANGEMODE** column marks the type of change for each row or record: **C** (created), **U** (updated), or **D** (deleted). The initial run of your pipeline in *delta* extraction mode always induces a full load that marks all rows as **C** (created).
+1. Select **OK** to create your new SAP CDC source dataset.
-The following example shows the delta extraction in CSV format in Data Lake Storage Gen2 after three rows of the custom table in SAP ECC are created, updated, and deleted:
+## Set up a mapping data flow using the SAP CDC dataset as a source
+To set up a mapping data flow using the SAP CDC dataset as a source, follow [Transform data with the SAP CDC connector](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector)
## Next steps
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
Title: Prerequisites and setup for the SAP CDC solution (preview)
+ Title: Prerequisites and setup for the SAP CDC connector (preview)
-description: Learn about the prerequisites and setup for the SAP change data capture (CDC) solution (preview) in Azure Data Factory.
+description: Learn about the prerequisites and setup for the SAP CDC connector (preview) in Azure Data Factory.
Last updated 06/01/2022
-# Prerequisites and setup for the SAP CDC solution (preview)
+# Prerequisites and setup for the SAP CDC connector (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn about the prerequisites for the SAP change data capture (CDC) solution (preview) in Azure Data Factory and how to set up the solution in Azure Data Factory Studio.
+Learn about the prerequisites for the SAP CDC connector (preview) in Azure Data Factory and how to set up the solution in Azure Data Factory Studio.
## Prerequisites
-To preview the SAP CDC solution in Azure Data Factory, be able to complete these prerequisites:
+To preview the SAP CDC capabilities in Azure Data Factory, make sure you can complete these prerequisites:
- In Azure Data Factory Studio, [enable the preview experience](how-to-manage-studio-preview-exp.md#how-to-enabledisable-preview-experience). - Set up SAP systems to use the [SAP Operational Data Provisioning (ODP) framework](https://help.sap.com/docs/SAP_LANDSCAPE_TRANSFORMATION_REPLICATION_SERVER/007c373fcacb4003b990c6fac29a26e4/b6e26f56fbdec259e10000000a441470.html?q=SAP%20Operational%20Data%20Provisioning%20%28ODP%29%20framework).-- Be familiar with Data Factory concepts like integration runtimes, linked services, datasets, activities, data flows, pipelines, templates, and triggers.
+- Be familiar with Data Factory concepts like integration runtimes, linked services, datasets, activities, data flows, pipelines, and triggers.
- Set up a self-hosted integration runtime to use for the connector.-- Set up an SAP ODP (preview) linked service.-- Set up the Data Factory copy activity with an SAP ODP (preview) source dataset.
+- Set up an SAP CDC (preview) linked service.
+- Set up the Data Factory copy activity with an SAP CDC (preview) source dataset.
- Debug Data Factory copy activity issues by sending self-hosted integration runtime logs to Microsoft.-- Auto-generate a Data Factory pipeline by using the SAP data partitioning template.-- Auto-generate a Data Factory pipeline by using the SAP data replication template. - Be able to run an SAP data replication pipeline frequently. - Be able to recover a failed SAP data replication pipeline run. - Be familiar with monitoring data extractions on SAP systems.
To set up your SAP systems to use the SAP ODP framework, follow the guidelines t
### SAP system requirements
-The ODP framework is available by default in most recent software releases of most SAP systems, including SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. To ensure that your SAP systems have ODP, see the following SAP documentation or support notes. Even though the guidance primarily refers to SAP BW and SAP DS as subscribers or consumers in data extraction via ODP, the guidance also applies to Data Factory as a subscriber or consumer.
+The ODP framework is part of many SAP systems, including SAP ECC and SAP S/4HANA, and it's also included in SAP BW and SAP BW/4HANA. To confirm that your SAP release includes ODP, see the following SAP documentation or support notes. Even though the guidance primarily refers to SAP BW and SAP Data Services, the information also applies to Data Factory.
- To support ODP, run your SAP systems on SAP NetWeaver 7.0 SPS 24 or later. For more information, see [Transferring Data from SAP Source Systems via ODP (Extractors)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/327833022dcf42159a5bec552663dc51.html). - To support SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) full extractions via ODP, run your SAP systems on NetWeaver 7.4 SPS 08 or later. To support SAP ABAP CDS delta extractions, run your SAP systems on NetWeaver 7.5 SPS 05 or later. For more information, see [Transferring Data from SAP Systems via ODP (ABAP CDS Views)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/af11a5cb6d2e4d4f90d344f58fa0fb1d.html).
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
Title: Set up a self-hosted integration runtime for the SAP CDC solution (preview)
+ Title: Set up a self-hosted integration runtime for the SAP CDC connector (preview)
description: Learn how to create and set up a self-hosted integration runtime for your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
Last updated 06/01/2022
-# Set up a self-hosted integration runtime for the SAP CDC solution (preview)
+# Set up a self-hosted integration runtime for the SAP CDC connector (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Learn how to create and set up a self-hosted integration runtime for the SAP change data capture (CDC) solution (preview) in Azure Data Factory.
+Learn how to create and set up a self-hosted integration runtime for the SAP CDC connector (preview) in Azure Data Factory.
-To prepare a self-hosted integration runtime to use with the SAP ODP (preview) linked service and the SAP data extraction template or the SAP data partition template, complete the steps that are described in the following sections.
+To prepare a self-hosted integration runtime to use with the SAP CDC connector (preview), complete the steps that are described in the following sections.
## Create and set up a self-hosted integration runtime
data-factory Transform Data Databricks Jar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-jar.md
The Azure Databricks Jar Activity in a [pipeline](concepts-pipelines-activities.
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
## Add a Jar activity for Azure Databricks to a pipeline with UI
data-factory Transform Data Databricks Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-python.md
The Azure Databricks Python Activity in a [pipeline](concepts-pipelines-activiti
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
## Add a Python activity for Azure Databricks to a pipeline with UI
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-machine-learning-service.md
Run your Azure Machine Learning pipelines as a step in your Azure Data Factory a
The below video features a six-minute introduction and demonstration of this feature.
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player]
## Create a Machine Learning Execute Pipeline activity with UI
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-databricks-notebook.md
If you don't have an Azure subscription, create a [free account](https://azure.m
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player]
## Prerequisites
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md
In order to build a dependency chain and make sure that a trigger is executed on
For a demonstration on how to create dependent pipelines using tumbling window trigger, watch the following video:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player]
## Create a dependency in the UI
data-factory Tutorial Deploy Ssis Packages Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md
$ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell|
$SSISDBServerEndpoint = "[your server name.database.windows.net or managed instance name.public.DNS prefix.database.windows.net,3342 or leave it empty if you're not using SSISDB]" # WARNING: If you use SSISDB, please ensure that there is no existing SSISDB on your database server, so we can prepare and manage one on your behalf $SSISDBServerAdminUserName = "[your server admin username for SQL authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication]"
-# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server
+# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server
$SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for SQL Database or leave it empty for SQL Managed Instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access
$ExpressCustomSetup = "[RunCmdkey|SetEnvironmentVariable|InstallAzurePowerShell|
$SSISDBServerEndpoint = "[your server name.database.windows.net or managed instance name.public.DNS prefix.database.windows.net,3342 or leave it empty if you're not using SSISDB]" # WARNING: If you want to use SSISDB, ensure that there is no existing SSISDB on your database server, so we can prepare and manage one on your behalf $SSISDBServerAdminUserName = "[your server admin username for SQL authentication]" $SSISDBServerAdminPassword = "[your server admin password for SQL authentication]"
-# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://docs.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server
+# For the basic pricing tier, specify "Basic", not "B" - For standard/premium/elastic pool tiers, specify "S0", "S1", "S2", "S3", etc., see https://learn.microsoft.com/azure/sql-database/sql-database-resource-limits-database-server
$SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for SQL Database or leave it empty for SQL Managed Instance]" ### Self-hosted integration runtime info - This can be configured as a proxy for on-premises data access
In this tutorial, you learned how to:
To learn about customizing your Azure-SSIS Integration Runtime, see the following article: > [!div class="nextstepaction"]
->[Customize your Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md)
+>[Customize your Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md)
databox-online Azure Stack Edge Gpu Deploy Iot Edge Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md
Use these steps to verify that your IoT Edge runtime is running.
To troubleshoot your IoT Edge device configuration, see [Troubleshoot your IoT Edge device](../iot-edge/troubleshoot.md?view=iotedge-2020-11&tabs=linux&preserve-view=true).
- <!-- Cannot get the link to render properly for version at https://docs.microsoft.com/azure/iot-edge/troubleshoot?view=iotedge-2020-11 -->
+ <!-- Cannot get the link to render properly for version at https://learn.microsoft.com/azure/iot-edge/troubleshoot?view=iotedge-2020-11 -->
## Update the IoT Edge runtime
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Gpu Extension Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md
If the installation failed during the package download, that error indicates the
1. Enable compute on a port that's connected to the Internet. For guidance, see [Create GPU VMs](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms).
-1. Deallocate the VM by stopping the VM in the portal. To stop the VM, go to **Virtual machines** > **Overview**, and select the VM. Then, on the VM properties page, select **Stop**.<!--Follow-up (formatting): Create an include file for stopping a VM. Use it here and in prerequisites for "Use the Azure portal to manage network interfaces on the VMs" (https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal#prerequisites).-->
+1. Deallocate the VM by stopping the VM in the portal. To stop the VM, go to **Virtual machines** > **Overview**, and select the VM. Then, on the VM properties page, select **Stop**.<!--Follow-up (formatting): Create an include file for stopping a VM. Use it here and in prerequisites for "Use the Azure portal to manage network interfaces on the VMs" (https://learn.microsoft.com/azure/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal#prerequisites).-->
1. Create a new VM.
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
For frequently asked questions, see the [DDoS Protection FAQ](ddos-faq.yml).
## Next steps * [Quickstart: Create a DDoS Protection Plan](manage-ddos-protection.md)
-* [Learn module: Introduction to Azure DDoS Protection](/learn/modules/introduction-azure-ddos-protection/)
+* [Learn module: Introduction to Azure DDoS Protection](/training/modules/introduction-azure-ddos-protection/)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
description: A description of what's new and changed in Microsoft Defender for C
Previously updated : 08/31/2022 Last updated : 09/20/2022+ # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
- [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities) - [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent)
+- [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation)
### Suppress alerts based on Container and Kubernetes entities
FIM is now available in a new version based on Azure Monitor Agent (AMA), which
Learn more about [File Integrity Monitoring with the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md).
+### Legacy Assessments APIs deprecation
+
+The following APIs are deprecated:
+
+- Security Tasks
+- Security Statuses
+- Security Summaries
+
+These three APIs exposed old formats of assessments and are replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs is also available in the new APIs.
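+
+As a hedged illustration of moving to the replacement APIs, the sketch below lists assessments for a subscription by calling the Assessments REST API directly from Azure PowerShell. The subscription ID is a placeholder, and the api-version is an assumption; check the REST reference for the current version.
+
```powershell
# Sketch: list the first few security assessments for a subscription with the newer Assessments API.
$subscriptionId = '<subscription-id>'   # placeholder
$token = (Get-AzAccessToken -ResourceUrl 'https://management.azure.com/').Token
$uri = "https://management.azure.com/subscriptions/$subscriptionId/providers/Microsoft.Security/assessments?api-version=2020-01-01"
(Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }).value |
    Select-Object -First 5 -Property name, @{ n = 'displayName'; e = { $_.properties.displayName } }
```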
+ ## August 2022 Updates in August include:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 08/10/2022 Last updated : 09/20/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
|--|--| | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 | | [Removing security alerts for machines reporting to cross tenant Log Analytics workspaces](#removing-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces) | September 2022 |
-| [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) | September 2022 |
- ### Multiple changes to identity recommendations
With this change, alerts on machines connected to Log Analytics workspace in a d
If you want to continue receiving the alerts in Defender for Cloud, connect the Log Analytics agent of the relevant machines to the workspace in the same tenant as the machine.
-### Legacy Assessments APIs deprecation
-
-The following APIs are set to be deprecated:
--- Security Tasks-- Security Statuses-- Security Summaries-
-These three APIs exposed old formats of assessments and will be replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs will also be available in the new APIs.
- ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md)
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
In this article, you learned about creating Logic Apps, automating their executi
For related material, see: -- [The Learn module on how to use workflow automation to automate a security response](/learn/modules/resolve-threats-with-azure-security-center/)
+- [The Learn module on how to use workflow automation to automate a security response](/training/modules/resolve-threats-with-azure-security-center/)
- [Security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md) - [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [About Azure Logic Apps](../logic-apps/logic-apps-overview.md)
dev-box How To Manage Network Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-network-connection.md
+
+ Title: How to manage network connections
+
+description: This article describes how to create, delete, attach and remove Microsoft Dev Box network connections.
++++ Last updated : 04/15/2022+++
+<!-- Intent: As a dev infrastructure manager, I want to be able to manage network connections so that I can enable dev boxes to connect to my existing networks and deploy them in the desired region. -->
+# Manage network connections
+Network connections allow dev boxes to connect to existing virtual networks and determine the region into which dev boxes are deployed.
+
+When planning network connectivity for your dev boxes, you must:
+- Ensure you have sufficient permissions to create and configure network connections.
+- Ensure you have at least one virtual network (VNet) and subnet available for your dev boxes.
+- Identify the region or location closest to your dev box users. Deploying dev boxes in a region close to the users gives them a better experience.
+- Determine whether dev boxes should connect to your existing networks using an Azure Active Directory (Azure AD) join, or a Hybrid Azure AD join.
+## Permissions
+To manage a network connection, you need the following permissions:
+
+|Action|Permission required|
+|--|--|
+|Create and configure VNet and subnet|Network Contributor permissions on an existing virtual network (owner or contributor) or permission to create a new virtual network and subnet.|
+|Create or delete network connection|Owner or Contributor permissions on an Azure Subscription or a specific resource group.|
+|Add or remove network connection |Write permission on the dev center.|
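+
+For example, to grant a dev infrastructure manager the Network Contributor role on an existing VNet, you could use an Azure PowerShell sketch like the following; the user and resource names are placeholders.
+
```powershell
# Hedged sketch: assign Network Contributor on an existing virtual network.
$vnet = Get-AzVirtualNetwork -Name 'vnet-devbox' -ResourceGroupName 'rg-devbox-network'
New-AzRoleAssignment -SignInName 'devinfra-admin@contoso.com' `
  -RoleDefinitionName 'Network Contributor' -Scope $vnet.Id
```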
+
+## Create a virtual network and subnet
+To create a network connection, you need an existing VNet and subnet. If you don't have a VNet and subnet available, use the following steps to create them:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, enter *Virtual Network*, and then select **Virtual Network** from the search results.
+
+1. On the Virtual Network page, select **Create**.
+
+1. On the Create virtual network page, enter or select this information on the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your subscription. |
+ | Resource group | Select an existing resource group, or to create a new one: </br> Select **Create new**. </br> Enter *rg-name*. </br> Select **OK**. |
+ | Name | Enter *VNet-name*. |
+ | Region | Select the region for the VNet and dev boxes. |
+
+ :::image type="content" source="./media/how-to-manage-network-connection/example-basics-tab.png" alt-text="Screenshot of creating a virtual network in Azure portal." border="true":::
+
+ > [!Important]
+ > The region you select for the VNet is the region where the dev boxes will be deployed.
+
+1. On the **IP Addresses** tab, accept the default settings.
+
+1. On the **Security** tab, accept the default settings.
+
+1. On the **Review + create** tab, review the settings.
+
+1. Select **Create**.
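+
+If you prefer to script the VNet and subnet instead of using the portal, a minimal Azure PowerShell sketch follows; all names, the region, and the address ranges are placeholders you should adapt.
+
```powershell
# Hedged sketch: create a resource group, a subnet configuration, and a virtual network for dev boxes.
$rg = New-AzResourceGroup -Name 'rg-devbox-network' -Location 'westeurope'
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '10.4.0.0/24'
New-AzVirtualNetwork -Name 'vnet-devbox' -ResourceGroupName $rg.ResourceGroupName `
  -Location $rg.Location -AddressPrefix '10.4.0.0/16' -Subnet $subnet
```
+
+Remember that the location you pick here determines where dev boxes that use this network connection are deployed.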
+
+
+## Allow access to Dev Box endpoints from your network
+Network ingress and egress can be controlled using a firewall, network security groups, and even Microsoft Defender.
+
+If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).
+
+## Plan a network connection
+The following steps show you how to create and configure a network connection in Microsoft Dev Box.
+### Types of Azure Active Directory Join
+The Dev Box service requires a configured and working Azure AD join or hybrid Azure AD join, which defines how dev boxes join your domain and access resources.
+
+If your organization uses Azure AD, you can use an Azure AD join, sometimes called a native Azure AD join. Dev box users sign into Azure AD joined dev boxes using their Azure AD account and access resources based on the permissions assigned to that account. Azure AD join enables access to cloud-based and on-premises apps and resources.
+
+If your organization has an on-premises Active Directory implementation, you can still benefit from some of the functionality provided by Azure AD by using hybrid Azure AD joined dev boxes. These dev boxes are joined to your on-premises Active Directory and registered with Azure Active Directory. Hybrid Azure AD joined dev boxes require periodic network line of sight to your on-premises domain controllers; without that connection, they become unusable.
+
+You can learn more about each type of join and how to plan for them here:
+- [Plan your hybrid Azure Active Directory join deployment](/azure/active-directory/devices/hybrid-azuread-join-plan)
+- [Plan your Azure Active Directory join deployment](/azure/active-directory/devices/azureadjoin-plan)
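+
+To check which join type a provisioned dev box (or any Windows client) actually has, you can run the built-in `dsregcmd` tool from PowerShell; the filter below is only a convenience.
+
```powershell
# AzureAdJoined : YES                       -> Azure AD joined
# DomainJoined  : YES (and AzureAdJoined YES) -> hybrid Azure AD joined
dsregcmd /status | Select-String -Pattern 'AzureAdJoined|DomainJoined'
```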
+
+### Create a network connection
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *Network connections* and then select **Network connections** from the list.
+
+1. On the **Network Connections** page, select **+Create**.
+ :::image type="content" source="./media/how-to-manage-network-connection/network-connections-empty.png" alt-text="Screenshot showing the Network Connections page with Create highlighted.":::
+
+1. Follow the steps on the appropriate tab to create your network connection.
+ #### [**Azure AD join**](#tab/AzureADJoin/)
+
+ On the **Create a network connection** page, on the **Basics** tab, enter the following values:
+
+ |Name|Value|
+ |-|-|
+ |**Domain join type**|Select **Azure active directory join**.|
+ |**Subscription**|Select the subscription in which you want to create the network connection.|
+ |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.|
+ |**Name**|Enter a descriptive name for your network connection.|
+ |**Virtual network**|Select the virtual network you want the network connection to use.|
+ |**Subnet**|Select the subnet you want the network connection to use.|
+
+ :::image type="content" source="./media/how-to-manage-network-connection/create-native-network-connection-full-blank.png" alt-text="Screenshot showing the create network connection basics tab with Azure Active Directory join highlighted.":::
+
+ #### [**Hybrid Azure AD join**](#tab/HybridAzureADJoin/)
+
+ On the **Create a network connection** page, on the **Basics** tab, enter the following values:
+
+ |Name|Value|
+ |-|-|
+ |**Domain join type**|Select **Hybrid Azure active directory join**.|
+ |**Subscription**|Select the subscription in which you want to create the network connection.|
+ |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.|
+ |**Name**|Enter a descriptive name for your network connection.|
+ |**Virtual network**|Select the virtual network you want the network connection to use.|
+ |**Subnet**|Select the subnet you want the network connection to use.|
+ |**AD DNS domain name**| The DNS name of the Active Directory domain that you want to use for connecting and provisioning Cloud PCs. For example, corp.contoso.com. |
+ |**Organizational unit**| An organizational unit (OU) is a container within an Active Directory domain, which can hold users, groups, and computers. |
+ |**AD username UPN**| The username, in user principal name (UPN) format, that you want to use for connecting the Cloud PCs to your Active Directory domain. For example, svcDomainJoin@corp.contoso.com. This service account must have permission to join computers to the domain and, if set, the target OU. |
+ |**AD domain password**| The password for the user specified above. |
+
+ :::image type="content" source="./media/how-to-manage-network-connection/create-hybrid-network-connection-full-blank.png" alt-text="Screenshot showing the create network connection basics tab with Hybrid Azure Active Directory join highlighted.":::
+
+
+
+Use the following steps to finish creating your network connection, for both Azure AD join and Hybrid Azure AD join:
+ 1. Select **Review + Create**.
+
+ 1. On the **Review** tab, select **Create**.
+
+ 1. When the deployment is complete, select **Go to resource**. You'll see the Network Connection overview page.
+
+
+## Attach network connection to dev center
+You need to attach a network connection to a dev center before it can be used in projects to create dev box pools.
+
+1. In the [Azure portal](https://portal.azure.com), in the search box, type *Dev centers* and then select **Dev centers** from the list.
+
+1. Select the dev center you created and select **Networking**.
+
+1. Select **+ Add**.
+
+1. In the **Add network connection** pane, select the network connection you created earlier, and then select **Add**.
+
+ :::image type="content" source="./media/how-to-manage-network-connection/add-network-connection.png" alt-text="Screenshot showing the Add network connection pane.":::
+
+After creation, several health checks are run on the network. You can view the status of the checks on the resource overview page. Network connections that pass all the health checks can be added to a dev center and used in the creation of dev box pools. The dev boxes within the dev box pools will be created and domain joined in the location of the VNet assigned to the network connection.
++
+To resolve any errors, refer to the [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection).
++
+## Remove a network connection from a dev center
+You can remove a network connection from a dev center if you no longer want it to be used to connect to network resources. Network connections can't be removed if they are in use by one or more dev box pools.
+
+1. In the [Azure portal](https://portal.azure.com), in the search box, type *Dev centers* and then select **Dev centers** from the list.
+
+1. Select the dev center you created and select **Networking**.
+
+1. Select the network connection you want to remove and then select **Remove**.
+
+ :::image type="content" source="./media/how-to-manage-network-connection/remove-network-connection.png" alt-text="Screenshot showing the network connection page with Remove highlighted.":::
+
+1. Read the warning message, and then select **Ok**.
+
+The network connection will no longer be available for use in the dev center.
+
+## Next steps
+
+<!-- [Manage a dev center](./how-to-manage-dev-center.md) -->
+- [Quickstart: Configure a Microsoft Dev Box Project](./quickstart-configure-dev-box-project.md)
devops-project Azure Devops Project Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-sql-database.md
To learn more about the CI/CD pipeline, see:
## Videos
-> [!VIDEO https://docs.microsoft.com/Events/Build/2018/BRK3308/player]
+> [!VIDEO https://learn.microsoft.com/Events/Build/2018/BRK3308/player]
devops-project Retirement And Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/retirement-and-migration.md
+
+ Title: Retirement of DevOps Starter for Azure | Microsoft Docs
+description: Retirement of Azure DevOps Starter and migration
+
+documentationcenter: ''
++
+editor:
+ms.assetid:
++
+ na
+ Last updated : 09/16/2022+++
+# Retirement of DevOps Starter
+
+Azure DevOps Starter will be retired March 31, 2023. The corresponding REST APIs for [Microsoft.DevOps](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/devops/resource-manager/Microsoft.DevOps/) and [Microsoft.VisualStudio/accounts/projects](/rest/api/visualstudio/projects) resources will be retired as well.
+Customers are encouraged to use [Azure Developer CLI](/azure/developer/azure-developer-cli/overview?tabs=nodejs) instead.
+
+## Azure Developer CLI
+
+The replacement [Azure Developer CLI (azd)](/azure/developer/azure-developer-cli/overview?tabs=nodejs) is a developer command-line tool for building cloud apps. It provides commands that map to key stages in your workflow: code, build, deploy, monitor, repeat. You can use the Azure Developer CLI to create, provision, and deploy a new application in a single step.
+
+## Comparison between Azure DevOps and Azure Developer CLI
+
+| DevOps Starter | Azure Developer CLI |
+| | - |
+| Deploy to Azure with a few clicks | A single step to deploy to Azure |
+| Configures code, deployment, monitoring | Configures code, deployment, monitoring |
+| Provides sample application to get started | Provides sample applications to get started |
+| Allows user's repo to be deployed | Allows user's repo to be deployed |
+| UI-based experience in Azure portal | CLI-based experience |
+
+## Migration
+
+No migration is required because DevOps Starter doesn't store any information; it just helps users with their day 0 getting-started experience on Azure. Moving forward, the recommended way to get started on Azure is the [Azure Developer CLI](/azure/developer/azure-developer-cli/overview?tabs=nodejs). The numbered steps that follow map common DevOps Starter tasks to azd commands; a combined sketch appears after the list.
++
+1. To choose a language, framework, and target service, pick an appropriate [template](https://github.com/search?q=org:azure-samples%20topic:azd-templates) from the azd templates and run the command `azd up --template \<template-name\>`
+
+2. For provisioning Azure service resources, run the command `azd provision`
+
+3. For creating CI/CD pipelines, run the command `azd pipeline config`
+
+4. For application insights monitoring, run the command `azd monitor`
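+
+Putting those commands together, a minimal end-to-end sketch run from a PowerShell prompt might look like the following; the template name is only an example picked from the Azure-Samples azd-templates list.
+
```powershell
# Initialize, provision, and deploy a sample app, then wire up CI/CD and monitoring.
azd up --template todo-nodejs-mongo   # scaffold from a template, provision Azure resources, and deploy
azd pipeline config                   # create a CI/CD pipeline (GitHub Actions or Azure Pipelines)
azd monitor                           # open Application Insights monitoring for the deployed app
```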
+
+For existing application deployments, **DevOps Starter does not store any information itself**; use the following to get the same information:
+
+1. Azure resource details in the Azure portal: In the Azure portal, visit the resource page of the resource for which you configured DevOps Starter.
+
+2. To see pipeline and deployment information, go to the corresponding GitHub Actions workflow or Azure pipeline to view runs and deployments.
+
+3. To see monitoring details in Application insights, go to application insights for your Azure resource and look at the monitoring charts.
+
+## FAQ
+
+### What is the difference between DevOps Starter and the Azure Developer CLI?
+
+Both tools let you quickly set up application deployment to Azure and configure a CI/CD pipeline for it, so users can get started with Azure quickly.
+
+Azure Developer CLI provides developer-friendly commands, in contrast to the UI wizard in DevOps Starter. It also offers better clarity through config-as-code.
+
+### Will I lose my application or the Azure resources if I'm not able to access DevOps Starter?
+
+No. Application code, deployments, and Azure resources that host the application will still be available. DevOps Starter does not store any of these resources.
+
+### Will I lose the CI/CD pipeline that I created using DevOps Starter?
+
+No. You can still manage CI/CD pipelines in GitHub Actions or Azure Pipelines.
+
devtest How To Manage Reliability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-manage-reliability-performance.md
How SRE and DevOps differ is still under discussion in the field. Some broadly a
If you want to learn more about the practice of SRE, check out these links: -- [SRE in Context](/learn/modules/intro-to-site-reliability-engineering/3-sre-in-context) -- [Key SRE Principles and Practices: virtuous cycles](/learn/modules/intro-to-site-reliability-engineering/4-key-principles-1-virtuous-cycles) -- [Key SRE Principles and Practices: The human side of SRE](/learn/modules/intro-to-site-reliability-engineering/5-key-principles-2-human-side-of-sre) -- [Getting Started with SRE](/learn/modules/intro-to-site-reliability-engineering/6-getting-started)
+- [SRE in Context](/training/modules/intro-to-site-reliability-engineering/3-sre-in-context)
+- [Key SRE Principles and Practices: virtuous cycles](/training/modules/intro-to-site-reliability-engineering/4-key-principles-1-virtuous-cycles)
+- [Key SRE Principles and Practices: The human side of SRE](/training/modules/intro-to-site-reliability-engineering/5-key-principles-2-human-side-of-sre)
+- [Getting Started with SRE](/training/modules/intro-to-site-reliability-engineering/6-getting-started)
## Service Level Agreements
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md
For more information, see [Overview of Azure DNS alias records](dns-alias.md).
* For frequently asked questions about Azure DNS, see the [Azure DNS FAQ](dns-faq.yml).
-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 09/16/2022 Last updated : 09/20/2022
$targetDNS2 = New-AzDnsResolverTargetDnsServerObject -IPAddress 192.168.1.3 -Por
$targetDNS3 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.0.0.4 -Port 53 $targetDNS4 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.5.5.5 -Port 53 $forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Internal" -DomainName "internal.contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1,$targetDNS2)
-$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "AzurePrivate" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS3
+$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "AzurePrivate" -DomainName "azure.contoso.com" -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS3
$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Wildcard" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS4 ```
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 08/17/2022 Last updated : 09/20/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Azure DNS Private Resolver is available in the following regions:
- West US 3 - East US - North Central US-- Central US EUAP-- East US 2 EUAP - West Central US - East US 2 - West Europe
Outbound endpoints have the following limitations:
### Ruleset restrictions - Rulesets can have no more than 25 rules in Public Preview.-- Rulesets can't be linked across different subscriptions in Public Preview. ### Other restrictions
Outbound endpoints have the following limitations:
* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md) * Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers. * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md
Title: What is Azure Private DNS? description: In this article, get started with an overview of the private DNS hosting service on Microsoft Azure. -+ Previously updated : 04/09/2021- Last updated : 09/20/2022+ #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service.
Azure Private DNS has the following limitations:
* A specific virtual network can be linked to only one private zone if automatic registration of VM DNS records is enabled. You can however link multiple virtual networks to a single DNS zone. * Reverse DNS works only for private IP space in the linked virtual network * Reverse DNS for a private IP address in linked virtual network will return `internal.cloudapp.net` as the default suffix for the virtual machine. For virtual networks that are linked to a private zone with autoregistration enabled, reverse DNS for a private IP address returns two FQDNs: one with default the suffix `internal.cloudapp.net` and another with the private zone suffix.
-* Conditional forwarding isn't currently natively supported. To enable resolution between Azure and on-premises networks, see [Name resolution for VMs and role instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
+* Conditional forwarding is supported using [Azure DNS Private Resolver](dns-private-resolver-overview.md). To enable resolution between Azure and on-premises networks, see [Name resolution for VMs and role instances](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
## Pricing
For pricing information, see [Azure DNS Pricing](https://azure.microsoft.com/pri
* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
A query for `secure.store.azure.contoso.com` will match the **AzurePrivate** rul
* Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md) * Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers. * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
dns Private Resolver Hybrid Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md
The path for this query is: client's default DNS resolver (10.100.0.2) > on-prem
* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md). * Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md) * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
dns Find Unhealthy Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md
The following Azure PowerShell script finds unhealthy DNS records in Azure DNS.
```azurepowershell-interactive <#
- 1. Install Pre requisites Az PowerShell modules (https://docs.microsoft.com/powershell/azure/install-az-ps?view=azps-5.7.0)
+ 1. Install prerequisite Az PowerShell modules (https://learn.microsoft.com/powershell/azure/install-az-ps?view=azps-5.7.0)
2. From the PowerShell prompt, navigate to the folder where the script is saved and run the following command: .\Get-AzDNSUnhealthyRecords.ps1 -SubscriptionId <subscription id> -ZoneName <zonename> Replace subscription id with the subscription ID of interest.
This script uses the following commands to create the deployment. Each item in t
## Next steps
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
+For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
dns Tutorial Dns Private Resolver Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-dns-private-resolver-failover.md
You can now demonstrate that DNS resolution works when one of the connections is
* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md). * Learn how to [configure hybrid DNS](private-resolver-hybrid-dns.md) using private resolvers. * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
-
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
education-hub Azure Students Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/azure-students-program.md
To get detailed terms of use for Azure for Students, see the [offer terms](https
- [Get help with login errors](troubleshoot-login.md) - [Download software (Azure for Students)](download-software.md) - [Azure for Students Starter overview](azure-students-starter-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)
+- [Microsoft Learn training](/training/)
education-hub Azure Students Starter Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/azure-students-starter-program.md
any time to a pay-as-you-go subscription to get access to all Azure services, us
- [Get help with login errors](troubleshoot-login.md) - [Download software (Azure for Students Starter)](download-software.md) - [Azure for Students program](azure-students-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)
+- [Microsoft Learn training](/training/)
education-hub Download Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/download-software.md
Have your students follow this procedure to download the software developer tool
- [Get help with login errors](troubleshoot-login.md) - [Azure for Students](azure-students-program.md) - [Azure for Students Starter](azure-students-starter-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)
+- [Microsoft Learn training](/training/)
- [Frequently asked questions](./program-faq.yml)
education-hub Set Up Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/set-up-access.md
Portal](https://azureforeducation.microsoft.com/account/Subscriptions). Once app
## For students, faculty, and administrators Students access Azure dev tools through the [Education Hub](https://aka.ms/devtoolsforteaching).
-Students and faculty alike can get access to all the software download benefits through the Education Hub. The Education Hub is built within the Azure portal and it provides your students easy access to the entire catalog of software, as well as access to the entire [Microsoft Learn](/learn/) catalog.
+Students and faculty alike can get access to all the software download benefits through the Education Hub. The Education Hub is built within the Azure portal and it provides your students easy access to the entire catalog of software, as well as access to the entire [Microsoft Learn training](/training/) catalog.
## Next steps - [Manage student accounts](manage-students.md)
energy-data-services Concepts Csv Parser Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-csv-parser-ingestion.md
+
+ Title: Microsoft Energy Data Services Preview csv parser ingestion workflow concept #Required; page title is displayed in search results. Include the brand.
+description: Learn how to use CSV parser ingestion. #Required; article description that is displayed in search results.
++++ Last updated : 08/18/2022+++
+# CSV parser ingestion concepts
+
+One of the simplest generic data formats supported by the Microsoft Energy Data Services Preview ingestion process is the comma-separated values (CSV) format. CSV files are processed through a CSV Parser DAG definition.
+
+The CSV Parser DAG implements an ELT (extract, load, transform) approach to data loading: data is loaded as extracted and transformed afterwards. Customers can use the CSV Parser DAG to load data that doesn't match the [OSDU&trade;](https://osduforum.org) canonical schema. Customers need to create and register a custom schema, matching the format of the CSV file, using the schema service.
++
+## What does CSV ingestion do?
+
+* **Schema validation** – Ensures the CSV file conforms to the schema.
+* **Type conversion** – Ensures that the type of a field is as defined, and converts it to the defined type otherwise.
+* **ID generation** – Generates the IDs used to upload into the storage service. Because the ID generation logic is idempotent, it avoids duplicate data on the platform in scenarios where an ingestion failed half-way.
+* **Reference handling** – Enables customers to refer to actual data on the platform and access it.
+* **Persistence** – Persists each row after validations by calling the storage service API. Once persisted, the data is available for consumption through the search and storage service APIs.
+
+## CSV Parser ingestion functionality
+
+The CSV parser ingestion currently supports the following functionality as a one-step DAG:
+
+- CSV file is parsed as per the schema (one row in CSV = 1 record ingested into the data platform)
+- CSV file contents match the contents of the provided schema.
+ - **Success**: validate the schema vs. the header of the CSV file and the values of the first n rows. Use the schema for all downstream tasks to build the metadata.
+ - **Fail**: log the error(s) in the schema validation; proceed with ingestion if the errors are non-breaking.
+- Convert all characters to UTF8, and gracefully handle/replace characters that can't be converted to UTF8.
+- Unique data identity for an object in the Data Platform - CSV ingestion generates a unique identifier (ID) for each record by combining the source, the entity type, and a base64-encoded string formed by concatenating the natural key(s) in the data (see the sketch after this list). If the schema used for CSV ingestion doesn't contain any natural keys, the storage service generates random IDs for every record.
+- Typecast to JSON-supported data types:
+ - **Number** - Typecast integers, doubles, floats, and so on, as described in the schema, to "number". Some common spatial formats, such as Degrees/Minutes/Seconds (DMS) or Easting/Northing, should be typecast to "string"; special handling of these string formats is done in the Spatial Data Handling task.
+ - **Date** - Typecast dates as described in the schema to a date, converting the date format to ISO8601 TZ format (for fully qualified dates). Some date fragments (such as years) can't be easily converted to this format and should be typecast to a number instead; textual date representations, for example, "July", should be typecast to string.
+ - **Others** - All other encountered attributes should be typecast as string.
+- Stores a batch of records in the context of a particular ingestion job. Fragments/outputs from the previous steps are collected into a batch and formatted in a way that is compatible with the Storage Service, with the appropriate additional information, such as the ACLs, legal tags, and so on.
+- Support frame of reference handling:
+ - **Unit** - converting declared frame of reference information into the appropriate persistable reference as per the Unit Service. This information is stored in the meta[] block.
+ - **CRS** - the CRS Frame of Reference (FoR) information should be included in the schema of the data, including the source CRS (either geographic or projected), and if projected, the CRS info and persistable reference (if provided in schema) information is stored in the meta[] block.
+- Creates relationships as declared in the source schema.
+- Supports publishing the status of ingested/failed records on GSM
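+
+The following is a minimal sketch of how such an idempotent identifier could be derived from a source, an entity type, and the natural key(s) described above; the values and the exact concatenation and encoding rules are illustrative assumptions, and the real logic lives inside the CSV Parser DAG.
+
+```bash
+# Hypothetical sketch of an idempotent record ID: the same row always produces the same ID,
+# so a re-run of a failed ingestion doesn't create duplicate records.
+source="myfield"                       # illustrative source name
+entity_type="wellbore"                 # illustrative entity type
+natural_keys="WB-1204|NO 15/9-19"      # natural key(s) from the row, concatenated
+key_part=$(printf '%s' "$natural_keys" | base64)
+echo "${source}-${entity_type}-${key_part}"
+```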
+
+## CSV parser ingestion components
+
+* **File service** – Facilitates management of files on the data platform. Uploading, secure discovery, and downloading of files are capabilities provided by the file service.
+* **Schema service** – Facilitates management of schemas on the data platform. Creating, fetching, and searching for schemas are capabilities provided by the schema service.
+* **Storage Service** – A JSON object store that facilitates storage of metadata information for domain entities. It also raises storage events when records are saved using the storage service.
+* **Unit Service** – Facilitates management and conversion of units.
+* **Workflow service** – Facilitates management of workflows on the data platform. It's a wrapper over the workflow engine and abstracts many technical nuances of the workflow engine from consumers.
+* **Airflow engine** – The heart of the ingestion framework and the actual workflow orchestrator.
+* **DAGs** – Based on the Directed Acyclic Graph concept, these are workflows that are authored, orchestrated, managed, and monitored by the workflow engine.
+
+## CSV ingestion components diagram
++
+## CSV ingestion sequence diagram
++
+## CSV parser ingestion workflow
+
+### Prerequisites
+
+* To trigger the APIs, the user must have the following access and a valid authorization token:
+ * Access to
+ * Access to the Workflow service.
+ * The following is the list of service-level groups that you need access to in order to register and execute a DAG using the workflow service:
+ * "service.workflow.creator"
+ * "service.workflow.viewer"
+ * "service.workflow.admin"
+
+### Steps to execute a DAG using Workflow Service
+
+* **Create schema** – Definition of the kind of records that will be created as the outcome of the ingestion workflow. The schema is uploaded and registered through the schema service.
+* **Upload the file** – Use the file service to upload a file. The file service provides a signed URL, which enables customers to upload the data without credential requirements.
+* **Create a metadata record for the file** – Use the file service to create metadata. The metadata enables discovery of the file and secure downloads. It also provides a mechanism to supply information associated with the file that is needed during processing of the file.
+* The file ID created is provided to the CSV parser, which takes care of downloading the file, parsing it, and ingesting the records with the help of the workflow service. Customers also need to register the workflow; the CSV parser DAG is already deployed in Airflow.
+* **Trigger the workflow service** – To trigger the workflow, the customer needs to provide the file ID, the kind of the file, and the data partition ID, as shown in the sketch below. Once the workflow is triggered, the customer gets a run ID.
+The workflow service provides an API to monitor the status of each workflow run. Once the CSV parser run is completed, the data is ingested into the OSDU&trade; Data Platform and can be searched through the search service.
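+
+As an illustration only, a trigger call and a status check against the workflow service could look like the following sketch; the DAG name and the `executionContext` field names are placeholders (not taken from this article), so check the workflow registered in your instance and its documentation for the exact values. The `<url>` and `<datapartition>` placeholders follow the convention used elsewhere in this documentation.
+
+```bash
+# Hypothetical example: trigger the CSV parser DAG through the workflow service.
+curl --location --request POST '<url>/api/workflow/v1/workflow/<csv-parser-dag-name>/workflowRun' \
+  --header 'data-partition-id: <datapartition>' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "executionContext": {
+      "dataPartitionId": "<datapartition>",
+      "id": "<file-metadata-record-id>",
+      "kind": "<your-custom-schema-kind>"
+    }
+  }'
+
+# The response contains a run ID; monitor the run with:
+curl --location --request GET '<url>/api/workflow/v1/workflow/<csv-parser-dag-name>/workflowRun/<run-id>' \
+  --header 'data-partition-id: <datapartition>' \
+  --header 'Authorization: Bearer {{TOKEN}}'
+```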
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+Advance to the CSV parser tutorial and learn how to perform a CSV parser ingestion
+> [!div class="nextstepaction"]
+> [Tutorial: Sample steps to perform a CSV parser ingestion](tutorial-csv-ingestion.md)
energy-data-services Concepts Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-ddms.md
+
+ Title: Domain data management services concepts #Required; page title is displayed in search results. Include the brand.
+description: Learn how to use Domain Data Management Services #Required; article description that is displayed in search results.
++++ Last updated : 08/18/2022+++
+# Domain data management service concepts
+
+A **Domain Data Management Service (DDMS)** is a platform component that extends the [OSDU&trade;](https://osduforum.org) core data platform with a domain-specific model and optimizations. A DDMS is a platform extension mechanism that:
+
+* delivers optimized handling of data for each (non-overlapping) "domain," where a domain is either:
+  * a single vertical discipline or business area, for example, Petrophysics, Geophysics, Seismic
+  * a functional aspect of one or more vertical disciplines or business areas, for example, Earth Model
+* delivers high-performance capabilities not supported by the generic OSDU&trade; APIs.
+* can help achieve the extension of OSDU&trade; scope to new business areas.
+* may be developed in a distributed manner with separate resources/sponsors.
+
+The OSDU&trade; Technical Standard defines the following OSDU&trade; application types:
+
+| Application Type | Description |
+| | -- |
+| OSDU&trade; Embedded Applications | An application developed and managed within the OSDU&trade; Open-Source community that is built on and deployed as part of the OSDU&trade; Data Platform distribution. |
+| ISV Extension Applications | An application, developed and managed in the marketplace, that is not part of the OSDU&trade; Data Platform distributions and, when selected, is deployed within the OSDU&trade; Data Platform as an add-on. |
+| ISV Third Party Applications | An application, developed and managed in the marketplace, that integrates with the OSDU&trade; Data Platform and runs outside the OSDU&trade; Data Platform. |
++
+| Characteristics | Embedded | Extension | Third Party |
+| -- | - | | |
+| Developed, managed, and deployed by | The OSDU&trade; Data Platform | ISV | ISV |
+| Software License | Apache 2 | ISV | ISV |
+| Mandatory as part of an OSDU&trade; distribution | Yes | No | No |
+| Replaceable | Yes, with preservation of behavior | Yes | Yes |
+| Architecture Compliance | The OSDU&trade; Standard | The OSDU&trade; Standard | ISV |
+| Examples | OS CRS <br /> Wellbore DDMS | ESRI CRS <br /> Petrel DS | Petrel |
++
+## Who did we build this for?
+
+**IT Developers** build systems to connect data to domain applications (internal and external – for example, Petrel), which enables data managers to deliver projects to geoscientists. The DDMS suite on Microsoft Energy Data Services helps automate these workflows and eliminates time spent managing updates.
+
+**Geoscientists** use domain applications for key Exploration and Production workflows such as Seismic interpretation and Well tie analysis. While these users won't directly interact with the DDMS, their expectations for data performance and accessibility will drive requirements for the DDMS in the Foundation Tier. Azure will enable geoscientists to stream cross-domain data instantly in OSDU&trade;-compatible applications (for example, Petrel) connected to Microsoft Energy Data Services.
+
+**Data managers** spend a significant amount of time fulfilling requests for data retrieval and delivery. The Seismic, Wellbore, and Petrel Data Services enable them to discover and manage data in one place while tracking version changes as derivatives are created.
+
+## Platform landscape
+
+Microsoft Energy Data Services is an OSDU&trade; compatible product, meaning that its landscape and release model are dependent on OSDU&trade;.
+
+Currently, the OSDU&trade; certification and release processes aren't fully defined; this topic should be defined as a part of the Microsoft Energy Data Services Foundation Architecture.
+
+OSDU&trade; R3 M8, the latest stable, tested version of the platform, is the base for the scope of the Microsoft Energy Data Services Foundation Private Preview.
+
+## Learn more: OSDU&trade; DDMS community principles
+
+[OSDU&trade; community DDMS Overview](https://community.opengroup.org/osdu/documentation/-/wikis/OSDU&trade;-(C)/Design-and-Implementation/Domain-&-Data-Management-Services#ddms-requirements) provides an extensive overview of DDMS motivation and community requirements from a user, technical, and business perspective. These principles are extended to Microsoft Energy Data Services.
+
+## DDMS requirements
+
+A DDMS meets the following requirements, further classified into capability, architectural, operational and openness/extensibility requirements:
+
+|**#** | **Description** | **Business rationale** | **Principle** |
+|||||
+| 1 | Data can be ingested with low friction | Need to seamlessly integrate with systems of record, to start with the industry standards | Capability |
+| 2 | New data is available in workflows with minimal latency | Deliver new data in the context of the end-user workflow – seamlessly and fast. | Capability |
+| 3 | Domain data and services are highly usable | The business anticipates a large set of use-cases where domain data is used in various workflows. Need to make the consumption simple and efficient | Capability |
+| 4 | Scalable performance for E&P workflows | E&P data has specific access requirements, way beyond standard cloud storage. Scalable E&P data requires E&P workflow experience and insights | Capability |
+| 5 | Data is available for visual analytics and discovery (Viz/BI) | Deliver minimum set of visualization capabilities on the data | Capability |
+| 6 | One source of truth for data | Drive towards reduction of duplication | Capability |
+| 7 | Data is secured, and access governed | Securely stored and managed | Architectural |
+| 8 | All data is preserved and immutable | Ability to associate data to milestones and have data/workflow traceable across the ecosystem | Architectural |
+| 9 | Data is globally identifiable | No risk of overwriting or creating non-unique relationships between data and activities | Architectural |
+| 10 | Data lineage is tracked | Required for auditability, re-creation of the workflow, and learning from work previously done | Architectural |
+| 11 | Data is discoverable | Possible to find and consume back ingested data | Architectural |
+| 12 | Provisioning | Efficient provisioning of the DDMS and auto integration with the Data Ecosystem | Operational |
+| 13 | Business Continuity | Deliver on industry expectation for business continuity (RPO, RTO, SLA) | Operational |
+| 14 | Cost | Cost efficient delivery of data | Operational |
+| 15 | Auditability | Deliver required forensics to support cyber security incident investigations | Operational |
+| 16 | Accessibility | Deliver technology | Operational |
+| 17 | Domain-Centric Data APIs | | Openness and Extensibility |
+| 18 | Workflow composability and customizations | | Openness and Extensibility |
+| 19 | Data-Centric Extensibility | | Openness and Extensibility |
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+Advance to the Seismic DDMS sdutil tutorial to learn how to use sdutil to load seismic data into the seismic store.
+> [!div class="nextstepaction"]
+> [Tutorial: Seismic store sdutil](tutorial-seismic-ddms-sdutil.md)
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
+
+ Title: Microsoft Energy Data Services Preview entitlement concepts #Required; page title is displayed in search results. Include the brand.
+description: This article describes the various concepts regarding the entitlement services in Microsoft Energy Data Services Preview #Required; article description that is displayed in search results.
++++ Last updated : 08/19/2022+++
+# Entitlement service
+
+Access management is a critical function for any service or resource. The entitlement service helps you manage who has access to your Microsoft Energy Data Services instance, what they can do with it, and what services they have access to.
++
+## Groups
+
+The entitlements service of Microsoft Energy Data Services allows you to create groups. An entitlement group defines permissions on services or data sources for your Microsoft Energy Data Services instance, and the users you add to that group obtain the associated permissions.
+
+The main motivation for the entitlements service is data authorization, but the functionality enables three use cases:
+
+- **Data groups** used for data authorization (for example, data.welldb.viewers, data.welldb.owners)
+- **Service groups** used for service authorization (for example, service.storage.user, service.storage.admin)
+- **User groups** used for hierarchical grouping of user and service identities (for example, users.datalake.viewers, users.datalake.editors)
+
+## Users
+
+For each group, you can add a user as either an OWNER or a MEMBER. The only difference is that an OWNER of a group can manage the members of that group.
+> [!NOTE]
+> Do not delete the OWNER of a group unless there is another OWNER to manage the users.
+
+## Group naming
+
+All group identifiers (emails) are of the form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention has been adopted such that a group's name starts with "data." for data groups, "service." for service groups, and "users." for user groups. An exception is made when a data partition is provisioned: when a data partition is created, so is a corresponding group named users (for example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created).
+
+## Permissions/roles
+
+The OSDU&trade; Data Ecosystem user groups provide an abstraction from permission and user management. Without a user creating their own groups, the following user groups exist by default:
+
+- **users.datalake.viewers**: viewer level authorization for OSDU Data Ecosystem services.
+- **users.datalake.editors**: editor level authorization for OSDU Data Ecosystem services and authorization to create the data using OSDU&trade; Data Ecosystem storage service.
+- **users.datalake.admins**: admin level authorization for OSDU Data Ecosystem services.
+
+A full list of all API endpoints for entitlements can be found in the [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api) documentation. We have provided a few illustrations below; depending on the resources you have, you may need to use the entitlements service in ways other than what is shown. See [Entitlement permissions](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#permissions) for the permissions on the endpoints and the corresponding minimum level of permissions required.
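+
+For example, listing the entitlement groups of the calling user and adding a member to a group can be done with calls along the following lines; these mirror the V2 endpoints used elsewhere in this documentation, and `<url>`, `<data-partition>`, `<domain>`, and the group email are placeholders:
+
+```bash
+# List the entitlement groups that the calling user belongs to.
+curl --location --request GET '<url>/api/entitlements/v2/groups/' \
+  --header 'data-partition-id: <data-partition>' \
+  --header 'Authorization: Bearer {{TOKEN}}'
+
+# Add a user as a MEMBER of a specific group (use "OWNER" to let them manage the group's members).
+curl --location --request POST '<url>/api/entitlements/v2/groups/users.datalake.viewers@<data-partition>.<domain>.com/members' \
+  --header 'Content-Type: application/json' \
+  --header 'data-partition-id: <data-partition>' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "email": "<user-email>",
+    "role": "MEMBER"
+  }'
+```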
+
+> [!NOTE]
+> The OSDU documentation refers to V1 endpoints, but the scripts noted in this documentation refer to V2 endpoints, which work and have been successfully validated.
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+<!-- Add a context sentence for the following links -->
+> [!div class="nextstepaction"]
+> [How to manage users](how-to-manage-users.md)
energy-data-services Concepts Index And Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-index-and-search.md
+
+ Title: Microsoft Energy Data Services Preview - index and search workflow concepts #Required; page title is displayed in search results. Include the brand.
+description: Learn how to use indexing and search workflows #Required; article description that is displayed in search results.
++++ Last updated : 08/23/2022++
+#Customer intent: As a developer, I want to understand indexing and search workflows so that I could search for ingested data in the platform.
+
+# Microsoft Energy Data Services Preview indexing and search workflows
+
+All data and associated metadata ingested into the platform are indexed to enable search. The metadata is accessible to ensure awareness even when the data isn't available.
++
+## Indexer Service
+
+The `Indexer Service` provides a mechanism for indexing documents that contain structured and unstructured data.
+
+> [!NOTE]
+> This service is not a public service and only meant to be called internally by other core platform services.
+
+### Indexing workflow
+
+The below diagram illustrates the Indexing workflow:
++
+When a customer loads data into the platform, the associated metadata is ingested using the `Storage service`. The `Storage service` provides a set of APIs to manage the entire metadata lifecycle such as ingestion (persistence), modification, deletion, versioning, retrieval, and data schema management. Each storage metadata record created by the `Storage service` contains a *kind* parameter that refers to an underlying *schema*. This schema determines the attributes that will be indexed by the `Indexer service`.
+
+When the `Storage service` creates a metadata record, it raises a *recordChangedMessages* event that is collected in the Azure Service Bus (message queue). The `Indexer queue` service pulls the message from the Azure Service Bus, performs basic validation and sends it over to the `Indexer service`. If there are any failures in sending the messages to the `Indexer service`, the `Indexer queue` service retries sending the message up to a maximum allowed configurable retry count. If the retry attempts fail, a negative acknowledgment is sent to the Azure Service Bus, which then archives the message.
+
+When the *recordChangedMessages* event is received by the `Indexer Service`, it fetches the required schemas from the schema cache or through the `Schema service` APIs. The `Indexer Service` then creates a new index within Elasticsearch (if not already present), and then sends a bulk query to create or update the records as needed. If the response from Elasticsearch is a failure response of type *service unavailable* or *request timed out*, then the `Indexer Service` creates *recordChangedMessages* for these failed record IDs and puts the message in the Azure Service Bus. These messages will again be pulled by the `Indexer Queue` service and will follow the same flow as before.
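+
+As a rough illustration, the indexing flow described above kicks off whenever a record is created through the `Storage service`, for example with a call like the following sketch; the endpoint matches the `/api/storage/v2/records` API used elsewhere in this documentation, and the kind, ACL, legal tag, and data values are placeholders:
+
+```bash
+# Illustrative only: creating a storage record raises a recordChangedMessages event,
+# which the indexer queue and indexer service then process as described above.
+curl --location --request PUT '<url>/api/storage/v2/records' \
+  --header 'Content-Type: application/json' \
+  --header 'data-partition-id: <data-partition>' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '[
+    {
+      "kind": "osdu:wks:master-data--Wellbore:1.0.0",
+      "acl": {
+        "viewers": ["data.default.viewers@<data-partition>.<domain>.com"],
+        "owners": ["data.default.owners@<data-partition>.<domain>.com"]
+      },
+      "legal": {
+        "legaltags": ["<legal-tag>"],
+        "otherRelevantDataCountries": ["US"]
+      },
+      "data": { "FacilityName": "<example-wellbore-name>" }
+    }
+  ]'
+```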
+
+
+For more information, see the [Indexer service OSDU&trade; documentation](https://community.opengroup.org/osdu/platform/system/indexer-service/-/blob/release/0.15/docs/tutorial/IndexerService.md).
+
+## Search workflow
+
+`Search service` provides a mechanism for discovering indexed metadata documents. The Search API supports full-text search on string fields, range queries on date, numeric, or string fields, and geo-spatial searches.
+
+For a detailed tutorial on `Search service`, refer to the [Search service OSDU&trade; documentation](https://community.opengroup.org/osdu/platform/system/search-service/-/blob/release/0.15/docs/tutorial/SearchService.md).
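+
+For illustration, a minimal full-text query could look like the following sketch; the `/api/search/v2/query` path and request fields are taken from the OSDU&trade; search service documentation rather than from this article, so verify them against your instance before use:
+
+```bash
+# Hypothetical example: full-text search across all kinds in a data partition.
+curl --location --request POST '<url>/api/search/v2/query' \
+  --header 'Content-Type: application/json' \
+  --header 'data-partition-id: <data-partition>' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "kind": "*:*:*:*",
+    "query": "<search-text>",
+    "limit": 10
+  }'
+```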
+
+
+## Reindex workflow
+The Reindex API allows users to reindex a kind without reingesting the records via the storage API. For detailed information, refer to the
+[Reindex OSDU&trade; documentation](https://community.opengroup.org/osdu/platform/system/indexer-service/-/blob/release/0.15/docs/tutorial/IndexerService.md#reindex)
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+<!-- Add a context sentence for the following links -->
+> [!div class="nextstepaction"]
+> [Domain data management service concepts](concepts-ddms.md)
energy-data-services Concepts Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md
+
+ Title: Microsoft Energy Data Services Preview manifest ingestion concepts #Required; page title is displayed in search results. Include the brand.
+description: This article describes manifest ingestion concepts #Required; article description that is displayed in search results.
++++ Last updated : 08/18/2022+++
+# Manifest-based ingestion concepts
+
+Manifest-based file ingestion provides end-users and systems a robust mechanism for loading metadata into a Microsoft Energy Data Services Preview instance. A manifest is a JSON document that has a pre-determined structure for capturing entities that conform to the [OSDU&trade;](https://osduforum.org/) Well-known Schema (WKS) definitions.
+
+Manifest-based file ingestion doesn't understand or parse the contents of the file. It just creates a metadata record for the file and makes it searchable; it doesn't infer or do anything on top of the file.
++
+## Understanding the manifest
+
+The manifest schema has containers for the following entities (a structural sketch follows the list):
+
+* **ReferenceData** (*zero or more*) - A set of permissible values to be used by other (master or transaction) data fields. Examples include *Unit of Measure (feet)*, *Currency*, etc.
+* **MasterData** (*zero or more*) - A single source of basic business data used across multiple systems, applications, and/or processes. Examples include *Wells* and *Wellbores*.
+* **WorkProduct (WP)** (*one - must be present if loading WorkProductComponents*) - A session boundary or collection (project, study) that encompasses a set of entities that need to be processed together, for example, the ingestion of one or more log collections.
+* **WorkProductComponents (WPC)** (*zero or more - must be present if loading datasets*) - A typed, smallest, independently usable unit of business data content transferred as part of a Work Product (a collection of things ingested together). Each Work Product Component (WPC) typically uses reference data, belongs to some master data, and maintains a reference to datasets. Example: *Well Logs, Faults, Documents*
+* **Datasets** (*zero or more - must be present if loading WorkProduct and WorkProductComponent records*) - Each Work Product Component (WPC) consists of one or more data containers known as datasets.
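+
+Purely as a structural sketch of the containers listed above, a manifest skeleton could be drafted as follows; the exact property names, nesting, and kind values are defined by the OSDU&trade; WKS Manifest schema registered in your instance, so treat every value below as a placeholder:
+
+```bash
+# Hypothetical skeleton written to a local file; align it with the Manifest schema
+# returned by the Schema service before submitting it to the workflow service.
+cat > manifest.json <<'EOF'
+{
+  "kind": "<manifest-kind>",
+  "ReferenceData": [],
+  "MasterData": [],
+  "WorkProduct": {},
+  "WorkProductComponents": [],
+  "Datasets": []
+}
+EOF
+```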
+
+## Manifest-based file ingestion workflow steps
+
+1. A manifest is submitted to the Workflow Service using the manifest ingestion workflow name (for example, "Osdu_ingest"); a sample trigger call is sketched after this list.
+2. Once the request is validated and the user authorization is complete, the workflow service will load and initiate the manifest ingestion workflow.
+3. The first step is to check the syntax of the manifest.
+ 1. Retrieve the **kind** property of the manifest
+ 2. Retrieve the **schema definition** from the Schema service for the manifest kind
+ 3. Validate that the manifest is syntactically correct according to the manifest schema definitions.
+ 4. For each Reference data, Master data, Work Product, Work Product Component, and Dataset, do the following activities:
+ 1. Retrieve the **kind** property.
+ 2. Retrieve the **schema definition** from the Schema service for the kind
+ 3. Validate that the entity is syntactically correct according to the schema definition and submits the manifest to the Workflow Service
+ 4. Validate that mandatory attributes exist in the manifest
+ 5. Validate that all property values follow the patterns defined in the schemas
+ 6. Validate that no extra properties are present in the manifest
+ 5. Any entity that doesn't pass the syntax check is rejected
+4. The content is checked for a series of validation rules
+ 1. Validation of referential integrity between Work Product Components and Datasets
+ 1. There are no orphan Datasets defined in the WP (each Dataset belongs to a WPC)
+ 2. Each Dataset defined in the WPC is described in the WP Dataset block
+ 3. Each WPC is linked to at least
+ 2. Validation that referenced parent data exists
+ 3. Validation that Dataset file paths aren't empty
+5. Process the contents into storage
+ 1. Write each valid entity into the data platform via the Storage API
+ 2. Capture the ID generated to update surrogate-keys where surrogate-keys are used
+6. Workflow exits
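+
+For reference, step 1 above corresponds to a workflow service call along the following lines; the endpoint mirrors the workflow trigger calls shown elsewhere in this documentation, while the `executionContext` contents are placeholders that must match what the manifest ingestion DAG in your instance expects:
+
+```bash
+# Hypothetical example: submit a manifest to the manifest ingestion workflow.
+curl --location --request POST '<url>/api/workflow/v1/workflow/Osdu_ingest/workflowRun' \
+  --header 'data-partition-id: <data-partition>' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "executionContext": {
+      "dataPartitionId": "<data-partition>",
+      "manifest": <contents-of-your-manifest-json>
+    }
+  }'
+```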
+
+## Manifest ingestion components
+
+* **Workflow Service** is a wrapper service on top of the Airflow workflow engine, which orchestrates the ingestion workflow. Airflow is the workflow engine chosen by the [OSDU&trade;](https://osduforum.org/) community to orchestrate and run ingestion workflows. Airflow isn't directly exposed to clients; instead, its features are accessed through the workflow service.
+* **File Service** is used to upload files, file collections, and other types of source data to the data platform.
+* **Storage Service** is used to save the manifest records into the data platform.
+* **Airflow engine** is the workflow engine that executes DAGs (Directed Acyclic Graphs).
+* **Schema Service** stores schemas used in the data platform. Schemas are being referenced during the Manifest-based file ingestion.
+* **Entitlements Service** manages access groups. This service is used during the ingestion for verification of ingestion permissions. This service is also used during metadata record retrieval for validation of "read" rights.
+* **Search Service** is used to perform referential integrity check during the manifest ingestion process.
+
+## Manifest ingestion workflow sequence
++
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+Advance to the manifest ingestion tutorial and learn how to perform a manifest-based file ingestion
+> [!div class="nextstepaction"]
+> [Tutorial: Sample steps to perform a manifest-based file ingestion](tutorial-manifest-ingestion.md)
energy-data-services How To Add More Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md
+
+ Title: How to manage partitions
+description: This is a how-to article on managing data partitions using the Microsoft Energy Data Services Preview instance UI.
++++ Last updated : 07/05/2022+++
+# How to manage data partitions?
++
+This article describes how you can add data partitions to an existing Microsoft Energy Data Services (MEDS) instance. The concept of "data partitions" in MEDS comes from [OSDU&trade;](https://osduforum.org/), where a single deployment can contain multiple partitions.
+
+Each partition provides the highest level of data isolation within a single deployment. All access rights are governed at a partition level. Data is separated in a way that allows for the partition's life cycle and deployment to be handled independently. (See [Partition Service](https://community.opengroup.org/osdu/platform/home/-/issues/31) in OSDU&trade;)
+
+> [!NOTE]
+> You can create a maximum of five data partitions in one MEDS instance. Currently, in line with the data partition capabilities that are available in OSDU&trade;, you can only create data partitions; you can't delete or rename existing data partitions.
++
+## Create a data partition
+
+1. **Open the "Data Partitions" menu item from the left panel of the MEDS overview page.**
+
+ [![Screenshot for dynamic data partitions feature discovery from MEDS overview page. Find it under the 'advanced' section in menu-items.](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png)](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png#lightbox)
+
+2. **Select "Create"**
+
+ The page shows a table of all data partitions in your MEDS instance with the status of each data partition next to it. Selecting the "Create" option at the top opens a right pane for the next steps.
+
+ [![Screenshot to help you locate the create button on the data partitions page. The 'create' button to add a new data partition is highlighted.](media/how-to-add-more-data-partitions/start-create-data-partition.png)](media/how-to-add-more-data-partitions/start-create-data-partition.png#lightbox)
+
+3. **Choose a name for your data partition**
+
+ Each data partition name needs to be 1-10 characters long and a combination of lowercase letters, numbers, and hyphens only (see the sketch at the end of this step). The data partition name is prepended with the name of the MEDS instance. Choose a name for your data partition and select "Create". As soon as you select "Create", the deployment of the underlying data partition resources, such as Cosmos DB and Storage Accounts, starts.
+
+ >[!NOTE]
+ >It generally takes 15-20 minutes to create a data partition.
+
+ [![Screenshot for create a data partition with name validation. The page also shows the name validation while choosing the name of a new data partition.](media/how-to-add-more-data-partitions/create-data-partition-name-validation.png)](media/how-to-add-more-data-partitions/create-data-partition-name-validation.png#lightbox)
+
+ If the deployment is successful, the status changes to "created successfully" with or without clicking "Refresh" on top.
+
+ [![Screenshot for the in progress page for data partitions. The in-progress status of a new data partition that is getting deployed is highlighted.](media/how-to-add-more-data-partitions/create-progress.png)](media/how-to-add-more-data-partitions/create-progress.png#lightbox)
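+
+ As an illustration only, the naming rule above corresponds to a simple pattern check like the following; the sample name is a placeholder and the portal remains the authoritative validation:
+
+ ```bash
+ # Hypothetical local check of the data partition naming rule (1-10 chars, lowercase letters, numbers, hyphens).
+ name="partition1"
+ if [[ "$name" =~ ^[a-z0-9-]{1,10}$ ]]; then
+   echo "valid data partition name"
+ else
+   echo "invalid data partition name"
+ fi
+ ```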
+
+## Delete a failed data partition
+
+The data-partition deployment triggered in the previous process might fail in some cases due to issues such as quota limits being reached, transient ARM template deployment issues, data seeding failures, and failures connecting to the underlying AKS clusters.
+
+The status of such data partitions shows as "Creation Failed". You can delete these deployments using the "delete" button that appears next to failed data partition deployments. This deletion will clean up any records created in the backend. You can retry creating the data partitions later.
++
+[![Screenshot for the deleting failed instances page. The button to delete an incorrectly created data partition is available next to the partition's name.](media/how-to-add-more-data-partitions/delete-failed-instances.png)](media/how-to-add-more-data-partitions/delete-failed-instances.png#lightbox)
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+
+You can start loading data in your new data partitions.
+
+> [!div class="nextstepaction"]
+> [Load data using manifest ingestion](tutorial-manifest-ingestion.md)
energy-data-services How To Convert Segy To Ovds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md
+
+ Title: Microsoft Energy Data Services Preview - How to convert a segy to ovds file #Required; page title is displayed in search results. Include the brand.
+description: This article explains how to convert a SGY file to oVDS file format #Required; article description that is displayed in search results.
++++ Last updated : 08/18/2022+++
+# How to convert a SEG-Y file to oVDS?
+
+Seismic data stored in the industry standard SEG-Y format can be converted to Open VDS (oVDS) format for use in applications via the Seismic DMS.
+
+[OSDU&trade; SEG-Y to oVDS conversion](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/tree/release/0.15)
++
+## Prerequisites
+
+### Postman
+
+* Download and install [Postman](https://www.postman.com/) desktop app.
+* Import the [oVDS Conversions.postman_collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M9/Azure-M9/Services/DDMS/oVDS_Conversions.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly.
+* A Microsoft Energy Data Services Preview instance has already been created.
+* Clone the **sdutil** repo as shown below:
+ ```bash
+ git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git
+
+ # change into the cloned repo before checking out the branch
+ cd seismic-store-sdutil
+ git checkout azure/stable
+ ```
+
+## Step by step guide
+
+1. Check if VDS is registered with the workflow service or not:
+
+ ```bash
+ curl --location --request GET '<url>/api/workflow/v1/workflow/' \
+ --header 'Data-Partition-Id: <datapartition>' \
+ --header 'Content-Type: application/json' \
+ --header 'Authorization: Bearer {{TOKEN}}'
+ ```
+
+ You should see the VDS converter DAG in the list. If it isn't in the response list, report the issue to the Azure team.
+
+2. Open **sdutil** and edit the `config.yaml` at the root
+ Update `config` to:
+
+ ```yaml
+ seistore:
+ service: '{"azure": {"azureEnv":{"url": "<url>/seistore-svc/api/v3", "appkey": ""}}}'
+ url: '<url>/seistore-svc/api/v3'
+ cloud_provider: azure
+ env: glab
+ auth-mode: JWT Token
+ ssl_verify: false
+ auth_provider:
+ azure: '{
+ "provider": "azure",
+ "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token",
+ "scope_end":"/.default openid profile offline_access",
+ "redirect_uri":"http://localhost:8080",
+ "login_grant_type": "refresh_token",
+ "refresh_token": "<RefreshToken acquired earlier>"
+ }'
+ azure:
+ empty: none
+ ```
+
+ > [!NOTE]
+ > See [Generate a refresh token](how-to-generate-refresh-token.md) on how to generate a refresh token. If you continue to follow other "how-to" documentation, you'll use this refresh token again. Once you've generated the token, store it in a place where you'll be able to access it in the future.
+
+3. Run **sdutil** to see if it's working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`.
+
+ > [!NOTE]
+ > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
+
+4. Upload the seismic file
+
+ ```bash
+ python sdutil cp source.segy sd://<datapartition>/<subproject>/destination.segy
+ ```
+
+5. Fetch the idtoken from sdutil for the uploaded file.
+
+ ```bash
+ python sdutil auth idtoken
+ ```
+
+6. Trigger the DAG through `POSTMAN` or using the call below:
+
+ ```bash
+ curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \
+ --header 'data-partition-id: <datapartition>' \
+ --header 'Content-Type: application/json' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --data-raw '{
+ "executionContext": {
+ "vds_url": "sd://<datapartition>/<subproject>",
+ "persistent_id": "<filename>",
+ "id_token": "<token>",
+ "segy_url": "sd://<datapartition>/<subproject>/<filename>.segy"
+
+ }
+ }'
+ ```
+
+7. Let the DAG run to the completed state. You can check the status using the workflow status call; a sample status call is sketched after this list.
+
+8. Verify whether the converted files are present at the location specified in the DAG trigger:
+
+ ```bash
+ python sdutil ls sd://<datapartition>/<subproject>/
+ ```
+
+9. If you would like to download and inspect your VDS files, don't use the `cp` command as it will not work. The VDS conversion results in multiple files, therefore the `cp` command won't be able to download all of them in one command. Use either the [SEGYExport](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYExport/README.html) or [VDSCopy](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/VDSCopy/README.html) tool instead. These tools use a series of REST calls accessing a [naming scheme](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/connection.html) to retrieve information about all the resulting VDS files.
+
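+For the status check mentioned in step 7, a call along the following lines can be used; it mirrors the workflow status call shown in the SEG-Y to ZGY article, with `<run-id>` taken from the trigger response:
+
+```bash
+# Check the status of a workflow run; the DAG name and run ID are placeholders.
+curl --location --request GET '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun/<run-id>' \
+  --header 'Data-Partition-Id: <datapartition>' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Bearer {{TOKEN}}'
+```
+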
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+<!-- Add a context sentence for the following links -->
+> [!div class="nextstepaction"]
+> [How to convert a SEG-Y file to ZGY](how-to-convert-segy-to-zgy.md)
+
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
+
+ Title: Microsoft Energy Data Service - How to convert segy to zgy file #Required; page title is displayed in search results. Include the brand.
+description: This article describes how to convert a SEG-Y file to a ZGY file #Required; article description that is displayed in search results.
++++ Last updated : 08/18/2022+++
+# How to convert a SEG-Y file to ZGY?
+
+Seismic data stored in the industry standard SEG-Y format can be converted to ZGY for use in applications such as Petrel via the Seismic DMS. See the [ZGY Conversion FAQs](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion#faq); more background can be found in the OSDU&trade; community here: [SEG-Y to ZGY conversion](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/azure/m10-master)
++
+## Prerequisites
+
+### Postman
+
+* Download and install [Postman](https://www.postman.com/) desktop app.
+* Import the [oZGY Conversions.postman_collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M9/Azure-M9/Services/DDMS/oZGY%20Conversions.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly.
+* A Microsoft Energy Data Services Preview instance has already been created.
+* Clone the **sdutil** repo as shown below:
+ ```bash
+ git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git
+
+ # change into the cloned repo before checking out the branch
+ cd seismic-store-sdutil
+ git checkout azure/stable
+ ```
+* The [jq command](https://stedolan.github.io/jq/download/), installed using your favorite tool on your favorite OS.
+
+## Step by Step guide
+
+1. The user needs to be part of the `users.datalake.admins` group and needs to generate a valid refresh token. See [How to generate a refresh token](how-to-generate-refresh-token.md) for further instructions. If you continue to follow other "how-to" documentation, you'll use this refresh token again. Once you've generated the token, store it in a place where you'll be able to access it in the future. If the group membership isn't present, add the group for the member ID. In this case, use the app ID you have been using for everything as the `user-email`.
+
+ > [!NOTE]
+ > `data-partition-id` should be in the format `<instance-name>-<data-partition-name>` in both the header and the url, and will be for any following command that requires `data-partition-id`.
+
+ ```bash
+ curl --location --request POST "<url>/api/entitlements/v2/groups/users.datalake.admins@<data-partition>.<domain>.com/members" \
+ --header 'Content-Type: application/json' \
+ --header 'data-partition-id: <data-partition>' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --data-raw '{
+ "email" : "<user-email>",
+ "role" : "MEMBER"
+ }'
+ ```
+
+ You can also add the user to this group by using the entitlements API and assigning the required group ID. To check the entitlements groups for a user, run the command in [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). To get all the groups available, run the following command:
+
+ ```bash
+ curl --location --request GET "<url>/api/entitlements/v2/groups/" \
+ --header 'data-partition-id: <data-partition>' \
+ --header 'Authorization: Bearer {{TOKEN}}'
+ ```
+
+2. Check if ZGY is registered with the workflow service or not:
+
+ ```bash
+ curl --location --request GET '<url>/api/workflow/v1/workflow/' \
+ --header 'Data-Partition-Id: <data-partition>' \
+ --header 'Content-Type: application/json' \
+ --header 'Authorization: Bearer {{TOKEN}}'
+ ```
+
+ You should see the ZGY converter DAG in the list. If it isn't in the response list, report the issue to the Azure team.
+
+3. Register Data partition to Seismic:
+
+ ```bash
+ curl --location --request POST '<url>/seistore-svc/api/v3/tenant/<data-partition>' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "esd": "{{data-partition}}.{{domain}}.com",
+ "gcpid": "{{data-partition}}",
+ "default_acl": "users.datalake.admins@{{data-partition}}.{{domain}}.com"}'
+ ```
+
+4. Create Legal tag
+
+ ```bash
+ curl --location --request POST '<url>/api/legal/v1/legaltags' \
+ --header 'Content-Type: application/json' \
+ --header 'data-partition-id: <data-partition>' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --data-raw '{
+ "name": "<tag-name>",
+ "description": "Legal Tag added for Seismic",
+ "properties": {
+ "contractId": "123456",
+ "countryOfOrigin": [
+ "US",
+ "CA"
+ ],
+ "dataType": "Public Domain Data",
+ "exportClassification": "EAR99",
+ "originator": "Schlumberger",
+ "personalData": "No Personal Data",
+ "securityClassification": "Private",
+ "expirationDate": "2025-12-25"
+ }
+ }'
+ ```
+
+5. Create Subproject. Use your previously created entitlements groups that you would like to add as ACLs (Access Control List) admins and viewers. If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users?](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition. You may have many subprojects within a data partition, so this command allows you to provide access to a specific subproject without providing access to an entire data partition. Data partition entitlements don't necessarily translate to the subprojects within it, so it's important to be explicit about the ACLs for each subproject, regardless of what data partition it is in.
+
+ > [!NOTE]
+ > Later in this tutorial, you'll need at least one `owner` and at least one `viewer`. These user groups will look like `data.default.owners` and `data.default.viewers`. Make sure to include one of each in your list of `acls` in the request below.
+
+ ```bash
+ curl --location --request POST '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition>/subproject/<subproject>' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --header 'Content-Type: text/plain' \
+ --data-raw '{
+ "admin": "test@email",
+ "storage_class": "MULTI_REGIONAL",
+ "storage_location": "US",
+ "acls": {
+ "admins": [
+ "<user-group>@<data-partition>.<domain>.com",
+ "<user-group>@<data-partition>.<domain>.com"
+ ],
+ "owners": [
+ "<user-group>@<data-partition>.<domain>.com"
+ ],
+ "viewers": [
+ "<user-group>@<data-partition>.<domain>.com"
+ ]
+ }
+ }'
+ ```
+
+ The following request is an example of the create subproject request:
+
+ ```bash
+ curl --location --request POST 'https://<instance>.energy.azure.com/seistore-svc/api/v3/subproject/tenant/<instance>-<data-partition-name>/subproject/subproject1' \
+ --header 'Authorization: Bearer eyJ...' \
+ --header 'Content-Type: text/plain' \
+ --data-raw '{
+ "admin": "test@email",
+ "storage_class": "MULTI_REGIONAL",
+ "storage_location": "US",
+ "acls": {
+ "admins": [
+ "service.seistore.p4d.tenant01.subproject01.admin@slb.p4d.cloud.slb-ds.com",
+ "service.seistore.p4d.tenant01.subproject01.editor@slb.p4d.cloud.slb-ds.com"
+ ],
+ "owners": [
+ "data.default.owners@slb.p4d.cloud.slb-ds.com"
+ ],
+ "viewers": [
+ "service.seistore.p4d.tenant01.subproject01.viewer@slb.p4d.cloud.slb-ds.com"
+ ]
+ }
+ }'
+ ```
+
+6. Patch Subproject with the legal tag you created above:
+
+ ```bash
+ curl --location --request PATCH '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition>/subproject/<subproject-name>' \
+ --header 'ltag: <Tag-name-above>' \
+ --header 'recursive: true' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --header 'Content-Type: text/plain' \
+ --data-raw '{
+ "admin": "test@email",
+ "storage_class": "MULTI_REGIONAL",
+ "storage_location": "US",
+ "acls": {
+ "admins": [
+ "<user-group>@<data-partition>.<domain>.com",
+ "<user-group>@<data-partition>.<domain>.com"
+ ],
+ "viewers": [
+ "<user-group>@<data-partition>.<domain>.com"
+ ]
+ }
+ }'
+ ```
+
+ > [!NOTE]
+ > Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`.
+
+7. Open the [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable) codebase and edit the `config.yaml` at the root. Update this config to:
+
+ ```yaml
+ seistore:
+ service: '{"azure": {"azureEnv":{"url": "<url>/seistore-svc/api/v3", "appkey": ""}}}'
+ url: '<url>/seistore-svc/api/v3'
+ cloud_provider: azure
+ env: glab
+ auth-mode: JWT Token
+ ssl_verify: false
+ auth_provider:
+ azure: '{
+ "provider": "azure",
+ "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token",
+ "scope_end":"/.default openid profile offline_access",
+ "redirect_uri":"http://localhost:8080",
+ "login_grant_type": "refresh_token",
+ "refresh_token": "<RefreshToken acquired earlier>"
+ }'
+ azure:
+ empty: none
+ ```
+
+ > [!NOTE]
+ > See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
+
+8. Run the following commands using **sdutil** to verify that it's working. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run the `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](tutorial-seismic-ddms-sdutil.md).
+
+ > [!NOTE]
+ > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
+
+ ```bash
+ python sdutil config init
+ python sdutil auth login
+ python sdutil ls sd://<data-partition>/<subproject>/
+ ```
+
+9. Upload your seismic file to your Seismic Store. Here's an example with a SEGY-format file called `source.segy`:
+
+ ```bash
+ python sdutil cp source.segy sd://<data-partition>/<subproject>/destination.segy
+ ```
+
+ If you would like to use a test file we supply instead, download [this file](https://community.opengroup.org/osdu/platform/testing/-/tree/master/Postman%20Collection/40_CICD_OpenVDS) to your local machine then run the following command:
++
+ ```bash
+ python sdutil cp ST10010ZC11_PZ_PSDM_KIRCH_FULL_T.MIG_FIN.POST_STACK.3D.JS-017536.segy sd://<data-partition>/<subproject>/destination.segy
+ ```
+
+ The sample records were meant to be similar to real-world data so a significant part of their content isn't directly related to conversion. This file is large and will take up about 1 GB of space.
+
+10. Create the manifest file (otherwise known as the records file)
+
+ ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, where the dataset definitions come from, visit [their website](https://www.equinor.com/en/what-we-do/digitalisation-in-our-dna/volve-field-data-village-download.html). Complete the following steps in order to create the manifest file:
+
+   * Clone the [repo](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/) and navigate to the folder `doc/sample-records/volve`.
+ * Edit the values in the `prepare-records.sh` bash script:
+
+ * `DATA_PARTITION_ID=<your-partition-id>`
+ * `ACL_OWNER=data.default.owners@<your-partition-id>.<your-tenant>.com`
+ * `ACL_VIEWER=data.default.viewers@<your-partition-id>.<your-tenant>.com`
+ * `LEGAL_TAG=<legal-tag-created-above>`
+
+ > [!NOTE]
+ > Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`.
+ * The output will be a JSON array with all objects and will be saved in the `all_records.json` file.
+   * Save the `filecollection_segy_id` and the `work_product_id` values from that JSON file to use in the conversion step. That way the converter knows where to look for the contents of your `all_records.json` (see the sketch after this list for one way to extract these values).
+
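+   One hedged way to pull those two values out of `all_records.json` from the command line (this assumes the IDs appear in `id` fields and that `jq` is installed; adjust the filters if your record layout differs):
+
+   ```bash
+   # Hypothetical helper: list candidate IDs from all_records.json.
+   # The filter expressions are assumptions about the record layout; verify them against your file.
+   jq -r '.[] | select(.kind | contains("dataset--FileCollection.SEGY")) | .id' all_records.json
+   jq -r '.[] | select(.kind | contains("work-product--WorkProduct")) | .id' all_records.json
+   ```
+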
+11. Insert the contents of your `all_records.json` file in storage for work-product, seismic trace data, seismic grid, and file collection (that is, copy and paste the contents of that file to the `--data-raw` field in the following command):
+
+ ```bash
+ curl --location --request PUT '<url>/api/storage/v2/records' \
+ --header 'Content-Type: application/json' \
+ --header 'data-partition-id: <data-partition>' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --data-raw '[
+ {
+ ...
+ "kind": "osdu:wks:work-product--WorkProduct:1.0.0",
+ ...
+ },
+ {
+ ...
+ "kind": "osdu:wks:work-product-component--SeismicTraceData:1.0.0"
+ ...
+ },
+ {
+ ...
+ "kind": "osdu:wks:work-product-component--SeismicBinGrid:1.0.0",
+ ...
+ },
+ {
+ ...
+ "kind": "osdu:wks:dataset--FileCollection.SEGY:1.0.0",
+ ...
+ }
+ ]
+ '
+ ```
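+
+   If you'd rather not paste the JSON inline, curl can read the request body straight from the file. A minimal sketch, assuming `all_records.json` is in your current directory:
+
+   ```bash
+   # Send the manifest file as the request body instead of pasting its contents inline.
+   curl --location --request PUT '<url>/api/storage/v2/records' \
+     --header 'Content-Type: application/json' \
+     --header 'data-partition-id: <data-partition>' \
+     --header 'Authorization: Bearer {{TOKEN}}' \
+     --data-binary '@all_records.json'
+   ```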
+
+12. Trigger the ZGY Conversion DAG to convert your data using the values you had saved above. Your call will look like this:
+
+ ```bash
+ curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \
+ --header 'data-partition-id: <data-partition>' \
+ --header 'Content-Type: application/json' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --data-raw '{
+ "executionContext": {
+ "data_partition_id": <data-partition>,
+ "sd_svc_api_key": "test-sd-svc",
+ "storage_svc_api_key": "test-storage-svc",
+ "filecollection_segy_id": "<data-partition>:dataset--FileCollection.SEGY:<guid>",
+ "work_product_id": "<data-partition>:work-product--WorkProduct:<guid>"
+ }
+ }'
+ ```
+
+13. Let the DAG run to the `succeeded` state. You can check the status using the workflow status call. You'll get the run ID in the response of the previous call.
+
+ ```bash
+ curl --location --request GET '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun/<run-id>' \
+ --header 'Data-Partition-Id: <data-partition>' \
+ --header 'Content-Type: application/json' \
+ --header 'Authorization: Bearer {{TOKEN}}'
+ ```
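+
+   If you want to poll rather than re-run the call manually, here's a minimal sketch (it assumes the response includes a `status` field and that `jq` is available; adjust the field name if your response differs):
+
+   ```bash
+   # Poll the workflow run status every 30 seconds until it reports "succeeded".
+   while true; do
+     STATUS=$(curl --silent --location --request GET '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun/<run-id>' \
+       --header 'Data-Partition-Id: <data-partition>' \
+       --header 'Authorization: Bearer {{TOKEN}}' | jq -r '.status')
+     echo "Workflow run status: $STATUS"
+     [ "$STATUS" = "succeeded" ] && break
+     sleep 30
+   done
+   ```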
+
+14. You can see if the converted file is present using the following command:
+
+ ```bash
+ python sdutil ls sd://<data-partition>/<subproject>
+ ```
+
+15. You can download and inspect the file using the [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable) `cp` command:
+
+ ```bash
+ python sdutil cp sd://<data-partition>/<subproject>/<filename.zgy> <local/destination/path>
+ ```
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+<!-- Add a context sentence for the following links -->
+> [!div class="nextstepaction"]
+> [How to convert SEGY to oVDS](/how-to-convert-segy-to-ovds.md)
energy-data-services How To Generate Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md
+
+ Title: How to generate a refresh token for Microsoft Energy Data Services #Required; page title is displayed in search results. Include the brand.
+description: This article describes how to generate a refresh token #Required; article description that is displayed in search results.
++++ Last updated : 08/25/2022+++
+# OAuth 2.0 authorization
+
+The following are the basic steps to use the OAuth 2.0 authorization code grant flow to get a refresh token from the Microsoft identity platform endpoint:
+
+ 1. Register your app with Azure AD.
+ 2. Get authorization.
+ 3. Get a refresh token.
+
+
+## 1. Register your app with Azure AD
+To use the Microsoft Energy Data Services Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app.
+
+To configure an app to use the OAuth 2.0 authorization code grant flow, save the following values when registering the app:
+
+- The `Directory (tenant) ID` that will be used in place of `{Tenant ID}`
+- The `application (client) ID` assigned by the app registration portal, which will be used instead of `client_id`.
+- A `client (application) secret`, either a password or a public/private key pair (certificate). The client secret isn't required for native apps. This secret will be used instead of `{AppReg Secret}` later.
+- A `redirect URI (or reply URL)` for your app to receive responses from Azure AD.
+
+> [!NOTE]
+> If no redirect URIs are specified, add a platform, select **Web**, add `http://localhost:8080`, and then select **Save**.
++
+For steps on how to configure an app in the Azure portal, see [Register your app](/azure/active-directory/develop/quickstart-register-app#register-an-application).
+
+## 2. Get authorization
+The first step to getting an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform /authorize endpoint. Azure AD will sign the user in and request their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Azure AD will return an `authorization_code` to your app that it can redeem at the Microsoft identity platform /token endpoint for an access token.
+
+### Authorization request
+
+The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This step is the interactive part of the flow, where the user takes action.
+
+The following shows an example of an authorization request:
+```bash
+ https://login.microsoftonline.com/{Tenant ID}/oauth2/v2.0/authorize?client_id={AppReg ID}
+ &response_type=code
+ &redirect_uri=http%3a%2f%2flocalhost%3a8080
+ &response_mode=query
+ &scope={AppReg ID}%2f.default&state=12345&sso_reload=true
+```
+
+| Parameter | Required? | Description |
+| | | |
+|`{Tenant ID}`|Required|The directory (tenant) ID of your Azure AD tenant.|
+| client_id |Required |The application ID assigned to your app in the [Azure portal](https://portal.azure.com). |
+| response_type |Required |The response type, which must include `code` for the authorization code flow. You can receive an ID token if you include it in the response type, such as `code+id_token`, and in this case, the scope needs to include `openid`.|
+| redirect_uri |Required |The redirect URI of your app, where authentication responses are sent and received by your app. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded. |
+| scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications; it indicates that your application needs a *refresh token* for extended access to resources. The client ID in the scope indicates that the issued token is intended for use by your registered client application. The `https://{tenant-name}/{app-id-uri}/{scope}` form indicates a permission to protected resources, such as a web API. |
+| response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. |
+| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
+
+### Authorization response
+In the response, you'll get an `authorization code` in the URL bar.
+
+```bash
+http://localhost:8080/?code=0.BRoAv4j5cvGGr0...au78f&state=12345&session....
+```
+The browser will redirect to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication.
+
+> [!NOTE]
+> The browser may say that the site can't be reached, but it should still have the authorization code in the URL bar.
+
+|Parameter| Description|
+| | |
+|code|The authorization_code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization codes are short-lived; they typically expire after about 10 minutes.|
+|state|If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. This check helps to detect [Cross-Site Request Forgery (CSRF) attacks](https://tools.ietf.org/html/rfc6749#section-10.12) against the client.|
+|session_state|A unique value that identifies the current user session. This value is a GUID, but should be treated as an opaque value that is passed without examination.|
+
+Copy the code between `code=` and `&state` (see the sketch below for a shell alternative).
+
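+If you'd rather extract the code with a shell one-liner than copy it by hand, here's a small sketch (the URL shown is a placeholder; paste your own redirect URL into the variable):
+
+```bash
+# Paste the full redirect URL from the browser's address bar between the quotes.
+REDIRECT_URL='http://localhost:8080/?code=0.BRoAv4j5cvGGr0...au78f&state=12345'
+# Print everything between "code=" and the next "&".
+echo "$REDIRECT_URL" | sed -n 's/.*code=\([^&]*\).*/\1/p'
+```
+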
+> [!WARNING]
+> Running the URL in Postman won't work as it requires extra configuration for token retrieval.
+
+## 3. Get a refresh token
+Your app uses the authorization code received in the previous step to request an access token by sending a POST request to the `/token` endpoint.
+
+### Sample request
+
+```bash
+ curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id={AppReg ID}
+ &scope={AppReg ID}%2f.default+openid+profile+offline_access
+ &code={authorization code}
+ &redirect_uri=http%3A%2F%2Flocalhost%3a8080
+ &grant_type=authorization_code
+ &client_secret={AppReg Secret}' 'https://login.microsoftonline.com/{Tenant ID}/oauth2/v2.0/token'
+```
+|Parameter |Required |Description |
+||||
+|tenant | Required | The {Tenant ID} value in the path of the request can be used to control who can sign into the application.|
+|client_id | Required | The application ID assigned to your app upon registration |
+|scope | Required | A space-separated list of scopes. The scopes that your app requests in this leg must be equivalent to or a subset of the scopes that it requested in the first (authorization) leg. If the scopes specified in this request span multiple resource servers, then the v2.0 endpoint will return a token for the resource specified in the first scope. |
+|code |Required |The authorization_code that you acquired in the first leg of the flow. |
+|redirect_uri | Required |The same redirect_uri value that was used to acquire the authorization_code. |
+|grant_type | Required | Must be authorization_code for the authorization code flow. |
+|client_secret | Required | The client secret that you created in the app registration portal for your app. It shouldn't be used in a native app, because client_secrets can't be reliably stored on devices. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side.|
+
+### Sample response
+
+```JSON
+{
+ "token_type": "Bearer",
+ "scope": "User.Read profile openid email",
+ "expires_in": 4557,
+ "access_token": "eyJ0eXAiOiJKV1QiLCJub25jZSI6IkJuUXdJd0ZFc...",
+ "refresh_token": "0.ARoAv4j5cvGGr0GRqy180BHbR8lB8cvIWGtHpawGN..."
+}
+```
+
+|Parameter | Description |
+|||
+|token_type |Indicates the token type value. The only type that Azure AD supports is Bearer. |
+|scope |A space-separated list of the permissions that the access_token is valid for. |
+|expires_in |How long the access token is valid (in seconds). |
+|access_token |The requested access token. Your app can use this token to call the protected resource, such as your Microsoft Energy Data Services instance APIs. |
+|refresh_token |An OAuth 2.0 refresh token. Your app can use this token to acquire extra access tokens after the current access token expires. Refresh tokens are long-lived, and can be used to retain access to resources for extended periods of time.|
+
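+If you're scripting this step, you can capture the refresh token directly into a shell variable instead of copying it out of the raw response. A minimal sketch, assuming `jq` is installed and the placeholders are replaced with your values:
+
+```bash
+# Request the token and keep only the refresh_token field from the JSON response.
+REFRESH_TOKEN=$(curl --silent -X POST -H "Content-Type: application/x-www-form-urlencoded" \
+  -d 'client_id={AppReg ID}&scope={AppReg ID}%2f.default+openid+profile+offline_access&code={authorization code}&redirect_uri=http%3A%2F%2Flocalhost%3a8080&grant_type=authorization_code&client_secret={AppReg Secret}' \
+  'https://login.microsoftonline.com/{Tenant ID}/oauth2/v2.0/token' | jq -r '.refresh_token')
+echo "$REFRESH_TOKEN"
+```
+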
+For more information, see [Generate refresh tokens](/graph/auth-v2-user#2-get-authorization).
+
+## Alternative options
+
+If you're struggling with getting a proper authorization token, follow the steps in [OSDU&trade; auth app](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/tree/release/0.15/tools/rest/osduauth) to locally run a static webpage that generates the refresh token for you. Once it's running, fill in the correct values in the UI of the static webpage (they may be filled in with the wrong values to start). Use the UI to generate a refresh token.
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+<!-- Add a context sentence for the following links -->
+> [!div class="nextstepaction"]
+> [How to convert SEGY to ZGY](how-to-convert-segy-to-zgy.md)
energy-data-services How To Integrate Airflow Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-airflow-logs-with-azure-monitor.md
+
+ Title: Integrate Airflow logs with Azure Monitor - Microsoft Energy Data Services Preview
+description: This is a how-to article on how to start collecting Airflow Task logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace.
++++ Last updated : 08/18/2022+++
+# Integrate Airflow logs with Azure Monitor
+
+This article describes how you can start collecting Airflow Logs for your Microsoft Energy Data Services instances into Azure Monitor. This integration feature helps you debug Airflow DAG run failures.
++
+## Prerequisites
++
+* An existing **Log Analytics Workspace**.
+  This workspace will be used to query the Airflow logs using the Kusto Query Language (KQL) query editor. For more information, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+++
+* An existing **storage account**:
+  It will be used to store JSON dumps of Airflow logs. The storage account doesn't have to be in the same subscription as your Log Analytics workspace.
++
+## Enabling diagnostic settings to collect logs in a storage account
+Every Microsoft Energy Data Services instance includes a built-in Azure Data Factory-managed Airflow instance. We collect Airflow logs for internal troubleshooting and debugging purposes. Airflow logs can be integrated with Azure Monitor in the following ways:
+
+* Storage account
+* Log Analytics workspace
+
+To access logs via any of the above two options, you need to create a Diagnostic Setting. Each Diagnostic Setting has three basic parts:
+
+| Title | Description |
+|-|-|
+| Name | This is the name of the diagnostic log. Ensure a unique name is set for each log. |
+| Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) |
+| Destinations | One or more destinations to send the logs to. All Azure services share the same set of possible destinations. Each diagnostic setting can define one or more destinations, but no more than one destination of a particular type. For this article, the destinations are a storage account and a Log Analytics workspace. |
+
+Follow these steps to set up diagnostic settings in the portal (a scripted alternative is sketched after the steps):
+
+1. Open Microsoft Energy Data Services' "**Overview**" page
+1. Select "**Diagnostic Settings**" from the left panel
+
+ [![Screenshot for Azure monitor diagnostic setting overview page. The page shows a list of existing diagnostic settings and the option to add a new one.](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png#lightbox)
++
+1. Select "**Add diagnostic setting**"
+
+1. Select "**Airflow Task Logs**" under Logs
+
+1. Select "**Archive to a storage account**"
+
+ [![Screenshot for creating a diagnostic setting to archive logs to a storage account. The image shows the subscription and the storage account chosen for a diagnostic setting.](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-destination-storage-account.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-destination-storage-account.png#lightbox)
+
+1. Verify the subscription and the storage account to which you want to archive the logs.
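+
+If you prefer to script this setup, the same diagnostic setting can be created with the Azure CLI. The sketch below is an assumption-heavy example: the log category name (`AirflowTaskLogs`) and the resource IDs are placeholders you should verify against your own instance before running it.
+
+```bash
+# Hedged sketch: create a diagnostic setting that archives Airflow task logs to a storage
+# account and sends them to a Log Analytics workspace. Verify the category name and IDs first.
+az monitor diagnostic-settings create \
+  --name "airflow-task-logs" \
+  --resource "<resource-id-of-your-Microsoft-Energy-Data-Services-instance>" \
+  --logs '[{"category": "AirflowTaskLogs", "enabled": true}]' \
+  --storage-account "<storage-account-resource-id>" \
+  --workspace "<log-analytics-workspace-resource-id>"
+```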
++
+## Navigate storage account to download Airflow logs
+
+After a diagnostic setting is created for archiving Airflow task logs into a storage account, you can navigate to the storage account **overview** page. You can then use the "Storage Browser" on the left panel to find the right JSON file that you want to investigate. Browsing through different directories is intuitive as you move from a year to a month to a day.
+
+1. Navigate through **Containers**, available on the left panel.
+
+ [![Screenshot for exploring archived logs in the containers of the Storage Account. The container will show logs from all the sources set up.](media/how-to-integrate-airflow-logs-with-azure-monitor/storage-account-containers-page-showing-collected-logs-explorer.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/storage-account-containers-page-showing-collected-logs-explorer.png#lightbox)
+
+2. Open the information pane on the right. It contains a "download" button to save the log file locally.
++
+1. Downloaded logs can be analyzed in any editor.
+++
+## Enabling diagnostic settings to integrate logs with Log Analytics Workspace
+
+You can integrate Airflow logs with Log Analytics Workspace by using **Diagnostic Settings** under the left panel of your Microsoft Energy Data Services instance overview page.
+
+[![Screenshot for creating a diagnostic setting. It shows the options to select subscription & Log Analytics Workspace with which to integrate.](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-choosing-destination-retention.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-choosing-destination-retention.png#lightbox)
+
+## Working with the integrated Airflow Logs in Log Analytics Workspace
+
+Data is retrieved from a Log Analytics Workspace using a query written in Kusto Query Language (KQL). A set of precreated queries is available for many Azure services (not available for Airflow at the moment) so that you don't require knowledge of KQL to get started.
++
+[![Screenshot for Azure Monitor Log Analytics page for viewing collected logs. Under log management, tables from all sources will be visible.](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-log-analytics-page-viewing-collected-logs.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-log-analytics-page-viewing-collected-logs.png#lightbox)
+
+1. Select Logs from your resource's menu. Log Analytics opens with the **Queries** window that includes prebuilt queries for your resource type.
++
+2. Browse through the available queries. Identify the one to run and select Run. The query is added to the query window and the results are returned.
++++
+## Next steps
+Now that you're collecting resource logs, create a log query alert to be proactively notified when interesting data is identified in your log data.
+
+> [!div class="nextstepaction"]
+> [Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md)
energy-data-services How To Integrate Elastic Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md
+
+ Title: Integrate elastic logs with Azure Monitor - Microsoft Energy Data Services Preview
+description: This is a how-to article on how to start collecting ElasticSearch logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace.
++++ Last updated : 08/18/2022+++
+# Integrate elastic logs with Azure Monitor
++
+This article describes how you can start collecting Elasticsearch logs for your Microsoft Energy Data Services instances in Azure Monitor. This integration feature helps you debug Elasticsearch-related issues inside Azure Monitor.
++
+## Prerequisites
+
+- You need to have a Log Analytics workspace. It will be used to query the Elasticsearch logs using the Kusto Query Language (KQL) query editor in the Log Analytics workspace. For more information, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
++
+- You need to have a storage account. It will be used to store JSON dumps of Elasticsearch & Elasticsearch Operator logs. The storage account doesn't have to be in the same subscription as your Log Analytics workspace.
++
+## Enabling Diagnostic Settings to collect logs in a storage account & a Log Analytics workspace
+Every Microsoft Energy Data Services instance includes a built-in managed Elasticsearch service. We collect Elasticsearch logs for internal troubleshooting and debugging purposes. You can get access to these logs by integrating Elasticsearch logs with Azure Monitor.
+++
+Each diagnostic setting has three basic parts:
+
+| Title | Description |
+|-|-|
+| Name | This is the name of the diagnostic log. Ensure a unique name is set for each log. |
+| Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) |
+| Destinations | One or more destinations to send the logs to. All Azure services share the same set of possible destinations. Each diagnostic setting can define one or more destinations, but no more than one destination of a particular type. For this article, the destinations are a storage account and a Log Analytics workspace. |
+
+We support two destinations for your Elasticsearch logs from Microsoft Energy Data Services instance:
+
+* Storage account
+* Log Analytics workspace
+++
+## Steps to enable diagnostic setting to collect Elasticsearch logs
+
+1. Open *Microsoft Energy Data Services* overview page
+1. Select *Diagnostic Settings* from the left panel
+
+ [![Screenshot for diagnostic settings overview page. It shows the list of existing settings as well as the option to create a new diagnostic setting.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-overview-page.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-overview-page.png#lightbox)
+
+1. Select *Add diagnostic setting*.
+
+1. Select *Elasticsearch logs* and *Elasticsearch Operator logs* under Log categories
+
+1. Select *Send to a Log Analytics workspace*
+
+1. Choose the subscription and the name of the Log Analytics workspace that you created as a prerequisite.
+
+
+ [![Screenshot for choosing destination settings for Log Analytics workspace. The image shows the subscription and Log Analytics workspace chosen.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-log-analytics-workspace.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-log-analytics-workspace.png#lightbox)
++
+1. Select *Archive to storage account*
+1. Choose the subscription and the name of the storage account that you created as a prerequisite.
+ [![Screenshot that shows choosing destination settings for storage account. Required fields include regions, subscription and storage account.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-archive-storage-account.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-archive-storage-account.png#lightbox)
+
+1. Select *Save*.
+
+Go back to the Diagnostic settings page. You'll now see the new diagnostic setting, along with the names of the destination storage account and Log Analytics workspace you chose for it.
+
+[![Screenshot for diagnostic settings overview page. The page shows a sample diagnostic setting to link Elasticsearch logs with Azure Monitor.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-created-page.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-created-page.png#lightbox)
+
+## View Elasticsearch logs in Log Analytics workspace or download them as JSON files using storage account
+
+### How to view & query logs in Log Analytics workspace
+The editor in the Log Analytics workspace supports Kusto Query Language (KQL) queries, which you can use to extract log data from the Elasticsearch service running in your Microsoft Energy Data Services instance. A command-line alternative is sketched after the examples below.
+
+
+* Run queries and see Elasticsearch logs in the Log Analytics workspace.
+
+ [![Screenshot for Elasticsearch logs. The image shows the simplest KQL query that shows all logs in the last 24 hours.](media/how-to-integrate-elastic-logs-with-azure-monitor/view-elasticsearch-logs.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/view-elasticsearch-logs.png#lightbox)
+
+* Run queries and see Elasticsearch Operator logs in the Log Analytics workspace.
+
+ [![Screenshot for elasticsearch Operator logs. The image shows the simplest KQL query that shows all logs in the last 24 hours.](media/how-to-integrate-elastic-logs-with-azure-monitor/view-elasticsearch-operator-logs.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/view-elasticsearch-operator-logs.png#lightbox)
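+
+As a command-line alternative to the portal editor, you can run the same kind of query with the Azure CLI. This is only a sketch: it assumes the `log-analytics` CLI extension is installed and that the table name shown matches the one you see under **Log Management** in your workspace.
+
+```bash
+# Hedged sketch: run a simple KQL query against the workspace from the command line.
+# Replace the workspace GUID and verify the table name in your own workspace first.
+az monitor log-analytics query \
+  --workspace "<log-analytics-workspace-guid>" \
+  --analytics-query "OEPElasticsearchLogs | take 10" \
+  --output table
+```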
++
+### How to download logs as JSON files from storage account
+
+* The *Containers* menu option in the left panel of your storage account's overview page allows you to browse through the various directories that neatly store your log files.
+
+
+ [![Screenshot for storage account that stores elastic logs. The logs can be viewed by selecting 'containers' under the data storage menu-item.](media/how-to-integrate-elastic-logs-with-azure-monitor/storage-account-containers-page.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/storage-account-containers-page.png#lightbox)
+
+* Logs are organized into different folders. Drill down by month, date and time.
+
+ [![Screenshot for JSON file view in storage account. The image shows tracked path from year, month, data, and time to locate a log file.](media/how-to-integrate-elastic-logs-with-azure-monitor/storage-account-log-file.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/storage-account-log-file.png#lightbox)
+
+* Select any JSON file in your containers to view other options.
+
+ [![Screenshot to view the downloaded JSON file from storage account. Other options shown include getting a URL for the JSON file.](media/how-to-integrate-elastic-logs-with-azure-monitor/storage-account-download-log-file-json.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/storage-account-download-log-file-json.png#lightbox)
+
+* Select *Download* option to download the JSON file. Open it in a code editor of your choice.
+
+ [![Screenshot to view downloaded JSON file locally. The images shows formatted logs in Visual Studio Code.](media/how-to-integrate-elastic-logs-with-azure-monitor/logs-downloaded-opened-editor.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/logs-downloaded-opened-editor.png#lightbox)
+
+
+## Next steps
+
+After collecting resource logs as explained in this article, there are more capabilities you can explore.
+
+* Create a log query alert to be proactively notified when interesting data is identified in your log data.
+ [Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md)
+
+* Start collecting logs from other sources such as Airflow in your Microsoft Energy Data Services instance.
+ [How to Integrate Airflow logs with Azure Monitor](how-to-integrate-airflow-logs-with-azure-monitor.md)
+
energy-data-services How To Manage Legal Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md
+
+ Title: How to manage legal tags in Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand.
+description: This article describes how to manage legal tags in Microsoft Energy Data Services Preview #Required; article description that is displayed in search results.
++++ Last updated : 08/19/2022+++
+# How to manage legal tags?
+A legal tag is the entity that represents the legal status of data in a Microsoft Energy Data Services Preview instance. It's a collection of properties that governs how data can be ingested and consumed. A legal tag is required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Microsoft Energy Data Services Preview instance and for the [consumption](concepts-index-and-search.md) of data from it. Legal tags are defined individually for each data partition.
+
+While the [entitlement service](concepts-entitlements.md) in a Microsoft Energy Data Services Preview instance defines which users can access data, a legal tag defines the overall access to the data across users. A user may have permission to manage data within a data partition, but they can't do so until certain legal requirements are fulfilled.
++
+## Create a legal tag
+Run the following curl command in Azure Cloud Shell (Bash) to create a legal tag for a given data partition of your Microsoft Energy Data Services Preview instance.
+
+```bash
+ curl --location --request POST 'https://<URI>/api/legal/v1/legaltags' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "name": "<legal-tag-name>",
+ "description": "<legal-tag-description>",
+ "properties": {
+ "contractId": "<contract-id>",
+ "countryOfOrigin": ["<country-of-origin>"],
+ "dataType": "<data-type>",
+        "expirationDate": "<expiration-date>",
+ "exportClassification": "<export-classification>",
+ "originator": "<originator>",
+ "personalData": "<personal-data>",
+ "securityClassification": "Public"
+ }
+ }'
+
+```
+
+### Sample request
+
+```bash
+ curl --location --request POST 'https://<instance>.energy.azure.com/api/legal/v1/legaltags' \
+ --header 'data-partition-id: <instance>-<data-partition-name>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "name": "<instance>-<data-partition-name>-legal-tag",
+ "description": "Microsoft Energy Data Services Preview Legal Tag",
+ "properties": {
+ "contractId": "A1234",
+ "countryOfOrigin": ["US"],
+ "dataType": "Public Domain Data",
+ "expirationDate": "2099-01-25",
+ "exportClassification": "EAR99",
+ "originator": "MyCompany",
+ "personalData": "No Personal Data",
+ "securityClassification": "Public"
+ }
+ }'
+
+```
+
+### Sample response
+
+```JSON
+ {
+ "name": "<instance>-<data-partition-name>-legal-tag",
+ "description": "Microsoft Energy Data Services Preview Legal Tag",
+ "properties": {
+ "countryOfOrigin": [
+ "US"
+ ],
+ "contractId": "A1234",
+ "expirationDate": "2099-01-25",
+ "originator": "MyCompany",
+ "dataType": "Public Domain Data",
+ "securityClassification": "Public",
+ "personalData": "No Personal Data",
+ "exportClassification": "EAR99"
+ }
+}
+```
+
+The country of origin should follow [ISO Alpha2 format](https://www.nationsonline.org/oneworld/country_code_list.htm).
+
+> [!NOTE]
+> The Create Legal Tag API internally prefixes the legal tag name with the data partition ID if it isn't already present. For instance, if the request uses the name ```legal-tag```, the created legal tag name will be ```<instancename>-<data-partition-id>-legal-tag```.
+
+```bash
+ curl --location --request POST 'https://<instance>.energy.azure.com/api/legal/v1/legaltags' \
+ --header 'data-partition-id: <instance>-<data-partition-name>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "name": "legal-tag",
+ "description": "Microsoft Energy Data Services Preview Legal Tag",
+ "properties": {
+ "contractId": "A1234",
+ "countryOfOrigin": ["US"],
+ "dataType": "Public Domain Data",
+ "expirationDate": "2099-01-25",
+ "exportClassification": "EAR99",
+ "originator": "MyCompany",
+ "personalData": "No Personal Data",
+ "securityClassification": "Public"
+ }
+ }'
+
+```
+The sample response will have the data-partition-id prefixed to the legal tag name:
++
+```JSON
+ {
+ "name": "<instance>-<data-partition-name>-legal-tag",
+ "description": "Microsoft Energy Data Services Preview Legal Tag",
+ "properties": {
+ "countryOfOrigin": [
+ "US"
+ ],
+ "contractId": "A1234",
+ "expirationDate": "2099-01-25",
+ "originator": "MyCompany",
+ "dataType": "Public Domain Data",
+ "securityClassification": "Public",
+ "personalData": "No Personal Data",
+ "exportClassification": "EAR99"
+ }
+}
+```
+
+## Get a legal tag
+Run the following curl command in Azure Cloud Shell (Bash) to get the legal tag associated with a data partition of your Microsoft Energy Data Services Preview instance.
+
+```bash
+ curl --location --request GET 'https://<URI>/api/legal/v1/legaltags/<legal-tag-name>' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+
+### Sample request
+
+```bash
+ curl --location --request GET 'https://<instance>.energy.azure.com/api/legal/v1/legaltags/<instance>-<data-partition-name>-legal-tag' \
+ --header 'data-partition-id: <instance>-<data-partition-name>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+
+### Sample response
+
+```JSON
+ {
+ "name": "<instance>-<data-partition-name>-legal-tag",
+ "description": "Microsoft Energy Data Services Preview Legal Tag",
+ "properties": {
+ "countryOfOrigin": [
+ "US"
+ ],
+ "contractId": "A1234",
+ "expirationDate": "2099-01-25",
+ "originator": "MyCompany",
+ "dataType": "Public Domain Data",
+ "securityClassification": "Public",
+ "personalData": "No Personal Data",
+ "exportClassification": "EAR99"
+ }
+ }
+```
+
+## Next steps
+<!-- Add a context sentence for the following links -->
+> [!div class="nextstepaction"]
+> [How to add more data partitions](how-to-add-more-data-partitions.md)
+
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
+
+ Title: How to manage users in Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand.
+description: This article describes how to manage users in Microsoft Energy Data Services Preview #Required; article description that is displayed in search results.
++++ Last updated : 08/19/2022+++
+# How to manage users?
+This article describes how to manage users in Microsoft Energy Data Services Preview. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/), which acts as a group-based authorization system for data partitions within your Microsoft Energy Data Services instance. For more information about Microsoft Energy Data Services Preview entitlements, see [entitlement services](concepts-entitlements.md).
++
+## Prerequisites
+
+Create a Microsoft Energy Data Services Preview instance using the guide at [How to create Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md).
+
+Keep the following values handy. These values will be used to:
+
+* Generate the access token, which you'll need to make valid calls to the Entitlements API of your Microsoft Energy Data Services Preview instance
+* Pass as parameters for different user management requests to the Entitlements API.
+
+#### Find `tenant-id`
+Navigate to the Azure Active Directory account for your organization. One way to do so is by searching for "Azure Active Directory" in the Azure portal's search bar. Once there, locate `tenant-id` under the basic information section in the *Overview* tab. Copy the `tenant-id` and paste it into an editor to use later.
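+
+If you have the Azure CLI installed and are signed in to the right tenant, a quick alternative is to read the tenant ID from the command line:
+
+```bash
+# Print the tenant ID of the subscription you're currently signed in to.
+az account show --query tenantId --output tsv
+```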
+++
+#### Find `client-id`
+Often called `app-id`, it's the same value that you used to register your application during the provisioning of your [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). You'll find the `client-id` in the *Essentials* pane of the Microsoft Energy Data Services Preview *Overview* page. Copy the `client-id` and paste it into an editor to use later.
+
+> [!NOTE]
+> The `client-id` passed in the entitlement API calls must be the same one that was used to provision your Microsoft Energy Data Services Preview instance.
+
+#### Find `client-secret`
+Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identify itself. Navigate to *App registrations*. Once there, open *Certificates & secrets* under the *Manage* section. Create a `client-secret` for the `client-id` that you used to create your Microsoft Energy Data Services Preview instance by selecting *New client secret*. Record the secret's `value` for use in your client application code.
+
+> [!NOTE]
+> Don't forget to record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page at the time of creation of 'client secret'.
+
+#### Find the `url` for your Microsoft Energy Data Services Preview instance
+Navigate to your Microsoft Energy Data Services Preview *Overview* page on Azure portal. Copy the URI from the essentials pane.
++
+#### Find the `data-partition-id` for your group
+You have two ways to get the list of data-partitions in your Microsoft Energy Data Services Preview instance.
+- By navigating to the *Data Partitions* menu item under the Advanced section of your Microsoft Energy Data Services Preview UI.
++
+- By clicking on the *view* below the *data partitions* field in the essentials pane of your Microsoft Energy Data Services Preview *Overview* page.
++
+## Generate access token
+
+You need to generate an access token to use the Entitlements API. Run the following curl command in Azure Cloud Shell (Bash) after replacing the placeholder values with the corresponding values found earlier in the prerequisites step.
+
+**Request format**
+
+```bash
+curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oauth2/token' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'scope=<client-id>/.default' \
+--data-urlencode 'client_id=<client-id>' \
+--data-urlencode 'client_secret=<client-secret>' \
+--data-urlencode 'resource=<client-id>'
+```
+
+**Sample response**
+
+```JSON
+ {
+ "token_type": "Bearer",
+ "expires_in": 86399,
+ "ext_expires_in": 86399,
+        "access_token": "abcdefgh123456............."
+ }
+```
+Copy the `access_token` value from the response. You'll need to pass it as a bearer token in the `Authorization` header of all calls to the Entitlements API of your Microsoft Energy Data Services Preview instance.
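+
+If you're scripting the later calls, you can capture the token into a shell variable instead of copying it by hand. A minimal sketch, assuming `jq` is installed and the placeholders are replaced:
+
+```bash
+# Request a token with the client credentials flow and keep only the access_token field.
+ACCESS_TOKEN=$(curl --silent --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oauth2/token' \
+  --header 'Content-Type: application/x-www-form-urlencoded' \
+  --data-urlencode 'grant_type=client_credentials' \
+  --data-urlencode 'scope=<client-id>/.default' \
+  --data-urlencode 'client_id=<client-id>' \
+  --data-urlencode 'client_secret=<client-secret>' \
+  --data-urlencode 'resource=<client-id>' | jq -r '.access_token')
+echo "$ACCESS_TOKEN"
+```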
+
+## User management activities
+You can manage users' access to your Microsoft Energy Data Services instance or its data partitions. As a prerequisite, you first need to find the `object-id` (OID) of the user(s).
+
+You'll need to pass the `object-id` (OID) of each user as a parameter in the calls to the Entitlements API of your Microsoft Energy Data Services Preview instance. The `object-id` (OID) is the Azure Active Directory user object ID.
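+
+One way to look up a user's object ID, assuming you have the Azure CLI and permission to read the directory (the property name may vary between CLI versions, so treat this as a sketch):
+
+```bash
+# Print the Azure AD object ID for a user by their sign-in name (UPN).
+az ad user show --id "user@contoso.com" --query id --output tsv
+```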
+++
+### Get the list of all available groups
+
+Run the following curl command in Azure Cloud Shell (Bash) to get all the groups that are available for your Microsoft Energy Data Services instance and its data partitions.
+
+```bash
+ curl --location --request GET "https://<URI>/api/entitlements/v2/groups/" \
+ --header 'data-partition-id: <data-partition>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+
+### Add user(s) to a users group
+
+Run the following curl command in Azure Cloud Shell (Bash) to add user(s) to the "Users" group using the entitlements service.
+
+```bash
+ curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/users@<data-partition-id>.dataservices.energy/members' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "<Object_ID>",
+ "role": "MEMBER"
+ }'
+```
+> [!NOTE]
+> The value to be sent for the param `email` is the object ID of the user and not the user's email.
+
+**Sample request**
+
+```bash
+ curl --location --request POST 'https://<instance>.energy.azure.com/api/entitlements/v2/groups/users@<instance>-<data-partition-name>.dataservices.energy/members' \
+ --header 'data-partition-id: <instance>-<data-partition-name>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }'
+```
+
+**Sample Response**
+
+```JSON
+ {
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }
+```
+
+### Add user(s) to an entitlements group
+
+Run the following curl command in Azure Cloud Shell (Bash) to add user(s) to an entitlements group using the entitlements service.
+
+```bash
+ curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/service.search.user@<data-partition-id>.dataservices.energy/members' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "<Object_ID>",
+ "role": "MEMBER"
+ }'
+```
+> [!NOTE]
+> The value to be sent for the param `email` is the object ID of the user and not the user's email.
+
+**Sample request**
+
+```bash
+ curl --location --request POST 'https://<instance>.energy.azure.com/api/entitlements/v2/groups/service.search.user@<instance>-<data-partition-name>.dataservices.energy/members' \
+ --header 'data-partition-id: <instance>-<data-partition-name>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }'
+```
+
+**Sample response**
+
+```JSON
+ {
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }
+```
+
+### Get entitlements groups for a given user
+
+Run the following curl command in Azure Cloud Shell (Bash) to get all the groups associated with a given user.
+
+```bash
+ curl --location --request GET 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+
+**Sample request**
+
+```bash
+ curl --location --request GET 'https://<instance>.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX/groups?type=none' \
+ --header 'data-partition-id: <instance>-<data-partition-name>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+**Sample response**
+
+```JSON
+ {
+ "desId": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "memberEmail": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "groups": [
+ {
+ "name": "users",
+ "description": "Datalake users",
+ "email": "users@<instance>-<data-partition-name>.dataservices.energy"
+ },
+ {
+ "name": "service.search.user",
+ "description": "Datalake Search users",
+ "email": "service.search.user@<instance>-<data-partition-name>.dataservices.energy"
+ }
+ ]
+ }
+```
+
+### Delete entitlement groups of a given user
+
+Run the following curl command in Azure Cloud Shell (Bash) to delete a given user from your Microsoft Energy Data Services instance's data partition.
+
+> [!NOTE]
+> **Do not** delete the OWNER of a group unless you have another OWNER who can manage users in that group.
+
+```bash
+ curl --location --request DELETE 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+
+**Sample request**
+
+```bash
+ curl --location --request DELETE 'https://<instance>.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX' \
+ --header 'data-partition-id: <instance>-<data-partition-name>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+
+**Sample response**
+A successful response returns no output.
+++
+## Next steps
+<!-- Add a context sentence for the following links -->
+Create a legal tag for your Microsoft Energy Data Services Preview instance's data partition.
+> [!div class="nextstepaction"]
+> [How to manage legal tags](how-to-manage-legal-tags.md)
+
+Begin your journey by ingesting data into your Microsoft Energy Data Services Preview instance.
+> [!div class="nextstepaction"]
+> [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md)
+> [!div class="nextstepaction"]
+> [Tutorial on manifest ingestion](tutorial-manifest-ingestion.md)
energy-data-services Overview Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md
+
+ Title: Overview of domain data management services - Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand.
+description: This article provides an overview of Domain data management services #Required; article description that is displayed in search results.
+++ Last updated : 09/01/2022++++
+# Domain data management services (DDMS)
+
+The energy industry works with data of an extraordinary magnitude, which has significant ramifications for storage and compute requirements. Geoscientists stream terabytes of seismic, well log, and other data types at full resolution. Immediate responsiveness of data is essential for all stages of petroleum exploration--particularly for geologic interpretation and analysis. 
+
+## Overview
+Domain data management services (DDMS) store, access, and retrieve metadata and bulk data from applications connected to the data platform. Developers, therefore, use DDMS to deliver seamless and secure consumption of data in the applications they build on Microsoft Energy Data Services Preview. The Microsoft Energy Data Services Preview suite of DDMS adheres to [Open Subsurface Data Universe](https://osduforum.org/) (OSDU&trade;) standards and provides enhancements in performance, geo-availability, and access controls. Each DDMS is optimized for its data type and can be extended to accommodate new data types. A DDMS preserves raw data and offers multi-format support and conversion for consuming applications such as Petrel while tracking lineage. Data within a DDMS is discoverable and governed by entitlement and legal tags.
++
+### OSDU&trade; definition
+
+- Highly optimized storage & access for bulk data, with highly opinionated APIs delivering the data required to enable domain workflows
+- Governed schemas that incorporate domain-specific perspective and type-safe accessors for registered entity types
+
+### Aspirational components for any DDMS
+
+ - Direct connection to OSDU&trade; core
+ - Connection to adjacent or proximal databases (blob storage, Cosmos, external) and client applications
+ - Configure infrastructure provisioning to enable optimal performance for data streaming and access
+
+### Additional components for most DDMS (may include but not be limited to)
+
+ - File format converter--for example, for Seismic DDMS: SGY to ZGY, etc.
+ - Hierarchy of data organization and chunking - Tenant, project, and data
+
+## Use cases and value add
+
+### Frictionless Exploration and Production (E&P)
+
+The Microsoft Energy Data Services Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they'll achieve unparalleled streaming performance and use the standards and output from OSDU&trade;. The Azure DDMS service will onboard the OSDU&trade; DDMS and Schlumberger proprietary DMS. Microsoft also continues to contribute to the OSDU&trade; community DDMS to ensure compatibility and architectural alignment.
+
+### Seamless connection between applications and data
+
+Customers can deploy applications on top of Microsoft Energy Data Services Preview that have been developed as per the OSDU&trade; standard. They're able to connect applications to Core Services and DDMS without spending extensive cycles on deployment. Customers can also easily connect DELFI to Microsoft Energy Data Services Preview, eliminating the cycles associated with Petrel deployments and connection to data management systems. By connecting applications to DDMS service, Geoscientists can execute integrated E&P workflows with unparalleled performance on Azure and use OSDU&trade; core services. For example, a geophysicist can pick well ties on a seismic volume in Petrel and stream data from the seismic DMS.
+
+## Types of DMS
+The service supports the following OSDU&trade; DMS types:
+
+### OSDU&trade; - Seismic DMS
+
+Seismic data is a fundamental data type for oil and gas exploration. Seismic data provides a geophysical representation of the subsurface that can be applied for prospect identification and drilling decisions. Typical seismic datasets represent a multi-kilometer survey and are therefore massive in size.
+
+Due to this extraordinary data size, geoscientists working on-premises struggle to use seismic data in domain applications. They suffer from crashes as the seismic dataset exceeds their workstation's RAM, which leads to significant non-productive time. To achieve performance needed for domain workflows, geoscientists must chunk a seismic dataset and view each chunk in isolation. As a result, users suffer from the time spent wrangling seismic data and the opportunity cost of missing the significant picture view of the subsurface and target reservoirs.
+
+The seismic DMS is part of the OSDU&trade; platform and enables users to connect seismic data to cloud storage to applications. It allows secure access to metadata associated with seismic data to efficiently retrieve and handle large blocks of data for OpenVDS, ZGY, and other seismic data formats. The DMS therefore enables users to stream huge amounts of data in OSDU&trade; compliant applications in real time. Enabling the seismic DMS on Microsoft Energy Data Services Preview opens a pathway for Azure customers to bring their seismic data to the cloud and take advantage of Azure storage and high performance computing.
+
+### OSDU&trade; - Wellbore DMS
+
+Well Logs are measurements taken while drilling, which tell energy companies information about the subsurface. Ultimately, they reveal whether hydrocarbons are present (or if the well is dry). Logs contain many attributes that inform geoscientists about the type of rock, its quality, and whether it contains oil, water, gas, or a mix. Energy companies use these attributes to determine the quality of a reservoir: how much oil or gas is present, its quality, and ultimately, economic viability. Maintaining Well Log data and ensuring easy access to historical logs is critical to energy companies. The Wellbore DMS facilitates access to this data in any OSDU&trade; compliant application. The Wellbore DMS was contributed by Schlumberger to OSDU&trade;.
+
+Well Log data can come in different formats. It's most often indexed by depth or time, and the increment of these measurements can vary. Well Logs typically contain multiple attributes for each vertical measurement. Well Logs can therefore be small or, for more modern Well Logs that use high-frequency data, larger than 1 GB. Well Log data is smaller than seismic; however, users will want to look at upwards of hundreds of wells at a time. This scenario is common in mature areas that have been heavily drilled, such as the Permian Basin in West Texas.
+
+Geoscientists therefore want to access numerous well logs in a single session. They often are looking at all historical drilling programs in an area. As a result, they'll look at Well Log data that was collected using a wide variety of instruments and technology. This data will vary widely in format, quality, and sampling. The Wellbore DMS resolves this data through the OSDU&trade; schemas to deliver the data to the consuming applications.
+
+Here are the services that the Wellbore DMS offers:
+
+- **Objects and Consumption** - The Wellbore DMS can consume Wellbore, log set, log, marker, trajectory, and dip objects. This covers most well-related exploration workflows
+- **Lifecycle** - The Wellbore DMS supports the dataset through creation and writing to storage, versioning, lineage, auditing, and deletion
+- **Ingestion** - connection to file, interpretation software, system of records, and acquisition systems
+- **Contextualization** (Contextualized Access)
+
+### OSDU&trade; - Well Delivery DMS
+
+The Well Delivery DMS stores critical drilling domain information related to the planning and execution of a well. Throughout a drilling program, engineers and domain experts need to access a wide variety of data types including activities, trajectories, risks, subsurface information, equipment used, fluid and cementing, rig utilization, and reports. Integrating this collection of data types is the cornerstone of drilling insights. At the same time, until now, there was no industry-wide standardization or enforced format. The common standards that the Well Delivery DMS enables are critical to the drilling value chain because they connect a diverse group of personas including operations, oil companies, service companies, logistics companies, etc.
+
+## Next steps
+Learn more about DDMS concepts below.
+> [!div class="nextstepaction"]
+> [DDMS Concepts](concepts-ddms.md)
energy-data-services Overview Microsoft Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-microsoft-energy-data-services.md
+
+ Title: What is Microsoft Energy Data Services Preview? #Required; page title is displayed in search results. Include the brand.
+description: This article provides an overview of Microsoft Energy Data Services Preview #Required; article description that is displayed in search results.
++++ Last updated : 09/08/2022 #Required; mm/dd/yyyy format.++
+# What is Microsoft Energy Data Services Preview?
+
+Microsoft Energy Data Services Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of the OSDU&trade; Data Platform, Microsoft's secure and trusted Azure cloud platform, and Schlumberger's extensive domain expertise. It allows customers to free data from silos and provides a strong data management, storage, and federation strategy. Microsoft Energy Data Services ensures compatibility with evolving community standards like OSDU&trade; and enables value addition through interoperability with both first-party and third-party solutions.
++
+## Principles
+
+Microsoft Energy Data Services conforms to the following principles:
+
+### Fully managed OSDU&trade; platform
+
+Microsoft Energy Data Services Preview is a first-party PaaS (Platform as a Service) offering where Microsoft manages the deployment, monitoring, management, scale, security, updates, and upgrades of the service so that the customers can focus on the value from the platform. Microsoft offers seamless upgrades to the latest OSDU&trade; milestone versions after testing and validation.
+
+Furthermore, Microsoft Energy Data Services Preview provides security capabilities like encryption for data-in-transit and data-at-rest. The authentication and authorization are provided by Azure Active Directory. Microsoft also assumes the responsibility of providing regular security patches and updates.
+
+Microsoft Energy Data Services Preview also supports multiple data partitions for every platform instance. More data partitions can also be created after creating an instance, as needed.
+
+As an Azure-based service, it also provides elasticity with auto-scaling to handle dynamically varying workload requirements. The service provides out-of-the-box compatibility and built-in integration with industry-leading applications from Schlumberger, including Petrel to provide quick time to value.
+
+Microsoft will provide support for the platform to enable our customers' use cases.
+
+### Accelerated innovation with openness in mind
+
+Microsoft Energy Data Services Preview is compatible with the OSDU&trade; Technical Standard, which enables seamless integration of existing applications that have been developed in alignment with the emerging requirements of the OSDU&trade; Standard.
+
+The platform's openness and integration with Microsoft Azure Marketplace brings industry-leading applications, solutions, and integration services offered by our extensive partner ecosystem to our customers.
+
+### Extensibility with the Microsoft ecosystem
+
+Most of our customers rely on ubiquitous tools and applications from Microsoft. The Microsoft Energy Data Services Preview platform is piloting how it can seamlessly work with widely used Microsoft apps like SharePoint for data ingestion, Synapse for data transformations and pipelines, and Power BI for data visualization, among other possibilities. A Power BI connector has already been released in the community, and partners are using these tools and connectors to enhance their integrations with Microsoft apps and services.
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+Follow the quickstart guide to deploy Microsoft Energy Data Services in your Azure subscription.
+> [!div class="nextstepaction"]
+> [Quickstart: Create Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md)
energy-data-services Quickstart Create Microsoft Energy Data Services Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/quickstart-create-microsoft-energy-data-services-instance.md
+
+ Title: Create a Microsoft Energy Data Services Preview instance #Required; page title is displayed in search results. Include the brand.
+description: Quickly create a Microsoft Energy Data Services Preview instance #Required; article description that is displayed in search results.
++++ Last updated : 08/18/2022+++
+# Quickstart: Create a Microsoft Energy Data Services Preview instance
++
+Get started by creating a Microsoft Energy Data Services Preview instance in the Azure portal using a web browser. You first register an application in Azure Active Directory and then use the application ID to create a Microsoft Energy Data Services instance in your chosen Azure subscription and region.
+
+The setup of Microsoft Energy Data Services Preview instance can be triggered using a simple interface on Azure portal and takes about 50 minutes to complete.
+
+Microsoft Energy Data Services Preview is a managed "Platform as a service (PaaS)" offering from Microsoft that builds on top of the [OSDU&trade;](https://osduforum.org/) Data Platform. Microsoft Energy Data Services Preview lets you ingest, transform, and export subsurface data by letting you connect your consuming in-house or third-party applications.
+
+## Prerequisites
+
+| Prerequisite | Details |
+| | - |
+Active Azure Subscription | You'll need the Azure subscription ID in which you want to install Microsoft Energy Data Services. You need to have appropriate permissions to create Azure resources in this subscription.
+Application ID | You'll need an [application ID](../active-directory/develop/application-model.md) (often referred to as "App ID" or a "client ID"). This application ID will be used for authentication to Azure Active Directory and will be associated with your Microsoft Energy Data Services instance. You can [create an application ID](../active-directory/develop/quickstart-register-app.md) by navigating to Active directory and selecting *App registrations* > *New registration*.
+Client Secret | Sometimes called an application password, a client secret is a string value that your app can use in place of a certificate to identify itself. You can [create a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret) by selecting *Certificates & secrets* > *Client secrets* > *New client secret*. Record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page.
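+
+If you'd rather script these prerequisites, the following Azure CLI sketch shows one way to register an application and add a client secret; the display name is a placeholder, and you still need to record the returned application ID and secret value.
+
+```bash
+# Register an application and print its application (client) ID (display name is a placeholder)
+az ad app create --display-name "meds-preview-app" --query appId --output tsv
+
+# Add a client secret to the application and print its value (store it securely; it isn't shown again)
+az ad app credential reset --id <application-id> --append --query password --output tsv
+```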
++
+## Create a Microsoft Energy Data Services Preview instance
++
+1. Save your **Application (client) ID** and **client secret** from Azure Active Directory to refer to them later in this quickstart.
+
+1. Sign in to [Microsoft Azure Marketplace](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden)
+
+ > [!IMPORTANT]
+ > *Microsoft Energy Data Services* is accessible on the Azure Marketplace only if you use the above Azure portal link.
++
+1. If you have access to multiple tenants, use the *Directories + subscriptions* filter in the top menu to switch to the tenant in which you want to install Microsoft Energy Data Services.
+
+1. Use the search bar in the Azure Marketplace (not the global Azure search bar on top of the screen) to search for *Microsoft Energy Data Services*.
+
+ [![Screenshot of the search result on Azure Marketplace that shows Microsoft energy data services. Microsoft Energy data services shows as a card.](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png)](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png#lightbox)
+
+1. In the search page, select *Create* on the card titled "Microsoft Energy Data Services (Preview)".
+
+1. A new window appears. Complete the *Basics* tab by choosing the *subscription*, *resource group*, and the *region* in which you want to create your instance of Microsoft Energy Data Services. Enter the *App ID* that you created during the prerequisite steps.
+
+ [![Screenshot of the basic details page after you select 'create' for Microsoft energy data services. This page allows you to enter both instance and data partition details.](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png)](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png#lightbox)
+
+
+ Some naming conventions to guide you at this step:
+
+ | Field | Name Validation |
+ | -- | |
+ Instance name | Only alphanumeric characters are allowed, and the value must be 1-15 characters long. The name is **not** case-sensitive. One resource group can't have two instances with the same name.
+ Application ID | Enter the valid Application ID that you generated and saved in the last section.
+ Data Partition name | The name should be 1-10 characters long and consist of lowercase alphanumeric characters and hyphens. It should start with an alphanumeric character and must not contain consecutive hyphens. The data partition names that you choose are automatically prefixed with your Microsoft Energy Data Services instance name. This compound name will be used to refer to your data partition in applications and API calls.
+
+ > [!NOTE]
+ > Microsoft Energy Data Services instance and data partition names, once created, cannot be changed later.
++
+1. Select **Next: Tags** and enter any tags that you want to specify. If you have none, this field can be left blank.
+
+ > [!TIP]
+ > Tags are metadata elements attached to resources. They're key-value pairs that help you identify resources based on settings that are relevant to your organization. If you want to track the deployment environment for your resources, add a key named `Environment`. To identify the resources deployed to production, give them a value of `Production`. [Learn more](../azure-resource-manager/management/tag-resources.md?tabs=json).
+
+ [![Screenshot of the tags tab on the create workflow. Any number of tags can be added and will show up in the list.](media/quickstart-create-microsoft-energy-data-services-instance/input-tags.png)](media/quickstart-create-microsoft-energy-data-services-instance/input-tags.png#lightbox)
+
+1. Select **Next: Review + Create**.
+
+1. Once the basic validation tests pass (validation takes a few seconds), review the Terms and Basic Details.
+
+ [![Screenshot of the review tab. It shows that data validation happens before you start deployment.](media/quickstart-create-microsoft-energy-data-services-instance/validation-check-after-entering-details.png)](media/quickstart-create-microsoft-energy-data-services-instance/validation-check-after-entering-details.png#lightbox)
+
+1. This step is optional. You can download an Azure Resource Manager (ARM) template and use it for automated deployments of Microsoft Energy Data Services in the future. Select *Download a template for automation* at the bottom-right of the screen (a CLI sketch for deploying the downloaded template follows this procedure).
+
+ [![Screenshot to help locate the link to download Azure Resource Manager template for automation. It is available on the bottom right of the *review + create* tab.](media/quickstart-create-microsoft-energy-data-services-instance/download-template-automation.png)](media/quickstart-create-microsoft-energy-data-services-instance/download-template-automation.png#lightbox)
+
+ [![Screenshot of the template that opens up when you select 'download template for automation'. Options are available to download or deploy from this page.](media/quickstart-create-microsoft-energy-data-services-instance/automate-deploy-resource-using-azure-resource-manager.png)](media/quickstart-create-microsoft-energy-data-services-instance/automate-deploy-resource-using-azure-resource-manager.png#lightbox)
+
+1. Select **Create** to start the deployment.
+
+1. Wait while the deployment happens in the background. Review the details of the instance created.
+
+ [![Screenshot of the deployment completion page. Options are available to view details of the deployment.](media/quickstart-create-microsoft-energy-data-services-instance/deployment-complete.png)](media/quickstart-create-microsoft-energy-data-services-instance/deployment-complete.png#lightbox)
+
+ [![Screenshot of the overview of Microsoft Energy Data Services instance page. Details such as data partitions, instance URI, and app ID are accessible.](media/quickstart-create-microsoft-energy-data-services-instance/overview-energy-data-services.png)](media/quickstart-create-microsoft-energy-data-services-instance/overview-energy-data-services.png#lightbox)
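+
+If you downloaded the ARM template in the optional step earlier, a later automated deployment might look like the following Azure CLI sketch; the resource group name and file names are placeholders.
+
+```bash
+# Deploy the downloaded template and parameters file into an existing resource group (names are placeholders)
+az deployment group create \
+  --resource-group <resource-group-name> \
+  --template-file template.json \
+  --parameters parameters.json
+```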
+
+
+## Delete a Microsoft Energy Data Services Preview instance
+
+Deleting a Microsoft Energy Data Services instance also deletes any data that you've ingested. This action is permanent, and the ingested data can't be recovered. To delete a Microsoft Energy Data Services instance, complete the following steps:
+
+1. Sign in to the Azure portal and delete the *resource group* in which these components are installed.
+
+2. This step is optional. Go to Azure Active Directory and delete the *app registration* that you linked to your Microsoft Energy Data Services instance.
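+
+Both cleanup steps can also be scripted. The following Azure CLI sketch assumes the instance was deployed into a dedicated resource group:
+
+```bash
+# Permanently delete the resource group that contains the Microsoft Energy Data Services instance
+az group delete --name <resource-group-name> --yes --no-wait
+
+# Optionally delete the app registration that was linked to the instance
+az ad app delete --id <application-id>
+```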
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+After provisioning a Microsoft Energy Data Services instance, you can learn about user management on this instance.
+> [!div class="nextstepaction"]
+> [How to manage users](how-to-manage-users.md)
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
+
+ Title: Release notes for Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand.
+description: This topic provides release notes of Microsoft Energy Data Services Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results.
++++ Last updated : 09/20/2022 #Required; mm/dd/yyyy format.+++
+# Release Notes for Microsoft Energy Data Services Preview
++
+Microsoft Energy Data Services is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
+
+- The latest releases
+- Known issues
+- Bug fixes
+- Deprecated functionality
+- Plans for changes
+
+## Microsoft Energy Data Services Preview Release
++
+### Key Announcement
+
+Microsoft Energy Data Services is now available in public preview.
+
+Microsoft Energy Data Services is developed in alignment with the emerging requirements of the OSDU&trade; Technical Standard, Version 1.0, and is currently aligned with the Mercury Release (R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes).
+
+### Partition & User Management
+
+- New data partitions can be [created dynamically](how-to-add-more-data-partitions.md) as needed after the platform is provisioned (up to five). Earlier, data partitions could only be created when provisioning a new instance.
+- The domain name for entitlement groups for [user management](how-to-manage-users.md) has been changed to "dataservices.energy".
+
+### Data Ingestion
+
+- Enabled support for user context in ingestion ([ADR: Issue 52](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52))
+  - User identity is preserved and passed on to all ingestion workflow related services using the newly introduced _x-on-behalf-of_ header. A user needs appropriate service-level entitlements on all dependent services involved in the ingestion workflow, and only users with appropriate data-level entitlements can modify data.
+- The workflow service payload is restricted to a maximum of 2 MB. If the payload exceeds this limit, the service returns an HTTP 413 error. This restriction prevents workflow requests from overwhelming the server.
+- Microsoft Energy Data Services uses Azure Data Factory (ADF) to run large scale ingestion workloads.
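+
+For illustration only, a workflow trigger call that passes user context might look like the following curl sketch; the endpoint path follows the community workflow service convention, and the workflow name, tokens, and payload are placeholders.
+
+```bash
+# Trigger an ingestion workflow run while passing the end user's identity in the x-on-behalf-of header
+# (illustrative placeholders throughout; not a documented request)
+curl --request POST "https://<instance>.energy.azure.com/api/workflow/v1/workflow/<workflow-name>/workflowRun" \
+  --header "Authorization: Bearer <service-token>" \
+  --header "x-on-behalf-of: <user-token>" \
+  --header "data-partition-id: <instance>-<data-partition-name>" \
+  --header "Content-Type: application/json" \
+  --data '{"executionContext": {}}'
+```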
+
+### Search
+
+- Improved security as Elasticsearch images are now pulled from Microsoft's internal Azure Container Registry instead of public repositories.
+- Improved security by enabling encryption in transit for Elasticsearch, Registration, and Notification services.
+
+### Monitoring
+
+- Diagnostic settings can be exported from [Airflow](how-to-integrate-airflow-logs-with-azure-monitor.md) and [Elasticsearch](how-to-integrate-elastic-logs-with-azure-monitor.md) to Azure Monitor.
+
+### Region Availability
+
+- Currently, Microsoft Energy Data Services is being offered in the following regions - South Central US, East US, West Europe, and North Europe.
+++
+
energy-data-services Tutorial Csv Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-csv-ingestion.md
+
+ Title: Microsoft Energy Data Services - Steps to perform a CSV parser ingestion #Required; page title is displayed in search results. Include the brand.
+description: This tutorial shows you how to perform CSV parser ingestion #Required; article description that is displayed in search results.
++++ Last updated : 09/19/2022++
+#Customer intent: As a customer, I want to learn how to use CSV parser ingestion so that I can load CSV data into the Microsoft Energy Data Services Preview instance.
++
+# Tutorial: Sample steps to perform a CSV parser ingestion
+
+CSV Parser ingestion provides the capability to ingest CSV files into the Microsoft Energy Data Services Preview instance.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Ingest a sample wellbore data CSV file into the Microsoft Energy Data Services Preview instance using Postman
+> * Search for storage metadata records created during the CSV Ingestion using Postman
++
+## Prerequisites
+
+### Get Microsoft Energy Data Services Preview instance details
+
+* You need a Microsoft Energy Data Services Preview instance. If you don't have one, follow the steps outlined in [Quickstart: Create a Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md)
+* For this tutorial, you will need the following parameters:
+
+ | Parameter | Value to use | Example | Where to find these values? |
+ | | |-- |-- |
+ | CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | App ID or Client_ID used when registering the application with the Microsoft Identity Platform. See [Register an application](../active-directory/develop/quickstart-register-app.md#register-an-application) |
+ | CLIENT_SECRET | Client secrets | _fl****************** | Sometimes called an *application password*, a client secret is a string value your app can use in place of a certificate to identify itself. See [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret)|
+ | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | Hover over your account name in the Azure portal to get the directory or tenant ID. Alternately, search and select *Azure Active Directory > Properties > Tenant ID* in the Azure portal. |
+ | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | Same as App ID or Client_ID mentioned above |
+ | refresh_token | Refresh Token value | 0.ATcA01-XWHdJ0ES-qDevC6r........... | Follow the [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to create a refresh token and save it. This refresh token is required later to generate a user token. |
+ | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Microsoft Energy Data Services instance|
+ | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Microsoft Energy Data Services instance|
+
+* Follow the [Manage users](how-to-manage-users.md) guide to add appropriate entitlements for the user running this tutorial
+
+### Set up and execute Postman requests
+
+* Download and install [Postman](https://www.postman.com/) desktop app
+* Import the following files into Postman:
+ * [CSV Workflow Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/IngestionWorkflows.postman_collection.json)
+ * [CSV Workflow Postman Environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/IngestionWorkflowEnvironment.postman_environment.json)
+
+ > [!NOTE]
+ > To import the Postman collection and environment variables, follow the steps outlined in [Importing data into Postman](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#importing-data-into-postman)
+
+* Update the **CURRENT_VALUE** of the Postman environment with the information obtained in [Microsoft Energy Data Services Preview instance details](#get-microsoft-energy-data-services-preview-instance-details)
+* The Postman collection for CSV parser ingestion contains 10 requests, which must be executed in sequence.
+* Make sure to choose the **Ingestion Workflow Environment** before triggering the Postman collection.
+ :::image type="content" source="media/tutorial-csv-ingestion/tutorial-postman-choose-environment.png" alt-text="Screenshot of the postman environment." lightbox="media/tutorial-csv-ingestion/tutorial-postman-choose-environment.png":::
+* Each request can be triggered by clicking the **Send** button.
+* On every request, Postman validates the actual API response code against the expected response code; if there's any mismatch, the test section indicates the failure.
+
+#### Successful Postman request
+
+ :::image type="content" source="media/tutorial-csv-ingestion/tutorial-postman-test-success.png" alt-text="Screenshot of a successful postman call." lightbox="media/tutorial-csv-ingestion/tutorial-postman-test-success.png":::
+
+#### Failed Postman request
+
+ :::image type="content" source="media/tutorial-csv-ingestion/tutorial-postman-test-failure.png" alt-text="Screenshot of a failure postman call." lightbox="media/tutorial-csv-ingestion/tutorial-postman-test-failure.png":::
+
+## Ingest a sample wellbore data CSV file into the Microsoft Energy Data Services Preview instance using Postman
+
+ 1. **Get a user token** - Generate the user token, which will be used to authenticate further API calls (a curl sketch of this request follows this list).
+ 2. **Create a schema** - Generate a schema that adheres to the columns present in the CSV file
+ 3. **Get schema details** - Get the schema created in the previous step and validate it
+ 4. **Create a legal tag** - Create a legal tag that will be added to the CSV data for data compliance purposes
+ 5. **Get a signed URL for uploading a CSV file** - Get the signed URL path to which the CSV file will be uploaded
+ 6. **Upload a CSV file** - Download the [Wellbore.csv](https://github.com/microsoft/meds-samples/blob/main/test-data/wellbore.csv) to your local machine, and select this file in Postman by clicking the **Select File** option as shown in the screenshot below.
+ :::image type="content" source="media/tutorial-csv-ingestion/tutorial-select-csv-file.png" alt-text="Screenshot of uploading a CSV file." lightbox="media/tutorial-csv-ingestion/tutorial-select-csv-file.png":::
+ 7. **Upload CSV file metadata** - Upload the file metadata information, such as file location and other relevant fields
+ 8. **Trigger a CSV parser ingestion workflow** - Triggers the CSV Parser ingestion workflow DAG.
+ 9. **Get CSV parser ingestion workflow status** - Gets the status of CSV Parser Dag Run.
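+
+For reference, the collection's "Get a user token" request is roughly equivalent to the following curl sketch, which exchanges the saved refresh token for a user token; the exact scope value is an assumption and may differ from the collection's request.
+
+```bash
+# Exchange the saved refresh token for a user access token (scope format is an assumption)
+curl --request POST "https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token" \
+  --header "Content-Type: application/x-www-form-urlencoded" \
+  --data-urlencode "grant_type=refresh_token" \
+  --data-urlencode "client_id=<CLIENT_ID>" \
+  --data-urlencode "client_secret=<CLIENT_SECRET>" \
+  --data-urlencode "scope=<CLIENT_ID>/.default openid offline_access" \
+  --data-urlencode "refresh_token=<refresh_token>"
+```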
+
+## Search for storage metadata records created during the CSV Ingestion using Postman
+
+ 1. **Search for ingested CSV records** - Search for the CSV records created earlier.
+ :::image type="content" source="media/tutorial-csv-ingestion/tutorial-search-success.png" alt-text="Screenshot of searching ingested CSV records." lightbox="media/tutorial-csv-ingestion/tutorial-search-success.png":::
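+
+For orientation, the search request in the collection is roughly equivalent to the following curl sketch against the Search service; the kind value is a placeholder for the schema kind created earlier in this tutorial.
+
+```bash
+# Query the Search service for records of the schema kind created by the CSV ingestion (kind is a placeholder)
+curl --request POST "https://<instance>.energy.azure.com/api/search/v2/query" \
+  --header "Authorization: Bearer <user-token>" \
+  --header "data-partition-id: <instance>-<data-partition-name>" \
+  --header "Content-Type: application/json" \
+  --data '{"kind": "<schema-kind-created-earlier>", "limit": 10}'
+```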
+
+## Next steps
+Advance to the next tutorial to learn how to perform manifest ingestion.
+> [!div class="nextstepaction"]
+> [Tutorial: Sample steps to perform a manifest-based file ingestion](tutorial-manifest-ingestion.md)
energy-data-services Tutorial Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-manifest-ingestion.md
+
+ Title: Microsoft Energy Data Services - Steps to perform a manifest-based file ingestion #Required; page title is displayed in search results. Include the brand.
+description: This tutorial shows you how to perform Manifest ingestion #Required; article description that is displayed in search results.
++++ Last updated : 08/18/2022++
+#Customer intent: As a customer, I want to learn how to use manifest ingestion so that I can load manifest information into the Microsoft Energy Data Services Preview instance.
++
+# Tutorial: Sample steps to perform a manifest-based file ingestion
+
+Manifest ingestion provides the capability to ingest manifests into the Microsoft Energy Data Services Preview instance.
+
+In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+> * Ingest sample manifests into the Microsoft Energy Data Services Preview instance using Postman
+> * Search for storage metadata records created during the manifest ingestion using Postman
++
+## Prerequisites
+
+Before beginning this tutorial, the following prerequisites must be completed:
+### Get Microsoft Energy Data Services Preview instance details
+
+* You need a Microsoft Energy Data Services Preview instance. If you don't have one, follow the steps outlined in [Quickstart: Create a Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md)
+* For this tutorial, you will need the following parameters:
+
+ | Parameter | Value to use | Example | Where to find these values? |
+ | | |-- |-- |
+ | CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | App ID or Client_ID used when registering the application with the Microsoft Identity Platform. See [Register an application](../active-directory/develop/quickstart-register-app.md#register-an-application) |
+ | CLIENT_SECRET | Client secrets | _fl****************** | Sometimes called an *application password*, a client secret is a string value your app can use in place of a certificate to identify itself. See [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret)|
+ | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | Hover over your account name in the Azure portal to get the directory or tenant ID. Alternately, search and select *Azure Active Directory > Properties > Tenant ID* in the Azure portal. |
+ | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | Same as App ID or Client_ID mentioned above |
+ | refresh_token | Refresh Token value | 0.ATcA01-XWHdJ0ES-qDevC6r........... | Follow the [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to create a refresh token and save it. This refresh token is required later to generate a user token. |
+ | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Microsoft Energy Data Services instance|
+ | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Microsoft Energy Data Services instance|
+
+* Follow the [Manage users](how-to-manage-users.md) guide to add appropriate entitlements for the user running this tutorial
++
+### Set up Postman and execute requests
+
+* Download and install [Postman](https://www.postman.com/) desktop app
+* Import the following files into Postman:
+ * [Manifest ingestion postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/IngestionWorkflows.postman_collection.json)
+ * [Manifest Ingestion postman environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/IngestionWorkflowEnvironment.postman_environment.json)
+ > [!NOTE]
+ > To import the Postman collection and environment variables, follow the steps outlined in [Importing data into Postman](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#importing-data-into-postman)
+* Update the **CURRENT_VALUE** of the Postman environment with the information obtained in [Get Microsoft Energy Data Services Preview instance details](#get-microsoft-energy-data-services-preview-instance-details)
+* The Postman collection for manifest ingestion contains multiple requests, which must be executed in sequence.
+* Make sure to choose the **Ingestion Workflow Environment** before triggering the Postman collection.
+ :::image type="content" source="media/tutorial-manifest-ingestion/tutorial-postman-choose-environment.png" alt-text="Screenshot of the Postman environment." lightbox="media/tutorial-manifest-ingestion/tutorial-postman-choose-environment.png":::
+* Each request can be triggered by clicking the **Send** button.
+* On every request, Postman validates the actual API response code against the expected response code; if there is any mismatch, the test section indicates the failure.
+
+#### Successful Postman request
++
+#### Failed Postman request
++
+## Ingest sample manifests into the Microsoft Energy Data Services Preview instance using Postman
+
+ 1. **Get a user token** - Generate the User token, which will be used to authenticate further API calls.
+ 2. **Create a legal tag** - Create a legal tag that will be added to the manifest data for data compliance purposes
+ 3. **Get a signed URL for uploading a file** - Get the signed URL path to which the manifest file will be uploaded
+ 4. **Upload a file** - Download the sample [Wellbore.csv](https://github.com/microsoft/meds-samples/blob/main/test-data/wellbore.csv) to your local machine (it could be any file type - CSV, LAS, JSON, and so on), and select this file in Postman by clicking the **Select File** option as shown in the screenshot below.
+ :::image type="content" source="media/tutorial-manifest-ingestion/tutorial-select-manifest-file.png" alt-text="Screenshot of a select file option." lightbox="media/tutorial-manifest-ingestion/tutorial-select-manifest-file.png":::
+ 5. **Upload file metadata** - Upload the file metadata information, such as file location and other relevant fields
+ 6. **Get the file metadata** - Call to validate if the metadata got created successfully
+ 7. **Ingest Master, Reference and Work Product Component (WPC) data** - Ingest the Master, Reference and Work Product Component manifest metadata.
+ 8. **Get manifest ingestion workflow status** - The workflow will start and will be in the **running** state. Keep querying until it changes state to **finished** (typically 20-30 seconds)
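+
+For orientation, the status check in the last step is roughly equivalent to the following curl sketch against the workflow service; the workflow name and run ID are placeholders returned by the trigger request.
+
+```bash
+# Poll the workflow run until its status changes from "running" to "finished" (placeholders throughout)
+curl --request GET "https://<instance>.energy.azure.com/api/workflow/v1/workflow/<workflow-name>/workflowRun/<run-id>" \
+  --header "Authorization: Bearer <user-token>" \
+  --header "data-partition-id: <instance>-<data-partition-name>"
+```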
+
+## Search for storage metadata records created during the manifest ingestion using Postman
+ - **Search Work Products** - Call Search service to retrieve the Work Product metadata records
+ - **Search Work Product Components** - Call Search service to retrieve the Work Product Component metadata records
+ - **Search for Dataset** - Call Search service to retrieve the Dataset metadata records
+ - **Search for Master data** - Call Search service to retrieve the Master metadata records
+ - **Search for Reference data** - Call Search service to retrieve the Reference metadata records
+
+## Next steps
+Advance to the next tutorial to learn about sdutil
+> [!div class="nextstepaction"]
+> [Tutorial: Seismic store sdutil](tutorial-seismic-ddms-sdutil.md)
++
energy-data-services Tutorial Seismic Ddms Sdutil https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md
+
+ Title: Microsoft Energy Data Services Preview - Seismic store sdutil tutorial #Required; page title is displayed in search results. Include the brand.
+description: Information on setting up and using sdutil, a command-line interface (CLI) tool that allows users to easily interact with seismic store. #Required; article description that is displayed in search results.
++++ Last updated : 09/09/2022++
+#Customer intent: As a developer, I want to learn how to use sdutil so that I can load data into the seismic store.
++
+# Tutorial: Seismic store sdutil
+
+Seismic store sdutil is a command-line Python utility designed to easily interact with seismic store.
+
+Seismic store is a cloud-based solution designed to store and manage datasets of any size in the cloud by enabling a secure way to access them through a scoped authorization mechanism. Seismic store overcomes the object size limitations imposed by cloud providers by managing generic datasets as multiple independent objects, and therefore provides a generic, reliable, and better-performing solution for handling data in cloud storage.
+
+**sdutil** is an intuitive command-line utility for interacting with seismic store and performing basic operations such as uploading or downloading datasets, managing users, listing folder contents, and more.
++
+## Prerequisites
+
+Install the following prerequisites based on your OS:
+
+Windows
+
+- [64-bit Python 3.8.3](https://www.python.org/ftp/python/3.8.3/python-3.8.3-amd64.exe)
+- [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
+- [Linux Subsystem Ubuntu](https://learn.microsoft.com/windows/wsl/install)
+
+Linux
+
+- [64-bit Python 3.8.3](https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz)
+
+Unix
+
+- [64-bit Python 3.8.3](https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz)
+- Apple Xcode C++ Build Tools
+
+Other requirements are addressed in the Installation section below.
+
+## Installation
+
+Follow the directions in the sdutil documentation for [running sdutil in Azure environments](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env).
+
+The utility requires other modules, noted in [requirements.txt](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/azure/stable/requirements.txt). You can either install the modules as is or install them in a virtual environment to keep your host clean of package conflicts. If you don't want to use a virtual environment, skip directly to installing the required dependencies below.
+
+```bash
+ # check if virtualenv is already installed
+ virtualenv --version
+
+ # if not install it via pip
+ pip install virtualenv
+
+ # create a virtual environment for sdutil
+ virtualenv sdutilenv
+
+    # activate the virtual environment
+ Windows: sdutilenv/Scripts/activate
+ Linux: source sdutilenv/bin/activate
+```
+
+Install required dependencies:
+
+```bash
+ # run it from the extracted sdutil folder
+ pip install -r requirements.txt
+```
+
+## Usage
+
+### Configuration
+
+1. Replace or edit `sdlib/config.yaml` with this [config-azure.yaml](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/azure/stable/docs/config-azure.yaml)
+
+2. Update three values in `config.yaml`:
+ ```yaml
+ - service: '{"Azure": {"azureGlabEnv":{"url": "<base-url-for-microsoft-energy-data-services-instance>/seistore-svc/api/v3", "appkey": ""}}}'
+ - url: '<base-url-for-microsoft-energy-data-services-instance>/seistore-svc/api/v3'
+ - "refresh_token": "<refresh-token-for-your-env>"
+ ```
+
+ > [!NOTE]
+ > Follow the directions in [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to obtain a token if not already present.
+
+3. Export or set the following environment variables:
+
+ ```bash
+ export AZURE_TENANT_ID=check-env-provisioning-team-as-specific-to-cluster
+ export AZURE_CLIENT_ID=check-env-provisioning-team-as-specific-to-cluster
+ export AZURE_CLIENT_SECRET=check-env-provisioning-team-as-specific-to-cluster
+ ```
+
+### Running the Tool
+
+Run the utility from the extracted utility folder by typing:
+
+```bash
+ python sdutil
+```
+
+If no arguments are specified, this menu will be displayed:
+
+```code
+ Seismic Store Utility
+
+ > python sdutil [command]
+
+ available commands:
+
+ * auth : authentication utilities
+ * unlock : remove a lock on a seismic store dataset
+ * version : print the sdutil version
+ * rm : delete a subproject or a space separated list of datasets
+ * mv : move a dataset in seismic store
+ * config : manage the utility configuration
+ * mk : create a subproject resource
+ * cp : copy data to(upload)/from(download)/in(copy) seismic store
+ * stat : print information like size, creation date, legal tag(admin) for a space separated list of tenants, subprojects or datasets
+ * patch : patch a seismic store subproject or dataset
+ * app : application authorization utilities
+ * ls : list subprojects and datasets
+ * user : user authorization utilities
+```
+
+The first time you use the utility, initialize it by invoking the `sdutil config init` command.
+
+```bash
+ python sdutil config init
+```
+
+Before you start using the utility and performing any operations, you must sign in to the system. When you run the following sign-in command, sdutil opens a sign-in page in a web browser.
+
+```bash
+ python sdutil auth login
+```
+
+Once you've successfully signed in, your credentials are valid for a week. After they expire, the system requires you to sign in again.
+
+> [!NOTE]
+> If you aren't getting the "sign-in Successful!" message, make sure your three environment variables are set and you've followed all steps in the "Configuration" section above.
+
+## Seistore Resources
+
+Before you start using the system, it's important to understand how resources are addressed in seismic store. There are three different types of resources managed by seismic store:
+
+- **Tenant Project:** the main project. Tenant is the first section of the seismic store path
+- **Subproject:** the working subproject, directly linked under the main tenant project. Subproject is the second section of the seismic store path.
+- **Dataset:** the seismic store dataset entity. Dataset is the third and last section of the seismic store path. The Dataset resource can be specified by using the form `path/dataset_name`, where `path` is optional and has the same meaning as a directory in a generic file system, and `dataset_name` is the name of the dataset entity.
+
+The seismic store URI is a string used to uniquely address a resource in the system and can be obtained by adding the prefix `sd://` before the required resource path:
+
+```code
+ sd://<tenant>/<subproject>/<path>*/<dataset>
+```
+
+For example, if we have a dataset `results.segy` stored in the directory structure `qadata/ustest` in the `carbon` subproject under the `gtc` tenant project, then the corresponding sdpath will be:
+
+```code
+ sd://gtc/carbon/qadata/ustest/results.segy
+```
+
+Every resource can be addressed by using the corresponding sdpath section
+
+```code
+ Tenant: sd://gtc
+ Subproject: sd://gtc/carbon
+ Dataset: sd://gtc/carbon/qadata/ustest/results.segy
+```
+
+## Subprojects
+
+A subproject in seismic store is a working unit where datasets can be saved. The system can handle multiple subprojects under a tenant project.
+
+A subproject resource can be created by a **Tenant Admin Only** with the following sdutil command:
+
+```code
+ > python sdutil mk *sdpath *admin@email *legaltag (options)
+
+ create a new subproject resource in the seismic store. user can interactively
+ set the storage class for the subproject. only tenant admins are allowed to create subprojects.
+
+ *sdpath : the seismic store subproject path. sd://<tenant>/<subproject>
+ *admin@email : the email of the user to be set as the subproject admin
+ *legaltag : the default legal tag for the created subproject
+
+ (options) | --idtoken=<token> pass the credential token to use, rather than generating a new one
+```
+
+## Users Management
+
+To use seismic store, a user must be registered (added) to at least one subproject resource with a role that defines their access level. Seismic store supports the following roles, scoped at the subproject level:
+
+- **admin**: read/write access + users management.
+- **viewer**: read/list access
+
+A user can be registered by a **Subproject Admin Only** with the following sdutil command:
+
+```code
+ > python sdutil user [ *add | *list | *remove | *roles ] (options)
+
+ *add $ python sdutil user add [user@email] [sdpath] [role]*
+ add a user to a subproject resource
+
+ [user@email] : email of the user to add
+ [sdpath] : seismic store subproject path, sd://<tenant>/<subproject>
+ [role] : user role [admin|viewer]
+```
+
+## Usage Examples
+
+The following example shows how to use sdutil to manage datasets with seismic store. For this example, we'll use `sd://gtc/carbon` as the subproject resource.
+
+```bash
+ # create a new file
+ echo "My Test Data" > data1.txt
+
+ # upload the created file to seismic store
+ ./sdutil cp data1.txt sd://gtc/carbon/test/mydata/data.txt
+
+ # list the content of the seismic store subproject
+ ./sdutil ls sd://gtc/carbon/test/mydata/ (display: data.txt)
+ ./sdutil ls sd://gtc (display: carbon)
+ ./sdutil ls sd://gtc/carbon (display: test/)
+ ./sdutil ls sd://gtc/carbon/test (display: data/)
+
+ # download the file from seismic store:
+ ./sdutil cp sd://gtc/carbon/test/mydata/data.txt data2.txt
+
+    # check that the original file matches the one downloaded from seismic store:
+ diff data1.txt data2.txt
+```
+
+## Utility Testing
+
+The test folder contains a set of integral/unit and regression/e2e tests written for [pytest](https://docs.pytest.org/en/latest/). These tests should be executed to validate the utility's functionality.
+
+Requirements
+
+ ```bash
+ # install required dependencies:
+ pip install -r test/e2e/requirements.txt
+ ```
+
+Integral/Unit tests
+
+ ```bash
+ # run integral/unit test
+ ./devops/scripts/run_unit_tests.sh
+
+  # test execution parameters
+ --mnt-volume = sdapi root dir (default=".")
+ ```
+
+Regression tests
+
+ ```bash
+  # run regression tests
+ ./devops/scripts/run_regression_tests.sh --cloud-provider= --service-url= --service-key= --idtoken= --tenant= --subproject=
+
+  # test execution parameters
+ --mnt-volume = sdapi root dir (default=".")
+ --disable-ssl-verify (to disable ssl verification)
+ ```
+
+## FAQ
+
+**How can I generate a new utility command?**
+
+Run the command generation script (`./command_gen.py`) to automatically generate the base infrastructure for integrating a new command into the sdutil utility. A folder with the command infrastructure will be created in `sdlib/cmd/new_command_name`.
+
+```bash
+ ./scripts/command_gen.py new_command_name
+```
+
+**How can I delete all files in a directory?**
+
+```bash
+ ./sdutil ls -lr sd://tenant/subproject/your/folder/here | xargs -r ./sdutil rm --idtoken=x.xxx.x
+```
+
+**How can I generate the utility changelog?**
+
+Run the changelog script (`./changelog-generator.sh`) to automatically generate the utility changelog.
+
+```bash
+ ./scripts/changelog-generator.sh
+```
+
+## Setup and usage for Microsoft Energy Data Services
+
+The following steps are for Windows Subsystem for Linux (Ubuntu 20.04).
+The Microsoft Energy Data Services instance uses the OSDU&trade; M12 version of sdutil.
+
+- Download the source code from the community [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable/) repository, Azure stable branch.
+
+- If the Python virtual environment package isn't installed, use the following commands; otherwise, skip to the next step.
+
+ ```bash
+ sudo apt-get update
+ sudo apt-get install python3-venv
+ ```
+
+- Create a new virtual environment and install the required packages
+
+ ```bash
+  # create a new virtual environment named sdutilenv
+ python3 -m venv sdutilenv
+
+  # activate the virtual environment
+  source sdutilenv/bin/activate
+
+  # install the python packages for sdutil
+ pip install -r requirements.txt
+ ```
+
+- Replace or edit `sdlib/config.yaml` with this [config-azure.yaml](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/azure/stable/docs/config-azure.yaml)
+
+- Update three values in `config.yaml`:
+ ```yaml
+ - service: '{"Azure": {"azureGlabEnv":{"url": "<base-url-for-microsoft-energy-data-services-instance>/seistore-svc/api/v3", "appkey": ""}}}'
+ - url: '<base-url-for-microsoft-energy-data-services-instance>/seistore-svc/api/v3'
+ - "refresh_token": "<refresh-token-for-your-env>"
+ ```
+
+ > [!NOTE]
+ > Follow the directions in [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to obtain a token if not already present.
+
+- Export or set the following environment variables:
+
+ ```bash
+ export AZURE_TENANT_ID=check-env-provisioning-team-as-specific-to-cluster
+ export AZURE_CLIENT_ID=check-env-provisioning-team-as-specific-to-cluster
+ export AZURE_CLIENT_SECRET=check-env-provisioning-team-as-specific-to-cluster
+ ```
+
+- Run the following commands to sign in, list, upload, and download:
+
+ ```bash
+ python sdutil config init
+ python sdutil auth login
+ ```
+
+- SAMPLE
+ ```code
+ (sdutilenv) > python sdutil config init
+ [one] Azure
+ Select the cloud provider: **enter 1**
+ Insert the Azure (azureGlabEnv) application key: **just press enter--no need to provide a key**
+
+ sdutil successfully configured to use Azure (azureGlabEnv)
+
+ Should display sign-in success message. Credentials expiry set to 1 hour.
+ ```
+- list files
+
+ ```bash
+ python sdutil ls sd://<tenant> e.g. sd://<datapartition>
+ python sdutil ls sd://<tenant>/<subproject> e.g. sd://<datapartition>/test
+ ```
+
+- upload file
+
+ ```bash
+ python sdutil cp local-dir/file-name-at-source.txt sd://<datapartition>/test/file-name-at-destination.txt
+ ```
+
+- download file
+
+ ```bash
+ python sdutil cp sd://<datapartition>/test/file-name-at-ddms.txt local-dir/file-name-at-destination.txt
+ ```
+
+ > [!NOTE]
+ > Don't use `cp` command to download VDS files. The VDS conversion results in multiple files, therefore the `cp` command won't be able to download all of them in one command. Use either the [SEGYExport](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYExport/README.html) or [VDSCopy](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/VDSCopy/README.html) tool instead. These tools use a series of REST calls accessing a [naming scheme](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/connection.html) to retrieve information about all the resulting VDS files.
+
+OSDU&trade; is a trademark of The Open Group.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Steps to interact with Well Delivery DDMS](tutorial-well-delivery-ddms.md)
energy-data-services Tutorial Seismic Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms.md
+
+ Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Energy Data Services #Required; page title is displayed in search results. Include the brand.
+description: This tutorial shows you how to interact with Seismic DDMS Microsoft Energy Data Services #Required; article description that is displayed in search results.
++++ Last updated : 3/16/2022+++
+# Tutorial: Sample steps to interact with Seismic DDMS
+
+Seismic DDMS provides the capability to operate on seismic data in the Microsoft Energy Data Services instance.
+
+In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+> * Register data partition to seismic
+> * Utilize Seismic DDMS APIs to store and retrieve seismic data
+
+## Prerequisites
+
+### Microsoft Energy Data Services instance details
+
+* Once the [Microsoft Energy Data Services instance](./quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details:
+
+ | Parameter | Value to use | Example |
+ | | |-- |
+ | CLIENT_ID | Application (client) ID | 3dbbb..... |
+ | CLIENT_SECRET | Client secrets | _fl****************** |
+ | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-2d7cd011db47 |
+ | SCOPE | Application (client) ID | 3dbbb..... |
+ | base_uri | URI | instancename.energy.azure.com |
+ | data-partition-id | Data Partition(s) | instancename-datapartitionid |
+
+### Postman setup
+
+1. Download and install [Postman](https://www.postman.com/) desktop app
+2. Import the following files into Postman:
+ * To import the Postman collection and environment variables, follow the steps outlined in [Importing data into Postman](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#importing-data-into-postman)
+ * [Smoke test Postman collection](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/raw/master/source/ddms-smoke-tests/Azure%20DDMS%20OSDU%20Smoke%20Tests.postman_collection.json)
+ * [Smoke Test Environment](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/raw/master/source/ddms-smoke-tests/%5BShip%5D%20osdu-glab.msft-osdu-test.org.postman_environment.json)
+
+3. Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Microsoft Energy Data Services instance details](#microsoft-energy-data-services-instance-details)
+
+## Register data partition to seismic
+
+ * Script to register
+ ```sh
+ curl --location --request POST '[url]/seistore-svc/api/v3/tenant/{{datapartition}}' \
+ --header 'Authorization: Bearer {{TOKEN}}' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "esd": "{{datapartition}}.{{domain}}",
+ "gcpid": "{{datapartition}}",
+ "default_acl": "users.datalake.admins@{{datapartition}}.{{domain}}.com"
+ }'
+ ```
+
+## Utilize Seismic DDMS APIs to store and retrieve seismic data
+
+To use the Seismic DDMS, follow the steps in the [Seismic DDMS sdutil tutorial](./tutorial-seismic-ddms-sdutil.md).
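+
+For orientation, once sdutil is configured against your instance as described in that tutorial, storing and retrieving a dataset typically looks like the following sketch; the data partition, subproject, and file names are placeholders.
+
+```bash
+# Upload a local SEG-Y file into the registered data partition's subproject, then download it back
+python sdutil cp results.segy sd://<datapartition>/<subproject>/qadata/results.segy
+python sdutil cp sd://<datapartition>/<subproject>/qadata/results.segy ./results-copy.segy
+```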
+
+## Next steps
+Follow the Seismic DDMS sdutil tutorial to learn how to load and manage data in the seismic store.
+> [!div class="nextstepaction"]
+> [Seismic DDMS SDUTIL tutorial](./tutorial-seismic-ddms-sdutil.md)
energy-data-services Tutorial Well Delivery Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-well-delivery-ddms.md
+
+ Title: Microsoft Energy Data Services Preview - Steps to interact with Well Delivery DDMS #Required; page title is displayed in search results. Include the brand.
+description: This tutorial shows you how to interact with Well Delivery DDMS #Required; article description that is displayed in search results.
++++ Last updated : 07/28/2022+++
+# Tutorial: Sample steps to interact with Well Delivery DDMS
+
+Well Delivery DDMS provides the capability to manage well related data in the Microsoft Energy Data Services Preview instance.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Utilize Well Delivery DDMS APIs to store and retrieve well data
++
+## Prerequisites
+
+### Get Microsoft Energy Data Services Preview instance details
+
+* Once the [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details:
+
+ | Parameter | Value to use | Example |
+ | | |-- |
+ | CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+ | CLIENT_SECRET | Client secrets | _fl****************** |
+ | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
+ | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+ | base_uri | URI | `<instance>`.energy.azure.com |
+ | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` |
+
+### How to set up Postman
+
+* Download and install [Postman](https://www.postman.com/) desktop app.
+* Import the following files into Postman:
+ * [Well Delivery DDMS Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WelldeliveryDDMS.postman_collection.json)
+ * [Well Delivery DDMS Postman Environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WelldeliveryDDMSEnviroment.postman_environment.json)
+
+* Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Microsoft Energy Data Services Preview instance details](#get-microsoft-energy-data-services-preview-instance-details).
+
+### How to execute Postman requests
+
+* The Postman collection for Well Delivery DDMS contains requests that allow interaction with well, wellbore, well planning, wellbore planning, well activity program, and well trajectory data.
+* Make sure to choose the **Well Delivery DDMS Environment** before triggering the Postman collection.
+* Each request can be triggered by clicking the **Send** button.
+* On every request, Postman validates the actual API response code against the expected response code; if there's any mismatch, the test section indicates the failure.
+
+### Generate a token
+
+1. **Get a Token** - Import the cURL command into Postman to generate the bearer token. Update the **bearerToken** variable in the Well Delivery DDMS environment. Use Bearer Token as the Authorization type in other API calls.
+ ```bash
+ curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \
+ --header 'Content-Type: application/x-www-form-urlencoded' \
+ --data-urlencode 'grant_type=client_credentials' \
+ --data-urlencode 'client_id={{CLIENT_ID}}' \
+ --data-urlencode 'client_secret={{CLIENT_SECRET}}' \
+ --data-urlencode 'scope={{SCOPE}}'
+ ```
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-generate-token.png" alt-text="Screenshot of the well delivery generate token." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-generate-token.png":::
++
+## Store and retrieve well data with Well Delivery DDMS APIs
+
+1. **Create a Legal Tag** - Create a legal tag that will be added automatically to the environment for data compliance purposes (a curl sketch of this request follows these steps).
+1. **Create Well** - Creates the well record.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well.png" alt-text="Screenshot of the well delivery - create well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well.png":::
+1. **Create Wellbore** - Creates the wellbore record.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well-bore.png" alt-text="Screenshot of the well delivery - create wellbore." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-well-bore.png":::
+1. **Get Well Version** - Returns the well record based on given WellId.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well.png" alt-text="Screenshot of the well delivery - get well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well.png":::
+1. **Get Wellbore Version** - Returns the wellbore record based on given WellboreId.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well-bore.png" alt-text="Screenshot of the well delivery - get wellbore." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-get-well-bore.png":::
+1. **Create ActivityPlan** - Create the ActivityPlan.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-activity-plan.png" alt-text="Screenshot of the well delivery - create activity plan." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-create-activity-plan.png":::
+1. **Get ActivityPlan by Well Id** - Returns the Activity Plan object for the wellId generated in the Create Well step.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-activity-plans-by-well.png" alt-text="Screenshot of the well delivery - get activity plan by well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-activity-plans-by-well.png":::
+1. **Delete wellbore record** - Deletes the specified wellbore record.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well-bore.png" alt-text="Screenshot of the well delivery - delete wellbore." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well-bore.png":::
+1. **Delete well record** - Deletes the specified well record.
+ :::image type="content" source="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well.png" alt-text="Screenshot of the well delivery - delete well." lightbox="media/tutorial-well-delivery/screenshot-of-the-well-delivery-delete-well.png":::
+
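+As a sketch of what the "Create a Legal Tag" request does, the following curl call targets the Legal service; the tag name and property values are illustrative placeholders and must match your compliance requirements.
+
+```bash
+# Create a legal tag in the target data partition (property values are illustrative placeholders)
+curl --request POST "https://<instance>.energy.azure.com/api/legal/v1/legaltags" \
+  --header "Authorization: Bearer <bearer-token>" \
+  --header "data-partition-id: <instance>-<data-partition-name>" \
+  --header "Content-Type: application/json" \
+  --data '{
+    "name": "<legal-tag-name>",
+    "description": "Legal tag for Well Delivery tutorial data",
+    "properties": {
+      "countryOfOrigin": ["US"],
+      "contractId": "A1234",
+      "expirationDate": "2099-01-25",
+      "originator": "<originator>",
+      "dataType": "Public Domain Data",
+      "securityClassification": "Public",
+      "personalData": "No Personal Data",
+      "exportClassification": "EAR99"
+    }
+  }'
+```
+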
+Completion of the above steps indicates successful creation and retrieval of well and wellbore records. Similar steps could be followed for well planning, wellbore planning, well activity program and wellbore trajectory data.
+
+## See also
+Advance to the next tutorial to learn how to interact with the Wellbore DDMS.
+> [!div class="nextstepaction"]
+> [Tutorial: Sample steps to interact with Wellbore DDMS](tutorial-wellbore-ddms.md)
energy-data-services Tutorial Wellbore Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-wellbore-ddms.md
+
+ Title: Tutorial - Sample steps to interact with Wellbore DDMS #Required; page title is displayed in search results. Include the brand.
+description: This tutorial shows you how to interact with Wellbore DDMS in Microsoft Energy Data Services #Required; article description that is displayed in search results.
++++ Last updated : 09/07/2022+++
+# Tutorial: Sample steps to interact with Wellbore DDMS
+
+Wellbore DDMS provides the capability to operate on well data in the Microsoft Energy Data Services instance.
+
+In this tutorial, you'll learn how to:
+> [!div class="checklist"]
+> * Utilize Wellbore DDMS APIs to store and retrieve wellbore and well log data
++
+## Prerequisites
+
+### Microsoft Energy Data Services instance details
+
+* Once the [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details:
+
+ | Parameter | Value to use | Example |
+ | --- | --- | --- |
+ | CLIENT_ID | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+ | CLIENT_SECRET | Client secrets | _fl****************** |
+ | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
+ | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+ | base_uri | URI | `<instance>.energy.azure.com` |
+ | data-partition-id | Data Partition(s) | `<instance>-<data-partition-name>` |
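These values feed the collection's token request. If you want to fetch a token outside Postman, a minimal sketch using Python's `requests` package and the standard client-credentials flow is shown below; treat the `/.default` suffix on the scope as an assumption and use whatever scope the Postman environment defines.

```python
import requests

# Placeholders; substitute the CLIENT_ID, CLIENT_SECRET, and TENANT_ID from the table above.
TENANT_ID = "<TENANT_ID>"
CLIENT_ID = "<CLIENT_ID>"
CLIENT_SECRET = "<CLIENT_SECRET>"

token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
response = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        # SCOPE from the table; the '/.default' suffix is an assumption.
        "scope": f"{CLIENT_ID}/.default",
    },
)
response.raise_for_status()
access_token = response.json()["access_token"]
print(access_token[:40], "...")
```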
+
+### Postman setup
+
+* Download and install the [Postman](https://www.postman.com/) desktop app.
+* Import the following files into Postman:
+ * [Wellbore DDMS Postman collection](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WellboreDDMS.postman_collection.json)
+ * [Wellbore DDMS Postman environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/WellboreDDMSEnvironment.postman_environment.json)
+
+* Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Microsoft Energy Data Services instance details](#microsoft-energy-data-services-instance-details)
+
+### Executing Postman Requests
+
+* The Postman collection for Wellbore DDMS contains requests that allow interaction with well, wellbore, well log, and well trajectory data.
+* Make sure to choose the **Wellbore DDMS Environment** before triggering the Postman collection.
+ :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-postman-choose-wellbore-environment.png" alt-text="Choose environment." lightbox="media/tutorial-wellbore-ddms/tutorial-postman-choose-wellbore-environment.png":::
+* Each request can be triggered by selecting the **Send** button.
+* On every request, Postman validates the actual API response code against the expected response code; if there's any mismatch, the test results section indicates the failure.
+
+**Successful Postman Call**
++
+**Failed Postman Call**
++
+### Utilize Wellbore DDMS APIs to store and retrieve wellbore and well log data
+
+1. **Get an SPN Token** - Generate the Service Principal Bearer token, which will be used to authenticate further API calls.
+2. **Create a Legal Tag** - Create a legal tag that will be added automatically to the environment for data compliance purposes.
+3. **Create Well** - Creates the well record in the Microsoft Energy Data Services instance.
+ :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-create-well.png" alt-text="Screenshot of creating a Well." lightbox="media/tutorial-wellbore-ddms/tutorial-create-well.png":::
+4. **Get Wells** - Returns the well data created in the last step.
+ :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-get-wells.png" alt-text="Screenshot of getting all wells." lightbox="media/tutorial-wellbore-ddms/tutorial-get-wells.png":::
+1. **Get Well Versions** - Returns the versions of each ingested well record.
+ :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-get-well-versions.png" alt-text="Screenshot of getting all Well versions." lightbox="media/tutorial-wellbore-ddms/tutorial-get-well-versions.png":::
+1. **Get specific Well Version** - Returns the details of the specified version of the specified well record.
+ :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-get-specific-well-version.png" alt-text="Screenshot of getting a specific well version." lightbox="media/tutorial-wellbore-ddms/tutorial-get-specific-well-version.png":::
+1. **Delete well record** - Deletes the specified record.
+ :::image type="content" source="media/tutorial-wellbore-ddms/tutorial-delete-well.png" alt-text="Screenshot of delete well record." lightbox="media/tutorial-wellbore-ddms/tutorial-delete-well.png":::
+
+***Successful completion of the above steps indicates successful ingestion and retrieval of well records.*** A sketch of the equivalent raw REST calls follows.
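Outside Postman, the same create-and-retrieve calls are plain REST requests against the instance. The sketch below is illustrative only: the route, schema kind, and request body shown here are assumptions, so copy the real ones from the imported Postman collection.

```python
import requests

# Placeholders; reuse base_uri, data-partition-id, and the bearer token from the steps above.
BASE_URI = "https://<instance>.energy.azure.com"
DATA_PARTITION_ID = "<instance>-<data-partition-name>"
ACCESS_TOKEN = "<token-from-the-Get-an-SPN-Token-step>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "data-partition-id": DATA_PARTITION_ID,
    "Content-Type": "application/json",
}

# Illustrative path and body only; the Postman collection has the authoritative versions.
wells_url = f"{BASE_URI}/api/os-wellbore-ddms/ddms/v3/wells"
well_record = [{"kind": "<schema-kind-from-collection>", "acl": {}, "legal": {}, "data": {}}]

create_response = requests.post(wells_url, headers=headers, json=well_record)
print("Create well:", create_response.status_code)

get_response = requests.get(wells_url, headers=headers)
print("Get wells:", get_response.status_code)
```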
+
+## Next steps
+Advance to the next tutorial to learn about sdutil.
+> [!div class="nextstepaction"]
+> [Tutorial: Seismic store sdutil](tutorial-seismic-ddms-sdutil.md)
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/whats-new.md
Last updated 07/20/2022
# What's new in Azure Event Grid?
->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Release+notes+-+Azure+Event+Grid%22&locale=en-us` into your ![RSS feed reader icon](./media/whats-new/feed-icon-16x16.png) feed reader.
+>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Release+notes+-+Azure+Event+Grid%22&locale=en-us` into your ![RSS feed reader icon](./media/whats-new/feed-icon-16x16.png) feed reader.
Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the features that are added or updated in a release.
event-hubs Dynamically Add Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/dynamically-add-partitions.md
You can specify the number of partitions at the time of creating an event hub. I
> Dynamic addition of partitions is available only in the **premium** and **dedicated** tiers of Event Hubs. > [!NOTE]
-> For Apache Kafka clients, an **event hub** maps to a **Kafka topic**. For more mappings between Azure Event Hubs and Apache Kafka, see [Kafka and Event Hubs conceptual mapping](event-hubs-for-kafka-ecosystem-overview.md#kafka-and-event-hub-conceptual-mapping)
+> For Apache Kafka clients, an **event hub** maps to a **Kafka topic**. For more mappings between Azure Event Hubs and Apache Kafka, see [Kafka and Event Hubs conceptual mapping](event-hubs-for-kafka-ecosystem-overview.md#kafka-and-event-hubs-conceptual-mapping)
## Update the partition count
event-hubs Event Hubs For Kafka Ecosystem Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-for-kafka-ecosystem-overview.md
You can often use the Event Hubs Kafka endpoint from your applications without c
Conceptually, Kafka and Event Hubs are very similar: they're both partitioned logs built for streaming data, whereby the client controls which part of the retained log it wants to read. The following table maps concepts between Kafka and Event Hubs.
-### Kafka and Event Hub conceptual mapping
+### Kafka and Event Hubs conceptual mapping
| Kafka Concept | Event Hubs Concept| | | | | Cluster | Namespace |
-| Topic | Event Hub |
+| Topic | An event hub |
| Partition | Partition| | Consumer Group | Consumer Group | | Offset | Offset|
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginMo
sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler ```
+> [!NOTE]
+> The above configuration properties are for the Java programming language. For **samples** that show how to use OAuth with Event Hubs for Kafka using different programming languages, see [samples on GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
++ ### Shared Access Signature (SAS) Event Hubs also provides **Shared Access Signatures (SAS)** for delegated access to Event Hubs for Kafka resources. Authorizing access by using the OAuth 2.0 token-based mechanism provides superior security and ease of use over SAS. The built-in roles can also eliminate the need for ACL-based authorization, which the user would otherwise have to maintain and manage. You can use this feature with your Kafka clients by specifying **SASL_SSL** for the protocol and **PLAIN** for the mechanism; a Python sketch follows the configuration excerpt below.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule require
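For Kafka clients that are configured in code rather than through JAAS properties, the same SAS-based authentication can be expressed directly in the client configuration. A rough sketch with the `confluent_kafka` Python package, where the namespace, event hub (topic) name, and connection string are placeholders you replace:

```python
from confluent_kafka import Producer

# Placeholders: substitute your Event Hubs namespace, event hub (topic), and connection string.
producer = Producer({
    "bootstrap.servers": "<namespace>.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "$ConnectionString",  # literal value expected by Event Hubs
    "sasl.password": "<event-hubs-namespace-connection-string>",
})

producer.produce("<event-hub-name>", value=b"hello from a Kafka client")
producer.flush()
```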
#### Samples For a **tutorial** with step-by-step instructions to create an event hub and access it using SAS or OAuth, see [Quickstart: Data streaming with Event Hubs using the Kafka protocol](event-hubs-quickstart-kafka-enabled-event-hubs.md).
-For more **samples** that show how to use OAuth with Event Hubs for Kafka, see [samples on GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
- ## Other Event Hubs features The Event Hubs for Apache Kafka feature is one of three protocols concurrently available on Azure Event Hubs, complementing HTTP and AMQP. You can write with any of these protocols and read with any other, so that your current Apache Kafka producers can continue publishing via Apache Kafka, but your reader can benefit from the native integration with Event Hubs' AMQP interface, such as Azure Stream Analytics or Azure Functions. Conversely, you can readily integrate Azure Event Hubs into AMQP routing networks as a target endpoint, and yet read data through Apache Kafka integrations.
Additionally, Event Hubs features such as [Capture](event-hubs-capture-overview.
## Apache Kafka feature differences
-The goal of Event Hubs for Apache Kafka is to provide access to Azure Event Hub's capabilities to applications that are locked into the Apache Kafka API and would otherwise have to be backed by an Apache Kafka cluster.
+The goal of Event Hubs for Apache Kafka is to provide access to Azure Event Hubs capabilities to applications that are locked into the Apache Kafka API and would otherwise have to be backed by an Apache Kafka cluster.
As explained [above](#is-apache-kafka-the-right-solution-for-your-workload), the Azure Messaging fleet provides rich and robust coverage for a multitude of messaging scenarios, and although the following features aren't currently supported through Event Hubs' support for the Apache Kafka API, we point out where and how the desired capability is available.
The client-side [compression](https://cwiki.apache.org/confluence/display/KAFKA/
This feature is fundamentally at odds with Azure Event Hubs' multi-protocol model, which allows for messages, even those sent in batches, to be individually retrievable from the broker and through any protocol.
-The payload of any Event Hub event is a byte stream and the content can be compressed with an algorithm of your choosing. The Apache Avro encoding format supports compression natively.
+The payload of any Event Hubs event is a byte stream and the content can be compressed with an algorithm of your choosing. The Apache Avro encoding format supports compression natively.
### Log Compaction
If you must use the Kafka Streams framework on Azure, [Apache Kafka on HDInsight
## Next steps This article provided an introduction to Event Hubs for Kafka. To learn more, see [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md).+
+For a **tutorial** with step-by-step instructions to create an event hub and access it using SAS or OAuth, see [Quickstart: Data streaming with Event Hubs using the Kafka protocol](event-hubs-quickstart-kafka-enabled-event-hubs.md).
+
+Also, see the [OAuth samples on GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md
Follow the latest instructions in the [Debezium documentation](https://debezium.
Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs: > [!IMPORTANT]
-> - Debezium will auto-create a topic per table and a bunch of metadata topics. Kafka **topic** corresponds to an Event Hubs instance (event hub). For Apache Kafka to Azure Event Hubs mappings, see [Kafka and Event Hubs conceptual mapping](event-hubs-for-kafka-ecosystem-overview.md#kafka-and-event-hub-conceptual-mapping).
+> - Debezium will auto-create a topic per table and a bunch of metadata topics. Kafka **topic** corresponds to an Event Hubs instance (event hub). For Apache Kafka to Azure Event Hubs mappings, see [Kafka and Event Hubs conceptual mapping](event-hubs-for-kafka-ecosystem-overview.md#kafka-and-event-hubs-conceptual-mapping).
> - There are different **limits** on number of event hubs in an Event Hubs namespace depending on the tier (Basic, Standard, Premium, or Dedicated). For these limits, See [Quotas](compare-tiers.md#quotas). ```properties
expressroute Expressroute Circuit Peerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-circuit-peerings.md
Previously updated : 12/13/2019 Last updated : 09/19/2022
ExpressRoute circuits connect your on-premises infrastructure to Microsoft throu
## <a name="circuits"></a>ExpressRoute circuits
-An ExpressRoute circuit represents a logical connection between your on-premises infrastructure and Microsoft cloud services through a connectivity provider. You can order multiple ExpressRoute circuits. Each circuit can be in the same or different regions, and can be connected to your premises through different connectivity providers.
+An ExpressRoute circuit represents a logical connection between your on-premises infrastructure and Microsoft cloud services through a connectivity provider. You can have multiple ExpressRoute circuits. Each circuit can be in the same or different regions, and can be connected to your premises through different connectivity providers.
-ExpressRoute circuits do not map to any physical entities. A circuit is uniquely identified by a standard GUID called as a service key (s-key). The service key is the only piece of information exchanged between Microsoft, the connectivity provider, and you. The s-key is not a secret for security purposes. There is a 1:1 mapping between an ExpressRoute circuit and the s-key.
+ExpressRoute circuits don't map to any physical entities. A circuit is uniquely identified by a standard GUID called as a service key (s-key). The service key is the only piece of information exchanged between Microsoft, the connectivity provider, and you. The s-key isn't a secret for security purposes. There's a 1:1 mapping between an ExpressRoute circuit and the s-key.
-New ExpressRoute circuits can include two independent peerings: Private peering and Microsoft peering. Whereas existing ExpressRoute circuits may contain three peerings: Azure Public, Azure Private and Microsoft. Each peering is a pair of independent BGP sessions, each of them configured redundantly for high availability. There is a 1:N (1 <= N <= 3) mapping between an ExpressRoute circuit and routing domains. An ExpressRoute circuit can have any one, two, or all three peerings enabled per ExpressRoute circuit.
+New ExpressRoute circuits can include two independent peerings: Private peering and Microsoft peering. Whereas existing ExpressRoute circuits may have three peerings: Azure Public, Azure Private and Microsoft. Each peering is a pair of independent BGP sessions, each of them configured redundantly for high availability. There's a 1:N (1 <= N <= 3) mapping between an ExpressRoute circuit and routing domains. An ExpressRoute circuit can have any one, two, or all three peerings enabled per ExpressRoute circuit.
Each circuit has a fixed bandwidth (50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 10 Gbps) and is mapped to a connectivity provider and a peering location. The bandwidth you select is shared across all circuit peerings
Each circuit has a fixed bandwidth (50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbp
Default quotas and limits apply for every ExpressRoute circuit. Refer to the [Azure Subscription and Service Limits, Quotas, and Constraints](../azure-resource-manager/management/azure-subscription-service-limits.md) page for up-to-date information on quotas.
+### Circuit SKU upgrade and downgrade
+
+#### Allowed workflow
+
+* Upgrade from Standard to Premium SKU.
+* Upgrade from Local to Standard or Premium SKU.
+ * Can only be done using Azure CLI or Azure PowerShell.
+ * Billing type must be **unlimited**.
+* Changing from *MeteredData* to *UnlimitedData*.
+
+#### Unsupported workflow
+
+* Downgrade from Premium to Standard SKU.
+* Changing from *UnlimitedData* to *MeteredData*.
+ ## <a name="routingdomains"></a>ExpressRoute peering An ExpressRoute circuit has multiple routing domains/peerings associated with it: Azure public, Azure private, and Microsoft. Each peering is configured identically on a pair of routers (in active-active or load sharing configuration) for high availability. Azure services are categorized as *Azure public* and *Azure private* to represent the IP addressing schemes.
You can connect more than one virtual network to the private peering domain. Rev
Connectivity to Microsoft online services (Microsoft 365 and Azure PaaS services) occurs through Microsoft peering. We enable bi-directional connectivity between your WAN and Microsoft cloud services through the Microsoft peering routing domain. You must connect to Microsoft cloud services only over public IP addresses that are owned by you or your connectivity provider and you must adhere to all the defined rules. For more information, see the [ExpressRoute prerequisites](expressroute-prerequisites.md) page.
-See the [FAQ page](expressroute-faqs.md) for more information on services supported, costs, and configuration details. See the [ExpressRoute Locations](expressroute-locations.md) page for information on the list of connectivity providers offering Microsoft peering support.
+For more information on services supported, costs, and configuration details, see the [FAQ page](expressroute-faqs.md). For information on the list of connectivity providers offering Microsoft peering support, see the [ExpressRoute locations](expressroute-locations.md) page.
## <a name="peeringcompare"></a>Peering comparison
The following table compares the three peerings:
You may enable one or more of the routing domains as part of your ExpressRoute circuit. You can choose to have all the routing domains put on the same VPN if you want to combine them into a single routing domain. You can also put them on different routing domains, similar to the diagram. The recommended configuration is that private peering is connected directly to the core network, and the public and Microsoft peering links are connected to your DMZ.
-Each peering requires separate BGP sessions (one pair for each peering type). The BGP session pairs provide a highly available link. If you are connecting through layer 2 connectivity providers, you are responsible for configuring and managing routing. You can learn more by reviewing the [workflows](expressroute-workflows.md) for setting up ExpressRoute.
+Each peering requires separate BGP sessions (one pair for each peering type). The BGP session pairs provide a highly available link. If you're connecting through layer 2 connectivity providers, you're responsible for configuring and managing routing. You can learn more by reviewing the [workflows](expressroute-workflows.md) for setting up ExpressRoute.
## <a name="health"></a>ExpressRoute health
-ExpressRoute circuits may be monitored for availability, connectivity to VNets and bandwidth utilization using [Network Performance Monitor](../networking/network-monitoring-overview.md) (NPM).
+ExpressRoute circuits can be monitored for availability, connectivity to VNets and bandwidth utilization using [ExpressRoute Network Insights](expressroute-network-insights.md).
-NPM monitors the health of Azure private peering and Microsoft peering. Check out our [post](https://azure.microsoft.com/blog/monitoring-of-azure-expressroute-in-preview/) for more information.
+Connection Monitor for ExpressRoute monitors the health of Azure private peering and Microsoft peering. For more information on configuration, see [Configure Connection Monitor for ExpressRoute](how-to-configure-connection-monitor.md).
## Next steps
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-introduction.md
Subscribe to the RSS feed and view the latest ExpressRoute feature updates on th
## Next steps * Ensure that all prerequisites are met. See [ExpressRoute prerequisites](expressroute-prerequisites.md).
-* [Learn module: Introduction to Azure ExpressRoute](/learn/modules/intro-to-azure-expressroute).
+* [Learn module: Introduction to Azure ExpressRoute](/training/modules/intro-to-azure-expressroute).
* Learn about [ExpressRoute connectivity models](expressroute-connectivity-models.md). * Find a service provider. See [ExpressRoute partners and peering locations](expressroute-locations.md).
expressroute Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/plan-manage-cost.md
You can also [export your cost data](../cost-management-billing/costs/tutorial-e
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/overview.md
Azure Firewall Manager has the following known issues:
## Next steps -- [Learn module: Introduction to Azure Firewall Manager](/learn/modules/intro-to-azure-firewall-manager/).
+- [Learn module: Introduction to Azure Firewall Manager](/training/modules/intro-to-azure-firewall-manager/).
- Review [Azure Firewall Manager deployment overview](deployment-overview.md) - Learn about [secured Virtual Hubs](secured-virtual-hub.md).
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Untrusted customer signed certificates|Customer signed certificates are not trus
- [Quickstart: Create an Azure Firewall and a firewall policy - ARM template](../firewall-manager/quick-firewall-policy.md) - [Quickstart: Deploy Azure Firewall with Availability Zones - ARM template](deploy-template.md) - [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md)-- [Learn module: Introduction to Azure Firewall](/learn/modules/introduction-azure-firewall/)
+- [Learn module: Introduction to Azure Firewall](/training/modules/introduction-azure-firewall/)
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Subscribe to the RSS feed and view the latest Azure Front Door feature updates o
* Learn about [Azure Front Door routing architecture](front-door-routing-architecture.md) * Learn how to [create an Azure Front Door profile](create-front-door-portal.md).
-* [Learn module: Introduction to Azure Front Door](/learn/modules/intro-to-azure-front-door/).
+* [Learn module: Introduction to Azure Front Door](/training/modules/intro-to-azure-front-door/).
hdinsight Apache Spark Jupyter Spark Sql Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-portal.md
SQL (Structured Query Language) is the most common and widely used language for
1. Verify the kernel is ready. The kernel is ready when you see a hollow circle next to the kernel name in the notebook. Solid circle denotes that the kernel is busy.
- :::image type="content" source="./media/apache-spark-jupyter-spark-sql/jupyter-spark-kernel-status.png " alt-text="Screenshot shows a Jupyter window with a PySpark indicator." border="true":::ark indicator." border="true":::
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql/jupyter-spark-kernel-status.png " alt-text="Screenshot shows a Jupyter window with a PySpark indicator." border="true":::
When you start the notebook for the first time, the kernel performs some tasks in the background. Wait for the kernel to be ready.
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
The template defines one Azure resource:
<!--
-Replace the line above with the following line once https://docs.microsoft.com/azure/templates/microsoft.healthcareapis/services goes live:
+Replace the line above with the following line once https://learn.microsoft.com/azure/templates/microsoft.healthcareapis/services goes live:
* [**Microsoft.HealthcareApis/services**](/azure/templates/microsoft.healthcareapis/services)
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
For optimized routing for your Communication services infrastructure prefixes, y
Please ensure that the registered prefixes are being announced over the direct interconnects established in that location. If the same prefix is announced in multiple peering locations, it is sufficient to register it with just one of the peerings in order to retrieve the unique prefix key after validation.
+> [!NOTE]
+> The Connection State of your peering connections must be Active before registering any prefixes.
+ **Prefix Registration** 1. If you are an Operator Connect Partner, you can see the "Register Prefix" tab on the left panel of your peering resource page.
Below are the steps to activate the prefix.
## FAQs:
+**Q.** When will my BGP peer come up?
+
+**A.** After the LAG comes up, our automated process configures BGP. Note that BFD must be configured on the non-Microsoft peer before route exchange starts.
+
+**Q.** When will peering IP addresses be allocated and displayed in the Azure portal?
+
+**A.** Our automated process allocates addresses and sends the information via email after the port is configured on our side.
+ **Q.** I have smaller subnets (smaller than /24) for my Communications services. Can the smaller subnets also be routed? **A.** Yes, Microsoft Azure Peering Service supports routing for smaller prefixes as well. Ensure that you register the smaller prefixes for routing and that the same prefixes are announced over the interconnects.
Below are the steps to activate the prefix.
**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects. This will ensure not only Communications but other cloud services are accessible from the same interconnect.
+**Q.** Are there any AS path constraints?
+
+**A.** Yes. For registered prefixes smaller than /24, the advertised AS path must be shorter than four. A path of four or longer will cause the advertisement to be rejected by policy.
**Q.** I need to set the prefix limit. How many routes will Microsoft announce? **A.** Microsoft announces roughly 280 prefixes on the internet, and this number may increase by 10-15% in the future. A limit of 400-500 is therefore a safe value to set as the "Max prefix count".
Below are the steps to activate the prefix.
**A.** Time varies depending on the number and location of sites, and on whether the peer is migrating existing private peerings or establishing new cabling. Carriers should plan for three or more weeks.
+**Q.** How is progress communicated outside of the portal status?
+
+**A.** Automated emails are sent at various milestones.
+ **Q.** Can we use APIs for onboarding? **A.** Currently there is no API support, and configuration must be performed via the web portal.
iot-central Concepts Faq Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-extend.md
The REST APIs enable extension scenarios such as:
- Programmatic management of your IoT Central applications. - Tight integration with other applications.
-To learn more, see [Manage an IoT Central application with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
+To learn more, see [Manage an IoT Central application with the REST API](/training/modules/manage-iot-central-apps-with-rest-api/).
## Next steps
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
In the previous screenshot you can see:
The deployment manifest doesn't include information about the telemetry the **SimulatedTemperatureSensor** module sends or the commands it responds to. Add these definitions to the device template manually before you publish it.
-To learn more, see [Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application](/learn/modules/connect-iot-edge-device-to-iot-central/).
+To learn more, see [Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application](/training/modules/connect-iot-edge-device-to-iot-central/).
### Update a deployment manifest
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
GET https://{your app subdomain}.azureiotcentral.com/api/scheduledJobs/scheduled
## Next steps
-Now that you've learned how to manage jobs with the REST API, a suggested next step is to learn how to [Manage IoT Central applications with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
+Now that you've learned how to manage jobs with the REST API, a suggested next step is to learn how to [Manage IoT Central applications with the REST API](/training/modules/manage-iot-central-apps-with-rest-api/).
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
You have several options to create device templates:
- When the device connects to IoT Central, have it send the model ID of the model it implements. IoT Central uses the model ID to retrieve the model from the model repository and to create a device template. Add any cloud properties and views your IoT Central application needs to the device template. - When the device connects to IoT Central, let IoT Central [autogenerate a device template](#autogenerate-a-device-template) definition from the data the device sends. - Author a device model using the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Manually import the device model into your IoT Central application. Then add the cloud properties and views your IoT Central application needs.-- You can also add device templates to an IoT Central application using the [REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) or the [CLI](howto-manage-iot-central-from-cli.md).
+- You can also add device templates to an IoT Central application using the [REST API](/training/modules/manage-iot-central-apps-with-rest-api/) or the [CLI](howto-manage-iot-central-from-cli.md).
> [!NOTE] > In each case, the device code must implement the capabilities defined in the model. The device code implementation isn't affected by the cloud properties and views sections of the device template.
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
Version 2022-05-31 of the data plane API lets you manage the following resources
The preview devices API also lets you [query telemetry and property values from your devices](howto-query-with-rest-api.md), [manage jobs](howto-manage-jobs-with-rest-api.md), and [manage data exports](howto-manage-data-export-with-rest-api.md).
-To get started with the data plane APIs, see [Explore the IoT Central APIs](/learn/modules/manage-iot-central-apps-with-rest-api/).
+To get started with the data plane APIs, see [Explore the IoT Central APIs](/training/modules/manage-iot-central-apps-with-rest-api/).
## Control plane operations
Version 2021-06-01 of the control plane API lets you manage the IoT Central appl
## Next steps
-Now that you have an overview of Azure IoT Central and are familiar with the capabilities of the IoT Central REST API, the suggested next step is to complete the [Explore the IoT Central APIs](/learn/modules/manage-iot-central-apps-with-rest-api/) Learn module.
+Now that you have an overview of Azure IoT Central and are familiar with the capabilities of the IoT Central REST API, the suggested next step is to complete the [Explore the IoT Central APIs](/training/modules/manage-iot-central-apps-with-rest-api/) Learn module.
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
An IoT Edge device connects directly to IoT Central. An IoT Edge device can send
IoT Central only sees the IoT Edge device, not the downstream devices connected to the IoT Edge device.
-To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central application](/learn/modules/connect-iot-edge-device-to-iot-central/).
+To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central application](/training/modules/connect-iot-edge-device-to-iot-central/).
### Gateways
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
Build [custom rules](tutorial-create-telemetry-rules.md) based on device state a
## Integrate with other services
-As an application platform, IoT Central lets you transform your IoT data into the business insights that drive actionable outcomes. [Rules](./tutorial-create-telemetry-rules.md), [data export](./howto-export-to-blob-storage.md), and the [public REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) are examples of how you can integrate IoT Central with line-of-business applications:
+As an application platform, IoT Central lets you transform your IoT data into the business insights that drive actionable outcomes. [Rules](./tutorial-create-telemetry-rules.md), [data export](./howto-export-to-blob-storage.md), and the [public REST API](/training/modules/manage-iot-central-apps-with-rest-api/) are examples of how you can integrate IoT Central with line-of-business applications:
![How IoT Central can transform your IoT data](media/overview-iot-central/transform.png)
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-define-gateway-device-type.md
In this tutorial, you learned how to:
Next you can learn how to: > [!div class="nextstepaction"]
-> [Add an Azure IoT Edge device to your Azure IoT Central application](/learn/modules/connect-iot-edge-device-to-iot-central/)
+> [Add an Azure IoT Edge device to your Azure IoT Central application](/training/modules/connect-iot-edge-device-to-iot-central/)
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
az group list
## Next steps
-This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try the learning path for [NVIDIA DeepStream development with Microsoft Azure](/learn/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
+This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try the learning path for [NVIDIA DeepStream development with Microsoft Azure](/training/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
iot-hub Horizontal Arm Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/horizontal-arm-route-messages.md
If your environment meets the prerequisites and you're familiar with using ARM t
dotnet --version ``` -- Download and unzip the [IoT C# Samples](/samples/azure-samples/azure-iot-samples-csharp/azure-iot-samples-for-csharp-net/).
+- Download and unzip the [IoT C# SDK](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip).
## Review the template
This section provides the steps to deploy the template, create a virtual device,
[![Deploy To Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.devices%2Fiothub-auto-route-messages%2Fazuredeploy.json)
-1. Open a command window and go to the folder where you unzipped the IoT C# Samples. Find the folder with the arm-read-write.csproj file. You create the environment variables in this command window. Log into the [Azure portal](https://portal.azure.com) to get the keys. Select **Resource Groups** then select the resource group used for this quickstart.
+1. Open a command window and go to the folder where you unzipped the IoT C# SDK. Find the folder with the arm-read-write.csproj file. You create the environment variables in this command window. Log into the [Azure portal](https://portal.azure.com) to get the keys. Select **Resource Groups** then select the resource group used for this quickstart.
![Select the resource group](./media/horizontal-arm-route-messages/01-select-resource-group.png)
This section provides the steps to deploy the template, create a virtual device,
![View the sent messages](./media/horizontal-arm-route-messages/08-messages.png) > [!NOTE]
- > These messages are encoded in UTF-32 and base64. If you read the message back, you have to decode it from base64 and utf-32 in order to read it as ASCII. If you're interested, you can use the method ReadOneRowFromFile in the Routing Tutorial to read one for from one of these message files and decode it into ASCII. ReadOneRowFromFile is in the IoT C# Samples repository that you unzipped for this quickstart. Here is the path from the top of that folder: *./iot-hub/Tutorials/Routing/SimulatedDevice/Program.cs.* Set the boolean `readTheFile` to true, and hardcode the path to the file on disk, and it will open and translate the first row in the file.
+ > These messages are encoded in UTF-32 and base64. If you read the message back, you have to decode it from base64 and UTF-32 in order to read it as ASCII. If you're interested, you can use the method ReadOneRowFromFile in the Routing Tutorial to read one row from one of these message files and decode it into ASCII. ReadOneRowFromFile is in the IoT C# SDK repository that you unzipped for this quickstart. Here is the path from the top of that folder: *./iothub/device/samples/getting started/RoutingTutorial/SimulatedDevice/Program.cs.* Set the boolean `readTheFile` to true, hardcode the path to the file on disk, and it will open and translate the first row in the file.
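If you only want to peek at a stored message, the decode is simple enough to script: base64-decode the stored row, then decode the resulting bytes as UTF-32. A minimal sketch, where the file name is a placeholder for a blob you downloaded from the storage container:

```python
import base64

# Placeholder path: point this at a message file downloaded from the storage container.
with open("downloaded-message-blob.txt", "r") as f:
    first_row = f.readline().strip()

raw_bytes = base64.b64decode(first_row)
# The sample writes UTF-32; use 'utf-32-le' explicitly if the data carries no byte-order mark.
print(raw_bytes.decode("utf-32"))
```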
You have deployed an ARM template to create an IoT Hub and a storage account, and run a program to send messages to the hub. The messages are then automatically stored in the storage account where they can be viewed.
iot-hub Iot Hub Amqp Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-amqp-support.md
import urllib
import time # Use generate_sas_token implementation available here:
-# https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-security#sas-token-structure
+# https://learn.microsoft.com/azure/iot-hub/iot-hub-devguide-security#sas-token-structure
from helper import generate_sas_token iot_hub_name = '<iot-hub-name>'
import uamqp
import urllib import time
-# Use the generate_sas_token implementation that's available here: https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-security#sas-token-structure
+# Use the generate_sas_token implementation that's available here: https://learn.microsoft.com/azure/iot-hub/iot-hub-devguide-security#sas-token-structure
from helper import generate_sas_token iot_hub_name = '<iot-hub-name>'
import urllib
import uuid # Use generate_sas_token implementation available here:
-# https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-security#sas-token-structure
+# https://learn.microsoft.com/azure/iot-hub/iot-hub-devguide-security#sas-token-structure
from helper import generate_sas_token iot_hub_name = '<iot-hub-name>'
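The `generate_sas_token` helper that these snippets import isn't reproduced in the article; the link in the comments describes the SAS token structure. A hedged sketch of what such a helper typically looks like, signing the URL-encoded resource URI and expiry with the base64 key using HMAC-SHA256:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(uri, key, policy_name=None, expiry_in_secs=3600):
    """Sketch of an IoT Hub SAS token generator; not the official helper."""
    ttl = int(time.time()) + expiry_in_secs
    sign_key = f"{urllib.parse.quote_plus(uri)}\n{ttl}"
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key), sign_key.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    token = (
        f"SharedAccessSignature sr={urllib.parse.quote_plus(uri)}"
        f"&sig={urllib.parse.quote_plus(signature)}&se={ttl}"
    )
    if policy_name is not None:
        token += f"&skn={policy_name}"
    return token
```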
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
static string GetContainerSasUri(CloudBlobContainer container)
In this article, you learned how to perform bulk operations against the identity registry in an IoT hub. Many of these operations, including how to move devices from one hub to another, are used in the [Managing devices registered to the IoT hub section of How to Clone an IoT Hub](iot-hub-how-to-clone.md#managing-the-devices-registered-to-the-iot-hub).
-The cloning article has a working sample associated with it, which is located in the IoT C# samples on this page: [Azure IoT Samples for C#](https://azure.microsoft.com/resources/samples/azure-iot-samples-csharp/), with the project being ImportExportDevicesSample. You can download the sample and try it out; there are instructions in the [How to Clone an IoT Hub](iot-hub-how-to-clone.md) article.
+The cloning article has a working sample associated with it, which is located in the IoT C# samples on this page: [Azure IoT hub service samples for C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/how%20to%20guides), with the project being ImportExportDevicesSample. You can download the sample and try it out; there are instructions in the [How to Clone an IoT Hub](iot-hub-how-to-clone.md) article.
To learn more about managing Azure IoT Hub, check out the following articles:
iot-hub Iot Hub Csharp Csharp File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-file-upload.md
These files are typically batch processed in the cloud, using tools such as [Azu
At the end of this article, you run two .NET console apps:
-* **FileUploadSample**. This device app uploads a file to storage using a SAS URI provided by your IoT hub. You'll run this app from the Azure IoT C# samples repository that you download in the prerequisites.
+* **FileUploadSample**. This device app uploads a file to storage using a SAS URI provided by your IoT hub. You'll run this app from the Azure IoT C# SDK repository that you download in the prerequisites.
* **ReadFileUploadNotification**. This service app receives file upload notifications from your IoT hub. You'll create this app.
At the end of this article, you run two .NET console apps:
dotnet --version ```
-* Download the Azure IoT C# samples from [Download sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip) and extract the ZIP archive.
+* Download the Azure IoT C# SDK from [Download sample](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip) and extract the ZIP archive.
* Port 8883 should be open in your firewall. The sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
At the end of this article, you run two .NET console apps:
## Upload file from a device app
-In this article, you use a sample from the Azure IoT C# samples repository you downloaded earlier as the device app. You can open the files below using Visual Studio, Visual Studio Code, or a text editor of your choice.
+In this article, you use a sample from the Azure IoT C# SDK repository you downloaded earlier as the device app. You can open the files below using Visual Studio, Visual Studio Code, or a text editor of your choice.
-The sample is located at **azure-iot-samples-csharp/iot-hub/Samples/device/FileUploadSample** in the folder where you extracted the Azure IoT C# samples.
+The sample is located at **azure-iot-sdk-csharp/iothub/device/samples/getting started/FileUploadSample** in the folder where you extracted the Azure IoT C# SDK.
Examine the code in **FileUpLoadSample.cs**. This file contains the main sample logic. After creating an IoT Hub device client, it follows the standard three-part procedure for uploading files from a device:
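The three parts are, roughly: ask IoT Hub for a SAS URI, upload the file to that blob URI, and notify IoT Hub of the result so it can raise a file upload notification. For comparison only, here is a rough sketch of the same flow with the Python device SDK (`azure-iot-device` plus `azure-storage-blob`); it isn't the C# sample, and the dictionary keys assume the shape of the Python SDK's storage-info payload.

```python
from azure.iot.device import IoTHubDeviceClient
from azure.storage.blob import BlobClient

# Placeholders: a device connection string and a local file to upload.
CONNECTION_STRING = "<device-connection-string>"
FILE_PATH = "TestPayload.txt"

device_client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
device_client.connect()

# 1. Ask IoT Hub for blob storage info (SAS token plus a correlation ID).
storage_info = device_client.get_storage_info_for_blob(FILE_PATH)

# 2. Upload the file straight to Azure Blob Storage using the SAS URI.
sas_url = "https://{}/{}/{}{}".format(
    storage_info["hostName"],
    storage_info["containerName"],
    storage_info["blobName"],
    storage_info["sasToken"],
)
with open(FILE_PATH, "rb") as data:
    BlobClient.from_blob_url(sas_url).upload_blob(data, overwrite=True)

# 3. Tell IoT Hub the upload finished so it can raise a file upload notification.
device_client.notify_blob_upload_status(storage_info["correlationId"], True, 200, "upload succeeded")
device_client.shutdown()
```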
Now you're ready to run the applications.
-1. Next, run the device app to upload the file to Azure storage. Open a new command prompt and change folders to the **azure-iot-samples-csharp-main\iot-hub\Samples\device\FileUploadSample** under the folder where you expanded the Azure IoT C# samples. Run the following commands. Replace the `{Your device connection string}` placeholder value in the second command with the device connection string you saw when you registered a device in the IoT Hub.
+1. Next, run the device app to upload the file to Azure storage. Open a new command prompt and change folders to the **azure-iot-sdk-csharp\iothub\device\samples\getting started\FileUploadSample** under the folder where you expanded the Azure IoT C# SDK. Run the following commands. Replace the `{Your device connection string}` placeholder value in the second command with the device connection string you saw when you registered a device in the IoT Hub.
```cmd/sh dotnet restore
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
You can use the Event Hubs SDKs to read from the built-in endpoint in environmen
| Language | Sample | | -- | |
-| .NET | [ReadD2cMessages .NET](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/iot-hub/Quickstarts/ReadD2cMessages) |
+| .NET | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/Getting%20Started/ReadD2cMessages) |
| Java | [read-d2c-messages Java](https://github.com/Azure-Samples/azure-iot-samples-java/tree/master/iot-hub/Quickstarts/read-d2c-messages) | | Node.js | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) | | Python | [read-dec-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
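The samples in the table all follow the same pattern: connect to the IoT hub's Event Hubs-compatible endpoint and receive events from each partition. A rough Python sketch, assuming the `azure-eventhub` package and treating the connection string and entity name as placeholders you copy from your hub's built-in endpoints settings:

```python
from azure.eventhub import EventHubConsumerClient

# Placeholders: the Event Hubs-compatible connection string and name for your IoT hub.
CONNECTION_STR = "<event-hubs-compatible-connection-string>"
EVENTHUB_NAME = "<event-hubs-compatible-name>"

def on_event(partition_context, event):
    # Print each device-to-cloud message as it arrives.
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME
)
with client:
    # '@latest' reads only new events; use '-1' to start from the beginning of each partition.
    client.receive(on_event=on_event, starting_position="@latest")
```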
iot-hub Iot Hub Devguide Routing Query Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-routing-query-syntax.md
deviceClient.sendEvent(message, (err, res) => {
``` > [!NOTE]
-> This shows how to handle the encoding of the body in JavaScript. If you want to see a sample in C#, download the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). Unzip the master.zip file. The Visual Studio solution *SimulatedDevice*'s Program.cs file shows how to encode and submit messages to an IoT Hub. This is the same sample used for testing the message routing, as explained in the [Message Routing tutorial](tutorial-routing.md). At the bottom of Program.cs, it also has a method to read in one of the encoded files, decode it, and write it back out as ASCII so you can read it.
+> This shows how to handle the encoding of the body in JavaScript. If you want to see a sample in C#, download the [Azure IoT C# SDK](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip). Unzip the main.zip file. The Visual Studio solution *SimulatedDevice*'s Program.cs file shows how to encode and submit messages to an IoT Hub. This is the same sample used for testing the message routing, as explained in the [Message Routing tutorial](tutorial-routing.md). At the bottom of Program.cs, it also has a method to read in one of the encoded files, decode it, and write it back out as ASCII so you can read it.
### Query expressions
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
The SDKs are available in **multiple languages** providing the flexibility to ch
| Language | Package | Source | Quickstarts | Samples | Reference | | :-- | :-- | :-- | :-- | :-- | :-- |
-| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
+| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-device) | | **Node.js** | [npm](https://www.npmjs.com/package/azure-iot-device) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | [Reference](/javascript/api/azure-iot-device/) | | **Java** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) |
The Azure IoT service SDKs contain code to facilitate building applications that
| Platform | Package | Code Repository | Samples | Reference | ||||||
-| .NET | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices) |
+| .NET | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples) | [Reference](/dotnet/api/microsoft.azure.devices) |
| Java | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) | | Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) | | Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | [Reference](/python/api/azure-iot-hub) |
iot-hub Iot Hub Device Sdk C Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-sdk-c-intro.md
There are a broad range of platforms on which the SDK has been tested (see the [
The following video presents an overview of the Azure IoT SDK for C:
->[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-IoT-C-SDK-insights/Player]
+>[!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Azure-IoT-C-SDK-insights/Player]
This article introduces you to the architecture of the Azure IoT device SDK for C. It demonstrates how to initialize the device library, send data to IoT Hub, and receive messages from it. The information in this article should be enough to get started using the SDK, but also provides pointers to additional information about the libraries.
iot-hub Iot Hub How To Clone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-clone.md
The application targets .NET Core, so you can run it on either Windows or Linux.
### Downloading the sample
-1. Use the IoT C# samples from this page: [Azure IoT Samples for C#](https://azure.microsoft.com/resources/samples/azure-iot-samples-csharp/). Download the zip file and unzip it on your computer.
+1. Use the IoT C# samples here: [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip). Download the zip file and unzip it on your computer.
-1. The pertinent code is in ./iot-hub/Samples/service/ImportExportDevicesSample. You don't need to view or edit the code in order to run the application.
+1. The pertinent code is in ./iothub/service/samples/how to guides/ImportExportDevicesSample. You don't need to view or edit the code in order to run the application.
1. To run the application, specify three connection strings and five options. You pass this data in as command-line arguments or use environment variables, or use a combination of the two. We're going to pass the options in as command line arguments, and the connection strings as environment variables.
Now you have the environment variables in a file with the SET commands, and you
### Running the sample application using Visual Studio
-1. If you want to run the application in Visual Studio, change your current directory to the folder where the IoTHubServiceSamples.sln file resides. Then run this command in the command prompt window to open the solution in Visual Studio. You must do this in the same command window where you set the environment variables, so those variables are known.
+1. If you want to run the application in Visual Studio, change your current directory to the folder where the azureiot.sln file resides. Then run this command in the command prompt window to open the solution in Visual Studio. You must do this in the same command window where you set the environment variables, so those variables are known.
``` console
- IoTHubServiceSamples.sln
+ azureiot.sln
``` 1. Right-click on the project *ImportExportDevicesSample* and select **Set as startup project**.
iot-hub Quickstart Bicep Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-bicep-route-messages.md
This section provides the steps to deploy the Bicep file, create a virtual devic
When the deployment finishes, you should see a message indicating the deployment succeeded.
-1. Download and unzip the [IoT C# Samples](/samples/azure-samples/azure-iot-samples-csharp/azure-iot-samples-for-csharp-net/).
+1. Download and unzip the [IoT C# SDK](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip).
-1. Open a command window and go to the folder where you unzipped the IoT C# Samples. Find the folder with the arm-read-write.csproj file. You create the environment variables in this command window. Log into the [Azure portal](https://portal.azure.com) to get the keys. Select **Resource Groups** then select the resource group used for this quickstart.
+1. Open a command window and go to the folder where you unzipped the IoT C# SDK. Find the folder with the arm-read-write.csproj file. You create the environment variables in this command window. Log into the [Azure portal](https://portal.azure.com) to get the keys. Select **Resource Groups** then select the resource group used for this quickstart.
![Select the resource group](./media/horizontal-arm-route-messages/01-select-resource-group.png)
This section provides the steps to deploy the Bicep file, create a virtual devic
![View the sent messages](./media/horizontal-arm-route-messages/08-messages.png) > [!NOTE]
- > These messages are encoded in UTF-32 and base64. If you read the message back, you have to decode it from base64 and utf-32 in order to read it as ASCII. If you're interested, you can use the method ReadOneRowFromFile in the Routing Tutorial to read one for from one of these message files and decode it into ASCII. ReadOneRowFromFile is in the IoT C# Samples repository that you unzipped for this quickstart. Here is the path from the top of that folder: *./iot-hub/Tutorials/Routing/SimulatedDevice/Program.cs.* Set the boolean `readTheFile` to true, and hardcode the path to the file on disk, and it will open and translate the first row in the file.
+ > These messages are encoded in UTF-32 and base64. If you read a message back, you have to decode it from base64 and UTF-32 to read it as ASCII. If you're interested, you can use the ReadOneRowFromFile method in the Routing Tutorial to read one row from one of these message files and decode it into ASCII. ReadOneRowFromFile is in the IoT C# SDK repository that you unzipped for this quickstart. Here's the path from the top of that folder: *./iothub/device/samples/getting started/RoutingTutorial/SimulatedDevice/Program.cs*. Set the boolean `readTheFile` to true, hardcode the path to the file on disk, and it will open and translate the first row in the file.
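As a quick illustration of that decoding step, here's a minimal C# sketch. The `base64Row` parameter is a hypothetical string that holds one row already read from the message file; it isn't part of the sample itself.

```csharp
using System;
using System.Text;

// Sketch: decode one base64-encoded, UTF-32 message row into readable text.
static string DecodeMessageRow(string base64Row)
{
    byte[] utf32Bytes = Convert.FromBase64String(base64Row);
    return Encoding.UTF32.GetString(utf32Bytes);
}
```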
You have deployed a Bicep file to create an IoT Hub and a storage account, and run a program to send messages to the hub. The messages are then automatically stored in the storage account where they can be viewed.
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
In this tutorial, you perform the following tasks:
* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
-* This tutorial uses sample code from [Azure IoT samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp).
+* This tutorial uses sample code from [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp).
- * Download or clone the samples repo to your development machine.
- * Have .NET Core 3.0.0 or greater on your development machine. Check your version by running `dotnet --version` and [Download .NET](https://dotnet.microsoft.com/download) if necessary. <!-- TODO: update sample to use .NET 6.0 -->
+ * Download or clone the SDK repo to your development machine.
+ * Have .NET Core 3.0.0 or greater on your development machine. Check your version by running `dotnet --version` and [Download .NET](https://dotnet.microsoft.com/download) if necessary.
* Make sure that port 8883 is open in your firewall. The sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
Register a new device in your IoT hub.
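If you prefer to register the device from the command line, a minimal Azure CLI sketch follows; it assumes the `azure-iot` CLI extension is installed, and the hub and device names are placeholders.

```azurecli
# Sketch: register a device that uses symmetric-key authentication.
az iot hub device-identity create \
  --hub-name myIotHub \
  --device-id mySimulatedDevice
```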
Now that you have a device ID and key, use the sample code to start sending device telemetry messages to IoT Hub.
-<!-- TODO: update sample to use environment variables, not inline variables -->
>[!TIP] >If you're following the Azure CLI steps for this tutorial, run the sample code in a separate session. That way, you can allow the sample code to continue running while you follow the rest of the CLI steps.
-1. If you didn't as part of the prerequisites, download or clone the [Azure IoT samples for C# repo](https://github.com/Azure-Samples/azure-iot-samples-csharp) from GitHub now.
-1. In the sample folder, navigate to the `/iot-hub/Tutorials/Routing/SimulatedDevice/` folder.
-1. In an editor of your choice, open the `Program.cs` file.
-1. Find the variable definitions at the top of the **Program** class. Update the following variables with your own information:
-
- * **s_myDeviceId**: The device ID that you assigned when registering the device.
- * **s_iotHubUri**: The hostname of your IoT hub, which takes the format `IOTHUB_NAME.azure-devices.net`.
- * **s_deviceKey**: The device key that you copied from the device identity information.
-
-1. Save and close the file.
+1. If you didn't as part of the prerequisites, download or clone the [Azure IoT SDK for C# repo](https://github.com/Azure/azure-iot-sdk-csharp) from GitHub now.
+1. In the sample folder, navigate to the `/iothub/device/samples/getting started/RoutingTutorial/SimulatedDevice/` folder.
1. Install the Azure IoT C# SDK and necessary dependencies as specified in the `SimulatedDevice.csproj` file: ```console dotnet restore ```
-1. Run the sample code:
+1. In an editor of your choice, open the `Parameters.cs` file. This file shows the parameters that the sample supports. Only the first three required parameters are used when you run the sample in this article. Review the code in this file. No changes are needed.
+1. Build and run the sample code using the following command:
- ```console
- dotnet run
- ```
+ * Replace `<myDeviceId>` with the device ID that you assigned when registering the device.
+ * Replace `<iotHubUri>` with the hostname of your IoT hub, which takes the format `IOTHUB_NAME.azure-devices.net`.
+ * Replace `<deviceKey>` with the device key that you copied from the device identity information.
+
+ ```cmd
+ dotnet run --d <myDeviceId> --u <iotHubUri> --k <deviceKey>
+ ```
1. You should start to see messages printed to output as they are sent to IoT Hub. Leave this program running for the duration of the tutorial.
iot-hub Tutorial Use Metrics And Diags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-use-metrics-and-diags.md
In the [Set up resources](#set-up-resources) section, you registered a device id
> > Alerts can take up to 10 minutes to be fully configured and enabled by IoT Hub. Wait at least 10 minutes between the time you configure your last alert and running the simulated device app.
-Download or clone the solution for the [Azure IoT C# samples repo](https://github.com/Azure-Samples/azure-iot-samples-csharp) from GitHub. This repo contains several sample applications. For this tutorial, we'll use iot-hub/Quickstarts/simulated-device/.
+Download or clone the solution for the [Azure IoT C# SDK repo](https://github.com/Azure/azure-iot-sdk-csharp) from GitHub. This repo contains several sample applications. For this tutorial, we'll use iothub/device/samples/getting started/SimulatedDevice/.
-1. In a local terminal window, navigate to the root folder of the solution. Then navigate to the **iot-hub\Quickstarts\simulated-device** folder.
+1. In a local terminal window, navigate to the root folder of the solution. Then navigate to the **iothub\device\samples\getting started\SimulatedDevice** folder.
1. Open the **SimulatedDevice.cs** file in a text editor of your choice.
key-vault Create Certificate Signing Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate-signing-request.md
If you want to add more information when creating the CSR, define it in **Subjec
Example ```azure-powershell
- SubjectName="CN = docs.microsoft.com, OU = Microsoft Corporation, O = Microsoft Corporation, L = Redmond, S = WA, C = US"
+ SubjectName="CN = learn.microsoft.com, OU = Microsoft Corporation, O = Microsoft Corporation, L = Redmond, S = WA, C = US"
``` > [!NOTE]
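For context, here's a hedged Azure PowerShell sketch of how a subject name like the one above might feed into a certificate policy that produces a CSR; the vault and certificate names are placeholders, not values from this article.

```azure-powershell
# Sketch only: build a policy with the subject name and start a certificate operation
# whose CSR you can then download and submit to your certificate authority.
$subjectName = "CN = learn.microsoft.com, OU = Microsoft Corporation, O = Microsoft Corporation, L = Redmond, S = WA, C = US"
$policy = New-AzKeyVaultCertificatePolicy -SubjectName $subjectName -IssuerName "Unknown" -ValidityInMonths 12
Add-AzKeyVaultCertificate -VaultName "<your-key-vault-name>" -Name "<your-certificate-name>" -CertificatePolicy $policy
```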
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
DigicertCA is now in the certificate authority list.
### Azure portal (GlobalSign)
-1. To add DigiCert certificate authority, go to the key vault you want to add it to.
+1. To add GlobalSign certificate authority, go to the key vault you want to add it to.
2. On the Key Vault property page, select **Certificates**. 3. Select the **Certificate Authorities** tab: :::image type="content" source="../media/certificates/how-to-integrate-certificate-authority/select-certificate-authorities.png" alt-text="Screenshot that shows selecting the Certificate Authorities tab.":::
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
az keyvault private-endpoint-connection delete --resource-group {RG} --vault-nam
az network private-endpoint show -g {RG} -n {PE NAME} # look for the property networkInterfaces then id; the value must be placed on {PE NIC} below. az network nic show --ids {PE NIC} # look for the property ipConfigurations then privateIpAddress; the value must be placed on {NIC IP} below.
-# https://docs.microsoft.com/azure/dns/private-dns-getstarted-cli#create-an-additional-dns-record
+# https://learn.microsoft.com/azure/dns/private-dns-getstarted-cli#create-an-additional-dns-record
az network private-dns zone list -g {RG} az network private-dns record-set a add-record -g {RG} -z "privatelink.vaultcore.azure.net" -n {KEY VAULT NAME} -a {NIC IP} az network private-dns record-set list -g {RG} -z "privatelink.vaultcore.azure.net"
Aliases: <your-key-vault-name>.vault.azure.net
## Next Steps - Learn more about [Azure Private Link](../../private-link/private-link-service-overview.md)-- Learn more about [Azure Key Vault](overview.md)
+- Learn more about [Azure Key Vault](overview.md)
key-vault Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/private-link.md
az keyvault private-endpoint-connection delete --resource-group {RG} --hsm-name
az network private-endpoint show -g {RG} -n {PE NAME} # look for the property networkInterfaces then id; the value must be placed on {PE NIC} below. az network nic show --ids {PE NIC} # look for the property ipConfigurations then privateIpAddress; the value must be placed on {NIC IP} below.
-# https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli#create-an-additional-dns-record
+# https://learn.microsoft.com/azure/dns/private-dns-getstarted-cli#create-an-additional-dns-record
az network private-dns zone list -g {RG} az network private-dns record-set a add-record -g {RG} -z "privatelink.managedhsm.azure.net" -n {HSM NAME} -a {NIC IP} az network private-dns record-set list -g {RG} -z "privatelink.managedhsm.azure.net"
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multivip-overview.md
Title: Multiple frontends - Azure Load Balancer
-description: With this learning path, get started with an overview of multiple frontends on Azure Load Balancer
+description: This article describes the fundamentals of load balancing across multiple IP addresses on the same port and protocol by using multiple frontends on Azure Load Balancer
documentationcenter: na
na Previously updated : 01/26/2022 Last updated : 09/19/2022 # Multiple frontends for Azure Load Balancer
-Azure Load Balancer allows you to load balance services on multiple ports, multiple IP addresses, or both. You can use public and internal load balancer definitions to load balance flows across a set of VMs.
+Azure Load Balancer allows you to load balance services on multiple ports, multiple IP addresses, or both. You can use a public or internal load balancer to load balance traffic across a set of services like virtual machine scale sets or virtual machines (VMs).
-This article describes the fundamentals of this ability, important concepts, and constraints. If you only intend to expose services on one IP address, you can find simplified instructions for [public](./quickstart-load-balancer-standard-public-portal.md) or [internal](./quickstart-load-balancer-standard-internal-portal.md) load balancer configurations. Adding multiple frontends is incremental to a single frontend configuration. Using the concepts in this article, you can expand a simplified configuration at any time.
+This article describes the fundamentals of load balancing across multiple IP addresses using the same port and protocol. If you only intend to expose services on one IP address, you can find simplified instructions for [public](./quickstart-load-balancer-standard-public-portal.md) or [internal](./quickstart-load-balancer-standard-internal-portal.md) load balancer configurations. Adding multiple frontends is incremental to a single frontend configuration. Using the concepts in this article, you can expand a simplified configuration at any time.
-When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with rules. The health probe referenced by the rule is used to determine how new flows are sent to a node in the backend pool. The frontend (also known as VIP) is defined by a 3-tuple comprised of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations (part of the NIC resource) which reference the Load Balancer backend pool.
+When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with a load balancing rule. The health probe referenced by the load balancing rule is used to determine the health of a VM on a certain port and protocol. Based on the health probe results, new flows are sent to VMs in the backend pool. The frontend is defined by a three-tuple comprised of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations (part of the NIC resource) which reference the Load Balancer backend pool.
The following table contains some example frontend configurations:
The following table contains some example frontend configurations:
| 3 |65.52.0.1 |*UDP* |80 | | 4 |*65.52.0.2* |TCP |80 |
-The table shows four different frontends. Frontends #1, #2 and #3 are a single frontend with multiple rules. The same IP address is used but the port or protocol is different for each frontend. Frontends #1 and #4 are an example of multiple frontends, where the same frontend protocol and port are reused across multiple frontends.
+The table shows four different frontend configurations. Frontends #1, #2 and #3 use the same IP address but the port or protocol is different for each frontend. Frontends #1 and #4 are an example of multiple frontends, where the same frontend protocol and port are reused across multiple frontend IPs.
-Azure Load Balancer provides flexibility in defining the load balancing rules. A rule declares how an address and port on the frontend is mapped to the destination address and port on the backend. Whether or not backend ports are reused across rules depends on the type of the rule. Each type of rule has specific requirements that can affect host configuration and probe design. There are two types of rules:
+Azure Load Balancer provides flexibility in defining the load balancing rules. A load balancing rule declares how an address and port on the frontend is mapped to the destination address and port on the backend. Whether or not backend ports are reused across rules depends on the type of the rule. Each type of rule has specific requirements that can affect host configuration and probe design. There are two types of rules:
-1. The default rule with no backend port reuse
-2. The Floating IP rule where backend ports are reused
+1. The default rule with no backend port reuse.
+2. The Floating IP rule where backend ports are reused.
-Azure Load Balancer allows you to mix both rule types on the same load balancer configuration. The load balancer can use them simultaneously for a given VM, or any combination, as long as you abide by the constraints of the rule. Which rule type you choose depends on the requirements of your application and the complexity of supporting that configuration. You should evaluate which rule types are best for your scenario.
-
-We explore these scenarios further by starting with the default behavior.
+Azure Load Balancer allows you to mix both rule types on the same load balancer configuration. The load balancer can use them simultaneously for a given VM, or any combination, if you abide by the constraints of the rule. The rule type you choose depends on the requirements of your application and the complexity of supporting that configuration. You should evaluate which rule types are best for your scenario. We'll explore these scenarios further by starting with the default behavior.
## Rule type #1: No backend port reuse-
-![Multiple frontend illustration with green and purple frontend](./media/load-balancer-multivip-overview/load-balancer-multivip.png)
In this scenario, the frontends are configured as follows:
In this scenario, the frontends are configured as follows:
| ![green frontend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) 1 |65.52.0.1 |TCP |80 | | ![purple frontend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) 2 |*65.52.0.2* |TCP |80 |
-The DIP is the destination of the inbound flow. In the backend pool, each VM exposes the desired service on a unique port on a DIP. This service is associated with the frontend through a rule definition.
+The backend instance IP (BIP) is the IP address of the backend service in the backend pool. Each VM exposes the desired service on a unique port on the backend instance IP. This service is associated with the frontend IP (FIP) through a rule definition.
We define two rules: | Rule | Map frontend | To backend pool | | | | |
-| 1 |![green frontend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) Frontend1:80 |![green backend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) DIP1:80, ![green backend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) DIP2:80 |
-| 2 |![VIP](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) Frontend2:80 |![purple backend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) DIP1:81, ![purple backend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) DIP2:81 |
+| 1 |![green frontend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) FIP1:80 |![green backend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) BIP1:80, ![green backend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) BIP2:80 |
+| 2 |![VIP](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) FIP2:80 |![purple backend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) BIP1:81, ![purple backend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) BIP2:81 |
The complete mapping in Azure Load Balancer is now as follows: | Rule | Frontend IP address | protocol | port | Destination | port | | | | | | | |
-| ![green rule](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) 1 |65.52.0.1 |TCP |80 |DIP IP Address |80 |
-| ![purple rule](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) 2 |65.52.0.2 |TCP |80 |DIP IP Address |81 |
+| ![green rule](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) 1 |65.52.0.1 |TCP |80 |BIP IP Address |80 |
+| ![purple rule](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) 2 |65.52.0.2 |TCP |80 |BIP IP Address |81 |
-Each rule must produce a flow with a unique combination of destination IP address and destination port. By varying the destination port of the flow, multiple rules can deliver flows to the same DIP on different ports.
+Each rule must produce a flow with a unique combination of destination IP address and destination port. Multiple load balancing rules can deliver flows to the same backend instance IP on different ports by varying the destination port of the flow.
-Health probes are always directed to the DIP of a VM. You must ensure that your probe reflects the health of the VM.
+Health probes are always directed to the backend instance IP of a VM. You must ensure that your probe reflects the health of the VM.
## Rule type #2: backend port reuse by using Floating IP
-Azure Load Balancer provides the flexibility to reuse the frontend port across multiple frontends regardless of the rule type used. Additionally, some application scenarios prefer or require the same port to be used by multiple application instances on a single VM in the backend pool. Common examples of port reuse include clustering for high availability, network virtual appliances, and exposing multiple TLS endpoints without re-encryption.
+Azure Load Balancer provides the flexibility to reuse the frontend port across multiple frontend configurations. Additionally, some application scenarios prefer or require the same port to be used by multiple application instances on a single VM in the backend pool. Common examples of port reuse include clustering for high availability, network virtual appliances, and exposing multiple TLS endpoints without re-encryption.
-If you want to reuse the backend port across multiple rules, you must enable Floating IP in the rule definition.
+If you want to reuse the backend port across multiple rules, you must enable Floating IP in the load balancing rule definition.
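As an illustration, enabling Floating IP on a load balancing rule with the Azure CLI might look like the following sketch; the resource names are placeholders.

```azurecli
# Sketch: a rule that reuses backend port 80 by enabling Floating IP.
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myFloatingIPRule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name Frontend2 \
  --backend-pool-name myBackendPool \
  --probe-name myHealthProbe \
  --floating-ip true
```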
-"Floating IP" is Azure's terminology for a portion of what is known as Direct Server Return (DSR). DSR consists of two parts: a flow topology and an IP address mapping scheme. At a platform level, Azure Load Balancer always operates in a DSR flow topology regardless of whether Floating IP is enabled or not. This means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin.
+*Floating IP* is Azure's terminology for a portion of what is known as Direct Server Return (DSR). DSR consists of two parts: a flow topology and an IP address mapping scheme. At a platform level, Azure Load Balancer always operates in a DSR flow topology regardless of whether Floating IP is enabled or not. This means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin.
-With the default rule type, Azure exposes a traditional load balancing IP address mapping scheme for ease of use. Enabling Floating IP changes the IP address mapping scheme to allow for additional flexibility as explained below.
+With the default rule type, Azure exposes a traditional load balancing IP address mapping scheme for ease of use. Enabling Floating IP changes the IP address mapping scheme to allow for more flexibility as explained below.
-The following diagram illustrates this configuration:
-
-![Multiple frontend illustration with green and purple frontend with DSR](./media/load-balancer-multivip-overview/load-balancer-multivip-dsr.png)
For this scenario, every VM in the backend pool has three network interfaces:
-* DIP: a Virtual NIC associated with the VM (IP configuration of Azure's NIC resource)
-* Frontend 1: a loopback interface within guest OS that is configured with IP address of Frontend 1
-* Frontend 2: a loopback interface within guest OS that is configured with IP address of Frontend 2
+* Backend IP: a Virtual NIC associated with the VM (IP configuration of Azure's NIC resource).
+* Frontend 1 (FIP1): a loopback interface within guest OS that is configured with IP address of FIP1.
+* Frontend 2 (FIP2): a loopback interface within guest OS that is configured with IP address of FIP2.
Let's assume the same frontend configuration as in the previous scenario:
Let's assume the same frontend configuration as in the previous scenario:
| ![green frontend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) 1 |65.52.0.1 |TCP |80 | | ![purple frontend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) 2 |*65.52.0.2* |TCP |80 |
-We define two rules:
+We define two floating IP rules:
| Rule | Frontend | Map to backend pool | | | | |
-| 1 |![green rule](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) Frontend1:80 |![green backend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) Frontend1:80 (in VM1 and VM2) |
-| 2 |![purple rule](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) Frontend2:80 |![purple backend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) Frontend2:80 (in VM1 and VM2) |
+| 1 |![green rule](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) FIP1:80 |![green backend](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) FIP1:80 (in VM1 and VM2) |
+| 2 |![purple rule](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) FIP2:80 |![purple backend](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) FIP2:80 (in VM1 and VM2) |
The following table shows the complete mapping in the load balancer:
The following table shows the complete mapping in the load balancer:
| ![green rule](./media/load-balancer-multivip-overview/load-balancer-rule-green.png) 1 |65.52.0.1 |TCP |80 |same as frontend (65.52.0.1) |same as frontend (80) | | ![purple rule](./media/load-balancer-multivip-overview/load-balancer-rule-purple.png) 2 |65.52.0.2 |TCP |80 |same as frontend (65.52.0.2) |same as frontend (80) |
-The destination of the inbound flow is the frontend IP address on the loopback interface in the VM. Each rule must produce a flow with a unique combination of destination IP address and destination port. By varying the destination IP address of the flow, port reuse is possible on the same VM. Your service is exposed to the load balancer by binding it to the frontendΓÇÖs IP address and port of the respective loopback interface.
+The destination of the inbound flow is now the frontend IP address on the loopback interface in the VM. Each rule must produce a flow with a unique combination of destination IP address and destination port. Because each rule uses a different frontend IP address as the destination, port reuse is possible on the same VM. Your service is exposed to the load balancer by binding it to the frontend's IP address and port of the respective loopback interface.
-Notice that this example does not change the destination port. Even though this is a Floating IP scenario, Azure Load Balancer also supports defining a rule to rewrite the backend destination port and to make it different from the frontend destination port.
+You'll notice the destination port doesn't change in the example. In floating IP scenarios, Azure Load Balancer also supports defining a load balancing rule to change the backend destination port and to make it different from the frontend destination port.
-The Floating IP rule type is the foundation of several load balancer configuration patterns. One example that is currently available is the [Configure one or more Always On availability group listeners](/azure/azure-sql/virtual-machines/windows/availability-group-listener-powershell-configure) configuration. Over time, we will document more of these scenarios.
+The Floating IP rule type is the foundation of several load balancer configuration patterns. One example that is currently available is the [Configure one or more Always On availability group listeners](/azure/azure-sql/virtual-machines/windows/availability-group-listener-powershell-configure) configuration. Over time, we'll document more of these scenarios.
> [!NOTE] > For more detailed information on the specific Guest OS configurations required to enable Floating IP, please refer to [Azure Load Balancer Floating IP configuration](load-balancer-floating-ip.md).
The Floating IP rule type is the foundation of several load balancer configurati
## Limitations * Multiple frontend configurations are only supported with IaaS VMs and virtual machine scale sets.
-* With the Floating IP rule, your application must use the primary IP configuration for outbound SNAT flows. If your application binds to the frontend IP address configured on the loopback interface in the guest OS, Azure's outbound SNAT is not available to rewrite the outbound flow and the flow fails. Review [outbound scenarios](load-balancer-outbound-connections.md).
-* Floating IP is not currently supported on secondary IP configurations.
+* With the Floating IP rule, your application must use the primary IP configuration for outbound SNAT flows. If your application binds to the frontend IP address configured on the loopback interface in the guest OS, Azure's outbound SNAT won't rewrite the outbound flow, and the flow fails. Review [outbound scenarios](load-balancer-outbound-connections.md).
+* Floating IP isn't currently supported on secondary IP configurations.
* Public IP addresses have an effect on billing. For more information, see [IP Address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/) * Subscription limits apply. For more information, see [Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) for details. ## Next steps -- Review [Outbound connections](load-balancer-outbound-connections.md) to understand the impact of multiple frontends on outbound connection behavior.
+- Review [Outbound connections](load-balancer-outbound-connections.md) to understand the effect of multiple frontends on outbound connection behavior.
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
Subscribe to the RSS feed and view the latest Azure Load Balancer feature update
* For more information on Azure Load Balancer limitations and components, see [Azure Load Balancer components](./components.md) and [Azure Load Balancer concepts](./concepts.md)
-* [Learn module: Introduction to Azure Load Balancer](/learn/paths/intro-to-azure-application-delivery-services).
+* [Learn module: Introduction to Azure Load Balancer](/training/paths/intro-to-azure-application-delivery-services).
logic-apps Block Connections Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-connectors.md
If you already have a logic app with the connection that you want to block, foll
For example, if you want to block the Instagram connector, which is deprecated, go to this page:
- `https://docs.microsoft.com/connectors/instagram/`
+ `https://learn.microsoft.com/connectors/instagram/`
1. From the page's URL, copy and save the connector reference ID at the end without the forward slash (`/`), for example, `instagram`.
logic-apps Create Integration Service Environment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-integration-service-environment-rest-api.md
Deployment usually takes within two hours to finish. Occasionally, deployment mi
> [!NOTE] > If deployment fails or you delete your ISE, Azure might take up to an hour before releasing your subnets.
-> This delay means means you might have to wait before reusing those subnets in another ISE.
+> This delay means you might have to wait before reusing those subnets in another ISE.
> > If you delete your virtual network, Azure generally takes up to two hours > before releasing up your subnets, but this operation might take longer.
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
The following list describes just a few example tasks, business processes, and w
* Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review.
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
Based on the logic app resource type that you choose and create, your logic apps run in multi-tenant Azure Logic Apps, [single-tenant Azure Logic Apps](single-tenant-overview-compare.md), or a dedicated [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md) when accessing an Azure virtual network. To run logic apps in containers, [create single-tenant based logic apps using Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) and [Resource type and host environment differences for logic apps](#resource-environment-differences).
You might also want to explore other quickstart guides for Azure Logic Apps:
Learn more about the Azure Logic Apps platform with these introductory videos:
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Connect-and-extend-your-mainframe-to-the-cloud-with-Logic-Apps/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Connect-and-extend-your-mainframe-to-the-cloud-with-Logic-Apps/player]
## Next steps
logic-apps Logic Apps Scenario Function Sb Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-function-sb-trigger.md
Next, create the function that acts as the trigger and listens to the queue.
// Can also fetch from App Settings or environment variable private static string logicAppUri = @"https://prod-05.westus.logic.azure.com:443/workflows/<remaining-callback-URL>";
- // Reuse the instance of HTTP clients if possible: https://docs.microsoft.com/azure/azure-functions/manage-connections
+ // Reuse the instance of HTTP clients if possible: https://learn.microsoft.com/azure/azure-functions/manage-connections
private static HttpClient httpClient = new HttpClient(); public static async Task Run(string myQueueItem, TraceWriter log)
Next, create the function that acts as the trigger and listens to the queue.
## Next steps
-* [Call, trigger, or nest workflows by using HTTP endpoints](../logic-apps/logic-apps-http-endpoint.md)
+* [Call, trigger, or nest workflows by using HTTP endpoints](../logic-apps/logic-apps-http-endpoint.md)
logic-apps Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/plan-manage-costs.md
To help you reduce costs on your logic apps and related resources, try these opti
* [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) * [Manage costs using cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) * [Prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
-* Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course
+* Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course
logic-apps Quickstart Logic Apps Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-cli.md
az logic workflow list --resource-group "testResourceGroup" --filter "(State eq
The following error indicates that the Azure Logic Apps CLI extension isn't installed. Follow the steps in the [prerequisites to install the Logic Apps extension](#prerequisites) on your computer. ```output
-az: 'logic' is not in the 'az' command group. See 'az --help'. If the command is from an extension, please make sure the corresponding extension is installed. To learn more about extensions, please visit https://docs.microsoft.com/cli/azure/azure-cli-extensions-overview
+az: 'logic' is not in the 'az' command group. See 'az --help'. If the command is from an extension, please make sure the corresponding extension is installed. To learn more about extensions, please visit https://learn.microsoft.com/cli/azure/azure-cli-extensions-overview
``` The following error might indicate that the file path for uploading your workflow definition is incorrect.
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
To work with strings, you can use these string functions and also some [collecti
| String function | Task | | | - |
+| [chunk](../logic-apps/workflow-definition-language-functions-reference.md#chunk) | Split a string or collection into chunks of equal length. |
| [concat](../logic-apps/workflow-definition-language-functions-reference.md#concat) | Combine two or more strings, and return the combined string. | | [endsWith](../logic-apps/workflow-definition-language-functions-reference.md#endswith) | Check whether a string ends with the specified substring. | | [formatNumber](../logic-apps/workflow-definition-language-functions-reference.md#formatNumber) | Return a number as a string based on the specified format | | [guid](../logic-apps/workflow-definition-language-functions-reference.md#guid) | Generate a globally unique identifier (GUID) as a string. | | [indexOf](../logic-apps/workflow-definition-language-functions-reference.md#indexof) | Return the starting position for a substring. |
+| [isFloat](../logic-apps/workflow-definition-language-functions-reference.md#isFloat) | Return a boolean that indicates whether a string is a floating-point number. |
+| [isInt](../logic-apps/workflow-definition-language-functions-reference.md#isInt) | Return a boolean that indicates whether a string is an integer. |
| [lastIndexOf](../logic-apps/workflow-definition-language-functions-reference.md#lastindexof) | Return the starting position for the last occurrence of a substring. | | [length](../logic-apps/workflow-definition-language-functions-reference.md#length) | Return the number of items in a string or array. | | [nthIndexOf](../logic-apps/workflow-definition-language-functions-reference.md#nthIndexOf) | Return the starting position or index value where the *n*th occurrence of a substring appears in a string. |
To work with collections, generally arrays, strings, and sometimes, dictionaries
| Collection function | Task | | - | - |
+| [chunk](../logic-apps/workflow-definition-language-functions-reference.md#chunk) | Split a string or collection into chunks of equal length. |
| [contains](../logic-apps/workflow-definition-language-functions-reference.md#contains) | Check whether a collection has a specific item. | | [empty](../logic-apps/workflow-definition-language-functions-reference.md#empty) | Check whether a collection is empty. | | [first](../logic-apps/workflow-definition-language-functions-reference.md#first) | Return the first item from a collection. |
To work with collections, generally arrays, strings, and sometimes, dictionaries
| [join](../logic-apps/workflow-definition-language-functions-reference.md#join) | Return a string that has *all* the items from an array, separated by the specified character. | | [last](../logic-apps/workflow-definition-language-functions-reference.md#last) | Return the last item from a collection. | | [length](../logic-apps/workflow-definition-language-functions-reference.md#length) | Return the number of items in a string or array. |
+| [reverse](../logic-apps/workflow-definition-language-functions-reference.md#reverse) | Reverse the order of items in an array. |
| [skip](../logic-apps/workflow-definition-language-functions-reference.md#skip) | Remove items from the front of a collection, and return *all the other* items. |
+| [sort](../logic-apps/workflow-definition-language-functions-reference.md#sort) | Sort items in a collection. |
| [take](../logic-apps/workflow-definition-language-functions-reference.md#take) | Return items from the front of a collection. | | [union](../logic-apps/workflow-definition-language-functions-reference.md#union) | Return a collection that has *all* the items from the specified collections. | |||
For the full reference about each function, see the
| [convertFromUtc](../logic-apps/workflow-definition-language-functions-reference.md#convertFromUtc) | Convert a timestamp from Universal Time Coordinated (UTC) to the target time zone. | | [convertTimeZone](../logic-apps/workflow-definition-language-functions-reference.md#convertTimeZone) | Convert a timestamp from the source time zone to the target time zone. | | [convertToUtc](../logic-apps/workflow-definition-language-functions-reference.md#convertToUtc) | Convert a timestamp from the source time zone to Universal Time Coordinated (UTC). |
+| [dateDifference](../logic-apps/workflow-definition-language-functions-reference.md#dateDifference) | Return the difference between two dates as a timespan. |
| [dayOfMonth](../logic-apps/workflow-definition-language-functions-reference.md#dayOfMonth) | Return the day of the month component from a timestamp. | | [dayOfWeek](../logic-apps/workflow-definition-language-functions-reference.md#dayOfWeek) | Return the day of the week component from a timestamp. | | [dayOfYear](../logic-apps/workflow-definition-language-functions-reference.md#dayOfYear) | Return the day of the year component from a timestamp. |
These examples show the different supported types of input for `bool()`:
## C
+<a name="chunk"></a>
+
+### chunk
+
+Split a string or array into chunks of equal length.
+
+```
+chunk('<collection>', '<length>')
+chunk([<collection>], '<length>')
+```
+
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*collection*> | Yes | String or Array | The collection to split |
+| <*length*> | Yes | Integer | The length of each chunk |
+|||||
+
+| Return value | Type | Description |
+| | - | -- |
+| <*collection*> | Array | An array of chunks with the specified length |
+||||
+
+*Example 1*
+
+This example splits a string into chunks of length 10:
+
+```
+chunk('abcdefghijklmnopqrstuvwxyz', 10)
+```
+
+And returns this result: `['abcdefghij', 'klmnopqrst', 'uvwxyz']`
+
+*Example 2*
+
+This example splits an array into chunks of length 5.
+
+```
+chunk(createArray(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12), 5)
+```
+
+And returns this result: `[ [1,2,3,4,5], [6,7,8,9,10], [11,12] ]`
+ <a name="coalesce"></a> ### coalesce
dataUriToString('data:text/plain;charset=utf-8;base64,aGVsbG8=')
And returns this result: `"hello"`
+<a name="dateDifference"></a>
+
+### dateDifference
+
+Return the difference between two timestamps as a timespan. This function subtracts `startDate` from `endDate`, and returns the result as a timespan in string format.
+
+```
+dateDifference('<startDate>', '<endDate>')
+```
+
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*startDate*> | Yes | String | A string that contains a timestamp |
+| <*endDate*> | Yes | String | A string that contains a timestamp |
+|||||
+
+| Return value | Type | Description |
+| | - | -- |
+| <*timespan*> | String | The difference between the two timestamps, which is a timespan in string format. If `startDate` is more recent than `endDate`, the result is a negative value. |
+||||
+
+*Example*
+
+This example subtracts the first value from the second value:
+
+```
+dateDifference('2015-02-08', '2018-07-30')
+```
+
+And returns this result: `"1268.00:00:00"`
+ <a name="dayOfMonth"></a> ### dayOfMonth
And return these results:
### float
-Convert a string version for a floating-point number to an actual floating point number. You can use this function only when passing custom parameters to an app, for example, a logic app or flow.
+Convert the string version of a floating-point number to an actual floating-point number. You can use this function only when passing custom parameters to an app, for example, a logic app workflow or Power Automate flow. To convert floating-point strings represented in locale-specific formats, you can optionally specify an RFC 4646 locale code.
```
-float('<value>')
+float('<value>', '<locale>'?)
``` | Parameter | Required | Type | Description | | | -- | - | -- | | <*value*> | Yes | String | The string that has a valid floating-point number to convert. The minimum and maximum values are the same as the limits for the float data type. |
+| <*locale*> | No | String | The RFC 4646 locale code to use. <br><br>If not specified, default locale is used. <br><br>If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
||||| | Return value | Type | Description |
float('<value>')
| <*float-value*> | Float | The floating-point number for the specified string. The minimum and maximum values are the same as the limits for the float data type. | ||||
-*Example*
+*Example 1*
This example converts a string to a floating-point number: ```
-float('10.333')
+float('10,000.333')
+```
+
+And returns this result: `10000.333`
+
+*Example 2*
+
+This example converts a string in the German locale format to a floating-point number:
+
+```
+float('10.000,333', 'de-DE')
```
-And returns this result: `10.333`
+And returns this result: `10000.333`
<a name="formatDateTime"></a>
int('10')
And returns this result: `10`
+<a name="isFloat"></a>
+
+### isFloat
+
+Return a boolean indicating whether a string is a floating-point number. By default, this function uses the invariant culture for the floating-point format. To identify floating-point numbers represented in other locale-specific formats, you can optionally specify an RFC 4646 locale code.
+
+```
+isFloat('<string>', '<locale>'?)
+```
+
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*string*> | Yes | String | The string to examine |
+| <*locale*> | No | String | The RFC 4646 locale code to use |
+|||||
+
+| Return value | Type | Description |
+| | - | -- |
+| <*boolean-result*> | Boolean | A boolean that indicates whether the string is a floating-point number |
+
+*Example 1*
+
+This example checks whether a string is a floating-point number in the invariant culture:
+
+```
+isFloat('10,000.00')
+```
+
+And returns this result: `true`
+
+*Example 2*
+
+This example checks whether a string is a floating-point number in the German locale:
+
+```
+isFloat('10.000,00', 'de-DE')
+```
+
+And returns this result: `true`
+
+<a name="isInt"></a>
+
+### isInt
+
+Return a boolean that indicates whether a string is an integer.
+
+```
+isInt('<string>')
+```
+
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*string*> | Yes | String | The string to examine |
+|||||
+
+| Return value | Type | Description |
+| | - | -- |
+| <*boolean-result*> | Boolean | A boolean that indicates whether the string is an integer |
+
+*Example*
+
+This example checks whether a string is an integer:
+
+```
+isInt('10')
+```
+
+And returns this result: `true`
+ <a name="item"></a> ### item
range(1, 4)
And returns this result: `[1, 2, 3, 4]`
-<a name="replace"></a>
-
-### replace
-
-Replace a substring with the specified string,
-and return the result string. This function
-is case-sensitive.
-
-```
-replace('<text>', '<oldText>', '<newText>')
-```
-
-| Parameter | Required | Type | Description |
-| | -- | - | -- |
-| <*text*> | Yes | String | The string that has the substring to replace |
-| <*oldText*> | Yes | String | The substring to replace |
-| <*newText*> | Yes | String | The replacement string |
-|||||
-
-| Return value | Type | Description |
-| | - | -- |
-| <*updated-text*> | String | The updated string after replacing the substring <br><br>If the substring isn't found, return the original string. |
-||||
-
-*Example*
-
-This example finds the "old" substring in "the old string" and replaces "old" with "new":
-
-```
-replace('the old string', 'old', 'new')
-```
-
-And returns this result: `"the new string"`
- <a name="removeProperty"></a> ### removeProperty
Here's the updated JSON object:
} ```
+<a name="replace"></a>
+
+### replace
+
+Replace a substring with the specified string, and return the result string. This function is case-sensitive.
+
+```
+replace('<text>', '<oldText>', '<newText>')
+```
+
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*text*> | Yes | String | The string that has the substring to replace |
+| <*oldText*> | Yes | String | The substring to replace |
+| <*newText*> | Yes | String | The replacement string |
+|||||
+
+| Return value | Type | Description |
+| | - | -- |
+| <*updated-text*> | String | The updated string after replacing the substring <br><br>If the substring isn't found, return the original string. |
+||||
+
+*Example*
+
+This example finds the "old" substring in "the old string" and replaces "old" with "new":
+
+```
+replace('the old string', 'old', 'new')
+```
+
+And returns this result: `"the new string"`
+ <a name="result"></a> ### result
Here's how the example returned array might look where the outer `outputs` objec
] ```
+<a name="reverse"></a>
+
+### reverse
+
+Reverse the order of items in a collection. When you use this function with [sort()](#sort), you can sort a collection in descending order.
+
+```
+reverse([<collection>])
+```
+
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*collection*> | Yes | Array | The collection to reverse |
+|||||
+
+| Return value | Type | Description |
+| | - | -- |
+| [<*updated-collection*>] | Array | The reversed collection |
+||||
+
+*Example*
+
+This example reverses an array of integers:
+
+```
+reverse(createArray(0, 1, 2, 3))
+```
+
+And returns this array: `[3,2,1,0]`
+ ## S <a name="setProperty"></a>
slice('Hello World', 3, -1) // Returns 'lo Worl'.
slice('Hello World', 3, 3) // Returns ''. ```
+<a name="sort"></a>
+
+### sort
+
+Sort items in a collection. You can sort the collection objects using any key that contains a simple type.
+
+```
+sort([<collection>], <sortBy>?)
+```
+
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*collection*> | Yes | Array | The collection with the items to sort |
+| <*sortBy*> | No | String | The key to use for sorting the collection objects |
+|||||
+
+| Return value | Type | Description |
+| | - | -- |
+| [<*updated-collection*>] | Array | The sorted collection |
+||||
+
+*Example 1*
+
+This example sorts an array of integers:
+
+```
+sort(createArray(2, 1, 0, 3))
+```
+
+And returns this array: `[0,1,2,3]`
+
+*Example 2*
+
+This example sorts an array of objects by key:
+
+```
+sort(createArray(json('{ "first": "Amalie", "last": "Rose" }'), json('{ "first": "Elise", "last": "Renee" }')), 'last')
+```
+
+And returns this array: `[{ "first": "Elise", "last": "Renee" }, { "first": "Amalie", "last": "Rose" }]`
+ <a name="split"></a> ### split
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
Last updated 04/12/2022
In this article, learn about Azure Machine Learning CLI (v2) releases. __RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader:
-`https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`
+`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`
## 2022-05-24
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Last updated 08/29/2022
In this article, learn about Azure Machine Learning Python SDK releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro) reference page. __RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader:
-`https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
## 2022-08-29
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
You can also use Azure Data Factory to create a data ingestion pipeline that pre
Learn more by reading and exploring the following resources:
-+ [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)
++ [Learning path: End-to-end MLOps with Azure Machine Learning](/training/paths/build-first-machine-operations-workflow/) + [How to deploy a model to an online endpoint](how-to-deploy-managed-online-endpoints.md) with Machine Learning + [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md) + [End-to-end MLOps examples repo](https://github.com/microsoft/MLOps)
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
For more information, see [manage and optimize costs in Azure Machine Learning](
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
machine-learning Dsvm Secure Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-secure-access-keys.md
curl https://<Vault Name>.vault.azure.net/secrets/SQLPasswd?api-version=2016-10-
## Access storage keys from the DSVM ```bash
-# Prerequisite: You have granted your VMs MSI access to use storage account access keys based on instructions at https://docs.microsoft.com/azure/active-directory/managed-service-identity/tutorial-linux-vm-access-storage. This article describes the process in more detail.
+# Prerequisite: You have granted your VMs MSI access to use storage account access keys based on instructions at https://learn.microsoft.com/azure/active-directory/managed-service-identity/tutorial-linux-vm-access-storage. This article describes the process in more detail.
y=`curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true` ytoken=`echo $y | python -c "import sys, json; print(json.load(sys.stdin)['access_token'])"`
az keyvault secret set --name MySecret --vault-name <Vault Name> --value "Hellow
# List access keys for the storage account. az storage account keys list -g <Storage Account Resource Group> -n <Storage Account Name>
-```
+```
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## September 20, 2022
+**Announcement:**
+Ubuntu 18 DSVM will **not be** available on the marketplace starting October 1, 2022. We recommend that users switch to the Ubuntu 20 DSVM as we continue to ship updates and patches on our latest [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview).
+
+Users who deploy Ubuntu DSVM machines by using an Azure Resource Manager (ARM) template or a virtual machine scale set should configure:
+
+| Offer | SKU |
+| | |
+| ubuntu-2004 | 2004 for Gen1 or 2004-gen2 for Gen2 VM sizes |
+
+Instead of:
+
+| Offer | SKU |
+| | |
+| ubuntu-1804 | 1804 for Gen1 or 1804-gen2 for Gen2 VM sizes |
+
+**Note**: There's no impact on existing customers who are still on the Ubuntu 18 DSVM as of our October 2022 update. However, the deprecation is scheduled for December 2022. We recommend that you switch to the Ubuntu 20 DSVM at your earliest convenience.
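For example, a hedged Azure CLI sketch for provisioning from the Ubuntu 20.04 offer follows; the image URN is assembled from the `microsoft-dsvm` publisher and the offer/SKU above, and the resource names are placeholders.

```azurecli
# Sketch: create an Ubuntu 20.04 DSVM (Gen2) instead of the retired Ubuntu 18 image.
az vm create \
  --resource-group myResourceGroup \
  --name myDsvm \
  --image microsoft-dsvm:ubuntu-2004:2004-gen2:latest \
  --admin-username azureuser \
  --generate-ssh-keys
```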
## September 19, 2022 [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in the following tables are owned by Microsoft, and provide services r
| Integrated notebook | \<storage\>.blob.core.windows.net | TCP | 443 | | Integrated notebook | graph.microsoft.com | TCP | 443 | | Integrated notebook | \*.aznbcontent.net | TCP | 443 |
-| AutoML NLP | automlresources-prod.azureedge.net | TCP | 443 |
-| AutoML NLP | aka.ms | TCP | 443 |
+| AutoML NLP, Vision | automlresources-prod.azureedge.net | TCP | 443 |
+| AutoML NLP, Vision | aka.ms | TCP | 443 |
> [!NOTE]
-> AutoML NLP is currently only supported in Azure public regions.
+> AutoML NLP and Vision are currently supported only in Azure public regions.
# [Azure Government](#tab/gov)
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
The most important difference between a forecasting regression task type and reg
You can specify separate [training data and validation data](concept-automated-ml.md#training-validation-and-test-data) directly in the `AutoMLConfig` object. Learn more about the [AutoMLConfig](#configure-experiment).
-For time series forecasting, only **Rolling Origin Cross Validation (ROCV)** is used for validation by default. Pass the training and validation data together, and set the number of cross validation folds with the `n_cross_validations` parameter in your `AutoMLConfig`. ROCV divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds. This strategy preserves the time series data integrity and eliminates the risk of data leakage
+For time series forecasting, only **Rolling Origin Cross Validation (ROCV)** is used for validation by default. ROCV divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds. This strategy preserves the time series data integrity and eliminates the risk of data leakage.
-![rolling origin cross validation](./media/how-to-auto-train-forecast/rolling-origin-cross-validation.svg)
-You can also bring your own validation data, learn more in [Configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data).
+Pass your training and validation data together as one dataset to the `training_data` parameter. Set the number of cross-validation folds with the `n_cross_validations` parameter, and set the number of periods between two consecutive cross-validation folds with `cv_step_size`. You can also leave either or both parameters empty, and AutoML will set them automatically.
[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)] ```python automl_config = AutoMLConfig(task='forecasting', training_data= training_data,
- n_cross_validations=3,
+ n_cross_validations="auto", # Could be customized as an integer
+ cv_step_size = "auto", # Could be customized as an integer
... **time_series_settings) ``` +
+You can also bring your own validation data; learn more in [Configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data).
+ Learn more about how AutoML applies cross validation to [prevent over-fitting models](concept-manage-ml-pitfalls.md#prevent-overfitting). ## Configure experiment
automl_config = AutoMLConfig(task='forecasting',
enable_early_stopping=True, training_data=train_data, label_column_name=label,
- n_cross_validations=5,
+ n_cross_validations="auto", # Could be customized as an integer
+ cv_step_size = "auto", # Could be customized as an integer
enable_ensembling=False, verbosity=logging.INFO,
- **forecasting_parameters)
+ forecasting_parameters=forecasting_parameters)
``` The amount of data required to successfully train a forecasting model with automated ML is influenced by the `forecast_horizon`, `n_cross_validations`, and `target_lags` or `target_rolling_window_size` values specified when you configure your `AutoMLConfig`.
To enable deep learning, set the `enable_dnn=True` in the `AutoMLConfig` object.
automl_config = AutoMLConfig(task='forecasting', enable_dnn=True, ...
- **forecasting_parameters)
+ forecasting_parameters=forecasting_parameters)
``` > [!Warning] > When you enable DNN for experiments created with the SDK, [best model explanations](how-to-machine-learning-interpretability-automl.md) are disabled.
automl_settings = {"task" : 'forecasting',
"iterations" : 15, "experiment_timeout_hours" : 1, "label_column_name" : 'Quantity',
- "n_cross_validations" : 3,
+ "n_cross_validations" : "auto", # Could be customized as an integer
+ "cv_step_size" : "auto", # Could be customized as an integer
"time_column_name": 'WeekStarting', "max_horizon" : 6, "track_child_runs": False,
automl_settings = {"task" : "forecasting",
"model_explainability": model_explainability,# The following settings are specific to this sample and should be adjusted according to your own needs. "iteration_timeout_minutes" : 10, "iterations" : 10,
- "n_cross_validations": 2}
+ "n_cross_validations" : "auto", # Could be customized as an integer
+ "cv_step_size" : "auto", # Could be customized as an integer
+ }
hts_parameters = HTSTrainParameters( automl_settings=automl_settings,
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Previously updated : 08/05/2022 Last updated : 09/20/2022 # Create an Azure Machine Learning compute cluster
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
-> * [CLI v1](v1/how-to-create-attach-compute-cluster.md)
-> * [CLI v2 (current version)](how-to-create-attach-compute-cluster.md)
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI or SDK version you are using:"]
+> * [v1](v1/how-to-create-attach-compute-cluster.md)
+> * [v2 (current version)](how-to-create-attach-compute-cluster.md)
Learn how to create and manage a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
In this article, learn how to:
* If using the Python SDK, [set up your development environment with a workspace](how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script:
- [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+ [!INCLUDE [connect ws v2](../../includes/machine-learning-connect-ws-v2.md)]
- ```python
- from azureml.core import Workspace
-
- ws = Workspace.from_config()
- ```
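For readers of this digest who don't have the included snippet handy, a minimal v2 connection sketch (assuming `azure-ai-ml` and `azure-identity` are installed and a `config.json` has been downloaded for the workspace) looks roughly like this:

```python
# Minimal sketch: connect to the workspace with the v2 SDK.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Reads the subscription ID, resource group, and workspace name from config.json.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())
```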
## What is a compute cluster?
The compute autoscales down to zero nodes when it isn't used. Dedicated VMs ar
# [Python SDK](#tab/python)
-To create a persistent Azure Machine Learning Compute resource in Python, specify the **vm_size** and **max_nodes** properties. Azure Machine Learning then uses smart defaults for the other properties.
+To create a persistent Azure Machine Learning Compute resource in Python, specify the **size** and **max_instances** properties. Azure Machine Learning then uses smart defaults for the other properties.
-* **vm_size**: The VM family of the nodes created by Azure Machine Learning Compute.
-* **max_nodes**: The max number of nodes to autoscale up to when you run a job on Azure Machine Learning Compute.
+* **size**: The VM family of the nodes created by Azure Machine Learning Compute.
+* **max_instances**: The max number of nodes to autoscale up to when you run a job on Azure Machine Learning Compute.
-[!code-python[](~/aml-sdk-samples/ignore/doc-qa/how-to-set-up-training-targets/amlcompute2.py?name=cpu_cluster)]
+[!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=cluster_basic)]
You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute) for details.
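As a hedged illustration of those two properties with the v2 SDK (assuming the `ml_client` connection sketched earlier and an illustrative VM size), the call looks roughly like this:

```python
from azure.ai.ml.entities import AmlCompute

# Only size and max_instances are set deliberately; the rest rely on smart defaults.
cpu_cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_v2",           # VM family/size of the cluster nodes (illustrative)
    min_instances=0,                  # scale to zero when idle
    max_instances=4,                  # autoscale ceiling
    idle_time_before_scale_down=120,  # seconds of idle time before scaling down
)
ml_client.compute.begin_create_or_update(cpu_cluster).result()
```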
Use any of these ways to specify a low-priority VM:
# [Python SDK](#tab/python)
-```python
-compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
- vm_priority='lowpriority',
- max_nodes=4)
-```
+[!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=cluster_low_pri)]
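Translated to the v2 entity shown above, a low-priority cluster is the same `AmlCompute` definition with the `tier` property changed; treat the exact tier value here as an assumption to verify against your SDK version:

```python
from azure.ai.ml.entities import AmlCompute

low_pri_cluster = AmlCompute(
    name="low-pri-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=4,
    tier="low_priority",  # assumption: low-priority tier value; the default tier is dedicated
)
ml_client.compute.begin_create_or_update(low_pri_cluster).result()
```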
# [Azure CLI](#tab/azure-cli)
In the studio, choose **Low Priority** when you create a VM.
# [Python SDK](#tab/python) -
-* Configure managed identity in your provisioning configuration:
-
- * System assigned managed identity created in a workspace named `ws`
- ```python
- # configure cluster with a system-assigned managed identity
- compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
- max_nodes=5,
- identity_type="SystemAssigned",
- )
- cpu_cluster_name = "cpu-cluster"
- cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
- ```
-
- * User-assigned managed identity created in a workspace named `ws`
-
- ```python
- # configure cluster with a user-assigned managed identity
- compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
- max_nodes=5,
- identity_type="UserAssigned",
- identity_id=['/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'])
-
- cpu_cluster_name = "cpu-cluster"
- cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
- ```
-
-* Add managed identity to an existing compute cluster named `cpu_cluster`
-
- * System-assigned managed identity:
-
- ```python
- # add a system-assigned managed identity
- cpu_cluster.add_identity(identity_type="SystemAssigned")
- ````
-
- * User-assigned managed identity:
-
- ```python
- # add a user-assigned managed identity
- cpu_cluster.add_identity(identity_type="UserAssigned",
- identity_id=['/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'])
- ```
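The removed v1 snippets above have a v2 counterpart; a heavily hedged sketch (assuming `IdentityConfiguration` and `ManagedIdentityConfiguration` from `azure.ai.ml.entities`, and treating the identity type string as an assumption) might look like this:

```python
from azure.ai.ml.entities import (
    AmlCompute,
    IdentityConfiguration,
    ManagedIdentityConfiguration,
)

# Assumption: "UserAssigned" is the accepted identity type string in this SDK version.
identity = IdentityConfiguration(
    type="UserAssigned",
    user_assigned_identities=[
        ManagedIdentityConfiguration(
            resource_id="/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>"
        )
    ],
)

# Attach the identity when defining the cluster.
cpu_cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_v2",
    max_instances=4,
    identity=identity,
)
ml_client.compute.begin_create_or_update(cpu_cluster).result()
```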
# [Azure CLI](#tab/azure-cli)
If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0
Use your compute cluster to:
-* [Submit a training run](v1/how-to-set-up-training-targets.md)
+* [Submit a training run](./how-to-train-sdk.md)
* [Run batch inference](./tutorial-pipeline-batch-scoring-classification.md).
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
Access to a given Azure Machine Learning workspace via Private Link is done by c
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.azureml.ms``` - ```<compute instance name>.<region the workspace was created in>.instances.azureml.ms``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.azure.net```-- ```*.<per-workspace globally-unique identifier>.inference.<region the workspace was created in>.api.azureml.ms``` - Used by managed online endpoints
+- ```<managed online endpoint name>.<region>.inference.ml.azure.com``` - Used by managed online endpoints
**Azure China 21Vianet regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn``` - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.ml.azure.cn``` - ```<compute instance name>.<region the workspace was created in>.instances.azureml.cn``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.chinacloudapi.cn```
+- ```<managed online endpoint name>.<region>.inference.ml.azure.cn``` - Used by managed online endpoints
**Azure US Government regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us``` - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.ml.azure.us``` - ```<compute instance name>.<region the workspace was created in>.instances.azureml.us``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.usgovcloudapi.net```
+- ```<managed online endpoint name>.<region>.inference.ml.azure.us``` - Used by managed online endpoints
The Fully Qualified Domains resolve to the following Canonical Names (CNAMEs) called the workspace Private Link FQDNs: **Azure Public regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.privatelink.api.azureml.ms``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.privatelink.notebooks.azure.net```
+- ```<managed online endpoint name>.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.azureml.ms``` - Used by managed online endpoints
**Azure China regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.privatelink.api.ml.azure.cn``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.privatelink.notebooks.chinacloudapi.cn```
+- ```<managed online endpoint name>.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.ml.azure.cn``` - Used by managed online endpoints
**Azure US Government regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.privatelink.api.ml.azure.us``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.privatelink.notebooks.usgovcloudapi.net```
+- ```<managed online endpoint name>.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.ml.azure.us``` - Used by managed online endpoints
The FQDNs resolve to the IP addresses of the Azure Machine Learning workspace in that region. However, resolution of the workspace Private Link FQDNs can be overridden by using a custom DNS server hosted in the virtual network. For an example of this architecture, see the [custom DNS server hosted in a vnet](#example-custom-dns-server-hosted-in-vnet) example.
+> [!NOTE]
+> Managed online endpoints share the workspace private endpoint. If you are manually adding DNS records to the private DNS zone `privatelink.api.azureml.ms`, an A record with wildcard
+> `*.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.azureml.ms` should be added to route all endpoints under the workspace to the private endpoint.
+ ## Manual DNS server integration This section discusses which Fully Qualified Domains to create A records for in a DNS Server, and which IP address to set the value of the A record to.
The following list contains the fully qualified domain names (FQDNs) used by you
> * Compute instances can be accessed only from within the virtual network. > * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
-* `*.<workspace-GUID>.inference.<region>.api.azureml.ms`
+* `<managed online endpoint name>.<region>.inference.ml.azure.com` - Used by managed online endpoints
#### Azure China region
The following FQDNs are for Azure China regions:
> [!NOTE] > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` at 63 characters or less.
-
+ * `<instance-name>.<region>.instances.azureml.cn` * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
+* `<managed online endpoint name>.<region>.inference.ml.azure.cn` - Used by managed online endpoints
+ #### Azure US Government The following FQDNs are for Azure US Government regions:
The following FQDNs are for Azure US Government regions:
* `<instance-name>.<region>.instances.azureml.us` > * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
+* `<managed online endpoint name>.<region>.inference.ml.azure.us` - Used by managed online endpoints
+ ### Find the IP addresses To find the internal IP addresses for the FQDNs in the VNet, use one of the following methods:
To find the internal IP addresses for the FQDNs in the VNet, use one of the foll
"ml-myworkspace-eastus-fb7e20a0-8891-458b-b969-55ddb3382f51.eastus.notebooks.azure.net" ], "IPAddress": "10.1.0.6"
+ },
+ {
+ "FQDNs": [
+ "*.eastus.inference.ml.azure.com"
+ ],
+ "IPAddress": "10.1.0.7"
} ] ```
The information returned from all methods is the same; a list of the FQDN and pr
| `fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms` | `10.1.0.5` | | `fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.cert.api.azureml.ms` | `10.1.0.5` | | `ml-myworkspace-eastus-fb7e20a0-8891-458b-b969-55ddb3382f51.eastus.notebooks.azure.net` | `10.1.0.6` |
-| `mymanagedonlineendpoint.fb7e20a0-8891-458b-b969-55ddb3382f51.inference.eastus.api.azureml.ms` | `10.1.0.7` |
+| `*.eastus.inference.ml.azure.com` | `10.1.0.7` |
The following table shows example IPs from Azure China regions:
The following table shows example IPs from Azure China regions:
| `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.chinaeast2.api.ml.azure.cn` | `10.1.0.5` | | `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.chinaeast2.cert.api.ml.azure.cn` | `10.1.0.5` | | `ml-mype-pltest-chinaeast2-52882c08-ead2-44aa-af65-08a75cf094bd.chinaeast2.notebooks.chinacloudapi.cn` | `10.1.0.6` |
+| `*.chinaeast2.inference.ml.azure.cn` | `10.1.0.7` |
The following table shows example IPs from Azure US Government regions:
The following table shows example IPs from Azure US Government regions:
| `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.chinaeast2.api.ml.azure.us` | `10.1.0.5` | | `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.chinaeast2.cert.api.ml.azure.us` | `10.1.0.5` | | `ml-mype-plt-usgovvirginia-52882c08-ead2-44aa-af65-08a75cf094bd.usgovvirginia.notebooks.usgovcloudapi.net` | `10.1.0.6` |
+| `*.usgovvirginia.inference.ml.azure.us` | `10.1.0.7` |
+
+> [!NOTE]
+> Managed online endpoints share the workspace private endpoint. If you are manually adding DNS records to the private DNS zone `privatelink.api.azureml.ms`, an A record with wildcard
+> `*.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.azureml.ms` should be added to route all endpoints under the workspace to the private endpoint.
<a id='dns-vnet'></a>
The following steps describe how this topology works:
**Azure Public regions**: - ```privatelink.api.azureml.ms``` - ```privatelink.notebooks.azure.net```
-
+ **Azure China regions**: - ```privatelink.api.ml.azure.cn``` - ```privatelink.notebooks.chinacloudapi.cn```
-
+ **Azure US Government regions**: - ```privatelink.api.ml.azure.us``` - ```privatelink.notebooks.usgovcloudapi.net```
+ > [!NOTE]
+ > Managed online endpoints share the workspace private endpoint. If you are manually adding DNS records to the private DNS zone `privatelink.api.azureml.ms`, an A record with wildcard
+ > `*.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.azureml.ms` should be added to route all endpoints under the workspace to the private endpoint.
+ Following creation of the Private DNS Zone, it needs to be linked to the DNS Server Virtual Network. The Virtual Network that contains the DNS Server. A Private DNS Zone overrides name resolution for all names within the scope of the root of the zone. This override applies to all Virtual Networks the Private DNS Zone is linked to. For example, if a Private DNS Zone rooted at `privatelink.api.azureml.ms` is linked to Virtual Network foo, all resources in Virtual Network foo that attempt to resolve `bar.workspace.westus2.privatelink.api.azureml.ms` will receive any record that is listed in the `privatelink.api.azureml.ms` zone.
The following steps describe how this topology works:
> [!IMPORTANT] > The private endpoint must have Private DNS integration enabled for this example to function correctly.
-3. **Create conditional forwarder in DNS Server to forward to Azure DNS**:
+3. **Create conditional forwarder in DNS Server to forward to Azure DNS**:
Next, create a conditional forwarder to the Azure DNS Virtual Server. The conditional forwarder ensures that the DNS server always queries the Azure DNS Virtual Server IP address for FQDNs related to your workspace. This means that the DNS Server will return the corresponding record from the Private DNS Zone.
The following steps describe how this topology works:
- ```notebooks.azure.net``` - ```instances.azureml.ms``` - ```aznbcontent.net```
-
+ - ```inference.ml.azure.com``` - Used by managed online endpoints
+ **Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn``` - ```instances.azureml.cn``` - ```aznbcontent.net```
-
+ - ```inference.ml.azure.cn``` - Used by managed online endpoints
+ **Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net``` - ```instances.azureml.us``` - ```aznbcontent.net```
+ - ```inference.ml.azure.us``` - Used by managed online endpoints
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
**Azure Public regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.azureml.ms``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.azure.net```
-
+ - ```<managed online endpoint name>.<region>.inference.ml.azure.com``` - Used by managed online endpoints
+ **Azure China regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.chinacloudapi.cn```
-
+ - ```<managed online endpoint name>.<region>.inference.ml.azure.cn``` - Used by managed online endpoints
+ **Azure US Government regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.usgovcloudapi.net```
+ - ```<managed online endpoint name>.<region>.inference.ml.azure.us``` - Used by managed online endpoints
5. **Azure DNS recursively resolves workspace domain to CNAME**:
If you cannot access the workspace from a virtual machine or jobs fail on comput
1. **Access compute resource in Virtual Network topology**:
- Proceed to access a compute resource in the Azure Virtual Network topology. This will likely require accessing a Virtual Machine in a Virtual Network that is peered with the Hub Virtual Network.
+ Proceed to access a compute resource in the Azure Virtual Network topology. This will likely require accessing a Virtual Machine in a Virtual Network that is peered with the Hub Virtual Network.
1. **Resolve workspace FQDNs**: Open a command prompt, shell, or PowerShell. Then for each of the workspace FQDNs, run the following command: `nslookup <workspace FQDN>`
-
+ The result of each nslookup should return one of the two private IP addresses on the Private Endpoint to the Azure Machine Learning workspace. If it does not, then there is something misconfigured in the custom DNS solution. Possible causes:
The following steps describe how this topology works:
1. **Create Private DNS Zone and link to DNS Server Virtual Network**: The first step in ensuring a Custom DNS solution works with your Azure Machine Learning workspace is to create two Private DNS Zones rooted at the following domains:
-
+ **Azure Public regions**: - ``` privatelink.api.azureml.ms``` - ``` privatelink.notebooks.azure.net```
-
+ **Azure China regions**: - ```privatelink.api.ml.azure.cn``` - ```privatelink.notebooks.chinacloudapi.cn```
-
+ **Azure US Government regions**: - ```privatelink.api.ml.azure.us``` - ```privatelink.notebooks.usgovcloudapi.net```
- Following creation of the Private DNS Zone, it needs to be linked to the DNS Server VNet ΓÇô the Virtual Network that contains the DNS Server.
+ > [!NOTE]
+ > Managed online endpoints share the workspace private endpoint. If you are manually adding DNS records to the private DNS zone `privatelink.api.azureml.ms`, an A record with wildcard
+ > `*.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.azureml.ms` should be added to route all endpoints under the workspace to the private endpoint.
+
+ Following creation of the Private DNS Zone, it needs to be linked to the DNS Server VNet – the Virtual Network that contains the DNS Server.
> [!NOTE] > The DNS Server in the virtual network is separate from the On-premises DNS Server.
The following steps describe how this topology works:
- ```api.azureml.ms``` - ```notebooks.azure.net``` - ```instances.azureml.ms```
- - ```aznbcontent.net```
-
+ - ```aznbcontent.net```
+ - ```inference.ml.azure.com``` - Used by managed online endpoints
+ **Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn``` - ```instances.azureml.cn```
- - ```aznbcontent.net```
+ - ```aznbcontent.net```
+ - ```inference.ml.azure.cn``` - Used by managed online endpoints
**Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net``` - ```instances.azureml.us```
- - ```aznbcontent.net```
+ - ```aznbcontent.net```
+ - ```inference.ml.azure.us``` - Used by managed online endpoints
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding. 4. **Create conditional forwarder in On-premises DNS Server to forward to DNS Server**:
- Next, create a conditional forwarder to the DNS Server in the DNS Server Virtual Network. This forwarder is for the zones listed in step 1. This is similar to step 3, but, instead of forwarding to the Azure DNS Virtual Server IP address, the On-premises DNS Server will be targeting the IP address of the DNS Server. As the On-premises DNS Server is not in Azure, it is not able to directly resolve records in Private DNS Zones. In this case the DNS Server proxies requests from the On-premises DNS Server to the Azure DNS Virtual Server IP. This allows the On-premises DNS Server to retrieve records in the Private DNS Zones linked to the DNS Server Virtual Network.
+ Next, create a conditional forwarder to the DNS Server in the DNS Server Virtual Network. This forwarder is for the zones listed in step 1. This is similar to step 3, but, instead of forwarding to the Azure DNS Virtual Server IP address, the On-premises DNS Server will be targeting the IP address of the DNS Server. As the On-premises DNS Server is not in Azure, it is not able to directly resolve records in Private DNS Zones. In this case the DNS Server proxies requests from the On-premises DNS Server to the Azure DNS Virtual Server IP. This allows the On-premises DNS Server to retrieve records in the Private DNS Zones linked to the DNS Server Virtual Network.
The zones to conditionally forward are listed below. The IP addresses to forward to are the IP addresses of your DNS Servers:
The following steps describe how this topology works:
- ```api.azureml.ms``` - ```notebooks.azure.net``` - ```instances.azureml.ms```
-
+ - ```inference.ml.azure.com``` - Used by managed online endpoints
+ **Azure China regions**: - ```api.ml.azure.cn``` - ```notebooks.chinacloudapi.cn``` - ```instances.azureml.cn```
-
+ - ```inference.ml.azure.cn``` - Used by managed online endpoints
+ **Azure US Government regions**: - ```api.ml.azure.us``` - ```notebooks.usgovcloudapi.net``` - ```instances.azureml.us```
+ - ```inference.ml.azure.us``` - Used by managed online endpoints
> [!IMPORTANT] > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
The following steps describe how this topology works:
**Azure Public regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.azureml.ms``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.azure.net```
-
+ - ```<managed online endpoint name>.<region>.inference.ml.azure.com``` - Used by managed online endpoints
+ **Azure China regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.chinacloudapi.cn```
-
+ - ```<managed online endpoint name>.<region>.inference.ml.azure.cn``` - Used by managed online endpoints
+ **Azure US Government regions**: - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.<region>.notebooks.usgovcloudapi.net```
+ - ```<managed online endpoint name>.<region>.inference.ml.azure.us``` - Used by managed online endpoints
6. **On-premises DNS server recursively resolves workspace domain**:
The following is an example of `hosts` file entries for Azure Machine Learning:
10.1.0.5 fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms 10.1.0.5 fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.cert.api.azureml.ms 10.1.0.6 ml-myworkspace-eastus-fb7e20a0-8891-458b-b969-55ddb3382f51.eastus.notebooks.azure.net
-10.1.0.7 mymanagedonlineendpoint.fb7e20a0-8891-458b-b969-55ddb3382f51.inference.eastus.api.azureml.ms
+
+# For a managed online/batch endpoint named 'mymanagedendpoint'
+10.1.0.7 mymanagedendpoint.eastus.inference.ml.azure.com
# For a compute instance named 'mycomputeinstance' 10.1.0.5 mycomputeinstance.eastus.instances.azureml.ms
If after running through the above steps you are unable to access the workspace
1. **Access compute resource in Virtual Network topology**:
- Proceed to access a compute resource in the Azure Virtual Network topology. This will likely require accessing a Virtual Machine in a Virtual Network that is peered with the Hub Virtual Network.
+ Proceed to access a compute resource in the Azure Virtual Network topology. This will likely require accessing a Virtual Machine in a Virtual Network that is peered with the Hub Virtual Network.
1. **Resolve workspace FQDNs**: Open a command prompt, shell, or PowerShell. Then for each of the workspace FQDNs, run the following command: `nslookup <workspace FQDN>`
-
+ The result of each nslookup should yield one of the two private IP addresses on the Private Endpoint to the Azure Machine Learning workspace. If it does not, then there is something misconfigured in the custom DNS solution. Possible causes:
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
When your resource group and repository are no longer needed, clean up the resou
## Next steps > [!div class="nextstepaction"]
-> [Create production ML pipelines with Python SDK](tutorial-pipeline-python-sdk.md)
+> [Create production ML pipelines with Python SDK](tutorial-pipeline-python-sdk.md)
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-data-access.md
To create datastores that use **credential-based** authentication, like access k
There are two scenarios in which you can apply identity-based data access in Azure Machine Learning. These scenarios are a good fit for identity-based access when you're working with confidential data and need more granular data access management:
-> [!WARNING]
-> Identity-based data access is not supported for [automated ML experiments](how-to-configure-auto-train.md).
- - Accessing storage services - Training machine learning models with private data
If your storage account has virtual network settings, that dictates what identit
We recommend that you use [Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md) when you interact with your data in storage with Azure Machine Learning.
-> [!IMPORTANT]
-> Datasets using identity-based data access are not supported for [automated ML experiments](how-to-configure-auto-train.md).
- Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](v1/how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target. To create a dataset, you can reference paths from datastores that also use identity-based data access.
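As a hedged v1 SDK sketch of that flow (the storage account and container names are illustrative): registering a blob datastore without an account key or SAS token means data access falls back to the caller's identity, and a dataset can then reference paths on that datastore.

```python
from azureml.core import Dataset, Datastore, Workspace

ws = Workspace.from_config()

# No account_key or sas_token is supplied, so access to the container is identity-based.
identity_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="credentialless_blob",
    container_name="mycontainer",      # illustrative
    account_name="mystorageaccount",   # illustrative
)

# Reference paths on the credential-less datastore to create a file dataset.
dataset = Dataset.File.from_files(path=(identity_datastore, "data/"))
```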
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
If this pattern is expected in your time series, you can switch your primary met
If you have over 100 automated ML experiments, this may cause new automated ML experiments to have long run times.
+## VNet Firewall Setting Download Failure
+
+If your workspace is behind a virtual network (VNet) firewall, you may run into model download failures when using AutoML NLP. This happens because network traffic is blocked from downloading the models and tokenizers from the Azure CDN. To unblock this, allow list the following URLs in the "Application rules" setting of the VNet firewall policy:
+
+* aka.ms
+* https://automlresources-prod.azureedge.net
+
+Follow [these instructions](how-to-access-azureml-behind-firewall.md) to configure the firewall settings.
+
+Instructions for configuring a workspace behind a virtual network are available in the [secure workspace tutorial](tutorial-create-secure-workspace.md).
+ ## Next steps + Learn more about [how to train a regression model with Automated machine learning](./v1/how-to-auto-train-models-v1.md) or [how to train using Automated machine learning on a remote resource](./v1/concept-automated-ml-v1.md#local-remote).
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-workspace-diagnostic-api.md
The response is a JSON document that contains information on any problems detect
```json {
- 'value': {
- 'user_defined_route_results': [],
- 'network_security_rule_results': [],
- 'resource_lock_results': [],
- 'dns_resolution_results': [{
- 'code': 'CustomDnsInUse',
- 'level': 'Warning',
- 'message': "It is detected VNet '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>' of private endpoint '/subscriptions/<subscription-id>/resourceGroups/larrygroup0916/providers/Microsoft.Network/privateEndpoints/<workspace-private-endpoint>' is not using Azure default dns. You need to configure your DNS server and check https://docs.microsoft.com/azure/machine-learning/how-to-custom-dns to make sure the custom dns is set up correctly."
- }],
- 'storage_account_results': [],
- 'key_vault_results': [],
- 'container_registry_results': [],
- 'application_insights_results': [],
- 'other_results': []
+ "value": {
+ "user_defined_route_results": [],
+ "network_security_rule_results": [],
+ "resource_lock_results": [],
+ "dns_resolution_results": [{
+ "code": "CustomDnsInUse",
+ "level": "Warning",
+ "message": "It is detected VNet '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>' of private endpoint '/subscriptions/<subscription-id>/resourceGroups/larrygroup0916/providers/Microsoft.Network/privateEndpoints/<workspace-private-endpoint>' is not using Azure default DNS. You need to configure your DNS server and check https://learn.microsoft.com/azure/machine-learning/how-to-custom-dns to make sure the custom DNS is set up correctly."
+ }],
+ "storage_account_results": [],
+ "key_vault_results": [],
+ "container_registry_results": [],
+ "application_insights_results": [],
+ "other_results": []
} } ```
machine-learning Reference Yaml Component Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `type` | string | **Required.** The type of component input. [Learn more about data access](concept-data.md) | `number`, `integer`, `boolean`, `string`, `uri_file`, `uri_folder`, `mltable`, `mlflow_model`| | | `description` | string | Description of the input. | | | | `default` | number, integer, boolean, or string | The default value for the input. | | |
-| `optional` | boolean | Whether the input is required. | | `false` |
+| `optional` | boolean | Whether the input is optional. If set to `true`, wrap the input in the command with `$[[]]` so it can be omitted at runtime. | | `false` |
| `min` | integer or number | The minimum accepted value for the input. This field can only be specified if `type` field is `number` or `integer`. | | | `max` | integer or number | The maximum accepted value for the input. This field can only be specified if `type` field is `number` or `integer`. | | | `enum` | array | The list of allowed values for the input. Only applicable if `type` field is `string`.| |
Examples are available in the [examples GitHub repository](https://github.com/Az
:::code language="yaml" source="~/azureml-examples-main/cli/assets/component/train.yml":::
+### Define optional inputs in the command line
+When an input is set to `optional = true`, you need to use `$[[]]` to wrap the command line containing that input, for example `$[[--input1 ${{inputs.input1}}]]`. The command line at runtime may then vary depending on which inputs are supplied.
+- If you specify only the required `training_data` and `model_output` parameters, the command line will look like:
+
+```azurecli
+python train.py --training_data some_input_path --learning_rate 0.01 --learning_rate_schedule time-based --model_output some_output_path
+```
+
+If no value is specified at runtime, `learning_rate` and `learning_rate_schedule` will use their default values.
+
+- If values are provided for all inputs and outputs at runtime, the command line will look like:
+```azurecli
+python train.py --training_data some_input_path --max_epocs 10 --learning_rate 0.01 --learning_rate_schedule time-based --model_output some_output_path
+```
++ ## Next steps - [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
jobs:
Similar to the `command` for a job, the `command` for a component can also be parameterized with references to the `inputs` and `outputs` contexts. In this case the reference is to the component's inputs and outputs. When the component is run in a job, Azure ML will resolve those references to the job runtime input and output values specified for the respective component inputs and outputs. Below is an example of using the context syntax for a command component YAML specification.
-```yaml
-$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
-type: command
-code: ./src
-command: python train.py --lr ${{inputs.learning_rate}} --training-data ${{inputs.iris}} --model-dir ${{outputs.model_dir}}
-environment: azureml:AzureML-Minimal@latest
-inputs:
- learning_rate:
- type: number
- default: 0.01
- iris:
- type: uri_file
-outputs:
- model_dir:
- type: uri_folder
+
+#### Define optional inputs in the command line
+When an input is set to `optional = true`, you need to use `$[[]]` to wrap the command line containing that input, for example `$[[--input1 ${{inputs.input1}}]]`. The command line at runtime may then vary depending on which inputs are supplied.
+- If you specify only the required `training_data` and `model_output` parameters, the command line will look like:
+
+```cli
+python train.py --training_data some_input_path --learning_rate 0.01 --learning_rate_schedule time-based --model_output some_output_path
+```
+
+If no value is specified at runtime, `learning_rate` and `learning_rate_schedule` will use their default values.
+
+- If values are provided for all inputs and outputs at runtime, the command line will look like:
+```cli
+python train.py --training_data some_input_path --max_epocs 10 --learning_rate 0.01 --learning_rate_schedule time-based --model_output some_output_path
``` ## Next steps
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
In this tutorial, you accomplish the following tasks:
## Prerequisites
-* Familiarity with Azure Virtual Networks and IP networking. If you are not familiar, try the [Fundamentals of computer networking](/learn/modules/network-fundamentals/) module.
+* Familiarity with Azure Virtual Networks and IP networking. If you are not familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module.
* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2. ## Limitations
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
Last updated 05/02/2022
# Create an Azure Machine Learning compute cluster with CLI v1
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI version you are using:"]
-> * [CLI v1](how-to-create-attach-compute-cluster.md)
-> * [CLI v2 (current version)](../how-to-create-attach-compute-cluster.md)
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [v1](how-to-create-attach-compute-cluster.md)
+> * [v2 (current version)](../how-to-create-attach-compute-cluster.md)
Learn how to create and manage a [compute cluster](../concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
In this article, learn how to:
* Lower your compute cluster cost * Set up a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for the cluster
-This article covers only the CLI v1 way to accomplish these tasks. To see how to use the SDK, CLI v2, or studio, see [Create an Azure Machine Learning compute cluster (CLI v2)](../how-to-create-attach-compute-cluster.md)
-
-> [!NOTE]
-> This article covers only how to do these tasks using CLI v1. For more recent ways to manage a compute instance, see [Create an Azure Machine Learning compute cluster](../how-to-create-attach-compute-cluster.md).
## Prerequisites
This article covers only the CLI v1 way to accomplish these tasks. To see how t
[!INCLUDE [cli v1 deprecation](../../../includes/machine-learning-cli-v1-deprecation.md)]
+* If using the Python SDK, [set up your development environment with a workspace](../how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script:
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core import Workspace
+
+ ws = Workspace.from_config()
+ ```
## What is a compute cluster?
The dedicated cores per region per VM family quota and total regional quota, whi
The compute autoscales down to zero nodes when it isn't used. Dedicated VMs are created to run your jobs as needed.
+# [Python SDK](#tab/python)
+
+To create a persistent Azure Machine Learning Compute resource in Python, specify the **vm_size** and **max_nodes** properties. Azure Machine Learning then uses smart defaults for the other properties.
+
+* **vm_size**: The VM family of the nodes created by Azure Machine Learning Compute.
+* **max_nodes**: The max number of nodes to autoscale up to when you run a job on Azure Machine Learning Compute.
++
+[!code-python[](~/aml-sdk-samples/ignore/doc-qa/how-to-set-up-training-targets/amlcompute2.py?name=cpu_cluster)]
+
+You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute) for details.
+
+> [!WARNING]
+> When setting the `location` parameter, if it is a different region than your workspace or datastores you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
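In place of the referenced sample, a minimal v1 sketch of that call (with an illustrative VM size and node count, and assuming the `ws` workspace object from the prerequisites) looks roughly like the following; the same `AmlCompute.provisioning_configuration` API appears in the low-priority and managed identity snippets later in this article.

```python
from azureml.core.compute import AmlCompute, ComputeTarget

# Only vm_size and max_nodes are set; smart defaults cover the other properties.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_D2_V2",
    max_nodes=4,
)
cpu_cluster = ComputeTarget.create(ws, "cpu-cluster", compute_config)
cpu_cluster.wait_for_completion(show_output=True)
```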
+
+# [Azure CLI](#tab/azure-cli)
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
az ml computetarget create amlcompute -n cpu --min-nodes 1 --max-nodes 1 -s STAN
For more information, see Az PowerShell module [az ml computetarget create amlcompute](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-amlcompute). -+ ## Lower your compute cluster cost You may also choose to use [low-priority VMs](../how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs do not have guaranteed availability and may be preempted while in use. You will have to restart a preempted job.
+# [Python SDK](#tab/python)
++
+```python
+compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
+ vm_priority='lowpriority',
+ max_nodes=4)
+```
+
+# [Azure CLI](#tab/azure-cli)
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
Set the `vm-priority`:
```azurecli-interactive az ml computetarget create amlcompute --name lowpriocluster --vm-size Standard_NC6 --max-nodes 5 --vm-priority lowpriority ```-+ ## Set up managed identity [!INCLUDE [aml-clone-in-azure-notebook](../../../includes/aml-managed-identity-intro.md)]
+# [Python SDK](#tab/python)
++
+* Configure managed identity in your provisioning configuration:
+
+ * System assigned managed identity created in a workspace named `ws`
+ ```python
+ # configure cluster with a system-assigned managed identity
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
+ max_nodes=5,
+ identity_type="SystemAssigned",
+ )
+ cpu_cluster_name = "cpu-cluster"
+ cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
+ ```
+
+ * User-assigned managed identity created in a workspace named `ws`
+
+ ```python
+ # configure cluster with a user-assigned managed identity
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
+ max_nodes=5,
+ identity_type="UserAssigned",
+ identity_id=['/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'])
+
+ cpu_cluster_name = "cpu-cluster"
+ cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
+ ```
+
+* Add managed identity to an existing compute cluster named `cpu_cluster`
+
+ * System-assigned managed identity:
+
+ ```python
+ # add a system-assigned managed identity
+ cpu_cluster.add_identity(identity_type="SystemAssigned")
+ ````
+
+ * User-assigned managed identity:
+
+ ```python
+ # add a user-assigned managed identity
+ cpu_cluster.add_identity(identity_type="UserAssigned",
+ identity_id=['/subscriptions/<subcription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'])
+ ```
+
+# [Azure CLI](#tab/azure-cli)
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-workspace-diagnostic-api.md
The response is a JSON document that contains information on any problems detect
```json {
- 'value': {
- 'user_defined_route_results': [],
- 'network_security_rule_results': [],
- 'resource_lock_results': [],
- 'dns_resolution_results': [{
- 'code': 'CustomDnsInUse',
- 'level': 'Warning',
- 'message': "It is detected VNet '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>' of private endpoint '/subscriptions/<subscription-id>/resourceGroups/larrygroup0916/providers/Microsoft.Network/privateEndpoints/<workspace-private-endpoint>' is not using Azure default dns. You need to configure your DNS server and check https://docs.microsoft.com/azure/machine-learning/how-to-custom-dns to make sure the custom dns is set up correctly."
+ "value": {
+ "user_defined_route_results": [],
+ "network_security_rule_results": [],
+ "resource_lock_results": [],
+ "dns_resolution_results": [{
+ "code": "CustomDnsInUse",
+ "level": "Warning",
+ "message": "It is detected VNet '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>' of private endpoint '/subscriptions/<subscription-id>/resourceGroups/larrygroup0916/providers/Microsoft.Network/privateEndpoints/<workspace-private-endpoint>' is not using Azure default DNS. You need to configure your DNS server and check https://learn.microsoft.com/azure/machine-learning/how-to-custom-dns to make sure the custom DNS is set up correctly."
}],
- 'storage_account_results': [],
- 'key_vault_results': [],
- 'container_registry_results': [],
- 'application_insights_results': [],
- 'other_results': []
+ "storage_account_results": [],
+ "key_vault_results": [],
+ "container_registry_results": [],
+ "application_insights_results": [],
+ "other_results": []
} } ```
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-train-deploy-notebook.md
+
+ Title: "Tutorial: Train and deploy an example in Jupyter Notebook"
+
+description: Use Azure Machine Learning to train and deploy an image classification model with scikit-learn in a cloud-based Python Jupyter Notebook.
++++++ Last updated : 09/14/2022+
+#Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial: Train and deploy an image classification model with an example Jupyter Notebook
++
+In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning in a Python Jupyter Notebook. You can then use the notebook as a template to train your own machine learning model with your own data.
+
+This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multi-class classifier to identify the digit a given image represents.
+
+Learn how to take the following actions:
+
+> [!div class="checklist"]
+> * Download a dataset and look at the data.
+> * Train an image classification model and log metrics using MLflow.
+> * Deploy the model to do real-time inference.
++
+## Prerequisites
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to:
+ * Create a workspace.
+ * Create a cloud-based compute instance to use for your development environment.
+
+## Run a notebook from your workspace
+
+Azure Machine Learning includes a cloud notebook server in your workspace for an install-free and pre-configured experience. Use [your own environment](../how-to-configure-environment.md#local) if you prefer to have control over your environment, packages, and dependencies.
++
+## Clone a notebook folder
+
+You complete the following experiment setup and run steps in Azure Machine Learning studio. This consolidated interface includes machine learning tools to perform data science scenarios for data science practitioners of all skill levels.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
+
+1. Select your subscription and the workspace you created.
+
+1. On the left, select **Notebooks**.
+
+1. Select the **Open terminal** tool to open a terminal window.
+
+ :::image type="content" source="media/tutorial-train-deploy-notebook/open-terminal.png" alt-text="Screenshot: Open terminal from Notebooks section.":::
+
+1. On the top bar, select the compute instance you created during the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md), if it's not already selected. Start the compute instance if it's stopped.
+
+1. In the terminal window, clone the MachineLearningNotebooks repository:
+
+ ```bash
+ git clone --depth 1 https://github.com/Azure/MachineLearningNotebooks
+ ```
+
+1. If necessary, refresh the list of files with the **Refresh** tool to see the newly cloned folder under your user folder.
+
+## Open the cloned notebook
+
+1. Open the **MachineLearningNotebooks** folder that was cloned into your **Files** section.
+
+1. Select the **quickstart-azureml-in-10mins.ipynb** file from your **MachineLearningNotebooks/tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins** folder.
+
+ :::image type="content" source="media/tutorial-train-deploy-notebook/expand-folder.png" alt-text="Screenshot shows the Open tutorials folder.":::
+
+## Install packages
+
+Once the compute instance is running and the kernel appears, add a new code cell to install packages needed for this tutorial.
+
+1. At the top of the notebook, add a code cell.
+ :::image type="content" source="media/tutorial-train-deploy-notebook/add-code-cell.png" alt-text="Screenshot of add code cell for notebook.":::
+
+1. Add the following into the cell and then run the cell, either by using the **Run** tool or by using **Shift+Enter**.
+
+ ```bash
+ %pip install scikit-learn==0.22.1
+ %pip install scipy==1.5.2
+ ```
+
+You may see a few install warnings. These can safely be ignored.
+
+## Run the notebook
+
+This tutorial and the accompanying **utils.py** file are also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to use them in your own [local environment](../how-to-configure-environment.md#local). If you aren't using the compute instance, add `%pip install azureml-sdk[notebooks] azureml-opendatasets matplotlib` to the install above.
+
+> [!Important]
+> The rest of this article contains the same content as you see in the notebook.
+>
+> Switch to the Jupyter Notebook now if you want to run the code while you read along.
+> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
+
+## Import data
+
+Before you train a model, you need to understand the data you're using to train it. In this section, learn how to:
+
+* Download the MNIST dataset
+* Display some sample images
+
+You'll use Azure Open Datasets to get the raw MNIST data files. Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to machine learning solutions for better models. Each dataset has a corresponding class, `MNIST` in this case, to retrieve the data in different ways.
++
+```python
+import os
+from azureml.opendatasets import MNIST
+
+data_folder = os.path.join(os.getcwd(), "/tmp/qs_data")
+os.makedirs(data_folder, exist_ok=True)
+
+mnist_file_dataset = MNIST.get_file_dataset()
+mnist_file_dataset.download(data_folder, overwrite=True)
+```
+
+### Take a look at the data
+
+Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them.
+
+Note that this step requires a `load_data` function that's included in a `utils.py` file. This file is placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays.
++
+```python
+from utils import load_data
+import matplotlib.pyplot as plt
+import numpy as np
+import glob
++
+# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.
+X_train = (
+ load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/train-images-idx3-ubyte.gz"), recursive=True
+ )[0],
+ False,
+ )
+ / 255.0
+)
+X_test = (
+ load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/t10k-images-idx3-ubyte.gz"), recursive=True
+ )[0],
+ False,
+ )
+ / 255.0
+)
+y_train = load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/train-labels-idx1-ubyte.gz"), recursive=True
+ )[0],
+ True,
+).reshape(-1)
+y_test = load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/t10k-labels-idx1-ubyte.gz"), recursive=True
+ )[0],
+ True,
+).reshape(-1)
++
+# now let's show some randomly chosen images from the training set.
+count = 0
+sample_size = 30
+plt.figure(figsize=(16, 6))
+for i in np.random.permutation(X_train.shape[0])[:sample_size]:
+ count = count + 1
+ plt.subplot(1, sample_size, count)
+ plt.axhline("")
+ plt.axvline("")
+ plt.text(x=10, y=-10, s=y_train[i], fontsize=18)
+ plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)
+plt.show()
+```
+The code above displays a random set of images with their labels, similar to this:
++
+## Train model and log metrics with MLflow
+
+You'll train the model using the code below. Note that you are using MLflow autologging to track metrics and log model artifacts.
+
+You'll be using the [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) classifier from the [scikit-learn framework](https://scikit-learn.org/) to classify the data.
+
+> [!NOTE]
+> The model training takes approximately 2 minutes to complete.
++
+```python
+# create the model
+import mlflow
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+from azureml.core import Workspace
+
+# connect to your workspace
+ws = Workspace.from_config()
+
+# create experiment and start logging to a new run in the experiment
+experiment_name = "azure-ml-in10-mins-tutorial"
+
+# set up MLflow to track the metrics
+mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+mlflow.set_experiment(experiment_name)
+mlflow.autolog()
+
+# set up the Logistic regression model
+reg = 0.5
+clf = LogisticRegression(
+ C=1.0 / reg, solver="liblinear", multi_class="auto", random_state=42
+)
+
+# train the model
+with mlflow.start_run() as run:
+ clf.fit(X_train, y_train)
+```
+
+## View experiment
+
+In the left-hand menu in Azure Machine Learning studio, select __Jobs__ and then select your job (__azure-ml-in10-mins-tutorial__). A job is a grouping of many runs from a specified script or piece of code. Multiple jobs can be grouped together as an experiment.
+
+Information for the run is stored under that job. If the name doesn't exist when you submit a job, a new job is created with that name. When you select your run, you'll see various tabs containing metrics, logs, explanations, and other details.
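+
+If you prefer to inspect the experiment from the notebook rather than the studio UI, a short sketch like the following also works, using the MLflow tracking URI you configured earlier (`metrics.training_score` is the metric scikit-learn autologging typically records; treat the column names as an assumption):
+
+```python
+import mlflow
+
+# look up the experiment created during training and list its runs
+exp = mlflow.get_experiment_by_name("azure-ml-in10-mins-tutorial")
+runs = mlflow.search_runs(experiment_ids=[exp.experiment_id])
+
+# show run IDs, status, and the autologged training score when present
+cols = [c for c in ["run_id", "status", "metrics.training_score"] if c in runs.columns]
+print(runs[cols])
+```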
+
+## Version control your models with the model registry
+
+You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. The code below registers and versions the model you trained above. After you run the cell, you can see the model in the registry by selecting __Models__ in the left-hand menu in Azure Machine Learning studio.
+
+```python
+# register the model
+model_uri = "runs:/{}/model".format(run.info.run_id)
+model = mlflow.register_model(model_uri, "sklearn_mnist_model")
+```
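+
+To see the versioning in action, you can list what's registered under that name. Here's a small sketch using the `azureml-core` SDK that's already available on the compute instance; rerunning the registration cell should add one more version to the output:
+
+```python
+from azureml.core.model import Model
+
+# list every registered version of the model
+for m in Model.list(ws, name="sklearn_mnist_model"):
+    print(m.name, "version", m.version)
+```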
+
+## Deploy the model for real-time inference
+
+In this section, you learn how to deploy a model so that an application can consume (run inference against) the model over REST.
+
+### Create deployment configuration
+
+The next code cell gets a _curated environment_, which specifies all the dependencies required to host the model (for example, packages like scikit-learn). You also create a _deployment configuration_, which specifies the amount of compute required to host the model. In this case, the compute has 1 CPU core and 1 GB of memory.
++
+```python
+# create environment for the deploy
+from azureml.core.environment import Environment
+from azureml.core.conda_dependencies import CondaDependencies
+from azureml.core.webservice import AciWebservice
+
+# get a curated environment
+env = Environment.get(
+ workspace=ws,
+ name="AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference",
+ version=1
+)
+env.inferencing_stack_version='latest'
+
+# create deployment config i.e. compute resources
+aciconfig = AciWebservice.deploy_configuration(
+ cpu_cores=1,
+ memory_gb=1,
+ tags={"data": "MNIST", "method": "sklearn"},
+ description="Predict MNIST with sklearn",
+)
+```
+
+### Deploy model
+
+This next code cell deploys the model to Azure Container Instances.
+
+> [!NOTE]
+> The deployment takes approximately 3 minutes to complete.
++
+```python
+%%time
+import uuid
+from azureml.core.model import InferenceConfig
+from azureml.core.environment import Environment
+from azureml.core.model import Model
+
+# get the registered model
+model = Model(ws, "sklearn_mnist_model")
+
+# create an inference config i.e. the scoring script and environment
+inference_config = InferenceConfig(entry_script="score.py", environment=env)
+
+# deploy the service
+service_name = "sklearn-mnist-svc-" + str(uuid.uuid4())[:4]
+service = Model.deploy(
+ workspace=ws,
+ name=service_name,
+ models=[model],
+ inference_config=inference_config,
+ deployment_config=aciconfig,
+)
+
+service.wait_for_deployment(show_output=True)
+```
+
+The scoring script file referenced in the code above is in the same folder as this notebook and has two functions (a minimal sketch follows the list):
+
+1. An `init` function that executes once when the service starts. In this function, you normally get the model from the registry and set global variables.
+1. A `run(data)` function that executes each time a call is made to the service. In this function, you normally format the input data, run a prediction, and output the predicted result.
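+
+The exact script ships alongside the notebook, but a minimal sketch of a scoring script with these two functions might look like the following (the `model/model.pkl` path is an assumption about how the registered model is laid out under the `AZUREML_MODEL_DIR` environment variable that Azure ML sets in the container):
+
+```python
+import json
+import os
+
+import joblib
+import numpy as np
+
+
+def init():
+    # Runs once when the service starts: load the registered model into a global.
+    global model
+    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model/model.pkl")  # assumed layout
+    model = joblib.load(model_path)
+
+
+def run(raw_data):
+    # Runs on every request: parse the JSON payload, predict, and return the result.
+    data = np.array(json.loads(raw_data)["data"])
+    predictions = model.predict(data)
+    return predictions.tolist()
+```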
+
+### View endpoint
+
+Once the model has been successfully deployed, you can view the endpoint by going to __Endpoints__ in the left-hand menu in Azure Machine Learning studio. There you can see the endpoint's state (healthy or unhealthy), its logs, and the **Consume** tab, which shows how applications can call the model.
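+
+You can check the same details from the notebook by using the `service` object created during deployment, for example:
+
+```python
+# print the deployment state, the REST endpoint, and the container logs
+print("state:", service.state)
+print("scoring URI:", service.scoring_uri)
+print(service.get_logs())
+```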
+
+## Test the model service
+
+You can test the model by sending a raw HTTP request to the web service.
++
+```python
+# send raw HTTP request to test the web service.
+import requests
+
+# send a random row from the test set to score
+random_index = np.random.randint(0, len(X_test) - 1)
+input_data = '{"data": [' + str(list(X_test[random_index])) + "]}"
+
+headers = {"Content-Type": "application/json"}
+
+resp = requests.post(service.scoring_uri, input_data, headers=headers)
+
+print("POST to url", service.scoring_uri)
+print("label:", y_test[random_index])
+print("prediction:", resp.text)
+```
+
+## Clean up resources
+
+If you're not going to continue to use this model, delete the deployed model service by running:
+
+```python
+# if you want to keep workspace and only delete endpoint (it will incur cost while running)
+service.delete()
+```
+
+If you want to control cost further, stop the compute instance by selecting the **Stop compute** button next to the **Compute** dropdown. Then start the compute instance again the next time you need it.
+
+### Delete everything
+
+Use these steps to delete your Azure Machine Learning workspace and all compute resources.
+++
+## Next steps
+
++ Learn about all of the [deployment options for Azure Machine Learning](../how-to-deploy-managed-online-endpoints.md).
++ Learn how to [authenticate to the deployed model](../how-to-authenticate-online-endpoint.md).
++ [Make predictions on large quantities of data](../tutorial-pipeline-batch-scoring-classification.md) asynchronously.
++ Monitor your Azure Machine Learning models with [Application Insights](how-to-enable-app-insights.md).
+
marketplace Azure Consumption Commitment Enrollment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-consumption-commitment-enrollment.md
An offer must meet the following requirements to be enrolled in the MACC program
## Next steps - To learn more about how the MACC program benefits customers and how they can find solutions that are enabled for MACC, see [Azure Consumption Commitment benefit](/marketplace/azure-consumption-commitment-benefit).-- To learn more about how your organization can leverage Azure Marketplace, complete our Learn module, [Simplify cloud procurement and governance with Azure Marketplace](/learn/modules/simplify-cloud-procurement-governance-azure-marketplace/)
+- To learn more about how your organization can leverage Azure Marketplace, complete our Learn module, [Simplify cloud procurement and governance with Azure Marketplace](/training/modules/simplify-cloud-procurement-governance-azure-marketplace/)
- [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md#transact-publishing-option)
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-get-sas-uri.md
resourceGroupName=myResourceGroupName
snapshotName=mySnapshot #Provide Shared Access Signature (SAS) expiry duration in seconds (such as 3600)
-#Know more about SAS here: https://docs.microsoft.com/azure/storage/storage-dotnet-shared-access-signature-part-1
+#Know more about SAS here: https://learn.microsoft.com/azure/storage/storage-dotnet-shared-access-signature-part-1
sasExpiryDuration=3600 #Provide storage account name where you want to copy the underlying VHD file.
marketplace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/overview.md
When you create a commercial marketplace offer in Partner Center, it may be list
## Next steps -- Get an [Introduction to the Microsoft commercial marketplace](/learn/modules/intro-commercial-marketplace/).
+- Get an [Introduction to the Microsoft commercial marketplace](/training/modules/intro-commercial-marketplace/).
- Find videos and hands-on labs at [Mastering the marketplace](https://go.microsoft.com/fwlink/?linkid=2195692) - For new Microsoft partners who are interested in publishing to the commercial marketplace, see [Create a commercial marketplace account in Partner Center](create-account.md). - To learn more about recent and future releases, join the conversation in the [Microsoft Partner Community](https://www.microsoftpartnercommunity.com/).
migrate Concepts Migration Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-planning.md
Before finalizing your migration plan, make sure you consider and mitigate other
- **Network requirements**: Evaluate network bandwidth and latency constraints, which might cause unforeseen delays and disruptions to migration replication speed. - **Testing/post-migration tweaks**: Allow a time buffer to conduct performance and user acceptance testing for migrated apps, or to configure/tweak apps post-migration, such as updating database connection strings, configuring web servers, performing cut-overs/cleanup etc. - **Permissions**: Review recommended Azure permissions, and server/database access roles and permissions needed for migration.-- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out [free Microsoft training](/learn/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to exploreΓÇ»[Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).ΓÇ»
+- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out free [Microsoft Learn training](/training/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to explore [Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).
- **Implementation support**: Get support for your implementation if you need it. Many organizations opt for outside help to support their cloud migration. To move to Azure quickly and confidently with personalized assistance, consider an [Azure Expert Managed Service Provider](https://www.microsoft.com/solution-providers/search?cacheId=9c2fed4f-f9e2-42fb-8966-4c565f08f11e&ocid=CM_Discovery_Checklist_PDF), or [FastTrack for Azure](https://azure.microsoft.com/programs/azure-fasttrack/?ocid=CM_Discovery_Checklist_PDF).
mysql Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/videos.md
This page provides video content for learning about Azure Database for MySQL.
## Overview: Azure Database for PostgreSQL and MySQL
->[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T147/player]
+>[!VIDEO https://learn.microsoft.com/Events/Connect/2017/T147/player]
[Open in Channel 9](/Events/Connect/2017/T147) Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to get a quick overview of the advantages of using the service, and see some of the capabilities in action.
Azure Database for PostgreSQL and Azure Database for MySQL are managed services
## Deep dive on managed service capabilities for MySQL and PostgreSQL
->[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T148/player]
+>[!VIDEO https://learn.microsoft.com/Events/Connect/2017/T148/player]
[Open in Channel 9](/Events/Connect/2017/T148) Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and the capabilities of a fully managed service. Tune in to get a deep dive on how these services work: how we ensure high availability and fast scaling (within seconds), so you can meet your customers' needs. You'll also learn about some of the underlying investments in security and worldwide availability.
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
When you create or update a virtual network in your subscription, Network Watche
* You now have an overview of Azure Network Watcher. To get started using Network Watcher, diagnose a common communication problem to and from a virtual machine using IP flow verify. To learn how, see the [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md) quickstart.
-* [Learn module: Introduction to Azure Network Watcher](/learn/modules/intro-to-azure-network-watcher).
+* [Learn module: Introduction to Azure Network Watcher](/training/modules/intro-to-azure-network-watcher).
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
You use Logstash to flatten the JSON formatted flow logs to a flow tuple level.
storage_access_key => "VGhpcyBpcyBhIGZha2Uga2V5Lg==" container => "insights-logs-networksecuritygroupflowevent" codec => "json"
- # Refer https://docs.microsoft.com/azure/network-watcher/network-watcher-read-nsg-flow-logs
+ # Refer https://learn.microsoft.com/azure/network-watcher/network-watcher-read-nsg-flow-logs
# Typical numbers could be 21/9 or 12/2 depends on the nsg log file types file_head_bytes => 12 file_tail_bytes => 2
By integrating Network Watcher with ElasticSearch and Grafana, you now have a co
## Next steps - Learn more about using [Network Watcher](network-watcher-monitoring-overview.md).-
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
For further instructions on installing Elastic search, refer to [Installation in
storage_access_key => "VGhpcyBpcyBhIGZha2Uga2V5Lg==" container => "insights-logs-networksecuritygroupflowevent" codec => "json"
- # Refer https://docs.microsoft.com/azure/network-watcher/network-watcher-read-nsg-flow-logs
+ # Refer https://learn.microsoft.com/azure/network-watcher/network-watcher-read-nsg-flow-logs
# Typical numbers could be 21/9 or 12/2 depends on the nsg log file types file_head_bytes => 12 file_tail_bytes => 2
Learn how to visualize your NSG flow logs with Power BI by visiting [Visualize N
[4]: ./media/network-watcher-visualize-nsg-flow-logs-open-source-tools/figure4.png [5]: ./media/network-watcher-visualize-nsg-flow-logs-open-source-tools/figure5.png [6]: ./media/network-watcher-visualize-nsg-flow-logs-open-source-tools/figure6.png
-[7]: ./media/network-watcher-visualize-nsg-flow-logs-open-source-tools/figure7.png
+[7]: ./media/network-watcher-visualize-nsg-flow-logs-open-source-tools/figure7.png
object-anchors Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/best-practices.md
We recommend trying some of these steps to get the best results.
## Detection
-> [!VIDEO https://docs.microsoft.com/Shows/Docs-Mixed-Reality/Azure-Object-Anchors-Detection-and-Alignment-Best-Practices/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Docs-Mixed-Reality/Azure-Object-Anchors-Detection-and-Alignment-Best-Practices/player]
- The provided runtime SDK requires a user-provided search region to search for and detect the physical object(s). The search region could be a bounding box, a sphere, a view frustum, or any combination of them. To avoid a false detection,
open-datasets Dataset Boston Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-boston-safety.md
Sample not available for this platform/package combination.
``` # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import BostonSafety from datetime import datetime
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Chicago Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-chicago-safety.md
Sample not available for this platform/package combination.
``` # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import ChicagoSafety from datetime import datetime
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset New York City Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-new-york-city-safety.md
Sample not available for this platform/package combination.
``` # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import SanFranciscoSafety from datetime import datetime
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Oj Sales Simulated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-oj-sales-simulated.md
named_ds = registered_ds.as_named_input(ds_name)
``` # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
# Download or mount OJ Sales raw files Azure Machine Learning file datasets.
-# This works only for Linux based compute. See https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-create-register-datasets to learn more about datasets.
+# This works only for Linux based compute. See https://learn.microsoft.com/azure/machine-learning/service/how-to-create-register-datasets to learn more about datasets.
from azureml.opendatasets import OjSalesSimulated
open-datasets Dataset Public Holidays https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-public-holidays.md
Sample not available for this platform/package combination.
``` # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import PublicHolidays from datetime import datetime
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset San Francisco Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-san-francisco-safety.md
Sample not available for this platform/package combination.
``` # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import NycSafety from datetime import datetime
open-datasets Dataset Seattle Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-seattle-safety.md
Sample not available for this platform/package combination.
``` # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import SeattleSafety from datetime import datetime
open-datasets Dataset Taxi For Hire Vehicle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-taxi-for-hire-vehicle.md
Sample not available for this platform/package combination.
```python # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import NycTlcFhv from datetime import datetime
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Taxi Green https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-taxi-green.md
Sample not available for this platform/package combination.
```python # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import NycTlcGreen from datetime import datetime
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Taxi Yellow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-taxi-yellow.md
Sample not available for this platform/package combination.
```python # This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import NycTlcYellow from datetime import datetime
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
The table in this article provides information on the Peering Service connectivi
| [CMC Networks](https://www.cmcnetworks.net/products/microsoft-azure-peering-services.html) |Africa| | [Colt](https://www.colt.net/why-colt/strategic-alliances/microsoft-partnership/)|Europe, Asia| | [Converge ICT](https://www.convergeict.com/enterprise/microsoft-azure-peering-service-maps/) |Asia|
+| [Dimension Data](https://www.dimensiondata.com/en-gb/about-us/our-partners/microsoft/)|Africa |
| [DE-CIX](https://www.de-cix.net/)|Europe, North America |
-| [IIJ](https://www.iij.ad.jp/en/) | Japan |
+| [IIJ](https://www.iij.ad.jp/en/) |Japan |
| [Intercloud](https://intercloud.com/microsoft-saas-applications/)|Europe | | [Kordia](https://www.kordia.co.nz/cloudconnect) |Oceania | | [LINX](https://www.linx.net/services/microsoft-azure-peering/) |Europe|
-| [Liquid Telecom](https://liquidcloud.africa/keep-expanding-365-direct/) | Africa |
+| [Liquid Telecom](https://liquidcloud.africa/keep-expanding-365-direct/) |Africa |
| [Lumen Technologies](https://www.ctl.io/microsoft-azure-peering-services/) |North America, Europe, Asia| | [MainOne](https://www.mainone.net/connectivity-services/) |Africa| | [NAP Africa](https://www.napafrica.net/technical/microsoft-azure-peering-service/) |Africa|
-| [NTT Communications](https://www.ntt.com/en/services/network/software-defined-network.html) | Japan, Indonesia |
+| [NTT Communications](https://www.ntt.com/en/services/network/software-defined-network.html) |Japan, Indonesia |
| [PCCW](https://www.pccwglobal.com/en/enterprise/products/network/ep-global-internet-access) |Asia | | [Singtel](https://www.singtel.com/business/campaign/singnet-cloud-connect-microsoft-direct) |Asia | | [Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/wireline/ip-plus.html) |Europe| | [Telstra International](https://www.telstra.com.sg/en/products/global-networks/global-internet/global-internet-direct) |Asia, Europe |
+| [Vocusgroup NZ](https://www.vocus.co.nz/microsoftazuredirectpeering/) |Oceania |
+| [Vodacom](https://www.vodacom.com/index.php) |Africa |
> [!NOTE] >For more information about enlisting with the Peering Service Partner program, reach out to peeringservice@microsoft.com.
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-audit.md
Last updated 11/30/2021
# Audit logging in Azure Database for PostgreSQL - Flexible server Audit logging of database activities in Azure Database for PostgreSQL - Flexible server is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session and/or object audit logging.
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-advisor-recommendations.md
Last updated 11/16/2021
-# Azure Advisor for PostgreSQL - Flexible Server
+# Azure Advisor for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
Last updated 06/16/2021
# Backup and restore in Azure Database for PostgreSQL - Flexible Server Backups form an essential part of any business continuity strategy. They help protect data from accidental corruption or deletion.
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
Last updated 11/30/2021
# Overview of business continuity with Azure Database for PostgreSQL - Flexible Server **Business continuity** in Azure Database for PostgreSQL - Flexible Server refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In most cases, flexible server handles the disruptive events that might happen in the cloud environment and keeps your applications and business processes running. However, there are some events that can't be handled automatically, such as:
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
Last updated 11/30/2021
# Compute and Storage options in Azure Database for PostgreSQL - Flexible Server You can create an Azure Database for PostgreSQL server in one of three different pricing tiers: Burstable, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the PostgreSQL server level. A server can have one or many databases.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Last updated 11/30/2021
# PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server PostgreSQL provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a command. After being loaded in the database, extensions function like built-in features.
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-firewall-rules.md
Last updated 11/30/2021
# Firewall rules in Azure Database for PostgreSQL - Flexible Server When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options. The options are private access (virtual network integration) and public access (allowed IP addresses).
postgresql Concepts Intelligent Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-intelligent-tuning.md
Last updated 11/30/2021
# Perform intelligent tuning in Azure Database for PostgreSQL - Flexible Server **Applies to:** Azure Database for PostgreSQL - Flexible Server versions 11 and later.
postgresql Concepts Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logging.md
Last updated 11/30/2021
# Logs in Azure Database for PostgreSQL - Flexible Server Azure Database for PostgreSQL allows you to configure and access Postgres' standard logs. The logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. Logging information you can configure and access includes errors, query information, autovacuum records, connections, and checkpoints. (Access to transaction logs is not available).
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
Last updated 11/30/2021
# Scheduled maintenance in Azure Database for PostgreSQL - Flexible server Azure Database for PostgreSQL - Flexible server performs periodic maintenance to keep your managed database secure, stable, and up-to-date. During maintenance, the server gets new features, updates, and patches.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Last updated 11/30/2021
# Monitor metrics on Azure Database for PostgreSQL - Flexible Server Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for PostgreSQL provides various monitoring options to provide insight into the behavior of your server.
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
Last updated 11/30/2021
# Networking overview for Azure Database for PostgreSQL - Flexible Server This article describes connectivity and networking concepts for Azure Database for PostgreSQL - Flexible Server.
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Last updated 11/30/2021
# PgBouncer in Azure Database for PostgreSQL - Flexible Server Azure Database for PostgreSQL - Flexible Server offers [PgBouncer](https://github.com/pgbouncer/pgbouncer) as a built-in connection pooling solution. This is an optional service that can be enabled on a per-database server basis and is supported with both public and private access. PgBouncer runs in the same virtual machine as the Postgres database server. Postgres uses a process-based model for connections, which makes it expensive to maintain many idle connections. So, Postgres itself runs into resource constraints once the server runs more than a few thousand connections. The primary benefit of PgBouncer is to improve handling of idle and short-lived connections at the database server.
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-best-practices.md
Last updated 11/30/2021
# Best practices for Query Store - Flexible Server **Applies to:** Azure Database for PostgreSQL - Flexible Server versions 11, 12
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-scenarios.md
Last updated 11/30/2021
# Usage scenarios for Query Store - Flexible Server You can use Query Store in a wide variety of scenarios in which tracking and maintaining predictable workload performance is critical. Consider the following examples: - Identifying and tuning top expensive queries
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
Last updated 11/30/2021
# Monitor Performance with Query Store The Query Store feature in Azure Database for PostgreSQL provides a way to track query performance over time. Query Store simplifies performance-troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named **azure_sys** in the Azure Database for PostgreSQL instance.
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
Last updated 11/30/2021
# Security in Azure Database for PostgreSQL - Flexible Server Multiple layers of security are available to help protect the data on your Azure Database for PostgreSQL server. This article outlines those security options.
postgresql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-server-parameters.md
Last updated 11/30/2021
# Server parameters in Azure Database for PostgreSQL - Flexible Server Azure Database for PostgreSQL provides a subset of configurable parameters for each server. For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/13/config-setting.html). ## An overview of PostgreSQL parameters
-Azure Database for PostgreSQL server is pre-configured with optimal default values for each parameter on creation. Static parameters require a server restart and parameters that require superuser access cannot be configured by the user.
+Azure Database for PostgreSQL server is pre-configured with optimal default values for each parameter on creation. Static parameters require a server restart and parameters that require superuser access can't be configured by the user.
In order to review which parameters are available to view or to modify, we recommend going into the Azure portal, and to the Server Parameters page. You can also configure parameters on a per-user or per-database basis using `ALTER DATABASE` or `ALTER ROLE` commands.
In order to review which parameters are available to view or to modify, we recom
:::image type="content" source="./media/concepts-server-parameters/server-parameters.png" alt-text="Server parameters - portal":::
-Here is the list of some of the parameters:
+Here's the list of some of the parameters:
| Parameter Name | Description | |-|--|
-| **max_connections** | You can tune max_connections on Postgres Flexible Server, where it can be set to 5,000 connections. Please see the [limits documentation](concepts-limits.md) for more details. |
-| **shared_buffers** | The 'shared_buffers' setting changes depending on the selected SKU (SKU determines the memory available). General Purpose servers have 2GB shared_buffers for 2 vCores; Memory Optimized servers have 4GB shared_buffers for 2 vCores. The shared_buffers setting scales linearly (approximately) as vCores increase in a tier. |
-| **shared_preload_libraries** | This parameter is available for configuration with a predefined set of supported extensions. Note that we always load the `azure` extension (used for maintenance tasks), as well as the `pg_stat_statements` extension (you can use the pg_stat_statements.track parameter to control whether the extension is active). |
+| **max_connections** | You can tune max_connections on Postgres Flexible Server, where it can be set to 5,000 connections. See the [limits documentation](concepts-limits.md) for more details. |
+| **shared_buffers** | The 'shared_buffers' setting changes depending on the selected SKU (SKU determines the memory available). General Purpose servers have 2 GB shared_buffers for 2 vCores; Memory Optimized servers have 4 GB shared_buffers for 2 vCores. The shared_buffers setting scales linearly (approximately) as vCores increase in a tier. |
+| **shared_preload_libraries** | This parameter is available for configuration with a predefined set of supported extensions. We always load the `azure` extension (used for maintenance tasks), and the `pg_stat_statements` extension (you can use the pg_stat_statements.track parameter to control whether the extension is active). |
| **connection_throttling** | You can enable or disable temporary connection throttling per IP for too many invalid password login failures. |
- | **work_mem** | This parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. If your workload has few queries with a lot of complex sorting and you have a lot of available memory, increasing this parameter may allow Postgres to do larger scans in-memory vs. spilling to disk, which will be faster. Be careful however, as one complex query may have number of sort, hash operations running concurrently. Each one of those operations will use as much memory as it value allows before it starts writing to disk based temporary files. Therefore on a relatively busy system total memory usage will be many times of individual work_mem parameter. If you do decide to tune this value globally, you can use formula Total RAM * 0.25 / max_connections as initial value. Azure Database for PostgreSQL - Flexible Server supports range of 4096-2097152 kilobytes for this parameter.|
+ | **work_mem** | This parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. If your workload has a few queries with a lot of complex sorting and you have a lot of available memory, increasing this parameter may allow Postgres to do larger scans in-memory vs. spilling to disk, which will be faster. Be careful, however, as one complex query may have a number of sort and hash operations running concurrently. Each of those operations will use as much memory as its value allows before it starts writing to disk-based temporary files. Therefore, on a relatively busy system, total memory usage will be many times the individual work_mem setting. If you do decide to tune this value globally, you can use the formula Total RAM * 0.25 / max_connections as an initial value. Azure Database for PostgreSQL - Flexible Server supports a range of 4096-2097152 kilobytes for this parameter.|
| **effective_cache_size** |The effective_cache_size parameter estimates how much memory is available for disk caching by the operating system and within the database itself. The PostgreSQL query planner decides whether it's fixed in RAM or not. Index scans are most likely to be used against higher values; otherwise, sequential scans will be used if the value is low. Recommendations are to set effective_cache_size at 50% of the machine's total RAM. | | **maintenance_work_mem** | The maintenance_work_mem parameter basically provides the maximum amount of memory to be used by maintenance operations like vacuum, create index, and alter table add foreign key operations. Default value for that parameter is 64 KB. It's recommended to set this value higher than work_mem; this can improve performance for vacuuming. | | **effective_io_concurrency** | Sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. |
- |**require_secure_transport** | If your application does not support SSL connectivity to the server, you can optionally disable secured transport from your client by turning `OFF` this parameter value. |
+ |**require_secure_transport** | If your application doesn't support SSL connectivity to the server, you can optionally disable secured transport from your client by turning `OFF` this parameter value. |
|**log_connections** | This parameter may be read-only, as on Azure Database for PostgreSQL - Flexible Server all connections are logged and intercepted to make sure connections are coming in from right sources for security reasons. | >[!NOTE]
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-servers.md
Last updated 11/30/2021
# Servers - Azure Database for PostgreSQL - Flexible Server This article provides considerations and guidelines for working with Azure Database for PostgreSQL - Flexible Server.
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
Last updated 11/30/2021
# Quickstart: Connect and query with Azure CLI with Azure Database for PostgreSQL - Flexible Server This quickstart demonstrates how to connect to an Azure Database for PostgreSQL Flexible Server using Azure CLI with ```az postgres flexible-server connect``` and execute a single query or a SQL file with the ```az postgres flexible-server execute``` command. This command allows you to test connectivity to your database server and run queries. You can also run multiple queries using the interactive mode.
This quickstart demonstrates how to connect to an Azure Database for PostgreSQL
## Prerequisites - An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/). - Install [Azure CLI](/cli/azure/install-azure-cli) latest version (2.20.0 or above)-- Login using Azure CLI with ```az login``` command -- Turn on parameter persistence with ```az config param-persist on```. Parameter persistence will help you use local context without having to repeat a lot of arguments like resource group or location.
+- Log in using Azure CLI with ```az login``` command
+- Turn on parameter persistence with ```az config param-persist on```. Parameter persistence will help you use local context without having to repeat numerous arguments like resource group or location.
## Create a PostgreSQL Flexible Server
The first thing we'll create is a managed PostgreSQL server. In [Azure Cloud She
```azurecli az postgres flexible-server create --public-access <your-ip-address> ```
-You can provide additional arguments for this command to customize it. See all arguments for [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create).
+You can provide more arguments for this command to customize it. See all arguments for [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create).
## View all the arguments You can view all the arguments for this command with ```--help``` argument.
az postgres flexible-server connect -n <servername> -u <username> -p "<password>
```azurecli az postgres flexible-server connect -n postgresdemoserver -u dbuser -p "dbpassword" -d postgres ```
-You will see the output if the connection was successful.
+You'll see the output if the connection was successful.
```output Command group 'postgres flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus Successfully connected to postgresdemoserver.
If the connection failed, try these solutions:
- Check if port 5432 is open on your client machine. - if your server administrator user name and password are correct - if you have configured firewall rule for your client machine-- if you have configured your server with private access in virtual networking, make sure your client machine is in the same virtual network.
+- if you've configured your server with private access in virtual networking, make sure your client machine is in the same virtual network.
## Run multiple queries using interactive mode
-You can run multiple queries using the **interactive** mode . To enable interactive mode, run the following command
+You can run multiple queries using the **interactive** mode. To enable interactive mode, run the following command
```azurecli az postgres flexible-server connect -n <servername> -u <username> -p "<password>" -d <databasename>
az postgres flexible-server connect -n <servername> -u <username> -p "<password>
az postgres flexible-server connect -n postgresdemoserver -u dbuser -p "dbpassword" -d flexibleserverdb --interactive ```
-You will see the **psql** shell experience as shown below:
+You'll see the **psql** shell experience as shown below:
```bash Command group 'postgres flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Your preference of are now saved to local context. To learn more, type in `az l
az postgres flexible-server execute -n postgresdemoserver -u dbuser -p "dbpassword" -d flexibleserverdb -q "select * from table1;" --output table ```
-You will see an output as shown below:
+You'll see an output as shown below:
```output Command group 'postgres flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
az postgres flexible-server execute -n <server-name> -u <username> -p "<password
az postgres flexible-server execute -n postgresdemoserver -u dbuser -p "dbpassword" -d flexibleserverdb -f "./test.sql" ```
-You will see an output as shown below:
+You'll see an output as shown below:
```output Command group 'postgres flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
Last updated 11/30/2021
# Quickstart: Use Java and JDBC with Azure Database for PostgreSQL Flexible Server This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for PostgreSQL Flexible Server](./index.yml).
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-python.md
Last updated 11/30/2021
# Quickstart: Use Python to connect and query data in Azure Database for PostgreSQL - Flexible Server In this quickstart, you connect to an Azure Database for PostgreSQL - Flexible Server by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
The following code example connects to your Azure Database for PostgreSQL - Flex
import psycopg2 # Update connection string information host = "<server-name>" dbname = "<database-name>" user = "<admin-username>"
password = "<admin-password>"
sslmode = "require" # Construct connection string conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode) conn = psycopg2.connect(conn_string) print("Connection established") cursor = conn.cursor() # Drop previous table of same name if one exists cursor.execute("DROP TABLE IF EXISTS inventory;") print("Finished dropping table (if existed)") # Create a table cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);") print("Finished creating table") # Insert some data into the table cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150)) cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154)) cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100)) print("Inserted 3 rows of data") # Clean up conn.commit() cursor.close() conn.close()
The following code example connects to your Azure Database for PostgreSQL - Flex
import psycopg2 # Update connection string information host = "<server-name>" dbname = "<database-name>" user = "<admin-username>"
password = "<admin-password>"
sslmode = "require" # Construct connection string conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode) conn = psycopg2.connect(conn_string) print("Connection established") cursor = conn.cursor() # Fetch all rows from table cursor.execute("SELECT * FROM inventory;") rows = cursor.fetchall() # Print all rows for row in rows: print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2]))) # Cleanup conn.commit() cursor.close() conn.close()
The following code example connects to your Azure Database for PostgreSQL - Flex
import psycopg2 # Update connection string information host = "<server-name>" dbname = "<database-name>" user = "<admin-username>"
password = "<admin-password>"
sslmode = "require" # Construct connection string conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode) conn = psycopg2.connect(conn_string) print("Connection established") cursor = conn.cursor() # Update a data row in the table cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (200, "banana")) print("Updated 1 row of data") # Cleanup conn.commit() cursor.close() conn.close()
The following code example connects to your Azure Database for PostgreSQL - Flex
import psycopg2 # Update connection string information host = "<server-name>" dbname = "<database-name>" user = "<admin-username>"
password = "<admin-password>"
sslmode = "require" # Construct connection string conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode) conn = psycopg2.connect(conn_string) print("Connection established") cursor = conn.cursor() # Delete data row from table cursor.execute("DELETE FROM inventory WHERE name = %s;", ("orange",)) print("Deleted 1 row of data") # Cleanup conn.commit() cursor.close() conn.close()
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-query-guide.md
Last updated 11/30/2021
# Connect and query overview for Azure Database for PostgreSQL - Flexible Server The following document includes links to examples showing how to connect and query with Azure Database for PostgreSQL - Flexible Server. This guide also includes TLS recommendations and extensions that you can use to connect to the server in the supported languages below.
postgresql How To Connect Scram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-scram.md
Last updated 11/30/2021
# SCRAM authentication in Azure Database for PostgreSQL - Flexible Server Salted Challenge Response Authentication Mechanism (SCRAM) is a password-based mutual authentication protocol. It is a challenge-response scheme that adds several levels of security and prevents password sniffing on untrusted connections. SCRAM supports storing passwords on the server in a cryptographically hashed form which provides advanced security.
postgresql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-tls-ssl.md
Last updated 11/30/2021
# Encrypted connectivity using Transport Layer Security in Azure Database for PostgreSQL - Flexible Server Azure Database for PostgreSQL - Flexible Server supports connecting your client applications to the PostgreSQL service using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
postgresql How To Deploy On Azure Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-deploy-on-azure-free-account.md
# Use an Azure free account to try Azure Database for PostgreSQL - Flexible Server for free Azure Database for PostgreSQL - Flexible Server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. With an Azure free account, you can use Flexible Server for **free for 12 months** with **monthly limits** of up to: - **750 hours** of a **Burstable B1MS** instance, enough hours to run a database instance continuously each month.
postgresql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-maintenance-portal.md
Last updated 11/30/2021
# Manage scheduled maintenance settings for Azure Database for PostgreSQL - Flexible Server You can specify maintenance options for each flexible server in your Azure subscription. Options include the maintenance schedule and notification settings for upcoming and finished maintenance events.
postgresql How To Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-firewall-cli.md
# Create and manage Azure Database for PostgreSQL - Flexible Server firewall rules using the Azure CLI Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
postgresql How To Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-firewall-portal.md
Last updated 11/30/2021
# Create and manage firewall rules for Azure Database for PostgreSQL - Flexible Server using the Azure portal Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-cli.md
Last updated 11/30/2021
# Manage an Azure Database for PostgreSQL - Flexible Server by using the Azure CLI This article shows you how to manage your flexible server deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
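As a rough sketch of what those management tasks look like in the Azure CLI (the resource group and server names below are placeholders, not values from this article):

```azurecli-interactive
# Scale compute to a different SKU and tier
az postgres flexible-server update --resource-group myresourcegroup --name mydemoserver --sku-name Standard_D4s_v3 --tier GeneralPurpose

# Reset the admin password
az postgres flexible-server update --resource-group myresourcegroup --name mydemoserver --admin-password "<new-password>"

# View server details
az postgres flexible-server show --resource-group myresourcegroup --name mydemoserver
```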
postgresql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-portal.md
# Manage an Azure Database for PostgreSQL - Flexible Server using the Azure portal This article shows you how to manage your Azure Database for PostgreSQL - Flexible Server. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
postgresql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-cli.md
Last updated 11/30/2021
# Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure CLI Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
postgresql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-portal.md
Last updated 11/30/2021
# Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure portal Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-cli.md
Last updated 11/30/2021
# Restart an Azure Database for PostgreSQL - Flexible Server This article shows you how to restart, start, and stop a flexible server by using the Azure CLI.
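For example, a restart is a single CLI call (placeholder resource group and server names):

```azurecli-interactive
# Restart the flexible server
az postgres flexible-server restart --resource-group myresourcegroup --name mydemoserver
```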
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md
Last updated 11/30/2021
# Restart Azure Database for PostgreSQL - Flexible Server This article provides a step-by-step procedure for restarting your flexible server. A restart is useful for applying static parameter changes that require a database server restart. The procedure is the same for servers configured with zone-redundant high availability.
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
Last updated 11/30/2021
# Scale operations in Flexible Server This article provides steps to perform scaling operations for compute and storage. You can change your compute tier among the Burstable, General Purpose, and Memory Optimized SKUs, including choosing the number of vCores that suits your application. You can also scale up your storage. Expected IOPS are shown based on the compute tier, vCores, and storage capacity, along with a cost estimate based on your selection.
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
Last updated 11/30/2021
# Stop/Start Azure Database for PostgreSQL - Flexible Server using Azure CLI This article shows you how to stop, start, and restart a flexible server by using the Azure CLI.
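For instance, stopping a server and starting it again are each one command (placeholder names; a sketch, not the article's exact walkthrough):

```azurecli-interactive
# Stop the flexible server (compute billing pauses while the server is stopped)
az postgres flexible-server stop --resource-group myresourcegroup --name mydemoserver

# Start it again when needed
az postgres flexible-server start --resource-group myresourcegroup --name mydemoserver
```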
postgresql How To Stop Start Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-portal.md
Last updated 11/30/2021
# Stop/Start an Azure Database for PostgreSQL - Flexible Server using Azure portal This article provides step-by-step instructions to stop and start a flexible server.
postgresql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-cli-errors.md
Last updated 11/30/2021
# Troubleshoot Azure Database for PostgreSQL Flexible Server CLI errors This article helps you troubleshoot common issues with the Azure CLI when you use PostgreSQL Flexible Server.
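Two general first steps when a command fails, sketched below with a hypothetical create call, are to confirm that your CLI version is current and to rerun the failing command with the global `--debug` flag to capture the full request and response trace:

```azurecli-interactive
# Check the installed CLI version and upgrade if it's out of date
az --version
az upgrade

# Rerun the failing command with --debug to capture detailed logs
az postgres flexible-server create --resource-group myresourcegroup --name mydemoserver --debug
```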
postgresql Howto Alert On Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-alert-on-metrics.md
Last updated 11/30/2021
# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services.
postgresql Howto Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-and-access-logs.md
Last updated 11/30/2021
# Configure and Access Logs in Azure Database for PostgreSQL - Flexible Server PostgreSQL logs are available on every node of a flexible server. You can ship logs to a storage server, or to an analytics service. The logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance.
postgresql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-cli.md
# Customize server parameters for Azure Database for PostgreSQL - Flexible Server using Azure CLI You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server by using the Azure CLI. A subset of engine parameters is exposed at the server level and can be modified.
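For example (a sketch with placeholder resource names; `log_min_duration_statement` is just one of the modifiable parameters):

```azurecli-interactive
# List all server parameters
az postgres flexible-server parameter list --resource-group myresourcegroup --server-name mydemoserver --output table

# Show a single parameter
az postgres flexible-server parameter show --resource-group myresourcegroup --server-name mydemoserver --name log_min_duration_statement

# Update its value (milliseconds in this case)
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name log_min_duration_statement --value 1000
```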
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-portal.md
Last updated 11/30/2021
# Configure server parameters in Azure Database for PostgreSQL - Flexible Server via the Azure portal You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server through the Azure portal.
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
Last updated 11/30/2021
# Connect Azure Database for PostgreSQL Flexible Server with the private access connectivity method Azure Database for PostgreSQL Flexible Server is a managed service that you can use to run, manage, and scale highly available PostgreSQL servers in the cloud. This quickstart shows you how to create a flexible server in a virtual network by using the Azure portal.
postgresql Quickstart Create Server Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-arm-template.md
Last updated 05/12/2022
# Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - Flexible Server Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use an Azure Resource Manager template (ARM template) to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server.
Follow these steps to verify if your server was created in Azure.
# [Azure portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for PostgreSQL Flexible Servers**. 1. In the database list, select your new server to view the **Overview** page to manage the server. # [PowerShell](#tab/PowerShell) You'll have to enter the name of the new server to view the details of your Azure Database for PostgreSQL Flexible server.
Write-Host "Press [ENTER] to continue..."
# [CLI](#tab/CLI) You'll have to enter the name and the resource group of the new server to view details about your Azure Database for PostgreSQL Flexible Server.
To delete the resource group:
# [Portal](#tab/azure-portal) In the [portal](https://portal.azure.com), select the resource group you want to delete.
In the [portal](https://portal.azure.com), select the resource group you want to
# [PowerShell](#tab/azure-powershell) ```azurepowershell-interactive Remove-AzResourceGroup -Name ExampleResourceGroup
Remove-AzResourceGroup -Name ExampleResourceGroup
# [Azure CLI](#tab/azure-cli) ```azurecli-interactive az group delete --name ExampleResourceGroup
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
Last updated 05/12/2022
# Quickstart: Use a Bicep file to create an Azure Database for PostgreSQL - Flexible Server Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use [Bicep](../../azure-resource-manager/bicep/overview.md) to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server.
Follow these steps to verify if your server was created in Azure.
# [Azure portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for PostgreSQL Flexible Servers**. 1. In the database list, select your new server to view the **Overview** page to manage the server. # [PowerShell](#tab/PowerShell) You'll have to enter the name of the new server to view the details of your Azure Database for PostgreSQL Flexible server.
Write-Host "Press [ENTER] to continue..."
# [CLI](#tab/CLI) You'll have to enter the name and the resource group of the new server to view details about your Azure Database for PostgreSQL Flexible Server.
To delete the resource group:
# [Portal](#tab/azure-portal) In the [portal](https://portal.azure.com), select the resource group you want to delete.
In the [portal](https://portal.azure.com), select the resource group you want to
# [PowerShell](#tab/azure-powershell) ```azurepowershell-interactive Remove-AzResourceGroup -Name ExampleResourceGroup
Remove-AzResourceGroup -Name ExampleResourceGroup
# [Azure CLI](#tab/azure-cli) ```azurecli-interactive az group delete --name ExampleResourceGroup
postgresql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-cli.md
# Quickstart: Create an Azure Database for PostgreSQL Flexible Server using Azure CLI This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create an Azure Database for PostgreSQL Flexible Server in five minutes. If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
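As a minimal sketch of the create step (resource names, region, admin credentials, and SKU below are placeholders rather than the article's values):

```azurecli-interactive
# Create a resource group, then a flexible server that allows public access only from your client IP
az group create --name myresourcegroup --location eastus

az postgres flexible-server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --location eastus \
  --admin-user myadmin \
  --admin-password "<your-password>" \
  --tier Burstable \
  --sku-name Standard_B1ms \
  --public-access <your-client-ip>
```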
postgresql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-portal.md
Last updated 12/01/2021
# Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This Quickstart shows you how to create an Azure Database for PostgreSQL - Flexible Server in about five minutes using the Azure portal.
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
# Tutorial: Deploy Django app on AKS with Azure Database for PostgreSQL - Flexible Server In this quickstart, you deploy a Django application on an Azure Kubernetes Service (AKS) cluster with Azure Database for PostgreSQL - Flexible Server using the Azure CLI. **[AKS](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters. **[Azure Database for PostgreSQL - Flexible Server](overview.md)** is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. > [!NOTE]
-> - This quickstart assumes a basic understanding of Kubernetes concepts, Django and PostgreSQL.
+> This quickstart assumes a basic understanding of Kubernetes concepts, Django and PostgreSQL.
## Pre-requisites++ [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - Launch [Azure Cloud Shell](https://shell.azure.com) in new browser window. You can [install Azure CLI](/cli/azure/install-azure-cli#install) on your local machine too. If you're using a local install, login with Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal.
aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
``` ## Create an Azure Database for PostgreSQL - Flexible Server+ Create a flexible server with the [az postgreSQL flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create) command. The following command creates a server using service defaults and values from your Azure CLI's local context: ```azurecli-interactive
The server created has the below attributes:
- Using the public-access argument allows you to create a server with public access for any client that has the correct username and password. - Since the command uses the local context, it creates the server in the resource group ```django-project``` and in the region ```eastus```. ## Build your Django docker image Create a new [Django application](https://docs.djangoproject.com/en/3.1/intro/) or use your existing Django project. Make sure your code is in this folder structure.
-```
+```python
└───my-djangoapp └───views.py └───models.py
Create a new [Django application](https://docs.djangoproject.com/en/3.1/intro/)
└─── Dockerfile └─── requirements.txt └─── manage.py
-
```+ Update ```ALLOWED_HOSTS``` in ```settings.py``` to make sure the Django application uses the external IP that gets assigned to kubernetes app. ```python
DATABASES={
``` ### Generate a requirements.txt file+ Create a ```requirements.txt``` file to list out the dependencies for the Django Application. Here is an example ```requirements.txt``` file. You can use [``` pip freeze > requirements.txt```](https://pip.pypa.io/en/stable/reference/pip_freeze/) to generate a requirements.txt file for your existing application. ``` text
pytz==2020.4
```

### Create a Dockerfile

Create a new file named ```Dockerfile``` and copy the code snippet below. This Dockerfile sets up Python 3.8 and installs all the requirements listed in the requirements.txt file.

```docker
# Use the official Python image from the Docker Hub
FROM python:3.8.2

# Make a new directory to put our code in.
RUN mkdir /code

# Change the working directory.
WORKDIR /code

# Copy to code folder
COPY . /code/

# Install the requirements.
RUN pip install -r requirements.txt

# Run the application:
CMD python manage.py runserver 0.0.0.0:8000
```

### Build your image
-Make sure you're in the directory ```my-django-app``` in a terminal using the ```cd``` command. Run the following command to build your bulletin board image:
-``` bash
+Make sure you're in the directory ```my-django-app``` in a terminal using the ```cd``` command. Run the following command to build your bulletin board image:
+```bash
docker build --tag myblog:latest .- ``` Deploy your image to [Docker hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container registry](../../container-registry/container-registry-get-started-azure-cli.md). > [!IMPORTANT]
->If you are using Azure container registry (ACR), then run the ```az aks update``` command to attach ACR account with the AKS cluster.
+> If you're using Azure Container Registry (ACR), run the ```az aks update``` command to attach the ACR account to the AKS cluster.
>
->```azurecli-interactive
->az aks update -n djangoappcluster -g django-project --attach-acr <your-acr-name>
+> ```azurecli-interactive
+> az aks update -n djangoappcluster -g django-project --attach-acr <your-acr-name>
> ```
->
## Create Kubernetes manifest file A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named ```djangoapp.yaml``` and copy in the following YAML definition.
->[!IMPORTANT]
-> - Update ```env``` section below with your ```SERVERNAME```, ```YOUR-DATABASE-USERNAME```, ```YOUR-DATABASE-PASSWORD``` of your postgres flexible server.
+> [!IMPORTANT]
+> Update ```env``` section below with your ```SERVERNAME```, ```YOUR-DATABASE-USERNAME```, ```YOUR-DATABASE-PASSWORD``` of your postgres flexible server.
```yaml apiVersion: apps/v1
spec:
``` ## Deploy Django to AKS cluster+ Deploy the application using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest: ```console
django-app LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
Now open a web browser to the external IP address of your service (http://\<service-external-ip-address\>) and view the Django application.
->[!NOTE]
+> [!NOTE]
> - Currently the Django site is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](../../aks/ingress-own-tls.md).
-> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When http routing is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
+> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When http routing is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
## Run database migrations
-For any django application, you would need to run database migration or collect static files. You can run these django shell commands using ```$ kubectl exec <pod-name> -- [COMMAND]```. Before running the command you need to find the pod name using ```kubectl get pods```.
+For any django application, you would need to run database migration or collect static files. You can run these django shell commands using `$ kubectl exec <pod-name> -- [COMMAND]`. Before running the command you need to find the pod name using `kubectl get pods`.
```bash $ kubectl get pods ```
-You will see an output like this
+You will see an output like this:
+ ```output NAME READY STATUS RESTARTS AGE django-app-5d9cd6cd8-l6x4b 1/1 Running 0 2m
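Once you have the pod name from the output above, the migration and static-file commands follow the `kubectl exec` pattern described earlier. For example (a sketch that reuses the pod name from the sample output; yours will differ):

```bash
# Run Django database migrations inside the running pod
kubectl exec django-app-5d9cd6cd8-l6x4b -- python manage.py migrate

# Collect static files inside the pod
kubectl exec django-app-5d9cd6cd8-l6x4b -- python manage.py collectstatic --noinput
```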
postgresql Tutorial Django App Service Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-app-service-postgres.md
# Tutorial: Deploy Django app with App Service and Azure Database for PostgreSQL - Flexible Server
-In this tutorial you will learn how to deploy a Django application in Azure using App Services and Azure Database for PostgreSQL - Flexible Server in a virtual network.
+In this tutorial you'll learn how to deploy a Django application in Azure using App Services and Azure Database for PostgreSQL - Flexible Server in a virtual network.
## Prerequisites
If you don't have an Azure subscription, create a [free](https://azure.microsoft
This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-You'll need to login to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
+You'll need to log in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
```azurecli az login
If you have multiple subscriptions, choose the appropriate subscription in which
```azurecli az account set --subscription <subscription id> ```+ ## Clone or download the sample app # [Git clone](#tab/clone) - Clone the sample repository:
-```terminal
+```console
git clone https://github.com/Azure-Samples/djangoapp ``` Then go into that folder:
-```terminal
+```console
cd djangoapp ``` # [Download](#tab/download) - Visit [https://github.com/Azure-Samples/djangoapp](https://github.com/Azure-Samples/djangoapp), select **Clone**, and then select **Download ZIP**. Unpack the ZIP file into a folder named *djangoapp*.
These changes are specific to configuring Django to run in any production enviro
## Create a PostgreSQL Flexible Server in a new virtual network Create a private flexible server and a database inside a virtual network (VNET) using the following command:+ ```azurecli # Create Flexible server in a VNET - az postgres flexible-server create --resource-group myresourcegroup --location westus2- ``` This command performs the following actions, which may take a few minutes: - Create the resource group if it doesn't already exist.-- Generates a server name if it is not provided.
+- Generates a server name if it isn't provided.
- Create a new virtual network for your new postgreSQL server. **Make a note of virtual network name and subnet name** created for your server since you need to add the web app to the same virtual network.-- Creates admin username , password for your server if not provided. **Make a note of the username and password** to use in the next step.
+- Creates admin username, password for your server if not provided. **Make a note of the username and password** to use in the next step.
- Create a database ```postgres``` that can be used for development. You can run [**psql** to connect to the database](quickstart-create-server-portal.md#connect-to-the-postgresql-database-using-psql) to create a different database. > [!NOTE]
-> Make a note of your password that will be generate for you if not provided. If you forget the password you would have to reset the password using ``` az postgres flexible-server update``` command
-
+> Make a note of your password that will be generated for you if not provided. If you forget the password, you would have to reset it by using the `az postgres flexible-server update` command.
## Deploy the code to Azure App Service In this section, you create an App Service app to host the code, connect the app to the Postgres database, and then deploy your code to that host. ### Create the App Service web app in a virtual network In the terminal, make sure you're in the repository root (`djangoapp`) that contains the app code.
In the terminal, make sure you're in the repository root (`djangoapp`) that cont
Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command: ```azurecli- # Create a web app az webapp up --resource-group myresourcegroup --location westus2 --plan DjangoPostgres-tutorial-plan --sku B1 --name <app-name> # Enable VNET integration for web app. # Replace <vnet-name> and <subnet-name> with the virtual network and subnet name that the flexible server is using. - az webapp vnet-integration add -g myresourcegroup -n mywebapp --vnet <vnet-name> --subnet <subnet-name> # Configure database information as environment variables # Use the postgres server name , database name , username , password for the database created in the previous steps - az webapp config appsettings set --settings DJANGO_ENV="production" DBHOST="<postgres-server-name>.postgres.database.azure.com" DBNAME="postgres" DBUSER="<username>" DBPASS="<password>" ``` - For the `--location` argument, use the same location as you did for the database in the previous section.
az webapp config appsettings set --settings DJANGO_ENV="production" DBHOST="<pos
- Enable default logging for the app, if not already enabled. - Upload the repository using ZIP deployment with build automation enabled. - **az webapp vnet-integration** command adds the web app in the same virtual network as the postgres server.-- The app code expects to find database information in a number of environment variables. To set environment variables in App Service, you create "app settings" with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
+- The app code expects to find database information in many environment variables. To set environment variables in App Service, you create "app settings" with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
> [!TIP] > Many Azure CLI commands cache common parameters, such as the name of the resource group and App Service plan, into the file *.azure/config*. As a result, you don't need to specify all the same parameter with later commands. For example, to redeploy the app after making changes, you can just run `az webapp up` again without any parameters.
Django database migrations ensure that the schema in the PostgreSQL on Azure dat
1. Open an SSH session in the browser by navigating to *https://\<app-name>.scm.azurewebsites.net/webssh/host* and sign in with your Azure account credentials (not the database server credentials).
-1. In the SSH session, run the following commands (you can paste commands using **Ctrl**+**Shift**+**V**):
+2. In the SSH session, run the following commands (you can paste commands using **Ctrl**+**Shift**+**V**):
```bash cd site/wwwroot
Django database migrations ensure that the schema in the PostgreSQL on Azure dat
python manage.py createsuperuser ```
-1. The `createsuperuser` command prompts you for superuser credentials. For the purposes of this tutorial, use the default username `root`, press **Enter** for the email address to leave it blank, and enter `postgres1` for the password.
+3. The `createsuperuser` command prompts you for superuser credentials. For the purposes of this tutorial, use the default username `root`, press **Enter** for the email address to leave it blank, and enter `postgres1` for the password.
### Create a poll question in the app
-1. In a browser, open the URL *http:\//\<app-name>.azurewebsites.net*. The app should display the message "No polls are available" because there are no specific polls yet in the database.
+4. In a browser, open the URL *http:\//\<app-name>.azurewebsites.net*. The app should display the message "No polls are available" because there are no specific polls yet in the database.
-1. Browse to *http:\//\<app-name>.azurewebsites.net/admin*. Sign in using superuser credentials from the previous section (`root` and `postgres1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
+5. Browse to *http:\//\<app-name>.azurewebsites.net/admin*. Sign in using superuser credentials from the previous section (`root` and `postgres1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
-1. Browse again to *http:\//\<app-name>.azurewebsites.net/* to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
+6. Browse again to *http:\//\<app-name>.azurewebsites.net/* to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
**Congratulations!** You're running a Python Django web app in Azure App Service for Linux, with an active Postgres database.
In a terminal window, run the following commands. Be sure to follow the prompts
```bash
# Configure the Python virtual environment
python3 -m venv venv
source venv/bin/activate

# Install packages
pip install -r requirements.txt

# Run Django migrations
python manage.py migrate

# Create Django superuser (follow prompts)
python manage.py createsuperuser

# Run the dev server
python manage.py runserver
```

Once the web app is fully loaded, the Django development server provides the local app URL in the message, "Starting development server at http://127.0.0.1:8000/. Quit the server with CTRL-BREAK".
Test the app locally with the following steps:
1. Go to *http:\//localhost:8000* in a browser, which should display the message "No polls are available".
-1. Go to *http:\//localhost:8000/admin* and sign in using the admin user you created previously. Under **Polls**, again select **Add** next to **Questions** and create a poll question with some choices.
+2. Go to *http:\//localhost:8000/admin* and sign in using the admin user you created previously. Under **Polls**, again select **Add** next to **Questions** and create a poll question with some choices.
-1. Go to *http:\//localhost:8000* again and answer the question to test the app.
+3. Go to *http:\//localhost:8000* again and answer the question to test the app.
-1. Stop the Django server by pressing **Ctrl**+**C**.
+4. Stop the Django server by pressing **Ctrl**+**C**.
When running locally, the app is using a local Sqlite3 database and doesn't interfere with your production database. You can also use a local PostgreSQL database, if desired, to better simulate your production environment. -- ### Update the app In `polls/models.py`, locate the line that begins with `choice_text` and change the `max_length` parameter to 100:
In `polls/models.py`, locate the line that begins with `choice_text` and change
```python # Find this line of code and set max_length to 100 instead of 200 choice_text = models.CharField(max_length=100) ``` Because you changed the data model, create a new Django migration and migrate the database:
-```
+```python
python manage.py makemigrations python manage.py migrate ```
az webapp up
This command uses the parameters cached in the *.azure/config* file. Because App Service detects that the app already exists, it just redeploys the code. -- ### Rerun migrations in Azure Because you made changes to the data model, you need to rerun database migrations in App Service.
cd site/wwwroot
# Activate default virtual environment in App Service container source /antenv/bin/activate # Run database migrations python manage.py migrate ```
By default, the portal shows your app's **Overview** page, which provides a gene
:::image type="content" source="./media/tutorial-django-app-service-postgres/manage-django-app-in-app-services-in-the-azure-portal.png" alt-text="Manage your Python Django app in the Overview page in the Azure portal"::: - ## Clean up resources If you'd like to keep the app or continue to the next tutorial, skip ahead to [Next steps](#next-steps). Otherwise, to avoid incurring ongoing charges you can delete the resource group create for this tutorial:
postgresql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-webapp-server-vnet.md
# Tutorial: Create an Azure Database for PostgreSQL - Flexible Server with App Services Web App in Virtual network This tutorial shows you how to create an Azure App Service web app with Azure Database for PostgreSQL - Flexible Server inside a [Virtual network](../../virtual-network/virtual-networks-overview.md).
postgresql Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/videos.md
This page provides video content for learning about Azure Database for PostgreSQ
## Overview: Azure Database for PostgreSQL and MySQL
->[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T147/player]
+>[!VIDEO https://learn.microsoft.com/Events/Connect/2017/T147/player]
[Open in Channel 9](/Events/Connect/2017/T147) Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to get a quick overview of the advantages of using the service, and see some of the capabilities in action.
Azure Database for PostgreSQL and Azure Database for MySQL are managed services
## Deep dive on managed service capabilities for MySQL and PostgreSQL
->[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T148/player]
+>[!VIDEO https://learn.microsoft.com/Events/Connect/2017/T148/player]
[Open in Channel 9](/Events/Connect/2017/T148) Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and the capabilities of a fully managed service. Tune in to get a deep dive on how these services work: how we ensure high availability and fast scaling (within seconds), so you can meet your customers' needs. You'll also learn about some of the underlying investments in security and worldwide availability. ## Develop an intelligent analytics app with PostgreSQL
->[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T149/player]
+>[!VIDEO https://learn.microsoft.com/Events/Connect/2017/T149/player]
[Open in Channel 9](/Events/Connect/2017/T149) Azure Database for PostgreSQL brings together community edition database engine and capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to see in action how easy it is to create new experiences like adding Cognitive Services to your apps by virtue of being on Azure.
private-link Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-overview.md
For SLA, see [SLA for Azure Private Link](https://azure.microsoft.com/support/le
- [Quickstart: Create a Private Endpoint using Azure portal](create-private-endpoint-portal.md) - [Quickstart: Create a Private Link service by using the Azure portal](create-private-link-service-portal.md)-- [Learn module: Introduction to Azure Private Link](/learn/modules/introduction-azure-private-link/)
+- [Learn module: Introduction to Azure Private Link](/training/modules/introduction-azure-private-link/)
public-multi-access-edge-compute-mec Tutorial Create Vm Using Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/tutorial-create-vm-using-python-sdk.md
In this tutorial, you learn how to:
print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region") # For details on the previous code, see Example: Use the Azure libraries to provision a resource group
- # at https://docs.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
+ # at https://learn.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
# Step 2: Provision a virtual network
purview Catalog Private Link Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-troubleshoot.md
This guide summarizes known limitations related to using private endpoints for M
- We currently don't support ingestion private endpoints that work with your AWS sources. - Scanning Azure Multiple Sources using self-hosted integration runtime isn't supported. - Using Azure integration runtime to scan data sources behind private endpoint isn't supported.-- The ingestion private endpoints can be created via the Microsoft Purview governance portal experience described in the preceding steps. They can't be created from the Private Link Center.
+- The ingestion private endpoints can be created via the Microsoft Purview governance portal experience described in the steps [here](catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-microsoft-purview-accounts). They can't be created from the Private Link Center.
- Creating a DNS record for ingestion private endpoints inside existing Azure DNS Zones, while the Azure Private DNS Zones are located in a different subscription than the private endpoints isn't supported via the Microsoft Purview governance portal experience. A record can be added manually in the destination DNS Zones in the other subscription. - If you enable a managed event hub after deploying an ingestion private endpoint, you'll need to redeploy the ingestion private endpoint. - Self-hosted integration runtime machine must be deployed in the same VNet or a peered VNet where Microsoft Purview account and ingestion private endpoints are deployed.
You may receive the following error message when running a scan:
This can be an indication of issues related to connectivity or name resolution between the VM running self-hosted integration runtime and Microsoft Purview's managed resources storage account or Event Hubs. ### Resolution
-Validate if name resolution between the VM running Self-Hosted Integration Runtime.
+Validate that name resolution is successful between the VM running the Self-Hosted Integration Runtime and the Microsoft Purview managed resources, such as the blob and queue storage and Event Hubs, through port 443 and private IP addresses (step 8 above).
### Issue
purview How To Create Import Export Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-import-export-glossary.md
Title: How to create, import, export, and manage glossary terms
+ Title: Create, import, export, and delete glossary terms
description: Learn how to create, import, export, and manage business glossary terms in Microsoft Purview.
Last updated 03/09/2022
-# How to create, import, and export glossary terms
+# Create, import, export, and delete glossary terms
-This article describes how to work with the business glossary in Microsoft Purview. Steps are provided to create a business glossary term in Microsoft Purview data catalog, and import and export glossary terms using .csv files.
+This article describes how to work with the business glossary in Microsoft Purview. It provides steps to create a business glossary term in the Microsoft Purview data catalog. It also shows you how to import and export glossary terms by using .CSV files, and how to delete terms that you no longer need.
-## Create a new term
+## Create a term
-To create a new glossary term, follow these steps:
+To create a glossary term, follow these steps:
-1. Select **Data catalog** in the left navigation on the home page, and then select the **Manage glossary** button in the center of the page.
+1. On the home page, select **Data catalog** on the left pane, and then select the **Manage glossary** button in the center of the page.
- :::image type="content" source="media/how-to-create-import-export-glossary/find-glossary.png" alt-text="Screenshot of the data catalog with the glossary highlighted." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/find-glossary.png" alt-text="Screenshot of the data catalog with the button for managing a glossary highlighted." border="true":::
-2. On the **Glossary terms** page, select **+ New term**. A page opens with **System Default** template selected. Choose the template you want to create glossary term with and select **Continue**.
+2. On the **Glossary terms** page, select **+ New term**.
- :::image type="content" source="media/how-to-create-import-export-glossary/new-term-with-default-template.png" alt-text="Screenshot of the New term creation." border="true":::
+ A pane opens with the **System default** template selected. Choose the template that you want to use to create a glossary term, and then select **Continue**.
-3. Give your new term a name, which must be unique in the catalog. The term name is case-sensitive, meaning you could have a term called **Sample** and **sample** in the catalog.
+ :::image type="content" source="media/how-to-create-import-export-glossary/new-term-with-default-template.png" alt-text="Screenshot of the button and pane for creating a new term." border="true":::
-4. Add a **Definition**.
-
-### Adding rich text to a definition
+3. Give your new term a name, which must be unique in the catalog.
-Microsoft Purview enables users to add rich formatting to term definitions such as adding bolding, underlining, or italicizing text. Users can also create tables, bulleted lists, or hyperlinks to external resources.
--
-Below are the rich text formatting options:
-
-| Name | Description | Shortcut key |
-| - | -- | |
-| Bold | Make your text bold. Adding the '*' character around text will also bold it. | Ctrl+B |
-| Italic | Italicize your text. Adding the '_' character around text will also italicize it. | Ctrl+I |
-| Underline | Underline your text. | Ctrl+U |
-| Bullets | Create a bulleted list. Adding the '-' character before text will also create a bulleted list. | |
-| Numbering | Create a numbered list Adding the '1' character before text will also create a bulleted list. | |
-| Heading | Add a formatted heading | |
-| Font size | Change the size of your text. The default size is 12. | |
-| Decrease indent | Move your paragraph closer to the margin. | |
-| Increase indent | Move your paragraph farther away from the margin. | |
-| Add hyperlink | Create a link in your document for quick access to web pages and files. | |
-| Remove hyperlink | Change a link to plain text. | |
-| Quote | Add quote text | |
-| Add table | Add a table to your content. | |
-| Edit table | Insert or delete a column or row from a table | |
-| Clear formatting | Remove all formatting from a selection of text, leaving only the normal, unformatted text. | |
-| Undo | Undo changes you made to the content. | Ctrl+Z |
-| Redo | Redo changes you made to the content. | Ctrl+Y |
-
-> [!NOTE]
-> Updating a definition with the rich text editor adds a new additional attribute `microsoft_isDescriptionRichText": "true"` in the term payload. This attribute is not visible on the UX and is automatically populated when any rich text action is taken by user. See the snippet of term JSON message with rich text definition populated below.
+ > [!NOTE]
+ > Term names are case-sensitive. For example, **Sample** and **sample** could both exist in the same glossary.
+
+4. For **Definition**, add a definition for the term.
+
+ Microsoft Purview enables you to add rich formatting to term definitions. For example, you can add bold, underline, or italic formatting to text. You can also create tables, bulleted lists, or hyperlinks to external resources.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/rich-text-editor.png" alt-text="Screenshot that shows the rich text editor.":::
+
+ Here are the options for rich text formatting:
+
+ | Name | Description | Keyboard shortcut |
+ | - | -- | |
+ | Bold | Make your text bold. Adding the asterisk (*) character around text will also make it bold. | Ctrl+B |
+ | Italic | Make your text italic. Adding the underscore (_) character around text will also make it italic. | Ctrl+I |
+ | Underline | Underline your text. | Ctrl+U |
+ | Bullets | Create a bulleted list. Adding the hyphen (-) character before text will also create a bulleted list. | |
+ | Numbering | Create a numbered list. Adding the 1 character before text will also create a numbered list. | |
+ | Heading | Add a formatted heading. | |
+ | Font size | Change the size of your text. The default size is 12. | |
+ | Decrease indent | Move your paragraph closer to the margin. | |
+ | Increase indent | Move your paragraph farther away from the margin. | |
+ | Add hyperlink | Create a link for quick access to webpages and files. | |
+ | Remove hyperlink | Change a link to plain text. | |
+ | Quote | Add quote text. | |
+ | Add table | Add a table to your content. | |
+ | Edit table | Insert or delete a column or row from a table. | |
+ | Clear formatting | Remove all formatting from a selection of text. | |
+ | Undo | Undo changes that you made to the content. | Ctrl+Z |
+ | Redo | Redo changes that you made to the content. | Ctrl+Y |
->```json
-> {
-> "additionalAttributes": {
-> "microsoft_isDescriptionRichText": "true"
-> }
-> }
->```
+ > [!NOTE]
+ > Updating a definition with the rich text editor adds the attribute `"microsoft_isDescriptionRichText": "true"` in the term payload. This attribute is not visible in the user experience and is automatically populated when you take any rich text action. The rich text definition is populated in the following snippet of a term's JSON message:
+ >
+ >```json
+ > {
+ > "additionalAttributes": {
+ > "microsoft_isDescriptionRichText": "true"
+ > }
+ > }
+ >```
-5. Set the **Status** for the term. New terms default to **Draft** status.
+5. For **Status**, select the status for the term. New terms default to **Draft**.
:::image type="content" source="media/how-to-create-import-export-glossary/overview-tab.png" alt-text="Screenshot of the status choices.":::
- These status markers are metadata associated with the term. Currently you can set the following status on each term:
+ Status markers are metadata associated with the term. Currently, you can set the following status on each term:
- **Draft**: This term isn't yet officially implemented.
- - **Approved**: This term is official/standard/approved.
+ - **Approved**: This term is officially approved.
- **Expired**: This term should no longer be used. - **Alert**: This term needs attention. > [!Important]
- > if an approval workflow is enabled on the term hierarchy then when a new term is created it will go through the approval process and only when it is approved it is stored in catalog. See here to learn about how to manage approval workflows for business glossary [Approval workflows for business glossary](how-to-workflow-business-terms-approval.md)
-
-6. Add **Resources** and **Acronym**. If the term is part of hierarchy, you can add parent terms at **Parent** in the overview tab.
+ > If an approval workflow is enabled on the term hierarchy, a new term will go through the approval process when it's created. The term is stored in the catalog only when it's approved. To learn about how to manage approval workflows for a business glossary, see [Approval workflow for business terms](how-to-workflow-business-terms-approval.md).
+
+6. Add **Resources** and **Acronym** information. If the term is part of a hierarchy, you can add parent terms at **Parent** on the **Overview** tab.
-7. Add **Synonyms** and **Related terms** in the related tab.
+7. Add **Synonyms** and **Related terms** information on the **Related** tab, and then select **Apply**.
- :::image type="content" source="media/how-to-create-import-export-glossary/related-tab.png" alt-text="Screenshot of New term > Related tab." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/related-tab.png" alt-text="Screenshot of tab for related terms and the box for adding synonyms." border="true":::
-8. Optionally, select the **Contacts** tab to add Experts and Stewards to your term.
+8. Optionally, select the **Contacts** tab to add experts and stewards to your term.
9. Select **Create** to create your term. > [!Important]
- > if an approval workflow is enabled on term hierarchy path, you will see **Submit for approval** instead of create button. Clicking on submit for approval will trigger the approval workflow for this term.
+ > If an approval workflow is enabled on the term's hierarchy path, you'll see **Submit for approval** instead of the **Create** button. Selecting **Submit for approval** will trigger the approval workflow for this term.
- :::image type="content" source="media/how-to-create-import-export-glossary/submit-for-approval.png" alt-text="Screenshot of submit for approval." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/submit-for-approval.png" alt-text="Screenshot of the button to submit a term for approval." border="true":::
## Import terms into the glossary
-The Microsoft Purview Data Catalog provides a template .csv file for you to import your terms into your Glossary.
+The Microsoft Purview data catalog provides a template .CSV file that you can use to import terms into your glossary. Terms count as duplicates only when both their spelling and capitalization match, because term names are case-sensitive.
-You can import terms in the catalog. The duplicate terms in file will be overwritten.
+1. On the **Glossary terms** page, select **Import terms**.
-Notice that term names are case-sensitive. For example, `Sample` and `saMple` could both exist in the same glossary.
+ The term template page opens.
-### To import terms, follow these steps
+2. Match the term template to the kind of .CSV file that you want to import, and then select **Continue**.
-1. When you are in the **Glossary terms** page, select **Import terms**.
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-term-template-for-import.png" alt-text="Screenshot of the template list for importing a term, with the system default template highlighted.":::
-2. The term template page opens. Match the term template to the kind of .CSV you want to import.
+3. Download the .csv template and use it to enter the terms that you want to add.
- :::image type="content" source="media/how-to-create-import-export-glossary/select-term-template-for-import.png" alt-text="Screenshot of the Glossary terms page, Import terms button.":::
-
-3. Download the csv template and use it to enter your terms you would like to add. Give your template csv file a name that starts with a letter and only includes letters, numbers, spaces, '_', or other non-ascii unicode characters. Special characters in the file name will create an error.
+ Give your template file a name that starts with a letter and includes only letters, numbers, spaces, an underscore (_), or other non-ASCII Unicode characters. Special characters in the file name will create an error.
> [!Important]
- > The system only supports importing columns that are available in the template. The "System Default" template will have all the default attributes.
- > However, custom term templates will have out of the box attributes and additional custom attributes defined in the template. Therefore, the .CSV file differs both from total number of columns and column names depending on the term template selected. You can also review the file for issues after upload.
- > if you want to upload a file with rich text definition, make sure to enter the definition with markup tags and populate the column **IsDefinitionRichText** to true in the .csv file.
+ > The system supports only importing columns that are available in the template. The **System default** template will have all the default attributes.
+ >
+ > Custom term templates define out-of-the box attributes and additional custom attributes. Therefore, the .CSV file differs in the total number of columns and the column names, depending on the term template that you select. You can also review the file for problems after upload.
+ >
+ > If you want to upload a file with a rich text definition, be sure to enter the definition with markup tags and populate the column `IsDefinitionRichText` to `true` in the .CSV file.
- :::image type="content" source="media/how-to-create-import-export-glossary/select-file-for-import.png" alt-text="Screenshot of the Glossary terms page, select file for Import.":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-file-for-import.png" alt-text="Screenshot of the button for downloading a sample template file.":::
-4. Once you've finished filling out your .csv file, select your file to import and then select **OK**.
+4. After you finish filling out your .CSV file, select your file to import, and then select **OK**.
-5. The system will upload the file and add all the terms to your catalog.
+The system will upload the file and add all the terms to your glossary.
- > [!Important]
- > The email address for Stewards and Experts should be the primary address of the user from AAD group. Alternate email, user principal name and non-AAD emails are not yet supported.
+> [!Important]
+> The email address for an expert or steward should be the primary address of the user from the Azure Active Directory (Azure AD) group. Alternate emails, user principal names, and non-Azure AD emails are not yet supported.
-## Export terms from glossary with custom attributes
+## Export terms from the glossary with custom attributes
-You should be able to export terms from glossary as long as the selected terms belong to same term template.
+You can export terms from the glossary as long as the selected terms belong to same term template.
-1. When you are in the Glossary, by default the **Export** button is disabled. Once you select the terms you want to export, the **Export** button is enabled if the selected terms belong to same template.
+When you're in the glossary, the **Export terms** button is disabled by default. After you select the terms that you want to export, the **Export terms** button is enabled if the selected terms belong to same template.
-2. Select **Export** to download the selected terms.
+Select **Export terms** to download the selected terms.
- :::image type="content" source="media/how-to-create-import-export-glossary/select-term-template-for-export.png" lightbox="media/how-to-create-import-export-glossary/select-term-template-for-export.png" alt-text="Screenshot of the Glossary terms page, select file for Export.":::
- > [!Important]
- > If the terms in a hierarchy belong to different term templates then you need to split them into different .CSV files for import. Also, updating a parent of a term is currently not supported using import process.
+> [!Important]
+> If the terms in a hierarchy belong to different term templates, you need to split them into different .CSV files for import. Also, the import process currently doesn't support updating the parent of a term.
## Delete terms
-1. Select **Data catalog** in the left navigation on the home page, and then select the **Manage glossary** button in the center of the page.
+1. On the home page, select **Data catalog** on the left pane, and then select the **Manage glossary** button in the center of the page.
- :::image type="content" source="media/how-to-create-import-export-glossary/find-glossary.png" alt-text="Screenshot of the data catalog with the glossary highlighted." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/find-glossary.png" alt-text="Screenshot of the data catalog and the button for managing a glossary." border="true":::
-1. Using checkboxes, select the terms you want to delete. You can select a single term, or multiple terms for deletion.
+1. Select the checkboxes for the terms that you want to delete. You can select a single term or multiple terms for deletion.
- :::image type="content" source="media/how-to-create-import-export-glossary/select-terms.png" alt-text="Screenshot of the glossary, with a few terms selected." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-terms.png" alt-text="Screenshot of the glossary with a few terms selected." border="true":::
-1. Select the **Delete** button in the top menu.
+1. Select the **Delete** button on the top menu.
- :::image type="content" source="media/how-to-create-import-export-glossary/select-delete.png" alt-text="Screenshot of the glossary, with the Delete button highlighted in the top menu." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-delete.png" alt-text="Screenshot of the glossary with the Delete button highlighted on the top menu." border="true":::
-
-1. You'll be presented with a window that shows all the terms selected for deletion.
+1. A new window shows all the terms selected for deletion. In the following example, the terms to be deleted are the parent term **Revenue** and its two child terms.
> [!NOTE]
- > If a parent is selected for deletion all the children for that parent are automatically selected for deletion.
+ > If a parent is selected for deletion, all the children for that parent are automatically selected for deletion.
- :::image type="content" source="media/how-to-create-import-export-glossary/delete-window.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted. The Revenue term is a parent to two other terms, and because it was selected to be deleted, its child terms are also in the list to be deleted." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/delete-window.png" alt-text="Screenshot of the window for deleting glossary terms, with a list of all terms to be deleted." border="true":::
-1. Review the list. You can remove the terms you don't want to delete after review by selecting **Remove**.
+ Review the list. You can remove the terms that you don't want to delete by selecting **Remove**.
- :::image type="content" source="media/how-to-create-import-export-glossary/select-remove.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Remove' column highlighted on the right." border="true":::
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-remove.png" alt-text="Screenshot of the window for deleting glossary terms, with the column for removing items from the list of terms to be deleted." border="true":::
-1. You can also see which terms will require an approval process in the column **Approval Needed**. If Approval needed is **Yes**, the term will go through an approval workflow before deletion. If the value is **No** then the term will be deleted without any approvals.
+1. The **Approval needed** column shows which terms require an approval process. If the value is **Yes**, the term will go through an approval workflow before deletion. If the value is **No**, the term will be deleted without any approvals.
> [!NOTE]
- > If a parent has an associated approval process, but the child does not, the parent delete term workflow will be triggered. This is because the selection is done on the parent and you are acknowledging to delete child terms along with parent.
-
- :::image type="content" source="media/how-to-create-import-export-glossary/approval-needed.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Approval needed' column highlighted." border="true":::
-
-1. If there's a least one term that needs to be approved you'll be presented with **Submit for approval** and **Cancel** buttons. Selecting **Submit for approval** will delete all the terms where approval isn't needed and will trigger approval workflows for terms that require it.
+ > If a parent has an associated approval process but its child doesn't, the workflow for deleting the parent term will be triggered. This is because the selection is done on the parent, and you're acknowledging the deletion of child terms along with the parent.
- :::image type="content" source="media/how-to-create-import-export-glossary/yes-approval-needed.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Approval needed' column highlighted. An item is listed as approval needed, so at the bottom, buttons available are 'Submit for approval' and 'Cancel'." border="true":::
+ If at least one term needs to be approved, **Submit for approval** and **Cancel** buttons appear. Selecting **Submit for approval** will delete all the terms where approval isn't needed and will trigger approval workflows for terms that require it.
-1. If there are no terms that need to be approved you'll be presented with **Delete** and **Cancel** buttons. Selecting **Delete** will delete all the selected terms.
+ :::image type="content" source="media/how-to-create-import-export-glossary/yes-approval-needed.png" alt-text="Screenshot of the window for deleting glossary terms, which shows terms that need approval and includes the button for submitting them for approval." border="true":::
- :::image type="content" source="media/how-to-create-import-export-glossary/no-approval-needed.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Approval needed' column highlighted. All items are listed as no approval needed, so at the bottom, buttons available are 'Delete' and 'Cancel'." border="true":::
+ If no terms need to be approved, **Delete** and **Cancel** buttons appear. Selecting **Delete** will delete all the selected terms.
+ :::image type="content" source="media/how-to-create-import-export-glossary/no-approval-needed.png" alt-text="Screenshot of the window for deleting glossary terms, which shows terms that don't need approval and the button for deleting them." border="true":::
## Business terms with approval workflow enabled
-If [workflows](concept-workflow.md) are enabled on a term, then any creates, updates, or deletes to the term will go through an approval before they're saved in data catalog.
+If [workflows](concept-workflow.md) are enabled on a term, then any create, update, or delete actions for the term will go through an approval before they're saved in the data catalog.
-- **New terms** - when a create approval workflow is enabled on the parent term, during the creation process you'll see **Submit for approval** instead of **Create** after you've entered all the details. Selecting **Submit for approval** will trigger the workflow. You'll receive notification when your request is approved or rejected.
+- **New terms**: When a create approval workflow is enabled on a parent term, you see **Submit for approval** instead of **Create** after you enter all the details in the creation process. Selecting **Submit for approval** triggers the workflow. You'll get a notification when your request is approved or rejected.
-- **Updates to existing terms** - when an update approval workflow is enabled on parent, you'll see **Submit for approval** instead of **Save** when updating the term. Selecting **Submit for approval** will trigger the workflow. The changes won't be saved in catalog until all the approvals are met.
+- **Updates to existing terms**: When an update approval workflow is enabled on a parent term, you see **Submit for approval** instead of **Save** when you're updating the term. Selecting **Submit for approval** triggers the workflow. The changes won't be saved in the catalog until all the approvals are met.
-- **Deletion** - when a delete approval workflow is enabled on the parent term, you'll see **Submit for approval** instead of **Delete** when deleting the term. Selecting **Submit for approval** will trigger the workflow. However, the term won't be deleted from catalog until all the approvals are met.
+- **Deletion**: When a delete approval workflow is enabled on the parent term, you see **Submit for approval** instead of **Delete** when you're deleting the term. Selecting **Submit for approval** triggers the workflow. However, the term won't be deleted from the catalog until all the approvals are met.
-- **Importing terms** - when an import approval workflow enabled for Microsoft Purview's glossary, you'll see **Submit for approval** instead of **OK** in the Import window when importing terms via csv. Selecting **Submit for approval** will trigger the workflow. However, the terms in the file won't be updated in catalog until all the approvals are met.
+- **Importing terms**: When an import approval workflow is enabled for the Microsoft Purview glossary, you see **Submit for approval** instead of **OK** in the **Import** window when you're importing terms via a .CSV file. Selecting **Submit for approval** triggers the workflow. However, the terms in the file won't be updated in the catalog until all the approvals are met.
## Next steps
-* For more information about glossary terms, see the [glossary reference](reference-azure-purview-glossary.md)
-* For more information about approval workflows of business glossary, see the [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+* For more information about glossary terms, see the [glossary reference](reference-azure-purview-glossary.md).
+* For more information about approval workflows of the business glossary, see [Approval workflow for business terms](how-to-workflow-business-terms-approval.md).
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Workflows](concept-workflow.md) allow you to automate some business processes through Microsoft Purview. Self-service access workflows allow you to create a process for your users to request access to datasets they've discovered in Microsoft Purview!
+You can use [workflows](concept-workflow.md) to automate some business processes through Microsoft Purview. Self-service access workflows allow you to create a process for your users to request access to datasets they've discovered in Microsoft Purview.
-For example: let's say your team has a new data analyst who will be doing some business reporting. You add them to your department's collection in Microsoft Purview. From there they can browse the data assets and read descriptions about the data your department has available. They notice that one of the Azure Data Lake Storage Gen2 accounts seems to have the exact data they need to get started. Since a self-service access workflow has been set up for that resource, they can [request access](how-to-request-access.md) to that Azure Data Lake Storage account from within Microsoft Purview!
+Let's say your team has a new data analyst who will do some business reporting. You add that data analyst to your department's collection in Microsoft Purview. From there, they can browse through the data assets and read descriptions about the data that your department has available.
+The data analyst notices that one of the Azure Data Lake Storage Gen2 accounts seems to have the exact data that they need to get started. Because a self-service access workflow has been set up for that resource, they can [request access](how-to-request-access.md) to that Azure Data Lake Storage account from within Microsoft Purview.
-You can create these workflows for any of your resources across your data estate to automate the access request process. Workflows are assigned at the [collection](reference-azure-purview-glossary.md#collection) level, and so automate business processes along the same organizational lines as your permissions.
-This guide will show you how to create and manage self-service access workflows in Microsoft Purview.
+You can create these workflows for any of your resources across your data estate to automate the access request process. Workflows are assigned at the [collection](reference-azure-purview-glossary.md#collection) level, so they automate business processes along the same organizational lines as your permissions.
+
+This guide shows you how to create and manage self-service access workflows in Microsoft Purview.
>[!NOTE]
-> To be able to create or edit a workflow, you'll need the to be in the [workflow admin role](catalog-permissions.md) in Microsoft Purview.
-> You can also contact the workflow admin in your collection, or reach out to your collection administrator for permissions.
+> To create or edit a workflow, you need the [workflow admin role](catalog-permissions.md) in Microsoft Purview. You can also contact the workflow admin in your collection, or reach out to your collection administrator, for permissions.
-## Create and enable self-service access workflow
+## Create and enable the self-service access workflow
-1. Sign in to [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
+1. Sign in to [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) and select the management center. Three new icons appear in the table of contents.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-section.png" alt-text="Screenshot that shows the management center menu with the new workflow section highlighted.":::
-1. To create new workflows, select Authoring. This will take you to the workflow authoring experience.
+1. To create new workflows, select **Authoring**. This step takes you to the workflow authoring experience.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-authoring-experience.png" alt-text="Screenshot showing the authoring workflows page, showing a list of all workflows.":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-authoring-experience.png" alt-text="Screenshot that shows the page for authoring workflows and a list of all workflows.":::
>[!NOTE]
- >If the authoring tab is greyed out, you don't have the permissions to be able to author workflows. You'll need the [workflow admin role](catalog-permissions.md).
+ >If the **Authoring** tab is unavailable, you don't have the permissions to author workflows. You need the [workflow admin role](catalog-permissions.md).
1. To create a new self-service workflow, select the **+New** button.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-authoring-select-new.png" alt-text="Screenshot showing the authoring workflows page, with the + New button highlighted.":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-authoring-select-new.png" alt-text="Screenshot that shows the page for authoring workflows, with the New button highlighted.":::
-1. You'll be presented with different categories workflows creatable in Microsoft Purview. To create **an access request workflow** Select **Governance** and select **Continue**.
+1. You're presented with categories of workflows that you can create in Microsoft Purview. To create an access request workflow, select **Governance**, and then select **Continue**.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/select-governance.png" alt-text="Screenshot showing the new workflow window, with the Governance option selected.":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/select-governance.png" alt-text="Screenshot that shows the new workflow panel, with the Governance option selected.":::
-1. In the next screen, you'll see all the templates provided by Microsoft Purview to create a self-service data access workflow. Select the template **Data access request** and select **Continue**.
+1. The next screen shows all the templates that Microsoft Purview provides to create a self-service data access workflow. Select the **Data access request** template, and then select **Continue**.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/select-data-access-request.png" alt-text="Screenshot showing the new workflow window, with the Data access request option selected.":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/select-data-access-request.png" alt-text="Screenshot that shows the new workflow panel, with the data access request template selected.":::
-1. Next, enter workflow a name and optionally add a description. Then select **Continue**.
+1. Enter a workflow name, optionally add a description, and then select **Continue**.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/name-and-continue.png" alt-text="Screenshot showing the new workflow window, with a name entered in the textbox.":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/name-and-continue.png" alt-text="Screenshot that shows the name and description boxes for a new workflow.":::
-1. You'll now be presented with a canvas where the selected template is loaded by default.
+1. You're presented with a canvas where the selected template is loaded by default.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-canvas-inline.png" alt-text="Screenshot showing the workflow canvas with the selected template workflow steps displayed." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/workflow-canvas-expanded.png":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-canvas-inline.png" alt-text="Screenshot that shows the workflow canvas with the selected template workflow steps displayed." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/workflow-canvas-expanded.png":::
The template has the following steps: 1. Trigger when a data access request is made.
- 1. Approval connector that specifies a user or group that will be contacted to approve the request.
+ 1. An approval connector that specifies a user or group that will be contacted to approve the request.
- ### Assign Data owners as approvers
- Using the dynamic variable **Asset.Owner** as approvers in Approval connector will send approval requests to the data owners on the entity.
+ Assign data owners as approvers. Using the dynamic variable **Asset.Owner** as approvers in the approval connector will send approval requests to the data owners on the entity.
>[!Note]
- > Since entities may not have data owner field populated, using the above variables might result in errors if no data owner is found.
+ > Using the **Asset.Owner** variable might result in errors if an entity doesn't have a data owner.
+
+1. If the condition to check approval status is approved, take the following steps:
+
+ * If a data source is registered for [data use management](how-to-enable-data-use-governance.md) with the policy:
+ 1. Create a [self-service policy](concept-self-service-data-access-policy.md).
+ 1. Send an email to the requestor that confirms access.
+ * If a data source isn't registered with the policy:
+ 1. Use a connector to assign [a task](how-to-workflow-manage-requests-approvals.md#tasks) to a user or an Azure Active Directory (Azure AD) group to manually provide access to the requestor.
+ 1. Send an email to the requestor to explain that access is provided after the task is marked as complete.
- 1. Condition to check approval status
- - If approved:
- 1. Condition to check if data source is registered for [data use management](how-to-enable-data-use-governance.md) (policy)
- 1. If a data source is registered with policy:
- 1. Create a [self-service policy](concept-self-service-data-access-policy.md)
- 1. Send email to requestor that access is provided
- 1. If data source isn't registered with policy:
- 1. Task connector to assign [a task](how-to-workflow-manage-requests-approvals.md#tasks) to a user or Microsoft Azure Active Directory group to manually provide access to requestor.
- 1. Send an email to requestor that access is provided once the task is marked as complete.
- - If rejected:
- 1. Send an email to requestor that data access request is denied.
-1. The default template can be used as it is by populating two fields:
- * Adding an approver's email address or Microsoft Azure Active Directory group in **Start and Wait for approval** Connector
- * Adding a user's email address or Microsoft Azure Active Directory group in **Create task** connector to denote who is responsible for manually providing access if the source isn't registered with policy.
-
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-inline.png" alt-text="Screenshot showing the workflow canvas with the start and wait for an approval step, and the Create Task and wait for task completion steps highlighted, and the Assigned to textboxes highlighted within those steps." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-expanded.png":::
+ If the condition to check approval status is rejected, send an email to the requestor to say that the data access request is denied.
+
+1. You can use the default template as it is by populating two fields:
+ * Add an approver's email address or Azure AD group in the **Start and wait for an approval** connector.
+ * Add a user's email address or Azure AD group in the **Create task and wait for task completion** connector to denote who is responsible for manually providing access if the source isn't registered with the policy.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-inline.png" alt-text="Screenshot that shows the workflow canvas with the connector for starting an approval and the connector for creating a task, along with the text boxes for assigning them." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-expanded.png":::
> [!NOTE]
- > Please configure the workflow to create self-service policies ONLY for sources supported by Microsoft Purview's policy feature. To see what's supported by policy, check the [Data owner policies documentation](tutorial-data-owner-policies-storage.md).
+ > Configure the workflow to create self-service policies only for sources that the Microsoft Purview policy supports. To see what the policy supports, check the [documentation about data owner policies](tutorial-data-owner-policies-storage.md).
>
- > If your source isn't supported by Microsoft Purview's policy feature, use the Task connector to assign [tasks](how-to-workflow-manage-requests-approvals.md#tasks) to users or groups that can provide access.
+ > If the Microsoft Purview policy doesn't support your source, use the **Create task and wait for task completion** connector to assign [tasks](how-to-workflow-manage-requests-approvals.md#tasks) to users or groups that can provide access.
-1. You can also modify the template by adding more connectors to suit your organizational needs.
+ You can also modify the template by adding more connectors to suit your organizational needs.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/more-connectors-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with a + button highlighted on the arrow between the two top steps, and the Next Step button highlighted at the bottom of the workspace." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/more-connectors-expanded.png":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/more-connectors-inline.png" alt-text="Screenshot that shows the workflow authoring canvas, with the button for adding a connector and the button for saving the new conditions." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/more-connectors-expanded.png":::
-1. Once you're done defining a workflow, you need to bind the workflow to a collection hierarchy path. The binding (or scoping) implies that this workflow is triggered only for data access requests in that collection. To bind a workflow or to apply a scope to a workflow, you need to select **Apply workflow**. Select the scope you want this workflow to be associated with and select **OK**.
+1. After you define a workflow, you need to bind the workflow to a collection hierarchy path. The binding (or scoping) implies that this workflow is triggered only for data access requests in that collection.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/apply-workflow.png" alt-text="Screenshot showing the workflow workspace with the Apply workflow button selected at the top of the space, and the Apply workflow menu open, showing a list of items. One item is selected, and the O K button is highlighted at the bottom.":::
+ To bind a workflow or to apply a scope to a workflow, select **Apply workflow**. Select the scope that you want to associate with this workflow, and then select **OK**.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/apply-workflow.png" alt-text="Screenshot that shows the workflow workspace with a list of items on the menu for applying a workflow.":::
>[!NOTE]
- > Purview workflow engine will always resolve to the closest workflow that the collection hierarchy path is associated with. In case a direct binding is not found, it will traverse up in the tree to find the workflow associated with the closest parent in the collection tree.
+ > The Microsoft Purview workflow engine will always resolve to the closest workflow that the collection hierarchy path is associated with. If the workflow engine doesn't find a direct binding, it will look for the workflow that's associated with the closest parent in the collection tree.
+
+1. Make sure that the **Enable** toggle is on. The workflow should be enabled by default.
+1. Select **Save and close** to create and enable the workflow.
-1. By default, the workflow will be enabled. You can disable by selecting the Enable toggle.
-1. Finally select **Save and close** to create and enable the workflow.
+ Your new workflow now appears in the list of workflows.
- :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/completed-workflows.png" alt-text="Screenshot showing the workflow authoring page with the newly created workflow listed among the other workflows.":::
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/completed-workflows.png" alt-text="Screenshot that shows the workflow authoring page with the newly created workflow listed among the other workflows.":::
## Edit an existing workflow
-To modify an existing workflow, select the workflow and then select the **Edit** button. You'll now be presented with the canvas containing workflow definition. Modify the workflow and select **Save** to commit changes.
+To modify an existing workflow, select the workflow, and then select the **Edit** button. You're presented with the canvas that contains the workflow definition. Modify the workflow, and then select **Save** to commit the changes.
## Disable a workflow
-To disable a workflow, you can select the workflow and then select **Disable**. You can also disable the workflow by selecting **Edit** and changing the enable toggle in workflow canvas then saving.
+To disable a workflow, select the workflow, and then select **Disable**.
+
+Another way is to select the workflow, select **Edit**, turn off the **Enable** toggle in the workflow canvas, and then select **Save and close**.
## Delete a workflow
-To delete a workflow, select the workflow and then select **Delete**.
+To delete a workflow, select the workflow, and then select **Delete**.
## Next steps For more information about workflows, see these articles: -- [What are Microsoft Purview workflows](concept-workflow.md)
+- [Workflows in Microsoft Purview](concept-workflow.md)
- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md) - [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
purview Tutorial Using Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-using-rest-apis.md
Once the new service principal is created, you need to assign the data plane rol
1. Select the **Role assignments** tab.
-1. Assign the following roles to the service principal created previously to access various data planes in Microsoft Purview. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. Assign the following roles to the service principal created previously to access various data planes in Microsoft Purview. For detailed steps, see [Assign Azure roles using the Microsoft Purview portal](./how-to-create-and-manage-collections.md#add-role-assignments).
* Data Curator role to access Catalog Data plane. * Data Source Administrator role to access Scanning Data plane.
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
For example, in the following diagram:
You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN gateway and ExpressRoute are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other. > [!IMPORTANT]
-> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515.
->
+> * Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515.
+> * When you create or delete an Azure Route Server from a virtual network that contains a Virtual Network Gateway (ExpressRoute or VPN), expect downtime until the operation is complete.
![Diagram showing ExpressRoute and VPN gateway configured with Route Server.](./media/expressroute-vpn-support/expressroute-and-vpn-with-route-server.png)
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
For frequently asked questions about Azure Route Server, see [Azure Route Server
- [Learn how to configure Azure Route Server](quickstart-configure-route-server-powershell.md) - [Learn how Azure Route Server works with Azure ExpressRoute and Azure VPN](expressroute-vpn-support.md)-- [Learn module: Introduction to Azure Route Server](/learn/modules/intro-to-azure-route-server)
+- [Learn module: Introduction to Azure Route Server](/training/modules/intro-to-azure-route-server)
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
- Previously updated : 06/20/2022+ Last updated : 09/19/2022
-# Set up an indexer connection to a Cosmos DB database using a managed identity
+# Set up an indexer connection to Cosmos DB using a managed identity
-This article describes how to set up an Azure Cognitive Search indexer connection to an Azure Cosmos DB database using a managed identity instead of providing credentials in the connection string.
+This article explains how to set up an indexer connection to an Azure Cosmos DB database using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Cosmos DB. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
-Before learning more about this feature, it is recommended that you have an understanding of what an indexer is and how to set up an indexer for your data source. More information can be found at the following links:
-
-* [Indexer overview](search-indexer-overview.md)
-* [Azure Cosmos DB indexer (SQL API)](search-howto-index-cosmosdb.md)
-* [Azure Cosmos DB indexer (MongoDB API - preview)](search-howto-index-cosmosdb-mongodb.md)
-* [Azure Cosmos DB indexer (Gremlin API - preview)](search-howto-index-cosmosdb-gremlin.md)
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure Active Directory logins and require Azure role assignments to access data in Cosmos DB.
## Prerequisites * [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Cosmos DB. For data reader access, you'll need the **Cosmos DB Account Reader** role and the identity used to make the request. This role works for all Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role. At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) is not supported when using Search with managed identities to connect to Cosmos DB.
+* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Cosmos DB.
+
+ For data reader access, you'll need the **Cosmos DB Account Reader** role assigned to the identity that's used to make the request. This role works for all Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role.
+
+ At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Cosmos DB.
-The easiest way to test the connection is using the [Import data wizard](search-import-data-portal.md). The wizard supports data source connections for both system and user managed identities.
+* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-index-cosmosdb.md).
## Create the data source
The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and th
When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Cosmos DB, the resource group, and the Cosmos DB account name.
-* For SQL collections, the connection string does not require "ApiKind".
+* For SQL collections, the connection string doesn't require "ApiKind".
* For MongoDB collections, add "ApiKind=MongoDb" to the connection string and use a preview REST API. * For Gremlin graphs, add "ApiKind=Gremlin" to the connection string and use a preview REST API.
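For orientation, here's a minimal sketch of a Cosmos DB data source that uses a system-assigned managed identity for a MongoDB collection. The data source name, subscription, resource group, account, database, and collection values are placeholders, and the preview API version is used because MongoDB collections require a preview REST API, as noted above:

```http
POST https://[service name].search.windows.net/datasources?api-version=2021-04-30-preview
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-cosmosdb-mongodb-datasource",
    "type": "cosmosdb",
    "credentials": {
        "connectionString": "ResourceId=/subscriptions/[subscription ID]/resourceGroups/[resource group]/providers/Microsoft.DocumentDB/databaseAccounts/[account name];Database=[database name];ApiKind=MongoDb;"
    },
    "container": { "name": "[collection name]" }
}
```

The same shape applies to SQL API collections, minus the "ApiKind" setting in the connection string.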
-Here is an example of how to create a data source to index data from a storage account using the [Create Data Source](/rest/api/searchservice/create-data-source) REST API and a managed identity connection string. The managed identity connection string format is the same for the REST API, .NET SDK, and the Azure portal.
+Here's an example of how to create a data source to index data from a storage account using the [Create Data Source](/rest/api/searchservice/create-data-source) REST API and a managed identity connection string. The managed identity connection string format is the same for the REST API, .NET SDK, and the Azure portal.
```http POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
The 2021-04-30-preview REST API supports connections based on a user-assigned ma
* First, the format of the "credentials" property is the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Cosmos DB, the resource group, and the Cosmos DB account name.
- * For SQL collections, the connection string does not require "ApiKind".
+ * For SQL collections, the connection string doesn't require "ApiKind".
* For MongoDB collections, add "ApiKind=MongoDb" to the connection string * For Gremlin graphs, add "ApiKind=Gremlin" to the connection string. * Second, you'll add an "identity" property that contains the collection of user-assigned managed identities. Only one user-assigned managed identity should be provided when creating the data source. Set it to type "userAssignedIdentities".
-Here is an example of how to create an indexer data source object using the [preview Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) REST API:
+Here's an example of how to create an indexer data source object using the [preview Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) REST API:
```http
api-key: [admin key]
## Create the indexer
-An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create and run the indexer.
+An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create and run the indexer. If the indexer is successful, the connection syntax and role assignments are valid.
Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call with a Cosmos DB indexer definition. The indexer will run when you submit the request.
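As a rough point of reference, a minimal indexer definition might look like the following sketch. The indexer, data source, and index names are placeholders, and the schedule is optional:

```http
POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-cosmosdb-indexer",
    "dataSourceName": "my-cosmosdb-datasource",
    "targetIndexName": "my-search-index",
    "schedule": { "interval": "PT2H" }
}
```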
Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call
## Troubleshooting
-If you recently rotated your Cosmos DB account keys you will need to wait up to 15 minutes for the managed identity connection string to work.
+If you recently rotated your Cosmos DB account keys, you'll need to wait up to 15 minutes for the managed identity connection string to work.
Check to see if the Cosmos DB account has its access restricted to select networks. You can rule out any firewall issues by trying the connection without restrictions in place.
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
You can configure an Azure Cognitive Search service to connect to other Azure re
+ A search service at the [Basic tier or above](search-sku-tier.md).
-+ An Azure resource that accepts incoming requests from an Azure AD login that has a valid role assignment.
++ An Azure resource that accepts incoming requests from an Azure Active Directory login that has a valid role assignment. ## Supported scenarios
A user-assigned managed identity is a resource on Azure. It's useful if you need
1. In the "Search services and marketplace" search bar, search for "User Assigned Managed Identity" and then select **Create**.
- :::image type="content" source="media/search-managed-identities/user-assigned-managed-identity.png" alt-text="Screenshot of the user assigned managed identity tile in Azure marketplace.":::
+ :::image type="content" source="media/search-managed-identities/user-assigned-managed-identity.png" alt-text="Screenshot of the user assigned managed identity tile in Azure Marketplace.":::
1. Select the subscription, resource group, and region. Give the identity a descriptive name.
search Search Howto Managed Identities Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-sql.md
- Previously updated : 02/11/2022+ Last updated : 09/19/2022
-# Set up an indexer connection to Azure SQL Database using a managed identity
+# Set up an indexer connection to Azure SQL using a managed identity
-This article describes how to set up an Azure Cognitive Search indexer connection to Azure SQL Database using a managed identity instead of providing credentials in the connection string.
+This article explains how to set up an indexer connection to Azure SQL Database using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Azure SQL.
-
-Before learning more about this feature, it is recommended that you have an understanding of what an indexer is and how to set up an indexer for your data source. More information can be found at the following links:
-
-* [Indexer overview](search-indexer-overview.md)
-* [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure Active Directory logins and require Azure role assignments to access data in Azure SQL.
## Prerequisites * [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* Azure AD admin role on SQL:
+* [Assign an Azure admin role on SQL](/azure/azure-sql/database/authentication-aad-configure). The identity used on the indexer connection needs read permissions. You must be an Azure AD admin with a server in SQL Database or SQL Managed Instance to grant read permissions on a database.
- To assign read permissions on the database, you must be an Azure AD admin with a server in SQL Database or SQL Managed Instance. See [Configure and manage Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure) and follow the steps to provision an Azure AD admin.
+* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
## 1 - Assign permissions to read the database
DROP USER IF EXISTS [insert your search service name or user-assigned managed id
## 2 - Add a role assignment
-In this section you'll give your Azure Cognitive Search service permission to read data from your SQL Server. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+In this section, you'll give your Azure Cognitive Search service permission to read data from your SQL Server. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
1. In the Azure portal, navigate to your Azure SQL Server page.
The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and th
When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide an Initial Catalog or Database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure SQL Database, the resource group of SQL Database, and the name of the SQL database.
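For orientation, a managed identity connection for Azure SQL might be sketched as follows. The names are placeholders, and this sketch assumes the resource ID points to the logical server while the database is named separately in the connection string:

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-sql-datasource",
    "type": "azuresql",
    "credentials": {
        "connectionString": "Database=[database name];ResourceId=/subscriptions/[subscription ID]/resourceGroups/[resource group]/providers/Microsoft.Sql/servers/[server name];Connection Timeout=30;"
    },
    "container": { "name": "[table or view name]" }
}
```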
-Here is an example of how to create a data source to index data from a storage account using the [Create Data Source](/rest/api/searchservice/create-data-source) REST API and a managed identity connection string. The managed identity connection string format is the same for the REST API, .NET SDK, and the Azure portal.
+Here's an example of how to create a data source to index data from a storage account using the [Create Data Source](/rest/api/searchservice/create-data-source) REST API and a managed identity connection string. The managed identity connection string format is the same for the REST API, .NET SDK, and the Azure portal.
```http POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
The 2021-04-30-preview REST API supports connections based on a user-assigned ma
* Second, you'll add an "identity" property that contains the collection of user-assigned managed identities. Only one user-assigned managed identity should be provided when creating the data source. Set it to type "userAssignedIdentities".
-Here is an example of how to create an indexer data source object using the [preview Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) REST API:
+Here's an example of how to create an indexer data source object using the [preview Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) REST API:
```http POST https://[service name].search.windows.net/datasources?api-version=2021-04-30-preview
api-key: [admin key]
## 5 - Create the indexer
-An indexer connects a data source with a target search index, and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create the indexer.
+An indexer connects a data source with a target search index, and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create the indexer. If the indexer is successful, the connection syntax and role assignments are valid.
Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call with an Azure SQL indexer definition. The indexer will run when you submit the request.
api-key: [admin key]
"name" : "sql-indexer", "dataSourceName" : "sql-datasource", "targetIndexName" : "my-target-index"
-```
+```
## Troubleshooting
-If you get an error when the indexer tries to connect to the data source that says that the client is not allowed to access the server, take a look at [common indexer errors](./search-indexer-troubleshooting.md).
+If you get an error saying that the client isn't allowed to access the server when the indexer tries to connect to the data source, take a look at [common indexer errors](./search-indexer-troubleshooting.md).
You can also rule out any firewall issues by trying the connection with and without restrictions in place.
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
- Previously updated : 03/30/2022+ Last updated : 09/19/2022
-# Set up a connection to an Azure Storage account using a managed identity
+# Set up an indexer connection to Azure Storage using a managed identity
-This article describes how to set up an Azure Cognitive Search indexer connection to an Azure Storage account using a managed identity instead of providing credentials in the connection string.
+This article explains how to set up an indexer connection to an Azure Storage account using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Azure Storage. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
-This article assumes familiarity with indexer concepts and configuration. If you're new to indexers, start with these links:
-
-* [Indexer overview](search-indexer-overview.md)
-* [Azure Blob indexer](search-howto-indexing-azure-blob-storage.md)
-* [Azure Data Lake Storage (ADLS) Gen2 indexer](search-howto-index-azure-data-lake-storage.md)
-* [Azure Table indexer](search-howto-indexing-azure-tables.md)
-* [Azure Files indexer (preview)](search-file-storage-integration.md)
-
-For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure Active Directory logins and require Azure role assignments to access data in Azure Storage.
> [!NOTE] > If storage is network-protected and in the same region as your search service, you must use a system-assigned managed identity and either one of the following network options: [connect as a trusted service](search-indexer-howto-access-trusted-service-exception.md), or [connect using the resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances).
For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://gith
* [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role):
+* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Azure Storage:
+
+ * Choose **Storage Blob Data Reader** for data read access in Blob Storage and ADLS Gen2.
- * **Storage Blob Data Reader** for data read access in Blob Storage and ADLS Gen2.
+ * Choose **Reader and Data Access** for data read access in Table Storage and File Storage.
- * **Reader and Data** for data read access in Table Storage and File Storage.
+* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-indexing-azure-blob-storage.md).
-The easiest way to test the connection is using the [Import data wizard](search-import-data-portal.md). The wizard supports data source connections for both system and user managed identities.
+> [!TIP]
+> For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
## Create the data source
The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and th
When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide a ResourceId that has no account key or password. The ResourceId must include the subscription ID of the storage account, the resource group of the storage account, and the storage account name.
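As an illustration, a blob data source that uses a system-assigned managed identity might be sketched like this, with placeholder names throughout:

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-blob-datasource",
    "type": "azureblob",
    "credentials": {
        "connectionString": "ResourceId=/subscriptions/[subscription ID]/resourceGroups/[resource group]/providers/Microsoft.Storage/storageAccounts/[storage account name];"
    },
    "container": { "name": "[container name]" }
}
```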
-Here is an example of how to create a data source to index data from a storage account using the [Create Data Source](/rest/api/searchservice/create-data-source) REST API and a managed identity connection string. The managed identity connection string format is the same for the REST API, .NET SDK, and the Azure portal.
+Here's an example of how to create a data source to index data from a storage account using the [Create Data Source](/rest/api/searchservice/create-data-source) REST API and a managed identity connection string. The managed identity connection string format is the same for the REST API, .NET SDK, and the Azure portal.
```http POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
api-key: [admin key]
The 2021-04-30-preview REST API supports connections based on a user-assigned managed identity. When you're connecting with a user-assigned managed identity, there are two changes to the data source definition:
-* First, the format of the "credentials" property is a ResourceId that has no account key or password. The ResourceId must include the subscription ID of the storage account, the resource group of the storage account, and the storage account name. This is the same format as the system-assigned managed identity.
+* First, the format of the "credentials" property is a ResourceId that has no account key or password. The ResourceId must include the subscription ID of the storage account, the resource group of the storage account, and the storage account name. This is the same format as for a system-assigned managed identity.
* Second, you'll add an "identity" property that contains the collection of user-assigned managed identities. Only one user-assigned managed identity should be provided when creating the data source. Set it to type "userAssignedIdentities".
-Here is an example of how to create an indexer data source object using the [preview Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) REST API:
+Here's an example of how to create an indexer data source object using the [preview Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) REST API:
```http POST https://[service name].search.windows.net/datasources?api-version=2021-04-30-preview
api-key: [admin key]
The index specifies the fields in a document, attributes, and other constructs that shape the search experience.
-Here's a [Create Index](/rest/api/searchservice/create-index) REST API call with a searchable `content` field to store the text extracted from blobs:
+Here's a [Create Index](/rest/api/searchservice/create-index) REST API call with a searchable `content` field to store the text extracted from blobs:
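A minimal sketch of such an index might look like the following. The field names are illustrative, and a key field is included because every index requires one:

```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-search-index",
    "fields": [
        { "name": "id", "type": "Edm.String", "key": true, "searchable": false },
        { "name": "content", "type": "Edm.String", "searchable": true }
    ]
}
```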
```http POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
api-key: [admin key]
## Create the indexer
-An indexer connects a data source with a target search index, and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create and run the indexer.
+An indexer connects a data source with a target search index, and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create and run the indexer. If the indexer is successful, the connection syntax and role assignments are valid.
Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call with a blob indexer definition. The indexer will run when you submit the request.
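For reference, a sketch of a blob indexer definition might look like the following. The names are placeholders, and the optional "parameters" block shown here assumes you want to extract both content and metadata from blobs:

```http
POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
    "name": "my-blob-indexer",
    "dataSourceName": "my-blob-datasource",
    "targetIndexName": "my-search-index",
    "parameters": {
        "configuration": {
            "dataToExtract": "contentAndMetadata"
        }
    }
}
```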
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
For Azure Cognitive Search, there's currently one built-in definition. It's for
Watch this fast-paced video for an overview of the security architecture and each feature category.
-> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security/player]
+> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security/player]
## See also
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-manage-costs.md
In-place upgrade or downgrade is not supported. Changing a service tier requires
+ Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). + Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). + Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-+ Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
++ Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
search Tutorial Csharp Orders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-orders.md
Consider the following takeaways from this project:
You have completed this series of C# tutorials - you should have gained valuable knowledge of the Azure Cognitive Search APIs.
-For further reference and tutorials, consider browsing [Microsoft Learn](/learn/browse/?products=azure), or the other tutorials in the [Azure Cognitive Search documentation](./index.yml).
+For further reference and tutorials, consider browsing the [Microsoft Learn training catalog](/training/browse/?products=azure) or the other tutorials in the [Azure Cognitive Search documentation](./index.yml).
security Secure Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-design.md
with security best practices on Azure:
assess your own DevOps progression. - [Top 5 security items to consider before pushing to
- production](/learn/modules/top-5-security-items-to-consider/index?WT.mc_id=Learn-Blog-tajanca)
+ production](/training/modules/top-5-security-items-to-consider/index?WT.mc_id=Learn-Blog-tajanca)
shows you how to help secure your web applications on Azure and protect your apps against the most common and dangerous web application attacks.
identities for Azure resources, your Azure web app can access secret
configuration values easily and securely without storing any secrets in your source control or configuration. To learn more, see [Manage secrets in your server apps with Azure Key
-Vault](/learn/modules/manage-secrets-with-azure-key-vault/).
+Vault](/training/modules/manage-secrets-with-azure-key-vault/).
### Implement fail-safe measures
security Ransomware Features Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-features-resources.md
Key Features:
## Additional resources - [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/)-- [Build great solutions with the Microsoft Azure Well-Architected Framework](/learn/paths/azure-well-architected-framework/)
+- [Build great solutions with the Microsoft Azure Well-Architected Framework](/training/paths/azure-well-architected-framework/)
- [Azure Top Security Best Practices](/azure/cloud-adoption-framework/get-started/security#step-1-establish-essential-security-practices) - [Security Baselines](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/bg-p/Microsoft-Security-Baselines) - [Microsoft Azure Resource Center](https://azure.microsoft.com/resources/)
sentinel Billing Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-monitor-costs.md
The daily cap doesn't limit collection of all data types. Security data is exclu
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.-- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
sentinel Billing Reduce Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-reduce-costs.md
Besides for the predefined sets of events that you can select to ingest, such as
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.-- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
Data connectors listed as public preview don't generate cost. Data connectors ge
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).
sentinel Bookmarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/bookmarks.md
In this article, you learned how to run a hunting investigation using bookmarks
- [Proactively hunt for threats](hunting.md) - [Use notebooks to run automated hunting campaigns](notebooks.md)-- [Threat hunting with Microsoft Sentinel (Learn module)](/learn/modules/hunt-threats-sentinel/)
+- [Threat hunting with Microsoft Sentinel (Learn module)](/training/modules/hunt-threats-sentinel/)
sentinel Deploy Side By Side https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/deploy-side-by-side.md
For more information, see:
- [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4) - [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md) - [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md)-- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)-- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
+- [Microsoft Sentinel learning path](/training/paths/security-ops-sentinel/)
+- [SC-200 Microsoft Security Operations Analyst certification](/certifications/exams/sc-200)
- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310) - [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **DstPortNumber** | Optional | Integer | Destination Port number.<br><br>Example: `53` | | <a name="dsthostname"></a>**DstHostname** | Optional | String | The destination device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D`<br><br>**Note**: This value is mandatory if [DstIpAddr](#dstipaddr) is specified. | | <a name="dstdomain"></a>**DstDomain** | Optional | String | The domain of the destination device.<br><br>Example: `Contoso` |
-| <a name="dstdomaintype"></a>**DstDomainType** | Optional | Enumerated | The type of [DstDomain](#dstdomain), if known. Possible values include:<br>- `Windows (contoso\mypc)`<br>- `FQDN (docs.microsoft.com)`<br><br>Required if [DstDomain](#dstdomain) is used. |
+| <a name="dstdomaintype"></a>**DstDomainType** | Optional | Enumerated | The type of [DstDomain](#dstdomain), if known. Possible values include:<br>- `Windows (contoso\mypc)`<br>- `FQDN (learn.microsoft.com)`<br><br>Required if [DstDomain](#dstdomain) is used. |
| **DstFQDN** | Optional | String | The destination device hostname, including domain information when available. <br><br>Example: `Contoso\DESKTOP-1282V4D` <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [DstDomainType](#dstdomaintype) reflects the format used. | | <a name="dstdvcid"></a>**DstDvcId** | Optional | String | The ID of the destination device as reported in the record.<br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` | | **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEidIf`<br><br>If multiple IDs are available, use the first one from the list above, and store the others in the **DstDvcAzureResourceId** or **DstDvcMDEid** fields, respectively.<br><br>Required if **DstDeviceId** is used.|
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
View Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
> [!NOTE] > The **Logs** page in Microsoft Sentinel is based on Azure Monitor's Log Analytics. >
-> For more information, see [Log queries overview](../azure-monitor/logs/log-query-overview.md) in the Azure Monitor documentation and the [Write your first KQL query](/learn/modules/write-first-query-kusto-query-language/) Learn module.
+> For more information, see [Log queries overview](../azure-monitor/logs/log-query-overview.md) in the Azure Monitor documentation and the [Write your first KQL query](/training/modules/write-first-query-kusto-query-language/) Learn module.
> ### Understand alert timestamps
sentinel Kusto Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/kusto-resources.md
Microsoft Sentinel uses Azure Monitor's Log Analytics environment and the Kusto
- [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet) ### Microsoft Sentinel Learn modules-- [Write your first query with Kusto Query Language](/learn/modules/write-first-query-kusto-query-language/)-- [Learning path SC-200: Create queries for Microsoft Sentinel using Kusto Query Language (KQL)](/learn/paths/sc-200-utilize-kql-for-azure-sentinel/)
+- [Write your first query with Kusto Query Language](/training/modules/write-first-query-kusto-query-language/)
+- [Learning path SC-200: Create queries for Microsoft Sentinel using Kusto Query Language (KQL)](/training/paths/sc-200-utilize-kql-for-azure-sentinel/)
## Other resources
Microsoft Sentinel uses Azure Monitor's Log Analytics environment and the Kusto
## Next steps > [!div class="nextstepaction"]
-> [Get certified!](/learn/paths/security-ops-sentinel/)
+> [Get certified!](/training/paths/security-ops-sentinel/)
> [!div class="nextstepaction"] > [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
sentinel Migration Security Operations Center Processes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-security-operations-center-processes.md
For more information, see:
- [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4) - [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md) - [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md)-- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)-- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
+- [Microsoft Sentinel learning path](/training/paths/security-ops-sentinel/)
+- [SC-200 Microsoft Security Operations Analyst certification](/certifications/exams/sc-200)
- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310) - [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
Each schema field has a type. Some have built-in, Log Analytics types, such as `
|**Date/Time** | Depending on the ingestion method capability, use any of the following physical representations in descending priority: <br><br>- Log Analytics built-in datetime type <br>- An integer field using Log Analytics datetime numerical representation. <br>- A string field using Log Analytics datetime numerical representation <br>- A string field storing a supported [Log Analytics date/time format](/azure/data-explorer/kusto/query/scalar-data-types/datetime). | [Log Analytics date and time representation](/azure/kusto/query/scalar-data-types/datetime) is similar but different than Unix time representation. For more information, see the [conversion guidelines](/azure/kusto/query/datetime-timespan-arithmetic). <br><br>**Note**: When applicable, the time should be time zone adjusted. | |**MAC address** | String | Colon-Hexadecimal notation. | |**IP address** |String | Microsoft Sentinel schemas don't have separate IPv4 and IPv6 addresses. Any IP address field might include either an IPv4 address or an IPv6 address, as follows: <br><br>- **IPv4** in a dot-decimal notation.<br>- **IPv6** in 8-hextets notation, allowing for the short form.<br><br>For example:<br>- **IPv4**: `192.168.10.10` <br>- **IPv6**: `FEDC:BA98:7654:3210:FEDC:BA98:7654:3210`<br>- **IPv6 short form**: `1080::8:800:200C:417A` |
-|**FQDN** | String | A fully qualified domain name using a dot notation, for example, `docs.microsoft.com`. For more information, see [The Device entity](#the-device-entity). |
+|**FQDN** | String | A fully qualified domain name using a dot notation, for example, `learn.microsoft.com`. For more information, see [The Device entity](#the-device-entity). |
|<a name="hostname"></a>**Hostname** | String | A hostname which is not an FQDN, includes up to 63 characters including letters, numbers and hyphens. For more information, see [The Device entity](#the-device-entity).| | **DomainType** | Enumerated | The type of domain stored in domain and FQDN fields. For a list of values and more information, see [The Device entity](#the-device-entity). | | **DvcIdType** | Enumerated | The type of the device ID stored in DvcId fields. For a list of allowed values and further information refer to [DvcIdType](#dvcidtype). |
sentinel Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resources.md
Download sample content from the private community GitHub repository to create c
## Next steps > [!div class="nextstepaction"]
-> [Get certified!](/learn/paths/security-ops-sentinel/)
+> [Get certified!](/training/paths/security-ops-sentinel/)
> [!div class="nextstepaction"] > [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
The modules listed here are split into five parts following the life cycle of a
This skill-up training is a level-400 training that's based on the [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/become-a-microsoft-sentinel-ninja-the-complete-level-400/ba-p/1246310). If you don't want to go as deep, or you have a specific issue to resolve, other resources might be more suitable: * Although the skill-up training is extensive, it naturally has to follow a script and can't expand on every topic. See the referenced documentation for information about each article.
-* You can now become certified with the new certification [SC-200: Microsoft Security Operations Analyst](/learn/certifications/exams/sc-200), which covers Microsoft Sentinel. For a broader, higher-level view of the Microsoft Security suite, you might also want to consider [SC-900: Microsoft Security, Compliance, and Identity Fundamentals](/learn/certifications/exams/sc-900) or [AZ-500: Microsoft Azure Security Technologies](/learn/certifications/exams/az-500).
+* You can now become certified with the new certification [SC-200: Microsoft Security Operations Analyst](/certifications/exams/sc-200), which covers Microsoft Sentinel. For a broader, higher-level view of the Microsoft Security suite, you might also want to consider [SC-900: Microsoft Security, Compliance, and Identity Fundamentals](/certifications/exams/sc-900) or [AZ-500: Microsoft Azure Security Technologies](/certifications/exams/az-500).
* If you're already skilled up on Microsoft Sentinel, keep track of [what's new](whats-new.md) or join the [Microsoft Cloud Security Private Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-kibZAPJAVBiU46J6wWF_5URDFSWUhYUldTWjdJNkFMVU1LTEU4VUZHMy4u) program for an earlier view into upcoming releases. * Do you have a feature idea to share with us? Let us know on the [Microsoft Sentinel user voice page](https://feedback.azure.com/d365community/forum/37638d17-0625-ec11-b6e6-000d3a4f07b8). * Are you a premier customer? You might want the on-site or remote, four-day _Microsoft Sentinel Fundamentals Workshop_. Contact your Customer Success Account Manager for more details.
The next section on writing rules explains how to use KQL in the specific contex
* [Must Learn KQL](https://aka.ms/MustLearnKQL): A 20-part KQL series that walks you through the basics of creating your first analytics rule (includes an assessment and certificate) * The Microsoft Sentinel KQL Lab: An interactive lab that teaches KQL with a focus on what you need for Microsoft Sentinel:
- * [Learning module (SC-200 part 4)](/learn/paths/sc-200-utilize-kql-for-azure-sentinel/)
+ * [Learning module (SC-200 part 4)](/training/paths/sc-200-utilize-kql-for-azure-sentinel/)
* [Presentation](https://onedrive.live.com/?authkey=%21AJRxX475AhXGQBE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21740&parId=66C31D2DBF8E0F71%21446&o=OneUp) or [lab URL](https://aka.ms/lademo) * A [Jupyter notebooks version](https://github.com/jjsantanna/azure_sentinel_learn_kql_lab/blob/master/azure_sentinel_learn_kql_lab.ipynb) that lets you test the queries within the notebook * Learning webinar: [YouTube](https://youtu.be/EDCBLULjtCM) or [MP4](https://1drv.ms/v/s!AnEPjr8tHcNmglwAjUjmYy2Qn5J-)
service-bus-messaging Service Bus Php How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-php-how-to-use-queues.md
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
For more information, also visit the [PHP Developer Center](https://azure.micros
[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage [Queues, topics, and subscriptions]: service-bus-queues-topics-subscriptions.md [require_once]: https://php.net/require_once--
service-bus-messaging Service Bus Php How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-php-how-to-use-topics-subscriptions.md
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
try {
catch(ServiceException $e){ // Handle exception based on error codes and messages. // Error codes and messages are here:
- // https://docs.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
+ // https://learn.microsoft.com/rest/api/storageservices/Common-REST-API-Error-Codes
$code = $e->getCode(); $error_message = $e->getMessage(); echo $code.": ".$error_message."<br />";
For more information, see [Queues, topics, and subscriptions][Queues, topics, an
[Queues, topics, and subscriptions]: service-bus-queues-topics-subscriptions.md [sqlfilter]: /dotnet/api/microsoft.servicebus.messaging.sqlfilter [require-once]: https://php.net/require_once
-[Service Bus quotas]: service-bus-quotas.md
+[Service Bus quotas]: service-bus-quotas.md
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-region-support.md
Previously updated : 05/03/2022 Last updated : 09/19/2022 # Service Connector region support
-When you create a service connection with Service Connector, the conceptual connection resource is provisioned into the same region as your compute service instance by default. This page shows the region support information and corresponding behavior of Service Connector.
+When you connect cloud services with Service Connector, the conceptual connection resource is provisioned into the same region as your compute service instance by default. This page shows the region support information.
## Supported regions with regional endpoint If your compute service instance is located in one of the regions that Service Connector supports below, you can use Service Connector to create and manage service connections.
+- Australia Central
- Australia East
+- Australia Southeast
+- Brazil South
- Canada Central
+- Canada East
+- Central India
+- Central US
- East Asia - East US-- East US 2 EUAP
+- East US 2
+- France Central
- Germany West Central - Japan East
+- Japan West
- Korea Central
+- North Central US
- North Europe
+- Norway East
+- South Africa North
+- South Central US
+- South India
+- UAE North
- UK South
+- UK West
- West Central US - West Europe
+- West US
- West US 2
+- West US 3
-## Supported regions with geographical endpoint
+## Regions not supported
-Your compute service instance might be created in a region where Service Connector has geographical region support. It means that your service connection will be created in a different region from your compute instance. In such cases, you'll see a banner providing some details about the region when you create a service connection. The region difference may impact your compliance, data residency, and data latency.
+In regions where Service Connector isn't supported, you'll still see Service Connector in the Azure portal and its commands in the Azure CLI, but you won't be able to create or manage service connections. The product team is actively working to enable more regions.
-|Region | Support Region|
-|-||
-|Australia Central |Australia East |
-|Australia Southeast|Australia East |
-|Central US |West US 2 |
-|East US 2 |East US |
-|Japan West |Japan East |
-|UK West |UK South |
-|North Central US |East US |
-|West US |East US |
-|West US 3 |West US 2 |
-|South Central US |West US 2 |
+## Next steps
-## Regions not supported
+Go to the concept article below to learn more about Service Connector.
-In regions where Service Connector isn't supported, you'll still find Service Connector CLI commands and the portal node, but you won't be able to create or manage service connections. The product team is working actively to enable more regions.
+> [!div class="nextstepaction"]
+> [High availability](./concept-availability.md)
service-fabric How To Managed Cluster Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-dedicated-hosts.md
Create an Azure Service Fabric managed cluster with node type(s) configured to r
{ "code": "QuotaExceeded", "message": "Operation could not be completed as it results in exceeding approved standardDSv3Family Cores quota.
- Additional Required: 320, (Minimum) New Limit Required: 320. Submit a request for Quota increase [here](https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/). Please read more about quota limits [here](https://docs.microsoft.com/azure/azure-supportability/per-vm-quota-requests)"
+ Additional Required: 320, (Minimum) New Limit Required: 320. Submit a request for Quota increase [here](https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/). Please read more about quota limits [here](https://learn.microsoft.com/azure/azure-supportability/per-vm-quota-requests)"
} ``` ## Next steps > [!div class="nextstepaction"]
-> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
+> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
service-fabric Service Fabric Concepts Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-scalability.md
Last updated 07/14/2022
# Scaling in Service Fabric Azure Service Fabric makes it easy to build scalable applications by managing the services, partitions, and replicas on the nodes of a cluster. Running many workloads on the same hardware enables maximum resource utilization, but also provides flexibility in terms of how you choose to scale your workloads. This Channel 9 video describes how you can build scalable microservices applications:
-> [!VIDEO https://docs.microsoft.com/Events/Connect/2017/T116/player]
+> [!VIDEO https://learn.microsoft.com/Events/Connect/2017/T116/player]
Scaling in Service Fabric is accomplished several different ways:
service-fabric Service Fabric Reliable Actors Enumerate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-enumerate.md
List<Guid> actorIds = new();
foreach(var partition in partitions) { //Retrieve the partition information
- Int64RangePartitionInformation partitionInformation = (Int64RangePartitionInformation)partition.PartitionInformation; //Actors are restricted to the uniform Int64 scheme per https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-actors-introduction#distribution-and-failover
+ Int64RangePartitionInformation partitionInformation = (Int64RangePartitionInformation)partition.PartitionInformation; //Actors are restricted to the uniform Int64 scheme per https://learn.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction#distribution-and-failover
IActorService actorServiceProxy = ActorServiceProxy.Create(serviceName, partitionInformation.LowKey); ContinuationToken? continuationToken = null;
spring-apps How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-spring-cloud-gateway.md
The route config definition includes the following parts:
- OpenAPI URI: The URI points to an OpenAPI specification. Both OpenAPI 2.0 and OpenAPI 3.0 specs are supported. The specification can be shown in API portal to try out. Two types of URI are accepted. The first type of URI is a public endpoint like `https://petstore3.swagger.io/api/v3/openapi.json`. The second type of URI is a constructed URL `http://<app-name>/{relative-path-to-OpenAPI-spec}`, where `app-name` is the name of an application in Azure Spring Apps that includes the API definition. - routes: A list of route rules about how the traffic goes to one app.
+- protocol: The backend protocol of the application to which Spring Cloud Gateway routes traffic. Its supported values are `HTTP` and `HTTPS`; the default is `HTTP`. To secure traffic from Spring Cloud Gateway to an HTTPS-enabled application, set the protocol to `HTTPS` in your route configuration.
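As a rough, non-authoritative sketch (the route fields beyond `title` and `protocol` are assumptions for illustration, not taken from the article), a routes file that forces HTTPS to the backend app could be generated like this:

```python
import json

# Hypothetical route config: the predicate and filter entries below are
# assumptions for illustration only; `protocol` is set to HTTPS so the
# gateway talks to an HTTPS-enabled backend app.
route_config = {
    "open_api": {"uri": "https://petstore3.swagger.io/api/v3/openapi.json"},
    "protocol": "HTTPS",
    "routes": [
        {
            "title": "Customers service",
            "predicates": ["Path=/api/customers/**"],  # assumed field
            "filters": ["StripPrefix=1"],              # assumed field
        }
    ],
}

# Write the file that is later passed to the --routes-file parameter.
with open("customers-routes.json", "w") as fh:
    json.dump(route_config, fh, indent=2)
```

The resulting JSON mirrors the sample shown below, with `protocol` switched to `HTTPS`.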
Use the following command to create a route config. The `--app-name` value should be the name of an app hosted in Azure Spring Apps that the requests will route to.
Here's a sample of the JSON file that is passed to the `--routes-file` parameter
"open_api": { "uri": "<OpenAPI-URI>" },
+ "protocol": "<protocol-of-routed-app>",
"routes": [ { "title": "<title-of-route>",
Use the following steps to create an example application using Spring Cloud Gate
```json {
+ "protocol": "HTTP",
"routes": [ { "title": "Customers service",
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md
Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMw
## Next steps -- [Deploy Spring microservices to Azure](/learn/modules/azure-spring-cloud-workshop/)
+- [Deploy Spring microservices to Azure](/training/modules/azure-spring-cloud-workshop/)
- [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml) - [Azure Spring Apps reference architecture](reference-architecture.md) - Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-cloud), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-cloud) applications to Azure Spring Apps
static-web-apps Custom Domain External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain-external.md
This guide demonstrates how to configure your domain name with the `www` subdoma
## Walkthrough video
-> [!VIDEO https://docs.microsoft.com/Shows/5-Things/Configuring-a-custom-domain-with-Azure-Static-Web-Apps/player?format=ny]
+> [!VIDEO https://learn.microsoft.com/Shows/5-Things/Configuring-a-custom-domain-with-Azure-Static-Web-Apps/player?format=ny]
## Get static web app URL
static-web-apps Deploy Nuxtjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nuxtjs.md
Title: "Tutorial: Deploy static-rendered Nuxt.js websites on Azure Static Web Apps"
-description: "Generate and deploy Nuxt.js dynamic sites with Azure Static Web Apps."
+ Title: "Tutorial: Deploy Nuxt sites with universal rendering on Azure Static Web Apps"
+description: "Generate and deploy Nuxt 3 sites with universal rendering on Azure Static Web Apps."
Previously updated : 05/08/2020 Last updated : 09/01/2022
-# Deploy static-rendered Nuxt.js websites on Azure Static Web Apps
+# Deploy Nuxt 3 sites with universal rendering on Azure Static Web Apps
-In this tutorial, you learn to deploy a [Nuxt.js](https://nuxtjs.org) generated static website to [Azure Static Web Apps](overview.md). To begin, you learn to set up, configure, and deploy a Nuxt.js app. During this process, you also learn to deal with common challenges often faced when generating static pages with Nuxt.js
+In this tutorial, you learn to deploy a [Nuxt 3](https://v3.nuxtjs.org/) application to [Azure Static Web Apps](overview.md). Nuxt 3 supports [universal (client-side and server-side) rendering](https://v3.nuxtjs.org/guide/concepts/rendering/#universal-rendering), including server and API routes. Without extra configuration, you can deploy Nuxt 3 apps with universal rendering to Azure Static Web Apps. When the app is built in the Static Web Apps GitHub Action or Azure Pipelines task, Nuxt 3 automatically converts it into static assets and an Azure Functions app that are compatible with Azure Static Web Apps.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). - A GitHub account. [Create an account for free](https://github.com/join).-- [Node.js](https://nodejs.org) installed.
+- [Node.js](https://nodejs.org) 16 or later installed.
-## Set up a Nuxt.js app
+## Set up a Nuxt 3 app
-You can set up a new Nuxt.js project using `create-nuxt-app`. Instead of a new project, in this tutorial you begin by cloning an existing repository. This repository is set up to demonstrate how to deploy a dynamic Nuxt.js app as a static site.
+You can set up a new Nuxt project using `npx nuxi init nuxt-app`. Instead of using a new project, this tutorial uses an existing repository set up to demonstrate how to deploy a Nuxt 3 site with universal rendering on Azure Static Web Apps.
-1. Create a new repository under your GitHub account from a template repository.
-1. Navigate to [http://github.com/staticwebdev/nuxtjs-starter/generate](https://github.com/login?return_to=/staticwebdev/nuxtjs-starter/generate)
-1. Name the repository **nuxtjs-starter**.
+1. Navigate to [http://github.com/staticwebdev/nuxt-3-starter/generate](https://github.com/login?return_to=/staticwebdev/nuxt-3-starter/generate).
+1. Name the repository **nuxt-3-starter**.
1. Next, clone the new repo to your machine. Make sure to replace <YOUR_GITHUB_ACCOUNT_NAME> with your account name. ```bash
- git clone http://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/nuxtjs-starter
+ git clone http://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/nuxt-3-starter
``` 1. Navigate to the newly cloned Nuxt.js app: ```bash
- cd nuxtjs-starter
+ cd nuxt-3-starter
``` 1. Install dependencies:
You can set up a new Nuxt.js project using `create-nuxt-app`. Instead of a new p
1. Start Nuxt.js app in development: ```bash
- npm run dev
+ npm run dev -- -o
```
-Navigate to `http://localhost:3000` to open the app, where you should see the following website open in your preferred browser:
+Navigate to `http://localhost:3000` to open the app, where you should see the following website open in your preferred browser. Select the buttons to invoke server and API routes.
-When you click on a framework/library, you should see a details page about the selected item:
+## Deploy your Nuxt 3 site
-
-## Generate a static website from Nuxt.js build
-
-When you build a Nuxt.js site using `npm run build`, the app is built as a traditional web app, not a static site. To generate a static site, use the following application configuration.
-
-1. Update the _package.json_'s build script to only generate a static site using the `nuxt generate` command:
-
- ```json
- "scripts": {
- "dev": "nuxt dev",
- "build": "nuxt generate"
- },
- ```
-
- Now with this command in place, Static Web Apps will run the `build` script every time you push a commit.
-
-1. Generate a static site:
-
- ```bash
- npm run build
- ```
-
- Nuxt.js will generate the static site and copy it into a _dist_ folder at the root of your working directory.
-
- > [!NOTE]
- > This folder is listed in the _.gitignore_ file because it should be generated by CI/CD when you deploy.
-
-## Deploy your static website
-
-The following steps show how to link the app you just pushed to GitHub to Azure Static Web Apps. Once in Azure, you can deploy the application to a production environment.
+The following steps show how to create an Azure Static Web Apps resource and configure it to deploy your app from GitHub.
### Create an Azure Static Web Apps resource
The following steps show how to link the app you just pushed to GitHub to Azure
| | | | _Subscription_ | Your Azure subscription name. | | _Resource group_ | **my-nuxtjs-group** |
- | _Name_ | **my-nuxtjs-app** |
+ | _Name_ | **my-nuxt3-app** |
| _Plan type_ | **Free** | | _Region for Azure Functions API and staging environments_ | Select a region closest to you. | | _Source_ | **GitHub** |
The following steps show how to link the app you just pushed to GitHub to Azure
1. In the _Build Details_ section, select **Custom** from the _Build Presets_ drop-down and keep the default values.
-1. In the _App location_, enter **./** in the box.
-1. Leave the _Api location_ box empty.
-1. In the _Output location_ box, enter **dist**.
+1. In the _App location_, enter **/** in the box.
+1. In the _Api location_, enter **.output/server** in the box.
+1. In the _Output location_, enter **.output/public** in the box.
### Review and create
-1. Select the **Review + Create** button to verify the details are all correct.
+1. Select **Review + Create** to verify the details are all correct.
-1. Select **Create** to start the creation of the App Service Static Web App and provision a GitHub Actions for deployment.
+1. Select **Create** to start the creation of the static web app and provision a GitHub Actions for deployment.
-1. Once the deployment completes click, **Go to resource**.
+1. Once the deployment completes, select **Go to resource**.
-1. On the _Overview_ window, click the *URL* link to open your deployed application.
+1. On the _Overview_ window, select the *URL* link to open your deployed application.
-If the website does note immediately load, then the background GitHub Actions workflow is still running. Once the workflow is complete you can then click refresh the browser to view your web app.
+If the website does not immediately load, the background GitHub Actions workflow is still running. Once the workflow is complete, you can refresh the browser to view your web app.
You can check the status of the Actions workflows by navigating to the Actions for your repository: ```url
-https://github.com/<YOUR_GITHUB_USERNAME>/nuxtjs-starter/actions
+https://github.com/<YOUR_GITHUB_USERNAME>/nuxt-3-starter/actions
```
-### Sync changes
-
-When you created the app, Azure Static Web Apps created a GitHub Actions workflow file in your repository. You need to bring this file down to your local repository so your git history is synchronized.
+### Synchronize changes
-Return to the terminal and run the following command `git pull origin main`.
+When you created the app, Azure Static Web Apps created a GitHub Actions workflow file in your repository. Return to the terminal and run the following command to pull the commit containing the new file.
-## Configure dynamic routes
-
-Navigate to the newly-deployed site and click on one of the framework or library logos. Instead of getting a details page, you get a 404 error page.
--
-The reason for this is, Nuxt.js generated the static site, it only did so for the home page. Nuxt.js can generate equivalent static `.html` files for every `.vue` pages file, but there's an exception.
-
-If the page is a dynamic page, for example `_id.vue`, it won't have enough information to generate a static HTML from such dynamic page. You'll have to explicitly provide the possible paths for the dynamic routes.
-
-## Generate static pages from dynamic routes
-
-1. Update the _nuxt.config.js_ file so that Nuxt.js uses a list of all available data to generate static pages for each framework/library:
-
- ```javascript
- import { projects } from "./utils/projectsData";
-
- export default {
- mode: "universal",
-
- //...truncated
-
- generate: {
- async routes() {
- const paths = [];
-
- projects.forEach(project => {
- paths.push(`/project/${project.slug}`);
- });
-
- return paths;
- }
- }
- };
- ```
-
- > [!NOTE]
- > `routes` is an async function, so you can make a request to an API in this function and use the returned list to generate the paths.
+```bash
+git pull
+```
-2. Push the new changes to your GitHub repository and wait for a few minutes while GitHub Actions builds your site again. After the build is complete, the 404 error disappears.
+Make changes to the app by updating the code and pushing it to GitHub. GitHub Actions automatically builds and deploys the app.
- :::image type="content" source="media/deploy-nuxtjs/404-in-production-fixed.png" alt-text="404 on dynamic routes fixed":::
+For more information, see the Azure Static Web Apps Nuxt 3 deployment preset [documentation](https://v3.nuxtjs.org/guide/deploy/providers/azure/).
> [!div class="nextstepaction"] > [Set up a custom domain](custom-domain.md)
static-web-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/overview.md
With Static Web Apps, static assets are separated from a traditional web server
## What you can do with Static Web Apps -- **Build modern web applications** with JavaScript frameworks and libraries like [Angular](getting-started.md?tabs=angular), [React](getting-started.md?tabs=react), [Svelte](/learn/modules/publish-app-service-static-web-app-api/), [Vue](getting-started.md?tabs=vue), or using [Blazor](./deploy-blazor.md) to create WebAssembly applications, with an [Azure Functions](apis-functions.md) back-end.
+- **Build modern web applications** with JavaScript frameworks and libraries like [Angular](getting-started.md?tabs=angular), [React](getting-started.md?tabs=react), [Svelte](/training/modules/publish-app-service-static-web-app-api/), [Vue](getting-started.md?tabs=vue), or using [Blazor](./deploy-blazor.md) to create WebAssembly applications, with an [Azure Functions](apis-functions.md) back-end.
- **Publish static sites** with frameworks like [Gatsby](publish-gatsby.md), [Hugo](publish-hugo.md), [VuePress](publish-vuepress.md). - **Deploy web applications** with frameworks like [Next.js](deploy-nextjs.md) and [Nuxt.js](deploy-nuxtjs.md).
storage-mover Job Definition Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/job-definition-create.md
Previously updated : 09/14/2022 Last updated : 09/20/2022 <!--
REVIEW Stephen/Fabian: Reviewed - Stephen
REVIEW Engineering: not reviewed EDIT PASS: started
-Initial doc score: 100 (413 words and 0 issues)
+Initial doc score: 100 (1532 words and 0 issues)
!######################################################## -->
There are three prerequisites to the definition the migration of your source sha
## Create and start a job definition
-A job definition is created within a project resource. If you've followed the examples contained in previous articles, you may have an existing project within a previously deployed storage mover resource.
+A job definition is created within a project resource. Creating a job definition requires you to select or configure a project, a source and target storage endpoint, and a job name. If you've followed the examples contained in previous articles, you may have an existing project within a previously deployed storage mover resource. Follow the steps below to add a job definition to a project.
-Creating a job definition requires you to decide on a project, a source storage endpoint, a target storage endpoint, and a name. Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) to choose a supported name. Storage endpoints are separate resources in your storage mover and must be created first, before you can create a job definition that only references them.
+Storage endpoints are separate resources in your storage mover. Endpoints must be created before they can be referenced by a job definition.
+
+Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) for help with choosing supported resource names.
+
+### [Azure portal](#tab/portal)
+
+1. Navigate to the **Project explorer** page within the [Azure portal](https://portal.azure.com) to view a list of available projects. If no projects exist, or you need to create a new project, you can follow the steps included in the [Manage Azure Storage Mover projects](project-manage.md) article.
+
+ :::image type="content" source="media/job-definition-create/project-explorer-sml.png" alt-text="Screen capture of the Project Explorer's Overview tab within the Azure portal." lightbox="media/job-definition-create/project-explorer-lrg.png":::
+
+    From within the project explorer pane or the results list, select the name of an available project. The project's properties and job summary data are displayed in the **details** pane, along with any existing job definitions and the status of any deployed jobs.
+
+ In the actions menu within the project's details pane, select **Create job definition** to open the **Create a migration job** window. If no job definitions exist within the project, you can also select **Create a job definition** near the bottom of the pane, as shown in the example below.
+
+ :::image type="content" source="media/job-definition-create/project-selected-sml.png" alt-text="Screen capture of the Project Explorer's Overview tab within the Azure portal highlighting the use of filters." lightbox="media/job-definition-create/project-selected-lrg.png":::
+
+1. In the **Basics** tab of the **Create a migration job** window, enter a value in the required **Name** field. You may also add an optional description of less than 1024 characters. Finally, in the **Migration agent** section, select the agent that will perform the data migration. Choose an agent located as near to your data source as possible and with resources appropriate to the size and complexity of the job; you can assign a different agent to the job later if needed. Select **Next** to open the **Source** tab.
+
+ :::image type="content" source="media/job-definition-create/tab-basics-sml.png" alt-text="Screen capture of the migration job's Basics tab, showing the location of the data fields." lightbox="media/job-definition-create/tab-basics-lrg.png":::
+
+1. In the **Source** tab, select an option within the **Source endpoint** field.
+
+ If you want to use a source endpoint you've previously defined, choose the **Select an existing endpoint** option. Next, select the **Select an existing endpoint as a source** link to open the source endpoint pane. This pane displays a detailed list of your previously defined endpoints. Select the appropriate endpoint and select **Select** to return to the **Source** tab and populate the **Existing source endpoint** field.
+
+ :::image type="content" source="media/job-definition-create/endpoint-source-existing-sml.png" alt-text="Screen capture of the Source tab illustrating the location of the Existing Source Endpoint field." border="false" lightbox="media/job-definition-create/endpoint-source-existing-lrg.png":::
+
+ To define a new source endpoint from which to migrate, select the **Create a new endpoint** option. Next, provide values for the required **Host name or IP**, **Share name**, and **Protocol version** fields. You may also add an optional description value of less than 1024 characters.
+
+ :::image type="content" source="media/job-definition-create/endpoint-source-new-sml.png" alt-text="Screen capture of the Source tab illustrating the location of the New Source Endpoint fields." lightbox="media/job-definition-create/endpoint-source-new-lrg.png":::
+
+    By default, migration jobs start from the root of your share. However, if your use case involves copying data from a specific path within your source share, you can provide the path in the **Sub-path** field. Supplying this value starts the data migration from the location you've specified. If the sub-path you've specified isn't found, no data is copied.
+
+ Prior to creating an endpoint and a job resource, it's important to verify that the path you've provided is correct and that the data is accessible. You're unable to modify endpoints or job resources after they're created. If the specified path is wrong, you'll need to delete the resources and re-create them.
+
+ Values for host, share name, and subpath are concatenated to form the full migration source path. The path is displayed in the **Full path** field within the **Verify full path** section. Copy the path provided and verify that you're able to access it before committing your changes.
+
+ After you've confirmed that the share is accessible, select **Next** to save your source endpoint settings and begin defining your target.
+
+1. In the **Target** tab, select an option for the **Target endpoint** field.
+
+    As with the source endpoint, choose the **Select an existing endpoint reference** option if you want to use a previously defined endpoint. Next, select the **Select an existing endpoint as a target** link to open the target endpoint pane. A detailed list of your previously defined endpoints is displayed. First, select the desired endpoint, then select **Select** to populate the **Existing target endpoint** field and return to the **Target** tab.
+
+ :::image type="content" source="media/job-definition-create/endpoint-target-existing-sml.png" alt-text="Screen capture of the Target tab illustrating the location of the Existing Target Endpoint field." border="false" lightbox="media/job-definition-create/endpoint-target-existing-lrg.png":::
+
+ Similarly, to define a new target endpoint, choose the **Create a new endpoint** option. Next, select values from the drop-down lists for the required **Subscription**, **Storage account**, and **Container** fields. You may also add an optional description value of less than 1024 characters.
+
+ :::image type="content" source="media/job-definition-create/endpoint-target-new-sml.png" alt-text="Screen capture of the Target tab illustrating the location of the New Target Endpoint fields." lightbox="media/job-definition-create/endpoint-target-new-lrg.png":::
+
+    A target subpath value can be used to specify a location within the target container where your migrated data will be copied. The subpath value is relative to the container's root. If you omit the subpath value, the data is copied to the container root; if you provide a subpath that doesn't already exist, a new subfolder is created.
+
+ After ensuring the accuracy of your settings, select **Next** to continue.
+
+1. Within the **Settings** tab, take note of the settings associated with the **Copy mode** and **Migration outcomes**. The service's **copy mode** will affect the behavior of the migration engine when files or folders change between copy iterations.
+
+ The current release of Azure Storage Mover only supports **merge** mode.
+
+    - Files will be kept in the target, even if they don't exist in the source.
+ - Files with matching names and paths will be updated to match the source.
+ - Folder renames between copies may lead to duplicate content in the target.
+
+ **Migration outcomes** are based upon the specific storage types of the source and target endpoints. For example, because blob storage only supports "virtual" folders, source files in folders will have their paths prepended to their names and placed in a flat list within a blob container. Empty folders will be represented as an empty blob in the target. Source folder metadata will be persisted in the custom metadata field of a blob, as they are with files.
+
+ After viewing the effects of the copy mode and migration outcomes, select **Next** to review the values from the previous tabs.
+
+1. Review the job name and description, and the source and target storage endpoint settings. If needed, use the **Previous** and **Next** options to navigate through the tabs and correct any mistakes. Finally, select **Create** to provision the job definition.
+
+ :::image type="content" source="media/job-definition-create/review-sml.png" alt-text="Screen capture of the Review tab illustrating the location of the fields and settings." lightbox="media/job-definition-create/review-lrg.png":::
+
+### [PowerShell](#tab/powershell)
You'll need to use several cmdlets to create a new job definition.
Start-AzStorageMoverJobDefinition `
``` ++ ## Next steps
-Advance to the next article to learn how to create...
+Now that you've created a job definition with source and target endpoints, you can estimate how long your migration job will take by reviewing the Azure Storage Mover scale and performance targets in the article below.
> [!div class="nextstepaction"]
-> [Prepare Haushaltswaffeln for Fabian and Stephen](service-overview.md)
+> [Azure Storage Mover scale and performance targets](performance-targets.md)
storage-mover Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/release-notes.md
New agent versions will be released on Microsoft Download Center. [https://aka.m
#### Lifecycle and change management guarantees
-Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements. This means that a specific Azure Storage Mover agent version can only be supported for a limited time. To facilitate your deployment, the following rules guarantee you have enough time, and notification to accommodate agent updates/upgrades in your change management process:
+Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements. Azure Storage Mover agent versions can only be supported for a limited time. To facilitate your deployment, the following rules guarantee you have enough time, and notification to accommodate agent updates/upgrades in your change management process:
- Major versions are supported for at least six months from the date of initial release. - We guarantee there's an overlap of at least three months between the support of major agent versions. - Warnings are issued for registered servers using a soon-to-be expired agent at least three months prior to expiration. You can check if a registered server is using an older version of the agent in the registered agents section of a storage mover resource.
-## 2022, September 15
+## 2022 September 15
Initial public preview release notes for:
Supports merging content from the source to the target:
- Files with matching names and paths will be updated to match the source. - Folder renames between copies may lead to duplicate content in the target.
+### Service
+
+- When a job is started without the agent having permissions to the target storage and the job is immediately canceled, the job might not shut down gracefully and can remain in the `Cancel requested` state indefinitely. The only mitigation at the moment is to delete the job definition and re-create it.
+- The Storage Mover service is currently not resilient to a zonal outage within the selected region. Appropriate configuration steps to achieve zonal redundancy are underway.
+ ### Agent -- The storage mover agent appliance VM is currently only tested and supported as a Version 1 Windows Hyper-V VM.-- Re-registration of a previously registered agent is currently not supported.
+- The storage mover agent appliance VM is currently only tested and supported as a `Version 1` Windows Hyper-V VM.
+- Re-registration of a previously registered agent is currently not supported. [Download a new agent image](https://aka.ms/StorageMover/agent) instead.
+- When you register an agent, a hybrid compute resource is also deployed into the same resource group that the storage mover resource resides in. In some cases, unregistering the server doesn't remove the agent's hybrid compute resource. Admins must manually remove it to complete unregistration of the agent and to remove all permissions to target storage that the agent previously held.
- Copy logs aren't configurable to be emitted to Azure and must be accessed locally. To access copy logs on the agent:
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
The following list describes features and capabilities that are available in the
- **Inventory reports for blobs and containers**
- You can generate inventory reports for blobs and containers. A report for blobs can contain base blobs, snapshots, blob versions and their associated properties such as creation time, last modified time. A report for containers describes containers and their associated properties such as immutability policy status, legal hold status.
+ You can generate inventory reports for blobs and containers. A report for blobs can contain base blobs, snapshots, and blob versions, along with their associated properties such as creation time, last modified time, and content length. A report for containers describes containers and their associated properties, such as immutability policy status and legal hold status. Currently, the report doesn't have an option to include soft-deleted blobs or soft-deleted containers.
- **Custom Schema**
storage Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-protection-overview.md
Previously updated : 09/14/2022 Last updated : 09/19/2022
The following table summarizes the options available in Azure Storage for common
|--|--|--|--|--| | Prevent a storage account from being deleted or modified. | Azure Resource Manager lock<br />[Learn more...](../common/lock-account-resource.md) | Lock all of your storage accounts with an Azure Resource Manager lock to prevent deletion of the storage account. | Protects the storage account against deletion or configuration changes.<br /><br />Doesn't protect containers or blobs in the account from being deleted or overwritten. | Yes | | Prevent a blob version from being deleted for an interval that you control. | Immutability policy on a blob version<br />[Learn more...](immutable-storage-overview.md) | Set an immutability policy on an individual blob version to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a blob version from being deleted and its metadata from being overwritten. An overwrite operation creates a new version.<br /><br />If at least one container has version-level immutability enabled, the storage account is also protected from deletion. Container deletion fails if at least one blob exists in the container. | No |
-| Prevent a container and its blobs from being deleted or modified for an interval that you control. | Immutability policy on a container<br />[Learn more...](immutable-storage-overview.md) | Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a container and its blobs from all deletes and overwrites.<br /><br />When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set aren't protected from deletion. | Yes, in preview |
+| Prevent a container and its blobs from being deleted or modified for an interval that you control. | Immutability policy on a container<br />[Learn more...](immutable-storage-overview.md) | Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a container and its blobs from all deletes and overwrites.<br /><br />When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set aren't protected from deletion. | Yes |
| Restore a deleted container within a specified interval. | Container soft delete<br />[Learn more...](soft-delete-container-overview.md) | Enable container soft delete for all storage accounts, with a minimum retention interval of seven days.<br /><br />Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container.<br /><br />Store containers that require different retention periods in separate storage accounts. | A deleted container and its contents may be restored within the retention period.<br /><br />Only container-level operations (for example, [Delete Container](/rest/api/storageservices/delete-container)) can be restored. Container soft delete doesn't enable you to restore an individual blob in the container if that blob is deleted. | Yes | | Automatically save the state of a blob in a previous version when it's overwritten. | Blob versioning<br />[Learn more...](versioning-overview.md) | Enable blob versioning, together with container soft delete and blob soft delete, for storage accounts where you need optimal protection for blob data.<br /><br />Store blob data that doesn't require versioning in a separate account to limit costs. | Every blob write operation creates a new version. The current version of a blob may be restored from a previous version if the current version is deleted or overwritten. | No | | Restore a deleted blob or blob version within a specified interval. | Blob soft delete<br />[Learn more...](soft-delete-blob-overview.md) | Enable blob soft delete for all storage accounts, with a minimum retention interval of seven days.<br /><br />Enable blob versioning and container soft delete together with blob soft delete for optimal protection of blob data.<br /><br />Store blobs that require different retention periods in separate storage accounts. | A deleted blob or blob version may be restored within the retention period. | Yes |
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-overview.md
Previously updated : 07/13/2022 Last updated : 09/20/2022
Encryption scopes enable you to manage encryption with a key that is scoped to a
For more information about working with encryption scopes, see [Create and manage encryption scopes](encryption-scope-manage.md).
+> [!IMPORTANT]
+> Encryption scopes are in preview for storage accounts with a hierarchical namespace enabled. The preview supports REST, HDFS, NFSv3 and SFTP protocols.
+> The preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## How encryption scopes work By default, a storage account is encrypted with a key that is scoped to the entire storage account. When you define an encryption scope, you specify a key that may be scoped to a container or an individual blob. When the encryption scope is applied to a blob, the blob is encrypted with that key. When the encryption scope is applied to a container, it serves as the default scope for blobs in that container, so that all blobs that are uploaded to that container may be encrypted with the same key. The container can be configured to enforce the default encryption scope for all blobs in the container, or to permit an individual blob to be uploaded to the container with an encryption scope other than the default.
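As an illustrative sketch only (not part of the article's own samples), the following JavaScript snippet shows how a blob-level scope might be applied at upload time. It assumes a pre-created encryption scope named `my-encryption-scope`, a version of `@azure/storage-blob` that exposes the `encryptionScope` upload option, and hypothetical container and blob names.

```javascript
const { BlobServiceClient } = require('@azure/storage-blob');
const { DefaultAzureCredential } = require('@azure/identity');

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);

async function uploadWithEncryptionScope() {
  // Hypothetical container and blob names, used for illustration only
  const containerClient = blobServiceClient.getContainerClient('sample-container');
  const blockBlobClient = containerClient.getBlockBlobClient('sample-blob.txt');

  const content = 'Data protected by a scoped key';

  // 'my-encryption-scope' is an assumed, pre-created encryption scope.
  // A blob uploaded with this option is encrypted with the key that the scope defines.
  await blockBlobClient.upload(content, Buffer.byteLength(content), {
    encryptionScope: 'my-encryption-scope'
  });
}

uploadWithEncryptionScope().catch((err) => console.error(err.message));
```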
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
Previously updated : 09/14/2022 Last updated : 09/19/2022
The following table provides a summary of protections provided by container-leve
Immutability policies are supported for both new and existing storage accounts. The following table shows which types of storage accounts are supported for each type of policy:
-| Type of immutability policy | Scope of policy | Types of storage accounts supported | Supports hierarchical namespace (preview) |
+| Type of immutability policy | Scope of policy | Types of storage accounts supported | Supports hierarchical namespace |
|--|--|--|--| | Time-based retention policy | Version-level scope | General-purpose v2<br />Premium block blob | No | | Time-based retention policy | Container-level scope | General-purpose v2<br />Premium block blob<br />General-purpose v1 (legacy)<sup>1</sup><br> Blob storage (legacy) | Yes |
All redundancy configurations support immutable storage. For geo-redundant confi
### Hierarchical namespace support
-Immutable storage support for accounts with a hierarchical namespace is in preview. To enroll in the preview, see [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9iuLyDgXDNIkMaAAVSMpJxUMVdIOUNDMlNESUlJRVNWOExJVUoxME1CMS4u).
-
-Keep in mind that you cannot rename or move a blob when the blob is in the immutable state and the account has a hierarchical namespace enabled. Both the blob name and the directory structure provide essential container-level data that cannot be modified once the immutable policy is in place.
-
-> [!IMPORTANT]
-> Immutable storage for Azure Blob Storage in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Accounts that have a hierarchical namespace support immutability policies that are scoped to the container. However, you cannot rename or move a blob when the blob is in the immutable state and the account has a hierarchical namespace enabled. Both the blob name and the directory structure provide essential container-level data that cannot be modified once the immutable policy is in place.
## Recommended blob types
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Get started with any of these guides.
| Guide | Description | |||
-| [Gather metrics from your Azure Blob Storage containers](/learn/modules/gather-metrics-blob-storage/) | Create charts that show metrics (Contains step-by-step guidance). |
-| [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
+| [Gather metrics from your Azure Blob Storage containers](/training/modules/gather-metrics-blob-storage/) | Create charts that show metrics (Contains step-by-step guidance). |
+| [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability | | [Best practices for monitoring Azure Blob Storage](blob-storage-monitoring-scenarios.md) | Guidance for common monitoring and troubleshooting scenarios. | | [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer.
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
You can also open a stream to read from a blob. The stream will only download th
## Download to a file path
-The following example downloads a blob by using a file path:
+The following example downloads a blob by using a file path. If the specified directory doesn't exist, the example handles the `DirectoryNotFoundException` and notifies the user.
```csharp public static async Task DownloadBlob(BlobClient blobClient, string localFilePath) {
- await blobClient.DownloadToAsync(localFilePath);
+ try
+ {
+ await blobClient.DownloadToAsync(localFilePath);
+ }
+ catch (DirectoryNotFoundException ex)
+ {
+ // Let the user know that the directory does not exist
+ Console.WriteLine($"Directory not found: {ex.Message}");
+ }
} ```
+If the file already exists at `localFilePath`, it will be overwritten by default during subsequent downloads.
+ ## Download to a stream
-The following example downloads a blob by creating a [Stream](/dotnet/api/system.io.stream) object and then downloading to that stream.
+The following example downloads a blob by creating a [Stream](/dotnet/api/system.io.stream) object and then downloading to that stream. If the specified directory doesn't exist, the example handles the `DirectoryNotFoundException` and notifies the user.
```csharp public static async Task DownloadToStream(BlobClient blobClient, string localFilePath) {
- FileStream fileStream = File.OpenWrite(localFilePath);
- await blobClient.DownloadToAsync(fileStream);
- fileStream.Close();
+ try
+ {
+ // Dispose of the stream when the download completes, even if an exception is thrown
+ using FileStream fileStream = File.OpenWrite(localFilePath);
+ await blobClient.DownloadToAsync(fileStream);
+ }
+ catch (DirectoryNotFoundException ex)
+ {
+ // Let the user know that the directory does not exist
+ Console.WriteLine($"Directory not found: {ex.Message}");
+ }
} ```
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
Previously updated : 07/06/2022 Last updated : 09/19/2022
The [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) o
## Set up your project
-1. Open a command prompt and change into your project folder:
+1. Open a command prompt and change into your project folder. Change `YOUR-DIRECTORY` to your folder name:
```bash cd YOUR-DIRECTORY
The [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) o
npm install @azure/identity ```
-1. In your `index.js` file, add the package:
+## Authenticate to Azure with passwordless credential
- ```javascript
- const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');
+Azure Active Directory (Azure AD) provides the most secure connection by managing the connection identity ([**managed identity**](../../active-directory/managed-identities-azure-resources/overview.md)). This **passwordless** functionality allows you to develop an application that doesn't require any secrets (keys or connection strings) stored in the code.
- // optional but recommended - connect with managed identity (Azure AD)
- const { DefaultAzureCredential } = require('@azure/identity');
- ```
+### Set up identity access to the Azure cloud
+
+To connect to Azure without passwords, you need to set up a new Azure identity or use an existing one. Once the identity is set up, make sure to assign it the appropriate roles.
-## Connect with Azure AD
+To authorize passwordless access with Azure AD, you'll need to use an Azure credential. Which type of credential you need depends on where your application runs. Use this table as a guide.
-Azure Active Directory (Azure AD) provides the most secure connection by managing the connection identity ([**managed identity**](../../active-directory/managed-identities-azure-resources/overview.md)). This functionality allows you to develop code that doesn't require any secrets (keys or connection strings) stored in the code or environment. Managed identity requires [**setup**](assign-azure-role-data-access.md?tabs=portal) for any identities such as developer (personal) or cloud (hosting) environments. You need to complete the setup before using the code in this section.
+|Environment|Method|
+|--|--|
+|Developer environment|[Visual Studio Code](/azure/developer/javascript/sdk/authentication/local-development-environment-developer-account?tabs=azure-portal%2Csign-in-vscode)|
+|Developer environment|[Service principal](../common/identity-library-acquire-token.md)|
+|Azure-hosted apps|[Azure-hosted apps setup](/azure/storage/blobs/authorize-managed-identity)|
+|On-premises|[On-premises app setup](/azure/storage/common/storage-auth-aad-app?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=dotnet)|
-After you complete the setup, your Storage resource needs to have one or more of the following roles assigned to the identity resource you plan to connect with:
+### Set up storage account roles
+Your storage resource needs to have one or more of the following [Azure RBAC](/azure/role-based-access-control/built-in-roles) roles assigned to the identity resource you plan to connect with. [Set up the Azure Storage roles](assign-azure-role-data-access.md?tabs=portal) for each identity you created in the previous step: Azure cloud, local development, on-premises.
+
+After you complete the setup, each identity needs at least one of the appropriate roles:
+
* A [data access](../common/authorize-data-access.md) role - such as: * **Storage Blob Data Reader** * **Storage Blob Data Contributor**
After you complete the setup, your Storage resource needs to have one or more of
* **Reader** * **Contributor**
-To authorize with Azure AD, you'll need to use an Azure credential. Which type of credential you need depends on where your application runs. Use this table as a guide.
-
-| Where the application runs | Security principal | Guidance |
-|--|--||
-| Local machine (developing and testing) | User identity or service principal | [Use the Azure Identity library to get an access token for authorization](../common/identity-library-acquire-token.md) |
-| Azure | Managed identity | [Authorize access to blob data with managed identities for Azure resources](authorize-managed-identity.md) |
-| Servers or clients outside of Azure | Service principal | [Authorize access to blob or queue data from a native or web application](../common/storage-auth-aad-app.md?toc=/azure/storage/blobs/toc.json) |
-
-Create a [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential) instance. Use that object to create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient).
-
-```javascript
-const { BlobServiceClient } = require('@azure/storage-blob');
-const { DefaultAzureCredential } = require('@azure/identity');
-require('dotenv').config()
-
-const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
-if (!accountName) throw Error('Azure Storage accountName not found');
-const blobServiceClient = new BlobServiceClient(
- `https://${accountName}.blob.core.windows.net`,
- new DefaultAzureCredential()
-);
+### Connect with passwordless authentication to Azure
-async function main(){
+Once your Azure storage account identity roles and your local environment are set up, create a JavaScript file that includes the [``@azure/identity``](https://www.npmjs.com/package/@azure/identity) package. Using the `DefaultAzureCredential` class provided by the `@azure/identity` client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage.
- // this call requires Reader role on the identity
- const serviceGetPropertiesResponse = await blobServiceClient.getProperties();
- console.log(`${JSON.stringify(serviceGetPropertiesResponse)}`);
+Create a [DefaultAzureCredential](/javascript/api/overview/azure/identity-readme#defaultazurecredential) instance. Use that object to create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient).
-}
-main()
- .then(() => console.log(`done`))
- .catch((ex) => console.log(`error: ${ex.message}`));
-```
+The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control. If you use a local service principal as part of your DefaultAzureCredential setup, any security information for that credential will also go into the `.env` file.
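A minimal sketch of this connection, assuming the account name is supplied through an `AZURE_STORAGE_ACCOUNT_NAME` entry in the `.env` file, might look like the following:

```javascript
const { BlobServiceClient } = require('@azure/storage-blob');
const { DefaultAzureCredential } = require('@azure/identity');
require('dotenv').config();

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
if (!accountName) throw Error('Azure Storage accountName not found');

// DefaultAzureCredential picks up the identity configured for your environment
// (developer sign-in, service principal, or managed identity).
const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new DefaultAzureCredential()
);
```

From here, the client can be used with the container and blob guides listed later in this article.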
If you plan to deploy the application to servers and clients that run outside of Azure, you can obtain an OAuth token by using other classes in the [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme) which derive from the [TokenCredential](/javascript/api/@azure/core-auth/tokencredential) class.
If you plan to deploy the application to servers and clients that run outside of
Create a [StorageSharedKeyCredential](/javascript/api/@azure/storage-blob/storagesharedkeycredential) by using the storage account name and account key. Then use the StorageSharedKeyCredential to initialize a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient).
-```javascript
-const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');
-require('dotenv').config()
-
-const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
-const accountKey = process.env.AZURE_STORAGE_ACCOUNT_KEY;
-if (!accountName) throw Error('Azure Storage accountName not found');
-if (!accountKey) throw Error('Azure Storage accountKey not found');
-
-const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);
-
-const blobServiceClient = new BlobServiceClient(
- `https://${accountName}.blob.core.windows.net`,
- sharedKeyCredential
-);
-async function main(){
- const serviceGetPropertiesResponse = await blobServiceClient.getProperties();
- console.log(`${JSON.stringify(serviceGetPropertiesResponse)}`);
-}
-
-main()
- .then(() => console.log(`done`))
- .catch((ex) => console.log(ex.message));
-```
+The `dotenv` package is used to read your storage account name and key from a `.env` file. This file should not be checked into source control.
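A brief sketch of this approach, assuming `AZURE_STORAGE_ACCOUNT_NAME` and `AZURE_STORAGE_ACCOUNT_KEY` entries in the `.env` file, might look like this:

```javascript
const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');
require('dotenv').config();

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const accountKey = process.env.AZURE_STORAGE_ACCOUNT_KEY;
if (!accountName) throw Error('Azure Storage accountName not found');
if (!accountKey) throw Error('Azure Storage accountKey not found');

// Shared key credential built from the account name and account key
const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);

const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  sharedKeyCredential
);
```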
For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
For information about how to obtain account keys and best practice guidelines fo
Create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) by using a connection string.
-```javascript
-const { BlobServiceClient } = require('@azure/storage-blob');
-require('dotenv').config()
-
-const connString = process.env.AZURE_STORAGE_CONNECTION_STRING;
-if (!connString) throw Error('Azure Storage Connection string not found');
-
-const blobServiceClient = BlobServiceClient.fromConnectionString(connString);
-async function main(){
- const serviceGetPropertiesResponse = await blobServiceClient.getProperties();
- console.log(`${JSON.stringify(serviceGetPropertiesResponse)}`);
-}
-
-main()
- .then(() => console.log(`done`))
- .catch((ex) => console.log(ex.message));
-```
+The `dotenv` package is used to read your storage account connection string from a `.env` file. This file should not be checked into source control.
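As a short sketch, assuming an `AZURE_STORAGE_CONNECTION_STRING` entry in the `.env` file:

```javascript
const { BlobServiceClient } = require('@azure/storage-blob');
require('dotenv').config();

const connString = process.env.AZURE_STORAGE_CONNECTION_STRING;
if (!connString) throw Error('Azure Storage Connection string not found');

// The connection string embeds both the endpoint and the credentials
const blobServiceClient = BlobServiceClient.fromConnectionString(connString);
```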
For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
-## Object Authorization with a SAS token
+## Connect with a SAS token
Create a Uri to your resource by using the blob service endpoint and SAS token. Then, create a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) with the Uri.
-```javascript
-const { BlobServiceClient } = require('@azure/storage-blob');
-require('dotenv').config()
-
-const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
-const sasToken = process.env.AZURE_STORAGE_SAS_TOKEN;
-if (!accountName) throw Error('Azure Storage accountName not found');
-if (!sasToken) throw Error('Azure Storage accountKey not found');
-
-const blobServiceUri = `https://${accountName}.blob.core.windows.net`;
-
-const blobServiceClient = new BlobServiceClient(
- `${blobServiceUri}${sasToken}`,
- null
-);
-async function main(){
- const serviceGetPropertiesResponse = await blobServiceClient.getProperties();
- console.log(`${JSON.stringify(serviceGetPropertiesResponse)}`);
-}
-
-main()
- .then(() => console.log(`done`))
- .catch((ex) => console.log(`error: ${ex.message}`));
-```
+The `dotenv` package is used to read your storage account name and SAS token from a `.env` file. This file should not be checked into source control.
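A minimal sketch, assuming `AZURE_STORAGE_ACCOUNT_NAME` and `AZURE_STORAGE_SAS_TOKEN` entries in the `.env` file and a SAS token that includes its leading `?`:

```javascript
const { BlobServiceClient } = require('@azure/storage-blob');
require('dotenv').config();

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const sasToken = process.env.AZURE_STORAGE_SAS_TOKEN;
if (!accountName) throw Error('Azure Storage accountName not found');
if (!sasToken) throw Error('Azure Storage sasToken not found');

// The SAS token (including the leading '?') is appended to the service endpoint
const blobServiceUri = `https://${accountName}.blob.core.windows.net`;
const blobServiceClient = new BlobServiceClient(`${blobServiceUri}${sasToken}`);
```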
To generate and manage SAS tokens, see any of these articles:
To generate and manage SAS tokens, see any of these articles:
- [Create a service SAS for a container or blob](sas-service-create.md) --- ## Connect anonymously If you explicitly enable anonymous access, then you can connect to Blob Storage without authorization for your request. You can create a new BlobServiceClient object for anonymous access by providing the Blob storage endpoint for the account. This requires you to know the account and container names. To learn how to enable anonymous access, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
-```javascript
-const { BlobServiceClient, AnonymousCredential } = require('@azure/storage-blob');
-require('dotenv').config()
-
-const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
-if (!accountName) throw Error('Azure Storage accountName not found');
-
-const blobServiceUri = `https://${accountName}.blob.core.windows.net`;
-
-const blobServiceClient = new BlobServiceClient(
- blobServiceUri,
- new AnonymousCredential()
-);
-
-async function getContainerProperties(){
-
- // Access level: 'container'
- const containerName = `blob-storage-dev-guide-1`;
-
- const containerClient = blobServiceClient.getContainerClient(containerName);
- const containerProperties = await containerClient.getProperties();
- console.log(JSON.stringify(containerProperties));
-
-}
-getContainerProperties()
- .then(() => console.log(`done`))
- .catch((ex) => console.log(`error: ${ex.message}`));
-```
+The `dotenv` package is used to read your storage account name from a `.env` file. This file should not be checked into source control.
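A brief sketch of anonymous access, assuming an `AZURE_STORAGE_ACCOUNT_NAME` entry in the `.env` file and a container whose public access level permits the request:

```javascript
const { BlobServiceClient, AnonymousCredential } = require('@azure/storage-blob');
require('dotenv').config();

const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
if (!accountName) throw Error('Azure Storage accountName not found');

// No credentials are sent; only publicly accessible containers and blobs can be read
const blobServiceClient = new BlobServiceClient(
  `https://${accountName}.blob.core.windows.net`,
  new AnonymousCredential()
);
```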
Each type of resource is represented by one or more associated JavaScript clients:
The following guides show you how to use each of these clients to build your app
| Guide | Description | |--|| | [Create a container](storage-blob-container-create-javascript.md) | Create containers. |
+| [Get container's URL](storage-blob-get-url-javascript.md) | Get URL of container. |
| [Delete and restore containers](storage-blob-container-delete-javascript.md) | Delete containers, and if soft-delete is enabled, restore deleted containers. | | [List containers](storage-blob-containers-list-javascript.md) | List containers in an account and the various options available to customize a listing. | | [Manage properties and metadata](storage-blob-container-properties-metadata-javascript.md) | Get and set properties and metadata for containers. | | [Upload blobs](storage-blob-upload-javascript.md) | Learn how to upload blobs by using strings, streams, file paths, and other methods. |
+| [Get blob's URL](storage-blob-get-url-javascript.md) | Get URL of blob. |
| [Download blobs](storage-blob-download-javascript.md) | Download blobs by using strings, streams, and file paths. | | [Copy blobs](storage-blob-copy-javascript.md) | Copy a blob from one account to another account. | | [List blobs](storage-blobs-list-javascript.md) | List blobs in different ways. |
storage Storage Blob Scalable App Download Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-download-files.md
public static void Main(string[] args)
// Uncomment the following line to enable downloading of files from the storage account. // This is commented out initially to support the tutorial at
- // https://docs.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
+ // https://learn.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
await DownloadFilesAsync(); } catch (Exception ex)
public static void Main(string[] args)
{ // The following function will delete the container and all files contained in them. // This is commented out initially as the tutorial at
- // https://docs.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
+ // https://learn.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
// has you upload only for one tutorial and download for the other. if (!exception) {
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 09/16/2022 Last updated : 09/20/2022
The following table describes whether a feature is supported in a standard gener
| [Customer-managed keys in a multi-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; |
-| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
The following table describes whether a feature is supported in a premium block
| [Customer-managed keys in a multi-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; |
-| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
The managed identity that authorizes access to the key vault may be either a use
A user-assigned is a standalone Azure resource. You must create the user-assigned identity before you configure customer-managed keys. To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
-#### [Azure portal](#tab/portal)
+#### [Azure portal](#tab/azure-portal)
When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned identity through the portal user interface. For details, see [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account).
-#### [PowerShell](#tab/powershell)
+#### [PowerShell](#tab/azure-powershell)
To authorize access to the key vault with a user-assigned managed identity, you'll need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity and assign it to a variable that you'll reference in subsequent steps:
A system-assigned managed identity is associated with an instance of an Azure se
Only existing storage accounts can use a system-assigned identity to authorize access to the key vault. New storage accounts must use a user-assigned identity, if customer-managed keys are configured on account creation.
-#### [Azure portal](#tab/portal)
+#### [Azure portal](#tab/azure-portal)
When you configure customer-managed keys with the Azure portal with a system-assigned managed identity, the system-assigned managed identity is assigned to the storage account for you under the covers. For details, see [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account).
-#### [PowerShell](#tab/powershell)
+#### [PowerShell](#tab/azure-powershell)
To assign a system-assigned managed identity to your storage account, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
principalId = $(az storage account show --name <storage-account> --resource-grou
The next step is to configure the key vault access policy. The key vault access policy grants permissions to the managed identity that will be used to authorize access to the key vault. To learn more about key vault access policies, see [Azure Key Vault Overview](../../key-vault/general/overview.md#securely-store-secrets-and-keys) and [Azure Key Vault security overview](../../key-vault/general/security-features.md#key-vault-authentication-options).
-### [Azure portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
To learn how to configure the key vault access policy with the Azure portal, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
-### [PowerShell](#tab/powershell)
+### [PowerShell](#tab/azure-powershell)
To configure the key vault access policy with PowerShell, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the variable for the principal ID that you previously retrieved for the managed identity.
Azure Storage can automatically update the customer-managed key that is used for
> [!IMPORTANT] > Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
-### [Azure portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
To configure customer-managed keys for an existing account with automatic updating of the key version in the Azure portal, follow these steps:
After you've specified the key, the Azure portal indicates that automatic updati
:::image type="content" source="media/customer-managed-keys-configure-existing-account/portal-auto-rotation-enabled.png" alt-text="Screenshot showing automatic updating of the key version enabled.":::
-### [PowerShell](#tab/powershell)
+### [PowerShell](#tab/azure-powershell)
To configure customer-managed keys for an existing account with automatic updating of the key version with PowerShell, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later.
az storage account update
If you prefer to manually update the key version, then explicitly specify the version at the time that you configure encryption with customer-managed keys. In this case, Azure Storage won't automatically update the key version when a new version is created in the key vault. To use a new key version, you must manually update the version used for Azure Storage encryption.
-# [Azure portal](#tab/portal)
+# [Azure portal](#tab/azure-portal)
To configure customer-managed keys with manual updating of the key version in the Azure portal, specify the key URI, including the version. To specify a key as a URI, follow these steps:
To configure customer-managed keys with manual updating of the key version in th
1. Specify either a system-assigned or user-assigned managed identity. 1. Save your changes.
-# [PowerShell](#tab/powershell)
+# [PowerShell](#tab/azure-powershell)
To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption for the storage account. Call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, as shown in the following example, and include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
storage Customer Managed Keys Configure New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md
A user-assigned is a standalone Azure resource. To learn more about user-assigne
Both new and existing storage accounts can use a user-assigned identity to authorize access to the key vault. You must create the user-assigned identity before you configure customer-managed keys.
-### [Azure portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned identity through the portal user interface. For details, see [Configure customer-managed keys for a new storage account](#configure-customer-managed-keys-for-a-new-storage-account).
-### [PowerShell](#tab/powershell)
+### [PowerShell](#tab/azure-powershell)
To authorize access to the key vault with a user-assigned managed identity, you'll need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity and assign it to a variable that you'll reference in subsequent steps:
principalId=$(az identity show --name sample-user-assigned-identity --resource-g
The next step is to configure the key vault access policy. The key vault access policy grants permissions to the user-assigned managed identity that will be used to authorize access to the key vault. To learn more about key vault access policies, see [Azure Key Vault Overview](../../key-vault/general/overview.md#securely-store-secrets-and-keys) and [Azure Key Vault security overview](../../key-vault/general/security-features.md#key-vault-authentication-options).
-### [Azure portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
To learn how to configure the key vault access policy with the Azure portal, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
-### [PowerShell](#tab/powershell)
+### [PowerShell](#tab/azure-powershell)
To configure the key vault access policy with PowerShell, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the variable for the principal ID that you previously retrieved for the user-assigned managed identity.
Azure Storage can automatically update the customer-managed key that is used for
> [!IMPORTANT] > Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
-### [Azure portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
To configure customer-managed keys for a new storage account with automatic updating of the key version, follow these steps:
To configure customer-managed keys for a new storage account with automatic upda
You can also configure customer-managed keys with manual updating of the key version when you create a new storage account. Follow the steps described in [Configure encryption for manual updating of key versions](#configure-encryption-for-manual-updating-of-key-versions).
-### [PowerShell](#tab/powershell)
+### [PowerShell](#tab/azure-powershell)
To configure customer-managed keys for a new storage account with automatic updating of the key version, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), as shown in the following example. Use the variable you created previously for the resource ID for the user-assigned managed identity. You'll also need the key vault URI and key name:
If you prefer to manually update the key version, then explicitly specify the ve
You must use an existing user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys while creating the storage account. The user-assigned managed identity must have appropriate permissions to access the key vault. For more information, see [Authenticate to Azure Key Vault](../../key-vault/general/authentication.md).
-# [Azure portal](#tab/portal)
+# [Azure portal](#tab/azure-portal)
To configure customer-managed keys with manual updating of the key version in the Azure portal, specify the key URI, including the version, while creating the storage account. To specify a key as a URI, follow these steps:
To configure customer-managed keys with manual updating of the key version in th
1. Select the **Review** button to validate and create the account.
-# [PowerShell](#tab/powershell)
+# [PowerShell](#tab/azure-powershell)
To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption while creating the storage account. Call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, as shown in the following example, and include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
storage Storage Metrics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-metrics-migration.md
To transition to metrics in Azure Monitor, we recommend the following approach.
3. Identify [which metrics in Azure Monitor](#metrics-mapping-between-old-metrics-and-new-metrics) provide the same data as the metrics you currently use.
-4. Create [charts](/learn/modules/gather-metrics-blob-storage/2-viewing-blob-metrics-in-azure-portal) or [dashboards](/learn/modules/gather-metrics-blob-storage/4-using-dashboards-in-the-azure-portal) to view metric data.
+4. Create [charts](/training/modules/gather-metrics-blob-storage/2-viewing-blob-metrics-in-azure-portal) or [dashboards](/training/modules/gather-metrics-blob-storage/4-using-dashboards-in-the-azure-portal) to view metric data.
> [!NOTE] > Metrics in Azure Monitor are enabled by default, so there is nothing you need to do to begin capturing metrics. You must however, create charts or dashboards to view those metrics.
storage Storage Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md
Storage capacity is billed in units of the average daily amount of data stored,
- Learn [how to optimize your cloud investment with Azure Cost Management](../../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
storage Storage Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-python.md
The following tables provide an overview of our samples repository and the scena
To view the complete Python sample libraries, go to: -- [Azure blob code samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob/samples)
+- [Azure Blob code samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob/samples)
- [Azure Data Lake code samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/samples) - [Azure Files code samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-share/samples)-- [Azure queue code samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-queue/samples)
+- [Azure Queue code samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-queue/samples)
You can browse and clone the GitHub repository for each library.
storage Troubleshoot Storage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-availability.md
The most common cause of this error is a client disconnecting before a timeout e
- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json) - [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)-- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Client Application Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-client-application-errors.md
You can find a list of common REST API error codes that the storage services ret
- [Monitoring Azure Table storage](../tables/monitor-table-storage.md) - [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json) - [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)-- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-performance.md
If you are experiencing a delay between the time an application adds a message t
- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json) - [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)-- [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage File Sync Cloud Tiering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md
It's also possible for a file to be partially tiered (or partially recalled). In
> [!NOTE] > Size represents the logical size of the file. Size on disk represents the physical size of the file stream that's stored on the disk.
+## Low disk space mode
+Disks with server endpoints can run out of space for several reasons, even with cloud tiering enabled. This could result in Azure File Sync not working as expected, or even becoming unusable. While Azure File Sync can't completely prevent these occurrences, low disk space mode (new in Azure File Sync agent version 15.1) is designed to keep a server endpoint from reaching this situation.
+
+For server endpoints with cloud tiering enabled and a volume free space policy set, if the free space on the volume falls below the calculated threshold, the volume enters low disk space mode.
+
+In low disk space mode, the Azure File Sync agent does two things differently:
+
+- Proactive tiering: In this mode, the File Sync agent tiers files to the cloud proactively. The sync agent checks for files to tier every minute instead of the normal frequency of once every hour. Volume free space policy tiering typically doesn't happen during initial upload sync until the full upload is complete, but in low disk space mode, tiering is enabled during the initial upload sync, and each file is considered for tiering as soon as it has been uploaded to the Azure file share.
+
+- Non-persistent recalls: When a user opens a tiered file, the file recalled from the Azure file share will not be persisted to the disk. Note that recalls initiated by the Invoke-StorageSyncFileRecall cmdlet are an exception to this rule and will be persisted to disk.
+
+When the volume free space rises above the threshold, Azure File Sync automatically reverts to its normal behavior. Note that low disk space mode only applies to servers with cloud tiering enabled and always respects the volume free space policy.
+
+If a volume has two server endpoints, one with tiering enabled and one without, low disk space mode will only apply to the server endpoint where tiering is enabled.
+
+### How is the threshold for low disk space mode calculated?
+The threshold is calculated by taking the minimum of the following three values (see the sketch after the table below):
+- 10% of the volume size, in GB
+- The volume free space policy, in GB
+- A fixed value of 20 GB
+
+The following table includes some examples of how the threshold is calculated and when the volume will be in low disk space mode.
+
+| Volume Size | Volume Free Space Policy | Current Volume Free Space | Threshold = Min (10% of volume size, Volume Free Space Policy, 20GB) | Is Low Disk Space Mode? | Reason |
+| -- | -- | -- | -- | -- | -- |
+| 100GB | 7% (7GB) | 9% (9GB) | 7GB = Min (10GB, 7GB, 20GB) | No | Current Volume Free Space > Threshold |
+| 100GB | 7% (7GB) | 5% (5GB) | 7GB = Min (10GB, 7GB, 20GB) | Yes | Current Volume Free Space < Threshold |
+| 300GB | 8% (24GB) | 7% (21GB) | 20GB = Min (30GB, 24GB, 20GB) | No | Current Volume Free Space > Threshold |
+| 300GB | 8% (24GB) | 6% (18GB) | 20GB = Min (30GB, 24GB, 20GB) | Yes | Current Volume Free Space < Threshold |
++
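As a purely illustrative sketch of the calculation above (not part of the product or its tooling), the threshold can be expressed as the minimum of the three values:

```javascript
// Illustrative only: computes the low disk space mode threshold (in GB)
// from the volume size and the volume free space policy percentage.
function lowDiskSpaceThresholdGb(volumeSizeGb, freeSpacePolicyPercent) {
  const tenPercentOfVolume = 0.10 * volumeSizeGb;
  const policyInGb = (freeSpacePolicyPercent / 100) * volumeSizeGb;
  const fixedCapGb = 20;
  return Math.min(tenPercentOfVolume, policyInGb, fixedCapGb);
}

// Matches the first and third rows of the table above
console.log(lowDiskSpaceThresholdGb(100, 7)); // 7
console.log(lowDiskSpaceThresholdGb(300, 8)); // 20
```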
+### How does low disk space mode work with volume free space policy?
+Low disk space mode always respects the volume free space policy. The threshold calculation is designed to make sure that the volume free space policy set by the user is respected.
+
+### How to get out of low disk space mode?
+Low disk space mode is designed to revert to normal behavior when volume free space is above the threshold. You can help speed up the process by looking for any recently created files outside the server endpoint location and moving them to a different disk if possible.
+
+### How to check if a server is in low disk space mode?
+Event ID 19000 is logged to the Telemetry event log every minute for each server endpoint. Use this event to determine whether the server endpoint is in low disk space mode (IsLowDiskMode = true). The Telemetry event log is located in Event Viewer under Applications and Services\Microsoft\FileSync\Agent.
+ ## Next steps - [Choose Azure File Sync cloud tiering policies](file-sync-choose-cloud-tiering-policies.md)
storage File Sync Firewall And Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-firewall-and-proxy.md
if ($found) {
} else { # If the file cannot be found, that means there hasn't been an update in # more than a week. Please verify the download URIs are still accurate
- # by checking https://docs.microsoft.com/azure/virtual-network/service-tags-overview
+ # by checking https://learn.microsoft.com/azure/virtual-network/service-tags-overview
Write-Verbose -Message "JSON service tag file not found." return }
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 8/11/2022 Last updated : 9/19/2022
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status | |-|-|--||
+| V15.1 Release - [KB5003883](https://support.microsoft.com/topic/45761295-d49a-431e-98ec-4fb3329b0544)| 15.1.0.0 | September 19, 2022 | Supported |
| V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported | | V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported | | V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported |
The following Azure File Sync agent versions have expired and are no longer supp
### Azure File Sync agent update policy [!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)]
+## Agent version 15.1.0.0
+The following release notes are for version 15.1.0.0 of the Azure File Sync agent released September 19, 2022. These notes are in addition to the release notes listed for version 15.0.0.0.
+
+### Improvements and issues that are fixed
+- Low disk space mode to prevent running out of disk space when using cloud tiering.
+ - Low disk space mode is designed to handle volumes with low free space more effectively. On a server endpoint with cloud tiering enabled, if the free space on the volume falls below a threshold, Azure File Sync considers the volume to be in low disk space mode.
+
+ In this mode, Azure File Sync does two things to free up space on the volume:
+
+ - Files are tiered to the Azure file share more proactively.
+ - Tiered files accessed by the user will not be persisted to the disk.
+
+ To learn more, see the [low disk space mode](file-sync-cloud-tiering-overview.md#low-disk-space-mode) section in the Cloud tiering overview documentation.
+
+- Fixed a cloud tiering issue that caused high CPU usage after v15.0 agent is installed.
+
+- Miscellaneous reliability and telemetry improvements.
+ ## Agent version 15.0.0.0 The following release notes are for version 15.0.0.0 of the Azure File Sync agent (released March 30, 2022).
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
Title: Control access to Azure file shares - on-premises AD DS authentication
-description: Learn how to assign permissions to an Active Directory Domain Services identity that represents your storage account. This allows you control access with identity-based authentication.
+description: Learn how to assign permissions to an Active Directory Domain Services identity that represents your Azure storage account. This allows you to control access with identity-based authentication.
Previously updated : 05/04/2022 Last updated : 09/19/2022 ms.devlang: azurecli
There are three scenarios where we instead recommend using default share-level p
## Share-level permissions
-The following table lists the share-level permissions and how they align with the built-in RBAC roles:
+The following table lists the share-level permissions and how they align with the built-in Azure role-based access control (RBAC) roles:
|Supported built-in roles |Description | |||
The following table lists the share-level permissions and how they align with th
## Share-level permissions for specific Azure AD users or groups
-If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a **hybrid identity that exists in both on-premises AD DS and Azure AD**. For example, say you have a user in your AD that is user1@onprem.contoso.com and you have synced to Azure AD as user1@contoso.com using Azure AD Connect sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups or service principals.
+If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a [hybrid identity](/azure/active-directory/hybrid/whatis-hybrid-identity) that exists in both on-premises AD DS and Azure AD. For example, say you have a user in your AD that is user1@onprem.contoso.com and you have synced to Azure AD as user1@contoso.com using Azure AD Connect sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups or service principals.
> [!IMPORTANT] > **Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (\*) character.** If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcards. To mitigate any unintended future impact, we highly recommend declaring actions and data actions explicitly as opposed to using a wildcard.
In order for share-level permissions to work, you must:
Share-level permissions must be assigned to the Azure AD identity representing the same user or group in your AD DS to support AD DS authentication to your Azure file share. Authentication and authorization against identities that only exist in Azure AD, such as Azure Managed Identities (MSIs), are not supported with AD DS authentication.
+> [!TIP]
+> Optional: Customers who want to migrate SMB server share-level permissions to RBAC permissions can use the `Move-OnPremSharePermissionsToAzureFileShare` PowerShell cmdlet to migrate directory and file-level permissions from on-premises to Azure. This cmdlet evaluates the groups of a particular on-premises file share, then writes the appropriate users and groups to the Azure file share using the three RBAC roles. You provide the information for the on-premises share and the Azure file share when invoking the cmdlet.
+ You can use the Azure portal, Azure PowerShell module, or Azure CLI to assign the built-in roles to the Azure AD identity of a user for granting share-level permissions. > [!IMPORTANT]
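As noted above, Azure PowerShell can assign the built-in share-level roles. A minimal sketch, assuming the built-in role name "Storage File Data SMB Share Contributor" and placeholder resource IDs, sign-in name, and share name (none of these values come from this article):

```powershell
# Minimal sketch: assign a built-in share-level role to a hybrid Azure AD identity.
# All names and IDs are placeholders - substitute your own values.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"

New-AzRoleAssignment `
    -SignInName "user1@contoso.com" `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
    -Scope $scope
```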
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Import-Module -Name AzFilesHybrid
# Login with an Azure AD credential that has either storage account owner or contributor Azure role # assignment. If you are logging into an Azure environment other than Public (ex. AzureUSGovernment) # you will need to specify that.
-# See https://docs.microsoft.com/azure/azure-government/documentation-government-get-started-connect-with-ps
+# See https://learn.microsoft.com/azure/azure-government/documentation-government-get-started-connect-with-ps
# for more information. Connect-AzAccount
Connect-AzAccount
# $StorageAccountName is the name of an existing storage account that you want to join to AD # $SamAccountName is the name of the to-be-created AD object, which is used by AD as the logon name # for the object.
-# See https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname for more information.
+# See https://learn.microsoft.com/windows/win32/adschema/a-samaccountname for more information.
$SubscriptionId = "<your-subscription-id-here>" $ResourceGroupName = "<resource-group-name-here>" $StorageAccountName = "<storage-account-name-here>"
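These variables are typically passed to the AzFilesHybrid `Join-AzStorageAccount` cmdlet, which creates the AD DS object that represents the storage account. A hedged sketch of that call follows; the organizational unit value is a placeholder and the parameter can be omitted to use the default container:

```powershell
# Hedged sketch: join the storage account to on-premises AD DS using the AzFilesHybrid module.
# Assumes $SamAccountName was set alongside the variables above; the OU distinguished name is
# a placeholder - omit the parameter to use the default container.
Join-AzStorageAccount `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName `
    -SamAccountName $SamAccountName `
    -DomainAccountType "ComputerAccount" `
    -OrganizationalUnitDistinguishedName "OU=FileShares,DC=onprem,DC=contoso,DC=com"
```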
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Get started with any of these guides.
| Guide | Description | |||
-| [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
+| [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability | | [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer. | [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
storage Storage Quickstart Queues Dotnet Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-dotnet-legacy.md
See these additional resources for .NET development with Azure Queue Storage:
In this quickstart, you learned how to add messages to a queue, peek messages from a queue, and dequeue and process messages using .NET. > [!div class="nextstepaction"]
-> [Communicate between applications with Azure Queue Storage](/learn/modules/communicate-between-apps-with-azure-queue-storage/index)
+> [Communicate between applications with Azure Queue Storage](/training/modules/communicate-between-apps-with-azure-queue-storage/index)
- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
No. Azure Compute supports the metrics on disks. For more information, see [Per
| Guide | Description | |||
-| [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
+| [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability | | [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer. | [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
stream-analytics Cosmos Db Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md
For the Stream Analytics job to access your Cosmos DB using managed identity, th
|Cosmos DB Built-in Data Contributor| > [!IMPORTANT]
-> Cosmos DB data plane built-in role-based access control (RBAC) is not exposed through the Azure Portal. To assign the Cosmos DB Built-in Data Contributor role, you must grant permission via Azure Powershell. For more information about role-based access control with Azure Active Directory for your Azure Cosmos DB account please visit the: [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account documentation.](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac/)
+> Cosmos DB data plane built-in role-based access control (RBAC) is not exposed through the Azure Portal. To assign the Cosmos DB Built-in Data Contributor role, you must grant permission via Azure Powershell. For more information about role-based access control with Azure Active Directory for your Azure Cosmos DB account please visit the: [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account documentation.](https://learn.microsoft.com/azure/cosmos-db/how-to-setup-rbac/)
The following command can be used to authenticate your ASA job with Cosmos DB. The `$accountName` and `$resourceGroupName` are for your Cosmos DB account, and the `$principalId` is the value obtained in the previous step, in the Identity tab of your ASA job. You need to have "Contributor" access to your Cosmos DB account for this command to work as intended.
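As a hedged sketch (not necessarily the exact command the article shows), the Az.CosmosDB module can create that role assignment with the same variables; the role definition ID below is the well-known ID of the Cosmos DB Built-in Data Contributor role:

```powershell
# Hedged sketch: grant the Cosmos DB Built-in Data Contributor role to the ASA job's
# managed identity. $accountName, $resourceGroupName, and $principalId are described above.
New-AzCosmosDBSqlRoleAssignment `
    -AccountName $accountName `
    -ResourceGroupName $resourceGroupName `
    -RoleDefinitionId "00000000-0000-0000-0000-000000000002" `
    -PrincipalId $principalId `
    -Scope "/"
```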
stream-analytics Stream Analytics Machine Learning Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-machine-learning-anomaly-detection.md
The machine learning operations do not support seasonality trends or multi-varia
The following video demonstrates how to detect an anomaly in real time using machine learning functions in Azure Stream Analytics.
-> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Real-Time-ML-Based-Anomaly-Detection-In-Azure-Stream-Analytics/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Real-Time-ML-Based-Anomaly-Detection-In-Azure-Stream-Analytics/player]
## Model behavior
Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks
* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md) * [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md) * [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
+* [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Window Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-window-functions.md
Will return:
## Snapshot window
-[**Snapshot**](/stream-analytics-query/snapshot-window-azure-stream-analytics) windows group events that have the same timestamp. Unlike other windowing types, which require a specific window function (such as [SessionWindow()](/stream-analytics-query/session-window-azure-stream-analytics), you can apply a snapshot window by adding System.Timestamp() to the GROUP BY clause.
+[**Snapshot**](/stream-analytics-query/snapshot-window-azure-stream-analytics) windows group events that have the same timestamp. Unlike other windowing types, which require a specific window function (such as [SessionWindow()](/stream-analytics-query/session-window-azure-stream-analytics)), you can apply a snapshot window by adding System.Timestamp() to the GROUP BY clause.
![Stream Analytics snapshot window](media/stream-analytics-window-functions/stream-analytics-window-functions-snapshot-intro.png)
synapse-analytics Data Explorer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-overview.md
Azure Synapse Data Explorer provides customers with an interactive query experie
To learn more, see the following video: >
-> [!VIDEO https://docs.microsoft.com/shows/data-exposed/azure-synapse-data-explorer-for-log--telemetry-management/player?WT.mc_id=dataexposed-c9-niner]
+> [!VIDEO https://learn.microsoft.com/shows/data-exposed/azure-synapse-data-explorer-for-log--telemetry-management/player?WT.mc_id=dataexposed-c9-niner]
## What makes Azure Synapse Data Explorer unique?
synapse-analytics Data Explorer Ingest Event Hub Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-python.md
location = "Central US"
table_name = "StormEvents" mapping_rule_name = "StormEvents_CSV_Mapping" data_format = "csv"
-#Returns an instance of LROPoller, check https://docs.microsoft.com/python/api/msrest/msrest.polling.lropoller?view=azure-python
+#Returns an instance of LROPoller, check https://learn.microsoft.com/python/api/msrest/msrest.polling.lropoller?view=azure-python
poller = kusto_management_client.data_connections.create_or_update(resource_group_name=resource_group_name, cluster_name=cluster_name, database_name=database_name, data_connection_name=data_connection_name, parameters=EventHubDataConnection(event_hub_resource_id=event_hub_resource_id, consumer_group=consumer_group, location=location, table_name=table_name, mapping_rule_name=mapping_rule_name, data_format=data_format))
synapse-analytics Overview Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/overview-map-data.md
Last updated 11/11/2021
The Map Data tool is a guided process to help users create ETL mappings and mapping data flows from their source data to Synapse lake database tables without writing code. This process starts with the user choosing the destination tables in Synapse lake databases and then mapping their source data into these tables.
-For more information on Synapse lake databases see [Overview of Azure Synapse database templates - Azure Synapse Analytics | Microsoft Docs](overview-database-templates.md)
+For more information on Synapse lake databases, see [Overview of Azure Synapse database templates - Azure Synapse Analytics | Microsoft Docs](overview-database-templates.md)
-Map Data provides for a guided experience where the user can generate a mapping data flow without having to start with a blank canvas and quickly generate a scalable mapping data flow runnable in Synapse pipelines.
--
-> [!NOTE]
-> The Map Data feature in Synapse Analytics pipelines is currently in public preview
+Map Data provides a guided experience where you can build a mapping data flow without starting from a blank canvas, and then quickly generate a scalable mapping data flow that you can run in Synapse pipelines.
## Getting started The Map Data tool is started from within the Synapse lake database experience. From here, you can select the Map Data tool to begin the process.
-![Screenshot showing how to open an Map data](./media/overview-map-data/open-map-data.png)
+![Screenshot showing how to open an Map data.](./media/overview-map-data/open-map-data.png)
-Map Data needs compute available to assist users with previewing data and reading schema of their source files. Upon using Map Data for the first time in a session you will need to warm up a cluster.
-![Screenshot showing debug clusters](./media/overview-map-data/debug-map-data.png)
+Map Data needs compute available to assist users with previewing data and reading schema of their source files. Upon using Map Data for the first time in a session, you'll need to warm up a cluster.
+![Screenshot showing debug clusters.](./media/overview-map-data/debug-map-data.png)
To begin, choose your data source that you want to map to your lake database tables. Currently supported data sources are Azure Data Lake Storage Gen 2 and Synapse lake databases.
-![Screenshot showing sources](./media/overview-map-data/sources-map-data.png)
+![Screenshot showing sources.](./media/overview-map-data/sources-map-data.png)
### File type options
-When choosing a file store such as Azure Data Lake Storage Gen 2 the following file types are supported:
+When choosing a file store, such as Azure Data Lake Storage Gen 2, the following file types are supported:
* Common Data Model * Delimited Text
When choosing a file store such as Azure Data Lake Storage Gen 2 the following f
## Create data mapping
+Configure your data mapping with the source type you selected.
+![Screenshot showing map data file configuration settings.](./media/overview-map-data/map-data-file-selection.png)
+
+> [!NOTE]
+> You can choose a folder or a single file. If you choose a folder, you can map multiple files to your lake database tables, and after selecting **Continue** you're prompted to include only specific files, if desired.
+ Name your data mapping and select the Synapse lake database destination.
-![Screenshot showing naming and destination](./media/overview-map-data/destination-map-data.png)
+![Screenshot showing naming and destination.](./media/overview-map-data/destination-map-data.png)
## Source to target mapping Choose a Primary source table to map to the Synapse lake database destination table.
-![Screenshot showing Map data rules](./media/overview-map-data/rules-map-data.png)
+![Screenshot showing Map data rules.](./media/overview-map-data/rules-map-data.png)
### New mapping Use the New Mapping button to add a mapping method to create a mapping or transformation.
The following mapping methods are supported:
## Create pipeline
-Once you are done with your Map Data transformations select the Create pipeline button to generate a mapping data flow and pipeline to debug and run your transformation.
+Once you're done with your Map Data transformations, select the Create pipeline button to generate a mapping data flow and pipeline to debug and run your transformation.
synapse-analytics Quick Start Create Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/quick-start-create-lake-database.md
To ingest data to the lake database, you can execute [pipelines](../data-integra
```Spark %%sql
-INSERT INTO `retail_mil`.`customer` VALUES (1,'2021-02-18',1022,557,101,'Tailspin Toys (Head Office)','Waldemar Fisar',90410,466);
+INSERT INTO `retail_mil`.`customer` VALUES (1,date('2021-02-18'),1022,557,101,'Tailspin Toys (Head Office)','Waldemar Fisar',90410,466);
``` ## Query the data
synapse-analytics Implementation Success Assess Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-assess-environment.md
For the serverless SQL pool assessment, evaluate the following points.
- Identify the number of queries that will be sent to the serverless SQL pool and the result set size of each query. > [!TIP]
-> If you're new to serverless SQL pools, we recommend you work through the [Build data analytics solutions using Azure Synapse serverless SQL pools](/learn/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/) learning path.
+> If you're new to serverless SQL pools, we recommend you work through the [Build data analytics solutions using Azure Synapse serverless SQL pools](/training/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/) learning path.
### Spark pool assessment
For the Spark pool assessment, evaluate the following points.
- Identify whether cluster customization is required. > [!TIP]
-> If you're new to Spark pools, we recommend you work through the [Perform data engineering with Azure Synapse Apache Spark Pools](/learn/paths/perform-data-engineering-with-azure-synapse-apache-spark-pools/) learning path.
+> If you're new to Spark pools, we recommend you work through the [Perform data engineering with Azure Synapse Apache Spark Pools](/training/paths/perform-data-engineering-with-azure-synapse-apache-spark-pools/) learning path.
## Next steps
synapse-analytics Implementation Success Evaluate Team Skill Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-team-skill-sets.md
The *Azure DevOps engineer* is responsible for designing and implementing strate
## Learning resources and certifications
-If you're interested to learn about Microsoft Certifications that may help assess your team's readiness, browse the available certifications for [Azure Synapse Analytics](/learn/certifications/browse/?expanded=azure&products=azure-synapse-analytics).
+If you're interested to learn about Microsoft Certifications that may help assess your team's readiness, browse the available certifications for [Azure Synapse Analytics](/certifications/browse/?expanded=azure&products=azure-synapse-analytics).
-To complete online, self-paced training, browse the available learning paths and modules for [Azure Synapse Analytics](/learn/browse/?filter-products=synapse&products=azure-synapse-analytics).
+To complete online, self-paced training, browse the available learning paths and modules for [Azure Synapse Analytics](/training/browse/?filter-products=synapse&products=azure-synapse-analytics).
## Next steps
synapse-analytics Proof Of Concept Playbook Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-dedicated-sql-pool.md
This article presents a high-level methodology for preparing and running an effe
[!INCLUDE [proof-of-concept-playbook-context](includes/proof-of-concept-playbook-context.md)] > [!TIP]
-> If you're new to dedicated SQL pools, we recommend you work through the [Work with Data Warehouses using Azure Synapse Analytics](/learn/paths/work-with-data-warehouses-using-azure-synapse-analytics/) learning path.
+> If you're new to dedicated SQL pools, we recommend you work through the [Work with Data Warehouses using Azure Synapse Analytics](/training/paths/work-with-data-warehouses-using-azure-synapse-analytics/) learning path.
## Prepare for the POC
synapse-analytics Proof Of Concept Playbook Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-overview.md
Whether it's an enterprise data warehouse migration, a big data re-platforming,
The *Synapse proof of concept playbook* is a series of related articles that provide a high-level methodology for planning, preparing, and running an effective Azure Synapse Analytics POC project. The overall objective of a POC is to validate potential solutions to technical problems, such as how to integrate systems or how to achieve certain results through a specific configuration. As emphasized by this series, an effective POC validates that certain concepts have the potential for real-world production application. > [!TIP]
-> If you're new to Azure Synapse, we recommend you work through the [Introduction to Azure Synapse Analytics](/learn/modules/introduction-azure-synapse-analytics/) module.
+> If you're new to Azure Synapse, we recommend you work through the [Introduction to Azure Synapse Analytics](/training/modules/introduction-azure-synapse-analytics/) module.
## Playbook audiences
synapse-analytics Proof Of Concept Playbook Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-serverless-sql-pool.md
Before you begin planning your serverless SQL Pool POC project:
Before you start preparing for the POC project, we recommend you first read the [serverless SQL pool documentation](../sql/on-demand-workspace-overview.md). > [!TIP]
-> If you're new to serverless SQL pools, we recommend you work through the [Build data analytics solutions using Azure Synapse serverless SQL pools](/learn/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/) learning path.
+> If you're new to serverless SQL pools, we recommend you work through the [Build data analytics solutions using Azure Synapse serverless SQL pools](/training/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/) learning path.
### Set the goals
When you complete all the POC tests, you evaluate the results. Begin by evaluati
> [Big data analytics with Apache Spark pool in Azure Synapse Analytics](proof-of-concept-playbook-spark-pool.md) > [!div class="nextstepaction"]
-> [Build data analytics solutions using Azure Synapse serverless SQL pools](/learn/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/)
+> [Build data analytics solutions using Azure Synapse serverless SQL pools](/training/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/)
> [!div class="nextstepaction"] > [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
synapse-analytics Proof Of Concept Playbook Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md
Before you begin planning your Spark POC project:
Before you start preparing for the POC project, we recommend you first read the [Apache Spark documentation](../../hdinsight/spark/apache-spark-overview.md). > [!TIP]
-> If you're new to Spark pools, we recommend you work through the [Perform data engineering with Azure Synapse Apache Spark Pools](/learn/paths/perform-data-engineering-with-azure-synapse-apache-spark-pools/) learning path.
+> If you're new to Spark pools, we recommend you work through the [Perform data engineering with Azure Synapse Apache Spark Pools](/training/paths/perform-data-engineering-with-azure-synapse-apache-spark-pools/) learning path.
By now you should have determined that there are no immediate blockers and then you can start preparing for your POC. If you are new to Apache Spark Pools in Azure Synapse Analytics you can refer to [this documentation](../spark/apache-spark-overview.md) where you can get an overview of the Spark architecture and learn how it works in Azure Synapse.
synapse-analytics Security White Paper Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-network-security.md
These endpoints are automatically created when the Synapse workspace is created.
### Synapse Studio
-[*Synapse Studio*](/learn/modules/explore-azure-synapse-studio/) is a secure web front-end development environment for Azure Synapse. It supports various roles, including the data engineer, data scientist, data developer, data analyst, and Synapse administrator.
+[*Synapse Studio*](/training/modules/explore-azure-synapse-studio/) is a secure web front-end development environment for Azure Synapse. It supports various roles, including the data engineer, data scientist, data developer, data analyst, and Synapse administrator.
Use Synapse Studio to perform various data and management operations in Azure Synapse, such as:
synapse-analytics Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/plan-manage-costs.md
To learn more about data integration costs, see [Plan and manage costs for Azure
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
- Learn about planning and managing costs for [Azure Machine Learning](../machine-learning/concept-plan-manage-cost.md)
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
dfToReadFromTable = (spark.read
# Set user's password to the database .option(Constants.PASSWORD, "<user_password>") # Set name of the data source definition that is defined with database scoped credentials.
- # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ # https://learn.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
# Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting. .option(Constants.DATA_SOURCE, "<data_source_name>") # Three-part table name from where data will be read.
from com.microsoft.spark.sqlanalytics.Constants import Constants
# to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point. .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net") # Set name of the data source definition that is defined with database scoped credentials.
- # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ # https://learn.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
.option(Constants.DATA_SOURCE, "<data_source_name>") # Choose a save mode that is apt for your use case. # Options for save modes are "error" or "errorifexists" (default), "overwrite", "append", "ignore".
from com.microsoft.spark.sqlanalytics.Constants import Constants
# Set user's password to the database .option(Constants.PASSWORD, "<user_password>") # Set name of the data source with database scoped credentials for external table.
- # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ # https://learn.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
.option(Constants.DATA_SOURCE, "<data_source_name>") # For Basic Auth, need the storage account key for the storage account where the data will be staged .option(Constants.STAGING_STORAGE_ACCOUNT_KEY,"<storage_account_key>")
synapse-analytics Sql Data Warehouse Integrate Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md
In this article, you will learn how to use your dedicated SQL pool as an output
* Azure Stream Analytics Job - To create an Azure Stream Analytics job, follow the steps in the [Get started using Azure Stream Analytics](../../stream-analytics/stream-analytics-real-time-fraud-detection.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) tutorial to:
- 1. Create an Event Hub input
- 2. Configure and start event generator application
- 3. Provision a Stream Analytics job
- 4. Specify job input and query
+ 1. Create an Event Hubs input
+ 1. Configure and start the event generator application. This app sends data from a client through your event hub. The JSON structure of the data looks as follows:
+
+ ```json
+ {
+ RecordType: "",
+ SystemIdentity: "",
+ FileNum: ,
+ SwitchNum: "",
+ CallingNum: "",
+ CallingIMSI: "",
+ CalledNum: "",
+ CalledIMSI: "",
+ DateS: "",
+ TimeS: "",
+ TimeType: ,
+ CallPeriod: ,
+ CallingCellID: "",
+ CalledCellID: "",
+ ServiceType: "",
+ Transfer: ,
+ IncomingTrunk: "",
+ OutgoingTrunk: "",
+ MSRN: "",
+ CalledNum2: "",
+ FCIFlag: "",
+ callrecTime: "",
+ EventProcessedUtcTime: "",
+ PartitionId: ,
+ EventEnqueuedUtcTime: ""
+ }
+ ```
+
+ 1. Provision a Stream Analytics job
+ 1. Specify job input and query
* Dedicated SQL pool - To create a new dedicated SQL pool, follow the steps in the [Quickstart: Create a dedicated SQL pool](../quickstart-create-sql-pool-portal.md). ## Specify streaming output to point to your dedicated SQL pool
From the Azure portal, go to your Stream Analytics job and click on **Outputs**
### Step 2
-Click on the **Add** button and choose **Azure Synapse Analytics** from the drop down menu.
+Click on the **Add** button and choose **Azure Synapse Analytics** from the drop-down menu.
![Choose Azure Synapse Analytics](./media/sql-data-warehouse-integrate-azure-stream-analytics/sql-pool-azure-stream-analytics-output.png)
Enter the following values:
* *Subscription*: * If your dedicated SQL pool is in the same subscription as the Stream Analytics job, click on ***Select Azure Synapse Analytics from your subscriptions***. * If your dedicated SQL pool is in a different subscription, click on Provide Azure Synapse Analytics settings manually.
-* *Database*: Select the destination database from the drop down list.
+* *Database*: Select the destination database from the drop-down list.
* *User Name*: Specify the user name of an account that has write permissions for the database. * *Password*: Provide the password for the specified user account. * *Table*: Specify the name of the target table in the database.
Start the Azure Stream Analytics job. Click on the ***Start*** button on the **
Click the ***Start*** button on the start job pane. ![Click Start](./media/sql-data-warehouse-integrate-azure-stream-analytics/sqlpool-asastartconfirm.png)-
+
## Next steps For an overview of integration, see [Integrate other services](sql-data-warehouse-overview-integrate.md).
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
A serverless SQL pool allows you to analyze data in your Azure Cosmos DB contain
For querying Azure Cosmos DB, the full [SELECT](/sql/t-sql/queries/select-transact-sql?view=azure-sqldw-latest&preserve-view=true) surface area is supported through the [OPENROWSET](develop-openrowset.md) function, which includes the majority of [SQL functions and operators](overview-features.md). You can also store results of the query that reads data from Azure Cosmos DB along with data in Azure Blob Storage or Azure Data Lake Storage by using [create external table as select](develop-tables-cetas.md#cetas-in-serverless-sql-pool) (CETAS). You can't currently store serverless SQL pool query results to Azure Cosmos DB by using CETAS.
-In this article, you'll learn how to write a query with a serverless SQL pool that will query data from Azure Cosmos DB containers that are enabled with Azure Synapse Link. You can then learn more about building serverless SQL pool views over Azure Cosmos DB containers and connecting them to Power BI models in [this tutorial](./tutorial-data-analyst.md). This tutorial uses a container with an [Azure Cosmos DB well-defined schema](../../cosmos-db/analytical-store-introduction.md#schema-representation). You can also check out the learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/)
+In this article, you'll learn how to write a query with a serverless SQL pool that will query data from Azure Cosmos DB containers that are enabled with Azure Synapse Link. You can then learn more about building serverless SQL pool views over Azure Cosmos DB containers and connecting them to Power BI models in [this tutorial](./tutorial-data-analyst.md). This tutorial uses a container with an [Azure Cosmos DB well-defined schema](../../cosmos-db/analytical-store-introduction.md#schema-representation). You can also check out the Learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/training/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/)
## Prerequisites
For more information, see the following articles:
- [Create and use views in a serverless SQL pool](create-use-views.md) - [Tutorial on building serverless SQL pool views over Azure Cosmos DB and connecting them to Power BI models via DirectQuery](./tutorial-data-analyst.md) - Visit the [Azure Synapse link for Cosmos DB self-help page](resources-self-help-sql-on-demand.md#azure-cosmos-db) if you are getting some errors or experiencing performance issues.-- Checkout the learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/).
+- Check out the Learn module on how to [Query Azure Cosmos DB with SQL Serverless for Azure Synapse Analytics](/training/modules/query-azure-cosmos-db-with-sql-serverless-for-azure-synapse-analytics/).
synapse-analytics How To Query Analytical Store Spark 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md
The following capabilities are supported while interacting with Azure Cosmos DB:
* Synapse Apache Spark also allows you to ingest data into Azure Cosmos DB. It is important to note that data is always ingested into Azure Cosmos DB containers through the transactional store. When Synapse Link is enabled, any new inserts, updates, and deletes are then automatically synced to the analytical store. * Synapse Apache Spark also supports Spark structured streaming with Azure Cosmos DB as a source as well as a sink.
-The following sections walk you through the syntax of above capabilities. You can also checkout the learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/). Gestures in Azure Synapse Analytics workspace are designed to provide an easy out-of-the-box experience to get started. Gestures are visible when you right-click on an Azure Cosmos DB container in the **Data** tab of the Synapse workspace. With gestures, you can quickly generate code and tailor it to your needs. Gestures are also perfect for discovering data with a single click.
+The following sections walk you through the syntax of the above capabilities. You can also check out the Learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/training/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/). Gestures in the Azure Synapse Analytics workspace are designed to provide an easy out-of-the-box experience to get started. Gestures are visible when you right-click on an Azure Cosmos DB container in the **Data** tab of the Synapse workspace. With gestures, you can quickly generate code and tailor it to your needs. Gestures are also perfect for discovering data with a single click.
> [!IMPORTANT] > You should be aware of some constraints in the analytical schema that could lead to the unexpected behavior in data loading operations.
query.awaitTermination()
* [Samples to get started with Azure Synapse Link on GitHub](https://aka.ms/cosmosdb-synapselink-samples) * [Learn what is supported in Azure Synapse Link for Azure Cosmos DB](./concept-synapse-link-cosmos-db-support.md) * [Connect to Synapse Link for Azure Cosmos DB](../quickstart-connect-synapse-link-cosmos-db.md)
-* Checkout the learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/).
+* Check out the Learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/training/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/).
synapse-analytics How To Query Analytical Store Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md
The following capabilities are supported while interacting with Azure Cosmos DB:
* Synapse Apache Spark also allows you to ingest data into Azure Cosmos DB. It is important to note that data is always ingested into Azure Cosmos DB containers through the transactional store. When Synapse Link is enabled, any new inserts, updates, and deletes are then automatically synced to the analytical store. * Synapse Apache Spark also supports Spark structured streaming with Azure Cosmos DB as a source as well as a sink.
-The following sections walk you through the syntax of above capabilities. You can also checkout the learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/). Gestures in Azure Synapse Analytics workspace are designed to provide an easy out-of-the-box experience to get started. Gestures are visible when you right-click on an Azure Cosmos DB container in the **Data** tab of the Synapse workspace. With gestures, you can quickly generate code and tailor it to your needs. Gestures are also perfect for discovering data with a single click.
+The following sections walk you through the syntax of the above capabilities. You can also check out the Learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/training/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/). Gestures in the Azure Synapse Analytics workspace are designed to provide an easy out-of-the-box experience to get started. Gestures are visible when you right-click on an Azure Cosmos DB container in the **Data** tab of the Synapse workspace. With gestures, you can quickly generate code and tailor it to your needs. Gestures are also perfect for discovering data with a single click.
> [!IMPORTANT] > You should be aware of some constraints in the analytical schema that could lead to the unexpected behavior in data loading operations.
query.awaitTermination()
* [Samples to get started with Azure Synapse Link on GitHub](https://aka.ms/cosmosdb-synapselink-samples) * [Learn what is supported in Azure Synapse Link for Azure Cosmos DB](./concept-synapse-link-cosmos-db-support.md) * [Connect to Synapse Link for Azure Cosmos DB](../quickstart-connect-synapse-link-cosmos-db.md)
-* Checkout the learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/learn/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/).
+* Check out the Learn module on how to [Query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics](/training/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/).
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
The following updates are new to Azure Synapse Analytics this month.
To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).
-* **New KQL Learn module (2 out of 3) is live!** - The power of Kusto Query Language (KQL) is its simplicity to query structured, semi-structured, and unstructured data together. To make it easier for you to learn KQL, we are releasing Learn modules. Previously, we released [Write your first query with Kusto Query Language](/learn/modules/write-first-query-kusto-query-language/). New this month is [Gain insights from your data by using Kusto Query Language](/learn/modules/gain-insights-data-kusto-query-language/).
+* **New KQL Learn module (2 out of 3) is live!** - The power of Kusto Query Language (KQL) is its simplicity to query structured, semi-structured, and unstructured data together. To make it easier for you to learn KQL, we are releasing Learn modules. Previously, we released [Write your first query with Kusto Query Language](/training/modules/write-first-query-kusto-query-language/). New this month is [Gain insights from your data by using Kusto Query Language](/training/modules/gain-insights-data-kusto-query-language/).
KQL is the query language used to query Synapse Data Explorer big data. KQL has a fast-growing user community, with hundreds of thousands of developers, data engineers, data analysts, and students.
time-series-insights Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-power-bi.md
Azure Time Series Insights now seamlessly integrates with [Power BI](https://pow
### Learn more about integrating Azure Time Series Insights with Power BI.</br>
-> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Power-BI-integration-with-TSI/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Power-BI-integration-with-TSI/player]
## Summary
time-series-insights Overview What Is Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/overview-what-is-tsi.md
Azure Time Series Insights Gen2 is designed for ad hoc data exploration and oper
Learn more about Azure Time Series Insights Gen2.
-> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Using-Azure-Time-Series-Insights-to-create-an-Industrial-IoT-analytics-platform/player]
+> [!VIDEO https://learn.microsoft.com/Shows/Internet-of-Things-Show/Using-Azure-Time-Series-Insights-to-create-an-Industrial-IoT-analytics-platform/player]
## Definition of IoT data
time-series-insights Time Series Insights Manage Reference Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-reference-data-csharp.md
namespace CsharpTsiMsalGaSample
/** * Review the product documentation for detailed configuration steps or skip ahead and configure your environment settings. *
- * https://docs.microsoft.com/azure/time-series-insights/time-series-insights-authentication-and-authorization
+ * https://learn.microsoft.com/azure/time-series-insights/time-series-insights-authentication-and-authorization
*/ // Azure Time Series Insights environment configuration
namespace CsharpTsiMsalGaSample
{ if (AadClientApplicationId == "#PLACEHOLDER#" || AadScopes.Length == 0 || AadRedirectUri == "#PLACEHOLDER#" || AadTenantName.StartsWith("#PLACEHOLDER#")) {
- throw new Exception($"Use the link {"https://docs.microsoft.com/azure/time-series-insights/time-series-insights-get-started"} to update the values of 'AadClientApplicationId', 'AadScopes', 'AadRedirectUri', and 'AadAuthenticationAuthority'.");
+ throw new Exception($"Use the link {"https://learn.microsoft.com/azure/time-series-insights/time-series-insights-get-started"} to update the values of 'AadClientApplicationId', 'AadScopes', 'AadRedirectUri', and 'AadAuthenticationAuthority'.");
} /**
namespace CsharpTsiMsalGaSample
{ if (EnvironmentFqdn.StartsWith("#PLACEHOLDER#") || EnvironmentReferenceDataSetName == "#PLACEHOLDER#") {
- throw new Exception($"Use the link {"https://docs.microsoft.com/azure/time-series-insights/time-series-insights-authentication-and-authorization"} to update the values of 'EnvironmentFqdn' and 'EnvironmentReferenceDataSetName'.");
+ throw new Exception($"Use the link {"https://learn.microsoft.com/azure/time-series-insights/time-series-insights-authentication-and-authorization"} to update the values of 'EnvironmentFqdn' and 'EnvironmentReferenceDataSetName'.");
} Console.WriteLine("HTTP JSON Request Body: {0}", input);
traffic-manager Quickstart Create Traffic Manager Profile Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-bicep.md
One Azure resource is defined in the Bicep file:
- The Bicep file deployment creates a profile with two external endpoints. **Endpoint1** uses a target endpoint of `www.microsoft.com` with the location in **North Europe**. **Endpoint2** uses a target endpoint of `docs.microsoft.com` with the location in **South Central US**.
+ The Bicep file deployment creates a profile with two external endpoints. **Endpoint1** uses a target endpoint of `www.microsoft.com` with the location in **North Europe**. **Endpoint2** uses a target endpoint of `learn.microsoft.com` with the location in **South Central US**.
> [!NOTE] > **uniqueDNSname** needs to be a globally unique name in order for the Bicep file to deploy successfully.
Use Azure CLI or Azure PowerShell to validate the deployment.
nslookup -type=cname {relativeDnsName} ```
- You should get a canonical name of either `www.microsoft.com` or `docs.microsoft.com` depending on which region is closer to you.
+ You should get a canonical name of either `www.microsoft.com` or `learn.microsoft.com` depending on which region is closer to you.
# [PowerShell](#tab/PowerShell)
Use Azure CLI or Azure PowerShell to validate the deployment.
Resolve-DnsName -Name {relativeDnsname} | Select-Object NameHost | Select -First 1 ```
- You should get a NameHost of either `www.microsoft.com` or `docs.microsoft.com` depending on which region is closer to you.
+ You should get a NameHost of either `www.microsoft.com` or `learn.microsoft.com` depending on which region is closer to you.
-3. To check if you can resolve to the other endpoint, disable the endpoint for the target you got in the last step. Replace the **{endpointName}** with either **endpoint1** or **endpoint2** to disable the target for `www.microsoft.com` or `docs.microsoft.com` respectively.
+3. To check if you can resolve to the other endpoint, disable the endpoint for the target you got in the last step. Replace the **{endpointName}** with either **endpoint1** or **endpoint2** to disable the target for `www.microsoft.com` or `learn.microsoft.com` respectively.
# [CLI](#tab/CLI)
traffic-manager Quickstart Create Traffic Manager Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-template.md
To find more templates that are related to Azure Traffic Manager, see [Azure Qui
1. Enter the values.
- The template deployment creates a profile with two external endpoints. **Endpoint1** uses a target endpoint of `www.microsoft.com` with the location in **North Europe**. **Endpoint2** uses a target endpoint of `docs.microsoft.com` with the location in **South Central US**.
+ The template deployment creates a profile with two external endpoints. **Endpoint1** uses a target endpoint of `www.microsoft.com` with the location in **North Europe**. **Endpoint2** uses a target endpoint of `learn.microsoft.com` with the location in **South Central US**.
The resource group name is the project name with **rg** appended.
Azure PowerShell is used to deploy the template. In addition to Azure PowerShell
Resolve-DnsName -Name {relativeDNSname} | Select-Object NameHost | Select -First 1 ```
- You should get a NameHost of either `www.microsoft.com` or `docs.microsoft.com` depending on which region is closer to you.
+ You should get a NameHost of either `www.microsoft.com` or `learn.microsoft.com` depending on which region is closer to you.
-1. To check if you can resolve to the other endpoint, disable the endpoint for the target you got in the last step. Replace the **{endpointName}** with either **endpoint1** or **endpoint2** to disable the target for `www.microsoft.com` or `docs.microsoft.com` respectively.
+1. To check if you can resolve to the other endpoint, disable the endpoint for the target you got in the last step. Replace the **{endpointName}** with either **endpoint1** or **endpoint2** to disable the target for `www.microsoft.com` or `learn.microsoft.com` respectively.
```azurepowershell-interactive Disable-AzTrafficManagerEndpoint -Name {endpointName} -Type ExternalEndpoints -ProfileName ExternalEndpointExample -ResourceGroupName $resourceGroupName -Force
traffic-manager Traffic Manager Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-FAQs.md
na Previously updated : 09/19/2022 Last updated : 09/20/2022
If your client or application receives an HTTP 500 error while using Traffic Man
When a service endpoint is unresponsive, clients and applications that are using that endpoint do not reset until the DNS cache is refreshed. The duration of the cache is determined by the time-to-live (TTL) of the DNS record. For more information, see [Traffic Manager and the DNS cache](traffic-manager-how-it-works.md#traffic-manager-and-the-dns-cache).
+Also see the following related FAQs in this article:
+- [What is DNS TTL and how does it impact my users?](#what-is-dns-ttl-and-how-does-it-impact-my-users)
+- [How high or low can I set the TTL for Traffic Manager responses?](#how-high-or-low-can-i-set-the-ttl-for-traffic-manager-responses)
+- [How can I understand the volume of queries coming to my profile?](#how-can-i-understand-the-volume-of-queries-coming-to-my-profile)
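Because the DNS TTL controls how long clients cache a Traffic Manager answer, it can be useful to inspect it directly. A minimal sketch, assuming a Windows client and a placeholder profile name:

```powershell
# Hedged sketch: view the CNAME answer and TTL that clients cache for a Traffic Manager
# profile. The profile name is a placeholder.
Resolve-DnsName -Name "contoso.trafficmanager.net" -Type CNAME |
    Select-Object Name, Type, TTL, NameHost
```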
+ ### What is the performance impact of using Traffic Manager? As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. Since clients connect to your service endpoints directly, thereΓÇÖs no performance impact incurred when using Traffic Manager once the connection is established.
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
The following list shows websites that are known to work with MMR. MMR is suppos
- Facebook - Fox Sports - IMDB-- [Microsoft Learn](/learn)
+- [Microsoft Learn training](/training)
- LinkedIn Learning - Fox Weather - Yammer
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=/azu
| `*.servicebus.windows.net` | 443 | Troubleshooting data | All | | `go.microsoft.com` | 443 | Microsoft FWLinks | All | | `aka.ms` | 443 | Microsoft URL shortener | All |
-| `docs.microsoft.com` | 443 | Documentation | All |
+| `learn.microsoft.com` | 443 | Documentation | All |
| `privacy.microsoft.com` | 443 | Privacy statement | All | | `query.prod.cms.rt.microsoft.com` | 443 | Client updates | Windows Desktop |
Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=/azu
| `*.servicebus.usgovcloudapi.net` | 443 | Troubleshooting data | All | | `go.microsoft.com` | 443 | Microsoft FWLinks | All | | `aka.ms` | 443 | Microsoft URL shortener | All |
-| `docs.microsoft.com` | 443 | Documentation | All |
+| `learn.microsoft.com` | 443 | Documentation | All |
| `privacy.microsoft.com` | 443 | Privacy statement | All | | `query.prod.cms.rt.microsoft.com` | 443 | Client updates | Windows Desktop |
These URLs only correspond to client sites and resources. This list doesn't incl
## Next steps
-To learn how to unblock these URLs in Azure Firewall for your Azure Virtual Desktop deployment, see [Use Azure Firewall to protect Azure Virtual Desktop](../firewall/protect-azure-virtual-desktop.md).
+To learn how to unblock these URLs in Azure Firewall for your Azure Virtual Desktop deployment, see [Use Azure Firewall to protect Azure Virtual Desktop](../firewall/protect-azure-virtual-desktop.md).
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-scaling-script.md
Finally, you'll need to create the Azure Logic App and set up an execution sched
$AutoAccount = Get-AzAutomationAccount | Out-GridView -OutputMode:Single -Title "Select the Azure Automation account" $AutoAccountConnection = Get-AzAutomationConnection -ResourceGroupName $AutoAccount.ResourceGroupName -AutomationAccountName $AutoAccount.AutomationAccountName | Out-GridView -OutputMode:Single -Title "Select the Azure RunAs connection asset"
- $WebhookURI = Read-Host -Prompt "Enter the webhook URI that has already been generated for this Azure Automation account. The URI is stored as encrypted in the above Automation Account variable. To retrieve the value, see https://docs.microsoft.com/azure/automation/shared-resources/variables?tabs=azure-powershell#powershell-cmdlets-to-access-variables"
+ $WebhookURI = Read-Host -Prompt "Enter the webhook URI that has already been generated for this Azure Automation account. The URI is stored as encrypted in the above Automation Account variable. To retrieve the value, see https://learn.microsoft.com/azure/automation/shared-resources/variables?tabs=azure-powershell#powershell-cmdlets-to-access-variables"
$Params = @{ "AADTenantId" = $AADTenantId # Optional. If not specified, it will use the current Azure context
virtual-desktop Troubleshoot Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client.md
Title: Troubleshoot Remote Desktop client Azure Virtual Desktop - Azure
description: How to resolve issues with the Remote Desktop client when connecting to Azure Virtual Desktop. Previously updated : 09/15/2022 Last updated : 09/20/2022
This article describes common issues with the Remote Desktop client and how to fix them.
-## Remote Desktop client for Windows 7 or Windows 10 stops responding or cannot be opened
+## All clients
-Starting with version 1.2.790, you can reset the user data from the About page or using a command.
+In this section you'll find troubleshooting guidance for all Remote Desktop clients.
-Use the following command to remove your user data, restore default settings and unsubscribe from all Workspaces.
+### Remote Desktop Client doesn't show my resources
+
+First, check the Azure Active Directory account you're using. If you've already signed in with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window.
+
+If you're using Azure Virtual Desktop (classic), use the web client link in [this article](./virtual-desktop-fall-2019/connect-web-2019.md) to connect to your resources.
+
+If that doesn't work, make sure your app group is associated with a workspace.
+
+## Windows client
+
+In this section you'll find troubleshooting guidance for the Remote Desktop client for Windows.
+
+### Remote Desktop client for Windows stops responding or cannot be opened
+
+If the Remote Desktop client for Windows stops responding or cannot be opened, you may need to reset your client. Starting with version 1.2.790, you can reset the user data from the About page or using a command.
+
+You can also use the following command to remove your user data, restore default settings and unsubscribe from all Workspaces. From a Command Prompt or PowerShell session, run the following command:
```cmd
msrdcw.exe /reset [/f]
```
If you're using an earlier version of the Remote Desktop client, we recommend you uninstall and reinstall the client.
-## Web client won't open
+### Authentication issues while using an N SKU
-First, test your internet connection by opening another website in your browser; for example, [www.bing.com](https://www.bing.com).
+Authentication issues can happen because you're using an *N* SKU of Windows without the media features pack. To resolve this issue, [install the media features pack](https://support.microsoft.com/topic/media-feature-pack-list-for-windows-n-editions-c1c6fffa-d052-8338-7a79-a4bb980a700a).
-Use **nslookup** to confirm DNS can resolve the FQDN:
+### Authentication issues when TLS 1.2 not enabled
-```cmd
-nslookup rdweb.wvd.microsoft.com
-```
+Authentication issues can happen when your client doesn't have TLS 1.2 enabled. This is most likely to occur on Windows 7, where TLS 1.2 isn't enabled by default. To enable TLS 1.2 on Windows 7, you need to set the following registry values:
-Try connecting with another client, like Remote Desktop client for Windows 7 or Windows 10, and check to see if you can open the web client.
+- `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client`
+ - "DisabledByDefault": **00000000**
+ - "Enabled": **00000001**
+- `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server`
+ - "DisabledByDefault": **00000000**
+ - "Enabled": **00000001**
+- `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319`
+ - "SystemDefaultTlsVersions": **00000001**
+ - "SchUseStrongCrypto": **00000001**
-### Can't open other websites while connected to the web client
+You can configure these registry values by running the following commands from an elevated PowerShell session:
-If you can't open other websites while you're connected to the web client, there might be network connection problems or a network outage. We recommend you contact network support.
+```powershell
+New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force
+New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force
+New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force
-### Nslookup can't resolve the name
+New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force
+New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force
+New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force
-If nslookup can't resolve the name, then there might be network connection problems or a network outage. We recommend you contact network support.
+New-Item 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Force
+New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SystemDefaultTlsVersions' -Value '1' -PropertyType 'DWORD' -Force
+New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -PropertyType 'DWORD' -Force
+```
-### Your client can't connect but other clients on your network can connect
+### Windows client blocks Azure Virtual Desktop (classic) feed
-If your browser starts acting up or stops working while you're using the web client, follow these instructions to troubleshoot it:
+If the Windows client feed won't show Azure Virtual Desktop (classic) apps, follow these instructions as an admin of Azure Virtual Desktop in Azure:
-1. Restart the browser.
-2. Clear browser cookies. See [How to delete cookie files in Internet Explorer](https://support.microsoft.com/help/278835/how-to-delete-cookie-files-in-internet-explorer).
-3. Clear browser cache. See [clear browser cache for your browser](https://binged.it/2RKyfdU).
-4. Open browser in Private mode.
+1. Check if the Conditional Access policy includes the app IDs associated with Azure Virtual Desktop (classic).
+2. Check if the Conditional Access policy blocks all access except Azure Virtual Desktop (classic) app IDs. If so, you'll need to add the app ID **9cdead84-a844-4324-93f2-b2e6bb768d07** to the policy to allow the client to discover the feeds.
-## Client doesn't show my resources
+If you can't find the app ID 9cdead84-a844-4324-93f2-b2e6bb768d07 in the list, you'll need to re-register the Azure Virtual Desktop resource provider. To re-register the resource provider:
-First, check the Azure Active Directory account you're using. If you've already signed in with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window.
+1. Sign in to the Azure portal.
+2. Go to **Subscription**, then select your subscription.
+3. In the menu on the left side of the page, select **Resource provider**.
+4. Find and select **Microsoft.DesktopVirtualization**, then select **Re-register**.
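As an alternative to the preceding portal steps, here's a minimal Azure CLI sketch for re-registering the resource provider, assuming you're signed in to the affected subscription:

```azurecli-interactive
# Running register against an already-registered provider effectively re-registers it.
az provider register --namespace Microsoft.DesktopVirtualization

# Check the registration state until it reports "Registered".
az provider show --namespace Microsoft.DesktopVirtualization --query registrationState --output tsv
```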
-If you're using Azure Virtual Desktop (classic), use the web client link in [this article](./virtual-desktop-fall-2019/connect-web-2019.md) to connect to your resources.
+## Web client
-If that doesn't work, make sure your app group is associated with a workspace.
+In this section you'll find troubleshooting guidance for the Remote Desktop Web client.
-## Web client stops responding or disconnects
+### Web client stops responding or disconnects
Try connecting using another browser or client.
-### Other browsers and clients also malfunction or fail to open
+### Web client won't open
+
+First, test your internet connection by opening another website in your browser, for example [www.bing.com](https://www.bing.com).
-If issues continue even after you've switched browsers, the problem may not be with your browser, but with your network. We recommend you contact network support.
+Next, open a Command Prompt or PowerShell session and use **nslookup** to confirm DNS can resolve the FQDN by running the following command:
-## Web client keeps prompting for credentials
+```cmd
+nslookup rdweb.wvd.microsoft.com
+```
+
+If either of these checks fails, you most likely have a problem with your network connection. We recommend you contact your network admin for help.
+
+### Your client can't connect but other clients on your network can connect
+
+If your browser starts acting up or stops working while you're using the web client, try these actions to resolve it:
+
+1. Restart the browser.
+2. Clear browser cookies. See [How to delete cookie files in Internet Explorer](https://support.microsoft.com/help/278835/how-to-delete-cookie-files-in-internet-explorer).
+3. Clear browser cache. See [clear browser cache for your browser](https://binged.it/2RKyfdU).
+4. Open the browser in InPrivate mode.
+
+If issues continue even after you've switched browsers, the problem may not be with your browser, but with your network. We recommend you contact your network admin for help.
+
+### Web client keeps prompting for credentials
If the Web client keeps prompting for credentials, follow these instructions:
4. Clear browser cache. For more information, see [Clear browser cache for your browser](https://binged.it/2RKyfdU). 5. Open your browser in Private mode.
-## Web client
- ### Web client out of memory When using the web client, if you see the error message "Oops, we couldn't connect to 'SessionDesktop,'" (where *SessionDesktop* is the name of the resource you're connecting to), then the web client has run out of memory. To resolve this issue, you'll need to either reduce the size of the browser window or disconnect all existing connections and try connecting again. If you still encounter this issue after doing these things, ask your local admin or tech support for help.
-#### Authentication issues while using an N SKU
-
-This issue may also be happening because you're using an N SKU without a media features pack. To resolve this issue, [install the media features pack](https://support.microsoft.com/topic/media-feature-pack-list-for-windows-n-editions-c1c6fffa-d052-8338-7a79-a4bb980a700a).
-
-#### Authentication issues when TLS 1.2 not enabled
-
-Authentication issues can also happen when your client doesn't have TLS 1.2 enabled. To learn how to enable TLS 1.2 on a compatible client, see [Enable TLS 1.2 on client or server operating systems](/troubleshoot/azure/active-directory/enable-support-tls-environment?tabs=azure-monitor#enable-tls-12-on-client-or-server-operating-systems).
-
-## Windows client blocks Azure Virtual Desktop (classic) feed
-
-If the Windows client feed won't show Azure Virtual Desktop (classic) apps, follow these instructions:
-
-1. Check if the Conditional Access policy includes the app IDs associated with Azure Virtual Desktop (classic).
-2. Check if the Conditional Access policy blocks all access except Azure Virtual Desktop (classic) app IDs. If so, you'll need to add the app ID **9cdead84-a844-4324-93f2-b2e6bb768d07** to the policy to allow the client to discover the feeds.
-
-If you can't find the app ID 9cdead84-a844-4324-93f2-b2e6bb768d07 in the list, you'll need to register the Azure Virtual Desktop resource provider. To register the resource provider:
-
-1. Sign in to the Azure portal.
-2. Go to **Subscription**, then select your subscription.
-3. In the menu on the left side of the page, select **Resource provider**.
-4. Find and select **Microsoft.DesktopVirtualization**, then select **Re-register**.
- ## Next steps - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).
virtual-desktop Windows 11 Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/windows-11-language-packs.md
The second option is more efficient in terms of resources and cost, but requires
Before you can add languages to a Windows 11 Enterprise VM, you'll need to have the following things ready: -- An Azure VM with Windows 11 Enterprise installed-- A Language and Optional Features (LoF) ISO. You can download the ISO at [Windows 11 Language and Optional Features LoF ISO](https://software-download.microsoft.com/download/sg/22000.1.210604-1628.co_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)-- An Azure Files share or a file share on a Windows File Server VM
+- An Azure VM with Windows 11 Enterprise installed
+- A Language and Optional Features ISO and Inbox Apps ISO of the OS version the image uses. You can download them here:
+ - Language and Optional Features ISO:
+ - [Windows 11, version 21H2 Language and Optional Features ISO](https://software-download.microsoft.com/download/sg/22000.1.210604-1628.co_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)
+ - [Windows 11, version 22H2 Language and Optional Features ISO](https://software-static.download.prss.microsoft.com/dbazure/988969d5-f34g-4e03-ac9d-1f9786c66749/22621.1.220506-1250.ni_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)
+ - Inbox Apps ISO:
+ - [Windows 11, version 21H2 Inbox Apps ISO](https://software-download.microsoft.com/download/pr/22000.194.210911-1543.co_release_svc_prod1_amd64fre_InboxApps.iso)
+ - [Windows 11, version 22H2 Inbox Apps ISO](https://software-static.download.prss.microsoft.com/dbazure/988969d5-f34g-4e03-ac9d-1f9786c66749/22621.1.220506-1250.ni_release_amd64fre_InboxApps.iso)
+- An Azure Files share or a file share on a Windows File Server VM
>[!NOTE] >The file share repository must be accessible from the Azure VM that you're going to use to create the custom image.
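If you don't already have a file share for the language content, here's a minimal Azure CLI sketch for creating one; the storage account and share names below are hypothetical:

```azurecli-interactive
# Create a storage account and an Azure Files share to hold the language content.
az storage account create \
  --resource-group MyResourceGroup \
  --name mylangstorage123 \
  --location eastus \
  --sku Standard_LRS

az storage share-rm create \
  --resource-group MyResourceGroup \
  --storage-account mylangstorage123 \
  --name languagepackages
```

As the note above says, make sure the VM you use to build the custom image can reach this share.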
virtual-machine-scale-sets Use Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/use-spot.md
Pricing for Azure Spot Virtual Machine instances is variable, based on region an
With variable pricing, you have the option to set a max price, in US dollars (USD), using up to five decimal places. For example, the value `0.98765` would be a max price of $0.98765 USD per hour. If you set the max price to be `-1`, the instance won't be evicted based on price. The price for the instance will be the current price for Azure Spot Virtual Machine or the price for a standard instance, whichever is less, as long as there is capacity and quota available. -
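As a minimal sketch of how this maps to the CLI, assuming hypothetical resource names and the `Ubuntu2204` image alias, you could create a Spot scale set that is never evicted based on price like this:

```azurecli-interactive
# Create a Spot virtual machine scale set; --max-price -1 means instances
# won't be evicted for price reasons (capacity-based eviction can still occur).
az vmss create \
  --resource-group MyResourceGroup \
  --name MySpotScaleSet \
  --image Ubuntu2204 \
  --instance-count 2 \
  --priority Spot \
  --max-price -1 \
  --eviction-policy Deallocate \
  --admin-username azureuser \
  --generate-ssh-keys
```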
## Eviction policy
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Azure Load Balancer Standard SKU | Yes | Yes | Yes | | Application Gateway | Yes | Yes | Yes | | Infiniband Networking | No | Yes, single placement group only | Yes |
-| Basic SLB | No | Yes | Yes |
+| Basic LB | No | Yes | Yes |
| Network Port Forwarding | Yes (NAT Rules for individual instances) | Yes (NAT Pool) | Yes (NAT Rules for individual instances) | ### Backup and recovery 
virtual-machines Constrained Vcpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/constrained-vcpu.md
> [!TIP] > Try the **[Virtual Machine selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-Some database workloads like SQL Server require high memory, storage, and I/O bandwidth, but not a high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can constrain the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory, storage, and I/O bandwidth.
+Some database workloads like SQL Server require high memory, storage, and I/O bandwidth, but not a high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can lower the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory, storage, and I/O bandwidth.
-The vCPU count can be constrained to one half or one quarter of the original VM size. These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier for you to identify.
+The available vCPU count can be reduced to one half or one quarter of the original VM specification. These new VM sizes have a suffix that specifies the number of available vCPUs to make them easier for you to identify. There are no additional cores available that can be used by the VM.
-For example, the current VM size Standard_GS5 comes with 32 vCPUs, 448 GB RAM, 64 disks (up to 256 TB), and 80,000 IOPs or 2 GB/s of I/O bandwidth. The new VM sizes Standard_GS5-16 and Standard_GS5-8 comes with 16 and 8 active vCPUs respectively, while maintaining the rest of the specs of the Standard_GS5 for memory, storage, and I/O bandwidth.
+For example, the current VM size Standard_E32s_v5 comes with 32 vCPUs, 256 GiB RAM, 32 disks, and 80,000 IOPs or 2 GB/s of I/O bandwidth. The new VM sizes Standard_E32-16s_v5 and Standard_E32-8s_v5 come with 16 and 8 active vCPUs, respectively, while maintaining the rest of the specs of the Standard_E32s_v5 for memory, storage, and I/O bandwidth.
-The licensing fees charged for SQL Server are constrained to the new vCPU count, and other products should be charged based on the new vCPU count. This results in a 50% to 75% increase in the ratio of the VM specs to active (billable) vCPUs. These new VM sizes allow customer workloads to use the same memory, storage, and I/O bandwidth while optimizing their software licensing cost. At this time, the compute cost, which includes OS licensing, remains the same one as the original size. For more information, see [Azure VM sizes for more cost-effective database workloads](https://azure.microsoft.com/blog/announcing-new-azure-vm-sizes-for-more-cost-effective-database-workloads/).
+The licensing fees charged for SQL Server are based on the available vCPU count. Third-party products should be licensed based on the available vCPU count, which represents the maximum that can be used. This results in a 50% to 75% increase in the ratio of the VM specs to available (billable) vCPUs. These new VM sizes allow customer workloads to use the same memory, storage, and I/O bandwidth while optimizing their software licensing cost. At this time, the compute cost, which includes OS licensing, remains the same as for the original size. For more information, see [Azure VM sizes for more cost-effective database workloads](https://azure.microsoft.com/blog/announcing-new-azure-vm-sizes-for-more-cost-effective-database-workloads/).
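As a minimal sketch, assuming hypothetical resource names and the `Win2022Datacenter` image alias, deploying one of these constrained-vCPU sizes works the same way as any other size; only the `--size` value changes:

```azurecli-interactive
# Confirm the constrained-core size is available in your region.
az vm list-sizes --location eastus --query "[?name=='Standard_E32-16s_v5']" --output table

# Create the VM with the constrained-core size (you'll be prompted for the admin password).
az vm create \
  --resource-group MyResourceGroup \
  --name sqlvm01 \
  --image Win2022Datacenter \
  --size Standard_E32-16s_v5 \
  --admin-username azureuser
```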
| Name | vCPU | Specs |
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
## New Features Many new features like ARM64, Accelerated Networking, TrustedVMSupported etc. are only supported through Azure Compute Gallery and not available for 'Managed images'. For a complete list of new features available through Azure Compute Gallery, please refer
-https://docs.microsoft.com/cli/azure/sig/image-version?view=azure-cli-latest#az-sig-image-version-create
+https://learn.microsoft.com/cli/azure/sig/image-version?view=azure-cli-latest#az-sig-image-version-create
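For reference, here's a minimal sketch of creating an image version in a gallery from a managed image with that CLI command; all names and the subscription ID below are placeholders:

```azurecli-interactive
az sig image-version create \
  --resource-group MyResourceGroup \
  --gallery-name MyGallery \
  --gallery-image-definition MyImageDefinition \
  --gallery-image-version 1.0.0 \
  --managed-image /subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/images/MyManagedImage
```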
## Next steps
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
# Explore Azure Hybrid Benefit for bring-your-own-subscription Linux virtual machines
-Azure Hybrid Benefit provides software updates and integrated support directly from Azure for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines. Azure Hybrid Benefit for bring-your-own-subscription (BYOS) virtual machines is a licensing benefit that's currently in public preview. It lets you switch RHEL and SLES BYOS virtual machines generated from custom on-premises images or from Azure Marketplace to pay-as-you-go billing.
+Azure Hybrid Benefit provides software updates and integrated support directly from Azure for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines. Azure Hybrid Benefit for bring-your-own-subscription (BYOS) virtual machines is a licensing benefit that lets you switch RHEL and SLES BYOS virtual machines generated from custom on-premises images or from Azure Marketplace to pay-as-you-go billing.
>[!IMPORTANT] > To do the reverse and switch from a RHEL pay-as-you-go virtual machine or SLES pay-as-you-go virtual machine to a BYOS virtual machine, see [Explore Azure Hybrid Benefit for pay-as-you-go Linux virtual machines](./azure-hybrid-benefit-linux.md).
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
The file name or location is incorrect, or the location isn't reachable.
Ensure that the file is reachable. Verify that the name and location are correct.
+### Authorization error creating disk
+
+The Azure Image Builder build fails with an authorization error that looks like the following:
+
+#### Error
+
+```text
+Attempting to deploy created Image template in Azure fails with an 'The client '6df325020-fe22-4e39-bd69-10873965ac04' with object id '6df325020-fe22-4e39-bd69-10873965ac04' does not have authorization to perform action 'Microsoft.Compute/disks/write' over scope '/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/disks/proxyVmDiskWin_<timestamp>' or the scope is invalid. If access was recently granted, please refresh your credentials.'
+```
+#### Cause
+
+This error occurs when you specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image.
+
+#### Solution
+
+You'll need to assign the Contributor role on the resource group to the service principal that corresponds to Azure Image Builder's first-party app, using the CLI commands or portal instructions below.
+
+First, validate that the service principal is associated with Azure Image Builder's first party app by using the following CLI command:
+```azurecli-interactive
+az ad sp show --id {servicePrincipalName, or objectId}
+```
+
+Then, to implement this solution using CLI, use the following command:
+```azurecli-interactive
+az role assignment create -g {ResourceGroupName} --assignee {AibrpSpOid} --role Contributor
+```
+
+To implement this solution in portal, follow the instructions in this documentation: [Assign Azure roles using the Azure portal - Azure RBAC](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current).
+
+For [Step 1: Identify the needed scope](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-1-identify-the-needed-scope): The needed scope is your resource group.
+
+For [Step 3: Select the appropriate role](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-3-select-the-appropriate-role): The role is Contributor.
+
+For [Step 4: Select who needs access](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-4-select-who-needs-access): Select the member "Azure Virtual Machine Image Builder".
+
+Then proceed to [Step 6: Assign role](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-6-assign-role) to assign the role.
+ ## Troubleshoot build failures For image build failures, get the error from the `lastrunstatus`, and then review the details in the *customization.log* file.
virtual-machines Tutorial Devops Azure Pipelines Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-devops-azure-pipelines-classic.md
In each iteration, a rolling deployment replaces instances of an application's p
Using **Continuous-delivery**, you can configure rolling updates to your virtual machines within the Azure portal.
+> [!IMPORTANT]
+> Virtual Machine's Continuous delivery setting will be retired on March 31, 2023. [Learn more](/azure/virtual-machines/linux/tutorial-devops-azure-pipelines-classic?source=recommendations#retirement)
++ 1. Sign in to [Azure portal](https://portal.azure.com/) and navigate to a virtual machine. 1. Select **Continuous delivery**, and then select **Configure**.
Using **Continuous-delivery**, you can configure rolling updates to your virtual
- [Configure the canary deployment strategy](./tutorial-azure-devops-canary-strategy.md) - [Configure the blue-green deployment strategy](./tutorial-azure-devops-blue-green-strategy.md)+
+## Retirement
+
+The Continuous delivery setting of Virtual Machines will be retired on March 31, 2023. Switch to using Azure DevOps directly to create customized pipelines for deployment to Azure VMs. The Azure DevOps release pipeline [Stage Templates](/azure/devops/pipelines/release/env-templates) and [Deployment Groups](/azure/devops/pipelines/process/deployment-group-phases) features provide similar experiences.
+
+### Migration Steps
+
+There is no migration required because the VM continuous delivery experience doesn't store any information itself; it only helps users with their Day 0 getting-started experience on Azure and Azure DevOps. Users will still be able to perform all operations from Azure DevOps after retirement. You won't be able to create or view pipelines from the Azure portal anymore.
+
+### FAQ
+
+Where can I set up my CD pipeline after this experience is deprecated?
+
+You won't be able to view or create Azure DevOps pipelines from an Azure portal Virtual Machine blade after retirement. You can still go to the Azure DevOps portal to view or update pipelines.
+
+Will I lose my earlier configured pipelines?
+
+No. Your pipelines will still be available in Azure DevOps.
+
+
+How can I configure different deployment strategies?
+
+The current experience uses [deployment groups](/azure/devops/pipelines/process/deployment-group-phases) to create deployment strategies. You can use deployment groups or release pipeline [Stage Templates](/azure/devops/pipelines/release/env-templates) to build your pipeline with templates.
++
virtual-machines Nva10v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nva10v5-series.md
Each virtual machine instance in NVadsA10v5-series comes with a GRID license. Th
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU partition | GPU memory: GiB | Max data disks | Max NICs / Expected network bandwidth (MBps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU partition | GPU memory: GiB | Max data disks | Max NICs / Expected network bandwidth (Mbps) |
| | | | | | | | | | Standard_NV6ads_A10_v5 |6 |55 |180 | 1/6 | 4 | 4 | 2 / 5000 | | Standard_NV12ads_A10_v5 |12 |110 |360 | 1/3 | 8 | 4 | 2 / 10000 |
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
Sharing images to the community is a new capability in [Azure Compute Gallery](.
> [!IMPORTANT] > Azure Compute Gallery ΓÇô community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
-> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
+> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). We will follow up within 5 business days after you submit the form. Creating VMs from the community gallery is open to all Azure users.
> During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter); you currently can't migrate a regular gallery to a community gallery. >
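Here's a minimal CLI sketch of creating a gallery with community sharing enabled during the preview, based on the `--permissions community` parameter called out above; the gallery name and publisher values are placeholders, and the publisher details are required for community galleries:

```azurecli-interactive
az sig create \
  --resource-group MyResourceGroup \
  --gallery-name MyCommunityGallery \
  --permissions community \
  --publisher-uri https://contoso.com \
  --publisher-email images@contoso.com \
  --eula https://contoso.com/eula \
  --public-name-prefix contoso
```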
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
This article covers how to share an Azure Compute Gallery with specific subscrip
> [!IMPORTANT] > Azure Compute Gallery ΓÇô direct shared gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
-> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). No additional access required to consume images, Creating VMs from a direct shared gallery is open to all Azure users in the target subscription or tenant the gallery is shared with.
+> To publish images to a direct shared gallery during the preview, you need to register at [https://aka.ms/directsharedgallery-preview](https://aka.ms/directsharedgallery-preview). We will follow up within 5 business days after you submit the form. No additional access is required to consume images; creating VMs from a direct shared gallery is open to all Azure users in the target subscription or tenant the gallery is shared with.
> During the preview, you need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
During the preview:
- A direct shared gallery can't contain encrypted image versions. Encrypted images can't be created within a gallery that is directly shared. - Only the owner of a subscription, or a user or service principal assigned to the `Compute Gallery Sharing Admin` role at the subscription or gallery level will be able to enable group-based sharing. - You need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery, the property can't currently be updated.
+- TrustedLaunch and ConfidentialVM are not supported
- PowerShell, Ansible, and Terraform aren't supported at this time. - Not available in Government clouds - To consume direct shared images in the target subscription, you can find them only in the VM/VMSS creation blade.
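Here's a minimal CLI sketch of creating a gallery with group-based sharing and then sharing it with a subscription and tenant during the preview; the names and IDs below are placeholders:

```azurecli-interactive
# Create the gallery with group-based sharing enabled (can't be changed on an existing gallery).
az sig create \
  --resource-group MyResourceGroup \
  --gallery-name MyDirectSharedGallery \
  --permissions groups

# Share the gallery with a target subscription and tenant.
az sig share add \
  --resource-group MyResourceGroup \
  --gallery-name MyDirectSharedGallery \
  --subscription-ids <target-subscription-id> \
  --tenant-ids <target-tenant-id>
```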
virtual-machines Compiling Scaling Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/compiling-scaling-applications.md
gcc $(OPTIMIZATIONS) $(OMP) $(STACK) $(STREAM_PARAMETERS) stream.c -o stream.gcc
## Next steps -- Test your knowledge with a [learning module on optimizing HPC applications on Azure](/learn/modules/optimize-tightly-coupled-hpc-apps/).
+- Test your knowledge with a [learning module on optimizing HPC applications on Azure](/training/modules/optimize-tightly-coupled-hpc-apps/).
- Review the [HBv3-series overview](hbv3-series-overview.md) and [HC-series overview](hc-series-overview.md). - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).-- Learn more about [HPC](/azure/architecture/topics/high-performance-computing/) on Azure.
+- Learn more about [HPC](/azure/architecture/topics/high-performance-computing/) on Azure.
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/overview.md
Fourth, for performance and scalability, optimally configure the workloads by fo
- Learn about [configuring and optimizing](configure.md) the InfiniBand enabled [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) VMs. - Review the [HBv3-series overview](hb-series-overview.md) and [HC-series overview](hc-series-overview.md) to learn about optimally configuring workloads for performance and scalability. - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).-- Test your knowledge with a [learning module on optimizing HPC applications on Azure](/learn/modules/optimize-tightly-coupled-hpc-apps/).-- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
+- Test your knowledge with a [learning module on optimizing HPC applications on Azure](/training/modules/optimize-tightly-coupled-hpc-apps/).
+- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Jboss Eap Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-marketplace-image.md
- Title: Azure Marketplace offer for Red Hat JBoss EAP on Azure Red Hat Enterprise Linux Virtual Machine (VM) and virtual machine scale sets
-description: How to deploy Red Hat JBoss EAP on Azure Red Hat Enterprise Linux (RHEL) VM and virtual machine scale sets using Azure Marketplace offer.
------ Previously updated : 05/25/2021--
-# Deploy Red Hat JBoss Enterprise Platform (EAP) on Azure VMs and virtual machine scale sets using the Azure Marketplace offer
-
-The Azure Marketplace offers for [Red Hat JBoss Enterprise Application Platform](https://www.redhat.com/en/technologies/jboss-middleware/application-platform) on Azure [Red Hat Enterprise Linux (RHEL)](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux) is a joint solution from [Red Hat](https://www.redhat.com/) and Microsoft. Red Hat is a leading open-source solutions provider and contributor including the [Java](https://www.java.com/) standards, [OpenJDK](https://openjdk.java.net/), [MicroProfile](https://microprofile.io/), [Jakarta EE](https://jakarta.ee/), and [Quarkus](https://quarkus.io/). JBoss EAP is a leading Java application server platform that is Java EE Certified and Jakarta EE Compliant in both Web Profile and Full Platform. Every JBoss EAP release is tested and supported on a various market-leading operating systems, Java Virtual Machines (JVMs), and database combinations. JBoss EAP and RHEL include everything you need to build, run, deploy, and manage enterprise Java applications in any environment. It includes on-premises, virtual environments, and in private, public, or hybrid clouds. The joint solution by Red Hat and Microsoft includes integrated support and software licensing flexibility.
-
-## JBoss EAP on Azure-Integrated support
-
-The JBoss EAP on Azure Marketplace offer is a joint solution by Red Hat and Microsoft and includes integrated support and software licensing flexibility. You can reach both Microsoft and Red Hat to file your support tickets. We'll share and resolve the issues jointly so that you don't have to file multiple tickets for each vendor. Integrated support covers all Azure infrastructure and Red Hat application server level support issues.
-
-## Prerequisites
-
-* An Azure Account with an Active Subscription - If you don't have an Azure subscription, you can activate your [Visual Studio Subscription subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) (formerly MSDN) or [create an account for free](https://azure.microsoft.com/pricing/free-trial).
-
-* JBoss EAP installation - You need to have a Red Hat Account with Red Hat Subscription Management (RHSM) entitlement for JBoss EAP. The entitlement will let you download the Red Hat tested and certified JBoss EAP version. If you don't have EAP entitlement, sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). Once registered, you can find the necessary credentials (Pool IDs) at the [Red Hat Customer Portal](https://access.redhat.com/management/).
-
-* RHEL options - Choose between Pay-As-You-Go (PAYG) or Bring-Your-Own-Subscription (BYOS). With BYOS, you need to activate your [Red Hat Cloud Access](https://access.redhat.com/) [RHEL Gold Image](https://azure.microsoft.com/updates/red-hat-enterprise-linux-gold-images-now-available-on-azure/) before deploying the Marketplace offer with solutions template. Follow [these instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/index) to enable RHEL Gold images for use on Microsoft Azure.
-
-* The Azure CLI.
-
-## Azure Marketplace offer subscription options
-
-The Azure Marketplace offer of JBoss EAP on RHEL will install and provision an Azure VM/virtual machine scale sets deployment in less than 10 minutes. You can access these offers from the [Azure Marketplace](https://azuremarketplace.microsoft.com/)
-
-The Marketplace offer includes various combinations of EAP and RHEL versions to support your requirements. JBoss EAP is always BYOS but for RHEL operating system (OS) you can choose between BYOS or PAYG. The offer includes plan configuration options for JBoss EAP on RHEL as stand-alone, clustered VMs, or clustered virtual machine scale sets.
-
-The six available plans are:
--- Boss EAP 7.3 on RHEL 8.3 Stand-alone VM (PAYG)-- JBoss EAP 7.3 on RHEL 8.3 Stand-alone VM (BYOS)-- JBoss EAP 7.3 on RHEL 8.3 Clustered VM (PAYG)-- JBoss EAP 7.3 on RHEL 8.3 Clustered VM (BYOS)-- JBoss EAP 7.3 on RHEL 8.3 Clustered virtual machine scale set (PAYG)-- JBoss EAP 7.3 on RHEL 8.3 Clustered virtual machine scale set (BYOS)-
-## Using RHEL OS with PAYG model
-
-The Azure Marketplace offer allows you to deploy RHEL as on-demand PAYG VMs/virtual machine scale sets. PAYG plans will have extra hourly RHEL subscription charge on top of the normal Azure infrastructure compute, network, and storage costs.
-
-Check out [Red Hat Enterprise Linux pricing](https://azure.microsoft.com/pricing/details/virtual-machines/red-hat/) for details on the PAYG model. To use RHEL in PAYG model, you will need an Azure Subscription. ([RHEL on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-20190605) offers require a payment method to be specified in the Azure Subscription).
-
-## Using RHEL OS with BYOS model
-
-To use RHEL as BYOS VMs/virtual machine scale sets, you're required to have a valid Red Hat subscription with entitlements to use RHEL in Azure. These JBoss EAP on RHEL BYOS plans are available as [Azure private offers](../../../marketplace/private-offers.md). You MUST complete the following prerequisites to deploy a RHEL BYOS offer plan from the Azure Marketplace.
-
-1. Ensure you have RHEL OS and JBoss EAP entitlements attached to your Red Hat subscription.
-2. Authorize your Azure subscription ID to use RHEL BYOS images. Follow the [Red Hat Subscription Management (RHSM) documentation](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/index) to complete the process, which includes these steps:
- 1. Enable Microsoft Azure as a provider in your Red Hat Cloud Access Dashboard.
- 2. Add your Azure subscription IDs.
- 3. Enable new products for Cloud Access on Microsoft Azure.
- 4. Activate Red Hat Gold Images for your Azure Subscription. For more information, read the chapter on [Enabling and maintaining subscriptions for Cloud Access](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/understanding-gold-images_cloud-access#using-gold-images-on-azure_cloud-access#using-gold-images-on-azure_cloud-access) for more details.
- 5. Wait for Red Hat Gold Images to be available in your Azure subscription. These Gold Images are typically available within 3 hours of submission or less as Azure Private offers.
-
-3. Accept the Azure Marketplace Terms and Conditions (T&C) for RHEL BYOS Images. To accept, run the [Azure CLI](/cli/azure/install-azure-cli) commands, as given below. For more information, see the [RHEL BYOS Gold Images in Azure](./byos.md) documentation for more details. It's important that you're running the latest version of the Azure CLI.
- 1. Launch an Azure CLI session and authenticate with your Azure account. Refer to [Signing in with Azure CLI](/cli/azure/authenticate-azure-cli) for assistance. Make sure you're running the latest Azure CLI version before moving on.
- 2. Verify the RHEL BYOS plans are available in your subscription by running the following CLI command. If you don't get any results here, refer to step #2. Ensure that your Azure subscription is activated with entitlement for JBoss EAP on RHEL BYOS plans.
-
- ```cli
- az vm image list --offer rhel-byos --all #use this command to verify the availability of RHEL BYOS images
- ```
-
- 3. Run the following command to accept the Azure Marketplace T&C as required for the JBoss EAP on RHEL BYOS plan.
-
- ```cli
- az vm image terms accept --publisher redhat --offer jboss-eap-rhel --plan $PLANID
- ```
-
- 4. Where `$PLANID` is one of the following (repeat step #3 for each Marketplace offer plan you wish to use):
-
- ```cli
- jboss-eap-73-byos-rhel-80-byos
- jboss-eap-73-byos-rhel-8-byos-clusteredvm
- jboss-eap-73-byos-rhel-80-byos-vmss
- jboss-eap-73-byos-rhel-80-payg
- jboss-eap-73-byos-rhel-8-payg-clusteredvm
- jboss-eap-73-byos-rhel-80-payg-vmss
- ```
-
-4. Your subscription is now ready to deploy EAP on RHEL BYOS plans. During deployment, your subscription(s) will be automatically attached using the `subscription-manager` with the credentials supplied during deployment.
-
-## Using JBoss EAP with BYOS model
-
-JBoss EAP is available on Azure through BYOS model only. When deploying your JBoss EAP on RHEL plan, you need to supply your RHSM credentials along with RHSM Pool ID with valid EAP entitlements. If you don't have EAP entitlement, obtain a [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). Once registered, you can find the necessary credentials (Pool IDs) in the Subscription navigation menu. During deployment, your subscription(s) will be automatically attached using the `subscription-manager` with the credentials supplied during deployment.
-
-## Template solution architectures
-
-These offer plans create all the Azure compute resources to run JBoss EAP setup on RHEL. The following resources are created by the template solution:
--- RHEL VM-- One Load Balancer (for clustered configuration)-- Virtual Network with a single subnet-- JBoss EAP setup on a RHEL VM/virtual machine scale sets (stand-alone or clustered)-- Sample Java application-- Storage Account!-
-## After a successful deployment
-
-1. [Create a Jump VM with VNet Peering](../../windows/quick-create-portal.md#create-virtual-machine) in a different Virtual Network and access the server and expose the application using [Virtual Network Peering](../../../virtual-network/tutorial-connect-virtual-networks-portal.md#peer-virtual-networks).
-2. [Create a Public IP](../../../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address) to access the server and the application.
-3. [Create a Jump VM in the same Virtual Network (VNet)](../../windows/quick-create-portal.md#create-virtual-machine) in a different subnet (new subnet) in the same VNet and access the server via a Jump VM. The Jump VM can be used to expose the application.
-4. Expose the application using an [Application Gateway](../../../application-gateway/quick-create-portal.md#create-an-application-gateway).
-5. Expose the application using an External Load Balancer (ELB).
-6. [Use Azure Bastion](../../../bastion/bastion-overview.md) to access your RHEL VMs using your browser and the Azure portal.
-
-### 1. Create a Jump VM in a different Virtual Network and access the RHEL VM using Virtual Network Peering (recommended method)
-
-1. [Create a Windows Virtual Machine](../../windows/quick-create-portal.md#create-virtual-machine) - in a new Azure Resource Group, create a Windows VM, which MUST be in the same region as RHEL VM. Provide the required details and leave other configurations as default as it will create the Jump VM in a new Virtual Network.
-2. [Peer the Virtual Networks](../../../virtual-network/tutorial-connect-virtual-networks-portal.md#peer-virtual-networks) - Peering is how you associate the RHEL VM with the Jump VM. Once the Virtual Network peering is successful, both the VMs can communicate with each other.
-3. Go to the Jump VM details page and copy the Public IP. Log into the Jump VM using the Public IP.
-4. Copy the Private IP of RHEL VM from the output page and use it to log into the RHEL VM from the Jump VM.
-5. Paste the app URL that you copied from the output page to a browser inside the Jump VM. View the JBoss EAP on Azure web page from this browser.
-6. Access the JBoss EAP Admin Console - paste the Admin Console URL copied from the output page in a browser inside the Jump VM, enter the JBoss EAP username and password to log in.
-
-### 2. Create a Public IP to access the RHEL VM and JBoss EAP Admin Console.
-
-1. The RHEL VM you created don't have a Public IP associated with it. You can [create a Public IP](../../../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address) for accessing the VM and [associate the Public IP to the VM](../../../virtual-network/ip-services/associate-public-ip-address-vm.md). Creating a Public IP can be done using Azure portal or [Azure PowerShell](/powershell/) commands or [Azure CLI](/cli/azure/install-azure-cli) commands.
-2. Obtain the Public IP of a VM - go to the VM details page and copy the Public IP. Use Public IP to access the VM and JBoss EAP Admin Console.
-3. View the JBoss EAP on Azure web page - open a web browser and go to *http://<PUBLIC_HOSTNAME>:8080/* and you should see the default EAP welcome page.
-4. Log into the JBoss EAP Admin Console - open a web browser and go to *http://<PUBLIC_HOSTNAME>:9990*. Enter the JBoss EAP username and password to log in.
-
-### 3. Create a Jump VM in a different subnet (new subnet) in the same VNet and access the RHEL VM via a Jump VM.
-
-1. [Add a new subnet](../../../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) in the existing Virtual Network, which contains the RHEL VM.
-2. [Create a Windows Virtual Machine](../../windows/quick-create-portal.md#create-virtual-machine) in Azure in the same Resource Group (RG) as the RHEL VM. Provide the required details and leave other configurations as default except for the VNet and subnet. Make sure you select the existing VNet in the RG and select the subnet you created in the step above as it will be your Jump VM.
-3. Access Jump VM Public IP - once successfully deployed, go to the VM details page and copy the Public IP. Log into the Jump VM using the Public IP.
-4. Log into RHEL VM - copy the Private IP of RHEL VM from the output page and use it to log into the RHEL VM from the Jump VM.
-5. Access the JBoss EAP welcome page - in your Jump VM, open a browser and paste the app URL that you copied from the output page of the deployment.
-6. Access the JBoss EAP Admin Console - paste the Admin Console URL that you copied from the output page in a browser inside the Jump VM to access the JBoss EAP Admin Console and enter the JBoss EAP username and password to log in.
-
-### 4. Expose the application using an External Load Balancer
-
-1. [Create an Application Gateway](../../../application-gateway/quick-create-portal.md#create-an-application-gateway) - to access the ports of the RHEL VM, create an Application Gateway in a different subnet. The subnet must only contain the Application Gateway.
-2. Set *Frontends* parameters - make sure you select Public IP or both and provide the required details. Under *Backends* section, select **Add a backend pool** option and add the RHEL VM to the backend pool of the Application Gateway.
-3. Set access ports - under *Configuration* section add routing rules to access the ports 8080 and 9990 of the RHEL VM.
-4. Copy Public IP of Application Gateway - once the Application Gateway is created with the required configurations, go to the overview page and copy the Public IP of the Application Gateway.
-5. To view the JBoss EAP on Azure web page - open a web browser then go to *http://<PUBLIC_IP_AppGateway>:8080* and you should see the default EAP welcome page.
-6. To log into the JBoss EAP Admin Console - open a web browser then go to *http://<PUBLIC_IP_AppGateway>:9990*. Enter the JBoss EAP username and password to log in.
-
-### 5. Use an External Load Balancer (ELB) to access your RHEL VM/virtual machine scale sets
-
-1. [Create a Load Balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md#create-load-balancer) to access the ports of the RHEL VM. Provide the required details to deploy the external Load Balancer and leave other configurations as default. Leave the SKU as Basic for the ELB configuration.
-2. Add Load Balancer rules - once the Load balancer has been created successfully, [create Load Balancer resources](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md#create-load-balancer), then add Load Balancer rules to access ports 8080 and 9990 of the RHEL VM.
-3. Add the RHEL VM to the backend pool of the Load Balancer - click on *Backend pools* under settings section and then select the backend pool you created in the step above. Select the VM corresponding to the option *Associated to* and then add the RHEL VM.
-4. To obtain the Public IP of the Load Balancer - go to the Load Balancer overview page and copy the Public IP of the Load Balancer.
-5. To view the JBoss EAP on Azure web page - open a web browser then go to *http://<PUBLIC_IP_LoadBalancer>:8080/* and you should see the default EAP welcome page.
-6. To log into the JBoss EAP Admin Console - open a web browser then go to *http://<PUBLIC_IP_LoadBalancer>:9990*. Enter the JBoss EAP username and password to log in.
-
-### 6. Use Azure Bastion to connect
-
-1. Create an Azure Bastion host for your VNet in which your RHEL VM is located. The Bastion service will automatically connect to your RHEL VM using SSH.
-2. Create your Reader roles on the VM, NIC with private IP of the VM, and Azure Bastion resource.
-3. The RHEL inbound port is open for SSH (22).
-4. Connect using your username and password in the Azure portal. Click on "Connect" and select "Bastion" from the dropdown. Then select "Connect" to connect to your RHEL VM. You can connect using a private key file or private key stored in Azure Key Vault.
-
-## Azure platform
--- **Validation Failure** - Your deployment won't start if the parameter criteria aren't fulfilled (for example, the admin password criteria weren't met) or if any mandatory parameters aren't provided in the parameters section. You can review the details of parameters before clicking on *Create*.-- **Resource Provisioning Failure** - Once the deployment starts the resources being deployed will be visible on the deployment page. If there's a deployment failure after the parameter validation process, a more detailed failure message is available. -- **VM Provisioning Failure** - If your deployment fails at the **VM Custom Script Extension** resource point, a more detailed failure message is available in the VM log file. Refer to the next section for further troubleshooting.-
-## Troubleshooting EAP deployment extension
-
-These offers use VM Custom Script Extensions to deploy JBoss EAP and configure the JBoss EAP management user. Your deployment can fail at here because of several reasons, such as:
--- Invalid RHSM or EAP entitlement-- Invalid JBoss EAP or RHEL OS entitlement Pool ID-- Azure Marketplace T&C not accepted for RHEL VMs-
-Follow the steps below to troubleshoot the issue further:
-
-1. Log in to the provisioned virtual machine through SSH. You can retrieve the Public IP of the VM from the Azure portal VM overview page.
-1. Switch to root user
-
- ```cli
- sudo su -
- ```
-
-1. Enter the VM admin password if prompted.
-1. Change directory to logging directory
-
- ```cli
- cd /var/lib/waagent/custom-script/download/0 #for EAP on RHEL stand-alone VM
- ```
-
- ```cli
- cd /var/lib/waagent/custom-script/download/1 #for EAP on RHEL clustered VM
- ```
-
-1. Review the logs in eap.log log file.
-
- ```cli
- more eap.log
- ```
-
-The log file will have details that include deployment failure reason and possible solutions. If your deployment failed due to RHSM account or entitlements, refer to 'Support and Subscription Notes' section to complete the prerequisites. Then try again. When deploying EAP on RHEL clustered plan, make sure that the deployment doesn't hit the quota limit. It's important to check your regional vCPU and VM series vCPU quotas before you provide the instance count for deployment. If your subscription or region doesn't have enough quota limit [request for quota](../../../azure-portal/supportability/regional-quota-requests.md) from your Azure portal.
-
-Refer to [Using the Azure Custom Script Extension Version 2 with Linux VMs](../../extensions/custom-script-linux.md) for more details on troubleshooting VM custom script extensions.
-
-## Resource links and support
-
-For any support-related questions, issues or customization requirements, contact [Red Hat Support](https://access.redhat.com/support) or [Microsoft Azure Support](https://portal.azure.com/?quickstart=true#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-* Learn more about [JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/)
-* [JBoss EAP on Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)
-* [JBoss EAP in Azure App Service](/azure/developer/java/ee/jboss-on-azure)
-* [Azure Hybrid Benefits](../../windows/hybrid-use-benefit-licensing.md)
-* [Red Hat Subscription Management](https://access.redhat.com/products/red-hat-subscription-management)
-* [Red Hat on Azure overview](./overview.md)
-
-## Next steps
-
-* Deploy JBoss EAP on RHEL VM/Virtual Machine Scale Sets from [Azure Marketplace](https://aka.ms/AMP-JBoss-EAP)
-* Deploy JBoss EAP on RHEL VM/Virtual Machine Scale Sets from [Azure Quickstart](https://aka.ms/Quickstart-JBoss-EAP)
-* Configuring a Java app for [Azure App Service](../../../app-service/configure-language-java.md)
-* How to deploy [JBoss EAP onto Azure App Service](https://github.com/JasonFreeberg/jboss-on-app-service) tutorial
-* Use Azure [App Service Migration Assistance](https://azure.microsoft.com/services/app-service/migration-assistant/)
-* Use Red Hat [Migration Toolkit for Applications](https://developers.redhat.com/products/mta)
virtual-machines Jboss Eap On Azure Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-best-practices.md
- Title: Red Hat JBoss EAP on Azure Best Practices
-description: The guide provides information on the best practices for using Red Hat JBoss Enterprise Application Platform in Microsoft Azure.
------ Previously updated : 06/08/2021--
-# Red Hat JBoss EAP on Azure best practices
-
-The Red Hat JBoss EAP on Azure Best Practices guide for using Red Hat JBoss Enterprise Application Platform (EAP) on Microsoft Azure. JBoss EAP can be used with the Microsoft Azure platform, as long as you use it within the specific supported configurations for running JBoss EAP in Azure. If you're manually configuring a clustered JBoss EAP environment, apply the specific configurations necessary to use JBoss EAP clustering features in Azure. This guide details the supported configurations of using JBoss EAP in Microsoft Azure.
-
-JBoss EAP is a Jakarta Enterprise Edition (EE) 8 compatible implementation for both the Web Profile and Full Platform specifications. It's also a certified implementation of the Java EE 8 specification. Major versions of JBoss EAP are forked from the WildFly community project at certain points when the community project has reached the desired feature completeness level. After that point, an extended period of testing and productization takes place in which JBoss EAP is stabilized, certified, and enhanced for production use. During the lifetime of a JBoss EAP major version, selected features may be cherry-picked and back-ported from the community project. Then these features are made available in a series of feature-enhancing minor releases within the same major version family.
-
-## Supported and unsupported configurations of JBoss EAP on Azure
-
-See the [JBoss EAP Supported Configurations](https://access.redhat.com/articles/2026253) documentation for details on Operating Systems (OS), Java platforms, and other supported platforms on which EAP can be used.
-
-The Red Hat Cloud Access program allows you to use a JBoss EAP subscription to install JBoss EAP on your Azure virtual machines, which run On-Demand Pay-As-You-Go (PAYG) operating system images from the Microsoft Azure Marketplace. The virtual machine operating system subscription, in this case Red Hat Enterprise Linux (RHEL), is separate from a JBoss EAP subscription. Red Hat Cloud Access is a Red Hat subscription feature that provides support for JBoss EAP on Red Hat certified cloud infrastructure providers, such as Microsoft Azure. Red Hat Cloud Access allows you to move your subscriptions between traditional on-premises servers and public cloud-based resources in a simple and cost-effective manner.
-
-You can find more information about [Red Hat Cloud Access on the Customer Portal](https://www.redhat.com/en/technologies/cloud-computing/cloud-access). As a reminder, you don't need Red Hat Cloud Access for any PAYG offers on Azure Marketplace.
-
-Every Red Hat JBoss EAP release is tested and supported on various market-leading operating system, Java Virtual Machine (JVM), and database combinations. Red Hat provides both production and development support for supported configurations and tested integrations according to your subscription agreement. Support applies to both physical and virtual environments. Other than the above operating system restrictions, check [supported configurations for JBoss EAP](https://access.redhat.com/articles/2026253), such as supported Java Development Kit (JDK) vendors and versions. It gives you information on supported configurations of various JBoss EAP versions like 7.0, 7.1, 7.2, and 7.3. Check the [Product/Configuration Matrix for Microsoft Azure](https://access.redhat.com/articles/product-configuration-for-azure) for supported RHEL operating systems, VM minimum capacity requirements, and information about other supported Red Hat products. Check [Certified Cloud Provider/Microsoft Azure](https://access.redhat.com/ecosystem/cloud-provider/2068823) for Red Hat software products certified to operate in Microsoft Azure.
-
-There are some unsupported features when using JBoss EAP in a Microsoft Azure environment, which include:
-
-- **Managed Domains** - Allows for the management of multiple JBoss EAP instances from a single control point. JBoss EAP managed domains aren't supported in Microsoft Azure today. Only stand-alone JBoss EAP server instances are supported. Configuring JBoss EAP clusters using stand-alone JBoss EAP servers is supported in Azure.
-
-- **ActiveMQ Artemis High Availability (HA) Using a Shared Store** - JBoss EAP messaging HA using Artemis shared stores isn't supported in Microsoft Azure.
-
-- **`mod_cluster` Advertising** - You can't use JBoss EAP as an Undertow `mod_cluster` proxy load balancer; the `mod_cluster` advertisement functionality is unsupported because of Azure User Datagram Protocol (UDP) multicast limitations.
-
-## Other features of JBoss EAP on Azure
-
-JBoss EAP provides pre-configured options for features such as HA clustering, messaging, and distributed caching. It also enables users to write, deploy, and run applications using the various APIs and services that JBoss EAP provides.
-
-- **Jakarta EE Compatible** - Jakarta EE 8 compatible for both the Web Profile and Full Platform specifications.
-
-- **Java EE Compliant** - Java EE 8 certified for both the Web Profile and Full Platform specifications.
-
-- **Management Console and Management CLI** - Management interfaces for stand-alone servers. The management CLI also includes a batch mode that can script and automate management tasks. Directly editing the JBoss EAP XML configuration files isn't recommended.
-
-- **Simplified Directory Layout** - The modules directory contains all application server modules. The stand-alone directories contain the artifacts and configuration files for stand-alone deployments.
-
-- **Modular Class-Loading Mechanism** - Modules are loaded and unloaded on demand. The modular class-loading mechanism improves performance, has security benefits, and reduces start-up and restart times.
-
-- **Streamlined DataSource Management** - Database drivers are deployed like other services. In addition, DataSources are created and managed using the management console and management CLI.
-
-- **Unified Security Framework** - Elytron provides a single unified framework that can manage and configure access for stand-alone servers. It can also be used to configure security access for applications deployed to JBoss EAP servers.
-
-## Creating your Microsoft Azure environment
-
-Create the VMs that will host your JBoss EAP instances in your Microsoft Azure environment. Use an Azure VM size of Standard_A2 or higher. You can use either the Azure On-Demand PAYG premium images to create your VMs or manually create your own VMs. For example, you can deploy RHEL VMs as follows:
-
-* Using the On-Demand Marketplace RHEL image in Azure - There are several offers in Azure Marketplace from which you can select the RHEL VM on which to set up JBoss EAP. Visit [deploy RHEL 8 VM from Azure Marketplace](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/assembly_deploying-a-rhel-image-as-a-virtual-machine-on-microsoft-azure_cloud-content). You have two options for choosing the RHEL OS licensing in the Azure Marketplace: either PAYG or Bring-Your-Own-Subscription (BYOS) via the Red Hat Gold Image model. Note that if you've deployed the RHEL VM using a PAYG plan, only your JBoss EAP subscription details are used to subscribe the resulting deployment to a Red Hat subscription. A minimal Azure CLI sketch for this path follows this list.
-
-* [Manually Creating and Provisioning a RHEL image for Azure](https://access.redhat.com/articles/uploading-rhel-image-to-azure). Use the latest minor version of each major version of RHEL.
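A minimal Azure CLI sketch for the Marketplace path, assuming a hypothetical resource group and VM name; the image URN and VM size are examples only, so list the available RHEL images first and pick one that matches your plan.

```cli
# List RHEL images published by Red Hat (can take a while with --all)
az vm image list --publisher RedHat --offer RHEL --all --output table

# Create a resource group and a RHEL VM (names, region, size, and image URN are placeholders)
az group create --name eap-demo-rg --location eastus
az vm create \
  --resource-group eap-demo-rg \
  --name eap-rhel-vm \
  --image RedHat:RHEL:8_4:latest \
  --size Standard_DS2_v2 \
  --admin-username azureuser \
  --generate-ssh-keys
```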
-
-For Microsoft Windows Server VM, see the [Microsoft Azure documentation](../../windows/overview.md) on creating a Windows Server VM in Azure.
-
-## JBoss EAP installation
-
-> [!NOTE]
-> If you're using the JBoss EAP on RHEL offer through Azure Marketplace, JBoss EAP is automatically installed and configured for the Azure environment.
-
-The following steps apply if you're manually deploying EAP to a RHEL VM on Microsoft Azure.
-
-Once your VM is set up, you can install JBoss EAP. You need access to the [Red Hat Customer Portal](https://access.redhat.com), which is the centralized platform for the Red Hat Knowledge Base (KB) and subscription resources. If you don't have an EAP subscription, sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). Once registered, look for the credentials (Pool IDs) in the Subscription section of the [Red Hat Customer Portal](https://access.redhat.com/management/). Note that this subscription isn't intended for production use.
-
-We've used the variable *EAP_HOME* to denote the path to the JBoss EAP installation. Replace this variable with the actual path to your JBoss EAP installation.
-
-> [!IMPORTANT]
-> There are several different ways to install JBoss EAP. Each method is best used in certain situations. If you're using a RHEL On-Demand VM from the Microsoft Azure Marketplace, install JBoss EAP using the ZIP or installer methods. **Do not register a RHEL On-Demand virtual machine to Red Hat Subscription Management (RHSM), as you'll be billed twice for that virtual machine since it uses the PAYG billing method.**
-
-* **ZIP Installation** - The ZIP archive is suitable for installation on all supported operating systems. The ZIP installation method should be used if you wish to extract the instance manually. The ZIP installation provides a default installation of JBoss EAP, and all configuration must be done following installation. If you plan to use the JBoss Operations Network (ON) server to deploy and install JBoss EAP patches, the target JBoss EAP instances should be installed using the ZIP installation method. Check the [Zip Installation](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/installation_guide/index#zip_installation) documentation for more details.
-
-* **JAR Installer** - The JAR installer can either be run in a console or as a graphical wizard. Both options provide step-by-step instructions for installing and configuring the server instance. The JAR installer is the preferred method to install JBoss EAP on all supported platforms. For more information, check [JAR Installer Installation](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/installation_guide/index#installer_installation).
-
-* **RPM Installation** - JBoss EAP can be installed using RPM packages on supported installations of RHEL 6, RHEL 7, and RHEL 8. The RPM installation method is best suited when you're planning to automate the installation of EAP on your RHEL VM on Azure. RPM installation of JBoss EAP installs everything that is required to run JBoss EAP as a service. For more information, check [RPM Installation](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html-single/installation_guide/index#rpm_installation). A hedged command sketch for this method follows this list.
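For the RPM method on a BYOS or manually provisioned RHEL 8 VM (not a PAYG Marketplace VM, per the warning above), the flow is roughly the following; the pool ID placeholder and the exact repository and group names are assumptions to verify against the linked Red Hat documentation for your EAP and RHEL versions.

```cli
# Register the system and attach the JBoss EAP entitlement (BYOS or manually provisioned RHEL only)
sudo subscription-manager register
sudo subscription-manager attach --pool=<EAP_POOL_ID>

# Enable the EAP repository and install the RPM group (names assumed for EAP 7.3 on RHEL 8)
sudo subscription-manager repos --enable=jb-eap-7.3-for-rhel-8-x86_64-rpms
sudo yum groupinstall -y jboss-eap7
```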
-
-## Other offers by Azure and Red Hat
-
-Red Hat and Microsoft have partnered to bring a set of Azure solution templates to the Azure Marketplace to provide a solid starting point for migrating to Azure. Consult the documentation for the list of offers and choose a suitable environment.
-
-### Azure Marketplace Offers
-
-You can access these offers from the [Azure Marketplace](https://aka.ms/AMP-JBoss-EAP).
-
-This Marketplace offer includes various combinations of JBoss EAP and RHEL versions with flexible support subscription models. JBoss EAP is always BYOS but for RHEL OS you can choose between BYOS or PAYG. The Azure Marketplace offer includes plan options for JBoss EAP on RHEL as stand-alone VMs, clustered VMs, and clustered virtual machine scale sets.
-
-The six plans include:
-
-- JBoss EAP 7.3 on RHEL 8.3 Stand-alone VM (PAYG)
-- JBoss EAP 7.3 on RHEL 8.3 Stand-alone VM (BYOS)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered VM (PAYG)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered VM (BYOS)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered virtual machine scale sets (PAYG)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered virtual machine scale sets (BYOS)
-
-### Azure Quickstart Templates
-
-Along with Azure Marketplace offers, there are Quickstart templates made available for you to test drive EAP on Azure. These Quickstarts include pre-built ARM templates and scripts to deploy JBoss EAP on Azure in various configurations and version combinations.
-
-Solution architecture includes:
-
-* JBoss EAP on RHEL Stand-alone VM
-* JBoss EAP on RHEL Clustered VMs
-* JBoss EAP on RHEL Clustered virtual machine scale sets
-
- :::image type="content" source="./media/red-hat-marketplace-image.png" alt-text="Image shows the Red Hat offers available through the Azure Marketplace.":::
-
-You can choose between RHEL 7.7 and 8.0 and between JBoss EAP 7.2 and 7.3. You can select one of the following combinations for deployment:
-
-- JBoss EAP 7.2 on RHEL 7.7
-- JBoss EAP 7.2 on RHEL 8.0
-- JBoss EAP 7.3 on RHEL 8.0
-
-To get started, select a Quickstart template with a matching JBoss EAP on RHEL combination that meets your deployment goal. Following is the list of available Quickstart templates.
-
-* [JBoss EAP on RHEL Stand-alone VM](https://azure.microsoft.com/resources/templates/jboss-eap-standalone-rhel/) - The Azure template deploys a web application named JBoss-EAP on Azure on JBoss EAP (version 7.2 or 7.3) running on RHEL (version 7.7 or 8.0) VM.
-
-* [JBoss EAP on RHEL Clustered VMs](https://azure.microsoft.com/resources/templates/jboss-eap-clustered-multivm-rhel/) - The Azure template deploys a web application called eap-session-replication on a JBoss EAP cluster running on 'n' RHEL VMs, where 'n' is the number of VMs you set at the beginning. All the VMs are added to the backend pool of a load balancer.
-
-* [JBoss EAP on RHEL Clustered Virtual Machine Scale Sets](https://azure.microsoft.com/resources/templates/jboss-eap-clustered-vmss-rhel/) - The Azure template deploys a web application called eap-session-replication on JBoss EAP 7.2 or 7.3 cluster running on RHEL 7.7 or 8.0 virtual machine scale sets instances.
-
-## Configuring JBoss EAP to work on cloud platforms
-
-Once you install JBoss EAP in your VM, you can configure JBoss EAP to run as a service. Configuring JBoss EAP to run as a service depends on the JBoss EAP installation method and the type of VM OS. Note that RPM installation of JBoss EAP installs everything that is required to run JBoss EAP as a service. For more information, check [Configuring JBoss EAP to run as a service](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/installation_guide/index#configuring_jboss_eap_to_run_as_a_service).
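For an RPM installation, a minimal sketch of enabling the provided service unit looks like this; the unit name matches the one used later in this article, and ZIP or installer installations need the service scripts set up first as described in the linked guide.

```cli
# Enable and start JBoss EAP as a service (RPM installation, stand-alone mode)
sudo systemctl enable eap7-standalone.service
sudo systemctl start eap7-standalone.service
sudo systemctl status eap7-standalone.service
```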
-
-### Starting and stopping JBoss EAP
-
-#### Starting JBoss EAP
-
-JBoss EAP is supported on RHEL and Windows Server, and runs only in a stand-alone server operating mode. The specific command to start JBoss EAP depends on the underlying platform. Servers are initially started in a suspended state and won't accept any requests until all required services have started. At that point, the servers are placed into a normal running state and can start accepting requests. Following are the commands to start JBoss EAP as a stand-alone server:
-- Command to start JBoss EAP (installed via the ZIP or installer method) as a stand-alone server on a RHEL VM:
- ```
- $EAP_HOME/bin/standalone.sh
- ```
-- For Windows Server, use the `EAP_HOME\bin\standalone.bat` script to start JBoss EAP as a stand-alone server.
-- Starting JBoss EAP is different for an RPM installation compared to a ZIP or JAR installer installation. For example, for RHEL 7 and later, use the following command:
- ```
- systemctl start eap7-standalone.service
- ```
-The startup script used to start JBoss EAP (installed via the ZIP or installer method) uses the `EAP_HOME/bin/standalone.conf` file, or `standalone.conf.bat` for Windows Server, to set some default preferences, such as JVM options. Customize the settings in this file. JBoss EAP uses the `standalone.xml` configuration file by default, but it can be started using a different one. To change the default configuration file used for starting JBoss EAP installed via the RPM method, use `/etc/opt/rh/eap7/wildfly/eap7-standalone.conf`. Use the same eap7-standalone.conf file to make other configuration changes, such as the WildFly bind address.
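A small sketch of the kind of JVM settings you might place in `EAP_HOME/bin/standalone.conf`; the heap and metaspace values are examples only and should be sized to your VM.

```cli
# Excerpt from EAP_HOME/bin/standalone.conf (example values only)
JAVA_OPTS="-Xms1g -Xmx2g -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m"
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
```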
-
-For details on the available stand-alone configuration files and how to use them, check the [Stand-alone Server Configuration Files](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/configuration_guide/index#standalone_server_configuration_files).
-
-To start JBoss EAP with a different configuration, use the `--server-config` argument. For a complete listing of all available startup script arguments and their purposes, use the `--help` argument or check the [Server Runtime Arguments](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html-single/configuration_guide/index#reference_of_switches_and_arguments_to_pass_at_server_runtime).
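For example, to start a ZIP or installer installation with the HA configuration file that ships with JBoss EAP, or to list all startup arguments:

```cli
# Start with an alternate shipped configuration file
$EAP_HOME/bin/standalone.sh --server-config=standalone-ha.xml

# List all available startup arguments
$EAP_HOME/bin/standalone.sh --help
```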
-
-#### Stopping JBoss EAP
-
-The way that you stop JBoss EAP depends on how it was started. Press `Ctrl+C` in the terminal where JBoss EAP was started to stop an interactive instance of JBoss EAP. To stop the background instance of JBoss EAP, use the management CLI to connect to the running instance and shut down the server. For more details, check [Stopping JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/configuration_guide/index#stopping_jboss_eap).
-
-Stopping JBoss EAP is different for an RPM installation compared to a ZIP or installer installation. The command for stopping an RPM installation of JBoss EAP depends on which operating mode was started and which RHEL version you're running. Stand-alone mode is the only mode supported in Azure.
--- For example, for RHEL 7 and later, use the following command:
- ```
- systemctl stop eap7-standalone.service
- ```
-JBoss EAP can also be suspended or shut down gracefully, allowing active requests to complete normally without accepting any new requests. Once the server has been suspended, it can be shut down, returned to a running state, or left in a suspended state to do maintenance. While the server is suspended, management requests are still processed. The server can be suspended and resumed using the management console or the management CLI.
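A minimal management CLI sketch for these operations on a stand-alone server; it assumes the CLI can connect to the default local management interface.

```cli
# Gracefully shut down a running stand-alone server
$EAP_HOME/bin/jboss-cli.sh --connect command=:shutdown

# Suspend (stop accepting new requests) and later resume the server
$EAP_HOME/bin/jboss-cli.sh --connect command=:suspend
$EAP_HOME/bin/jboss-cli.sh --connect command=:resume
```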
-
-### Configuring JBoss EAP subsystems to work on cloud platforms
-
-Many of the APIs and capabilities that are exposed to applications deployed to JBoss EAP are organized into subsystems. These subsystems can be configured by administrators to provide different behavior, depending on the goal of the application. For more details on the subsystems, check [JBoss EAP Subsystems](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/configuration_guide/index#jboss_eap_subsystems).
-
-Some JBoss EAP subsystems require configuration to work properly on cloud platforms. Configuration is required because a JBoss EAP server is bound to a cloud VM's private IP address. The private IP address is only visible from within the cloud platform. For certain subsystems, the private IP address needs to be mapped to the server's public IP address, which is visible from outside the cloud. For more details on how to modify these subsystems, check [Configuring JBoss EAP Subsystems to Work on Cloud Platforms](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/using_jboss_eap_in_microsoft_azure/index#configuring_subsystems_for_cloud_platforms).
-
-## Using JBoss EAP high availability in Microsoft Azure
-
-Azure doesn't support JGroups discovery protocols that are based on UDP multicast. JGroups uses the UDP stack by default, so make sure you change it to TCP, because Azure doesn't support UDP multicast. Although you can use other JGroups discovery protocols like TCPPING or JDBC_PING, we recommend the shared file discovery protocol developed for Azure, which is *AZURE_PING*.
-
-*AZURE_PING* uses a common blob container in a Microsoft Azure storage account. If you don't already have a blob container that AZURE_PING can use, create one that your VM can access. For more information, check [Configuring JBoss EAP High Availability in Microsoft Azure](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/using_jboss_eap_in_microsoft_azure/index#using_jboss_eap_high_availability_in_microsoft_azure).
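A minimal Azure CLI sketch for creating the storage account and blob container that AZURE_PING can use; the resource group, account, and container names are placeholders, and the storage account name must be globally unique.

```cli
# Create a resource group, storage account, and blob container for AZURE_PING (placeholder names)
az group create --name eap-ha-rg --location eastus
az storage account create --name <uniquestorageaccount> --resource-group eap-ha-rg --sku Standard_LRS
az storage container create --name jgroupsping --account-name <uniquestorageaccount>
```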
-
-Configure JBoss EAP in a load-balancing environment. Ensure that all balancers and workers are bound to accessible IP addresses in your internal Microsoft Azure Virtual Network (VNet). For more details on the load-balancing configuration, check [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491).
-
-## Other best practices
-
-- As an administrator of a JBoss EAP setup on a VM, ensuring that your VM is secure is important. It significantly lowers the risk of your guest and host OSs being infected by malicious software, reduces the attack surface of JBoss EAP, and helps prevent malfunctioning of applications hosted on JBoss EAP. Control access to the Azure VMs using features like [Azure Policy](https://azure.microsoft.com/services/azure-policy/) and [Azure built-in roles](../../../role-based-access-control/built-in-roles.md) in [Azure role-based access control (RBAC)](../../../role-based-access-control/overview.md). Protect your VM against malware by installing Microsoft Antimalware or a Microsoft partner's endpoint protection solution, and integrate your antimalware solution with [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) to monitor the status of your protection. On RHEL VMs, you can also harden the OS by blocking port forwarding and disabling root login in the relevant OS configuration files.
-
-- Use environment variables to make your experience easy and smooth with JBoss EAP on Azure VMs. For example, you can use EAP_HOME to denote the path to the JBoss EAP installation, which is used several times; in such cases, environment variables come in handy. Environment variables are also a common means of configuring services and handling web application secrets. When an environment variable is set from the shell using the export command, it ceases to exist when the user's session ends, which is a problem when the variable needs to persist across sessions. To make a variable persistent for a user's environment, export it from the user's profile script: add the export command for every environment variable you want to persist to the bash_profile. If you want to set a permanent global environment variable for all users who have access to the VM, add it to the default profile. It's recommended to store global environment variables in the `/etc/profile.d` directory, which contains the files used to set environment variables for the entire system (a minimal sketch follows this list). On Windows Server, using the set command in a command prompt won't permanently set the environment variable; use either the *setx* command or the System interface in Control Panel.
-
-- Manage your VM updates and upgrades. Use the [Update Management](../../../automation/update-management/overview.md) solution in Azure Automation to manage operating system updates for your Windows and Linux computers that are deployed in Azure. Quickly assess the status of available updates on all agent computers and manage the process of installing required updates for servers. Updating the VM software ensures that important Microsoft Azure patches, hypervisor drivers, and software packages stay current. In-place upgrades are possible for minor releases, for instance from RHEL 6.9 to RHEL 6.10 or from RHEL 7.3 to RHEL 7.4; this type of upgrade can be done by running the *yum update* command. Microsoft Azure doesn't support an in-place upgrade of a major release, for instance from RHEL 6 to RHEL 7.
-
-- Use [Azure Monitor](../../../azure-monitor/data-platform.md) to gain visibility into your resource's health. Azure Monitor features include [Resource Diagnostic Log Files](../../../azure-monitor/essentials/platform-logs-overview.md), which are used to monitor your VM resources and identify potential issues that might compromise performance and availability. The [Azure Diagnostics Extension](../../../azure-monitor/agents/diagnostics-extension-overview.md) can provide monitoring and diagnostics capabilities on Windows VMs; enable these capabilities by including the extension as part of the Azure Resource Manager template. Also enable Boot Diagnostics, which is an important tool when troubleshooting a VM that won't boot; the console output and the boot log can greatly assist Red Hat Technical Support when resolving a boot issue. Enable Boot Diagnostics in the Microsoft Azure portal while creating a VM or on an existing VM. Once it's enabled, you can view the console output for the VM and download the boot log for troubleshooting.
-
-- Another way to ensure secure communication is to use private endpoints in your [Virtual Network (VNet)](../../../virtual-network/virtual-networks-overview.md) and [Virtual Private Networks (VPN)](../../../vpn-gateway/vpn-gateway-about-vpngateways.md). Open networks are accessible to the outside world and as such are susceptible to attacks from malicious users. VNets and VPNs restrict access to selected users. A VNet uses a private IP range to establish isolated communication channels between servers within the same range. Isolated communication allows multiple servers under the same account to exchange information and data without exposure to a public space. Connect to a remote server as if you're doing it locally through a private network. There are different methods, such as using a jump VM/jump box in the same VNet as the application server, or using [Azure Virtual Network Peering](../../../virtual-network/virtual-network-peering-overview.md), [Azure Application Gateway](../../../application-gateway/overview.md), [Azure Bastion](https://azure.microsoft.com/services/azure-bastion), and so on. All these methods enable an entirely secure and private connection and can connect multiple remote servers.
-
-- Use [Azure Network Security Groups (NSG)](../../../virtual-network/network-security-groups-overview.md) to filter network traffic to and from the application server in the Azure VNet. An NSG contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol. Protect your application on JBoss EAP by using these NSG rules to block or allow ports to the internet.
-
-- For better availability of applications running on JBoss EAP on Azure, you can use the HA features available in Azure. HA in Azure can be achieved using Azure resources such as Load Balancer, Application Gateway, or [virtual machine scale sets](../../../virtual-machine-scale-sets/overview.md). These HA methods provide redundancy and improved performance, which allow you to easily do maintenance or update an application instance by distributing the load to another available application instance. To keep up with customer demand, you may need to increase the number of application instances that run your application. Virtual machine scale sets also have an autoscaling feature, which allows your application to automatically scale up or down as demand changes.
-
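As an example of the environment-variable guidance above, a global, persistent `EAP_HOME` can be set from a script under `/etc/profile.d`; the installation path shown is a placeholder for a ZIP installation.

```cli
# Persist EAP_HOME for all users (placeholder path for a ZIP installation)
sudo tee /etc/profile.d/eap.sh > /dev/null <<'EOF'
export EAP_HOME=/opt/jboss-eap-7.3
EOF
sudo chmod 0644 /etc/profile.d/eap.sh
```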
-## Optimizing the JBoss EAP server configuration
-
-Once you've installed the JBoss EAP server and created a management user, you can optimize your server configuration. Make sure you review the information in the [Performance Tuning Guide](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/performance_tuning_guide/index) on how to optimize the server configuration and avoid common problems when deploying applications in a production environment.
-
-## Resource links and support
-
-For any support-related questions, issues or customization requirements, contact [Red Hat Support](https://access.redhat.com/support) or [Microsoft Azure Support](https://portal.azure.com/?quickstart=true#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-* Learn more about [JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/getting_started_with_jboss_eap_for_openshift_online/index)
-* Red Hat Subscription Manager (RHSM) [Cloud Access](https://access.redhat.com/documentation/en/red_hat_subscription_management/1/html-single/red_hat_cloud_access_reference_guide/index)
-* [Azure Red Hat OpenShift (ARO)](https://azure.microsoft.com/services/openshift/)
-* [Red Hat on Azure overview](./overview.md)
-* [RHEL BYOS Gold Images in Azure](./byos.md)
-* JBoss EAP on Azure [Quickstart video tutorial](https://www.youtube.com/watch?v=3DgpVwnQ3V4)
-
-## Next steps
-
-* [Migrate to JBoss EAP on Azure inquiry](https://aka.ms/JavaCloud)
-* Running JBoss EAP in [Azure App Service](/azure/developer/java/ee/jboss-on-azure)
-* Deploy JBoss EAP on RHEL VM/VM Scale Set from [Azure Marketplace](https://aka.ms/AMP-JBoss-EAP)
-* Deploy JBoss EAP on RHEL VM/VM Scale Set from [Azure Quickstart](https://aka.ms/Quickstart-JBoss-EAP)
-* Use Azure [App Service Migration Assistance](https://azure.microsoft.com/services/app-service/migration-assistant/)
-* Use Red Hat [Migration Toolkit for Applications](https://developers.redhat.com/products/mta)
virtual-machines Jboss Eap On Azure Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-migration.md
- Title: JBoss EAP to Azure virtual machines and virtual machine scale sets migration guide
-description: This guide provides information on how to migrate your enterprise Java applications from another application server to JBoss EAP and from traditional on-premises server to Azure RHEL VM and virtual machine scale sets.
- Previously updated : 06/08/2021
-# How to migrate Java applications to JBoss EAP on Azure VMs and virtual machine scale sets
-
-This guide provides information on how to migrate your enterprise Java applications on [Red Hat JBoss Enterprise Application Platform (EAP)](https://www.redhat.com/en/technologies/jboss-middleware/application-platform) from a traditional on-premises server to Azure Red Hat Enterprise Linux (RHEL) Virtual Machines (VM) and virtual machine scale sets if your cloud strategy is to "Lift and Shift" Java applications as-is. However, if you want to "Lift and Optimize" then alternatively you can migrate your containerized applications to [Azure Red Hat OpenShift (ARO)](https://azure.microsoft.com/services/openshift/) with JBoss EAP images from the Red Hat Gallery, or drop your Java app code directly into a JBoss EAP on Azure App Service instance.
-
-## Best practice starting with Azure Marketplace offers and quickstarts
-
-Red Hat and Microsoft have partnered to bring a set of [JBoss EAP on Azure Marketplace offers](https://aka.ms/AMP-JBoss-EAP) to provide a solid starting point for migrating to Azure. Consult the documentation for a list of offers and plans, and select the one that most closely matches your existing deployment. Check out the article on [JBoss EAP on Azure Best Practices](./jboss-eap-on-azure-best-practices.md).
-
-If none of the existing offers is a good starting point, you can manually reproduce the deployment using Azure VM and other available resources. For more information, see [What is IaaS](https://azure.microsoft.com/overview/what-is-iaas/)?
-
-### Azure Marketplace offers
-
-Red Hat in partnership with Microsoft has published the following offerings in Azure Marketplace. You can access these offers from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/) or from the [Azure portal](https://azure.microsoft.com/features/azure-portal/). Check out the article on how to [Deploy Red Hat JBoss EAP on Azure VM and virtual machine scale sets Using the Azure Marketplace Offer](./jboss-eap-marketplace-image.md) for more details.
-
-This Marketplace offer includes various combinations of JBoss EAP and RHEL versions with flexible support subscription models. JBoss EAP is available as Bring-Your-Own-Subscription (BYOS) but for RHEL you can choose between BYOS or Pay-As-You-Go (PAYG).
-The Azure Marketplace offer includes plan options for JBoss EAP on RHEL as stand-alone VMs, clustered VMs, and clustered virtual machine scale sets. The six plans include:
-
-- JBoss EAP 7.3 on RHEL 8.3 Stand-alone VM (PAYG)
-- JBoss EAP 7.3 on RHEL 8.3 Stand-alone VM (BYOS)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered VM (PAYG)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered VM (BYOS)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered virtual machine scale sets (PAYG)
-- JBoss EAP 7.3 on RHEL 8.3 Clustered virtual machine scale sets (BYOS)
-
-### Azure quickstart templates
-
-Along with Azure Marketplace offers, there are Quickstart templates made available for you to test drive EAP on Azure. These Quickstarts include pre-built ARM templates and scripts to deploy JBoss EAP on Azure in various configurations and version combinations. Solution architectures include:
-
-- JBoss EAP on RHEL Stand-alone VM
-- JBoss EAP on RHEL Clustered VMs
-- JBoss EAP on RHEL Clustered virtual machine scale sets
-
-To quickly get started, select one of the Quickstart templates that closely matches your JBoss EAP on RHEL version combination. Check out the [JBoss EAP on Azure Quickstart](./jboss-eap-on-rhel.md) documentation to learn more.
-
-## Prerequisites
-
-* **An Azure Account with an Active Subscription** - If you don't have an Azure subscription, you can activate your [Visual Studio Subscription subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) (former MSDN) or [create an account for free](https://azure.microsoft.com/pricing/free-trial).
-
-- **JBoss EAP installation** - You need to have a Red Hat Account with Red Hat Subscription Management (RHSM) entitlement for JBoss EAP. This entitlement will let you download the Red Hat tested and certified JBoss EAP version. If you don't have EAP entitlement, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). Once registered, you can find the necessary credentials (Pool IDs) at the [Red Hat Customer Portal](https://access.redhat.com/management/).
-
-- **RHEL options** - Choose between Pay-As-You-Go (PAYG) or Bring-Your-Own-Subscription (BYOS). With BYOS, you need to activate your [Red Hat Cloud Access](https://access.redhat.com/) [RHEL Gold Image](https://azure.microsoft.com/updates/red-hat-enterprise-linux-gold-images-now-available-on-azure/) before deploying the Marketplace offer with the solution template. Follow [these instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide) to enable RHEL Gold Images for use on Microsoft Azure.
-
-- **[Azure Command-Line Interface (CLI)](/cli/azure/overview)**.
-
-- **Java source code and a [Java Development Kit (JDK) version](https://www.oracle.com/java/technologies/javase-downloads.html)**.
-
-- **[Java application based on JBoss EAP 7.2](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/development_guide/index)** or **[Java application based on JBoss EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html-single/development_guide/index#get_started_developing_applications)**.
-
-**RHEL options** - Choose between PAYG or BYOS. For BYOS, you will need to activate your [Red Hat Cloud Access](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/index) RHEL Gold Image before you use the Azure Marketplace offer. BYOS offers appear in the Private Offers section of the Azure portal.
-
-**Product versions**
-
-* [JBoss EAP 7.2](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2)
-* [JBoss EAP 7.3](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3)
-* [RHEL 7.7](https://azuremarketplace.microsoft.com/marketplace/apps/RedHat.RedHatEnterpriseLinux77-ARM)
-* [RHEL 8.0](https://azuremarketplace.microsoft.com/marketplace/apps/RedHat.RedHatEnterpriseLinux80-ARM)
-
-## Migration flow and architecture
-
-This section outlines free tools for migrating Java applications from another application server to JBoss EAP and for moving from traditional on-premises servers to the Microsoft Azure cloud environment.
-
-### Red Hat migration toolkit for applications (MTA)
-
-It is recommended that you use the Red Hat MTA for migrating Java applications at the beginning of your planning cycle, before executing any EAP-related migration project. The MTA is an assembly of tools that support large-scale Java application modernization and migration projects across a [broad range of transformations and use cases](https://developers.redhat.com/products/mta/use-cases). It accelerates application code analysis, supports effort estimation, accelerates code migration, and helps you move applications to the cloud and containers.
--
-Red Hat MTA allows you to migrate applications from other application servers to Red Hat JBoss EAP.
-
-## Pre-migration
-
-To ensure a successful migration, before you start, complete the assessment and inventory steps described in the following sections.
-
-### Validate the compatibility
-
-It is recommended that you validate your current deployment model and version before planning for migration. You may have to make significant changes to your application if your current version isn't supported.
-
-The MTA supports migrations from third-party enterprise application servers, such as Oracle WebLogic Server, to JBoss EAP and upgrades to the latest release of JBoss EAP.
-
-The following table describes the most common supported migration paths.
-
-**Table - Supported migration paths: Source to target**
-
-|Source platform ⇒ | JBoss EAP 6 | JBoss EAP 7 | Red Hat OpenShift | OpenJDK 8 & 11 | Apache Camel 3 | Spring Boot on RH Runtimes | Quarkus |
-|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
-| Oracle WebLogic Server | &#x2714; | &#x2714; | &#x2714; | &#x2714; | - | - | - |
-| IBM WebSphere Application Server | &#x2714; | &#x2714; | &#x2714; | &#x2714; | - | - | - |
-| JBoss EAP 4 | &#x2714; | &#x2714; | X<sup>1</sup> | &#x2714; | - | - | - |
-| JBoss EAP 5 | &#x2714; | &#x2714; | &#x2714; | &#x2714; | - | - | - |
-| JBoss EAP 7 | N/A | &#x2714; | &#x2714; | &#x2714; | - | - | - |
-| Oracle JDK | - | - | &#x2714; | &#x2714; | - | - | - |
-| Apache Camel 2 | - | - | &#x2714; | &#x2714; | &#x2714; | - | - |
-| SpringBoot | - | - | &#x2714; | &#x2714; | - | &#x2714; | &#x2714; |
-| Java application | - | - | &#x2714; | &#x2714; | - | - | - |
-
-<sup>1</sup> Although MTA does not currently provide rules for this migration path, Red Hat Consulting can assist with migration from any source platform to JBoss EAP 7.
-
-You can also check on the [system requirements](https://access.redhat.com/documentation/en/migration_toolkit_for_applications/5.0/html-single/introduction_to_the_migration_toolkit_for_applications/index#system_requirements_getting-started-guide) for the MTA.
-
-Check on the [JBoss EAP 7.3 supported configurations](https://access.redhat.com/articles/2026253#EAP_73) and [JBoss EAP 7.2 supported configurations](https://access.redhat.com/articles/2026253#EAP_72) before planning for migration.
-
-To obtain your current Java version, sign in to your server and run the following command:
-
-```
-java -version
-```
-
-### Validate operating mode
-
-JBoss EAP is supported on RHEL, Windows Server, and Oracle Solaris. JBoss EAP runs in either a stand-alone server operating mode for managing discrete instances or managed domain operating mode for managing groups of instances from a single control point.
-
-JBoss EAP managed domains are not supported in Microsoft Azure. Only stand-alone JBoss EAP server instances are supported. Note that configuring JBoss EAP clusters using stand-alone JBoss EAP servers is supported in Azure and this is how the Azure Marketplace offer create your clustered VMs or virtual machine scale sets.
-
-### Inventory server capacity
-
-Document the hardware (memory, CPU, disk, etc.) of the current production server(s) as well as the average and peak request counts and resource utilization. You'll need this information regardless of the migration path you choose. For additional information on the sizes, visit [Sizes for Cloud Services](../../../cloud-services/cloud-services-sizes-specs.md).
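A few standard commands can capture the hardware side of this inventory on a RHEL server; request counts and utilization come from your monitoring tooling.

```cli
# Capture CPU, memory, and disk details on the current production server
lscpu
free -h
df -h
cat /etc/redhat-release
```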
-
-### Inventory all secrets
-
-Before the advent of "configuration as a service" technologies such as [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) or [Azure App Configuration](https://azure.microsoft.com/services/app-configuration/), there wasn't a well-defined concept of "secrets". Instead, you had a disparate set of configuration settings that effectively functioned as what we now call "secrets". With app servers such as JBoss EAP, these secrets are in many different config files and configuration stores. Check all properties and configuration files on the production server(s) for any secrets and passwords. Be sure to check *jboss-web.xml* in your WAR files. Configuration files containing passwords or credentials may also be found inside your application. For additional information on Azure Key Vault, visit [Azure Key Vault basic concepts](../../../key-vault/general/basic-concepts.md).
-
-### Inventory all certificates
-
-Document all the certificates used for public SSL endpoints. You can view all certificates on the production server(s) by running the following command:
-
-```cli
-keytool -list -v -keystore <path to keystore>
-```
-
-### Inventory JNDI resources
--- Inventory all Java Naming and Directory Interface (JNDI) resources. Some, such as Java Message Service (JMS) brokers, may require migration or reconfiguration.-
-### Inside your application
-
-Inspect the WEB-INF/jboss-web.xml and/or WEB-INF/web.xml files.
-
-### Document data sources
-
-If your application uses any databases, you need to capture the following information:
-
-* What is the DataSource name?
-* What is the connection pool configuration?
-* Where can I find the Java Database Connectivity (JDBC) driver JAR file?
-
-For more information, see [About JBoss EAP DataSources](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/datasource_management) in the JBoss EAP documentation.
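Once you know these values, the data source can be recreated on the target JBoss EAP server with the management CLI, as in the hedged sketch below; the data source name, JNDI name, driver name, and connection details are placeholders, and the JDBC driver must already be deployed or registered as a module.

```cli
# Recreate a data source on the target server (placeholder names and URL; driver must already be installed)
$EAP_HOME/bin/jboss-cli.sh --connect --command="data-source add --name=MyAppDS --jndi-name=java:jboss/datasources/MyAppDS --driver-name=postgresql --connection-url=jdbc:postgresql://<db-host>:5432/<db-name> --user-name=<db-user> --password=<db-password>"
```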
-
-### Determine whether and how the file system is used
-
-Any usage of the file system on the application server will require reconfiguration or, in rare cases, architectural changes. File system may be used by JBoss EAP modules or by your application code. You may identify some or all of the scenarios described in the following sections.
-
-**Read-only static content**
-
-If your application currently serves static content, you'll need an alternate location for it. You may wish to consider moving static content to Azure Blob Storage and adding [Azure Content Delivery Network (CDN)](../../../cdn/index.yml) for lightning-fast downloads globally. For more information, see [Static website hosting in Azure Storage](../../../storage/blobs/storage-blob-static-website.md) and [Quickstart: Integrate an Azure storage account with Azure CDN](../../../cdn/cdn-create-a-storage-account-with-cdn.md).
-
-**Dynamically published static content**
-
-If your application allows for static content that is uploaded/produced by your application but is immutable after its creation, you can use [Azure Blob Storage](../../../storage/blobs/index.yml) and Azure CDN as described above, with an [Azure Function](../../../azure-functions/index.yml) to handle uploads and CDN refresh. We've provided a sample implementation for your use at [Uploading and CDN-preloading static content with Azure Functions](https://github.com/Azure-Samples/functions-java-push-static-contents-to-cdn).
-
-**Dynamic or internal content**
-
-For files that are frequently written and read by your application (such as temporary data files), or static files that are visible only to your application, you can mount [Azure Storage](../../../storage/index.yml) shares as persistent volumes. For more information, see [Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service](../../../aks/azure-files-dynamic-pv.md).
-
-### Determine whether a connection to on-premises is needed
-
-If your application needs to access any of your on-premises services, you'll need to provision one of Azure's connectivity services. For more information, see [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/). Alternatively, you'll need to refactor your application to use publicly available APIs that your on-premises resources expose.
-
-### Determine whether JMS queues or topics are in use
-
-If your application is using JMS Queues or Topics, you'll need to migrate them to an externally hosted JMS server. Azure Service Bus and the Advanced Message Queuing Protocol (AMQP) can be a great migration strategy for those using JMS. For more information, visit [Use JMS with Azure Service Bus and AMQP 1.0](../../../service-bus-messaging/service-bus-java-how-to-use-jms-api-amqp.md) or [Send messages to and receive messages from Azure Service Bus queues (Java)](../../../service-bus-messaging/service-bus-java-how-to-use-queues.md)
-
-If JMS persistent stores have been configured, you must capture their configuration and apply it after the migration.
-
-### Determine whether your application is composed of multiple WARs
-
-If your application is composed of multiple WARs, you should treat each of those WARs as separate applications and go through this guide for each of them.
-
-### Determine whether your application is packaged as an EAR
-
-If your application is packaged as an EAR file, be sure to examine the application.xml file and capture the configuration.
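An EAR file is a ZIP archive, so its deployment descriptor can be inspected without unpacking it; the file name below is a placeholder.

```cli
# List the EAR contents and print the application.xml deployment descriptor (placeholder file name)
unzip -l myapp.ear
unzip -p myapp.ear META-INF/application.xml
```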
-
-### Identify all outside processes and daemons running on the production servers
-
-If you have any processes running outside the application server, such as monitoring daemons, you'll need to eliminate them or migrate them elsewhere.
--
-## Migration
-
-### Provision the target infrastructure
-
-In order to start the migration, first you need to deploy the JBoss EAP infrastructure. You have multiple options to deploy:
-
-- [**Azure Virtual Machine**](https://azure.microsoft.com/overview/what-is-a-virtual-machine/)
-- [**Azure Virtual Machine Scale Set**](../../../virtual-machine-scale-sets/overview.md)
-- [**Azure App Service**](/azure/developer/java/ee/jboss-on-azure)
-- [**Azure Red Hat OpenShift (ARO) for Containers**](https://azure.microsoft.com/services/openshift)
-- [**Azure Container Service**](https://azure.microsoft.com/product-categories/containers/)
-
-Please refer to the getting started with Azure Marketplace section to evaluate your deployment infrastructure before you build the environment.
-
-### Perform the migration
-
-There are tools that can assist you in the migration:
-
-* [Red Hat Application Migration Toolkit to Analyze Applications for Migration](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/migration_guide/index#use_windup_to_analyze_applications_for_migration).
-* [JBoss Server Migration Tool to Migrate Server Configurations](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/migration_guide/index#migration_tool_server_migration_tool)
-
-To migrate your server configuration from the older JBoss EAP version to the newer JBoss EAP version, you can either use the [JBoss Server Migration Tool](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/migration_guide/index#migrate_server_migration_tool_option) or you can perform a manual migration with the help of the [management CLI migrate operation](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/migration_guide/index#migrate__migrate_operation_option).
-
-### Run Red Hat Application Migration Toolkit
-
-You can [run the JBoss Server Migration Tool in Interactive Mode](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html-single/using_the_jboss_server_migration_tool/index#migration_tool_server_run_interactive_mode). By default, the JBoss Server Migration Tool runs interactively. This mode allows you to choose exactly which server configurations you want to migrate.
-
-You can also [run the JBoss Server Migration Tool in Non-interactive Mode](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/using_the_jboss_server_migration_tool/running_the_server_migration_tool#migration_tool_server_run_noninteractive_mode). This mode allows it to run without prompts.
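A sketch of what a run of the tool typically looks like from the new server's installation, assuming the previous installation is available on the same machine; verify the script name and the non-interactive flag against the linked documentation for your EAP version.

```cli
# Interactive migration from a previous EAP installation (paths are placeholders)
$EAP_HOME/bin/jboss-server-migration.sh --source /opt/jboss-eap-7.2

# Non-interactive run, accepting defaults (flag as documented in the linked Red Hat guide)
$EAP_HOME/bin/jboss-server-migration.sh --source /opt/jboss-eap-7.2 --interactive false
```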
-
-### Review the result of JBoss server migration toolkit execution
-
-When the migration is complete, review the migrated server configuration files in the *EAP_HOME/standalone/configuration/* and *EAP_HOME/domain/configuration/* directories. For more information, visit [Reviewing the Results of JBoss Server Migration Tool Execution](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/using_the_jboss_server_migration_tool/running_the_server_migration_tool#migration_tool_server_results).
-
-### Expose the application
-
-You can expose the application using whichever of the following methods is suitable for your environment. A minimal Azure CLI sketch for the public IP option follows the list.
-
-* [Create a Public IP](../../../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address) to access the server and the application.
-* [Create a Jump VM in the Same Virtual Network (VNet)](../../windows/quick-create-portal.md#create-virtual-machine) in a different subnet (new subnet) in the same VNet and access the server via a Jump VM. This Jump VM can be used to expose the application.
-* [Create a Jump VM with VNet Peering](../../windows/quick-create-portal.md#create-virtual-machine) in a different Virtual Network and access the server and expose the application using [Virtual Network Peering](../../../virtual-network/tutorial-connect-virtual-networks-portal.md#peer-virtual-networks).
-* Expose the application using an [Application Gateway](../../../application-gateway/quick-create-portal.md#create-an-application-gateway)
-* Expose the application using an [External Load Balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md#create-load-balancer) (ELB).
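For the public IP option, a hedged Azure CLI sketch looks like the following; the resource group, NIC, NSG, and ipconfig names are placeholders from your VM deployment, and port 8080 is only an example application port.

```cli
# Create a public IP and attach it to the VM's NIC (placeholder names)
az network public-ip create --resource-group <rg> --name eap-public-ip --sku Standard
az network nic ip-config update --resource-group <rg> --nic-name <vm-nic> --name <ipconfig-name> --public-ip-address eap-public-ip

# Allow inbound traffic to the application port through the VM's network security group
az network nsg rule create --resource-group <rg> --nsg-name <vm-nsg> --name allow-app-8080 --priority 1010 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 8080
```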
-
-## Post-migration
-
-After you've reached the migration goals you defined in the pre-migration step, perform some end-to-end acceptance testing to verify that everything works as expected. Some topics for post-migration enhancements include, but are certainly not limited to, the following:
-
-* Using Azure Storage to serve static content mounted to the VMs. For more information, visit [Attach or detach a data disk to a VM](../../../devtest-labs/devtest-lab-attach-detach-data-disk.md)
-* Deploy your applications to your migrated JBoss cluster with Azure DevOps. For more information, visit [Azure DevOps getting started documentation](/azure/devops/get-started).
-* Consider using [Application Gateway](../../../application-gateway/index.yml).
-* Enhance your network topology with advanced load balancing services. For more information, visit [Using load-balancing services in Azure](../../../traffic-manager/traffic-manager-load-balancing-azure.md).
-* Leverage Azure managed identities to manage secrets and assign role-based access control (RBAC) to Azure resources. For more information, visit [What are managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md)?
-* Use Azure Key Vault to store any information that functions as a "secret". For more information, visit [Azure Key Vault basic concepts](../../../key-vault/general/basic-concepts.md).
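As a sketch of the Key Vault suggestion above, assuming a hypothetical vault and secret name (the vault name must be globally unique):

```cli
# Create a Key Vault and store an application secret (placeholder names)
az keyvault create --resource-group <rg> --name <unique-vault-name> --location eastus
az keyvault secret set --vault-name <unique-vault-name> --name MyAppDbPassword --value "<secret-value>"
```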
-
-## Resource links and support
-
-For any support-related questions, issues, or customization requirements, contact [Red Hat Support](https://access.redhat.com/support) or [Microsoft Azure Support](https://portal.azure.com/?quickstart=true#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-* Learn more about [JBoss EAP](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/getting_started_with_jboss_eap_for_openshift_online/introduction)
-* Learn more about [Red Hat Subscription Manager (Cloud Access)](https://access.redhat.com/documentation/en/red_hat_subscription_management/1/html-single/red_hat_cloud_access_reference_guide/index)
-* Learn more about [Azure Virtual Machines](https://azure.microsoft.com/overview/what-is-a-virtual-machine/)
-* Learn more about [Azure Virtual Machine Scale Set](../../../virtual-machine-scale-sets/overview.md)
-* Learn more about [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)
-* Learn more about [Azure App Service on Linux](../../../app-service/overview.md#app-service-on-linux)
-* Learn more about [Azure Storage](../../../storage/common/storage-introduction.md)
-* Learn more about [Azure Networking](../../../networking/fundamentals/networking-overview.md)
-
-## Next steps
-* [Deploy JBoss EAP on RHEL VM/VM Scale Set from Azure Marketplace](https://aka.ms/AMP-JBoss-EAP)
-* [Configuring a Java app for Azure App Service](../../../app-service/configure-language-java.md)
-* [How to deploy JBoss EAP onto Azure App Service](https://github.com/JasonFreeberg/jboss-on-app-service) tutorial
-* [Use Azure App Service Migration Assistance](https://azure.microsoft.com/services/app-service/migration-assistant/)
-* [Use Red Hat Migration Toolkit for Applications](https://developers.redhat.com/products/mta)
virtual-machines Jboss Eap On Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-rhel.md
- Title: Quickstart - Deploy JBoss Enterprise Application Platform (EAP) on Red Hat Enterprise Linux (RHEL) to Azure VMs and virtual machine scale sets
-description: How to deploy enterprise Java applications by using Red Hat JBoss EAP on Azure RHEL VMs and virtual machine scale sets.
- Previously updated : 10/30/2020
-# Deploy enterprise Java applications to Azure with JBoss EAP on Red Hat Enterprise Linux
-
-**Applies to:** :heavy_check_mark: Linux VMs
-
-The Azure Quickstart templates in this article show you how to deploy [JBoss Enterprise Application Platform (EAP)](https://www.redhat.com/en/technologies/jboss-middleware/application-platform) with [Red Hat Enterprise Linux (RHEL)](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux) to Azure virtual machines (VMs) and virtual machine scale sets. You'll use a sample Java app to validate the deployment.
-
-JBoss EAP is an open-source application server platform. It delivers enterprise-grade security, scalability, and performance for your Java applications. RHEL is an open-source operating system (OS) platform. It allows scaling of existing apps and rolling out of emerging technologies across all environments.
-
-JBoss EAP and RHEL include everything that you need to build, run, deploy, and manage enterprise Java applications in any environment. The combination is an open-source solution for on-premises, virtual environments, and in private, public, or hybrid clouds.
-
-## Prerequisites
-
-* An Azure account with an active subscription. To get an Azure subscription, activate your [Azure credits for Visual Studio subscribers](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or [create an account for free](https://azure.microsoft.com/pricing/free-trial).
-
-* JBoss EAP installation. You need to have a Red Hat account with Red Hat Subscription Management (RHSM) entitlement for JBoss EAP. This entitlement will let you download the Red Hat tested and certified JBoss EAP version.
-
- If you don't have EAP entitlement, obtain a [JBoss EAP evaluation subscription](https://access.redhat.com/products/red-hat-jboss-enterprise-application-platform/evaluation) before you get started. To create a new Red Hat subscription, go to [Red Hat Customer Portal](https://access.redhat.com/) and set up an account.
-
-* The [Azure CLI](/cli/azure/overview).
-
-* RHEL options. Choose pay-as-you-go (PAYG) or bring-your-own-subscription (BYOS). With BYOS, you need to activate your [Red Hat Cloud Access](https://access.redhat.com/) RHEL Gold Image before you deploy the Quickstart template.
-
-## Java EE and Jakarta EE application migration
-
-### Migrate to JBoss EAP
-JBoss EAP 7.2 and 7.3 are certified implementations of the Java Enterprise Edition (Java EE) 8 and Jakarta EE 8 specifications. JBoss EAP provides preconfigured options for features such as high-availability (HA) clustering, messaging, and distributed caching. It also enables users to write, deploy, and run applications by using the various APIs and services that JBoss EAP provides.
-
-For more information on JBoss EAP, see [Introduction to JBoss EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html-single/introduction_to_jboss_eap/index) or [Introduction to JBoss EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/introduction_to_jboss_eap/index).
-
-#### Applications on JBoss EAP
-
-* **Web services applications**. Web services provide a standard way to interoperate among software applications. Each application can run on different platforms and frameworks. These web services facilitate internal and heterogeneous subsystem communication.
-
- To learn more, see [Developing Web Services Applications on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/developing_web_services_applications/index) or [Developing Web Services Applications on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/developing_web_services_applications/index).
-
-* **Enterprise Java Beans (EJB) applications**. EJB 3.2 is an API for developing distributed, transactional, secure, and portable Java EE and Jakarta EE applications. EJB uses server-side components called Enterprise Beans to implement the business logic of an application in a decoupled way that encourages reuse.
-
- To learn more, see [Developing EJB Applications on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/developing_ejb_applications/index) or [Developing EJB Applications on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/developing_ejb_applications/index).
-
-* **Hibernate applications**. Developers and administrators can develop and deploy Java Persistence API (JPA) and Hibernate applications with JBoss EAP. Hibernate Core is an object-relational mapping framework for the Java language. It provides a framework for mapping an object-oriented domain model to a relational database, so applications can avoid direct interaction with the database.
-
- Hibernate Entity Manager implements the programming interfaces and lifecycle rules as defined by the [JPA 2.1 specification](https://www.jcp.org/en/jsr/overview). Together with Hibernate Annotations, this wrapper implements a complete (and standalone) JPA solution on top of the mature Hibernate Core.
-
- To learn more about Hibernate, see [JPA on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/development_guide/java_persistence_api) or [Jakarta Persistence on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/development_guide/java_persistence_api).
-
-#### Red Hat Migration Toolkit for Applications
-[Red Hat Migration Toolkit for Applications (MTA)](https://developers.redhat.com/products/mta/overview) is a migration tool for Java application servers. Use this tool to migrate from another app server to JBoss EAP. It works with IDE plug-ins for [Eclipse IDE](https://www.eclipse.org/ide/), [Red Hat CodeReady Workspaces](https://developers.redhat.com/products/codeready-workspaces/overview), and [Visual Studio Code](https://code.visualstudio.com/docs/languages/java) for Java.
-
-MTA is a free and open-source tool that:
-* Automates application analysis.
-* Supports effort estimation.
-* Accelerates code migration.
-* Supports containerization.
-* Integrates with Azure Workload Builder.
-
-### Migrate JBoss EAP from on-premises to Azure
-The Azure Marketplace offer of JBoss EAP on RHEL installs and provisions JBoss EAP on Azure VMs in less than 20 minutes. You can access these offers from [Azure Marketplace](https://azuremarketplace.microsoft.com/).
-
-This Azure Marketplace offer includes various combinations of EAP and RHEL versions to support your requirements. JBoss EAP is always BYOS, but for RHEL OS, you can choose between BYOS or PAYG. The Azure Marketplace offer includes plan options for JBoss EAP on RHEL as standalone or clustered VMs:
-
-* JBoss EAP 7.2 on RHEL 7.7 VM (PAYG)
-* JBoss EAP 7.2 on RHEL 8.0 VM (PAYG)
-* JBoss EAP 7.3 on RHEL 8.0 VM (PAYG)
-* JBoss EAP 7.2 on RHEL 7.7 VM (BYOS)
-* JBoss EAP 7.2 on RHEL 8.0 VM (BYOS)
-* JBoss EAP 7.3 on RHEL 8.0 VM (BYOS)
-
-Along with Azure Marketplace offers, you can use Quickstart templates to get started on your Azure migration journey. These Quickstarts include prebuilt Azure Resource Manager (ARM) templates and scripts to deploy JBoss EAP on RHEL in various configurations and version combinations. You'll have:
-
-* A load balancer.
-* A private IP for load balancing and VMs.
-* A virtual network with a single subnet.
-* VM configuration (cluster or standalone).
-* A sample Java application.
-
-Solution architecture for these templates includes:
-
-* JBoss EAP on a standalone RHEL VM.
-* JBoss EAP clustered across multiple RHEL VMs.
-* JBoss EAP clustered through Azure virtual machine scale sets.
-
-#### Linux Workload Migration for JBoss EAP
-Azure Workload Builder simplifies the proof-of-concept, evaluation, and migration process for on-premises Java apps to Azure. Workload Builder integrates with the Azure Migrate Discovery tool to identify JBoss EAP servers. Then it dynamically generates an Ansible playbook for JBoss EAP server deployment. It uses the Red Hat MTA tool to migrate servers from other app servers to JBoss EAP.
-
-Steps for simplifying migration include:
-1. **Evaluation**. Evaluate JBoss EAP clusters by using an Azure VM or a virtual machine scale set.
-1. **Assessment**. Scan applications and infrastructure.
-1. **Infrastructure configuration**. Create a workload profile.
-1. **Deployment and testing**. Deploy, migrate, and test the workload.
-1. **Post-deployment configuration**. Integrate with data, monitoring, security, backup, and more.
-
-## Server configuration choice
-
-For deployment of the RHEL VM, you can choose either PAYG or BYOS. Images from [Azure Marketplace](https://azuremarketplace.microsoft.com) default to PAYG. Deploy a BYOS-type RHEL VM if you have your own RHEL OS image. Make sure your RHSM account has BYOS entitlement via Cloud Access before you deploy the VM or virtual machine scale set.
-
-JBoss EAP provides powerful management capabilities, along with functionality and APIs for its applications. These management capabilities differ depending on which operating mode you use to start JBoss EAP, which is supported on both RHEL and Windows Server. JBoss EAP offers a standalone server operating mode for managing discrete instances, and a managed domain operating mode for managing groups of instances from a single control point.
-
-> [!NOTE]
-> JBoss EAP-managed domains aren't supported in Microsoft Azure because the Azure infrastructure services manage the HA feature.
-
-The environment variable `EAP_HOME` denotes the path to the JBoss EAP installation. Use the following command to start the JBoss EAP service in standalone mode:
-
-```
-$EAP_HOME/bin/standalone.sh
-```
-
-This startup script uses the EAP_HOME/bin/standalone.conf file to set some default preferences, such as JVM options. You can customize settings in this file. By default, JBoss EAP uses the standalone.xml configuration file to start in standalone mode, but you can start it with a different configuration file.
-
-For details on the available standalone configuration files and how to use them, see [Standalone Server Configuration Files for EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/configuration_guide/jboss_eap_management#standalone_server_configuration_files) or [Standalone Server Configuration Files for EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/configuration_guide/jboss_eap_management#standalone_server_configuration_files).
-
-To start JBoss EAP with a different configuration, use the `--server-config` argument. For example:
-
- ```
- $EAP_HOME/bin/standalone.sh --server-config=standalone-full.xml
- ```
-
-For a complete listing of all available startup script arguments and their purposes, use the `--help` argument. For more information, see [Server Runtime Arguments on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/configuration_guide/reference_material#reference_of_switches_and_arguments_to_pass_at_server_runtime) or [Server Runtime Arguments on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/configuration_guide/reference_material#reference_of_switches_and_arguments_to_pass_at_server_runtime).
-
-JBoss EAP can also work in cluster mode. JBoss EAP cluster messaging allows grouping of JBoss EAP messaging servers to share message processing load. Each active node in the cluster is an active JBoss EAP messaging server, which manages its own messages and handles its own connections. To learn more, see [Clusters Overview on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/configuring_messaging/clusters_overview) or [Clusters Overview on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/configuring_messaging/clusters_overview).
-
-## Support and subscription notes
-These Quickstart templates are offered as follows:
-
-- RHEL OS is offered as PAYG or BYOS via Red Hat Gold Image model.
-- JBoss EAP is offered as BYOS only.
-
-#### Using RHEL OS with the PAYG model
-
-By default, these Quickstart templates use the On-Demand RHEL 7.7 or 8.0 PAYG image from Azure Marketplace. PAYG images have an additional hourly RHEL subscription charge on top of the normal compute, network, and storage costs. At the same time, the instance is registered to your Red Hat subscription. This means you'll be using one of your entitlements.
-
-Using the PAYG image while the instance is also registered to your Red Hat subscription leads to "double billing." You can avoid this issue by building your own RHEL image. To learn more, read the Red Hat knowledge base article [How to provision a RHEL VM for Microsoft Azure](https://access.redhat.com/articles/uploading-rhel-image-to-azure), or activate your [Red Hat Cloud Access](https://access.redhat.com/) RHEL Gold Image.
-
-For details on PAYG VM pricing, see [Red Hat Enterprise Linux pricing](https://azure.microsoft.com/pricing/details/virtual-machines/red-hat/). To use RHEL in the PAYG model, you'll need an Azure subscription with the specified payment method for [RHEL 7.7 on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/RedHat.RedHatEnterpriseLinux77-ARM) or [RHEL 8.0 on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/RedHat.RedHatEnterpriseLinux80-ARM). These offers require a payment method to be specified in the Azure subscription.
-
-#### Using RHEL OS with the BYOS model
-
-To use BYOS for RHEL OS, you need to have a valid Red Hat subscription with entitlements to use RHEL OS in Azure. Complete the following prerequisites before you deploy the RHEL OS with the BYOS model:
-
-1. Ensure that you have RHEL OS and JBoss EAP entitlements attached to your Red Hat subscription.
-2. Authorize your Azure subscription ID to use RHEL BYOS images. Follow the [Red Hat Subscription Management documentation](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1) to complete the process, which includes these steps:
-
- 1. Enable Microsoft Azure as a provider in your Red Hat Cloud Access Dashboard.
-
- 1. Add your Azure subscription IDs.
-
- 1. Enable new products for Cloud Access on Microsoft Azure.
-
- 1. Activate Red Hat Gold Images for your Azure subscription. For more information, see [Red Hat Gold Images on Microsoft Azure](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/understanding-gold-images_cloud-access#proc_using-gold-images-azure_cloud-access).
-
- 1. Wait for Red Hat Gold Images to be available in your Azure subscription. These images are typically available within three hours of submission.
-
-3. Accept the Azure Marketplace terms and conditions for RHEL BYOS images. You can complete this process by running the following Azure CLI commands. For more information, see the [RHEL BYOS Gold Images in Azure](./byos.md) documentation. It's important that you're running the latest Azure CLI version.
-
- 1. Open an Azure CLI session and authenticate with your Azure account. For assistance, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-
- 1. Verify that the RHEL BYOS images are available in your subscription by running the following CLI command. If you don't get any results here, ensure that your Azure subscription is activated for RHEL BYOS images.
-
- ```
- az vm image list --offer rhel-byos --all
- ```
-
- 1. Run the following commands to accept the Azure Marketplace terms for RHEL 7.7 BYOS and RHEL 8.0 BYOS, respectively:
- ```
- az vm image terms accept --publisher redhat --offer rhel-byos --plan rhel-lvm77
- ```
-
- ```
- az vm image terms accept --publisher redhat --offer rhel-byos --plan rhel-lvm8
- ```
-
-Your subscription is now ready to deploy RHEL 7.7 or 8.0 BYOS on Azure virtual machines.
-
-#### Using JBoss EAP with the BYOS model
-
-JBoss EAP is available on Azure through the BYOS model only. When you're deploying this template, you need to supply your RHSM credentials along with the RHSM Pool ID with valid EAP entitlements. If you don't have EAP entitlements, obtain a [JBoss EAP evaluation subscription](https://access.redhat.com/products/red-hat-jboss-enterprise-application-platform/evaluation) before you get started.
-
-## Deployment options
-
-You can deploy the template in the following ways:
-- **PowerShell**. Deploy the template by running the following commands:
-
- ```
- New-AzResourceGroup -Name <resource-group-name> -Location <resource-group-location> #use this command when you need to create a new resource group for your deployment
- ```
-
- ```
- New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateUri <raw link to the template which can be obtained from github>
- ```
-
- For information on installing and configuring Azure PowerShell, see the [PowerShell documentation](/powershell/azure/).
-- **Azure CLI**. Deploy the template by running the following commands:
-
- ```
- az group create --name <resource-group-name> --location <resource-group-location> #use this command when you need to create a new resource group for your deployment
- ```
-
- ```
- az deployment group create --resource-group <my-resource-group> --template-uri <raw link to the template which can be obtained from github>
- ```
-
- For details on installing and configuring the Azure CLI, see [Install the CLI](/cli/azure/install-azure-cli).
-- **Azure portal**. You can deploy to the Azure portal by going to the Azure Quickstart templates as noted in the next section. After you're in the Quickstart, select the **Deploy to Azure** or **Browse on GitHub** button.
-
-## Azure Quickstart templates
-
-You can start by using one of the following Quickstart templates for JBoss EAP on RHEL that meets your deployment goal:
-
-* <a href="https://azure.microsoft.com/resources/templates/jboss-eap-standalone-rhel/"> JBoss EAP on RHEL (standalone VM)</a>. This template deploys a web application named JBoss-EAP on Azure to JBoss EAP 7.2 or 7.3 running on a RHEL 7.7 or 8.0 VM.
-
-* <a href="https://azure.microsoft.com/resources/templates/jboss-eap-clustered-multivm-rhel/"> JBoss EAP on RHEL (clustered, multiple VMs)</a>. This template deploys a web application called eap-session-replication on a JBoss EAP 7.2 or 7.3 cluster running on *n* RHEL 7.7 or 8.0 VMs, where you choose the value of *n*. All the VMs are added to the back-end pool of a load balancer.
-
-* <a href="https://azure.microsoft.com/resources/templates/jboss-eap-clustered-vmss-rhel/"> JBoss EAP on RHEL (clustered, virtual machine scale set)</a>. This template deploys a web application called eap-session-replication on a JBoss EAP 7.2 or 7.3 cluster running on RHEL 7.7 or 8.0 virtual machine scale sets.
-
-## Resource links
-
-* [Azure Hybrid Benefit](../../windows/hybrid-use-benefit-licensing.md)
-* [Configure a Java app for Azure App Service](../../../app-service/configure-language-java.md)
-* [JBoss EAP on Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)
-* [JBoss EAP on Azure App Service Linux](../../../app-service/quickstart-java.md)
-* [Deploy JBoss EAP on Azure App Service](https://github.com/JasonFreeberg/jboss-on-app-service)
-
-## Next steps
-
-* Learn more about [JBoss EAP 7.2](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2).
-* Learn more about [JBoss EAP 7.3](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3).
-* Learn more about [Red Hat Subscription Management](https://access.redhat.com/products/red-hat-subscription-management).
-* Learn about [Red Hat workloads on Azure](./overview.md).
-* Deploy [JBoss EAP on an RHEL VM or virtual machine scale set from Azure Marketplace](https://aka.ms/AMP-JBoss-EAP).
-* Deploy [JBoss EAP on an RHEL VM or virtual machine scale set from Azure Quickstart templates](https://aka.ms/Quickstart-JBoss-EAP).
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/overview.md
Azure provides Red Hat Update Infrastructure only for pay-as-you-go RHEL VMs. RH
RHEL images connected to RHUI update by default to the latest minor version of RHEL when a `yum update` is run. This behavior means that a RHEL 7.4 VM might get upgraded to RHEL 7.7 if a `yum update` operation is run on it. This behavior is by design for RHUI. To mitigate this upgrade behavior, switch from regular RHEL repositories to [Extended Update Support repositories](./redhat-rhui.md#rhel-eus-and-version-locking-rhel-vms).
+## Red Hat Middleware
+
+Microsoft and Red Hat have partnered to develop a variety of solutions for running Red Hat Middleware on Azure. Learn more about JBoss EAP on Azure Virtual Machines and Azure App Service at [Red Hat JBoss EAP on Azure](/azure/developer/java/ee/jboss-on-azure).
+ ## Next steps * Learn more about [RHEL images on Azure](./redhat-images.md).
virtual-machines Wildfly On Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/wildfly-on-centos.md
- Title: Quickstart - WildFly on CentOS
-description: Deploy Java applications to WildFly on CentOS VM
- Previously updated : 10/23/2020
-# Quickstart: WildFly on CentOS 8
-
-**Applies to:** :heavy_check_mark: Linux VMs
-
-This Quickstart shows you how to deploy a standalone node of WildFly on a CentOS 8 VM. It's ideal for development and testing of enterprise Java applications on Azure. An application server subscription isn't required to deploy this quickstart.
-
-## Prerequisites
-
-* An Azure account with an active subscription. If you don't have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or [create an account for free](https://azure.microsoft.com/pricing/free-trial).
-
-## Use case
-
-WildFly is ideal for development and testing of enterprise Java applications on Azure. Lists of technologies available in WildFly 18 server configuration profiles are available in the [WildFly Getting Started Guide](https://docs.wildfly.org/18/Getting_Started_Guide.html#getting-started-with-wildfly).
-
-You can use WildFly in either Standalone or Cluster mode per your use case. You can ensure high availability of critical Jakarta EE applications by running WildFly on a cluster of nodes, making a small number of application configuration changes, and then deploying the application in the cluster. To learn more, see the [WildFly High Availability Guide](https://docs.wildfly.org/18/High_Availability_Guide.html).
-
-## Configuration choice
-
-WildFly can be booted in **Standalone Server** mode. A standalone server instance is an independent process, much like a JBoss Application Server (AS) 3, 4, 5, or 6 instance. Standalone instances are launched via the standalone.sh or standalone.bat launch scripts. If you run more than one standalone instance, it's your responsibility to coordinate multi-server management across the servers.
-
-You can also start a WildFly instance with an alternate configuration by using the configuration files available in the configuration folder.
-
-Following are the Standalone Server Configuration files:
--- standalone.xml (default) - This configuration is the default file used for starting the WildFly instance. It contains Jakarta Web Profile certified configuration with the required technologies.
-
-- standalone-ha.xml - Jakarta EE Web Profile 8 certified configuration with high availability (targeted at web applications).
-
-- standalone-full.xml - Jakarta EE Platform 8 certified configuration including all the required technologies for hosting Jakarta EE applications.
-
-- standalone-full-ha.xml - Jakarta EE Platform 8 certified configuration with high availability for hosting Jakarta EE applications.
-
-To start your standalone WildFly server with another provided configuration, use the --server-config argument with the name of the configuration file.
-
-For example, to use the Jakarta EE Platform 8 configuration with clustering capabilities, use the following command:
-
-```
-./standalone.sh --server-config=standalone-full-ha.xml
-```
-
-To learn more about the configurations, check out the [WildFly Getting Started Guide](https://docs.wildfly.org/18/Getting_Started_Guide.html#wildfly-10-configurations).
-
-## Licensing, support and subscription notes
-
-The Azure CentOS 8 image is a Pay-As-You-Go (PAYG) VM image and doesn't require the user to obtain a license. The first time the VM is launched, the OS license is automatically activated and charged at an hourly rate, in addition to Microsoft's hourly Linux VM rates. See [Linux VM Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/#linux) for details. WildFly is free to download and use and doesn't require a Red Hat subscription or license.
-
-## How to consume
-
-You can deploy the template in the following three ways:
-- Use PowerShell - Deploy the template by running the following commands: (Check out [Azure PowerShell](/powershell/azure/) for information on installing and configuring Azure PowerShell).
-
- ```
- New-AzResourceGroup -Name <resource-group-name> -Location <resource-group-location> #use this command when you need to create a new Resource Group for your deployment
- ```
-
- ```
- New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/application-workloads/wildfly/wildfly-standalone-centos8/azuredeploy.json
- ```
-
-- Use Azure CLI - Deploy the template by running the following commands: (Check out [Azure Cross-Platform Command Line](/cli/azure/install-azure-cli) for details on installing and configuring the Azure Cross-Platform Command-Line Interface).
-
- ```
- az group create --name <resource-group-name> --location <resource-group-location> #use this command when you need to create a new Resource Group for your deployment
- ```
-
- ```
- az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/application-workloads/wildfly/wildfly-standalone-centos8/azuredeploy.json
- ```
-- Use Azure portal - Deploy the template by clicking <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fwildfly%2Fwildfly-standalone-centos8%2Fazuredeploy.json" target="_blank">here</a> and logging in to your Azure portal.
-
-## ARM template
-
-<a href="https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/wildfly/wildfly-standalone-centos8" target="_blank"> WildFly 18 on CentOS 8 (stand-alone VM)</a> - This Quickstart template creates a standalone node of WildFly 18.0.1.Final on a CentOS 8 VM in your resource group, which includes a private IP for the VM, a virtual network, and a diagnostics storage account. It also deploys a sample Java application named JBoss-EAP on Azure to WildFly.
-
-## Resource links
-
-* Learn more about [WildFly 18](https://docs.wildfly.org/18/)
-* Learn more about [Linux distributions on Azure](../../linux/endorsed-distros.md)
-* [Azure for Java developers documentation](https://github.com/JasonFreeberg/jboss-on-app-service)
-
-## Next steps
-
-For production environments, check out the Red Hat JBoss EAP Azure Quickstart ARM templates:
-
-Stand-alone RHEL virtual machine with sample application:
-
-* <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/jboss/jboss-eap-standalone-rhel" target="_blank"> JBoss EAP on RHEL (stand-alone VM)</a>
-
-Clustered RHEL virtual machines with sample application:
-
-* <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/jboss/jboss-eap-clustered-multivm-rhel" target="_blank"> JBoss EAP on RHEL (clustered VMs)</a>
-
-Clustered RHEL Virtual Machine Scale Set with sample application:
-
-* <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/jboss/jboss-eap-clustered-vmss-rhel" target="_blank"> JBoss EAP on RHEL (clustered Virtual Machine Scale Set)</a>
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/ma
```
+### Create a sample Control Plane configuration
+
+You can run the 'Create Sample Deployer Configuration' pipeline to create a sample configuration for the Control Plane. When running the pipeline, choose the appropriate Azure region.
+ ## Manual configuration of Azure DevOps Services for the SAP Deployment Automation Framework ### Create a new project
virtual-network Nat Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-availability-zones.md
If your scenario requires inbound endpoints, you have two options:
| (1) | **Align** the inbound endpoints with the respective **zonal stacks** you're creating for outbound. | Create a standard load balancer with a zonal frontend. | Same failure model for inbound and outbound. Simpler to operate. | Individual IP addresses per zone may need to be masked by a common DNS name. | | (2) | **Overlay** the zonal stacks with a cross-zone inbound endpoint. | Create a standard load balancer with a zone-redundant front-end. | Single IP address for inbound endpoint. | Varying models for inbound and outbound. More complex to operate. |
+Note that zonal configuration for a load balancer works differently from NAT gateway. The load balancer's availability zone selection is synonymous with its frontend IP configuration's zone selection. For public load balancers, if the public IP in the load balancer's frontend is zone-redundant, then the load balancer is also zone-redundant. If the public IP in the load balancer's frontend is zonal, then the load balancer is also designated to the same zone.
+ ## Limitations * Zones can't be changed, updated, or created for NAT gateway after deployment.
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
-* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
+* [Learn module: Introduction to Azure Virtual Network NAT](/training/modules/intro-to-azure-virtual-network-nat).
+
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
Check the following configurations to ensure that NAT gateway can be used to dir
2. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway can't span beyond a single virtual network.
-3. No [NSG rules](../network-security-groups-overview.md#outbound) or UDRs are blocking NAT gateway from directing traffic outbound to the internet.
+3. No [NSG rules](../network-security-groups-overview.md#outbound) or [UDRs](/azure/virtual-network/nat-gateway/troubleshoot-nat-connectivity#virtual-appliance-udrs-and-expressroute-override-nat-gateway-for-routing-outbound-traffic) are blocking NAT gateway from directing traffic outbound to the internet.
### How to validate connectivity
To learn more about NAT gateway, see:
* [NAT gateway resource](nat-gateway-resource.md)
-* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
+* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md
There is no charge for using Azure VNet; it is free of cost. Standard charges ar
## Next steps - Learn about [Azure Virtual Network concepts and best practices](concepts-and-best-practices.md). - To get started using a virtual network, create one, deploy a few VMs to it, and communicate between the VMs. To learn how, see the [Create a virtual network](quick-create-portal.md) quickstart.
+ - [Learn module: Introduction to Azure Virtual Networks](/training/modules/introduction-to-azure-virtual-networks)
virtual-wan Virtual Wan About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-about.md
Subscribe to the RSS feed and view the latest Virtual WAN feature updates on the
* [Tutorial: Create a site-to-site connection using Virtual WAN](virtual-wan-site-to-site-portal.md)
-* [Learn module: Introduction to Azure Virtual WAN](/learn/modules/introduction-azure-virtual-wan/)
+* [Learn module: Introduction to Azure Virtual WAN](/training/modules/introduction-azure-virtual-wan/)
visual-studio Vs Storage Cloud Services Getting Started Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/visual-studio/vs-storage-cloud-services-getting-started-blobs.md
- Title: Get started with blob storage using Visual Studio (cloud services)
-description: How to get started using Azure Blob storage in a cloud service project in Visual Studio after connecting to a storage account using Visual Studio connected services
- Previously updated : 12/02/2016
-# Get started with Azure Blob Storage and Visual Studio connected services (cloud services projects)
-
-## Overview
---
-This article describes how to get started with Azure Blob Storage after you created or referenced an Azure Storage account by using the Visual Studio **Add Connected Services** dialog in a Visual Studio cloud services project. We'll show you how to access and create blob containers, and how to perform common tasks like uploading, listing, and downloading blobs. The samples are written in C\# and use the [Microsoft Azure Storage Client Library for .NET](/previous-versions/azure/dn261237(v=azure.100)).
-
-Azure Blob Storage is a service for storing large amounts of unstructured data that can be accessed from anywhere in the world via HTTP or HTTPS. A single blob can be any size. Blobs can be things like images, audio and video files, raw data, and document files.
-
-Just as files live in folders, storage blobs live in containers. After you have created a storage account, you create one or more containers in it. For example, in a storage account called "Scrapbook," you can create a container called "images" to store pictures and another called "audio" to store audio files. After you create the containers, you can upload individual blob files to them.
-
-* For more information on programmatically manipulating blobs, see [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md).
-* For general information about Azure Storage, see [Storage documentation](/azure/storage/).
-* For general information about Azure Cloud Services, see [Cloud Services documentation](/azure/cloud-services/).
-* For more information about programming ASP.NET applications, see [ASP.NET](https://www.asp.net).
-
-## Access blob containers in code
-To programmatically access blobs in cloud service projects, you need to add the following items, if they're not already present.
-
-1. Add the following code namespace declarations to the top of any C# file in which you wish to programmatically access Azure Storage.
-
- ```csharp
- using Microsoft.Framework.Configuration;
- using Microsoft.WindowsAzure.Storage;
- using Microsoft.WindowsAzure.Storage.Blob;
- using System.IO;
- using System.Linq;
- using System.Threading.Tasks;
- using LogLevel = Microsoft.Framework.Logging.LogLevel;
- ```
-2. Get a **CloudStorageAccount** object that represents your storage account information. Use the following code to get your storage connection string and storage account information from the Azure service configuration.
-
- ```csharp
- CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- CloudConfigurationManager.GetSetting("<storage account name>_AzureStorageConnectionString"));
- ```
-3. Get a **CloudBlobClient** object to reference an existing container in your storage account.
-
- ```csharp
- // Create a blob client.
- CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
- ```
-4. Get a **CloudBlobContainer** object to reference a specific blob container.
-
- ```csharp
- // Get a reference to a container named "mycontainer."
- CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
- ```
-
-> [!NOTE]
-> Use all of the code shown in the previous procedure in front of the code shown in the following sections.
->
->
-
-## Create a container in code
-> [!NOTE]
-> Some APIs that perform calls out to Azure Storage in ASP.NET are asynchronous. See [Asynchronous programming with Async and Await](/previous-versions/hh191443(v=vs.140)) for more information. The code in the following example assumes that you are using async programming methods.
->
->
-
-To create a container in your storage account, all you need to do is add a call to **CreateIfNotExistsAsync** as in the following code:
-
-```csharp
-// If "mycontainer" doesn't exist, create it.
-await container.CreateIfNotExistsAsync();
-```
--
-To make the files within the container available to everyone, you can set the container to be public by using the following code.
-
-```csharp
-await container.SetPermissionsAsync(new BlobContainerPermissions
-{
- PublicAccess = BlobContainerPublicAccessType.Blob
-});
-```
--
-Anyone on the Internet can see blobs in a public container, but you can
-modify or delete them only if you have the appropriate access key.
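-
-If you later want to restrict access again, the access level can be switched back to private. This is a minimal sketch that reuses the **container** reference from the earlier snippets and the **BlobContainerPublicAccessType.Off** value from the same client library; treat it as illustrative rather than part of the original walkthrough.
-
-```csharp
-// Remove public read access so that only callers with the account key (or a SAS) can read the blobs.
-await container.SetPermissionsAsync(new BlobContainerPermissions
-{
-    PublicAccess = BlobContainerPublicAccessType.Off
-});
-```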
-
-## Upload a blob into a container
-Azure Storage supports block blobs and page blobs. In the majority of cases, block blob is the recommended type to use.
-
-To upload a file to a block blob, get a container reference and use it to get a block blob reference. Once you have a blob reference, you can upload any stream of data to it by calling the **UploadFromStream** method. This operation creates the blob if it didn't previously exist, or overwrites it if it does exist. The following example shows how to upload a blob into a container and assumes that the container was already created.
-
-```csharp
-// Retrieve a reference to a blob named "myblob".
-CloudBlockBlob blockBlob = container.GetBlockBlobReference("myblob");
-
-// Create or overwrite the "myblob" blob with contents from a local file.
-using (var fileStream = System.IO.File.OpenRead(@"path\myfile"))
-{
- blockBlob.UploadFromStream(fileStream);
-}
-```
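-
-For small payloads that are already in memory, you don't need a file stream at all. The following is a minimal sketch, assuming the **UploadTextAsync** helper on **CloudBlockBlob** in this client library:
-
-```csharp
-// Create or overwrite "myblob" with a short string instead of a file stream.
-await blockBlob.UploadTextAsync("Hello, Blob storage!");
-```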
-
-## List the blobs in a container
-To list the blobs in a container, first get a container reference. You can then use the container's **ListBlobs** method to retrieve the blobs and/or directories within it. To access the rich set of properties and methods for a returned **IListBlobItem**, you must cast it to a **CloudBlockBlob**, **CloudPageBlob**, or **CloudBlobDirectory** object. If the type is unknown, you can use a type check to determine which to cast it to. The following code demonstrates how to retrieve and output the URI of each item in the **photos** container:
-
-```csharp
-// Loop over items within the container and output the length and URI.
-foreach (IListBlobItem item in container.ListBlobs(null, false))
-{
- if (item.GetType() == typeof(CloudBlockBlob))
- {
- CloudBlockBlob blob = (CloudBlockBlob)item;
-
- Console.WriteLine("Block blob of length {0}: {1}", blob.Properties.Length, blob.Uri);
-
- }
- else if (item.GetType() == typeof(CloudPageBlob))
- {
- CloudPageBlob pageBlob = (CloudPageBlob)item;
-
- Console.WriteLine("Page blob of length {0}: {1}", pageBlob.Properties.Length, pageBlob.Uri);
-
- }
- else if (item.GetType() == typeof(CloudBlobDirectory))
- {
- CloudBlobDirectory directory = (CloudBlobDirectory)item;
-
- Console.WriteLine("Directory: {0}", directory.Uri);
- }
-}
-```
-
-As shown in the previous code sample, the blob service has the concept of directories within containers, as well. This is so that you can organize your blobs in a more folder-like structure. For example, consider the following set of block blobs in a container named **photos**:
-
-```output
-photo1.jpg
-2010/architecture/description.txt
-2010/architecture/photo3.jpg
-2010/architecture/photo4.jpg
-2011/architecture/photo5.jpg
-2011/architecture/photo6.jpg
-2011/architecture/description.txt
-2011/photo7.jpg
-```
-
-When you call **ListBlobs** on the container (as in the previous sample), the collection returned
-contains **CloudBlobDirectory** and **CloudBlockBlob** objects representing the directories and blobs contained at the top level. Here is the resulting output:
-
-```output
-Directory: https://<accountname>.blob.core.windows.net/photos/2010/
-Directory: https://<accountname>.blob.core.windows.net/photos/2011/
-Block blob of length 505623: https://<accountname>.blob.core.windows.net/photos/photo1.jpg
-```
--
-Optionally, you can set the **UseFlatBlobListing** parameter of the **ListBlobs** method to
-**true**. This results in every blob being returned as a **CloudBlockBlob**, regardless of directory. Here is the call to **ListBlobs**:
-
-```csharp
-// Loop over items within the container and output the length and URI.
-foreach (IListBlobItem item in container.ListBlobs(null, true))
-{
- ...
-}
-```
-
-and here are the results:
-
-```output
-Block blob of length 4: https://<accountname>.blob.core.windows.net/photos/2010/architecture/description.txt
-Block blob of length 314618: https://<accountname>.blob.core.windows.net/photos/2010/architecture/photo3.jpg
-Block blob of length 522713: https://<accountname>.blob.core.windows.net/photos/2010/architecture/photo4.jpg
-Block blob of length 4: https://<accountname>.blob.core.windows.net/photos/2011/architecture/description.txt
-Block blob of length 419048: https://<accountname>.blob.core.windows.net/photos/2011/architecture/photo5.jpg
-Block blob of length 506388: https://<accountname>.blob.core.windows.net/photos/2011/architecture/photo6.jpg
-Block blob of length 399751: https://<accountname>.blob.core.windows.net/photos/2011/photo7.jpg
-Block blob of length 505623: https://<accountname>.blob.core.windows.net/photos/photo1.jpg
-```
-
-For more information, see [CloudBlobContainer.ListBlobs](/rest/api/storageservices/List-Blobs).
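-
-You can also scope a listing to a single virtual directory by passing a prefix. A minimal sketch, assuming the prefix overload of **ListBlobs** in this client library:
-
-```csharp
-// List only the blobs and virtual directories under "2010/".
-foreach (IListBlobItem item in container.ListBlobs("2010/", false))
-{
-    Console.WriteLine(item.Uri);
-}
-```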
-
-## Download blobs
-To download blobs, first retrieve a blob reference and then call the **DownloadToStream** method. The following
-example uses the **DownloadToStream** method to transfer the blob
-contents to a stream object that you can then persist to a local file.
-
-```csharp
-// Get a reference to a blob named "photo1.jpg".
-CloudBlockBlob blockBlob = container.GetBlockBlobReference("photo1.jpg");
-
-// Save blob contents to a file.
-using (var fileStream = System.IO.File.OpenWrite(@"path\myfile"))
-{
- blockBlob.DownloadToStream(fileStream);
-}
-```
-
-You can also use the **DownloadToStream** method to download the contents of a blob as a text string.
-
-```csharp
-// Get a reference to a blob named "myblob.txt"
-CloudBlockBlob blockBlob2 = container.GetBlockBlobReference("myblob.txt");
-
-string text;
-using (var memoryStream = new MemoryStream())
-{
- blockBlob2.DownloadToStream(memoryStream);
- text = System.Text.Encoding.UTF8.GetString(memoryStream.ToArray());
-}
-```
-
-## Delete blobs
-To delete a blob, first get a blob reference and then call the
-**Delete** method.
-
-```csharp
-// Get a reference to a blob named "myblob.txt".
-CloudBlockBlob blockBlob = container.GetBlockBlobReference("myblob.txt");
-
-// Delete the blob.
-blockBlob.Delete();
-```
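-
-Calling **Delete** on a blob that doesn't exist throws an exception. A minimal sketch that avoids this, assuming the **DeleteIfExistsAsync** variant in this client library:
-
-```csharp
-// Delete the blob only if it exists; the call returns false when there was nothing to delete.
-bool deleted = await blockBlob.DeleteIfExistsAsync();
-Console.WriteLine("Blob deleted: {0}", deleted);
-```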
--
-## List blobs in pages asynchronously
-If you are listing a large number of blobs, or you want to control the number of results you return in one listing operation, you can list blobs in pages of results. This example shows how to return results in pages asynchronously, so that execution is not blocked while waiting to return a large set of results.
-
-This example shows a flat blob listing, but you can also perform a hierarchical listing, by setting the **useFlatBlobListing** parameter of the **ListBlobsSegmentedAsync** method to **false**.
-
-Because the sample method calls an asynchronous method, it must be prefaced with the **async** keyword, and it must return a **Task** object. The await keyword specified for the **ListBlobsSegmentedAsync** method suspends execution of the sample method until the listing task completes.
-
-```csharp
-async public static Task ListBlobsSegmentedInFlatListing(CloudBlobContainer container)
-{
- // List blobs to the console window, with paging.
- Console.WriteLine("List blobs in pages:");
-
- int i = 0;
- BlobContinuationToken continuationToken = null;
- BlobResultSegment resultSegment = null;
-
- // Call ListBlobsSegmentedAsync and enumerate the result segment returned, while the continuation token is non-null.
- // When the continuation token is null, the last page has been returned and execution can exit the loop.
- do
- {
- // This overload allows control of the page size. You can return all remaining results by passing null for the maxResults parameter,
- // or by calling a different overload.
- resultSegment = await container.ListBlobsSegmentedAsync("", true, BlobListingDetails.All, 10, continuationToken, null, null);
- if (resultSegment.Results.Count<IListBlobItem>() > 0) { Console.WriteLine("Page {0}:", ++i); }
- foreach (var blobItem in resultSegment.Results)
- {
- Console.WriteLine("\t{0}", blobItem.StorageUri.PrimaryUri);
- }
- Console.WriteLine();
-
- //Get the continuation token.
- continuationToken = resultSegment.ContinuationToken;
- }
- while (continuationToken != null);
-}
-```
-
-## Next steps
visual-studio Vs Storage Cloud Services Getting Started Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/visual-studio/vs-storage-cloud-services-getting-started-queues.md
- Title: Get started with queue storage using Visual Studio (cloud services)
-description: How to get started using Azure Queue storage in a cloud service project in Visual Studio after connecting to a storage account using Visual Studio connected services
- Previously updated : 12/02/2016
-# Getting started with Azure Queue storage and Visual Studio connected services (cloud services projects)
-
-## Overview
--
-This article describes how to get started using Azure Queue storage in Visual Studio after you have created or referenced an Azure storage account in a cloud services project by using the Visual Studio **Add Connected Services** dialog.
-
-We'll show you how to create a queue in code. We'll also show you how to perform basic queue operations, such as adding, modifying, reading and removing queue messages. The samples are written in C# code and use the [Microsoft Azure Storage Client Library for .NET](/previous-versions/azure/dn261237(v=azure.100)).
-
-The **Add Connected Services** operation installs the appropriate NuGet packages to access Azure storage in your project and adds the connection string for the storage account to your project configuration files.
-
-* See [Get started with Azure Queue storage using .NET](../storage/queues/storage-dotnet-how-to-use-queues.md) for more information on manipulating queues in code.
-* See [Storage documentation](/azure/storage/) for general information about Azure Storage.
-* See [Cloud Services documentation](/azure/cloud-services/) for general information about Azure cloud services.
-* See [ASP.NET](https://www.asp.net) for more information about programming ASP.NET applications.
-
-Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account.
-
-## Access queues in code
-To access queues in Visual Studio Cloud Services projects, you need to add the following items to any C# source file that accesses Azure Queue storage.
-
-1. Make sure the namespace declarations at the top of the C# file include these **using** statements.
-
- ```csharp
- using Microsoft.Framework.Configuration;
- using Microsoft.WindowsAzure.Storage;
- using Microsoft.WindowsAzure.Storage.Queue;
- ```
-2. Get a **CloudStorageAccount** object that represents your storage account information. Use the following code to get your storage connection string and storage account information from the Azure service configuration.
-
- ```csharp
- CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
- CloudConfigurationManager.GetSetting("<storage-account-name>_AzureStorageConnectionString"));
- ```
-3. Get a **CloudQueueClient** object to reference the queue objects in your storage account.
-
- ```csharp
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
- ```
-4. Get a **CloudQueue** object to reference a specific queue.
-
- ```csharp
- // Get a reference to a queue named "messagequeue" (queue names must be lowercase).
- CloudQueue messageQueue = queueClient.GetQueueReference("messagequeue");
- ```
-
-**NOTE:** Use all of the above code in front of the code in the following samples.
-
-## Create a queue in code
-To create the queue in code, just add a call to **CreateIfNotExists**.
-
-```csharp
-// Create the CloudQueue if it does not exist
-messageQueue.CreateIfNotExists();
-```
-
-## Add a message to a queue
-To insert a message into an existing queue, create a new **CloudQueueMessage** object, then call the **AddMessage** method.
-
-A **CloudQueueMessage** object can be created from either a string (in UTF-8 format) or a byte array.
-
-Here is an example which inserts the message 'Hello, World'.
-
-```csharp
-// Create a message and add it to the queue.
-CloudQueueMessage message = new CloudQueueMessage("Hello, World");
-messageQueue.AddMessage(message);
-```
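-
-Because **CloudQueueMessage** also accepts a byte array, binary payloads can be enqueued the same way. A minimal sketch, assuming the byte-array constructor available in this version of the client library:
-
-```csharp
-// Enqueue a small binary payload.
-byte[] payload = System.Text.Encoding.UTF8.GetBytes("Hello, World");
-CloudQueueMessage binaryMessage = new CloudQueueMessage(payload);
-messageQueue.AddMessage(binaryMessage);
-```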
-
-## Read a message in a queue
-You can peek at the message in the front of a queue without removing it from the queue by calling the **PeekMessage** method.
-
-```csharp
-// Peek at the next message
-CloudQueueMessage peekedMessage = messageQueue.PeekMessage();
-```
-
-## Read and remove a message in a queue
-Your code can remove (de-queue) a message from a queue in two steps.
-
-1. Call **GetMessage** to get the next message in a queue. A message returned from **GetMessage** becomes invisible to any other code reading messages from this queue. By default, this message stays invisible for 30 seconds.
-2. To finish removing the message from the queue, call **DeleteMessage**.
-
-This two-step process of removing a message ensures that if your code fails to process a message because of a hardware or software failure, another instance of your code can get the same message and try again. The following code calls **DeleteMessage** right after the message has been processed.
-
-```csharp
-// Get the next message in the queue.
-CloudQueueMessage retrievedMessage = messageQueue.GetMessage();
-
-// Process the message in less than 30 seconds
-
-// Then delete the message.
-messageQueue.DeleteMessage(retrievedMessage);
-```
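-
-If 30 seconds isn't enough time to process a message, you can request a longer invisibility window when you retrieve it. A minimal sketch, assuming the visibility-timeout overload of **GetMessage** in this client library:
-
-```csharp
-// Keep the message invisible to other consumers for five minutes while it's being processed.
-CloudQueueMessage longRunningMessage = messageQueue.GetMessage(TimeSpan.FromMinutes(5));
-
-// Process the message, then delete it.
-messageQueue.DeleteMessage(longRunningMessage);
-```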
--
-## Use additional options to process and remove queue messages
-There are two ways you can customize message retrieval from a queue.
-
-* You can get a batch of messages (up to 32).
-* You can set a longer or shorter invisibility timeout, allowing your code more or less
- time to fully process each message. The following code example uses the
- **GetMessages** method to get 20 messages in one call. Then it processes
- each message using a **foreach** loop. It also sets the invisibility
- timeout to five minutes for each message. Note that the 5 minutes starts
- for all messages at the same time, so after 5 minutes have passed since
- the call to **GetMessages**, any messages which have not been deleted
- will become visible again.
-
-Here's an example:
-
-```csharp
-foreach (CloudQueueMessage message in messageQueue.GetMessages(20, TimeSpan.FromMinutes(5)))
-{
- // Process all messages in less than 5 minutes, deleting each message after processing.
-
- // Then delete the message after processing
- messageQueue.DeleteMessage(message);
-
-}
-```
-
-## Get the queue length
-You can get an estimate of the number of messages in a queue. The **FetchAttributes** method asks the Queue service to retrieve the queue attributes, including the message count. The **ApproximateMessageCount** property returns the last value retrieved by the **FetchAttributes** method, without calling the Queue service.
-
-```csharp
-// Fetch the queue attributes.
-messageQueue.FetchAttributes();
-
-// Retrieve the cached approximate message count.
-int? cachedMessageCount = messageQueue.ApproximateMessageCount;
-
-// Display number of messages.
-Console.WriteLine("Number of messages in queue: " + cachedMessageCount);
-```
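-
-The same attributes can also be fetched asynchronously. A minimal sketch, assuming the **FetchAttributesAsync** method in this client library:
-
-```csharp
-// Fetch the queue attributes asynchronously, then read the cached approximate count.
-await messageQueue.FetchAttributesAsync();
-int? messageCount = messageQueue.ApproximateMessageCount;
-```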
-
-## Use the Async-Await Pattern with common Azure Queue APIs
-This example shows how to use the Async-Await pattern with common Azure Queue APIs. The sample calls the async version of each method, as indicated by the **Async** suffix. When an async method is used, the async-await pattern suspends local execution until the call completes. This behavior allows the current thread to do other work, which helps avoid performance bottlenecks and improves the overall responsiveness of your application. For more details on using the Async-Await pattern in .NET, see [Async and Await (C# and Visual Basic)](/previous-versions/hh191443(v=vs.140))
-
-```csharp
-// Create a message to put in the queue
-CloudQueueMessage cloudQueueMessage = new CloudQueueMessage("My message");
-
-// Add the message asynchronously
-await messageQueue.AddMessageAsync(cloudQueueMessage);
-Console.WriteLine("Message added");
-
-// Async dequeue the message
-CloudQueueMessage retrievedMessage = await messageQueue.GetMessageAsync();
-Console.WriteLine("Retrieved message with content '{0}'", retrievedMessage.AsString);
-
-// Delete the message asynchronously
-await messageQueue.DeleteMessageAsync(retrievedMessage);
-Console.WriteLine("Deleted message");
-```
-
-## Delete a queue
-To delete a queue and all the messages contained in it, call the **Delete** method on the queue object.
-
-```csharp
-// Delete the queue.
-messageQueue.Delete();
-```
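-
-If the queue might not exist, the library also offers a conditional delete. A minimal sketch, assuming the **DeleteIfExists** method in this client library:
-
-```csharp
-// Delete the queue only if it exists; the call returns false when there was no queue to delete.
-bool queueDeleted = messageQueue.DeleteIfExists();
-Console.WriteLine("Queue deleted: " + queueDeleted);
-```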
-
-## Next steps
visual-studio Vs Storage Cloud Services Getting Started Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/visual-studio/vs-storage-cloud-services-getting-started-tables.md
- Title: Get started with table storage using Visual Studio (cloud services)
-description: How to get started using Azure Table storage in a cloud service project in Visual Studio after connecting to a storage account using Visual Studio connected services
- Previously updated : 12/02/2016
-# Getting started with Azure table storage and Visual Studio connected services (cloud services projects)
-
-## Overview
---
-This article describes how to get started using Azure table storage in Visual Studio after you have created or referenced an Azure storage account in a cloud services project by using the Visual Studio **Add Connected Services** dialog. The **Add Connected Services** operation installs the appropriate NuGet packages to access Azure storage in your project and adds the connection string for the storage account to your project configuration files.
-
-The Azure Table storage service enables you to store large amounts of structured data. The service is a NoSQL datastore that accepts authenticated calls from inside and outside the Azure cloud. Azure tables are ideal for storing structured, non-relational data.
-
-To get started, you first need to create a table in your storage account. We'll show you how to create an Azure table in code, and also how to perform basic table and entity operations, such as adding, modifying, reading, and deleting table entities. The samples are written in C\# code and use the [Microsoft Azure Storage client library for .NET](/previous-versions/azure/dn261237(v=azure.100)).
-
-**NOTE:** Some of the APIs that perform calls out to Azure storage are asynchronous. See [Asynchronous programming with Async and Await](/previous-versions/hh191443(v=vs.140)) for more information. The code below assumes async programming methods are being used.
-
-* See [Get started with Azure Table storage using .NET](../cosmos-db/tutorial-develop-table-dotnet.md) for more information on programmatically manipulating tables.
-* See [Storage documentation](/azure/storage/) for general information about Azure Storage.
-* See [Cloud Services documentation](/azure/cloud-services/) for general information about Azure cloud services.
-* See [ASP.NET](https://www.asp.net) for more information about programming ASP.NET applications.
-
-## Access tables in code
-To access tables in cloud service projects, you need to add the following items to any C# source file that accesses Azure Table storage.
-
-1. Make sure the namespace declarations at the top of the C# file include these **using** statements.
-
- ```csharp
- using Azure;
- using Azure.Data.Tables;
- using Microsoft.Framework.Configuration;
- using System;
- using System.Collections.Generic;
- using System.Linq;
- using System.Threading.Tasks;
- using LogLevel = Microsoft.Framework.Logging.LogLevel;
- ```
-2. Get your storage connection string, which you use to create a **TableServiceClient** that performs account-level operations like creating and deleting tables.
-
- ```csharp
- // Get the storage connection string from the Azure service configuration.
- string storageConnString = CloudConfigurationManager.GetSetting("<storage-account-name>_AzureStorageConnectionString");
- ```
-
- > [!NOTE]
- > Use all of the above code in front of the code in the following samples.
-
-3. Get a **TableServiceClient** object to reference the table objects in your storage account.
-
- ```csharp
- // Create the table service client.
- TableServiceClient tableServiceClient = new TableServiceClient(storageConnString);
- ```
-
-4. Get a **TableClient** reference object to reference a specific table and entities.
-
- ```csharp
- // Get a reference to a table named "peopleTable".
- TableClient peopleTable = tableServiceClient.GetTableClient("peopleTable");
- ```
-
-## Create a table in code
-To create the Azure table, just add a call to **CreateIfNotExistsAsync** after you get a **TableClient** object, as described in the "Access tables in code" section.
-
-```csharp
-// Create the TableClient if it does not exist.
-await peopleTable.CreateIfNotExistsAsync();
-```
-
-## Add an entity to a table
-To add an entity to a table, create a class that defines the properties of your entity. The following code defines an entity class called **CustomerEntity** that uses the customer's first name as the row key and the last name as the partition key.
-
-```csharp
-public class CustomerEntity : ITableEntity
-{
-    public CustomerEntity(string lastName, string firstName)
-    {
-        this.PartitionKey = lastName;
-        this.RowKey = firstName;
-    }
-
-    public CustomerEntity() { }
-
-    // Required ITableEntity members.
-    public string PartitionKey { get; set; }
-    public string RowKey { get; set; }
-    public DateTimeOffset? Timestamp { get; set; }
-    public ETag ETag { get; set; }
-
-    public string Email { get; set; }
-
-    public string PhoneNumber { get; set; }
-}
-```
-
-Operations that involve entities use the **TableClient** object that you created earlier in "Access tables in code." The following code example creates a **CustomerEntity** object and then calls the **AddEntity** method to insert it into the table.
-
-```csharp
-// Create a new customer entity.
-CustomerEntity customer1 = new CustomerEntity("Harp", "Walter");
-customer1.Email = "Walter@contoso.com";
-customer1.PhoneNumber = "425-555-0101";
-
-// Insert the customer entity into the table.
-peopleTable.AddEntity(customer1);
-```
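-
-**AddEntity** fails if an entity with the same partition key and row key already exists. As a hedged sketch (not part of the original sample), an asynchronous variant with basic conflict handling might look like this:
-
-```csharp
-try
-{
-    // AddEntityAsync is the asynchronous counterpart of AddEntity.
-    await peopleTable.AddEntityAsync(customer1);
-}
-catch (RequestFailedException ex) when (ex.Status == 409)
-{
-    // 409 Conflict: an entity with this partition key and row key already exists.
-    Console.WriteLine("The customer entity already exists.");
-}
-```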
-
-## Insert a batch of entities
-You can insert multiple entities into a table in a single write operation, as long as they share the same partition key. The following code example creates two entity objects ("Jeff Smith" and "Ben Smith"), adds them to a list of **TableTransactionAction** objects named **addEntitiesBatch** by using the **AddRange** method, and then submits the operation by calling **TableClient.SubmitTransactionAsync**.
-
-```csharp
-// Create a list of 2 entities with the same partition key.
-List<CustomerEntity> entityList = new List<CustomerEntity>
-{
-    new CustomerEntity("Smith", "Jeff")
-    {
-        Email = "Jeff@contoso.com",
-        PhoneNumber = "425-555-0104"
-    },
-    new CustomerEntity("Smith", "Ben")
-    {
-        Email = "Ben@contoso.com",
-        PhoneNumber = "425-555-0102"
-    },
-};
-
-// Create the batch.
-List<TableTransactionAction> addEntitiesBatch = new List<TableTransactionAction>();
-
-// Add the entities to be added to the batch.
-addEntitiesBatch.AddRange(entityList.Select(e => new TableTransactionAction(TableTransactionActionType.Add, e)));
-
-// Submit the batch.
-Response<IReadOnlyList<Response>> response = await peopleTable.SubmitTransactionAsync(addEntitiesBatch).ConfigureAwait(false);
-```
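-
-A single transaction applies only to entities that share a partition key, and the table service limits a transaction to 100 operations. As a rough sketch under those assumptions (not part of the original sample), a larger list can be split into smaller chunks:
-
-```csharp
-// Submit entities that share a partition key in chunks of at most 100 operations.
-const int maxBatchSize = 100;
-
-for (int i = 0; i < entityList.Count; i += maxBatchSize)
-{
-    List<TableTransactionAction> chunk = entityList
-        .Skip(i)
-        .Take(maxBatchSize)
-        .Select(e => new TableTransactionAction(TableTransactionActionType.Add, e))
-        .ToList();
-
-    await peopleTable.SubmitTransactionAsync(chunk);
-}
-```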
-
-## Get all of the entities in a partition
-To query a table for all of the entities in a partition, use a **Query** method. The following code example specifies a filter for entities where 'Smith' is the partition key. This example prints the fields of each entity in the query results to the console.
-
-```csharp
-Pageable<CustomerEntity> queryResultsFilter = peopleTable.Query<CustomerEntity>(filter: "PartitionKey eq 'Smith'");
-
-// Print the fields for each customer.
-foreach (CustomerEntity qEntity in queryResultsFilter)
-{
- Console.WriteLine("{0}, {1}\t{2}\t{3}", qEntity.PartitionKey, qEntity.RowKey, qEntity.Email, qEntity.PhoneNumber);
-}
-```
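-
-If you prefer the asynchronous API, **QueryAsync** returns an `AsyncPageable<CustomerEntity>` that you can enumerate with `await foreach`. A minimal sketch:
-
-```csharp
-// Asynchronously enumerate the entities in the 'Smith' partition.
-AsyncPageable<CustomerEntity> asyncResults =
-    peopleTable.QueryAsync<CustomerEntity>(filter: "PartitionKey eq 'Smith'");
-
-await foreach (CustomerEntity qEntity in asyncResults)
-{
-    Console.WriteLine("{0}, {1}\t{2}\t{3}", qEntity.PartitionKey, qEntity.RowKey, qEntity.Email, qEntity.PhoneNumber);
-}
-```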
-
-## Get a single entity
-You can write a query to get a single, specific entity. The following code uses the **GetEntityAsync** method to look up a customer named 'Ben Smith'. The awaited call returns just one entity rather than a collection, and the **Value** property of the response is a **CustomerEntity** object. Specifying both the partition key and the row key in a query is the fastest way to retrieve a single entity from the Table service.
-
-```csharp
-Response<CustomerEntity> singleResult = await peopleTable.GetEntityAsync<CustomerEntity>("Smith", "Ben");
-
-// Print the phone number of the result.
-if (singleResult.Value != null)
-    Console.WriteLine(singleResult.Value.PhoneNumber);
-else
-    Console.WriteLine("The phone number could not be retrieved.");
-```
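-
-**GetEntityAsync** throws a **RequestFailedException** when no entity matches the specified keys. As a sketch (not part of the original sample), a lookup that might miss can be wrapped in a try/catch:
-
-```csharp
-try
-{
-    Response<CustomerEntity> result = await peopleTable.GetEntityAsync<CustomerEntity>("Smith", "Ben");
-    Console.WriteLine(result.Value.PhoneNumber);
-}
-catch (RequestFailedException ex) when (ex.Status == 404)
-{
-    // 404 Not Found: no entity has this partition key and row key.
-    Console.WriteLine("The phone number could not be retrieved.");
-}
-```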
-
-## Delete an entity
-You can delete an entity after you find it. The following code looks for a customer entity named "Ben Smith" and, if the entity is found, deletes it.
-
-```csharp
-Response<CustomerEntity> singleResult = await peopleTable.GetEntityAsync<CustomerEntity>("Smith", "Ben");
-
-CustomerEntity deleteEntity = singleResult.Value;
-
-// Delete the entity given the partition and row key.
-if (deleteEntity != null)
-{
-    await peopleTable.DeleteEntityAsync(deleteEntity.PartitionKey, deleteEntity.RowKey);
-
-    Console.WriteLine("Entity deleted.");
-}
-else
-{
-    Console.WriteLine("Couldn't delete the entity.");
-}
-```
-
-## Next steps
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
Subscribe to the RSS feed and view the latest VPN Gateway feature updates on the
## Next steps
- [Tutorial: Create and manage a VPN Gateway](tutorial-create-gateway-portal.md).
-- [Learn module: Introduction to Azure VPN Gateway](/learn/modules/intro-to-azure-vpn-gateway).
-- [Learn module: Connect your on-premises network to Azure with VPN Gateway](/learn/modules/connect-on-premises-network-with-vpn-gateway/).
+- [Learn module: Introduction to Azure VPN Gateway](/training/modules/intro-to-azure-vpn-gateway).
+- [Learn module: Connect your on-premises network to Azure with VPN Gateway](/training/modules/connect-on-premises-network-with-vpn-gateway/).
- [Subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
vpn-gateway Vpn Gateway Howto Multi Site To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md
Verify the following items:
## Next steps
-Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual machines learning paths](/learn/paths/deploy-a-website-with-azure-virtual-machines/).
+Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual machines learning paths](/training/paths/deploy-a-website-with-azure-virtual-machines/).
web-application-firewall Waf Front Door Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion.md
The following attributes can be added to exclusion lists by name. The values of
* Request body post args name
* RequestBodyJSONArgNames
+>[!NOTE]
+>RequestBodyJSONArgNames is only available on Default Rule Set (DRS) 2.0 or later.
+
You can specify an exact request header, body, cookie, or query string attribute match. Or, you can optionally specify partial matches. The following operators are the supported match criteria:

- **Equals**: This operator is used for an exact match. For example, to select a header named **bearerToken**, use the equals operator with the selector set as **bearerToken**.
You can apply exclusion lists to all rules within the managed rule set, to rules
## Next steps
-After you configure your WAF settings, learn how to view your WAF logs. For more information, see [Front Door diagnostics](../afds/waf-front-door-monitor.md).
+After you configure your WAF settings, learn how to view your WAF logs. For more information, see [Front Door diagnostics](../afds/waf-front-door-monitor.md).
web-application-firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/overview.md
WAF can be deployed with Azure Application Gateway, Azure Front Door, and Azure
- For more information about Web Application Firewall on Application Gateway, see [Web Application Firewall on Azure Application Gateway](./ag/ag-overview.md).
- For more information about Web Application Firewall on Azure Front Door Service, see [Web Application Firewall on Azure Front Door Service](./afds/afds-overview.md).
- For more information about Web Application Firewall on Azure CDN Service, see [Web Application Firewall on Azure CDN Service](./cdn/cdn-overview.md)
-- To learn more about Web Application Firewall, see [Learn module: Introduction to Azure Web Application Firewall](/learn/modules/introduction-azure-web-application-firewall/).
+- To learn more about Web Application Firewall, see [Learn module: Introduction to Azure Web Application Firewall](/training/modules/introduction-azure-web-application-firewall/).