Updates from: 01/12/2021 04:04:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/partner-gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-gallery.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 06/08/2020
+ms.date: 01/11/2021
ms.author: mimart ms.subservice: B2C ---
@@ -50,7 +50,7 @@ Microsoft partners with the following ISVs for MFA and Passwordless authenticati
| ![Screenshot of a nevis logo](./media/partner-gallery/nevis-logo.png) | [Nevis](./partner-nevis.md) enables passwordless authentication and provides a mobile-first, fully branded end-user experience with Nevis Access app for strong customer authentication and to comply with PSD2 transaction requirements. | | ![Screenshot of a trusona logo](./media/partner-gallery/trusona-logo.png) | [Trusona](./partner-trusona.md) integration helps you sign in securely and enables passwordless authentication, MFA, and digital license scanning. | | ![Screenshot of a twilio logo.](./media/partner-gallery/twilio-logo.png) | [Twilio Verify app](./partner-twilio.md) provides multiple solutions to enable MFA through SMS one-time password (OTP), time-based one-time password (TOTP), and push notifications, and to comply with SCA requirements for PSD2. |
-| ![Screenshot of a typingDNA logo](./media/partner-gallery/typingdna-logo.png) | [TypingDNA](./partner-twilio.md) enables strong customer authentication by analyzing a user’s typing pattern. It helps companies enable a silent MFA and comply with SCA requirements for PSD2. |
+| ![Screenshot of a typingDNA logo](./media/partner-gallery/typingdna-logo.png) | [TypingDNA](./partner-typingdna.md) enables strong customer authentication by analyzing a user’s typing pattern. It helps companies enable a silent MFA and comply with SCA requirements for PSD2. |
| ![Screenshot of a whoiam logo](./media/partner-gallery/whoiam-logo.png) | [WhoIAM](./partner-whoiam.md) is a Branded Identity Management System (BRIMS) application that enables organizations to verify their user base by voice, SMS, and email. | ## Role-based access control
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/user-provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: app-provisioning ms.topic: overview ms.workload: identity
-ms.date: 11/25/2019
+ms.date: 01/11/2021
ms.author: kenwith ms.reviewer: arvinh, celested ---
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-authentication-protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-authentication-protocols.md
@@ -13,6 +13,7 @@ ms.date: 12/18/2019
ms.author: ryanwi ms.custom: aaddev ms.reviewer: hirsin
+ROBOTS: NOINDEX
--- # Microsoft identity platform authentication protocols
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-how-to-integrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-how-to-integrate.md
@@ -14,6 +14,7 @@ ms.date: 10/01/2020
ms.author: ryanwi ms.reviewer: jmprieur ms.custom: aaddev, seoapril2019
+ROBOTS: NOINDEX
--- # Integrating with Microsoft identity platform
@@ -78,7 +79,7 @@ Integration with Microsoft identity platform comes with benefits that do not req
**Industry standard protocols.** Microsoft is committed to supporting industry standards. The Microsoft identity platform supports the industry-standard OAuth 2.0 and OpenID Connect 1.0 protocols. Learn more about [Microsoft identity platform authentication protocols](active-directory-v2-protocols.md).
-**Open source libraries.** Microsoft provides fully supported open source libraries for popular languages and platforms to speed development. The source code is licensed under Apache 2.0, and you are free to fork and contribute back to the projects. Learn more about [Microsoft Authentication Library (MSAL)](reference-v2-libraries.md).
+**Open source libraries.** Microsoft provides fully supported open source libraries for popular languages and platforms to speed development. The source code is licensed under Apache 2.0, and you are free to fork and contribute back to the projects. Learn more about the [Microsoft Authentication Library (MSAL)](reference-v2-libraries.md).
### Worldwide presence and high availability
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-registration-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-v2-registration-portal.md
@@ -14,6 +14,7 @@ ms.date: 08/13/2019
ms.author: ryanwi ms.reviewer: lenalepa ms.custom: aaddev
+ROBOTS: NOINDEX
--- # App registration reference
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/api-find-an-api-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/api-find-an-api-how-to.md
@@ -11,6 +11,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 06/28/2019 ms.author: ryanwi
+ROBOTS: NOINDEX
--- # How to find a specific API needed for a custom-developed application
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/consent-framework-links https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/consent-framework-links.md
@@ -12,7 +12,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 09/11/2018 ms.author: ryanwi-
+ROBOTS: NOINDEX
--- # How application consent works
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/delegated-and-app-perms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/delegated-and-app-perms.md
@@ -12,7 +12,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 06/28/2019 ms.author: ryanwi-
+ROBOTS: NOINDEX
--- # How to recognize differences between delegated and application permissions
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/identity-platform-integration-checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/identity-platform-integration-checklist.md
@@ -67,7 +67,7 @@ Use the following checklist to ensure that your application is effectively integ
![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Don't program directly against protocols such as OAuth 2.0 and Open ID. Instead, leverage the [Microsoft Authentication Library (MSAL)](msal-overview.md). The MSAL libraries securely wrap security protocols in an easy-to-use library, and you get built-in support for [Conditional Access](../conditional-access/overview.md) scenarios, device-wide [single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md), and built-in token caching support. For more info, see the list of Microsoft supported [client libraries](reference-v2-libraries.md#microsoft-supported-client-libraries) and [middleware libraries](reference-v2-libraries.md#microsoft-supported-server-middleware-libraries) and the list of [compatible third-party client libraries](reference-v2-libraries.md#compatible-client-libraries).<br/><br/>If you must hand code for the authentication protocols, you should follow a methodology such as [Microsoft SDL](https://www.microsoft.com/sdl/default.aspx). Pay close attention to the security considerations in the standards specifications for each protocol.
-![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Migrate existing apps from [Azure Active Directory Authentication Library (ADAL)](../azuread-dev/active-directory-authentication-libraries.md) to [Microsoft Authentication Library](msal-overview.md). MSAL is Microsoft’s latest identity platform solution and is preferred to ADAL. It is available on .NET, JavaScript, Android, iOS, macOS and is also in public preview for Python and Java. Read more about migrating [ADAL.NET](msal-net-migration.md), [ADAL.js](msal-compare-msal-js-and-adal-js.md), and [ADAL.NET and iOS broker](msal-net-migration-ios-broker.md) apps.
+![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) Migrate existing apps from [Azure Active Directory Authentication Library (ADAL)](../azuread-dev/active-directory-authentication-libraries.md) to the [Microsoft Authentication Library](msal-overview.md). MSAL is Microsoft’s latest identity platform solution and is preferred to ADAL. It is available on .NET, JavaScript, Android, iOS, macOS and is also in public preview for Python and Java. Read more about migrating [ADAL.NET](msal-net-migration.md), [ADAL.js](msal-compare-msal-js-and-adal-js.md), and [ADAL.NET and iOS broker](msal-net-migration-ios-broker.md) apps.
![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a “broker redirect URI” configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/migrate-adal-msal-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/migrate-adal-msal-java.md
@@ -22,7 +22,7 @@ ms.custom: aaddev, devx-track-java
This article highlights changes you need to make to migrate an app that uses the Azure Active Directory Authentication Library (ADAL) to use the Microsoft Authentication Library (MSAL).
-Both Microsoft Authentication Library for Java (MSAL4J) and Azure AD Authentication Library for Java (ADAL4J) are used to authenticate Azure AD entities and request tokens from Azure AD. Until now, most developers have worked with Azure AD for developers platform (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using Azure AD Authentication Library (ADAL).
+Both the Microsoft Authentication Library for Java (MSAL4J) and Azure AD Authentication Library for Java (ADAL4J) are used to authenticate Azure AD entities and request tokens from Azure AD. Until now, most developers have worked with Azure AD for developers platform (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using Azure AD Authentication Library (ADAL).
MSAL offers the following benefits:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/migrate-spa-implicit-to-auth-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/migrate-spa-implicit-to-auth-code.md
@@ -16,7 +16,7 @@ ms.custom: aaddev, devx-track-js
# Migrate a JavaScript single-page app from implicit grant to auth code flow
-Microsoft Authentication Library for JavaScript (MSAL.js) v2.0 brings support for the authorization code flow with PKCE and CORS to single-page applications on the Microsoft identity platform. Follow the steps in the sections below to migrate your MSAL.js 1.x application using the implicit grant to MSAL.js 2.0+ (hereafter *2.x*) and the auth code flow.
+The Microsoft Authentication Library for JavaScript (MSAL.js) v2.0 brings support for the authorization code flow with PKCE and CORS to single-page applications on the Microsoft identity platform. Follow the steps in the sections below to migrate your MSAL.js 1.x application using the implicit grant to MSAL.js 2.0+ (hereafter *2.x*) and the auth code flow.
MSAL.js 2.x improves on MSAL.js 1.x by supporting the authorization code flow in the browser instead of the implicit grant flow. MSAL.js 2.x does **NOT** support the implicit flow.
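For context on the migration described in the entry above, here is a minimal TypeScript sketch of an MSAL.js 2.x (`@azure/msal-browser`) single-page app using the auth code flow with PKCE. The client ID, redirect URI, and scopes are placeholders, not values from the article.

```typescript
import { PublicClientApplication } from "@azure/msal-browser";

// MSAL.js 2.x exposes PublicClientApplication in place of 1.x's UserAgentApplication;
// token requests go through the authorization code flow with PKCE instead of the implicit grant.
const pca = new PublicClientApplication({
  auth: {
    clientId: "11111111-2222-3333-4444-555555555555", // hypothetical SPA registration
    redirectUri: "http://localhost:3000",
  },
});

// On page load, process a redirect response if one is present.
pca.handleRedirectPromise().then((response) => {
  if (response) {
    console.log("Signed in as", response.account?.username);
  }
});

// Trigger sign-in; the library exchanges the returned auth code for tokens via CORS.
function signIn(): void {
  pca.loginRedirect({ scopes: ["User.Read"] });
}
```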
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/mobile-sso-support-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/mobile-sso-support-overview.md
@@ -33,7 +33,7 @@ In addition, enabling single sign-on in your app unlocks new authentication mech
We recommend the following to enable your app to take advantage of single sign-on.
-### Use Microsoft Authentication Library (MSAL)
+### Use the Microsoft Authentication Library (MSAL)
The best choice for implementing single sign-on in your application is to use [the Microsoft Authentication Library (MSAL)](msal-overview.md). By using MSAL you can add authentication to your app with minimal code and API calls, get the full features of the [Microsoft identity platform](./index.yml), and let Microsoft handle the maintenance of a secure authentication solution. By default, MSAL adds SSO support for your application. In addition, using MSAL is a requirement if you also plan to implement app protection policies.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-android-b2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-android-b2c.md
@@ -19,7 +19,7 @@ ms.custom: aaddev
# Use MSAL for Android with B2C
-Microsoft Authentication Library (MSAL) enables application developers to authenticate users with social and local identities by using [Azure Active Directory B2C (Azure AD B2C)](../../active-directory-b2c/index.yml). Azure AD B2C is an identity management service. Use it to customize and control how customers sign up, sign in, and manage their profiles when they use your applications.
+The Microsoft Authentication Library (MSAL) enables application developers to authenticate users with social and local identities by using [Azure Active Directory B2C (Azure AD B2C)](../../active-directory-b2c/index.yml). Azure AD B2C is an identity management service. Use it to customize and control how customers sign up, sign in, and manage their profiles when they use your applications.
## Configure known authorities and redirect URI
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-b2c-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-b2c-overview.md
@@ -1,7 +1,7 @@
--- title: Use MSAL.js with Azure AD B2C titleSuffix: Microsoft identity platform
-description: Microsoft Authentication Library for JavaScript (MSAL.js) enables applications to work with Azure AD B2C and acquire tokens to call secured web APIs. These web APIs can be Microsoft Graph, other Microsoft APIs, web APIs from others, or your own web API.
+description: The Microsoft Authentication Library for JavaScript (MSAL.js) enables applications to work with Azure AD B2C and acquire tokens to call secured web APIs. These web APIs can be Microsoft Graph, other Microsoft APIs, web APIs from others, or your own web API.
services: active-directory author: negoe manager: CelesteDG
@@ -18,9 +18,9 @@ ms.custom: aaddev devx-track-js
# authentication and authorization in my organization's web apps and web APIs that my customers log in to and use. ---
-# Use Microsoft Authentication Library for JavaScript to work with Azure AD B2C
+# Use the Microsoft Authentication Library for JavaScript to work with Azure AD B2C
-[Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) enables JavaScript developers to authenticate users with social and local identities using [Azure Active Directory B2C](../../active-directory-b2c/overview.md) (Azure AD B2C).
+The [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) enables JavaScript developers to authenticate users with social and local identities using [Azure Active Directory B2C](../../active-directory-b2c/overview.md) (Azure AD B2C).
By using Azure AD B2C as an identity management service, you can customize and control how your customers sign up, sign in, and manage their profiles when they use your applications. Azure AD B2C also enables you to brand and customize the UI that your application displays during the authentication process.
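As a companion to the entry above, the following sketch shows MSAL.js configured against an Azure AD B2C authority. The tenant name (`contoso`), user flow (`B2C_1_susi`), client ID, and API scope are hypothetical placeholders.

```typescript
import { PublicClientApplication } from "@azure/msal-browser";

const b2cApp = new PublicClientApplication({
  auth: {
    clientId: "22222222-3333-4444-5555-666666666666", // hypothetical B2C app registration
    authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_susi",
    knownAuthorities: ["contoso.b2clogin.com"], // B2C authorities must be listed as known
    redirectUri: "http://localhost:3000",
  },
});

// Sign in with the configured user flow; the scope below is a placeholder API scope.
b2cApp.loginPopup({ scopes: ["https://contoso.onmicrosoft.com/api/demo.read"] });
```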
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-client-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-client-applications.md
@@ -18,7 +18,7 @@ ms.custom: aaddev
--- # Public client and confidential client applications
-Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential clients. The two client types are distinguished by their ability to authenticate securely with the authorization server and maintain the confidentiality of their client credentials. In contrast, Azure AD Authentication Library (ADAL) uses what's called *authentication context* (which is a connection to Azure AD).
+The Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential clients. The two client types are distinguished by their ability to authenticate securely with the authorization server and maintain the confidentiality of their client credentials. In contrast, Azure AD Authentication Library (ADAL) uses what's called *authentication context* (which is a connection to Azure AD).
- **Confidential client applications** are apps that run on servers (web apps, web API apps, or even service/daemon apps). They're considered difficult to access, and for that reason capable of keeping an application secret. Confidential clients can hold configuration-time secrets. Each instance of the client has a distinct configuration (including client ID and client secret). These values are difficult for end users to extract. A web app is the most common confidential client. The client ID is exposed through the web browser, but the secret is passed only in the back channel and never directly exposed.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-compare-msal-js-and-adal-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
@@ -17,9 +17,9 @@ ms.custom: aaddev
#Customer intent: As an application developer, I want to learn about the differences between the ADAL.js and MSAL.js libraries so I can migrate my applications to MSAL.js. ---
-# Differences between MSAL JS and ADAL JS
+# Differences between MSAL.js and ADAL.js
-Both Microsoft Authentication Library for JavaScript (MSAL.js) and Azure AD Authentication Library for JavaScript (ADAL.js) are used to authenticate Azure AD entities and request tokens from Azure AD. Up until now, most developers have worked with Azure AD for developers (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using ADAL. Now, using MSAL.js, you can authenticate a broader set of Microsoft identities (Azure AD identities and Microsoft accounts, and social and local accounts through Azure AD B2C) through Microsoft identity platform (v2.0).
+Both the Microsoft Authentication Library for JavaScript (MSAL.js) and Azure AD Authentication Library for JavaScript (ADAL.js) are used to authenticate Azure AD entities and request tokens from Azure AD. Up until now, most developers have worked with Azure AD for developers (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using ADAL. Now, using MSAL.js, you can authenticate a broader set of Microsoft identities (Azure AD identities and Microsoft accounts, and social and local accounts through Azure AD B2C) through Microsoft identity platform (v2.0).
This article describes how to choose between the Microsoft Authentication Library for JavaScript (MSAL.js) and Azure AD Authentication Library for JavaScript (ADAL.js) and compares the two libraries.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-differences-ios-macos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-differences-ios-macos.md
@@ -1,7 +1,7 @@
--- title: MSAL for iOS & macOS differences | Azure titleSuffix: Microsoft identity platform
-description: Describes Microsoft Authentication Library (MSAL) usage differences between iOS and macOS.
+description: Describes the Microsoft Authentication Library (MSAL) usage differences between iOS and macOS.
services: active-directory author: mmacy manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-java-adfs-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-java-adfs-support.md
@@ -1,7 +1,7 @@
--- title: AD FS support (MSAL for Java) titleSuffix: Microsoft identity platform
-description: Learn about Active Directory Federation Services (AD FS) support in Microsoft Authentication Library for Java (MSAL4j).
+description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Java (MSAL4j).
services: active-directory author: sangonzal manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-js-avoid-page-reloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-avoid-page-reloads.md
@@ -18,7 +18,7 @@ ms.custom: aaddev
--- # Avoid page reloads when acquiring and renewing tokens silently using MSAL.js
-Microsoft Authentication Library for JavaScript (MSAL.js) uses hidden `iframe` elements to acquire and renew tokens silently in the background. Azure AD returns the token back to the registered redirect_uri specified in the token request(by default this is the app's root page). Since the response is a 302, it results in the HTML corresponding to the `redirect_uri` getting loaded in the `iframe`. Usually the app's `redirect_uri` is the root page and this causes it to reload.
+The Microsoft Authentication Library for JavaScript (MSAL.js) uses hidden `iframe` elements to acquire and renew tokens silently in the background. Azure AD returns the token back to the registered redirect_uri specified in the token request(by default this is the app's root page). Since the response is a 302, it results in the HTML corresponding to the `redirect_uri` getting loaded in the `iframe`. Usually the app's `redirect_uri` is the root page and this causes it to reload.
In other cases, if navigating to the app's root page requires authentication, it might lead to nested `iframe` elements or `X-Frame-Options: deny` error.
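One common way to avoid the reload described above is to point the silent token request at a lightweight blank page instead of the app's root. A hedged sketch follows; the client ID, URLs, and the `blank.html` page are assumptions for illustration, and the blank page would also need to be registered as a redirect URI.

```typescript
import { PublicClientApplication } from "@azure/msal-browser";

const app = new PublicClientApplication({
  auth: {
    clientId: "44444444-5555-6666-7777-888888888888", // hypothetical registration
    redirectUri: "http://localhost:3000",
  },
});

async function getTokenWithoutReload() {
  const account = app.getAllAccounts()[0];
  // Direct the hidden-iframe response to a page with no app logic so the 302
  // does not reload the full application.
  return app.acquireTokenSilent({
    scopes: ["User.Read"],
    account,
    redirectUri: "http://localhost:3000/blank.html",
  });
}
```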
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-js-initializing-client-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-initializing-client-applications.md
@@ -20,7 +20,7 @@ ms.custom: aaddev, devx-track-js
# Initialize client applications using MSAL.js
-This article describes initializing Microsoft Authentication Library for JavaScript (MSAL.js) with an instance of a user-agent application.
+This article describes initializing the Microsoft Authentication Library for JavaScript (MSAL.js) with an instance of a user-agent application.
The user-agent application is a form of public client application in which the client code is executed in a user-agent such as a web browser. Such clients do not store secrets because the browser context is openly accessible.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-js-pass-custom-state-authentication-request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-pass-custom-state-authentication-request.md
@@ -19,7 +19,7 @@ ms.custom: aaddev
# Pass custom state in authentication requests using MSAL.js
-The *state* parameter, as defined by OAuth 2.0, is included in an authentication request and is also returned in the token response to prevent cross-site request forgery attacks. By default, Microsoft Authentication Library for JavaScript (MSAL.js) passes a randomly generated unique *state* parameter value in the authentication requests.
+The *state* parameter, as defined by OAuth 2.0, is included in an authentication request and is also returned in the token response to prevent cross-site request forgery attacks. By default, the Microsoft Authentication Library for JavaScript (MSAL.js) passes a randomly generated unique *state* parameter value in the authentication requests.
The state parameter can also be used to encode information of the app's state before redirect. You can pass the user's state in the app, such as the page or view they were on, as input to this parameter. The MSAL.js library allows you to pass your custom state as state parameter in the `Request` object:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-js-use-ie-browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-use-ie-browser.md
@@ -19,7 +19,7 @@ ms.custom: aaddev
# Known issues on Internet Explorer browsers (MSAL.js)
-Microsoft Authentication Library for JavaScript (MSAL.js) is generated for [JavaScript ES5](https://fr.wikipedia.org/wiki/ECMAScript#ECMAScript_Edition_5_.28ES5.29) so that it can run in Internet Explorer. There are, however, a few things to know.
+The Microsoft Authentication Library for JavaScript (MSAL.js) is generated for [JavaScript ES5](https://fr.wikipedia.org/wiki/ECMAScript#ECMAScript_Edition_5_.28ES5.29) so that it can run in Internet Explorer. There are, however, a few things to know.
## Run an app in Internet Explorer If you intend to use MSAL.js in applications that can run in Internet Explorer, you will need to add a reference to a promise polyfill before referencing the MSAL.js script.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-migration.md
@@ -1,7 +1,7 @@
---
-title: Migrate to Microsoft Authentication Library (MSAL)
+title: Migrate to the Microsoft Authentication Library (MSAL)
titleSuffix: Microsoft identity platform
-description: Learn about the differences between Microsoft Authentication Library (MSAL) and Azure AD Authentication Library (ADAL) and how to migrate to MSAL.
+description: Learn about the differences between the Microsoft Authentication Library (MSAL) and Azure AD Authentication Library (ADAL) and how to migrate to MSAL.
services: active-directory author: jmprieur manager: CelesteDG
@@ -16,7 +16,7 @@ ms.reviewer: saeeda
ms.custom: aaddev # Customer intent: As an application developer, I want to learn about the differences between the ADAL and MSAL libraries so I can migrate my applications to MSAL. ---
-# Migrate applications to Microsoft Authentication Library (MSAL)
+# Migrate applications to the Microsoft Authentication Library (MSAL)
Many developers have built and deployed applications using the Azure Active Directory Authentication Library (ADAL). We now recommend using the Microsoft Authentication Library (MSAL) for authentication and authorization of Azure AD entities.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-national-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-national-cloud.md
@@ -1,7 +1,7 @@
--- title: Use MSAL in a national cloud app | Azure titleSuffix: Microsoft identity platform
-description: Microsoft Authentication Library (MSAL) enables application developers to acquire tokens in order to call secured web APIs. These web APIs can be Microsoft Graph, other Microsoft APIs, partner web APIs, or your own web API. MSAL supports multiple application architectures and platforms.
+description: The Microsoft Authentication Library (MSAL) enables application developers to acquire tokens in order to call secured web APIs. These web APIs can be Microsoft Graph, other Microsoft APIs, partner web APIs, or your own web API. MSAL supports multiple application architectures and platforms.
services: active-directory author: negoe manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-acquire-token-silently https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-acquire-token-silently.md
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Get a token from the token cache using MSAL.NET
-When you acquire an access token using Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
+When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
The recommended pattern is to call the `AcquireTokenSilent` method first. If `AcquireTokenSilent` fails, then acquire a token using other methods.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-adfs-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-adfs-support.md
@@ -1,7 +1,7 @@
--- title: AD FS support in MSAL.NET | Azure titleSuffix: Microsoft identity platform
-description: Learn about Active Directory Federation Services (AD FS) support in Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: mmacy manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-clear-token-cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-clear-token-cache.md
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Clear the token cache using MSAL.NET
-When you [acquire an access token](msal-acquire-cache-tokens.md) using Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache.
+When you [acquire an access token](msal-acquire-cache-tokens.md) using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache.
Clearing the cache is achieved by removing the accounts from the cache. This does not remove the session cookie which is in the browser, though. The following example instantiates a public client application, gets the accounts for the application, and removes the accounts.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-client-assertions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-client-assertions.md
@@ -1,7 +1,7 @@
--- title: Client assertions (MSAL.NET) | Azure titleSuffix: Microsoft identity platform
-description: Learn about signed client assertions support for confidential client applications in Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about signed client assertions support for confidential client applications in the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: jmprieur manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-initializing-client-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-initializing-client-applications.md
@@ -18,7 +18,7 @@ ms.custom: "devx-track-csharp, aaddev"
--- # Initialize client applications using MSAL.NET
-This article describes initializing public client and confidential client applications using Microsoft Authentication Library for .NET (MSAL.NET). To learn more about the client application types and application configuration options, read the [overview](msal-client-applications.md).
+This article describes initializing public client and confidential client applications using the Microsoft Authentication Library for .NET (MSAL.NET). To learn more about the client application types and application configuration options, read the [overview](msal-client-applications.md).
With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application builders: `PublicClientApplicationBuilder` and `ConfidentialClientApplicationBuilder`. They offer a powerful mechanism to configure the application either from the code, or from a configuration file, or even by mixing both approaches.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-instantiate-confidential-client-config-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-instantiate-confidential-client-config-options.md
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Instantiate a confidential client application with configuration options using MSAL.NET
-This article describes how to instantiate a [confidential client application](msal-client-applications.md) using Microsoft Authentication Library for .NET (MSAL.NET). The application is instantiated with configuration options defined in a settings file.
+This article describes how to instantiate a [confidential client application](msal-client-applications.md) using the Microsoft Authentication Library for .NET (MSAL.NET). The application is instantiated with configuration options defined in a settings file.
Before initializing an application, you first need to [register](quickstart-register-app.md) it so that your app can be integrated with the Microsoft identity platform. After registration, you may need the following information (which can be found in the Azure portal):
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-instantiate-public-client-config-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-instantiate-public-client-config-options.md
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Instantiate a public client application with configuration options using MSAL.NET
-This article describes how to instantiate a [public client application](msal-client-applications.md) using Microsoft Authentication Library for .NET (MSAL.NET). The application is instantiated with configuration options defined in a settings file.
+This article describes how to instantiate a [public client application](msal-client-applications.md) using the Microsoft Authentication Library for .NET (MSAL.NET). The application is instantiated with configuration options defined in a settings file.
Before initializing an application, you first need to [register](quickstart-register-app.md) it so that your app can be integrated with the Microsoft identity platform. After registration, you may need the following information (which can be found in the Azure portal):
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration.md
@@ -1,7 +1,7 @@
--- title: Migrating to MSAL.NET titleSuffix: Microsoft identity platform
-description: Learn about the differences between Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) and how to migrate to MSAL.NET.
+description: Learn about the differences between the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) and how to migrate to MSAL.NET.
services: active-directory author: jmprieur manager: CelesteDG
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Migrating applications to MSAL.NET
-Both Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) are used to authenticate Azure AD entities and request tokens from Azure AD. Up until now, most developers have worked with Azure AD for developers platform (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using Azure AD Authentication Library (ADAL). Using MSAL:
+Both the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) are used to authenticate Azure AD entities and request tokens from Azure AD. Up until now, most developers have worked with Azure AD for developers platform (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using Azure AD Authentication Library (ADAL). Using MSAL:
- you can authenticate a broader set of Microsoft identities (Azure AD identities and Microsoft accounts, and social and local accounts through Azure AD B2C) as it uses the Microsoft identity platform endpoint, - your users will get the best single-sign-on experience.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-provide-httpclient https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-provide-httpclient.md
@@ -1,7 +1,7 @@
--- title: Provide an HttpClient & proxy (MSAL.NET) | Azure titleSuffix: Microsoft identity platform
-description: Learn about providing your own HttpClient and proxy to connect to Azure AD using Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about providing your own HttpClient and proxy to connect to Azure AD using the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: jmprieur manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-system-browser-android-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-system-browser-android-considerations.md
@@ -1,7 +1,7 @@
--- title: Xamarin Android system browser considerations (MSAL.NET) | Azure titleSuffix: Microsoft identity platform
-description: Learn about considerations for using system browsers on Xamarin Android with Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about considerations for using system browsers on Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: mmacy manager: CelesteDG
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Xamarin Android system browser considerations for using MSAL.NET
-This article discusses what you should consider when you use the system browser on Xamarin Android with Microsoft Authentication Library for .NET (MSAL.NET).
+This article discusses what you should consider when you use the system browser on Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET).
Starting with MSAL.NET 2.4.0 Preview, MSAL.NET supports browsers other than Chrome. It no longer requires Chrome be installed on the Android device for authentication.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-token-cache-serialization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-token-cache-serialization.md
@@ -1,7 +1,7 @@
--- title: Token cache serialization (MSAL.NET) | Azure titleSuffix: Microsoft identity platform
-description: Learn about serialization and customer serialization of the token cache using Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about serialization and customer serialization of the token cache using the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: jmprieur manager: CelesteDG
@@ -18,7 +18,7 @@ ms.custom: "devx-track-csharp, aaddev"
--- # Token cache serialization in MSAL.NET
-After a [token is acquired](msal-acquire-cache-tokens.md), it is cached by Microsoft Authentication Library (MSAL). Application code should try to get a token from the cache before acquiring a token by another method. This article discusses default and custom serialization of the token cache in MSAL.NET.
+After a [token is acquired](msal-acquire-cache-tokens.md), it is cached by the Microsoft Authentication Library (MSAL). Application code should try to get a token from the cache before acquiring a token by another method. This article discusses default and custom serialization of the token cache in MSAL.NET.
This article is for MSAL.NET 3.x. If you're interested in MSAL.NET 2.x, see [Token cache serialization in MSAL.NET 2.x](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Token-cache-serialization-2x).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-use-brokers-with-xamarin-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md
@@ -1,7 +1,7 @@
--- title: Use brokers with Xamarin iOS & Android | Azure titleSuffix: Microsoft identity platform
-description: Learn how to setup Xamarin iOS applications that can use Microsoft Authenticator and Microsoft Authentication Library for .NET (MSAL.NET). Also learn how to migrate from Azure AD Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn how to setup Xamarin iOS applications that can use the Microsoft Authenticator and the Microsoft Authentication Library for .NET (MSAL.NET). Also learn how to migrate from Azure AD Authentication Library for .NET (ADAL.NET) to the Microsoft Authentication Library for .NET (MSAL.NET).
author: jmprieur manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-uwp-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-uwp-considerations.md
@@ -1,7 +1,7 @@
--- title: UWP considerations (MSAL.NET) | Azure titleSuffix: Microsoft identity platform
-description: Learn about considerations for using Universal Windows Platform (UWP) with Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about considerations for using Universal Windows Platform (UWP) with the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: mmacy manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-xamarin-android-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-xamarin-android-considerations.md
@@ -1,7 +1,7 @@
--- title: Xamarin Android code configuration and troubleshooting (MSAL.NET) | Azure titleSuffix: Microsoft identity platform
-description: Learn about considerations for using Xamarin Android with Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about considerations for using Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: jmprieur manager: CelesteDG
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Configuration requirements and troubleshooting tips for Xamarin Android with MSAL.NET
-There are several configuration changes you're required to make in your code when using Xamarin Android with Microsoft Authentication Library for .NET (MSAL.NET). The following sections describe the required modifications, followed by a [Troubleshooting](#troubleshooting) section to help you avoid some of the most common issues.
+There are several configuration changes you're required to make in your code when using Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET). The following sections describe the required modifications, followed by a [Troubleshooting](#troubleshooting) section to help you avoid some of the most common issues.
## Set the parent activity
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-xamarin-ios-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-xamarin-ios-considerations.md
@@ -1,7 +1,7 @@
--- title: Xamarin iOS considerations (MSAL.NET) | Azure titleSuffix: Microsoft identity platform
-description: Learn about considerations for using Xamarin iOS with Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about considerations for using Xamarin iOS with the Microsoft Authentication Library for .NET (MSAL.NET).
services: active-directory author: jmprieur manager: CelesteDG
@@ -19,7 +19,7 @@ ms.custom: "devx-track-csharp, aaddev"
# Considerations for using Xamarin iOS with MSAL.NET
-When you use Microsoft Authentication Library for .NET (MSAL.NET) on Xamarin iOS, you should:
+When you use the Microsoft Authentication Library for .NET (MSAL.NET) on Xamarin iOS, you should:
- Override and implement the `OpenUrl` function in `AppDelegate`. - Enable keychain groups.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-overview.md
@@ -1,7 +1,7 @@
--- title: Learn about MSAL | Azure titleSuffix: Microsoft identity platform
-description: Microsoft Authentication Library (MSAL) enables application developers to acquire tokens in order to call secured web APIs. These web APIs can be the Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports multiple application architectures and platforms.
+description: The Microsoft Authentication Library (MSAL) enables application developers to acquire tokens in order to call secured web APIs. These web APIs can be the Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports multiple application architectures and platforms.
services: active-directory author: mmacy manager: CelesteDG
@@ -17,8 +17,8 @@ ms.custom: aaddev, identityplatformtop40
#Customer intent: As an application developer, I want to learn about the Microsoft Authentication Library so I can decide if this platform meets my application development needs and requirements. ---
-# Overview of Microsoft Authentication Library (MSAL)
-Microsoft Authentication Library (MSAL) enables developers to acquire [tokens](developer-glossary.md#security-token) from the Microsoft identity platform endpoint in order to authenticate users and access secured web APIs. It can be used to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS.
+# Overview of the Microsoft Authentication Library (MSAL)
+The Microsoft Authentication Library (MSAL) enables developers to acquire [tokens](developer-glossary.md#security-token) from the Microsoft identity platform endpoint in order to authenticate users and access secured web APIs. It can be used to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS.
MSAL gives you many ways to get tokens, with a consistent API for a number of platforms. Using MSAL provides the following benefits:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-python-adfs-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-python-adfs-support.md
@@ -1,7 +1,7 @@
--- title: Azure AD FS support (MSAL Python) titleSuffix: Microsoft identity platform
-description: Learn about Active Directory Federation Services (AD FS) support in Microsoft Authentication Library for Python
+description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Python
services: active-directory author: abhidnya13 manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/perms-for-given-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/perms-for-given-api.md
@@ -12,7 +12,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 07/15/2019 ms.author: ryanwi-
+ROBOTS: NOINDEX
--- # How to select permissions for a given API
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-nodejs-webapp-msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
@@ -3,15 +3,15 @@ title: "Quickstart: Add Authentication to a Node web app with MSAL Node | Azure"
titleSuffix: Microsoft identity platform description: In this quickstart, you learn how to implement authentication with a Node.js web app and the Microsoft Authentication Library (MSAL) for Node.js. services: active-directory
-author: amikuma
-manager: saeeda
+author: mmacy
+manager: celested
ms.service: active-directory ms.subservice: develop ms.topic: quickstart ms.workload: identity ms.date: 10/22/2020
-ms.author: amikuma
+ms.author: marsma
ms.custom: aaddev, scenarios:getting-started, languages:js, devx-track-js # Customer intent: As an application developer, I want to know how to set up authentication in a web application built using Node.js and MSAL Node. ---
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/redirect-uris-ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/redirect-uris-ios.md
@@ -1,7 +1,7 @@
--- title: Use redirect URIs with MSAL (iOS/macOS) | Azure titleSuffix: Microsoft identity platform
-description: Learn about the differences between Microsoft Authentication Library for ObjectiveC (MSAL for iOS and macOS) and Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate between them.
+description: Learn about the differences between the Microsoft Authentication Library for ObjectiveC (MSAL for iOS and macOS) and Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate between them.
services: active-directory author: mmacy manager: CelesteDG
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-v2-libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-v2-libraries.md
@@ -76,7 +76,7 @@ In term of supported operating systems vs languages, the mapping is the followin
| Swift <br> Objective-C | | | [MSAL for iOS and macOS](msal-overview.md) | [MSAL for iOS and macOS](msal-overview.md) | | | ![Java](media/sample-v2-code/logo_java.png) Java | msal4j | msal4j | msal4j | | MSAL Android | | ![Python](media/sample-v2-code/logo_python.png) Python | MSAL Python | MSAL Python | MSAL Python |
-| ![Node.Js](media/sample-v2-code/logo_nodejs.png) Node.JS | Passport.node | Passport.node | Passport.node |
+| ![Node.js](media/sample-v2-code/logo_nodejs.png) Node.js | Passport.node | Passport.node | Passport.node |
See also [Scenarios by supported platforms and languages](authentication-flows-app-scenarios.md#scenarios-and-supported-platforms-and-languages)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-change-token-lifetime-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-change-token-lifetime-how-to.md
@@ -12,6 +12,7 @@ ms.topic: conceptual
ms.date: 10/23/2020 ms.author: ryanwi ms.custom: aaddev, seoapril2019
+ROBOTS: NOINDEX
--- # How to change the token lifetime defaults for a custom-developed application
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-how-to.md
@@ -13,6 +13,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 05/07/2020 ms.author: ryanwi
+ROBOTS: NOINDEX
--- # How to discover endpoints
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-multi-tenant-application-add-to-gallery-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-multi-tenant-application-add-to-gallery-how-to.md
@@ -14,7 +14,7 @@ ms.topic: conceptual
ms.date: 09/11/2018 ms.author: ryanwi ms.reviewer: jeedes-
+ROBOTS: NOINDEX
--- # Add a multitenant application to the Azure AD application gallery
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-specific-application-property-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-specific-application-property-how-to.md
@@ -12,7 +12,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 06/28/2019 ms.author: ryanwi-
+ROBOTS: NOINDEX
--- # Azure portal registration fields for custom-developed apps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-sso-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-sso-how-to.md
@@ -12,7 +12,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 07/15/2019 ms.author: ryanwi-
+ROBOTS: NOINDEX
--- # How to configure single sign-on for an application
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/sample-v2-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
@@ -65,7 +65,7 @@ The following samples illustrate web applications that sign in users. Some sampl
## Desktop and mobile public client apps
-The following samples show public client applications (desktop or mobile applications) that access the Microsoft Graph API, or your own web API in the name of a user. Apart from the *Desktop (Console) with WAM* sample, all these client applications use Microsoft Authentication Library (MSAL).
+The following samples show public client applications (desktop or mobile applications) that access the Microsoft Graph API, or your own web API in the name of a user. Apart from the *Desktop (Console) with WAM* sample, all these client applications use the Microsoft Authentication Library (MSAL).
| Client application | Platform | Flow/grant | Calls Microsoft Graph | Calls an ASP.NET Core web API | | ------------------ | -------- | ----------| ---------- | ------------------------- |
@@ -96,7 +96,7 @@ The following samples show an application that accesses the Microsoft Graph API
## Headless applications
-The following sample shows a public client application running on a device without a web browser. The app can be a command-line tool, an app running on Linux or Mac, or an IoT application. The sample features an app accessing the Microsoft Graph API, in the name of a user who signs-in interactively on another device (such as a mobile phone). This client application uses Microsoft Authentication Library (MSAL).
+The following sample shows a public client application running on a device without a web browser. The app can be a command-line tool, an app running on Linux or Mac, or an IoT application. The sample features an app accessing the Microsoft Graph API, in the name of a user who signs-in interactively on another device (such as a mobile phone). This client application uses the Microsoft Authentication Library (MSAL).
| Client application | Platform | Flow/Grant | Calls Microsoft Graph | | ------------------ | -------- | ----------| ---------- |
@@ -136,7 +136,7 @@ The following samples show how to protect an Azure Function using HttpTrigger an
| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png)</p>ASP.NET Core | ASP.NET Core web API (service) Azure Function of [dotnet-native-aspnetcore-v2](https://github.com/Azure-Samples/ms-identity-dotnet-webapi-azurefunctions) | | ![This image shows the Python logo](media/sample-v2-code/logo_python.png)</p>Python | Web API (service) of [Python](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | | ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js | Web API (service) of [Node.js and passport-azure-ad](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-azurefunctions) |
-| ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>NodeJS | Web API (service) of [NodeJS and passport-azure-ad using on behalf of](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-onbehalfof-azurefunctions) |
+| ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js | Web API (service) of [Node.js and passport-azure-ad using on behalf of](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-onbehalfof-azurefunctions) |
## Other Microsoft Graph samples
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-mobile-acquire-token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-mobile-acquire-token.md
@@ -19,7 +19,7 @@ ms.custom: aaddev
# Get a token for a mobile app that calls web APIs
-Before your app can call protected web APIs, it needs an access token. This article walks you through the process to get a token by using Microsoft Authentication Library (MSAL).
+Before your app can call protected web APIs, it needs an access token. This article walks you through the process to get a token by using the Microsoft Authentication Library (MSAL).
## Define a scope
@@ -252,7 +252,7 @@ var result = await app.AcquireTokenInteractive(scopesForCustomerApi)
##### Other optional parameters
-To learn about the other optional parameters for `AcquireTokenInteractive`, see the [reference documentation for AcquireTokenInteractiveParameterBuilder](/dotnet/api/microsoft.identity.client.acquiretokeninteractiveparameterbuilder?view=azure-dotnet-preview#methods).
+To learn about the other optional parameters for `AcquireTokenInteractive`, see the [reference documentation for AcquireTokenInteractiveParameterBuilder](/dotnet/api/microsoft.identity.client.acquiretokeninteractiveparameterbuilder#methods).
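For orientation, here is a small hedged sketch of how a couple of those optional parameters can be chained onto `AcquireTokenInteractive` in MSAL.NET; the scope value and client ID are illustrative placeholders, and `app` is assumed to be an `IPublicClientApplication` built earlier.

```csharp
using System.Threading.Tasks;
using Microsoft.Identity.Client;

static class TokenAcquisition
{
    // "scopesForCustomerApi" mirrors the variable name in the article's snippet;
    // the scope value is a placeholder.
    static readonly string[] scopesForCustomerApi =
        { "api://11111111-1111-1111-1111-111111111111/read" };

    static async Task<AuthenticationResult> GetTokenAsync(IPublicClientApplication app)
    {
        return await app
            .AcquireTokenInteractive(scopesForCustomerApi)
            .WithPrompt(Prompt.SelectAccount)      // always show the account picker
            .WithUseEmbeddedWebView(false)         // prefer the system browser
            .ExecuteAsync();
    }
}
```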
### Acquire tokens via the protocol
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-mobile-call-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-mobile-call-api.md
@@ -19,7 +19,7 @@ ms.custom: aaddev
# Call a web API from a mobile app
-After your app signs in a user and receives tokens, Microsoft Authentication Library (MSAL) exposes information about the user, the user's environment, and the issued tokens. Your app can use these values to call a web API or display a welcome message to the user.
+After your app signs in a user and receives tokens, the Microsoft Authentication Library (MSAL) exposes information about the user, the user's environment, and the issued tokens. Your app can use these values to call a web API or display a welcome message to the user.
In this article, we'll first look at the MSAL result. Then we'll look at how to use an access token from `AuthenticationResult` or `result` to call a protected web API.
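As a hedged sketch of the second half of that flow, the access token from the MSAL result can be attached as a bearer token on the outgoing request; the web API URL below is a placeholder.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

static class ProtectedApiCaller
{
    static readonly HttpClient httpClient = new HttpClient();

    static async Task<string> CallWebApiAsync(AuthenticationResult result)
    {
        // Placeholder endpoint - substitute your protected web API.
        var request = new HttpRequestMessage(HttpMethod.Get, "https://contoso.example/api/todolist");

        // Attach the access token MSAL returned as a bearer token.
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", result.AccessToken);

        HttpResponseMessage response = await httpClient.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```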
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-mobile-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-mobile-overview.md
@@ -31,7 +31,7 @@ If you haven't already, create your first application by completing a quickstart
## Overview
-A personalized, seamless user experience is essential for mobile apps. Microsoft identity platform enables mobile developers to create that experience for iOS and Android users. Your application can sign in Azure Active Directory (Azure AD) users, personal Microsoft account users, and Azure AD B2C users. It can also acquire tokens to call a web API on their behalf. To implement these flows, we'll use Microsoft Authentication Library (MSAL). MSAL implements the industry standard [OAuth2.0 authorization code flow](v2-oauth2-auth-code-flow.md).
+A personalized, seamless user experience is essential for mobile apps. Microsoft identity platform enables mobile developers to create that experience for iOS and Android users. Your application can sign in Azure Active Directory (Azure AD) users, personal Microsoft account users, and Azure AD B2C users. It can also acquire tokens to call a web API on their behalf. To implement these flows, we'll use the Microsoft Authentication Library (MSAL). MSAL implements the industry standard [OAuth2.0 authorization code flow](v2-oauth2-auth-code-flow.md).
![Daemon apps](./media/scenarios/mobile-app.svg)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-app-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
@@ -37,7 +37,7 @@ Consider the following questions:
The bearer token that's set in the header when the app is called holds information about the app identity. It also holds information about the user unless the web app accepts service-to-service calls from a daemon app.
-Here's a C# code example that shows a client calling the API after it acquires a token with Microsoft Authentication Library for .NET (MSAL.NET):
+Here's a C# code example that shows a client calling the API after it acquires a token with the Microsoft Authentication Library for .NET (MSAL.NET):
```csharp var scopes = new[] {$"api://.../access_as_user"};
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-token-exchange-saml-oauth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-token-exchange-saml-oauth.md
@@ -25,7 +25,7 @@ Many apps are implemented with SAML. However, the Graph API uses the OIDC/OAuth
The general strategy is to add the OIDC/OAuth stack to your app. With your app that implements both standards you can use a session cookie. You aren't exchanging a token explicitly. You're logging a user in with SAML, which generates a session cookie. When the Graph API invokes an OAuth flow, you use the session cookie to authenticate. This strategy assumes the Conditional Access checks pass and the user is authorized. > [!NOTE]
-> The recommended library for adding OIDC/OAuth behavior is the Microsoft Authentication Library (MSAL). To learn more about MSAL, see [Overview of Microsoft Authentication Library (MSAL)](msal-overview.md). The previous library was called Active Directory Authentication Library (ADAL), however it is not recommended as MSAL is replacing it.
+> The recommended library for adding OIDC/OAuth behavior is the Microsoft Authentication Library (MSAL). To learn more about MSAL, see [Overview of the Microsoft Authentication Library (MSAL)](msal-overview.md). The previous library was called Active Directory Authentication Library (ADAL), however it is not recommended as MSAL is replacing it.
## Next steps - [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/setup-multi-tenant-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/setup-multi-tenant-app.md
@@ -12,7 +12,7 @@ ms.workload: identity
ms.topic: conceptual ms.date: 07/15/2019 ms.author: ryanwi-
+ROBOTS: NOINDEX
--- # How to configure a new multi-tenant application
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-android.md
@@ -67,7 +67,7 @@ If you do not already have an Android application, follow these steps to set up
6. Set the **Minimum API level** to **API 19** or higher, and click **Finish**. 7. In the project view, choose **Project** in the dropdown to display source and non-source project files, open **app/build.gradle** and set `targetSdkVersion` to `28`.
-## Integrate with Microsoft Authentication Library
+## Integrate with the Microsoft Authentication Library
### Register your application
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-javascript-auth-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-javascript-auth-code.md
@@ -40,7 +40,7 @@ The application you create in this tutorial enables a JavaScript SPA to query th
This tutorial uses the following library:
-[msal.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) Microsoft Authentication Library for JavaScript v2.0 browser package
+[msal.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) the Microsoft Authentication Library for JavaScript v2.0 browser package
## Get the completed code sample
@@ -355,7 +355,7 @@ graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",
graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages" ```
-## Use Microsoft Authentication Library (MSAL) to sign in user
+## Use the Microsoft Authentication Library (MSAL) to sign in user
### Pop-up
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-windows-desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-windows-desktop.md
@@ -39,7 +39,7 @@ In this tutorial:
![Shows how the sample app generated by this tutorial works](./media/active-directory-develop-guidedsetup-windesktop-intro/windesktophowitworks.svg)
-The sample application that you create with this guide enables a Windows Desktop application that queries the Microsoft Graph API or a web API that accepts tokens from a Microsoft identity-platform endpoint. For this scenario, you add a token to HTTP requests via the Authorization header. Microsoft Authentication Library (MSAL) handles token acquisition and renewal.
+The sample application that you create with this guide enables a Windows Desktop application that queries the Microsoft Graph API or a web API that accepts tokens from a Microsoft identity-platform endpoint. For this scenario, you add a token to HTTP requests via the Authorization header. The Microsoft Authentication Library (MSAL) handles token acquisition and renewal.
## Handling token acquisition for accessing protected web APIs
@@ -83,7 +83,7 @@ To create your application, do the following:
``` > [!NOTE]
- > This command installs Microsoft Authentication Library. MSAL handles acquiring, caching, and refreshing user tokens that are used to access the APIs that are protected by Azure Active Directory v2.0
+ > This command installs the Microsoft Authentication Library. MSAL handles acquiring, caching, and refreshing user tokens that are used to access the APIs that are protected by Azure Active Directory v2.0
> ## Register your application
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-windows-uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-windows-uwp.md
@@ -38,7 +38,7 @@ In this tutorial:
![Shows how the sample app generated by this tutorial works](./media/tutorial-v2-windows-uwp/uwp-intro.svg)
-This guide creates a sample UWP application that queries the Microsoft Graph API. For this scenario, a token is added to HTTP requests by using the Authorization header. Microsoft Authentication Library handles token acquisitions and renewals.
+This guide creates a sample UWP application that queries the Microsoft Graph API. For this scenario, a token is added to HTTP requests by using the Authorization header. The Microsoft Authentication Library handles token acquisitions and renewals.
## NuGet packages
@@ -46,7 +46,7 @@ This guide uses the following NuGet package:
|Library|Description| |---|---|
-|[Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)|Microsoft Authentication Library|
+|[Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)| Microsoft Authentication Library|
|[Microsoft.Graph](https://www.nuget.org/packages/Microsoft.Graph)|Microsoft Graph Client Library| ## Set up your project
@@ -67,7 +67,7 @@ This guide creates an application that displays a button that queries the Micros
![Minimum and Target versions](./media/tutorial-v2-windows-uwp/select-uwp-target-minimum.png)
-### Add Microsoft Authentication Library to your project
+### Add the Microsoft Authentication Library to your project
1. In Visual Studio, select **Tools** > **NuGet Package Manager** > **Package Manager Console**. 1. Copy and paste the following commands in the **Package Manager Console** window:
@@ -78,7 +78,7 @@ This guide creates an application that displays a button that queries the Micros
``` > [!NOTE]
- > The first command installs [Microsoft Authentication Library (MSAL.NET)](https://aka.ms/msal-net). MSAL.NET acquires, caches, and refreshes user tokens that access APIs that are protected by the Microsoft identity platform. The second command installs [Microsoft Graph .NET Client Library](https://github.com/microsoftgraph/msgraph-sdk-dotnet) to authenticate requests to Microsoft Graph and make calls to the service.
+ > The first command installs the [Microsoft Authentication Library (MSAL.NET)](https://aka.ms/msal-net). MSAL.NET acquires, caches, and refreshes user tokens that access APIs that are protected by the Microsoft identity platform. The second command installs [Microsoft Graph .NET Client Library](https://github.com/microsoftgraph/msgraph-sdk-dotnet) to authenticate requests to Microsoft Graph and make calls to the service.
### Create your application's UI
@@ -99,9 +99,9 @@ Visual Studio creates *MainPage.xaml* as a part of your project template. Open t
</Grid> ```
-### Use Microsoft Authentication Library to get a token for the Microsoft Graph API
+### Use the Microsoft Authentication Library to get a token for the Microsoft Graph API
-This section shows how to use Microsoft Authentication Library to get a token for the Microsoft Graph API. Make changes to the *MainPage.xaml.cs* file.
+This section shows how to use the Microsoft Authentication Library to get a token for the Microsoft Graph API. Make changes to the *MainPage.xaml.cs* file.
1. In *MainPage.xaml.cs*, add the following references:
@@ -221,9 +221,9 @@ The `AcquireTokenInteractive` method results in a window that prompts users to s
#### Get a user token silently
-The `AcquireTokenSilent` method handles token acquisitions and renewals without any user interaction. After `AcquireTokenInteractive` runs for the first time and prompts the user for credentials, use the `AcquireTokenSilent` method to request tokens for later calls. That method acquires tokens silently. Microsoft Authentication Library handles token cache and renewal.
+The `AcquireTokenSilent` method handles token acquisitions and renewals without any user interaction. After `AcquireTokenInteractive` runs for the first time and prompts the user for credentials, use the `AcquireTokenSilent` method to request tokens for later calls. That method acquires tokens silently. The Microsoft Authentication Library handles token cache and renewal.
-Eventually, the `AcquireTokenSilent` method fails. Reasons for failure include a user that signed out or changed their password on another device. When Microsoft Authentication Library detects that the issue requires an interactive action, it throws an `MsalUiRequiredException` exception. Your application can handle this exception in two ways:
+Eventually, the `AcquireTokenSilent` method fails. Reasons for failure include a user that signed out or changed their password on another device. When the Microsoft Authentication Library detects that the issue requires an interactive action, it throws an `MsalUiRequiredException` exception. Your application can handle this exception in two ways:
* Your application calls `AcquireTokenInteractive` immediately. This call results in prompting the user to sign in. Normally, use this approach for online applications where there's no available offline content for the user. The sample generated by this guided setup follows the pattern. You see it in action the first time you run the sample.
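A minimal sketch of that silent-then-interactive pattern, assuming an `IPublicClientApplication` and a placeholder scope (the tutorial's generated sample may structure this differently):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

static class SilentTokenHelper
{
    // Placeholder scope for illustration.
    static readonly string[] scopes = { "User.Read" };

    static async Task<AuthenticationResult> GetTokenAsync(IPublicClientApplication app)
    {
        var accounts = await app.GetAccountsAsync();
        try
        {
            // Try the token cache (and a silent refresh) first.
            return await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
                            .ExecuteAsync();
        }
        catch (MsalUiRequiredException)
        {
            // Interaction is required (signed out, password changed, consent needed, ...).
            return await app.AcquireTokenInteractive(scopes)
                            .ExecuteAsync();
        }
    }
}
```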
@@ -293,9 +293,9 @@ private async void SignOutButton_Click(object sender, RoutedEventArgs e)
#### More information about signing out<a name="more-information-on-sign-out"></a>
-The `SignOutButton_Click` method removes the user from the Microsoft Authentication Library user cache. This method effectively tells Microsoft Authentication Library to forget the current user. A future request to acquire a token succeeds only if it's interactive.
+The `SignOutButton_Click` method removes the user from the Microsoft Authentication Library user cache. This method effectively tells the Microsoft Authentication Library to forget the current user. A future request to acquire a token succeeds only if it's interactive.
-The application in this sample supports a single user. Microsoft Authentication Library supports scenarios where the user can sign in on more than one account. An example is an email application where a user has several accounts.
+The application in this sample supports a single user. The Microsoft Authentication Library supports scenarios where the user can sign in on more than one account. An example is an email application where a user has several accounts.
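A hedged sketch of what such a sign-out helper can look like with MSAL.NET; it simply clears every cached account so that the next token request must be interactive.

```csharp
using System.Threading.Tasks;
using Microsoft.Identity.Client;

static class SignOutHelper
{
    static async Task SignOutAsync(IPublicClientApplication app)
    {
        // Remove each cached account; future acquisitions will require sign-in.
        var accounts = await app.GetAccountsAsync();
        foreach (IAccount account in accounts)
        {
            await app.RemoveAsync(account);
        }
    }
}
```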
### Display basic token information
@@ -318,7 +318,7 @@ private void DisplayBasicTokenInfo(AuthenticationResult authResult)
#### More information<a name="more-information-1"></a>
-ID tokens acquired by using **OpenID Connect** also contain a small subset of information pertinent to the user. `DisplayBasicTokenInfo` displays basic information contained in the token. This information includes the user's display name and ID. It also includes the expiration date of the token and the string that represents the access token itself. If you select the **Call Microsoft Graph API** button several times, you'll see that the same token was reused for later requests. You can also see the expiration date extended when Microsoft Authentication Library decides it's time to renew the token.
+ID tokens acquired by using **OpenID Connect** also contain a small subset of information pertinent to the user. `DisplayBasicTokenInfo` displays basic information contained in the token. This information includes the user's display name and ID. It also includes the expiration date of the token and the string that represents the access token itself. If you select the **Call Microsoft Graph API** button several times, you'll see that the same token was reused for later requests. You can also see the expiration date extended when the Microsoft Authentication Library decides it's time to renew the token.
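As a rough, hedged sketch (the tutorial's actual `DisplayBasicTokenInfo` may differ), those fields can be read straight off the `AuthenticationResult`:

```csharp
using System.Text;
using Microsoft.Identity.Client;

static class TokenInfo
{
    // Builds the kind of summary described above: username, expiration,
    // and the raw access token string.
    static string Describe(AuthenticationResult authResult)
    {
        var sb = new StringBuilder();
        sb.AppendLine($"Username: {authResult.Account.Username}");
        sb.AppendLine($"Token Expires: {authResult.ExpiresOn.ToLocalTime()}");
        sb.AppendLine($"Access Token: {authResult.AccessToken}");
        return sb.ToString();
    }
}
```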
### Display message
@@ -466,7 +466,7 @@ You also see basic information about the token acquired via `AcquireTokenInterac
|Property |Format |Description | |---------|---------|---------| |`Username` |`user@domain.com` |The username that identifies the user.|
-|`Token Expires` |`DateTime` |The time when the token expires. Microsoft Authentication Library extends the expiration date by renewing the token as necessary.|
+|`Token Expires` |`DateTime` |The time when the token expires. The Microsoft Authentication Library extends the expiration date by renewing the token as necessary.|
### More information about scopes and delegated permissions
@@ -506,4 +506,4 @@ You enable [integrated authentication on federated domains](#enable-integrated-a
Learn more about using the Microsoft Authentication Library (MSAL) for authorization and authentication in .NET applications: > [!div class="nextstepaction"]
-> [Overview of Microsoft Authentication Library (MSAL)](msal-overview.md)
+> [Overview of the Microsoft Authentication Library (MSAL)](msal-overview.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-conditional-access-dev-guide.md
@@ -178,6 +178,6 @@ To try out this scenario, see our [JS SPA On-behalf-of code sample](https://gith
* To learn more about the capabilities, see [Conditional Access in Azure Active Directory](../conditional-access/overview.md). * For more Azure AD code samples, see [samples](sample-v2-code.md).
-* For more info on the MSAL SDK's and access the reference documentation, see [Microsoft Authentication Library overview](msal-overview.md).
+* For more info on the MSAL SDK's and access the reference documentation, see the [Microsoft Authentication Library overview](msal-overview.md).
* To learn more about multi-tenant scenarios, see [How to sign in users using the multi-tenant pattern](howto-convert-app-to-be-multi-tenant.md). * Learn more about [Conditional access and securing access to IoT apps](/azure/architecture/example-scenario/iot-aad/iot-aad).\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-auth-code-flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
@@ -10,7 +10,7 @@ ms.service: active-directory
ms.subservice: develop ms.workload: identity ms.topic: conceptual
-ms.date: 08/14/2020
+ms.date: 01/11/2021
ms.author: hirsin ms.reviewer: hirsin ms.custom: aaddev, identityplatformtop40
@@ -54,7 +54,7 @@ client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=code &redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F &response_mode=query
-&scope=openid%20offline_access%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.read
+&scope=https%3A%2F%2Fgraph.microsoft.com%2Fmail.read%20api%3A%2F%2F
&state=12345 &code_challenge=YTFjNjI1OWYzMzA3MTI4ZDY2Njg5M2RkNmVjNDE5YmEyZGRhOGYyM2IzNjdmZWFhMTQ1ODg3NDcxY2Nl &code_challenge_method=S256
@@ -68,10 +68,10 @@ client_id=6731de76-14a6-49ae-97bc-6eba6914391e
|--------------|-------------|--------------| | `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). | | `client_id` | required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
|--------------|-------------|--------------| | `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). | | `client_id` | required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
-| `response_type` | required | Must include `code` for the authorization code flow. |
+| `response_type` | required | Must include `code` for the authorization code flow. Can also include `id_token` or `token` if using the [hybrid flow](#request-an-id-token-as-well-hybrid-flow). |
| `redirect_uri` | required | The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be url encoded. For native & mobile apps, you should use the default value of `https://login.microsoftonline.com/common/oauth2/nativeclient`. | | `scope` | required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. For the `/authorize` leg of the request, this can cover multiple resources, allowing your app to get consent for multiple web APIs you want to call. |
-| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. Can be one of the following:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. For more info, see [OpenID Connect protocol](../azuread-dev/v1-protocols-openid-connect-code.md). |
+| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. Can be one of the following:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. |
| `state` | recommended | A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | | `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, and `consent`.<br/><br/>- `prompt=login` will force the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform endpoint will return an `interaction_required` error.<br/>- `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` will interrupt single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> | | `login_hint` | optional | Can be used to pre-fill the username/email address field of the sign-in page for the user, if you know their username ahead of time. Often apps will use this parameter during re-authentication, having already extracted the username from a previous sign-in using the `preferred_username` claim. |
@@ -99,7 +99,7 @@ code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...
| `code` | The authorization_code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization_codes are short lived, typically they expire after about 10 minutes. | | `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
-You can also receive an access token and ID token if you request one and have the implicit grant enabled in your application registration. This is sometimes referred to as the "hybrid flow", and is used by frameworks like ASP.NET.
+You can also receive an ID token if you request one and have the implicit grant enabled in your application registration. This is sometimes referred to as the ["hybrid flow"](#request-an-id-token-as-well-hybrid-flow), and is used by frameworks like ASP.NET.
#### Error response
@@ -125,13 +125,60 @@ The following table describes the various error codes that can be returned in th
| `invalid_request` | Protocol error, such as a missing required parameter. | Fix and resubmit the request. This is a development error typically caught during initial testing. | | `unauthorized_client` | The client application isn't permitted to request an authorization code. | This error usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. | | `access_denied` | Resource owner denied consent | The client application can notify the user that it can't proceed unless the user consents. |
-| `unsupported_response_type` | The authorization server does not support the response type in the request. | Fix and resubmit the request. This is a development error typically caught during initial testing. |
+| `unsupported_response_type` | The authorization server does not support the response type in the request. | Fix and resubmit the request. This is a development error typically caught during initial testing. When seen in the [hybrid flow](#request-an-id-token-as-well-hybrid-flow), signals that you must enable the ID token implicit grant setting on the client app registration. |
| `server_error` | The server encountered an unexpected error.| Retry the request. These errors can result from temporary conditions. The client application might explain to the user that its response is delayed to a temporary error. | | `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. | | `invalid_resource` | The target resource is invalid because it does not exist, Azure AD can't find it, or it's not correctly configured. | This error indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. | | `login_required` | Too many or no users found | The client requested silent authentication (`prompt=none`), but a single user could not found. This may mean there are multiple users active in the session, or no users. This takes into account the tenant chosen (for example, if there are two Azure AD accounts active and one Microsoft account, and `consumers` is chosen, silent authentication will work). | | `interaction_required` | The request requires user interaction. | An additional authentication step or consent is required. Retry the request without `prompt=none`. |
+### Request an ID token as well (hybrid flow)
+
+To learn who the user is before redeeming an authorization code, it's common for applications to also request an ID token when they request the authorization code. This is called the *hybrid flow* because it mixes the implicit grant with the authorization code flow. The hybrid flow is commonly used in web apps that want to render a page for a user without blocking on code redemption, notably [ASP.NET](quickstart-v2-aspnet-core-webapp.md). Both single-page apps and traditional web apps benefit from reduced latency in this model.
+
+The hybrid flow is the same as the authorization code flow described earlier but with three additions, all of which are required to request an ID token: new scopes, a new response_type, and a new `nonce` query parameter.
+
+```
+// Line breaks for legibility only
+
+https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
+client_id=6731de76-14a6-49ae-97bc-6eba6914391e
+&response_type=code%20id_token
+&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
+&response_mode=fragment
+&scope=openid%20offline_access%20https%3A%2F%2Fgraph.microsoft.com%2Fuser.read
+&state=12345
+&nonce=abcde
+&code_challenge=YTFjNjI1OWYzMzA3MTI4ZDY2Njg5M2RkNmVjNDE5YmEyZGRhOGYyM2IzNjdmZWFhMTQ1ODg3NDcxY2Nl
+&code_challenge_method=S256
+```
+
+| Updated Parameter | Required/optional | Description |
+|---------------|-------------|--------------|
+|`response_type`| Required | The addition of `id_token` indicates to the server that the application would like an ID token in the response from the `/authorize` endpoint. |
+|`scope`| Required | For ID tokens, must be updated to include the ID token scopes - `openid`, and optionally `profile` and `email`. |
+|`nonce`| Required| A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. |
+|`response_mode`| Recommended | Specifies the method that should be used to send the resulting token back to your app. Defaults to `query` for just an authorization code, but `fragment` if the request includes an id_token `response_type`.|
+
+The use of `fragment` as a response mode can cause issues for web apps that read the code from the redirect, as browsers do not pass the fragment to the web server. In these situations, apps are recommended to use the `form_post` response mode to ensure that all data is sent to the server.
+
+#### Successful response
+
+A successful response using `response_mode=fragment` looks like:
+
+```HTTP
+GET https://login.microsoftonline.com/common/oauth2/nativeclient#
+code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...
+&id_token=eYj...
+&state=12345
+```
+
+| Parameter | Description |
+|-----------|--------------|
+| `code` | The authorization code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization codes are short lived, typically expiring after about 10 minutes. |
+| `id_token` | An ID token for the user, issued via *implicit grant*. Contains a special `c_hash` claim that is the hash of the `code` in the same request. |
+| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
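If your app validates the returned `id_token` itself, one hedged sketch of the `nonce` check (using the System.IdentityModel.Tokens.Jwt package, and assuming full signature, issuer, and audience validation happens elsewhere) looks like this; `expectedNonce` is the value your app generated for the request:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Linq;

static class NonceCheck
{
    // Compares the nonce claim in the id_token with the value stored for
    // this sign-in request, to mitigate token replay as described above.
    static bool NonceMatches(string idToken, string expectedNonce)
    {
        JwtSecurityToken token = new JwtSecurityTokenHandler().ReadJwtToken(idToken);
        string nonce = token.Claims.FirstOrDefault(c => c.Type == "nonce")?.Value;
        return string.Equals(nonce, expectedNonce, StringComparison.Ordinal);
    }
}
```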
+ ## Request an access token Now that you've acquired an authorization_code and have been granted permission by the user, you can redeem the `code` for an `access_token` to the desired resource. Do this by sending a `POST` request to the `/token` endpoint:
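For illustration only, here is a hedged C# sketch of that redemption request; the secret and code verifier are placeholders (the client ID, scope, and redirect URI echo the example values above), and in practice an auth library such as MSAL sends this request, including the URL-encoding, for you:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static class AuthCodeRedemption
{
    static async Task<string> RedeemCodeAsync(HttpClient http, string authorizationCode)
    {
        // FormUrlEncodedContent URL-encodes each value, which the parameter
        // table below notes is required (and is normally done by the SDK).
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["client_id"] = "6731de76-14a6-49ae-97bc-6eba6914391e",
            ["scope"] = "https://graph.microsoft.com/mail.read",
            ["code"] = authorizationCode,
            ["redirect_uri"] = "http://localhost/myapp/",
            ["grant_type"] = "authorization_code",
            // Placeholder; required only if PKCE was used on the /authorize leg.
            ["code_verifier"] = "placeholder-code-verifier-43-characters-long",
            // Confidential web apps only; never embed a secret in a public client.
            ["client_secret"] = "placeholder-client-secret"
        });

        HttpResponseMessage response = await http.PostAsync(
            "https://login.microsoftonline.com/common/oauth2/v2.0/token", body);
        return await response.Content.ReadAsStringAsync();
    }
}
```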
@@ -164,7 +211,7 @@ client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `scope` | optional | A space-separated list of scopes. The scopes must all be from a single resource, along with OIDC scopes (`profile`, `openid`, `email`). For a more detailed explanation of scopes, refer to [permissions, consent, and scopes](v2-permissions-and-consent.md). This is a Microsoft extension to the authorization code flow, intended to allow apps to declare the resource they want the token for during token redemption.| | `code` | required | The authorization_code that you acquired in the first leg of the flow. | | `redirect_uri` | required | The same redirect_uri value that was used to acquire the authorization_code. |
-| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. You shouldn't use the application secret in a native app or single page app because client_secrets can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side. The client secret must be URL-encoded before being sent. For more information on uri encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). |
+| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. You shouldn't use the application secret in a native app or single page app because client_secrets can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side. Like all parameters discussed here, the client secret must be URL-encoded before being sent, a step usually performed by the SDK. For more information on uri encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). |
| `code_verifier` | recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). | ### Successful response
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/assign-local-admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/assign-local-admin.md
@@ -31,8 +31,7 @@ When you connect a Windows device with Azure AD using an Azure AD join, Azure AD
- The Azure AD device administrator role - The user performing the Azure AD join
-By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Currently, you cannot assign groups to an administrator role.
-Azure AD also adds the Azure AD device administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device.
+By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD device administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device.
## Manage the global administrators role
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-faq.md
@@ -80,6 +80,7 @@ Follow these steps on the on-premises server where you are running Azure AD Conn
> [!NOTE] >You will need both domain administrator and global administrator credentials for the steps below.
+ >If you are not a domain admin and you were assigned permissions by the domain admin, you should call `Update-AzureADSSOForest -OnPremCredentials $creds -PreserveCustomPermissionsOnDesktopSsoAccount`
**Step 1. Get list of AD forests where Seamless SSO has been enabled**
@@ -101,9 +102,6 @@ Follow these steps on the on-premises server where you are running Azure AD Conn
2. Call `Update-AzureADSSOForest -OnPremCredentials $creds`. This command updates the Kerberos decryption key for the `AZUREADSSO` computer account in this specific AD forest and updates it in Azure AD.
- >[!NOTE]
- >If you are not a domain admin and you were assigned permissions by the domain admin, you should call `Update-AzureADSSOForest -OnPremCredentials $creds -PreserveCustomPermissionsOnDesktopSsoAccount`
-
3. Repeat the preceding steps for each AD forest that you've set up the feature on. >[!NOTE]
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/whatis-azure-ad-connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/whatis-azure-ad-connect.md
@@ -21,7 +21,7 @@ Azure AD Connect is the Microsoft tool designed to meet and accomplish your hybr
- [Pass-through authentication](how-to-connect-pta.md) - A sign-in method that allows users to use the same password on-premises and in the cloud, but doesn't require the additional infrastructure of a federated environment. - [Federation integration](how-to-connect-fed-whatis.md) - Federation is an optional part of Azure AD Connect and can be used to configure a hybrid environment using an on-premises AD FS infrastructure. It also provides AD FS management capabilities such as certificate renewal and additional AD FS server deployments. - [Synchronization](how-to-connect-sync-whatis.md) - Responsible for creating users, groups, and other objects. As well as, making sure identity information for your on-premises users and groups is matching the cloud. This synchronization also includes password hashes.-- [Health Monitoring]() - Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.
+- [Health Monitoring](whatis-azure-ad-connect.md#what-is-azure-ad-connect-health) - Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.
![What is Azure AD Connect](./media/whatis-hybrid-identity/arch.png)
@@ -71,4 +71,4 @@ Rich [usage metrics](how-to-connect-health-adfs.md#usage-analytics-for-ad-fs)|To
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) - [Express settings](how-to-connect-install-express.md) - [Customized settings](how-to-connect-install-custom.md)-- [Install Azure AD Connect Health agents](how-to-connect-health-agent-install.md)\ No newline at end of file
+- [Install Azure AD Connect Health agents](how-to-connect-health-agent-install.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/troubleshoot-adding-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-adding-apps.md
@@ -38,7 +38,7 @@ The delete button will be disabled in the following scenarios:
- For Microsoft application, you won't be able to delete them from the UI regardless of your role. -- For servicePrincipals that correspond to a managed identity. Managed identities service principals can't be deleted in the Enterprise apps blade. You need to go to the Azure resource to manage it. Lear more about [Managed Identity](../managed-identities-azure-resources/overview.md)
+- For servicePrincipals that correspond to a managed identity. Managed identities service principals can't be deleted in the Enterprise apps blade. You need to go to the Azure resource to manage it. Learn more about [Managed Identity](../managed-identities-azure-resources/overview.md)
## How to see the details of a portal notification You can see the details of any portal notification by following the steps below:
@@ -88,4 +88,4 @@ See the following descriptions for more details about the notifications.
- **Copy error** – Select the **copy icon** to the right of the **Copy error** textbox to copy all the notification details to share with a support or product group - engineer - Example
- ```{"errorCode":"InternalUrl\_Duplicate","localizedErrorDetails":{"errorDetail":"Internal url 'https://google.com/' is invalid since it is already in use"},"operationResults":\[{"objectId":null,"displayName":null,"status":0,"details":"Internal url 'https://bing.com/' is invalid since it is already in use"}\],"timeStampUtc":"2017-03-23T19:50:26.465743Z","clientRequestId":"302fd775-3329-4670-a9f3-bea37004f0bb","internalTransactionId":"ea5b5475-03b9-4f08-8e95-bbb11289ab65","upn":"tperkins@f128.info","tenantId":"7918d4b5-0442-4a97-be2d-36f9f9962ece","userObjectId":"17f84be4-51f8-483a-b533-383791227a99"}```
\ No newline at end of file
+ ```{"errorCode":"InternalUrl\_Duplicate","localizedErrorDetails":{"errorDetail":"Internal url 'https://google.com/' is invalid since it is already in use"},"operationResults":\[{"objectId":null,"displayName":null,"status":0,"details":"Internal url 'https://bing.com/' is invalid since it is already in use"}\],"timeStampUtc":"2017-03-23T19:50:26.465743Z","clientRequestId":"302fd775-3329-4670-a9f3-bea37004f0bb","internalTransactionId":"ea5b5475-03b9-4f08-8e95-bbb11289ab65","upn":"tperkins@f128.info","tenantId":"7918d4b5-0442-4a97-be2d-36f9f9962ece","userObjectId":"17f84be4-51f8-483a-b533-383791227a99"}```
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -765,6 +765,7 @@ Can manage all aspects of Azure AD and Microsoft services that use Azure AD iden
| microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directoryRoles, and read and update all properties in Azure Active Directory. | | microsoft.directory/directoryRoleTemplates/allProperties/allTasks | Create and delete directoryRoleTemplates, and read and update all properties in Azure Active Directory. | | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties in Azure Active Directory. |
+| microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management. |
| microsoft.directory/groups/allProperties/allTasks | Create and delete groups, and read and update all properties in Azure Active Directory. | | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update groups with isAssignableToRole property set to true in Azure Active Directory. | | microsoft.directory/groupsAssignableToRoles/create | Create groups with isAssignableToRole property set to true in Azure Active Directory. |
@@ -826,6 +827,7 @@ Can read and manage compliance configuration and reports in Azure AD and Microso
| --- | --- | | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. | | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
+| microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management. |
| microsoft.office365.complianceManager/allEntities/allTasks | Manage all aspects of Office 365 Compliance Manager | | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. | | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
@@ -1128,6 +1130,7 @@ Can read everything that a Global Administrator can, but not edit anything.
| microsoft.directory/directoryRoles/eligibleMembers/read | Read directoryRoles.eligibleMembers property in Azure Active Directory. | | microsoft.directory/directoryRoles/members/read | Read directoryRoles.members property in Azure Active Directory. | | microsoft.directory/domains/basic/read | Read basic properties on domains in Azure Active Directory. |
+| microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management. |
| microsoft.directory/groups/appRoleAssignments/read | Read groups.appRoleAssignments property in Azure Active Directory. | | microsoft.directory/groups/basic/read | Read basic properties on groups in Azure Active Directory. | | microsoft.directory/groups/hiddenMembers/read | Read groups.hiddenMembers property in Azure Active Directory. |
@@ -1226,7 +1229,7 @@ Can reset passwords for non-administrators and Helpdesk Administrators.
### Hybrid Identity Administrator permissions
-Enable, deploy, configure, manage, monitor and troubleshoot cloud provisioning and authentication services.
+Can manage AD to Azure AD cloud provisioning and federation settings.
| **Actions** | **Description** | | --- | --- |
@@ -1244,8 +1247,10 @@ Enable, deploy, configure, manage, monitor and troubleshoot cloud provisioning a
| microsoft.directory/applicationTemplates/instantiate | Instantiate gallery applications from application templates. | | microsoft.directory/auditLogs/allProperties/read | Read all properties (including privileged properties) on auditLogs in Azure Active Directory. | | microsoft.directory/cloudProvisioning/allProperties/allTasks | Read and configure all properties of Azure AD Cloud Provisioning service. |
-| microsoft.directory/federatedAuthentication/allProperties/allTasks | Manage all aspects of Active Directory Federated Services (ADFS) or 3rd party federation provider in Azure AD. |
+| microsoft.directory/domains/allProperties/read | Read all properties of domains. |
+| microsoft.directory/domains/federation/update | Update federation property of domains. |
| microsoft.directory/organization/dirSync/update | Update organization.dirSync property in Azure Active Directory. |
+| microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs. |
| microsoft.directory/servicePrincipals/audience/update | Update servicePrincipals.audience property in Azure Active Directory. | | microsoft.directory/servicePrincipals/authentication/update | Update servicePrincipals.authentication property in Azure Active Directory. | | microsoft.directory/servicePrincipals/basic/update | Update basic properties on servicePrincipals in Azure Active Directory. |
@@ -1703,9 +1708,10 @@ Can read security information and reports,and manage configuration in Azure AD a
| --- | --- | | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. | | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-| microsoft.directory/bitlockerKeys/key/read | Read bitlocker key objects and properties (including recovery key) in Azure Active Directory. |
| microsoft.directory/applications/policies/update | Update applications.policies property in Azure Active Directory. | | microsoft.directory/auditLogs/allProperties/read | Read all properties (including privileged properties) on auditLogs in Azure Active Directory. |
+| microsoft.directory/bitlockerKeys/key/read | Read bitlocker key objects and properties (including recovery key) in Azure Active Directory. |
+| microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management. |
| microsoft.directory/identityProtection/allProperties/read | Read all resources in microsoft.aad.identityProtection. | | microsoft.directory/identityProtection/allProperties/update | Update all resources in microsoft.aad.identityProtection. | | microsoft.directory/policies/basic/update | Update basic properties on policies in Azure Active Directory. |
@@ -1756,6 +1762,7 @@ Can read security information and reports in Azure AD and Microsoft 365.
| --- | --- | | microsoft.directory/auditLogs/allProperties/read | Read all properties (including privileged properties) on auditLogs in Azure Active Directory. | | microsoft.directory/bitlockerKeys/key/read | Read bitlocker key objects and properties (including recovery key) in Azure Active Directory. |
+| microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management. |
| microsoft.directory/policies/conditionalAccess/basic/read | Read policies.conditionalAccess property in Azure Active Directory. | | microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on signInReports in Azure Active Directory. | | microsoft.aad.identityProtection/allEntities/read | Read all resources in microsoft.aad.identityProtection. |
@@ -1921,6 +1928,7 @@ Can manage all aspects of users and groups, including resetting passwords for li
| microsoft.directory/contacts/basic/update | Update basic properties on contacts in Azure Active Directory. | | microsoft.directory/contacts/create | Create contacts in Azure Active Directory. | | microsoft.directory/contacts/delete | Delete contacts in Azure Active Directory. |
+| microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management. |
| microsoft.directory/groups/appRoleAssignments/update | Update groups.appRoleAssignments property in Azure Active Directory. | | microsoft.directory/groups/basic/update | Update basic properties on groups in Azure Active Directory. | | microsoft.directory/groups/create | Create groups in Azure Active Directory. |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/skytap-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/skytap-tutorial.md
@@ -82,7 +82,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL that uses the following pattern: `https://sso.connect.pingidentity.com/sso/sp/ACS.saml2`
-1. Select **Set additional URLs**, and perform the following steps if you want to configure the application in **SP** initiated mode:
+1. You can optionally select **Set additional URLs**, and perform the following steps to configure the application in **SP** initiated mode:
a. In the **Sign-on URL** text box, type a URL that uses the following pattern: `https://sso.connect.pingidentity.com/sso/sp/initsso?saasid=<saasid>&idpid=<idpid>`
@@ -155,4 +155,4 @@ When you select the Single Sign-on for Skytap tile in Access Panel, you should b
- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md) -- [Try Slack with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+- [Try Slack with Azure AD](https://aad.portal.azure.com/)
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-kubernetes.md
@@ -95,7 +95,7 @@ In some cases, customers with regulatory constraints or strict security requirem
There are two modes of [deploying API Management into a VNet](./api-management-using-with-vnet.md) – External and Internal.
-If API consumers do not reside in the cluster VNet, the External mode (Fig. 4) should be used. In this mode, the API Management gateway is injected into the cluster VNet but accessible from public internet via an external load balancer. It helps to hide the cluster completely while still allow external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic.
+If API consumers do not reside in the cluster VNet, the External mode (Fig. 4) should be used. In this mode, the API Management gateway is injected into the cluster VNet but accessible from public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic.
![External VNet mode](./media/api-management-aks/vnet-external.png)
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
@@ -8,7 +8,6 @@ manager: erikre
editor: '' ms.service: api-management
-ms.workload: mobile
ms.tgt_pltfrm: na ms.topic: article ms.date: 12/10/2020
@@ -145,6 +144,9 @@ When an API Management service instance is hosted in a VNET, the ports in the fo
+ **Regional Service Tags**: NSG rules allowing outbound connectivity to Storage, SQL, and Event Hubs service tags may use the regional versions of those tags corresponding to the region containing the API Management instance (for example, Storage.WestUS for an API Management instance in the West US region). In multi-region deployments, the NSG in each region should allow traffic to the service tags for that region and the primary region.
+ > [!IMPORTANT]
+ > To enable publishing the [developer portal](api-management-howto-developer-portal.md) for an API Management instance in a virtual network, ensure that you also allow outbound connectivity to blob storage in the West US region. For example, use the **Storage.WestUS** service tag in an NSG rule. Connectivity to blob storage in the West US region is currently required to publish the developer portal for any API Management instance.
+ + **SMTP Relay**: Outbound network connectivity for the SMTP Relay, which resolves under the host `smtpi-co1.msn.com`, `smtpi-ch1.msn.com`, `smtpi-db3.msn.com`, `smtpi-sin.msn.com` and `ies.global.microsoft.com` + **Developer portal CAPTCHA**: Outbound network connectivity for the developer portal's CAPTCHA, which resolves under the hosts `client.hip.live.com` and `partner.hip.live.com`.
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-connect-msi.md
@@ -224,6 +224,9 @@ Type `EXIT` to return to the Cloud Shell prompt.
> [!NOTE] > The back-end services of managed identities also [maintains a token cache](overview-managed-identity.md#obtain-tokens-for-azure-resources) that updates the token for a target resource only when it expires. If you make a mistake configuring your SQL Database permissions and try to modify the permissions *after* trying to get a token with your app, you don't actually get a new token with the updated permissions until the cached token expires.
+> [!NOTE]
+> Azure Active Directory authentication isn't supported for on-premises SQL Server, and this includes managed identities.
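For context on the token flow behind the managed identity, the following is a minimal sketch of requesting an access token for Azure SQL Database from inside App Service, using the identity endpoint the platform exposes to the app through the `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables. It's illustrative only and isn't the tutorial's application code.

```powershell
# Sketch only: request a token for Azure SQL Database from the App Service managed identity endpoint.
$resource = "https://database.windows.net/"
$uri = "$($env:IDENTITY_ENDPOINT)?resource=$([uri]::EscapeDataString($resource))&api-version=2019-08-01"
$response = Invoke-RestMethod -Method GET -Uri $uri -Headers @{ "X-IDENTITY-HEADER" = $env:IDENTITY_HEADER }
$accessToken = $response.access_token   # pass this token to the SQL connection
```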
+ ### Modify connection string Remember that the same changes you made in *Web.config* or *appsettings.json* work with the managed identity, so the only thing to do is to remove the existing connection string in App Service, which Visual Studio created when deploying your app the first time. Use the following command, but replace *\<app-name>* with the name of your app.
app-service https://docs.microsoft.com/en-us/azure/app-service/quickstart-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-java.md
@@ -214,7 +214,7 @@ Property | Required | Description | Version
`<subscriptionId>` | false | Specify the subscription id. | 0.1.0+ `<resourceGroup>` | true | Azure Resource Group for your Web App. | 0.1.0+ `<appName>` | true | The name of your Web App. | 0.1.0+
-`<region>` | true | Specifies the region where your Web App will be hosted; the default value is **westeurope**. All valid regions at [Supported Regions](https://github.com/microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md) section. | 0.1.0+
+`<region>` | true | Specifies the region where your Web App will be hosted; the default value is **westeurope**. All valid regions at [Supported Regions](https://azure.microsoft.com/global-infrastructure/services/?products=app-service) section. | 0.1.0+
`<pricingTier>` | false | The pricing tier for your Web App. The default value is **P1V2** for production workload, while **B2** is the recommended minimum for Java dev/test. [Learn more](https://azure.microsoft.com/pricing/details/app-service/linux/)| 0.1.0+ `<runtime>` | true | The runtime environment configuration; you can see the details [here](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+ `<deployment>` | true | The deployment configuration; you can see the details [here](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App:-Configuration-Details). | 0.1.0+
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-diagnostics.md
@@ -217,7 +217,7 @@ The access log is generated only if you've enabled it on each Application Gatewa
|serverRouted| The backend server that application gateway routes the request to.| |serverStatus| HTTP status code of the backend server.| |serverResponseLatency| Latency of the response from the backend server.|
-|host| Address listed in the host header of the request. If rewritten, this field contains the updated host name|
+|host| Address listed in the host header of the request. If rewritten using header rewrite, this field contains the updated host name|
|originalRequestUriWithArgs| This field contains the original request URL | |requestUri| This field contains the URL after the rewrite operation on Application Gateway | |originalHost| This field contains the original request host name
attestation https://docs.microsoft.com/en-us/azure/attestation/private-endpoint-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/private-endpoint-powershell.md new file mode 100644
@@ -0,0 +1,206 @@
+---
+title: Create a private endpoint using Azure PowerShell
+description: Create a private endpoint for Azure Attestation using Azure PowerShell
+services: attestation
+author: msmbaldwin
+ms.service: attestation
+ms.topic: overview
+ms.date: 08/31/2020
+ms.author: mbaldwin
+
+---
+# Quickstart: Create a Private Endpoint using Azure PowerShell
+
+Get started with Azure Private Link by using a private endpoint to connect securely to Azure Attestation.
+
+In this quickstart, you'll create a private endpoint for Azure Attestation and deploy a virtual machine to test the private connection.
+
+## Prerequisites
+
+* Learn about [Azure Private Link](/azure/private-link/private-link-overview)
+* [Set up Azure Attestation with Azure PowerShell](/azure/attestation/quickstart-powershell)
+
+## Create a resource group
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+```azurepowershell-interactive
+## Connect to your Azure account, set the active subscription, and create a resource group in the desired location. ##
+Connect-AzAccount
+Set-AzContext -Subscription "mySubscription"
+$rg = "CreateAttestationPrivateLinkTutorial-rg"
+$loc = "eastus"
+New-AzResourceGroup -Name $rg -Location $loc
+```
+
+## Create a virtual network and bastion host
+
+In this section, you'll create a virtual network, subnet, and bastion host.
+
+The bastion host will be used to connect securely to the virtual machine for testing the private endpoint.
+
+Create a virtual network and bastion host with:
+
+* [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
+* [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
+* [New-AzBastion](/powershell/module/az.network/new-azbastion)
+
+```azurepowershell-interactive
+## Create backend subnet config. ##
+$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
+
+## Create Azure Bastion subnet. ##
+$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
+
+## Create the virtual network. ##
+$vnet = New-AzVirtualNetwork -Name "myAttestationTutorialVNet" -ResourceGroupName $rg -Location $loc -AddressPrefix "10.0.0.0/16" -Subnet $subnetConfig, $bastsubnetConfig
+
+## Create public IP address for bastion host. ##
+$publicip = New-AzPublicIpAddress -Name "myBastionIP" -ResourceGroupName $rg -Location $loc -Sku "Standard" -AllocationMethod "Static"
+
+## Create bastion host ##
+New-AzBastion -ResourceGroupName $rg -Name "myBastion" -PublicIpAddress $publicip -VirtualNetwork $vnet
+```
+
+It can take a few minutes for the Azure Bastion host to deploy.
+
+## Create test virtual machine
+
+In this section, you'll create a virtual machine that will be used to test the private endpoint.
+
+Create the virtual machine with:
+
+ * [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
+ * [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
+ * [New-AzVM](/powershell/module/az.compute/new-azvm)
+ * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+ * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+ * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+ * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+```azurepowershell-interactive
+## Set credentials for server admin and password. ##
+$cred = Get-Credential
+
+## Command to create network interface for VM ##
+$nicVM = New-AzNetworkInterface -Name "myNicVM" -ResourceGroupName $rg -Location $loc -Subnet $vnet.Subnets[0]
+
+## Create a virtual machine configuration.##
+$vmConfig = New-AzVMConfig -VMName "myVM" -VMSize "Standard_DS1_v2" | Set-AzVMOperatingSystem -Windows -ComputerName "myVM" -Credential $cred | Set-AzVMSourceImage -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" -Skus "2019-Datacenter" -Version "latest" | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine ##
+New-AzVM -ResourceGroupName $rg -Location $loc -VM $vmConfig
+```
+
+## Create an attestation provider
+
+```azurepowershell-interactive
+## Create an attestation provider ##
+$attestationProviderName = "myattestationprovider"
+$attestationProvider = New-AzAttestation -Name $attestationProviderName -ResourceGroupName $rg -Location $loc
+$attestationProviderId = $attestationProvider.Id
+```
+
+To access the attestation provider from your local machine, enter `nslookup <provider-name>.attest.azure.net`, replacing `<provider-name>` with the name of the attestation provider instance you created in the previous steps. You'll receive a message similar to the following:
+
+```powershell
+nslookup myattestationprovider.eus.attest.azure.net
+
+Server:  cdns01.comcast.net
+Address:  2001:558:feed::1
+
+Non-authoritative answer:
+Name:    eus.service.attest.azure.net
+Address:  20.62.219.160
+Aliases:  myattestationprovider.eus.attest.azure.net
+          attesteusatm.trafficmanager.net
+```
+
+## Create private endpoint
+
+In this section, you'll create the private endpoint and connection using:
+
+* [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/New-AzPrivateLinkServiceConnection)
+* [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint)
+
+```azurepowershell-interactive
+## Create private endpoint connection. ##
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name "myConnection" -PrivateLinkServiceId $attestationProviderId -GroupID "Standard"
+
+## Disable private endpoint network policy ##
+ $vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
+$vnet | Set-AzVirtualNetwork
+
+## Create private endpoint
+New-AzPrivateEndpoint -ResourceGroupName $rg -Name "myPrivateEndpoint" -Location $loc -Subnet $vnet.Subnets[0] -PrivateLinkServiceConnection $privateEndpointConnection
+```
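Optionally, you can confirm that the private endpoint finished provisioning before configuring DNS; this quick check isn't part of the original steps.

```azurepowershell-interactive
## Optional: confirm the private endpoint provisioning state. ##
Get-AzPrivateEndpoint -Name "myPrivateEndpoint" -ResourceGroupName $rg |
    Select-Object Name, ProvisioningState
```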
+## Configure the private DNS zone
+
+In this section you'll create and configure the private DNS zone using:
+
+* [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone)
+* [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink)
+* [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig)
+* [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup)
+
+```azurepowershell-interactive
+## Create private dns zone. ##
+$zone = New-AzPrivateDnsZone -ResourceGroupName $rg -Name "privatelink.attest.azure.net"
+
+## Create dns network link. ##
+$link = New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $rg -ZoneName "privatelink.attest.azure.net" -Name "myLink" -VirtualNetworkId $vnet.Id
+
+## Create DNS configuration ##
+$config = New-AzPrivateDnsZoneConfig -Name "privatelink.attest.azure.net" -PrivateDnsZoneId $zone.ResourceId
+
+## Create DNS zone group. ##
+New-AzPrivateDnsZoneGroup -ResourceGroupName $rg -PrivateEndpointName "myPrivateEndpoint" -Name "myZoneGroup" -PrivateDnsZoneConfig $config
+```
+
+## Test connectivity to private endpoint
+
+In this section, you'll use the virtual machine you created in the previous step to connect to the attestation provider across the private endpoint.
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+
+2. Select **Resource groups** in the left-hand navigation pane.
+
+3. Select **CreateAttestationPrivateLinkTutorial-rg**.
+
+4. Select **myVM**.
+
+5. On the overview page for **myVM**, select **Connect** then **Bastion**.
+
+6. Select the blue **Use Bastion** button.
+
+7. Enter the username and password that you entered during the virtual machine creation.
+
+8. Open Windows PowerShell on the server after you connect.
+
+9. Enter `nslookup <provider-name>.attest.azure.net`. Replace **\<provider-name>** with the name of the attestation provider instance you created in the previous steps. You'll receive a message similar to what is displayed below:
+
+ ```powershell
+
+ ## Access the attestation provider from local machine ##
+ nslookup myattestationprovider.eus.attest.azure.net
+
+ Server: cdns01.comcast.net
+ Address: 2001:558:feed::1
+
+ cdns01.comcast.net can't find myattestationprovider.eus.attest.azure.net: Non-existent domain
+
+ ## Access the attestation provider from the VM created in the same virtual network as the private endpoint. ##
+ nslookup myattestationprovider.eus.attest.azure.net
+
+ Server: UnKnown
+ Address: 168.63.129.16
+ Non-authoritative answer:
+ Name: myattestationprovider.eastus.test.attest.azure.net
+ ```
+
automation https://docs.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hybrid-runbook-worker.md
@@ -3,16 +3,14 @@ title: Azure Automation Hybrid Runbook Worker overview
description: This article provides an overview of the Hybrid Runbook Worker, which you can use to run runbooks on machines in your local datacenter or cloud provider. services: automation ms.subservice: process-automation
-ms.date: 11/23/2020
+ms.date: 01/11/2021
ms.topic: conceptual --- # Hybrid Runbook Worker overview Runbooks in Azure Automation might not have access to resources in other clouds or in your on-premises environment because they run on the Azure cloud platform. You can use the Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the machine that's hosting the role and against resources in the environment to manage those local resources. Runbooks are stored and managed in Azure Automation and then delivered to one or more assigned machines.
-The following image illustrates this functionality:
-
-![Hybrid Runbook Worker overview](media/automation-hybrid-runbook-worker/automation.png)
+## Runbook Worker types
There are two types of Runbook Workers - system and user. The following table describes the difference between them.
@@ -23,18 +21,19 @@ There are two types of Runbook Workers - system and user. The following table de
A Hybrid Runbook Worker can run on either the Windows or the Linux operating system, and this role relies on the [Log Analytics agent](../azure-monitor/platform/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/platform/design-logs-deployment.md). The workspace is not only to monitor the machine for the supported operating system, but also to download the components required to install the Hybrid Runbook Worker.
-When Azure Automation [Update Management](./update-management/overview.md) is enabled, any machine connected to your Log Analytics workspace is automatically configured as a system Hybrid Runbook Worker.
+When Azure Automation [Update Management](./update-management/overview.md) is enabled, any machine connected to your Log Analytics workspace is automatically configured as a system Hybrid Runbook Worker. To configure it as a user Windows Hybrid Runbook Worker, see [Deploy a Windows Hybrid Runbook Worker](automation-windows-hrw-install.md) and for Linux, see [Deploy a Linux Hybrid Runbook Worker](automation-linux-hrw-install.md).
-Each user Hybrid Runbook Worker is a member of a Hybrid Runbook Worker group that you specify when you install the worker. A group can include a single worker, but you can include multiple workers in a group for high availability. Each machine can host one Hybrid Runbook Worker reporting to one Automation account; you cannot register the hybrid worker across multiple Automation accounts. This is because a hybrid worker can only listen for jobs from a single Automation account. For machines hosting the system Hybrid Runbook worker managed by Update Management, they can be added to a Hybrid Runbook Worker group. But you must use the same Automation account for both Update Management and the Hybrid Runbook Worker group membership.
+## How does it work?
-When you start a runbook on a user Hybrid Runbook Worker, you specify the group that it runs on. Each worker in the group polls Azure Automation to see if any jobs are available. If a job is available, the first worker to get the job takes it. The processing time of the jobs queue depends on the hybrid worker hardware profile and load. You can't specify a particular worker. Hybrid worker works on a polling mechanism (every 30 secs) and follows an order of first-come, first-serve. Depending on when a job was pushed, whichever hybrid worker pings the Automation service picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds, then there is a high possibility another hybrid worker in the Hybrid Runbook Worker group picked up the job.
+![Hybrid Runbook Worker overview](media/automation-hybrid-runbook-worker/automation.png)
-To control the distribution of runbooks on Hybrid Runbook Workers and when or how the jobs are triggered, you can register the hybrid worker against different Hybrid Runbook Worker groups within your Automation account. Target the jobs against the specific group or groups in order to support your execution arrangement.
+Each user Hybrid Runbook Worker is a member of a Hybrid Runbook Worker group that you specify when you install the worker. A group can include a single worker, but you can include multiple workers in a group for high availability. Each machine can host one Hybrid Runbook Worker reporting to one Automation account; you cannot register the hybrid worker across multiple Automation accounts. A hybrid worker can only listen for jobs from a single Automation account. Machines hosting the system Hybrid Runbook Worker managed by Update Management can be added to a Hybrid Runbook Worker group, but you must use the same Automation account for both Update Management and the Hybrid Runbook Worker group membership.
+
+When you start a runbook on a user Hybrid Runbook Worker, you specify the group that it runs on. Each worker in the group polls Azure Automation to see if any jobs are available. If a job is available, the first worker to get the job takes it. The processing time of the jobs queue depends on the hybrid worker hardware profile and load. You can't specify a particular worker. Hybrid workers use a polling mechanism (every 30 seconds) and follow a first-come, first-served order. Depending on when a job was pushed, whichever hybrid worker pings the Automation service first picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds, there is a high possibility that another hybrid worker in the Hybrid Runbook Worker group picks up the job.
-Use a Hybrid Runbook Worker instead of an [Azure sandbox](automation-runbook-execution.md#runbook-execution-environment) because it doesn't have many of the sandbox [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) on disk space, memory, or network sockets. The limits on a hybrid worker are only related to the worker's own resources.
+A Hybrid Runbook Worker doesn't have many of the [Azure sandbox](automation-runbook-execution.md#runbook-execution-environment) resource [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) on disk space, memory, or network sockets. The limits on a hybrid worker are only related to the worker's own resources, and they aren't constrained by the [fair share](automation-runbook-execution.md#fair-share) time limit that Azure sandboxes have.
-> [!NOTE]
-> Hybrid Runbook Workers aren't constrained by the [fair share](automation-runbook-execution.md#fair-share) time limit that Azure sandboxes have.
+To control the distribution of runbooks on Hybrid Runbook Workers and when or how the jobs are triggered, you can register the hybrid worker against different Hybrid Runbook Worker groups within your Automation account. Target the jobs against the specific group or groups in order to support your execution arrangement.
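As a brief illustration, you can target a specific Hybrid Runbook Worker group when starting a runbook with the Az PowerShell module; the Automation account, runbook, and group names below are hypothetical.

```powershell
# Sketch only: start a published runbook on a user Hybrid Runbook Worker group.
Start-AzAutomationRunbook -AutomationAccountName "myAutomationAccount" `
    -ResourceGroupName "myResourceGroup" `
    -Name "MyRunbook" `
    -RunOn "MyHybridWorkerGroup"
```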
## Hybrid Runbook Worker installation
@@ -93,7 +92,7 @@ Azure Automation Hybrid Runbook Worker can be used in Azure Government to suppor
### Update Management addresses for Hybrid Runbook Worker
-In addition to the standard addresses and ports required for the Hybrid Runbook Worker, Update Management has additional network configuration requirements described under the [network planning](./update-management/overview.md#ports) section.
+In addition to the standard addresses and ports required for the Hybrid Runbook Worker, Update Management has other network configuration requirements described under the [network planning](./update-management/overview.md#ports) section.
## Azure Automation State Configuration on a Hybrid Runbook Worker
@@ -101,7 +100,7 @@ You can run [Azure Automation State Configuration](automation-dsc-overview.md) o
## Runbook Worker limits
-The maximum number of Hybrid Worker groups per Automation Account is 4000, and is applicable for both system & user hybrid workers. If you have more than 4,000 machines to manage, we recommend creating additional Automation accounts.
+The maximum number of Hybrid Worker groups per Automation account is 4,000; this limit applies to both system and user hybrid workers. If you have more than 4,000 machines to manage, we recommend creating another Automation account.
## Runbooks on a Hybrid Runbook Worker
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
@@ -295,7 +295,7 @@ namespace CosmosDBSamplesV2
The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations. > [!NOTE]
-> You can also use the [IDocumentClient](/dotnet/api/microsoft.azure.documents.idocumentclient?view=azure-dotnet) interface to make testing easier.
+> You can also use the [IDocumentClient](/dotnet/api/microsoft.azure.documents.idocumentclient?view=azure-dotnet&preserve-view=true) interface to make testing easier.
```cs using Microsoft.AspNetCore.Http;
@@ -716,6 +716,270 @@ public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, Docume
} ```
+# [Java](#tab/java)
+
+This section contains the following examples:
+
+* [HTTP trigger, look up ID from query string - String parameter](#http-trigger-look-up-id-from-query-string---string-parameter-java)
+* [HTTP trigger, look up ID from query string - POJO parameter](#http-trigger-look-up-id-from-query-string---pojo-parameter-java)
+* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-java)
+* [HTTP trigger, look up ID from route data, using SqlQuery](#http-trigger-look-up-id-from-route-data-using-sqlquery-java)
+* [HTTP trigger, get multiple docs from route data, using SqlQuery](#http-trigger-get-multiple-docs-from-route-data-using-sqlquery-java)
+
+The examples refer to a simple `ToDoItem` type:
+
+```java
+public class ToDoItem {
+
+ private String id;
+ private String description;
+
+ public String getId() {
+ return id;
+ }
+
+ public String getDescription() {
+ return description;
+ }
+
+ @Override
+ public String toString() {
+ return "ToDoItem={id=" + id + ",description=" + description + "}";
+ }
+}
+```
+
+<a id="http-trigger-look-up-id-from-query-string---string-parameter-java"></a>
+
+### HTTP trigger, look up ID from query string - String parameter
+
+The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a document from the specified database and collection, in String form.
+
+```java
+public class DocByIdFromQueryString {
+
+ @FunctionName("DocByIdFromQueryString")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req",
+ methods = {HttpMethod.GET, HttpMethod.POST},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @CosmosDBInput(name = "database",
+ databaseName = "ToDoList",
+ collectionName = "Items",
+ id = "{Query.id}",
+ partitionKey = "{Query.partitionKeyValue}",
+ connectionStringSetting = "Cosmos_DB_Connection_String")
+ Optional<String> item,
+ final ExecutionContext context) {
+
+ // Item list
+ context.getLogger().info("Parameters are: " + request.getQueryParameters());
+ context.getLogger().info("String from the database is " + (item.isPresent() ? item.get() : null));
+
+ // Convert and display
+ if (!item.isPresent()) {
+ return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
+ .body("Document not found.")
+ .build();
+ }
+ else {
+ // return JSON from Cosmos. Alternatively, we can parse the JSON string
+ // and return an enriched JSON object.
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(item.get())
+ .build();
+ }
+ }
+}
+ ```
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on function parameters whose value would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+
+<a id="http-trigger-look-up-id-from-query-string---pojo-parameter-java"></a>
+
+### HTTP trigger, look up ID from query string - POJO parameter
+
+The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a document from the specified database and collection. The document is then converted to an instance of the ```ToDoItem``` POJO previously created, and passed as an argument to the function.
+
+```java
+public class DocByIdFromQueryStringPojo {
+
+ @FunctionName("DocByIdFromQueryStringPojo")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req",
+ methods = {HttpMethod.GET, HttpMethod.POST},
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> request,
+ @CosmosDBInput(name = "database",
+ databaseName = "ToDoList",
+ collectionName = "Items",
+ id = "{Query.id}",
+ partitionKey = "{Query.partitionKeyValue}",
+ connectionStringSetting = "Cosmos_DB_Connection_String")
+ ToDoItem item,
+ final ExecutionContext context) {
+
+ // Item list
+ context.getLogger().info("Parameters are: " + request.getQueryParameters());
+ context.getLogger().info("Item from the database is " + item);
+
+ // Convert and display
+ if (item == null) {
+ return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
+ .body("Document not found.")
+ .build();
+ }
+ else {
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(item)
+ .build();
+ }
+ }
+}
+ ```
+
+<a id="http-trigger-look-up-id-from-route-data-java"></a>
+
+### HTTP trigger, look up ID from route data
+
+The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a route parameter to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a document from the specified database and collection, returning it as an ```Optional<String>```.
+
+```java
+public class DocByIdFromRoute {
+
+ @FunctionName("DocByIdFromRoute")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req",
+ methods = {HttpMethod.GET, HttpMethod.POST},
+ authLevel = AuthorizationLevel.ANONYMOUS,
+ route = "todoitems/{partitionKeyValue}/{id}")
+ HttpRequestMessage<Optional<String>> request,
+ @CosmosDBInput(name = "database",
+ databaseName = "ToDoList",
+ collectionName = "Items",
+ id = "{id}",
+ partitionKey = "{partitionKeyValue}",
+ connectionStringSetting = "Cosmos_DB_Connection_String")
+ Optional<String> item,
+ final ExecutionContext context) {
+
+ // Item list
+ context.getLogger().info("Parameters are: " + request.getQueryParameters());
+ context.getLogger().info("String from the database is " + (item.isPresent() ? item.get() : null));
+
+ // Convert and display
+ if (!item.isPresent()) {
+ return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
+ .body("Document not found.")
+ .build();
+ }
+ else {
+ // return JSON from Cosmos. Alternatively, we can parse the JSON string
+ // and return an enriched JSON object.
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(item.get())
+ .build();
+ }
+ }
+}
+ ```
+
+ <a id="http-trigger-look-up-id-from-route-data-using-sqlquery-java"></a>
+
+### HTTP trigger, look up ID from route data, using SqlQuery
+
+The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a route parameter to specify the ID to look up. That ID is used to retrieve a document from the specified database and collection, converting the result set to a ```ToDoItem[]```, since many documents may be returned, depending on the query criteria.
+
+> [!NOTE]
+> If you need to query by just the ID, it is recommended to use a look up, like the [previous examples](#http-trigger-look-up-id-from-query-string---pojo-parameter-java), as it will consume less [request units](../cosmos-db/request-units.md). Point read operations (GET) are [more efficient](../cosmos-db/optimize-cost-reads-writes.md) than queries by ID.
+>
+
+```java
+public class DocByIdFromRouteSqlQuery {
+
+ @FunctionName("DocByIdFromRouteSqlQuery")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req",
+ methods = {HttpMethod.GET, HttpMethod.POST},
+ authLevel = AuthorizationLevel.ANONYMOUS,
+ route = "todoitems2/{id}")
+ HttpRequestMessage<Optional<String>> request,
+ @CosmosDBInput(name = "database",
+ databaseName = "ToDoList",
+ collectionName = "Items",
+ sqlQuery = "select * from Items r where r.id = {id}",
+ connectionStringSetting = "Cosmos_DB_Connection_String")
+ ToDoItem[] item,
+ final ExecutionContext context) {
+
+ // Item list
+ context.getLogger().info("Parameters are: " + request.getQueryParameters());
+ context.getLogger().info("Items from the database are " + item);
+
+ // Convert and display
+ if (item == null) {
+ return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
+ .body("Document not found.")
+ .build();
+ }
+ else {
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(item)
+ .build();
+ }
+ }
+}
+ ```
+
+ <a id="http-trigger-get-multiple-docs-from-route-data-using-sqlquery-java"></a>
+
+### HTTP trigger, get multiple docs from route data, using SqlQuery
+
+The following example shows a Java function that retrieves multiple documents. The function is triggered by an HTTP request that uses a route parameter ```desc``` to specify the string to search for in the ```description``` field. The search term is used to retrieve a collection of documents from the specified database and collection, converting the result set to a ```ToDoItem[]``` and passing it as an argument to the function.
+
+```java
+public class DocsFromRouteSqlQuery {
+
+ @FunctionName("DocsFromRouteSqlQuery")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS,
+ route = "todoitems3/{desc}")
+ HttpRequestMessage<Optional<String>> request,
+ @CosmosDBInput(name = "database",
+ databaseName = "ToDoList",
+ collectionName = "Items",
+ sqlQuery = "select * from Items r where contains(r.description, {desc})",
+ connectionStringSetting = "Cosmos_DB_Connection_String")
+ ToDoItem[] items,
+ final ExecutionContext context) {
+
+ // Item list
+ context.getLogger().info("Parameters are: " + request.getQueryParameters());
+ context.getLogger().info("Number of items from the database is " + (items == null ? 0 : items.length));
+
+ // Convert and display
+ if (items == null) {
+ return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
+ .body("No documents found.")
+ .build();
+ }
+ else {
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(items)
+ .build();
+ }
+ }
+}
+ ```
+ # [JavaScript](#tab/javascript) This section contains the following examples that read a single document by specifying an ID value from various sources:
@@ -870,59 +1134,274 @@ Here's the *function.json* file:
} ```
-Here's the JavaScript code:
+Here's the JavaScript code:
+
+```javascript
+module.exports = function (context, req, toDoItem) {
+ context.log('JavaScript queue trigger function processed work item');
+ if (!toDoItem)
+ {
+ context.log("ToDo item not found");
+ }
+ else
+ {
+ context.log("Found ToDo item, Description=" + toDoItem.Description);
+ }
+
+ context.done();
+};
+```
+
+<a id="queue-trigger-get-multiple-docs-using-sqlquery-javascript"></a>
+
+### Queue trigger, get multiple docs, using SqlQuery
+
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
+
+The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "documents",
+ "type": "cosmosDB",
+ "direction": "in",
+ "databaseName": "MyDb",
+ "collectionName": "MyCollection",
+ "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
+ "connectionStringSetting": "CosmosDBConnection"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+Here's the JavaScript code:
+
+```javascript
+module.exports = function (context, input) {
+ var documents = context.bindings.documents;
+ for (var i = 0; i < documents.length; i++) {
+ var document = documents[i];
+ // operate on each document
+ }
+ context.done();
+};
+```
+
+# [PowerShell](#tab/powershell)
+
+* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-ps)
+* [HTTP trigger, look up ID from query string](#http-trigger-id-query-string-ps)
+* [HTTP trigger, look up ID from route data](#http-trigger-id-route-data-ps)
+* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-multiple-docs-sqlquery-ps)
+
+### Queue trigger, look up ID from JSON
+
+The following example demonstrates how to read and update a single Cosmos DB document. The document's unique identifier is provided through a JSON value in a queue message.
+
+The Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
+
+<a name="queue-trigger-look-up-id-from-json-ps"></a>
+
+```json
+{
+  "name": "InputDocumentIn",
+  "type": "cosmosDB",
+  "databaseName": "MyDatabase",
+  "collectionName": "MyCollection",
+  "id" : "{queueTrigger_payload_property}",
+  "partitionKey": "{queueTrigger_payload_property}",
+  "connectionStringSetting": "CosmosDBConnection",
+  "direction": "in"
+},
+{
+  "name": "InputDocumentOut",
+  "type": "cosmosDB",
+  "databaseName": "MyDatabase",
+  "collectionName": "MyCollection",
+  "createIfNotExists": false,
+  "partitionKey": "{queueTrigger_payload_property}",
+  "connectionStringSetting": "CosmosDBConnection",
+  "direction": "out"
+}
+```
+
+The _run.ps1_ file has the PowerShell code which reads the incoming document and outputs changes.
+
+```powershell
+param($QueueItem, $InputDocumentIn, $TriggerMetadata)
+
+$Document = $InputDocumentIn
+$Document.text = 'This was updated!'
+
+Push-OutputBinding -Name InputDocumentOut -Value $Document
+```
+
+<a name="http-trigger-id-query-string-ps"></a>
+
+### HTTP trigger, look up ID from query string
+
+The following example demonstrates how to read and update a single Cosmos DB document from a web API. The document's unique identifier is provided through a querystring parameter from the HTTP request, as defined in the binding's `"Id": "{Query.Id}"` property.
+
+The Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
+
+```json
+{
+  "bindings": [
+    {
+      "type": "cosmosDB",
+      "name": "ToDoItem",
+      "databaseName": "ToDoItems",
+      "collectionName": "Items",
+      "connectionStringSetting": "CosmosDBConnection",
+      "direction": "in",
+      "Id": "{Query.id}",
+      "PartitionKey": "{Query.partitionKeyValue}"
+    },
+    {
+      "authLevel": "anonymous",
+      "name": "Request",
+      "type": "httpTrigger",
+      "direction": "in",
+      "methods": [
+        "get",
+        "post"
+      ]
+    },
+    {
+      "name": "Response",
+      "type": "http",
+      "direction": "out"
+    },
+  ],
+  "disabled": false
+}
+```
+
+The _run.ps1_ file has the PowerShell code that reads the incoming document and outputs changes.
+
+```powershell
+using namespace System.Net
+
+param($Request, $ToDoItem, $TriggerMetadata)
+
+Write-Host 'PowerShell HTTP trigger function processed a request'
+
+if (-not $ToDoItem) {
+    Write-Host 'ToDo item not found'
+
+    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
+        StatusCode = [HttpStatusCode]::NotFound
+        Body = $ToDoItem.Description
+    })
+
+} else {
+
+    Write-Host "Found ToDo item, Description=$($ToDoItem.Description)"
+
+    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
+        StatusCode = [HttpStatusCode]::OK
+        Body = $ToDoItem.Description
+    })
+}
+```
+
+<a name="http-trigger-id-route-data-ps"></a>
-```javascript
-module.exports = function (context, req, toDoItem) {
- context.log('JavaScript queue trigger function processed work item');
- if (!toDoItem)
- {
- context.log("ToDo item not found");
- }
- else
- {
- context.log("Found ToDo item, Description=" + toDoItem.Description);
- }
+### HTTP trigger, look up ID from route data
- context.done();
-};
+The following example demonstrates how to read and update a single Cosmos DB document from a web API. The document's unique identifier is provided through a route parameter. The route parameter is defined in the HTTP request binding's `route` property and referenced in the Cosmos DB `"Id": "{Id}"` binding property.
+
+The Cosmos DB input binding is listed first in the list of bindings found in the function's configuration file (_function.json_).
+
+```json
+{
+  "bindings": [
+    {
+      "type": "cosmosDB",
+      "name": "ToDoItem",
+      "databaseName": "ToDoItems",
+      "collectionName": "Items",
+      "connectionStringSetting": "CosmosDBConnection",
+      "direction": "in",
+      "Id": "{id}",
+      "PartitionKey": "{partitionKeyValue}"
+    },
+    {
+      "authLevel": "anonymous",
+      "name": "Request",
+      "type": "httpTrigger",
+      "direction": "in",
+      "methods": [
+        "get",
+        "post"
+      ],
+      "route": "todoitems/{partitionKeyValue}/{id}"
+    },
+    {
+      "name": "Response",
+      "type": "http",
+      "direction": "out"
+    }
+  ],
+  "disabled": false
+}
```
-<a id="queue-trigger-get-multiple-docs-using-sqlquery-javascript"></a>
+The _run.ps1_ file has the PowerShell code that reads the incoming document and outputs changes.
-### Queue trigger, get multiple docs, using SqlQuery
+```powershell
+using namespace System.Net
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
+param($Request, $ToDoItem, $TriggerMetadata)
-The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
+Write-Host 'PowerShell HTTP trigger function processed a request'
-Here's the binding data in the *function.json* file:
+if (-not $ToDoItem) {
+    Write-Host 'ToDo item not found'
+
+    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
+        StatusCode = [HttpStatusCode]::NotFound
+        Body = $ToDoItem.Description
+    })
+
+} else {
+    Write-Host "Found ToDo item, Description=$($ToDoItem.Description)"
+
+    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
+        StatusCode = [HttpStatusCode]::OK
+        Body = $ToDoItem.Description
+    })
+}
+```
+
+<a name="queue-trigger-multiple-docs-sqlquery-ps"></a>
+
+### Queue trigger, get multiple docs, using SqlQuery
+
+The following example demonstrates how to read multiple Cosmos DB documents. The function's configuration file (_function.json_) defines the binding properties, which includes the `sqlQuery`. The SQL statement provided to the `sqlQuery` property selects the set of documents provided to the function.
```json
-{
- "name": "documents",
- "type": "cosmosDB",
- "direction": "in",
- "databaseName": "MyDb",
- "collectionName": "MyCollection",
- "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
- "connectionStringSetting": "CosmosDBConnection"
-}
+{
+  "name": "Documents",
+  "type": "cosmosDB",
+  "direction": "in",
+  "databaseName": "MyDb",
+  "collectionName": "MyCollection",
+  "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
+  "connectionStringSetting": "CosmosDBConnection"
+}
```
-The [configuration](#configuration) section explains these properties.
+The _run.ps1_ file has the PowerShell code that reads the incoming documents.
-Here's the JavaScript code:
+```powershell
+param($QueueItem, $Documents, $TriggerMetadata)
-```javascript
- module.exports = function (context, input) {
- var documents = context.bindings.documents;
- for (var i = 0; i < documents.length; i++) {
- var document = documents[i];
- // operate on each document
- }
- context.done();
- };
+foreach ($Document in $Documents) {
+    # operate on each document
+}
``` # [Python](#tab/python)
@@ -1019,381 +1498,117 @@ Here's the *function.json* file:
], "scriptFile": "__init__.py" }
-```
-
-Here's the Python code:
-
-```python
-import logging
-import azure.functions as func
--
-def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
- if not todoitems:
- logging.warning("ToDo item not found")
- else:
- logging.info("Found ToDo item, Description=%s",
- todoitems[0]['description'])
-
- return 'OK'
-```
-
-<a id="http-trigger-look-up-id-from-route-data-python"></a>
-
-### HTTP trigger, look up ID from route data
-
-The following example shows a [Python function](functions-reference-python.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ],
- "route":"todoitems/{partitionKeyValue}/{id}"
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "todoitems",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connection": "CosmosDBConnection",
- "direction": "in",
- "Id": "{id}",
- "PartitionKey": "{partitionKeyValue}"
- }
- ],
- "disabled": false,
- "scriptFile": "__init__.py"
-}
-```
-
-Here's the Python code:
-
-```python
-import logging
-import azure.functions as func
--
-def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
- if not todoitems:
- logging.warning("ToDo item not found")
- else:
- logging.info("Found ToDo item, Description=%s",
- todoitems[0]['description'])
- return 'OK'
-```
-
-<a id="queue-trigger-get-multiple-docs-using-sqlquery-python"></a>
-
-### Queue trigger, get multiple docs, using SqlQuery
-
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
-
-The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "documents",
- "type": "cosmosDB",
- "direction": "in",
- "databaseName": "MyDb",
- "collectionName": "MyCollection",
- "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
- "connectionStringSetting": "CosmosDBConnection"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the Python code:
-
-```python
-import azure.functions as func
-
-def main(queuemsg: func.QueueMessage, documents: func.DocumentList):
- for document in documents:
- # operate on each document
-```
-
-# [Java](#tab/java)
-
-This section contains the following examples:
-
-* [HTTP trigger, look up ID from query string - String parameter](#http-trigger-look-up-id-from-query-string---string-parameter-java)
-* [HTTP trigger, look up ID from query string - POJO parameter](#http-trigger-look-up-id-from-query-string---pojo-parameter-java)
-* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-java)
-* [HTTP trigger, look up ID from route data, using SqlQuery](#http-trigger-look-up-id-from-route-data-using-sqlquery-java)
-* [HTTP trigger, get multiple docs from route data, using SqlQuery](#http-trigger-get-multiple-docs-from-route-data-using-sqlquery-java)
-
-The examples refer to a simple `ToDoItem` type:
-
-```java
-public class ToDoItem {
-
- private String id;
- private String description;
-
- public String getId() {
- return id;
- }
-
- public String getDescription() {
- return description;
- }
-
- @Override
- public String toString() {
- return "ToDoItem={id=" + id + ",description=" + description + "}";
- }
-}
-```
-
-<a id="http-trigger-look-up-id-from-query-string---string-parameter-java"></a>
-
-### HTTP trigger, look up ID from query string - String parameter
-
-The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a document from the specified database and collection, in String form.
-
-```java
-public class DocByIdFromQueryString {
-
- @FunctionName("DocByIdFromQueryString")
- public HttpResponseMessage run(
- @HttpTrigger(name = "req",
- methods = {HttpMethod.GET, HttpMethod.POST},
- authLevel = AuthorizationLevel.ANONYMOUS)
- HttpRequestMessage<Optional<String>> request,
- @CosmosDBInput(name = "database",
- databaseName = "ToDoList",
- collectionName = "Items",
- id = "{Query.id}",
- partitionKey = "{Query.partitionKeyValue}",
- connectionStringSetting = "Cosmos_DB_Connection_String")
- Optional<String> item,
- final ExecutionContext context) {
-
- // Item list
- context.getLogger().info("Parameters are: " + request.getQueryParameters());
- context.getLogger().info("String from the database is " + (item.isPresent() ? item.get() : null));
-
- // Convert and display
- if (!item.isPresent()) {
- return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
- .body("Document not found.")
- .build();
- }
- else {
- // return JSON from Cosmos. Alternatively, we can parse the JSON string
- // and return an enriched JSON object.
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(item.get())
- .build();
- }
- }
-}
- ```
-
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on function parameters whose value would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
-
-<a id="http-trigger-look-up-id-from-query-string---pojo-parameter-java"></a>
-
-### HTTP trigger, look up ID from query string - POJO parameter
+```
-The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value used to retrieve a document from the specified database and collection. The document is then converted to an instance of the ```ToDoItem``` POJO previously created, and passed as an argument to the function.
+Here's the Python code:
-```java
-public class DocByIdFromQueryStringPojo {
+```python
+import logging
+import azure.functions as func
- @FunctionName("DocByIdFromQueryStringPojo")
- public HttpResponseMessage run(
- @HttpTrigger(name = "req",
- methods = {HttpMethod.GET, HttpMethod.POST},
- authLevel = AuthorizationLevel.ANONYMOUS)
- HttpRequestMessage<Optional<String>> request,
- @CosmosDBInput(name = "database",
- databaseName = "ToDoList",
- collectionName = "Items",
- id = "{Query.id}",
- partitionKey = "{Query.partitionKeyValue}",
- connectionStringSetting = "Cosmos_DB_Connection_String")
- ToDoItem item,
- final ExecutionContext context) {
- // Item list
- context.getLogger().info("Parameters are: " + request.getQueryParameters());
- context.getLogger().info("Item from the database is " + item);
+def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
+ if not todoitems:
+ logging.warning("ToDo item not found")
+ else:
+ logging.info("Found ToDo item, Description=%s",
+ todoitems[0]['description'])
- // Convert and display
- if (item == null) {
- return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
- .body("Document not found.")
- .build();
- }
- else {
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(item)
- .build();
- }
- }
-}
- ```
+ return 'OK'
+```
-<a id="http-trigger-look-up-id-from-route-data-java"></a>
+<a id="http-trigger-look-up-id-from-route-data-python"></a>
### HTTP trigger, look up ID from route data
-The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a route parameter to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a document from the specified database and collection, returning it as an ```Optional<String>```.
-
-```java
-public class DocByIdFromRoute {
-
- @FunctionName("DocByIdFromRoute")
- public HttpResponseMessage run(
- @HttpTrigger(name = "req",
- methods = {HttpMethod.GET, HttpMethod.POST},
- authLevel = AuthorizationLevel.ANONYMOUS,
- route = "todoitems/{partitionKeyValue}/{id}")
- HttpRequestMessage<Optional<String>> request,
- @CosmosDBInput(name = "database",
- databaseName = "ToDoList",
- collectionName = "Items",
- id = "{id}",
- partitionKey = "{partitionKeyValue}",
- connectionStringSetting = "Cosmos_DB_Connection_String")
- Optional<String> item,
- final ExecutionContext context) {
+The following example shows a [Python function](functions-reference-python.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
- // Item list
- context.getLogger().info("Parameters are: " + request.getQueryParameters());
- context.getLogger().info("String from the database is " + (item.isPresent() ? item.get() : null));
+Here's the *function.json* file:
- // Convert and display
- if (!item.isPresent()) {
- return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
- .body("Document not found.")
- .build();
- }
- else {
- // return JSON from Cosmos. Alternatively, we can parse the JSON string
- // and return an enriched JSON object.
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(item.get())
- .build();
- }
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ],
+ "route":"todoitems/{partitionKeyValue}/{id}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "todoitems",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connection": "CosmosDBConnection",
+ "direction": "in",
+ "Id": "{id}",
+ "PartitionKey": "{partitionKeyValue}"
}
+ ],
+ "disabled": false,
+ "scriptFile": "__init__.py"
}
- ```
-
- <a id="http-trigger-look-up-id-from-route-data-using-sqlquery-java"></a>
+```
-### HTTP trigger, look up ID from route data, using SqlQuery
+Here's the Python code:
-The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a route parameter to specify the ID to look up. That ID is used to retrieve a document from the specified database and collection, converting the result set to a ```ToDoItem[]```, since many documents may be returned, depending on the query criteria.
+```python
+import logging
+import azure.functions as func
-> [!NOTE]
-> If you need to query by just the ID, it is recommended to use a look up, like the [previous examples](#http-trigger-look-up-id-from-query-string---pojo-parameter-java), as it will consume less [request units](../cosmos-db/request-units.md). Point read operations (GET) are [more efficient](../cosmos-db/optimize-cost-reads-writes.md) than queries by ID.
->
-```java
-public class DocByIdFromRouteSqlQuery {
+def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
+ if not todoitems:
+ logging.warning("ToDo item not found")
+ else:
+ logging.info("Found ToDo item, Description=%s",
+ todoitems[0]['description'])
+ return 'OK'
+```
- @FunctionName("DocByIdFromRouteSqlQuery")
- public HttpResponseMessage run(
- @HttpTrigger(name = "req",
- methods = {HttpMethod.GET, HttpMethod.POST},
- authLevel = AuthorizationLevel.ANONYMOUS,
- route = "todoitems2/{id}")
- HttpRequestMessage<Optional<String>> request,
- @CosmosDBInput(name = "database",
- databaseName = "ToDoList",
- collectionName = "Items",
- sqlQuery = "select * from Items r where r.id = {id}",
- connectionStringSetting = "Cosmos_DB_Connection_String")
- ToDoItem[] item,
- final ExecutionContext context) {
+<a id="queue-trigger-get-multiple-docs-using-sqlquery-python"></a>
- // Item list
- context.getLogger().info("Parameters are: " + request.getQueryParameters());
- context.getLogger().info("Items from the database are " + item);
+### Queue trigger, get multiple docs, using SqlQuery
- // Convert and display
- if (item == null) {
- return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
- .body("Document not found.")
- .build();
- }
- else {
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(item)
- .build();
- }
- }
-}
- ```
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
- <a id="http-trigger-get-multiple-docs-from-route-data-using-sqlquery-java"></a>
+The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
-### HTTP trigger, get multiple docs from route data, using SqlQuery
+Here's the binding data in the *function.json* file:
-The following example shows a Java function that retrieves multiple documents. The function is triggered by an HTTP request that uses a route parameter ```desc``` to specify the string to search for in the ```description``` field. The search term is used to retrieve a collection of documents from the specified database and collection, converting the result set to a ```ToDoItem[]``` and passing it as an argument to the function.
+```json
+{
+ "name": "documents",
+ "type": "cosmosDB",
+ "direction": "in",
+ "databaseName": "MyDb",
+ "collectionName": "MyCollection",
+ "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
+ "connectionStringSetting": "CosmosDBConnection"
+}
+```
-```java
-public class DocsFromRouteSqlQuery {
+The [configuration](#configuration) section explains these properties.
- @FunctionName("DocsFromRouteSqlQuery")
- public HttpResponseMessage run(
- @HttpTrigger(name = "req",
- methods = {HttpMethod.GET},
- authLevel = AuthorizationLevel.ANONYMOUS,
- route = "todoitems3/{desc}")
- HttpRequestMessage<Optional<String>> request,
- @CosmosDBInput(name = "database",
- databaseName = "ToDoList",
- collectionName = "Items",
- sqlQuery = "select * from Items r where contains(r.description, {desc})",
- connectionStringSetting = "Cosmos_DB_Connection_String")
- ToDoItem[] items,
- final ExecutionContext context) {
+Here's the Python code:
- // Item list
- context.getLogger().info("Parameters are: " + request.getQueryParameters());
- context.getLogger().info("Number of items from the database is " + (items == null ? 0 : items.length));
+```python
+import logging
+import azure.functions as func
- // Convert and display
- if (items == null) {
- return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
- .body("No documents found.")
- .build();
- }
- else {
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(items)
- .build();
- }
- }
-}
- ```
+def main(queuemsg: func.QueueMessage, documents: func.DocumentList):
+    # Operate on each document returned by the SQL query.
+    for document in documents:
+        logging.info('Found document Id: %s', document['id'])
+```
---
@@ -1409,17 +1624,21 @@ The attribute's constructor takes the database name and collection name. For inf
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that write to Cosmos DB. The annotation parameter type should be `OutputBinding<T>`, where `T` is either a native Java type or a POJO.
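For illustration only, here is a minimal sketch of the parameter form, reusing the `ToDoList`/`Items` database, the `Cosmos_DB_Connection_String` setting, and the import-free style of the other Java samples in this article; the function name and the HTTP trigger are assumptions made for the sketch:

```java
public class WriteOneDoc {

    @FunctionName("WriteOneDoc")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.POST},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            @CosmosDBOutput(name = "database",
                            databaseName = "ToDoList",
                            collectionName = "Items",
                            connectionStringSetting = "Cosmos_DB_Connection_String")
            OutputBinding<String> outputItem,
            final ExecutionContext context) {

        // The request body is assumed to be a JSON document; setting it on the
        // output binding asks the runtime to write it to Cosmos DB when the
        // function completes successfully.
        outputItem.setValue(request.getBody().orElse("{}"));

        return request.createResponseBuilder(HttpStatus.OK)
                      .body("Document queued for write.")
                      .build();
    }
}
```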
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
+# [Python](#tab/python)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that write to Cosmos DB. The annotation parameter type should be `OutputBinding<T>`, where `T` is either a native Java type or a POJO.
+Attributes are not supported by Python.
---
@@ -1452,17 +1671,21 @@ When the function exits successfully, any changes made to the input document via
When the function exits successfully, any changes made to the input document via named input parameters are automatically persisted.
+# [Java](#tab/java)
+
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), the [@CosmosDBInput](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput) annotation exposes Cosmos DB data to the function. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
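For illustration only, a minimal sketch of the nullable form using `Optional<T>`, again assuming the `ToDoList`/`Items` database and `Cosmos_DB_Connection_String` setting from the samples above; the `{Query.id}` and `{Query.partitionKeyValue}` binding expressions and the function name are illustrative:

```java
public class DocByIdFromQueryString {

    @FunctionName("DocByIdFromQueryString")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            @CosmosDBInput(name = "database",
                           databaseName = "ToDoList",
                           collectionName = "Items",
                           id = "{Query.id}",
                           partitionKey = "{Query.partitionKeyValue}",
                           connectionStringSetting = "Cosmos_DB_Connection_String")
            Optional<String> item,
            final ExecutionContext context) {

        // The bound document, if found, arrives as a JSON string wrapped in Optional.
        if (!item.isPresent()) {
            return request.createResponseBuilder(HttpStatus.NOT_FOUND)
                          .body("Document not found.")
                          .build();
        }
        return request.createResponseBuilder(HttpStatus.OK)
                      .header("Content-Type", "application/json")
                      .body(item.get())
                      .build();
    }
}
```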
+ # [JavaScript](#tab/javascript)
-Updates are not made automatically upon function exit. Instead, use `context.bindings.<documentName>In` and `context.bindings.<documentName>Out` to make updates. See the JavaScript example.
+Updates are not made automatically upon function exit. Instead, use `context.bindings.<documentName>In` and `context.bindings.<documentName>Out` to make updates. See the [JavaScript example](#example) for more detail.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Data is made available to the function via a `DocumentList` parameter. Changes made to the document are not automatically persisted.
+Updates to documents are not made automatically upon function exit. To update documents in a function use an [output binding](./functions-bindings-cosmosdb-v2-input.md). See the [PowerShell example](#example) for more detail.
-# [Java](#tab/java)
+# [Python](#tab/python)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), the [@CosmosDBInput](/java/api/com.microsoft.azure.functions.annotation.cosmosdbinput) annotation exposes Cosmos DB data to the function. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+Data is made available to the function via a `DocumentList` parameter. Changes made to the document are not automatically persisted.
---
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
@@ -243,136 +243,6 @@ public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> t
} ```
-# [JavaScript](#tab/javascript)
-
-The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
-
-```json
-{
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-The function creates Azure Cosmos DB documents in the following format for each record:
-
-```json
-{
- "id": "John Henry-123456",
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "employeeDocument",
- "type": "cosmosDB",
- "databaseName": "MyDatabase",
- "collectionName": "MyCollection",
- "createIfNotExists": true,
- "connectionStringSetting": "MyAccount_COSMOSDB",
- "direction": "out"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the JavaScript code:
-
-```javascript
- module.exports = function (context) {
-
- context.bindings.employeeDocument = JSON.stringify({
- id: context.bindings.myQueueItem.name + "-" + context.bindings.myQueueItem.employeeId,
- name: context.bindings.myQueueItem.name,
- employeeId: context.bindings.myQueueItem.employeeId,
- address: context.bindings.myQueueItem.address
- });
-
- context.done();
- };
-```
-
-For bulk insert form the objects first and then run the stringify function. Here's the JavaScript code:
-
-```javascript
- module.exports = function (context) {
-
- context.bindings.employeeDocument = JSON.stringify([
- {
- "id": "John Henry-123456",
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
- },
- {
- "id": "John Doe-123457",
- "name": "John Doe",
- "employeeId": "123457",
- "address": "A town far away"
- }]);
-
- context.done();
- };
-```
-
-# [Python](#tab/python)
-
-The following example demonstrates how to write a document to an Azure CosmosDB database as the output of a function.
-
-The binding definition is defined in *function.json* where *type* is set to `cosmosDB`.
-
-```json
-{
- "scriptFile": "__init__.py",
- "bindings": [
- {
- "authLevel": "function",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "cosmosDB",
- "direction": "out",
- "name": "doc",
- "databaseName": "demodb",
- "collectionName": "data",
- "createIfNotExists": "true",
- "connectionStringSetting": "AzureCosmosDBConnectionString"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "$return"
- }
- ]
-}
-```
-
-To write to the database, pass a document object to the `set` method of the database parameter.
-
-```python
-import azure.functions as func
-
-def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpResponse:
-
- request_body = req.get_body()
-
- doc.set(func.Document.from_json(request_body))
-
- return 'OK'
-```
- # [Java](#tab/java) * [Queue trigger, save message to database via return value](#queue-trigger-save-message-to-database-via-return-value-java)
@@ -540,6 +410,165 @@ The following example shows a Java function that writes multiple documents to Co
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBOutput` annotation on parameters that will be written to Cosmos DB. The annotation parameter type should be ```OutputBinding<T>```, where T is either a native Java type or a POJO.
+# [JavaScript](#tab/javascript)
+
+The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
+
+```json
+{
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+The function creates Azure Cosmos DB documents in the following format for each record:
+
+```json
+{
+ "id": "John Henry-123456",
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "employeeDocument",
+ "type": "cosmosDB",
+ "databaseName": "MyDatabase",
+ "collectionName": "MyCollection",
+ "createIfNotExists": true,
+ "connectionStringSetting": "MyAccount_COSMOSDB",
+ "direction": "out"
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+Here's the JavaScript code:
+
+```javascript
+ module.exports = function (context) {
+
+ context.bindings.employeeDocument = JSON.stringify({
+ id: context.bindings.myQueueItem.name + "-" + context.bindings.myQueueItem.employeeId,
+ name: context.bindings.myQueueItem.name,
+ employeeId: context.bindings.myQueueItem.employeeId,
+ address: context.bindings.myQueueItem.address
+ });
+
+ context.done();
+ };
+```
+
+For bulk insert, form the objects first and then run the stringify function. Here's the JavaScript code:
+
+```javascript
+ module.exports = function (context) {
+
+ context.bindings.employeeDocument = JSON.stringify([
+ {
+ "id": "John Henry-123456",
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+ },
+ {
+ "id": "John Doe-123457",
+ "name": "John Doe",
+ "employeeId": "123457",
+ "address": "A town far away"
+ }]);
+
+ context.done();
+ };
+```
+
+# [PowerShell](#tab/powershell)
+
+The following example shows how to write data to Cosmos DB using an output binding. The binding is declared in the function's configuration file (_function.json_), takes data from a queue message, and writes it to a Cosmos DB document.
+
+```json
+{
+  "name": "EmployeeDocument",
+  "type": "cosmosDB",
+  "databaseName": "MyDatabase",
+  "collectionName": "MyCollection",
+  "createIfNotExists": true,
+  "connectionStringSetting": "MyStorageConnectionAppSetting",
+  "direction": "out"
+}
+```
+
+In the _run.ps1_ file, the function pushes a document to the `EmployeeDocument` output binding, which is persisted in the database.
+
+```powershell
+param($QueueItem, $TriggerMetadata)
+
+Push-OutputBinding -Name EmployeeDocument -Value @{
+    id = $QueueItem.name + '-' + $QueueItem.employeeId
+    name = $QueueItem.name
+    employeeId = $QueueItem.employeeId
+    address = $QueueItem.address
+}
+```
+
+# [Python](#tab/python)
+
+The following example demonstrates how to write a document to an Azure Cosmos DB database as the output of a function.
+
+The binding is defined in *function.json*, where *type* is set to `cosmosDB`.
+
+```json
+{
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "authLevel": "function",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "cosmosDB",
+ "direction": "out",
+ "name": "doc",
+ "databaseName": "demodb",
+ "collectionName": "data",
+ "createIfNotExists": "true",
+ "connectionStringSetting": "AzureCosmosDBConnectionString"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ }
+ ]
+}
+```
+
+To write to the database, pass a document object to the `set` method of the database parameter.
+
+```python
+import azure.functions as func
+
+def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpResponse:
+
+ request_body = req.get_body()
+
+ doc.set(func.Document.from_json(request_body))
+
+ return 'OK'
+```
+ --- ## Attributes and annotations
@@ -564,17 +593,21 @@ The attribute's constructor takes the database name and collection name. For inf
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+The `CosmosDBOutput` annotation is available to write data to Cosmos DB. You can apply the annotation to the function or to an individual function parameter. When used on the function method, the return value of the function is what is written to Cosmos DB. If you use the annotation with a parameter, the parameter's type must be declared as `OutputBinding<T>`, where `T` is a native Java type or a POJO.
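For illustration only, a minimal sketch of the method-level form, where the returned string is the document written to Cosmos DB; the queue name, storage connection setting, and database and collection names are assumptions for the sketch:

```java
public class SaveQueueMessageToDb {

    @FunctionName("SaveQueueMessageToDb")
    @CosmosDBOutput(name = "database",
                    databaseName = "ToDoList",
                    collectionName = "Items",
                    connectionStringSetting = "Cosmos_DB_Connection_String")
    public String run(
            @QueueTrigger(name = "msg",
                          queueName = "myqueue-items",
                          connection = "AzureWebJobsStorage")
            String message,
            final ExecutionContext context) {

        // Whatever this method returns is written to Cosmos DB as a document.
        context.getLogger().info("Writing queue message to Cosmos DB: " + message);
        return "{ \"id\": \"" + java.util.UUID.randomUUID()
                + "\", \"description\": \"" + message + "\" }";
    }
}
```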
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
+# [Python](#tab/python)
-The `CosmosDBOutput` annotation is available to write data to Cosmos DB. You can apply the annotation to the function or to an individual function parameter. When used on the function method, the return value of the function is what is written to Cosmos DB. If you use the annotation with a parameter, the parameter's type must be declared as an `OutputBinding<T>` where `T` a native Java type or a POJO.
+Attributes are not supported by Python.
---
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
@@ -86,6 +86,27 @@ Here's the C# script code:
} ```
+# [Java](#tab/java)
+
+This function is invoked when there are inserts or updates in the specified database and collection.
+
+```java
+ @FunctionName("cosmosDBMonitor")
+ public void cosmosDbProcessor(
+ @CosmosDBTrigger(name = "items",
+ databaseName = "ToDoList",
+ collectionName = "Items",
+ leaseCollectionName = "leases",
+ createLeaseCollectionIfNotExists = true,
+ connectionStringSetting = "AzureCosmosDBConnection") String[] items,
+ final ExecutionContext context ) {
+ context.getLogger().info(items.length + " item(s) is/are changed.");
+ }
+```
++
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters whose value would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+ # [JavaScript](#tab/javascript) The following example shows a Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified.
@@ -115,6 +136,31 @@ Here's the JavaScript code:
} ```
+# [PowerShell](#tab/powershell)
+
+The following example shows how to run a function as data changes in Cosmos DB.
+
+```json
+{
+  "type": "cosmosDBTrigger",
+  "name": "Documents",
+  "direction": "in",
+  "leaseCollectionName": "leases",
+  "connectionStringSetting": "MyStorageConnectionAppSetting",
+  "databaseName": "Tasks",
+  "collectionName": "Items",
+  "createLeaseCollectionIfNotExists": true
+}
+```
+
+In the _run.ps1_ file, you have access to the document that triggers the function via the `$Documents` parameter.
+
+```powershell
+param($Documents, $TriggerMetadata)
+
+Write-Host "First document Id modified : $($Documents[0].id)"
+```
+ # [Python](#tab/python) The following example shows a Cosmos DB trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes log messages when Cosmos DB records are modified.
@@ -146,27 +192,6 @@ Here's the Python code:
logging.info('First document Id modified: %s', documents[0]['id']) ```
-# [Java](#tab/java)
-
-This function is invoked when there are inserts or updates in the specified database and collection.
-
-```java
- @FunctionName("cosmosDBMonitor")
- public void cosmosDbProcessor(
- @CosmosDBTrigger(name = "items",
- databaseName = "ToDoList",
- collectionName = "Items",
- leaseCollectionName = "leases",
- createLeaseCollectionIfNotExists = true,
- connectionStringSetting = "AzureCosmosDBConnection") String[] items,
- final ExecutionContext context ) {
- context.getLogger().info(items.length + "item(s) is/are changed.");
- }
-```
--
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters whose value would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
- --- ## Attributes and annotations
@@ -193,17 +218,21 @@ For a complete example, see [Trigger](#example).
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters that read data from Cosmos DB.
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
+# [Python](#tab/python)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on parameters that read data from Cosmos DB.
+Attributes are not supported by Python.
---
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-linux-custom-image.md
@@ -351,7 +351,7 @@ To deploy your function code to Azure, you need to create three resources:
- A resource group, which is a logical container for related resources. - An Azure Storage account, which maintains state and other information about your projects.-- An Azure functions app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
+- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
You use Azure CLI commands to create these items. Each command provides JSON output upon completion.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/supported-languages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/supported-languages.md
@@ -23,9 +23,9 @@ There are two levels of support:
[!INCLUDE [functions-supported-languages](../../includes/functions-supported-languages.md)]
-## Custom handlers (preview)
+## Custom handlers
-Custom handlers are lightweight web servers that receive events from the Azure Functions host. Any language that supports HTTP primitives can implement a custom handler. This means that custom handlers can be use to create functions in languages that aren't officially supported. To learn more, see [Azure Functions custom handlers (preview)](functions-custom-handlers.md).
+Custom handlers are lightweight web servers that receive events from the Azure Functions host. Any language that supports HTTP primitives can implement a custom handler. This means that custom handlers can be used to create functions in languages that aren't officially supported. To learn more, see [Azure Functions custom handlers](functions-custom-handlers.md).
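To make the idea concrete: a custom handler is simply an HTTP server that listens on the port the Functions host passes in the `FUNCTIONS_CUSTOMHANDLER_PORT` environment variable. The following is only a rough sketch in Java; the function name `HttpExample` is an assumption, and it presumes HTTP request forwarding is enabled for the function (`enableForwardingHttpRequest` in *host.json*), so the handler can reply with a plain HTTP response rather than the host's JSON invocation payload.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class Handler {
    public static void main(String[] args) throws Exception {
        // The Functions host tells the handler which port to listen on.
        int port = Integer.parseInt(
                System.getenv().getOrDefault("FUNCTIONS_CUSTOMHANDLER_PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        // The host forwards invocations of the (hypothetical) HttpExample function here.
        server.createContext("/HttpExample", exchange -> {
            byte[] body = "Hello from a custom handler".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/plain");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
    }
}
```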
## Language extensibility
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric-multiple-time-series-single-rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-metric-multiple-time-series-single-rule.md
@@ -4,7 +4,7 @@ description: Alert at scale using a single alert rule for multiple time series
author: harelbr ms.author: harelbr ms.topic: conceptual
-ms.date: 11/12/2020
+ms.date: 01/11/2021
ms.subservice: alerts ---
@@ -157,7 +157,7 @@ For this alert rule, six metric time-series are being monitored separately:
1. **Selecting all current and future dimensions** - You can choose to monitor all possible values of a dimension, including future values. Such an alert rule will scale automatically to monitor all values of the dimension without you needing to modify the alert rule every time a dimension value is added or removed. 2. **Excluding dimensions** - Selecting the '≠' (exclude) operator for a dimension value is equivalent to selecting all other values of that dimension, including future values.
-3. **New and custom dimensions** ΓÇô The dimension values displayed in the Azure portal are based on metric data collected in the last three days. If the dimension value youΓÇÖre looking for isnΓÇÖt yet emitted, you can add a custom dimension value.
+3. **New and custom dimensions** - The dimension values displayed in the Azure portal are based on metric data collected in the last day. If the dimension value you're looking for isn't yet emitted, you can add a custom dimension value.
4. **Matching dimensions with a prefix** - You can choose to monitor all dimension values that start with a specific pattern, by selecting the 'Starts with' operator and entering a custom prefix. ![Advanced multi-dimension features](media/alerts-metric-multiple-time-series-single-rule/advanced-features.png)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-metric-overview.md
@@ -1,7 +1,7 @@
--- title: Understand how metric alerts work in Azure Monitor. description: Get an overview of what you can do with metric alerts and how they work in Azure Monitor.
-ms.date: 09/30/2020
+ms.date: 01/11/2021
ms.topic: conceptual ms.subservice: alerts
@@ -133,7 +133,7 @@ This feature is currently supported for platform metrics (not custom metrics) fo
| Service | Public Azure | Government | China | |:--------|:--------|:--------|:--------|
-| Virtual machines<sup>1</sup> | **Yes** | No | No |
+| Virtual machines<sup>1</sup> | **Yes** | **Yes** | No |
| SQL server databases | **Yes** | **Yes** | **Yes** | | SQL server elastic pools | **Yes** | **Yes** | **Yes** | | NetApp files capacity pools | **Yes** | **Yes** | **Yes** |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-metric.md
@@ -4,7 +4,7 @@ description: Learn how to use Azure portal or CLI to create, view, and manage me
author: harelbr ms.author: harelbr ms.topic: conceptual
-ms.date: 08/11/2020
+ms.date: 01/11/2021
ms.subservice: alerts --- # Create, view, and manage metric alerts using Azure Monitor
@@ -35,9 +35,9 @@ The following procedure describes how to create a metric alert rule in Azure por
7. You will see a chart for the metric for the last six hours. Use the **Chart period** dropdown to select to see longer history for the metric. 8. If the metric has dimensions, you will see a dimensions table presented. Select one or more values per dimension.
- - The displayed dimension values are based on metric data from the last three days.
- - If the dimension value you're looking for isn't displayed, click "+" to add a custom value.
- - You can also **Select \*** for any of the dimensions. **Select \*** will dynamically scale the selection to all current and future values for a dimension.
+ - The displayed dimension values are based on metric data from the last day.
+ - If the dimension value you're looking for isn't displayed, click "Add custom value" to add a custom dimension value.
+ - You can also **Select all current and future values** for any of the dimensions. This will dynamically scale the selection to all current and future values for a dimension.
The metric alert rule will evaluate the condition for all combinations of values selected. [Learn more about how alerting on multi-dimensional metrics works](alerts-metric-overview.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-troubleshoot-metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-troubleshoot-metric.md
@@ -4,7 +4,7 @@ description: Common issues with Azure Monitor metric alerts and possible solutio
author: harelbr ms.author: harelbr ms.topic: troubleshooting
-ms.date: 01/03/2021
+ms.date: 01/11/2021
ms.subservice: alerts --- # Troubleshooting problems in Azure Monitor metric alerts
@@ -85,9 +85,9 @@ If youΓÇÖre looking to alert on a specific metric but canΓÇÖt see it when creati
If you're looking to alert on [specific dimension values of a metric](./alerts-metric-overview.md#using-dimensions), but cannot find these values, note the following: 1. It might take a few minutes for the dimension values to appear under the **Dimension values** list
-1. The displayed dimension values are based on metric data collected in the last three days
-1. If the dimension value isnΓÇÖt yet emitted, click the '+' sign to add a custom value
-1. If youΓÇÖd like to alert on all possible values of a dimension (including future values), check the 'Select *' checkbox
+1. The displayed dimension values are based on metric data collected in the last day
+1. If the dimension value isn't yet emitted or isn't shown, you can use the 'Add custom value' option to add a custom dimension value
+1. If you'd like to alert on all possible values of a dimension (including future values), choose the 'Select all current and future values' option
## Metric alert rules still defined on a deleted resource
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-data-explorer-monitor-cross-service-query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-data-explorer-monitor-cross-service-query.md
@@ -1,7 +1,7 @@
--- title: Cross service query between Azure Monitor and Azure Data Explorer (preview) description: Query Azure Data Explorer data through Azure Log Analytics tools vice versa to join and analyze all your data in one place.
-author: orens
+author: osalzberg
ms.author: bwren ms.reviewer: bwren ms.subservice: logs
@@ -42,4 +42,4 @@ Use Azure Data Explorer to query data that was exported from your Log Analytics
Learn more about: * [create cross service queries between Azure Data Explorer and Azure Monitor](https://docs.microsoft.com/azure/data-explorer/query-monitor-data). Query Azure Monitor data from Azure Data Explorer * [create cross service queries between Azure Monitor and Azure Data Explorer](https://docs.microsoft.com/azure/azure-monitor/platform/azure-monitor-data-explorer-proxy). Query Azure Data Explorer data from Azure Monitor
-* [Log Analytics workspace data export in Azure Monitor (preview)](https://docs.microsoft.com/azure/data-explorer/query-monitor-data). Link and query Azure Blob storage account with Log Analytics Exported data.
\ No newline at end of file
+* [Log Analytics workspace data export in Azure Monitor (preview)](https://docs.microsoft.com/azure/data-explorer/query-monitor-data). Link and query Azure Blob storage account with Log Analytics Exported data.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-monitor-troubleshooting-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-monitor-troubleshooting-logs.md new file mode 100644
@@ -0,0 +1,139 @@
+---
+title: Azure Monitor Troubleshooting logs (Preview)
+description: Use Azure Monitor to quickly or periodically investigate issues, troubleshoot code or configuration problems, or address support cases, which often rely on searching over a high volume of data for specific insights.
+author: osalzberg
+ms.author: bwren
+ms.reviewer: bwren
+ms.subservice: logs
+ms.topic: conceptual
+ms.date: 12/29/2020
+
+---
+
+# Azure Monitor Troubleshooting logs (Preview)
+Use Azure Monitor to quickly or periodically investigate issues, troubleshoot code or configuration problems, or address support cases, which often rely on searching over a high volume of data for specific insights.
+
+>[!NOTE]
+> * Troubleshooting Logs is in preview.
+> * Contact the [Log Analytics team](mailto:orens@microsoft.com) with any questions or to apply for the feature.
+## Troubleshoot and query your code or configuration issues
+Use Azure Monitor Troubleshooting Logs to fetch your records and investigate problems in a simpler and cheaper way using KQL.
+Troubleshooting Logs decreases your charges by giving you basic capabilities for troubleshooting.
+
+> [!NOTE]
+>* Troubleshooting mode is configurable per table.
+>* Troubleshooting Logs can be applied to specific tables, currently the "Container Logs" and "App Traces" tables.
+>* There is a four-day free retention period, which can be extended at additional cost.
+>* By default, these tables inherit the workspace retention. To avoid additional charges, it is recommended to change the retention of these tables. [Learn how to change table retention](https://docs.microsoft.com/azure/azure-monitor/platform/manage-cost-storage).
+
+## Turn on Troubleshooting Logs for your tables
+
+To turn on Troubleshooting Logs for a table in your workspace, use the following API call.
+```http
+PUT https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}
+
+(The request body takes the same form as the response of a GET request for a single table.)
+
+Response:
+
+{
+ "properties": {
+ "retentionInDays": 40,
+ "isTroubleshootingAllowed": true,
+ "isTroubleshootEnabled": true
+ },
+ "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}",
+ "name": "{tableName}"
+ }
+```
+## Check if the Troubleshooting logs feature is enabled for a given table
+To check whether the Troubleshooting Log is enabled for a given table, you can use the following API call.
+
+```http
+GET https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}
+
+Response:
+"properties": {
+ "retentionInDays": 30,
+ "isTroubleshootingAllowed": true,
+ "isTroubleshootEnabled": true,
+ "lastTroubleshootDate": "Thu, 19 Nov 2020 07:40:51 GMT"
+ },
+ "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/{tableName}",
+ "name": " {tableName}"
+
+```
+## Check if the Troubleshooting logs feature is enabled for all of the tables in a workspace
+To check which tables have the Troubleshooting Log enabled, you can use the following API call.
+
+```http
+GET "https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables"
+
+Response:
+{
+ "properties": {
+ "retentionInDays": 30,
+ "isTroubleshootingAllowed": true,
+ "isTroubleshootEnabled": true,
+ "lastTroubleshootDate": "Thu, 19 Nov 2020 07:40:51 GMT"
+ },
+ "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/table1",
+ "name": "table1"
+ },
+ {
+ "properties": {
+ "retentionInDays": 7,
+ "isTroubleshootingAllowed": true
+ },
+ "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/table2",
+ "name": "table2"
+ },
+ {
+ "properties": {
+ "retentionInDays": 7,
+ "isTroubleshootingAllowed": false
+ },
+ "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/table3",
+ "name": "table3"
+ }
+```
+## Turn off Troubleshooting Logs for your tables
+
+To turn off Troubleshooting Logs for a table in your workspace, use the following API call.
+```http
+PUT https://PortalURL/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}
+
+(The request body takes the same form as the response of a GET request for a single table.)
+
+Response:
+
+{
+ "properties": {
+ "retentionInDays": 40,
+ "isTroubleshootingAllowed": true,
+ "isTroubleshootEnabled": false
+ },
+ "id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}",
+ "name": "{tableName}"
+ }
+```
+>[!TIP]
+>* You can use any REST API tool to run the commands. [Read More](https://docs.microsoft.com/rest/api/azure/)
+>* You need to use a bearer token for authentication. [Read More](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx)
+
+>[!NOTE]
+>* The "isTroubleshootingAllowed" flag ΓÇô describes if the table is allowed in the service
+>* The "isTroubleshootEnabled" indicates if the feature is enabled for the table - can be switched on or off (true or false)
+>* When disabling the "isTroubleshootEnabled" flag for a specific table, re-enabling it is possible only one week after the prior enable date.
+>* Currently this is supported only for tables under (some other SKUs will also be supported in the future) - [Read more about pricing](https://docs.microsoft.com/services-hub/health/azure_pricing).
+
+## Query limitations for Troubleshooting
+There are a few limitations for a table that is marked as "Troubleshooting Logs":
+* The table gets fewer processing resources and therefore isn't suitable for large dashboards, complex analytics, or many concurrent API calls.
+* Queries are limited to a time range of two days.
+* Purging won't work - [Read more about purge](https://docs.microsoft.com/rest/api/loganalytics/workspacepurge/purge).
+* Alerts are not supported through this service.
+## Next steps
+* [Write queries](https://docs.microsoft.com/azure/data-explorer/write-queries)
++
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/customer-managed-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/customer-managed-keys.md
@@ -5,7 +5,7 @@ ms.subservice: logs
ms.topic: conceptual author: yossi-y ms.author: yossiy
-ms.date: 11/18/2020
+ms.date: 01/10/2021
---
@@ -32,7 +32,7 @@ Log Analytics Dedicated Clusters use a Capacity Reservation [pricing model](../l
## How Customer-Managed key works in Azure Monitor
-Azure Monitor uses system-assigned managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level and allowing Customer-Managed key on multiple workspaces, a new Log Analytics *Cluster* resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The Log Analytics cluster storage uses the managed identity that\'s associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
+Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-Managed key protection on multiple workspaces, a new Log Analytics *Cluster* resource acts as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster's storage uses the managed identity that's associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
After the Customer-managed key configuration, new ingested data to workspaces linked to your dedicated cluster gets encrypted with your key. You can unlink workspaces from the cluster at any time. New data then gets ingested to Log Analytics storage and encrypted with Microsoft key, while you can query your new and old data seamlessly.
@@ -121,6 +121,12 @@ These settings can be updated in Key Vault via CLI and PowerShell:
## Create cluster
+> [!NOTE]
+> Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). A System-assigned managed identity is created with the cluster when you enter the `SystemAssigned` identity type, and it can be used later to grant access to your Key Vault. If you want to create a cluster that is configured for Customer-managed key at creation, create the cluster with a User-assigned managed identity that is granted access in your Key Vault -- update the cluster with the `UserAssigned` identity type, set the identity's resource ID in `UserAssignedIdentities`, and provide your key details in `keyVaultProperties`.
+
+> [!IMPORTANT]
+> Currently you can't define a Customer-managed key with User-assigned managed identity if your Key Vault is located in Private-Link (vNet). This limitation doesn't apply to System-assigned managed identity.
+ Follow the procedure illustrated in [Dedicated Clusters article](../log-query/logs-dedicated-clusters.md#creating-a-cluster). ## Grant Key Vault permissions
@@ -128,7 +134,7 @@ Follow the procedure illustrated in [Dedicated Clusters article](../log-query/lo
Create access policy in Key Vault to grants permissions to your cluster. These permissions are used by the underlay Azure Monitor storage. Open your Key Vault in Azure portal and click *"Access Policies"* then *"+ Add Access Policy"* to create a policy with these settings: - Key permissions: select *'Get'*, *'Wrap Key'* and *'Unwrap Key'*.-- Select principal: enter the cluster name or principal-id.
+- Select principal: depending on the identity type used in the cluster, enter either the cluster name or cluster principal ID (for system-assigned managed identity), or the user-assigned managed identity name.
![grant Key Vault permissions](media/customer-managed-keys/grant-key-vault-permissions-8bit.png)
@@ -234,11 +240,15 @@ Follow the procedure illustrated in [Dedicated Clusters article](../log-query/lo
## Key revocation
-You can revoke access to data by disabling your key, or deleting the cluster's access policy in your Key Vault. The Log Analytics cluster storage will always respect changes in key permissions within an hour or sooner and Storage will become unavailable. Any new data ingested to workspaces linked with your cluster gets dropped and won't be recoverable, data is inaccessible and queries to these workspaces fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached.
+You can revoke access to data by disabling your key, or deleting the cluster's access policy in your Key Vault.
+
+> [!IMPORTANT]
+> - If your cluster is set with User-assigned managed identity, setting `UserAssignedIdentities` with `None` suspends the cluster and prevents access to your data, but you can't revert the revocation and activate the cluster without opening a support request. This limitation doesn't apply to System-assigned managed identity.
+> - The recommended key revocation action is by disabling your key in your Key Vault.
-Ingested data in last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This gets deleted on key revocation operation and becomes inaccessible as well.
+The cluster storage will always respect changes in key permissions within an hour or sooner and storage will become unavailable. Any new data ingested to workspaces linked with your cluster gets dropped and won't be recoverable, data becomes inaccessible and queries on these workspaces fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached. Ingested data in last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This gets deleted on key revocation operation and becomes inaccessible as well.
-Storage periodically polls your Key Vault to attempt to unwrap the encryption key and once accessed, data ingestion and query resume within 30 minutes.
+The cluster's storage periodically polls your Key Vault to attempt to unwrap the encryption key and once accessed, data ingestion and query resume within 30 minutes.
## Key rotation
@@ -401,6 +411,38 @@ Customer-Managed key is provided on dedicated cluster and these operations are r
- If you create a cluster and get an error "<region-name> doesnΓÇÖt support Double Encryption for clusters.", you can still create the cluster without Double Encryption. Add `"properties": {"isDoubleEncryptionEnabled": false}` property in the REST request body. - Double encryption setting can not be changed after the cluster has been created.
+ - If your cluster is set with User-assigned managed identity, setting `UserAssignedIdentities` with `None` suspends the cluster and prevents access to your data, but you can't revert the revocation and activate the cluster without opening a support request. This limitation doesn't apply to System-assigned managed identity.
+
+ - Currently you can't define a Customer-managed key with User-assigned managed identity if your Key Vault is located in Private-Link (vNet). This limitation doesn't apply to System-assigned managed identity.
+
+## Troubleshooting
+
+- Behavior with Key Vault availability
+ - In normal operation -- Storage caches AEK for short periods of time and goes back to Key Vault to unwrap periodically.
+
+ - Transient connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for a short while longer and this overcomes any small blips in availability. The query and ingestion capabilities continue without interruption.
+
+ - Live site -- unavailability of about 30 minutes will cause the Storage account to become unavailable. The query capability is unavailable and ingested data is cached for several hours using Microsoft key to avoid data loss. When access to Key Vault is restored, query becomes available and the temporary cached data is ingested to the data-store and encrypted with Customer-Managed key.
+
+ - Key Vault access rate -- The frequency that Azure Monitor Storage accesses Key Vault for wrap and unwrap operations is between 6 to 60 seconds.
+
+- If you create a cluster and specify the KeyVaultProperties immediately, the operation may fail since the
+ access policy can't be defined until system identity is assigned to the cluster.
+
+- If you update existing cluster with KeyVaultProperties and 'Get' key Access Policy is missing in Key Vault, the operation will fail.
+
+- If you get a conflict error when creating a cluster, it may be that you have deleted your cluster in the last 14 days and it's in a soft-delete period. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
+
+- If you update your cluster while an operation is in progress, the operation will fail.
+
+- If you fail to deploy your cluster, verify that your Azure Key Vault, cluster, and linked Log Analytics workspaces are in the same region. They can be in different subscriptions.
+
+- If you update your key version in Key Vault and don't update the new key identifier details in the cluster, the Log Analytics cluster will keep using your previous key and your data will become inaccessible. Update new key identifier details in the cluster to resume data ingestion and ability to query data.
+
+- Some operations are long and can take a while to complete -- these are cluster create, cluster key update and cluster delete. You can check the operation status in two ways:
+ 1. When using REST, copy the Azure-AsyncOperation URL value from the response and follow the [asynchronous operations status check](#asynchronous-operations-and-status-check).
+ 2. Send a GET request to the cluster or workspace and observe the response. For example, an unlinked workspace won't have the *clusterResourceId* under *features*.
+ - Error messages **Cluster Create**
@@ -438,35 +480,6 @@ Customer-Managed key is provided on dedicated cluster and these operations are r
**Workspace unlink** - 404 -- Workspace not found. The workspace you specified doesnΓÇÖt exist or was deleted. - 409 -- Workspace link or unlink operation in process.-
-## Troubleshooting
--- Behavior with Key Vault availability
- - In normal operation -- Storage caches AEK for short periods of time and goes back to Key Vault to unwrap periodically.
-
- - Transient connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for a short while longer and this overcomes any small blips in availability. The query and ingestion capabilities continue without interruption.
-
- - Live site -- unavailability of about 30 minutes will cause the Storage account to become unavailable. The query capability is unavailable and ingested data is cached for several hours using Microsoft key to avoid data loss. When access to Key Vault is restored, query becomes available and the temporary cached data is ingested to the data-store and encrypted with Customer-Managed key.
-
- - Key Vault access rate -- The frequency that Azure Monitor Storage accesses Key Vault for wrap and unwrap operations is between 6 to 60 seconds.
--- If you create a cluster and specify the KeyVaultProperties immediately, the operation may fail since the
- access policy can't be defined until system identity is assigned to the cluster.
--- If you update existing cluster with KeyVaultProperties and 'Get' key Access Policy is missing in Key Vault, the operation will fail.--- If you get conflict error when creating a cluster – It may be that you have deleted your cluster in the last 14 days and it’s in a soft-delete period. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.--- If you update your cluster while an operation is in progress, the operation will fail.--- If you fail to deploy your cluster, verify that your Azure Key Vault, cluster and linked Log Analytics workspaces are in the same region. The can be in different subscriptions.--- If you update your key version in Key Vault and don't update the new key identifier details in the cluster, the Log Analytics cluster will keep using your previous key and your data will become inaccessible. Update new key identifier details in the cluster to resume data ingestion and ability to query data.--- Some operations are long and can take a while to complete -- these are cluster create, cluster key update and cluster delete. You can check the operation status in two ways:
- 1. when using REST, copy the Azure-AsyncOperation URL value from the response and follow the [asynchronous operations status check](#asynchronous-operations-and-status-check).
- 2. Send GET request to cluster or workspace and observe the response. For example, unlinked workspace won't have the *clusterResourceId* under *features*.
- ## Next steps - Learn about [Log Analytics dedicated cluster billing](../platform/manage-cost-storage.md#log-analytics-dedicated-clusters)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -60,7 +60,7 @@ After you've prepped your ITSM tools, complete these steps to create a connectio
1. In **All resources**, look for **ServiceDesk(*your workspace name*)**:
- ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-overview/itsm-connections.png)
+ ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
1. Under **Workspace Data Sources** in the left pane, select **ITSM Connections**:
@@ -127,11 +127,12 @@ Use the following procedure to create action groups:
>[!NOTE] >
- > * This section is relevant only to Log Search Alerts.
- > * Metric Alerts and Activity Log Alerts will always create one work item per alert.
+ > * This section is relevant only for Log Search Alerts.
+ > * For all other alert types, one work item will be created per alert.
- * In a case you select in the work item dropdown "Incident" or "Alert":
- * If you check the **"Create individual work items for each Configuration Item"** check box, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system.
+ * In case you select "Incident" or "Alert" in the "Work Item" dropdown:
+ ![Screenshot that shows the ITSM Incident window.](media/itsmc-overview/itsm-action-configuration.png)
+ * If you check the **"Create individual work items for each Configuration Item"** check box, every configuration item in every alert will create a new work item. As a result of several alerts impacting the same configuration items, there will be more than one work item for each configuration item.
For example: 1) Alert 1 with 3 Configuration Items: A, B, C - will create 3 work items.
@@ -145,15 +146,14 @@ Use the following procedure to create action groups:
For example: 1) Alert 1 with 3 Configuration Items: A, B, C - will create 1 work item.
- 2) Alert 2 for the same alert rule as phase 1 with 1 Configuration Item: D - will be merged to the work item in phase 1.
+ 2) Alert 2 for the same alert rule as in step a with 1 Configuration Item: D - D will be attached to the impacted configuration items list in the work item created in step a.
3) Alert 3 for a different alert rule with 1 Configuration Item: E - will create 1 work item.
- ![Screenshot that shows the ITSM Incident window.](media/itsmc-overview/itsm-action-configuration.png)
+ * In case you select "Event" in the "Work Item" dropdown:
+ ![Screenshot that shows the ITSM Event window.](media/itsmc-overview/itsm-action-configuration-event.png)
- * In a case you select in the work item dropdown "Event":
* If you select **"Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)"** in the radio buttons selection, a work item will be created per each row in the search results of the log search alert query. In the payload of the work item the description property will have the row from the search results. * If you select **"Create individual work items for each Configuration Item"** in the radio buttons selection, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system. This will be the same as the checking the checkbox in Incident/Alert section.
- ![Screenshot that shows the ITSM Event window.](media/itsmc-overview/itsm-action-configuration-event.png)
10. Select **OK**.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-resync-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-resync-servicenow.md
@@ -18,7 +18,7 @@ ITSM gives you the option to send the alerts to external ticketing system such a
## Visualize and analyze the incident and change request data
-Depending on your configuration when you set up a connection, ITSMC can sync up to 120 days of incident and change request data. The log record schema for this data is provided in the [Additional information Section](./itsmc-overview.md) of this article.
+Depending on your configuration when you set up a connection, ITSMC can sync up to 120 days of incident and change request data. The log record schema for this data is provided in the [Additional information Section](./itsmc-synced-data.md) of this article.
You can visualize the incident and change request data by using the ITSMC dashboard:
@@ -26,6 +26,31 @@ You can visualize the incident and change request data by using the ITSMC dashbo
The dashboard also provides information about connector status, which you can use as a starting point to analyze problems with the connections.
+### Error Investigation using the dashboard
+
+To view the errors in the dashboard, follow these steps:
+
+1. In **All resources**, look for **ServiceDesk(*your workspace name*)**:
+
+ ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
+
+2. Under **Workspace Data Sources** in the left pane, select **ITSM Connections**:
+
+ ![Screenshot that shows the ITSM Connections menu item.](media/itsmc-overview/add-new-itsm-connection.png)
+
+3. Under **Summary** in the left box **IT Service Management Connector**, select **View Summary**:
+
+ ![Screenshot that shows view summary.](media/itsmc-resync-servicenow/dashboard-view-summary.png)
+
+4. Under **Summary** in the left box **IT Service Management Connector**, click on the graph:
+
+ ![Screenshot that shows graph click.](media/itsmc-resync-servicenow/dashboard-graph-click.png)
+
+5. Use this dashboard to review the status and the errors in your connector.
+ ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/connector-dashboard.png)
+
+### Service map
+ You can also visualize the incidents synced against the affected computers in Service Map. Service Map automatically discovers the application components on Windows and Linux systems and maps the communication between services. It allows you to view your servers as you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, and ports across any TCP-connected architecture. Other than the installation of an agent, no configuration is required. For more information, see [Using Service Map](../insights/service-map.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
@@ -5,12 +5,70 @@ ms.subservice:
ms.topic: overview author: bwren ms.author: bwren
-ms.date: 12/04/2020
+ms.date: 01/11/2021
--- # What's new in Azure Monitor documentation? This article lists Azure Monitor articles that are either new or have been significantly updated. It will be refreshed the first week of each month to include article updates from the previous month.
+## December 2020
+
+### General
+- [Azure Monitor customer-managed key](platform/customer-managed-keys.md) - Added error messages.
+- [Partners who integrate with Azure Monitor](platform/partners.md) - Added section on Event Hub integration.
+
+### Agents
+- [Cross-resource query Azure Data Explorer by using Azure Monitor](platform/azure-monitor-data-explorer-proxy.md) - New article.
+- [Overview of the Azure monitoring agents](platform/agents-overview.md) - Added Oracle 8 support.
+
+### Alerts
+- [Troubleshooting Azure metric alerts](platform/alerts-troubleshoot-metric.md) - Added troubleshooting for dynamic thresholds.
+- [IT Service Management Connector in Log Analytics](platform/itsmc-definition.md) - New article.
+- [IT Service Management Connector overview](platform/itsmc-overview.md) - Restructured troubleshooting information.
+- [Connect Cherwell with IT Service Management Connector](platform/itsmc-connections-cherwell.md) - New article.
+- [Connect Provance with IT Service Management Connector](platform/itsmc-connections-provance.md) - New article.
+- [Connect SCSM with IT Service Management Connector](platform/itsmc-connections-scsm.md) - New article.
+- [Connect ServiceNow with IT Service Management Connector](platform/itsmc-connections-servicenow.md) - New article.
+- [How to manually fix ServiceNow sync problems](platform/itsmc-resync-servicenow.md) - Restructured troubleshooting information.
+
+### Application Insights
+- [Azure Application Insights for JavaScript web apps](app/javascript.md) - Added connection string setup.
+- [Azure Application Insights standard metrics](app/standard-metrics.md) - New article.
+- [Azure Monitor Application Insights Java](app/java-in-process-agent.md) - Additional information on sending custom telemetry from your application.
+- [Continuous export of telemetry from Application Insights](app/export-telemetry.md) - Added diagnostic settings based export.
+- [Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](app/snapshot-debugger-function-app.md) - New article.
+- [IP addresses used by Application Insights and Log Analytics](app/ip-addresses.md) - Added IP addresses for Azure Government.
+- [Troubleshoot problems with Azure Application Insights Profiler](app/profiler-troubleshooting.md) - Added information on the Diagnostic Services site extension's status page.
+- [Troubleshoot your Azure Application Insights availability tests](app/troubleshoot-availability.md) - Updates to troubleshooting for ping tests.
+- [Troubleshooting Azure Monitor Application Insights for Java](app/java-standalone-troubleshoot.md) - New article.
+
+### Containers
+- [Reports in Azure Monitor for containers](insights/container-insights-reports.md) - New article.
+
+### Logs
+- [Azure Monitor Logs Dedicated Clusters](log-query/logs-dedicated-clusters.md) - Added automated commands, methods to unlink and remove, and troubleshooting.
+- [Cross service query between Azure Monitor and Azure Data Explorer (preview)](platform/azure-data-explorer-monitor-cross-service-query.md) - New article.
+- [Log Analytics workspace data export in Azure Monitor (preview)](platform/logs-data-export.md) - Added ARM templates.
+
+### Metrics
+- [Advanced features of Azure Metrics Explorer](platform/metrics-charts.md) - Added information on resource scope picker.
+- [Viewing multiple resources in Metrics Explorer](platform/metrics-dynamic-scope.md) - New article.
+
+### Networks
+- [Azure Networking Analytics solution in Azure Monitor](insights/azure-networking-analytics.md) - Added information on Network Insights workbook.
+
+### Virtual Machines
+- [Enable Azure Monitor for a hybrid environment](insights/vminsights-enable-hybrid.md) - New version of dependency agent.
+
+### Visualizations
+- [Azure Monitor workbook map visualizations](platform/workbooks-map-visualizations.md) - New article.
+- [Azure Monitor Workbooks bring your own storage](platform/workbooks-bring-your-own-storage.md) - New article.
++ ## November 2020 ### General
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-solution-architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: conceptual
-ms.date: 01/04/2020
+ms.date: 01/11/2021
ms.author: b-juche --- # Solution architectures using Azure NetApp Files
@@ -131,6 +131,7 @@ This section provides solutions for Azure platform services.
* [Integrate Azure NetApp Files with Azure Kubernetes Service](../aks/azure-netapp-files.md) * [Out-of-This-World Kubernetes performance on Azure with Azure NetApp Files](https://cloud.netapp.com/blog/ma-anf-blg-configure-kubernetes-openshift) * [Trident - Storage Orchestrator for Containers](https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/anf.html)
+* [Magento e-commerce platform in Azure Kubernetes Service (AKS)](/azure/architecture/example-scenario/magento/magento-azure)
### Azure Batch
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-support-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
@@ -2,7 +2,7 @@
title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group or subscription. ms.topic: conceptual
-ms.date: 12/07/2020
+ms.date: 01/11/2021
--- # Move operation support for resources
@@ -749,7 +749,7 @@ Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | > | ------------- | ----------- | ---------- | > | availableskus | No | No |
-> | databoxedgedevices | Yes | Yes |
+> | databoxedgedevices | No | No |
## Microsoft.Databricks
azure-sql-edge https://docs.microsoft.com/en-us/azure/azure-sql-edge/track-data-changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/track-data-changes.md
@@ -31,6 +31,9 @@ To administer and monitor this feature, see [Administer and monitor change data
To understand how to query and work with the changed data, see [Work with change data](/sql/relational-databases/track-changes/work-with-change-data-sql-server).
+> [!NOTE]
+> Change data capture functions that depend on CLR aren't supported on Azure SQL Edge.
+ ## Change tracking To understand the details of how this feature works, see [About change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server).
@@ -58,4 +61,4 @@ For more information, see [Temporal tables](/sql/relational-databases/tables/tem
- [Data streaming in Azure SQL Edge ](stream-data.md) - [Machine learning and AI with ONNX in Azure SQL Edge ](onnx-overview.md) - [Configure replication to Azure SQL Edge](configure-replication.md)-- [Backup and restore databases in Azure SQL Edge](backup-restore.md)\ No newline at end of file
+- [Backup and restore databases in Azure SQL Edge](backup-restore.md)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/automated-backups-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
@@ -116,7 +116,7 @@ Backup storage consumption up to the maximum data size for a database is not cha
## Backup retention
-For all new, restored, and copied databases, Azure SQL Database and Azure SQL Managed Instance retain sufficient backups to allow PITR within the last 7 days by default. With the exception of Hyperscale databases, you can [change backup retention period](#change-the-pitr-backup-retention-period) per each active database in the 1-35 day range. As described in [Backup storage consumption](#backup-storage-consumption), backups stored to enable PITR may be older than the retention period. For Azure SQL Managed Instance only, it is possible to set the PITR backup retention rate once a database has been deleted in the 0-35 days range.
+For all new, restored, and copied databases, Azure SQL Database and Azure SQL Managed Instance retain sufficient backups to allow PITR within the last 7 days by default. With the exception of Hyperscale and Basic tier databases, you can [change the backup retention period](#change-the-pitr-backup-retention-period) for each active database in the 1-35 day range. As described in [Backup storage consumption](#backup-storage-consumption), backups stored to enable PITR may be older than the retention period. For Azure SQL Managed Instance only, you can set the PITR backup retention period for a deleted database in the 0-35 day range.
If you delete a database, the system keeps backups in the same way it would for an online database with its specific retention period. You cannot change backup retention period for a deleted database.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/database-export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-export.md
@@ -9,7 +9,7 @@ author: stevestein
ms.custom: sqldbrb=2 ms.author: sstein ms.reviewer:
-ms.date: 07/16/2019
+ms.date: 01/11/2021
ms.topic: how-to --- # Export to a BACPAC file - Azure SQL Database and Azure SQL Managed Instance
@@ -24,6 +24,7 @@ When you need to export a database for archiving or for moving to another platfo
- If you are exporting to blob storage, the maximum size of a BACPAC file is 200 GB. To archive a larger BACPAC file, export to local storage. - Exporting a BACPAC file to Azure premium storage using the methods discussed in this article is not supported. - Storage behind a firewall is currently not supported.
+- The storage file name (the input value for StorageURI) must be fewer than 128 characters long, must not end with '.', and must not contain special characters such as a space or any of '<', '>', '*', '%', '&', ':', '\', '/', '?'. A small validation sketch follows this list.
- If the export operation exceeds 20 hours, it may be canceled. To increase performance during export, you can: - Temporarily increase your compute size.
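The following is a minimal sketch of checking a file name against these constraints before you start an export; the helper name is hypothetical and not part of any SDK:

```python
import re

# Hypothetical helper that mirrors the documented constraints: fewer than
# 128 characters, must not end with '.', and no spaces or < > * % & : \ / ?
def is_valid_bacpac_name(name: str) -> bool:
    if len(name) >= 128 or name.endswith("."):
        return False
    return re.search(r"[ <>*%&:\\/?]", name) is None

print(is_valid_bacpac_name("mydatabase.bacpac"))   # True
print(is_valid_bacpac_name("my database.bacpac"))  # False: contains a space
```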
@@ -106,4 +107,4 @@ $exportStatus
- To learn about exporting a BACPAC from a SQL Server database, see [Export a Data-tier Application](/sql/relational-databases/data-tier-applications/export-a-data-tier-application) - To learn about using the Data Migration Service to migrate a database, see [Migrate from SQL Server to Azure SQL Database offline using DMS](../../dms/tutorial-sql-server-to-azure-sql.md). - If you are exporting from SQL Server as a prelude to migration to Azure SQL Database, see [Migrate a SQL Server database to Azure SQL Database](migrate-to-database-from-sql-server.md).-- To learn how to manage and share storage keys and shared access signatures securely, see [Azure Storage Security Guide](../../storage/blobs/security-recommendations.md).\ No newline at end of file
+- To learn how to manage and share storage keys and shared access signatures securely, see [Azure Storage Security Guide](../../storage/blobs/security-recommendations.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/faq.md
@@ -25,6 +25,10 @@ The service is continuously being added to new regions, so view the [latest serv
All Azure services will be available to Azure VMware Solution customers. Performance and availability limitations for specific services will need to be addressed on a case-by-case basis.
+#### What guest operating systems are compatible with Azure VMware Solution?
+
+You can find information about guest operating system compatibility with vSphere by using the [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software&details=1&releases=485&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc&testConfig=16). To identify the version of vSphere running in Azure VMware Solution, see [VMware software versions](concepts-private-clouds-clusters.md#vmware-software-versions).
+ #### Do I use the same tools that I use now to manage private cloud resources? Yes. The Azure portal is used for deployment and several management operations. vCenter and NSX Manager are used to manage vSphere and NSX-T resources.
backup https://docs.microsoft.com/en-us/azure/backup/sap-hana-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-backup-support-matrix.md
@@ -22,7 +22,7 @@ Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2 <br><br> As of August 1st, 2020, SAP HANA backup for RHEL (7.4, 7.6, 7.7 & 8.1) is generally available. | | | **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x <= SPS04 Rev 48, SPS05 (yet to be validated for encryption enabled scenarios) | | | **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesn't fail over to the secondary node automatically. Configuring backup should be done separately for each node. |
-| **HANA Instances** | A single SAP HANA instance on a single Azure VM - scale up only | Multiple SAP HANA instances on a single VM |
+| **HANA Instances** | A single SAP HANA instance on a single Azure VM - scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. |
| **HANA database types** | Single Database Container (SDC) ON 1.x, Multi-Database Container (MDC) on 2.x | MDC in HANA 1.x | | **HANA database size** | HANA databases of size <= 2 TB (this isn't the memory size of the HANA system) | | | **Backup types** | Full, Differential, Incremental (Preview) and Log backups | Snapshots |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk.md
@@ -46,7 +46,7 @@ Once you've created a new project, install the client library by right-clicking
In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `computer-vision-quickstart`. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*. ```console
-dotnet new console -n (product-name)-quickstart
+dotnet new console -n computer-vision-quickstart
``` Change your directory to the newly created app folder. You can build the application with:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/includes/quickstarts-sdk/python-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/python-sdk.md
@@ -38,6 +38,12 @@ You can install the client library with:
pip install --upgrade azure-cognitiveservices-vision-computervision ```
+Also install the Pillow library.
+
+```console
+pip install pillow
+```
+ ### Create a new Python application Create a new Python file&mdash;*quickstart-file.py*, for example. Then open it in your preferred editor or IDE and import the following libraries.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md
@@ -122,12 +122,18 @@ In order to keep the prediction endpoint app loaded even when there is no traffi
Learn more about how to configure the App Service [General settings](../../../app-service/configure-common.md#configure-general-settings). ### Configure App Service Environment to host QnA Maker App Service
-The App Service Environment can be used to host QnA Maker app service. If the App Service Environment is internal, then you need to follow these steps:
-1. Create an App service and an Azure search service.
-2. Expose the app service and allow QnA Maker availability as:
- * Publicly available - default
- * DNS service tag: `CognitiveServicesManagement`
-3. Create a QnA Maker cognitive service instance (Microsoft.CognitiveServices/accounts) using Azure Resource Manager, where QnA Maker endpoint should be set to App Service Environment.
+The App Service Environment (ASE) can be used to host the QnA Maker App Service. Follow these steps:
+
+1. Create an App Service Environment and mark it as "external". Follow the [tutorial](https://docs.microsoft.com/azure/app-service/environment/create-external-ase) for instructions.
+2. Create an App service inside the App Service Environment.
+ * Check the configuration for the App Service and add 'PrimaryEndpointKey' as an application setting. The value for 'PrimaryEndpointKey' should be set to "\<app-name\>-PrimaryEndpointKey". The app name is defined in the App Service URL. For instance, if the App Service URL is "mywebsite.myase.p.azurewebsite.net", then the app name is "mywebsite". In this case, the value for 'PrimaryEndpointKey' should be set to "mywebsite-PrimaryEndpointKey" (see the sketch after these steps).
+ * Create an Azure search service.
+ * Ensure Azure Search and App Settings are appropriately configured.
+ Follow this [tutorial](https://docs.microsoft.com/azure/cognitive-services/qnamaker/reference-app-service?tabs=v1#app-service).
+3. Update the Network Security Group associated with the App Service Environment
+ * Update pre-created Inbound Security Rules as per your requirements.
+ * Add a new Inbound Security Rule with source as 'Service Tag' and source service tag as 'CognitiveServicesManagement'.
+4. Create a QnA Maker cognitive service instance (Microsoft.CognitiveServices/accounts) using Azure Resource Manager, where the QnA Maker endpoint should be set to the App Service endpoint created above (https://mywebsite.myase.p.azurewebsite.net).
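As a small sketch of the naming convention described in step 2, using the example host name from this article:

```python
# Derive the 'PrimaryEndpointKey' application setting value from the App Service URL.
app_service_url = "mywebsite.myase.p.azurewebsite.net"  # example from step 2
app_name = app_service_url.split(".")[0]                # -> "mywebsite"
primary_endpoint_key_value = f"{app_name}-PrimaryEndpointKey"
print(primary_endpoint_key_value)                       # mywebsite-PrimaryEndpointKey
```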
### Network isolation for App Service
@@ -380,4 +386,4 @@ If you delete any of the Azure resources used for your QnA Maker knowledge bases
Learn more about the [App service](../../../app-service/index.yml) and [Search service](../../../search/index.yml). > [!div class="nextstepaction"]
-> [Learn how to author with others](../index.yml)
\ No newline at end of file
+> [Learn how to author with others](../index.yml)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/faq-stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
@@ -131,7 +131,9 @@ See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
**Q: How long will it take to train a custom model with audio data?**
-**A**: Training a model with audio data is a lengthy process. Depending on the amount of data, it can take several days to create a custom model. If it cannot be finished within one week, the service might abort the training operation and report the model as failed. For faster results, use one of the [regions](custom-speech-overview.md#set-up-your-azure-account) where dedicated hardware is available for training. You can copy the fully trained model to another region using the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription). Training with just text is much faster and typically finishes within minutes.
+**A**: Training a model with audio data can be a lengthy process. Depending on the amount of data, it can take several days to create a custom model. If it cannot be finished within one week, the service might abort the training operation and report the model as failed.
+
+For faster results, use one of the [regions](custom-speech-overview.md#set-up-your-azure-account) where dedicated hardware is available for training. In general, the service processes approximately 10 hours of audio data per day in regions with such hardware. It can only process about 1 hour of audio data per day in other regions. You can copy the fully trained model to another region using the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription). Training with just text is much faster and typically finishes within minutes.
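As a back-of-the-envelope illustration of those approximate throughput figures (an estimate only, not a service guarantee):

```python
import math

def estimated_training_days(audio_hours: float, dedicated_hardware: bool) -> int:
    # Approximate figures from this FAQ: ~10 hours of audio processed per day
    # with dedicated hardware, ~1 hour per day in other regions.
    hours_per_day = 10 if dedicated_hardware else 1
    return math.ceil(audio_hours / hours_per_day)

print(estimated_training_days(20, dedicated_hardware=True))   # 2 days
print(estimated_training_days(20, dedicated_hardware=False))  # 20 days (may be aborted after a week)
```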
Some base models cannot be customized with audio data. For them the service will just use the text of the transcription for training and ignore the audio data. Training will then be finished much faster and results will be the same as training with just text.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
@@ -45,7 +45,7 @@ This table lists accepted data types, when each data type should be used, and th
| [Audio + Human-labeled transcripts](#audio--human-labeled-transcript-data-for-testingtraining) | Yes<br>Used to evaluate accuracy | 0.5-5 hours of audio | Yes | 1-20 hours of audio | | [Related text](#related-text-data-for-training) | No | N/a | Yes | 1-200 MB of related text |
-When you train a new model, start with [related text](#related-text-data-for-training). This data will already improve the recognition of special terms and phrases.
+When you train a new model, start with [related text](#related-text-data-for-training). This data will already improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes vs. days).
Files should be grouped by type into a dataset and uploaded as a .zip file. Each dataset can only contain a single data type.
@@ -133,7 +133,9 @@ After you've gathered your audio files and corresponding transcriptions, package
> [!div class="mx-imgBorder"] > ![Select audio from the Speech Portal](./media/custom-speech/custom-speech-audio-transcript-pairs.png)
-See [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account) for a list of recommended regions for your Speech service subscriptions. Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model.
+See [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account) for a list of recommended regions for your Speech service subscriptions. Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model. In these regions, training can process about 10 hours of audio per day compared to just 1 hour per day in other regions. If model training cannot be completed within a week, the model will be marked as failed.
+
+Not all base models support training with audio data. If the base model does not support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text.
## Related text data for training
@@ -146,6 +148,8 @@ Product names or features that are unique, should include related text data for
Sentences can be provided as a single text file or multiple text files. To improve accuracy, use text data that is closer to the expected spoken utterances. Pronunciations should be provided as a single text file. Everything can be packaged as a single zip file and uploaded to the <a href="https://speech.microsoft.com/customspeech" target="_blank">Custom Speech portal <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+Training with related text usually completes within a few minutes.
+ ### Guidelines to create a sentences file To create a custom model using sentences, you'll need to provide a list of sample utterances. Utterances _do not_ need to be complete or grammatically correct, but they must accurately reflect the spoken input you expect in production. If you want certain terms to have increased weight, add several sentences that include these specific terms.
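A minimal sketch, using hypothetical file names and sample utterances, of packaging related text as a single .zip file for upload:

```python
import zipfile

# Illustrative sample utterances that reflect expected spoken input.
sentences = [
    "Rebook my flight to Contoso City",
    "Show me the balance of my Contoso savings account",
]
with open("sentences.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sentences))

# Package everything as a single zip file for upload to the Custom Speech portal.
with zipfile.ZipFile("related-text.zip", "w") as archive:
    archive.write("sentences.txt")
```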
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows.md
@@ -72,8 +72,8 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
This example uses the `AddIntent()` function to individually add intents. If you want to add all intents from a model, use `AddAllIntents(model)` and pass the model. > [!NOTE]
-> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
-> Speech SDK only supports LUIS v2.0 endpoints, and
+> Speech SDK only supports LUIS v2.0 endpoints.
+> You must manually modify the v3.0 endpoint URL found in the example query field to use a v2.0 URL pattern.
> LUIS v2.0 endpoints always follow one of these two patterns: > * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=` > * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet.md
@@ -70,8 +70,8 @@ You need to associate a `LanguageUnderstandingModel` with the intent recognizer,
This example uses the `AddIntent()` function to individually add intents. If you want to add all intents from a model, use `AddAllIntents(model)` and pass the model. > [!NOTE]
-> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
-> Speech SDK only supports LUIS v2.0 endpoints, and
+> Speech SDK only supports LUIS v2.0 endpoints.
+> You must manually modify the v3.0 endpoint URL found in the example query field to use a v2.0 URL pattern.
> LUIS v2.0 endpoints always follow one of these two patterns: > * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=` > * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/java/jre https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/java/jre.md
@@ -67,8 +67,8 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
This example uses the `addIntent()` function to individually add intents. If you want to add all intents from a model, use `addAllIntents(model)` and pass the model. > [!NOTE]
-> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
-> Speech SDK only supports LUIS v2.0 endpoints, and
+> Speech SDK only supports LUIS v2.0 endpoints.
+> You must manually modify the v3.0 endpoint URL found in the example query field to use a v2.0 URL pattern.
> LUIS v2.0 endpoints always follow one of these two patterns: > * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=` > * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/browser.md
@@ -184,8 +184,8 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
``` > [!NOTE]
-> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
-> Speech SDK only supports LUIS v2.0 endpoints, and
+> Speech SDK only supports LUIS v2.0 endpoints.
+> You must manually modify the v3.0 endpoint URL found in the example query field to use a v2.0 URL pattern.
> LUIS v2.0 endpoints always follow one of these two patterns: > * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=` > * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/python/python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/python/python.md
@@ -67,8 +67,8 @@ Insert this code below your `IntentRecognizer`. Make sure that you replace `"You
This example uses the `add_intents()` function to add a list of explicitly-defined intents. If you want to add all intents from a model, use `add_all_intents(model)` and pass the model. > [!NOTE]
-> You can create a LanguageUnderstandingModel by passing an endpoint URL to the FromEndpoint method.
-> Speech SDK only supports LUIS v2.0 endpoints, and
+> Speech SDK only supports LUIS v2.0 endpoints.
+> You must manually modify the v3.0 endpoint URL found in the example query field to use a v2.0 URL pattern.
> LUIS v2.0 endpoints always follow one of these two patterns: > * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=` > * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-docker-cli.md
@@ -32,6 +32,8 @@ docker login myregistry.azurecr.io
``` Both commands return `Login Succeeded` once completed.
+> [!NOTE]
+>* You might want to use Visual Studio Code with the Docker extension for a faster and more convenient login.
> [!TIP] > Always specify the fully qualified registry name (all lowercase) when you use `docker login` and when you tag images for pushing to your registry. In the examples in this article, the fully qualified name is *myregistry.azurecr.io*.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipeline-execution-triggers.md
@@ -376,7 +376,7 @@ The following table provides a comparison of the tumbling window trigger and sch
| **Reliability** | 100% reliability. Pipeline runs can be scheduled for all windows from a specified start date without gaps. | Less reliable. | | **Retry capability** | Supported. Failed pipeline runs have a default retry policy of 0, or a policy that's specified by the user in the trigger definition. Automatically retries when the pipeline runs fail due to concurrency/server/throttling limits (that is, status codes 400: User Error, 429: Too many requests, and 500: Internal Server error). | Not supported. | | **Concurrency** | Supported. Users can explicitly set concurrency limits for the trigger. Allows between 1 and 50 concurrent triggered pipeline runs. | Not supported. |
-| **System variables** | Along with @trigger().scheduledTime and @trigger().startTime, it also supports the use of the **WindowStart** and **WindowEnd** system variables. Users can access `triggerOutputs().windowStartTime` and `triggerOutputs().windowEndTime` as trigger system variables in the trigger definition. The values are used as the window start time and window end time, respectively. For example, for a tumbling window trigger that runs every hour, for the window 1:00 AM to 2:00 AM, the definition is `triggerOutputs().WindowStartTime = 2017-09-01T01:00:00Z` and `triggerOutputs().WindowEndTime = 2017-09-01T02:00:00Z`. | Only supports default @trigger().scheduledTime and @trigger().startTime variables. |
+| **System variables** | Along with @trigger().scheduledTime and @trigger().startTime, it also supports the use of the **WindowStart** and **WindowEnd** system variables. Users can access `trigger().outputs.windowStartTime` and `trigger().outputs.windowEndTime` as trigger system variables in the trigger definition. The values are used as the window start time and window end time, respectively. For example, for a tumbling window trigger that runs every hour, for the window 1:00 AM to 2:00 AM, the definition is `trigger().outputs.windowStartTime = 2017-09-01T01:00:00Z` and `trigger().outputs.windowEndTime = 2017-09-01T02:00:00Z`. | Only supports default @trigger().scheduledTime and @trigger().startTime variables. |
| **Pipeline-to-trigger relationship** | Supports a one-to-one relationship. Only one pipeline can be triggered. | Supports many-to-many relationships. Multiple triggers can kick off a single pipeline. A single trigger can kick off multiple pipelines. | ## Next steps
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connect-data-factory-to-azure-purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
@@ -24,10 +24,10 @@ Azure Purview is a new cloud service for use by data users centrally manage data
For how to register data factory in Azure Purview, see [How to connect Azure Data Factory and Azure Purview](https://docs.microsoft.com/azure/purview/how-to-link-azure-data-factory). ## Report Lineage data to Azure Purview
-When customers run Copy, Data flow or Execute SSIS package activity in Azure data factory, customers could get the dependency relationship and have a high-level overview of whole workflow process among data sources and destination.
-For how to collect lineage from Azure data factory, see [data factory lineage](https://docs.microsoft.com/azure/purview/how-to-link-azure-data-factory#supported-azure-data-factory-activities).
+When customers run Copy, Data Flow, or Execute SSIS Package activities in Azure Data Factory, they can get the dependency relationships and a high-level overview of the whole workflow process among data sources and destinations.
+For how to collect lineage from Azure Data Factory, see [data factory lineage](https://docs.microsoft.com/azure/purview/how-to-link-azure-data-factory#supported-azure-data-factory-activities).
## Next steps [Catalog lineage user guide](https://docs.microsoft.com/azure/purview/catalog-lineage-user-guide)
-[Tutorial: Push Data Factory lineage data to Azure Purview](turorial-push-lineage-to-purview.md)
\ No newline at end of file
+[Tutorial: Push Data Factory lineage data to Azure Purview](turorial-push-lineage-to-purview.md)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
@@ -10,7 +10,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 12/18/2020
+ms.date: 01/11/2021
--- # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory
@@ -386,7 +386,7 @@ To copy data to Azure Synapse Analytics, set the sink type in Copy Activity to *
| writeBatchTimeout | Wait time for the batch insert operation to finish before it times out.<br/><br/>The allowed value is **timespan**. Example: "00:30:00" (30 minutes). | No.<br/>Apply when using bulk insert. | | preCopyScript | Specify a SQL query for Copy Activity to run before writing data into Azure Synapse Analytics in each run. Use this property to clean up the preloaded data. | No | | tableOption | Specifies whether to [automatically create the sink table](copy-activity-overview.md#auto-create-sink-tables) if not exists based on the source schema. Allowed values are: `none` (default), `autoCreate`. |No |
-| disableMetricsCollection | Data Factory collects metrics such as Azure Synapse Analytics DWUs for copy performance optimization and recommendations. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
+| disableMetricsCollection | Data Factory collects metrics such as Azure Synapse Analytics DWUs for copy performance optimization and recommendations, which introduces additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
#### Azure Synapse Analytics sink example
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
@@ -10,7 +10,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 12/18/2020
+ms.date: 01/11/2021
--- # Copy and transform data in Azure SQL Database by using Azure Data Factory
@@ -268,7 +268,7 @@ To copy data from Azure SQL Database, the following properties are supported in
| partitionOptions | Specifies the data partitioning options used to load data from Azure SQL Database. <br>Allowed values are: **None** (default), **PhysicalPartitionsOfTable**, and **DynamicRange**.<br>When a partition option is enabled (that is, not `None`), the degree of parallelism to concurrently load data from an Azure SQL Database is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. | No | | partitionSettings | Specify the group of the settings for data partitioning. <br>Apply when the partition option isn't `None`. | No | | ***Under `partitionSettings`:*** | | |
-| partitionColumnName | Specify the name of the source column **in integer or date/datetime type** (`int`, `smallint`, `bigint`, `date`, `smalldatetime`, `datetime`, `datetime2`, or `datetimeoffset`) that will be used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is auto-detected and used as the partition column.<br>Apply when the partition option is `DynamicRange`. If you use a query to retrieve the source data, hook `?AdfDynamicRangePartitionCondition ` in the WHERE clause. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-sql-database) section. | No |
+| partitionColumnName | Specify the name of the source column **in integer or date/datetime type** (`int`, `smallint`, `bigint`, `date`, `smalldatetime`, `datetime`, `datetime2`, or `datetimeoffset`) that will be used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is autodetected and used as the partition column.<br>Apply when the partition option is `DynamicRange`. If you use a query to retrieve the source data, hook `?AdfDynamicRangePartitionCondition ` in the WHERE clause. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-sql-database) section. | No |
| partitionUpperBound | The maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and copied. If not specified, copy activity auto detect the value. <br>Apply when the partition option is `DynamicRange`. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-sql-database) section. | No | | partitionLowerBound | The minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and copied. If not specified, copy activity auto detect the value.<br>Apply when the partition option is `DynamicRange`. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-sql-database) section. | No |
@@ -382,7 +382,7 @@ To copy data to Azure SQL Database, the following properties are supported in th
| storedProcedureParameters |Parameters for the stored procedure.<br/>Allowed values are name and value pairs. Names and casing of parameters must match the names and casing of the stored procedure parameters. | No | | writeBatchSize | Number of rows to insert into the SQL table *per batch*.<br/> The allowed value is **integer** (number of rows). By default, Azure Data Factory dynamically determines the appropriate batch size based on the row size. | No | | writeBatchTimeout | The wait time for the batch insert operation to finish before it times out.<br/> The allowed value is **timespan**. An example is "00:30:00" (30 minutes). | No |
-| disableMetricsCollection | Data Factory collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
+| disableMetricsCollection | Data Factory collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations, which introduces additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
**Example 1: Append data**
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-salesforce-service-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-service-cloud.md
@@ -10,7 +10,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 07/13/2020
+ms.date: 01/11/2021
--- # Copy data from and to Salesforce Service Cloud by using Azure Data Factory
@@ -32,7 +32,7 @@ Specifically, this Salesforce Service Cloud connector supports:
- Salesforce Developer, Professional, Enterprise, or Unlimited editions. - Copying data from and to Salesforce production, sandbox, and custom domain.
-The Salesforce connector is built on top of the Salesforce REST/Bulk API. By default, the connector uses [v45](https://developer.salesforce.com/docs/atlas.en-us.218.0.api_rest.meta/api_rest/dome_versions.htm) to copy data from Salesforce, and uses [v40](https://developer.salesforce.com/docs/atlas.en-us.208.0.api_asynch.meta/api_asynch/asynch_api_intro.htm) to copy data to Salesforce. You can also explicitly set the API version used to read/write data via [`apiVersion` property](#linked-service-properties) in linked service.
+The Salesforce connector is built on top of the Salesforce REST/Bulk API. By default, when copying data from Salesforce, the connector uses [v45](https://developer.salesforce.com/docs/atlas.en-us.218.0.api_rest.meta/api_rest/dome_versions.htm) and automatically chooses between the REST and Bulk APIs based on the data size: when the result set is large, the Bulk API is used for better performance. When writing data to Salesforce, the connector uses [v40](https://developer.salesforce.com/docs/atlas.en-us.208.0.api_asynch.meta/api_asynch/asynch_api_intro.htm) of the Bulk API. You can also explicitly set the API version used to read/write data via the [`apiVersion` property](#linked-service-properties) in the linked service.
## Prerequisites
@@ -286,7 +286,7 @@ When copying data from Salesforce Service Cloud, you can use either SOQL query o
|:--- |:--- |:--- | | Column selection | Need to enumerate the fields to be copied in the query, e.g. `SELECT field1, field2 FROM objectname` | `SELECT *` is supported in addition to column selection. | | Quotation marks | Field/object names cannot be quoted. | Field/object names can be quoted, e.g. `SELECT "id" FROM "Account"` |
-| Datetime format | Refer to details [here](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select_dateformats.htm) and samples in next section. | Refer to details [here](/sql/odbc/reference/develop-app/date-time-and-timestamp-literals?view=sql-server-2017) and samples in next section. |
+| Datetime format | Refer to details [here](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select_dateformats.htm) and samples in next section. | Refer to details [here](/sql/odbc/reference/develop-app/date-time-and-timestamp-literals) and samples in next section. |
| Boolean values | Represented as `False` and `True`, e.g. `SELECT … WHERE IsDeleted=True`. | Represented as 0 or 1, e.g. `SELECT … WHERE IsDeleted=1`. | | Column renaming | Not supported. | Supported, e.g.: `SELECT a AS b FROM …`. | | Relationship | Supported, e.g. `Account_vod__r.nvs_Country__c`. | Not supported. |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
@@ -10,7 +10,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 07/13/2020
+ms.date: 01/11/2021
--- # Copy data from and to Salesforce by using Azure Data Factory
@@ -37,7 +37,7 @@ Specifically, this Salesforce connector supports:
- Salesforce Developer, Professional, Enterprise, or Unlimited editions. - Copying data from and to Salesforce production, sandbox, and custom domain.
-The Salesforce connector is built on top of the Salesforce REST/Bulk API. By default, the connector uses [v45](https://developer.salesforce.com/docs/atlas.en-us.218.0.api_rest.meta/api_rest/dome_versions.htm) to copy data from Salesforce, and uses [v40](https://developer.salesforce.com/docs/atlas.en-us.208.0.api_asynch.meta/api_asynch/asynch_api_intro.htm) to copy data to Salesforce. You can also explicitly set the API version used to read/write data via [`apiVersion` property](#linked-service-properties) in linked service.
+The Salesforce connector is built on top of the Salesforce REST/Bulk API. By default, when copying data from Salesforce, the connector uses [v45](https://developer.salesforce.com/docs/atlas.en-us.218.0.api_rest.meta/api_rest/dome_versions.htm) and automatically chooses between the REST and Bulk APIs based on the data size: when the result set is large, the Bulk API is used for better performance. When writing data to Salesforce, the connector uses [v40](https://developer.salesforce.com/docs/atlas.en-us.208.0.api_asynch.meta/api_asynch/asynch_api_intro.htm) of the Bulk API. You can also explicitly set the API version used to read/write data via the [`apiVersion` property](#linked-service-properties) in the linked service.
## Prerequisites
@@ -297,7 +297,7 @@ When copying data from Salesforce, you can use either SOQL query or SQL query. N
|:--- |:--- |:--- | | Column selection | Need to enumerate the fields to be copied in the query, e.g. `SELECT field1, field2 FROM objectname` | `SELECT *` is supported in addition to column selection. | | Quotation marks | Field/object names cannot be quoted. | Field/object names can be quoted, e.g. `SELECT "id" FROM "Account"` |
-| Datetime format | Refer to details [here](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select_dateformats.htm) and samples in next section. | Refer to details [here](/sql/odbc/reference/develop-app/date-time-and-timestamp-literals?view=sql-server-2017) and samples in next section. |
+| Datetime format | Refer to details [here](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select_dateformats.htm) and samples in next section. | Refer to details [here](/sql/odbc/reference/develop-app/date-time-and-timestamp-literals) and samples in next section. |
| Boolean values | Represented as `False` and `True`, e.g. `SELECT … WHERE IsDeleted=True`. | Represented as 0 or 1, e.g. `SELECT … WHERE IsDeleted=1`. | | Column renaming | Not supported. | Supported, e.g.: `SELECT a AS b FROM …`. | | Relationship | Supported, e.g. `Account_vod__r.nvs_Country__c`. | Not supported. |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-expression-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
@@ -251,8 +251,8 @@ ___
### <code>fromUTC</code> <code><b>fromUTC(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => timestamp</b></code><br/><br/> Converts a timestamp from UTC. You can optionally pass the timezone in the form of 'GMT', 'PST', 'UTC', or 'America/Cayman'. It defaults to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``fromUTC(currentTimeStamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
-* ``fromUTC(currentTimeStamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
+* ``fromUTC(currentTimestamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
+* ``fromUTC(currentTimestamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
___ ### <code>greater</code> <code><b>greater(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
@@ -1190,8 +1190,8 @@ ___
### <code>toUTC</code> <code><b>toUTC(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => timestamp</b></code><br/><br/> Converts a timestamp to UTC. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', or 'America/Cayman'. It defaults to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``toUTC(currentTimeStamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
-* ``toUTC(currentTimeStamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
+* ``toUTC(currentTimestamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
+* ``toUTC(currentTimestamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
## Metafunctions
data-factory https://docs.microsoft.com/en-us/azure/data-factory/source-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/source-control.md
@@ -20,6 +20,7 @@ By default, the Azure Data Factory user interface experience (UX) authors direct
- The Data Factory service doesn't include a repository for storing the JSON entities for your changes. The only way to save changes is via the **Publish All** button and all changes are published directly to the data factory service. - The Data Factory service isn't optimized for collaboration and version control.
+- The Azure Resource Manager template required to deploy Data Factory itself is not included.
To provide a better authoring experience, Azure Data Factory allows you to configure a Git repository with either Azure Repos or GitHub. Git is a version control system that allows for easier change tracking and collaboration. This article will outline how to configure and work in a git repository along with highlighting best practices and a troubleshooting guide.
dedicated-hsm https://docs.microsoft.com/en-us/azure/dedicated-hsm/tutorial-deploy-hsm-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/tutorial-deploy-hsm-powershell.md
@@ -13,7 +13,7 @@ ms.custom: "mvc, seodec18, devx-track-azurepowershell"
ms.tgt_pltfrm: na ms.workload: na ms.date: 07/14/2020
-ms.author: johndaw
+ms.author: mbaldwin
--- # Tutorial ΓÇô Deploying HSMs into an existing virtual network using PowerShell
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-geo-dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
@@ -65,7 +65,29 @@ The following section is an overview of the failover process, and explains how t
### Setup
-You first create or use an existing primary namespace, and a new secondary namespace, then pair the two. This pairing gives you an alias that you can use to connect. Because you use an alias, you don't have to change connection strings. Only new namespaces can be added to your failover pairing. Finally, you should add some monitoring to detect if a failover is necessary. In most cases, the service is one part of a large ecosystem, thus automatic failovers are rarely possible, as often failovers must be performed in sync with the remaining subsystem or infrastructure.
+You first create or use an existing primary namespace, and a new secondary namespace, then pair the two. This pairing gives you an alias that you can use to connect. Because you use an alias, you don't have to change connection strings. Only new namespaces can be added to your failover pairing.
+
+1. Create the primary namespace.
+1. Create the secondary namespace. This step is optional. You can create the secondary namespace while creating the pairing in the next step.
+1. In the Azure portal, navigate to your primary namespace.
+1. Select **Geo-recovery** on the left menu, and select **Initiate pairing** on the toolbar.
+
+ :::image type="content" source="./media/event-hubs-geo-dr/primary-namspace-initiate-pairing-button.png" alt-text="Initiate pairing from the primary namespace":::
+1. On the **Initiate pairing** page, select an existing secondary namespace or create one, and then select **Create**. In the following example, an existing secondary namespace is selected.
+
+ :::image type="content" source="./media/event-hubs-geo-dr/initiate-pairing-page.png" alt-text="Select the secondary namespace":::
+1. Now, when you select **Geo-recovery** for the primary namespace, you should see the **Geo-DR Alias** page that looks like the following image:
+
+ :::image type="content" source="./media/event-hubs-geo-dr/geo-dr-alias-page.png" alt-text="Geo-DR alias page":::
+1. On this **Overview** page, you can do the following actions:
+ 1. Break the pairing between primary and secondary namespaces. Select **Break pairing** on the toolbar.
+ 1. Manually fail over to the secondary namespace. Select **Failover** on the toolbar.
+
+ > [!WARNING]
+ > Failing over will activate the secondary namespace and remove the primary namespace from the Geo-Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery pair.
+1. On the **Geo-DR Alias** page, select **Shared access policies** to access the primary connection string for the alias. Use this connection string instead of using the connection string of the primary or secondary namespace directly (see the sketch after these steps).
+
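Because the alias connection string stays the same across a failover, client code doesn't need to change. A minimal sketch with the Azure Event Hubs Python SDK, using placeholder values, might look like this:

```python
from azure.eventhub import EventData, EventHubProducerClient

# Use the Geo-DR alias connection string, not the connection string of the
# primary or secondary namespace directly; the values below are placeholders.
alias_connection_string = "<Geo-DR-alias-connection-string>"
eventhub_name = "<your-event-hub-name>"

producer = EventHubProducerClient.from_connection_string(
    alias_connection_string, eventhub_name=eventhub_name
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData("ping"))
    producer.send_batch(batch)
```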
+Finally, you should add some monitoring to detect if a failover is necessary. In most cases, the service is one part of a large ecosystem, thus automatic failovers are rarely possible, as often failovers must be performed in sync with the remaining subsystem or infrastructure.
### Example
@@ -128,7 +150,7 @@ You can enable Availability Zones on new namespaces only, using the Azure portal
![3][] ## Private endpoints
-This section provides additional considerations when using Geo-disaster recovery with namespaces that use private endpoints. To learn about using private endpoints with Event Hubs in general, see [Configure private endpoints](private-link-service.md).
+This section provides more considerations when using Geo-disaster recovery with namespaces that use private endpoints. To learn about using private endpoints with Event Hubs in general, see [Configure private endpoints](private-link-service.md).
### New pairings If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace without a private endpoint, the pairing will fail. The pairing will succeed only if both primary and secondary namespaces have private endpoints. We recommend that you use same configurations on the primary and secondary namespaces and on virtual networks in which private endpoints are created.
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-programming-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-programming-guide.md
@@ -139,7 +139,6 @@ In addition to the advanced run-time features of Event Processor Host, the Event
> [!NOTE] > Currently, only REST API supports this feature ([publisher revocation](/rest/api/eventhub/revoke-publisher)).
-For more information about publisher revocation and how to send to Event Hubs as a publisher, see the [Event Hubs Large Scale Secure Publishing](https://code.msdn.microsoft.com/Service-Bus-Event-Hub-99ce67ab) sample.
## Next steps
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-howto-routing-portal-resource-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md
@@ -6,7 +6,7 @@ author: duongau
ms.service: expressroute ms.topic: tutorial
-ms.date: 01/07/2021
+ms.date: 01/11/2021
ms.author: duau
@@ -133,7 +133,7 @@ This section helps you create, get, update, and delete the Azure private peering
2. Configure Azure private peering for the circuit. Make sure that you have the following items before you continue with the next steps:
- * A pair of /30 subnets owned by you and registered in an RIR / IRR. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you will assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
+ * A pair of /30 subnets owned by you. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you will assign the first usable IP address to your router, because Microsoft uses the second usable IP for its router (see the sketch after this list).
* A valid VLAN ID to establish this peering on. Ensure that no other peering in the circuit uses the same VLAN ID. You must use the same VLAN ID for both the primary and secondary links. * AS number for peering. You can use both 2-byte and 4-byte AS numbers. You can use a private AS number for this peering except for the numbers from 65515 to 65520, inclusive. * You must advertise the routes from your on-premises Edge router to Azure via BGP when you configure the private peering.
firewall https://docs.microsoft.com/en-us/azure/firewall/protect-azure-kubernetes-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/protect-azure-kubernetes-service.md
@@ -5,7 +5,7 @@ author: vhorne
ms.service: firewall services: firewall ms.topic: how-to
-ms.date: 09/03/2020
+ms.date: 01/11/2021
ms.author: victorh ---
@@ -42,7 +42,7 @@ Azure Firewall provides an AKS FQDN Tag to simplify the configuration. Use the f
- TCP [*IPAddrOfYourAPIServer*]:443 is required if you have an app that needs to talk to the API server. This change can be set after the cluster is created. - TCP port 9000, and UDP port 1194 for the tunnel front pod to communicate with the tunnel end on the API server.
- To be more specific, see the **.hcp.<location>.azmk8s.io* and addresses in the following table:
+ To be more specific, see the addresses in the following table:
| Destination Endpoint | Protocol | Port | Use | |----------------------------------------------------------------------------------|----------|---------|------|
firewall https://docs.microsoft.com/en-us/azure/firewall/snat-private-range https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/snat-private-range.md
@@ -5,7 +5,7 @@ services: firewall
author: vhorne ms.service: firewall ms.topic: how-to
-ms.date: 11/16/2020
+ms.date: 01/11/2021
ms.author: victorh ---
@@ -21,6 +21,9 @@ If your organization uses a public IP address range for private networks, Azure
- To configure the firewall to **always** SNAT regardless of the destination address, use **255.255.255.255/32** as your private IP address range.
+> [!IMPORTANT]
+> The private address range that you specify only applies to network rules. Currently, application rules always SNAT.
+ > [!IMPORTANT] > If you want to specify your own private IP address ranges, and keep the default IANA RFC 1918 address ranges, make sure your custom list still includes the IANA RFC 1918 range.
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/guest-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
@@ -233,8 +233,6 @@ Windows: `C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log`
Linux: `/var/lib/GuestConfig/gc_agent_logs/gc_agent.log`
-Where `<version>` refers to the current version number.
- ### Collecting logs remotely The first step in troubleshooting Guest Configuration configurations or modules should be to use the
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/extension-for-vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/extension-for-vscode.md
@@ -1,20 +1,19 @@
--- title: Azure Policy extension for Visual Studio Code description: Learn how to use the Azure Policy extension for Visual Studio Code to look up Azure Resource Manager aliases.
-ms.date: 10/20/2020
+ms.date: 01/11/2021
ms.topic: how-to --- # Use Azure Policy extension for Visual Studio Code
-> Applies to Azure Policy extension version **0.1.0** and newer
+> Applies to Azure Policy extension version **0.1.1** and newer
Learn how to use the Azure Policy extension for Visual Studio Code to look up [aliases](../concepts/definition-structure.md#aliases), review resources and policies, export objects, and evaluate policy definitions. First, we'll describe how to install the Azure Policy extension in Visual Studio Code. Then we'll walk through how to look up aliases.
-The Azure Policy extension for Visual Studio Code can be installed on all platforms that are
-supported by Visual Studio Code. This support includes Windows, Linux, and macOS.
+The Azure Policy extension for Visual Studio Code can be installed on Windows.
## Prerequisites
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-export-data-legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data-legacy.md
@@ -14,7 +14,7 @@ ms.service: iot-central
> [!Note] > This article describes the legacy data export features in IoT Central. >
-> - For information about the new preview data export features, see [Export IoT data to cloud destinations using data export](./howto-export-data.md).
+> - For information about the latest data export features, see [Export IoT data to cloud destinations using data export](./howto-export-data.md).
> - To learn about the differences between the preview data export and legacy data export features, see the [comparison table](./howto-export-data.md#comparison-of-legacy-data-export-and-data-export). This article describes how to use the data export feature in Azure IoT Central. This feature lets you export your data continuously to **Azure Event Hubs**, **Azure Service Bus**, or **Azure Blob storage** instances. Data export uses the JSON format and can include telemetry, device information, and device template information. Use the exported data for:
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-export-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
@@ -7,16 +7,13 @@ ms.author: viviali
ms.date: 11/05/2020 ms.topic: how-to ms.service: iot-central
-ms.custom: contperf-fy21q1
+ms.custom: contperf-fy21q1, contperf-fy21q3
--- # Export IoT data to cloud destinations using data export > [!Note]
-> This article describes the data export features in IoT Central.
->
-> - For information about the legacy data export features, see [Export IoT data to cloud destinations using data export (legacy)](./howto-export-data-legacy.md).
-> - To learn about the differences between the data export and legacy data export features, see the [comparison table](#comparison-of-legacy-data-export-and-data-export) below.
+> This article describes the data export features in IoT Central. For information about the legacy data export features, see [Export IoT data to cloud destinations using data export (legacy)](./howto-export-data-legacy.md).
This article describes how to use the new data export feature in Azure IoT Central. Use this feature to continuously export filtered and enriched IoT data from your IoT Central application. Data export pushes changes in near real time to other parts of your cloud solution for warm-path insights, analytics, and storage.
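As an illustration, here is a minimal consumer sketch in Python, assuming the export destination is an Azure Event Hubs instance and the `azure-eventhub` package is installed. The connection string and event hub name are placeholders.

```python
import json
from azure.eventhub import EventHubConsumerClient

CONNECTION_STRING = "<event-hubs-connection-string>"  # placeholder
EVENT_HUB_NAME = "<event-hub-name>"                   # placeholder

def on_event(partition_context, event):
    # Exported telemetry, device, and device template records arrive as JSON.
    record = json.loads(event.body_as_str())
    print(partition_context.partition_id, record)

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STRING, consumer_group="$Default", eventhub_name=EVENT_HUB_NAME)

with client:
    # starting_position="-1" reads from the beginning of each partition.
    client.receive(on_event=on_event, starting_position="-1")
```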
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/tutorial-create-telemetry-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-create-telemetry-rules.md
@@ -113,4 +113,4 @@ In this tutorial, you learned how to:
Now that you've defined a threshold-based rule the suggested next step is to learn how to: > [!div class="nextstepaction"]
-> [Configure continuous data export](./howto-export-data.md).
+> [Create webhooks on rules](./howto-create-webhooks.md).
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-store-data-blob.md
@@ -190,7 +190,7 @@ The following quickstart samples use languages that are also supported by IoT Ed
## Connect to your local storage with Azure Storage Explorer
-You can use [Azure Storage Explorer](https://github.com/microsoft/AzureStorageExplorer/releases/tag/v1.14.2) to connect to your local storage account.
+You can use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to connect to your local storage account.
1. Download and install Azure Storage Explorer
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/howto-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/howto-logging.md
@@ -21,20 +21,10 @@ After you create one or more key vaults, you'll likely want to monitor how and w
To complete this tutorial, you must have the following: * An existing key vault that you have been using.
-* The Azure CLI or Azure PowerShell.
+* [Azure Cloud Shell](https://shell.azure.com) - Bash environment
* Sufficient storage on Azure for your Key Vault logs.
-If you choose to install and use the CLI locally, you will need the Azure CLI version 2.0.4 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). To sign in to Azure using the CLI you can type:
-
-```azurecli-interactive
-az login
-```
-
-If you choose to install and use PowerShell locally, you will need the Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-```powershell-interactive
-Connect-AzAccount
-```
+The commands in this guide are formatted for [Cloud Shell](https://shell.azure.com) with Bash as the environment.
## Connect to your Key Vault subscription
@@ -158,7 +148,7 @@ az storage blob list --account-name "<your-unique-storage-account-name>" --conta
With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob?view=azps-4.7.0) cmdlet to list all the blobs in this container: ```powershell
-Get-AzStorageBlob -Container $container -Context $sa.Context
+Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context
``` As you will see from the output of either the Azure CLI command or the Azure PowerShell cmdlet, the names of the blobs are in the format `resourceId=<ARM resource ID>/y=<year>/m=<month>/d=<day of month>/h=<hour>/m=<minute>/filename.json`. The date and time values use UTC.
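If you prefer Python, a small sketch with the `azure-storage-blob` package can list and download the same audit-log blobs. The connection string and the date filter below are assumptions for illustration.

```python
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<storage-account-connection-string>",            # placeholder
    container_name="insights-logs-auditevent")

# Blob names embed the resource ID and the UTC date/time segments described above,
# so simple substring matching works as a crude filter.
for blob in container.list_blobs():
    if "/m=01/" in blob.name:                         # example: only January logs
        print(blob.name)
        data = container.download_blob(blob.name).readall()
        # ...parse or save `data` as needed
```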
@@ -174,7 +164,7 @@ az storage blob download --container-name "insights-logs-auditevent" --file <pat
With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob?view=azps-4.7.0) cmdlet to get a list of the blobs, then pipe that to the [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent?view=azps-4.7.0) cmdlet to download the logs to your chosen path. ```powershell-interactive
-$blobs = Get-AzStorageBlob -Container $container -Context $sa.Context | Get-AzStorageBlobContent -Destination "<path-to-file>"
+$blobs = Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context | Get-AzStorageBlobContent -Destination "<path-to-file>"
``` When you run this second cmdlet in PowerShell, the **/** delimiter in the blob names creates a full folder structure under the destination folder. You'll use this structure to download and store the blobs as files.
@@ -184,19 +174,19 @@ To selectively download blobs, use wildcards. For example:
* If you have multiple key vaults and want to download logs for just one key vault, named CONTOSOKEYVAULT3: ```powershell
- Get-AzStorageBlob -Container $container -Context $sa.Context -Blob '*/VAULTS/CONTOSOKEYVAULT3
+ Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context -Blob '*/VAULTS/CONTOSOKEYVAULT3/*'
``` * If you have multiple resource groups and want to download logs for just one resource group, use `-Blob '*/RESOURCEGROUPS/<resource group name>/*'`: ```powershell
- Get-AzStorageBlob -Container $container -Context $sa.Context -Blob '*/RESOURCEGROUPS/CONTOSORESOURCEGROUP3/*'
+ Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context -Blob '*/RESOURCEGROUPS/CONTOSORESOURCEGROUP3/*'
``` * If you want to download all the logs for the month of January 2019, use `-Blob '*/year=2019/m=01/*'`: ```powershell
- Get-AzStorageBlob -Container $container -Context $sa.Context -Blob '*/year=2016/m=01/*'
+ Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context -Blob '*/year=2019/m=01/*'
``` You're now ready to start looking at what's in the logs. But before we move on to that, you should know two more commands:
@@ -213,4 +203,4 @@ For more information, including how to set this up, see [Azure Key Vault in Azur
- For conceptual information, including how to interpret Key Vault logs, see [Key Vault logging](logging.md) - For a tutorial that uses Azure Key Vault in a .NET web application, see [Use Azure Key Vault from a web application](tutorial-net-create-vault-azure-web-app.md).-- For programming references, see [the Azure Key Vault developer's guide](developers-guide.md).\ No newline at end of file
+- For programming references, see [the Azure Key Vault developer's guide](developers-guide.md).
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/key-vault-integrate-kubernetes.md
@@ -1,8 +1,8 @@
--- title: Integrate Azure Key Vault with Kubernetes description: In this tutorial, you access and retrieve secrets from your Azure key vault by using the Secrets Store Container Storage Interface (CSI) driver to mount into Kubernetes pods.
-author: ShaneBala-keyvault
-ms.author: sudbalas
+author: msmbaldwin
+ms.author: mbaldwin
ms.service: key-vault ms.subservice: general ms.topic: tutorial
@@ -19,12 +19,11 @@ In this tutorial, you access and retrieve secrets from your Azure key vault by u
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a service principal or use managed identities.
+> * Use managed identities.
> * Deploy an Azure Kubernetes Service (AKS) cluster by using the Azure CLI. > * Install Helm and the Secrets Store CSI driver. > * Create an Azure key vault and set your secrets. > * Create your own SecretProviderClass object.
-> * Assign your service principal or use managed identities.
> * Deploy your pod with mounted secrets from your key vault. ## Prerequisites
@@ -33,22 +32,7 @@ In this tutorial, you learn how to:
* Before you start this tutorial, install the [Azure CLI](/cli/azure/install-azure-cli-windows?view=azure-cli-latest).
-## Create a service principal or use managed identities
-
-If you plan to use managed identities, you can move on to the next section.
-
-Create a service principal to control which resources can be accessed from your Azure key vault. This service principal's access is restricted by the roles assigned to it. This feature gives you control over how the service principal can manage your secrets. In the following example, the name of the service principal is *contosoServicePrincipal*.
-
-```azurecli
-az ad sp create-for-rbac --name contosoServicePrincipal --skip-assignment
-```
-This operation returns a series of key/value pairs:
-
-![Screenshot showing the appId and password for contosoServicePrincipal](../media/kubernetes-key-vault-1.png)
-
-Copy the **appId** and **password** credentials for later use.
-
-## Flow for using Managed Identity
+## Use managed identities
This diagram illustrates the AKSΓÇôKey Vault integration flow for Managed Identity:
@@ -61,7 +45,7 @@ You don't need to use Azure Cloud Shell. Your command prompt (terminal) with the
Complete the "Create a resource group," "Create AKS cluster," and "Connect to the cluster" sections in [Deploy an Azure Kubernetes Service cluster by using the Azure CLI](../../aks/kubernetes-walkthrough.md). > [!NOTE]
-> If you plan to use a pod identity instead of a service principal, be sure to enable it when you create the Kubernetes cluster, as shown in the following command:
+> If you plan to use a pod identity, be sure to enable it when you create the Kubernetes cluster, as shown in the following command:
> > ```azurecli > az aks create -n contosoAKSCluster -g contosoResourceGroup --kubernetes-version 1.16.9 --node-count 1 --enable-managed-identity
@@ -117,7 +101,7 @@ To create your own custom SecretProviderClass object with provider-specific para
In the sample SecretProviderClass YAML file, fill in the missing parameters. The following parameters are required:
-* **userAssignedIdentityID**: # [REQUIRED] If you're using a service principal, use the client ID to specify which user-assigned managed identity to use. If you're using a user-assigned identity as the VM's managed identity, specify the identity's client ID. If the value is empty, it defaults to use the system-assigned identity on the VM
+* **userAssignedIdentityID**: # [REQUIRED] If the value is empty, it defaults to use the system-assigned identity on the VM
* **keyvaultName**: The name of your key vault * **objects**: The container for all of the secret content you want to mount * **objectName**: The name of the secret content
@@ -143,9 +127,8 @@ spec:
parameters: usePodIdentity: "false" # [REQUIRED] Set to "true" if using managed identities useVMManagedIdentity: "false" # [OPTIONAL] if not provided, will default to "false"
- userAssignedIdentityID: "servicePrincipalClientID" # [REQUIRED] If you're using a service principal, use the client id to specify which user-assigned managed identity to use. If you're using a user-assigned identity as the VM's managed identity, specify the identity's client id. If the value is empty, it defaults to use the system-assigned identity on the VM
- # az ad sp show --id http://contosoServicePrincipal --query appId -o tsv
- # the preceding command will return the client ID of your service principal
+ userAssignedIdentityID: "servicePrincipalClientID" # [REQUIRED] If you're using a user-assigned identity as the VM's managed identity, specify the identity's client id. If the value is empty, it defaults to use the system-assigned identity on the VM
+
keyvaultName: "contosoKeyVault5" # [REQUIRED] the name of the key vault # az keyvault show --name contosoKeyVault5 # the preceding command will display the key vault metadata, which includes the subscription ID, resource group name, key vault
@@ -170,58 +153,18 @@ The following image shows the console output for **az keyvault show --name conto
![Screenshot showing the console output for "az keyvault show --name contosoKeyVault5"](../media/kubernetes-key-vault-4.png)
-## Assign your service principal or use managed identities
+## Assign managed identity
-### Assign a service principal
-
-If you're using a service principal, grant permissions for it to access your key vault and retrieve secrets. Assign the *Reader* role, and grant the service principal permissions to *get* secrets from your key vault by doing the following command:
-
-1. Assign your service principal to your existing key vault. The **$AZURE_CLIENT_ID** parameter is the **appId** that you copied after you created your service principal.
- ```azurecli
- az role assignment create --role Reader --assignee $AZURE_CLIENT_ID --scope /subscriptions/$SUBID/resourcegroups/$KEYVAULT_RESOURCE_GROUP/providers/Microsoft.KeyVault/vaults/$KEYVAULT_NAME
- ```
-
- The output of the command is shown in the following image:
-
- ![Screenshot showing the principalId value](../media/kubernetes-key-vault-5.png)
-
-1. Grant the service principal permissions to get secrets:
- ```azurecli
- az keyvault set-policy -n $KEYVAULT_NAME --secret-permissions get --spn $AZURE_CLIENT_ID
- az keyvault set-policy -n $KEYVAULT_NAME --key-permissions get --spn $AZURE_CLIENT_ID
- ```
-
-1. You've now configured your service principal with permissions to read secrets from your key vault. The **$AZURE_CLIENT_SECRET** is the password of your service principal. Add your service principal credentials as a Kubernetes secret that's accessible by the Secrets Store CSI driver:
- ```azurecli
- kubectl create secret generic secrets-store-creds --from-literal clientid=$AZURE_CLIENT_ID --from-literal clientsecret=$AZURE_CLIENT_SECRET
- ```
-
-> [!NOTE]
-> If you're deploying the Kubernetes pod and you receive an error about an invalid Client Secret ID, you might have an older Client Secret ID that was expired or reset. To resolve this issue, delete your *secrets-store-creds* secret and create a new one with the current Client Secret ID. To delete your *secrets-store-creds*, run the following command:
->
-> ```azurecli
-> kubectl delete secrets secrets-store-creds
-> ```
-
-If you forgot your service principal's Client Secret ID, you can reset it by using the following command:
-
-```azurecli
-az ad sp credential reset --name contosoServicePrincipal --credential-description "APClientSecret" --query password -o tsv
-```
-
-### Use managed identities
-
-If you're using managed identities, assign specific roles to the AKS cluster you've created.
+Assign specific roles to the AKS cluster you've created.
1. To create, list, or read a user-assigned managed identity, your AKS cluster needs to be assigned the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) role. Make sure that the **$clientId** is the Kubernetes cluster's clientId. For the scope, it will be under your Azure subscription service, specifically the node resource group that was made when the AKS cluster was created. This scope will ensure only resources within that group are affected by the roles assigned below. ```azurecli RESOURCE_GROUP=contosoResourceGroup
- az role assignment create --role "Managed Identity Operator" --assignee $clientId --scope /subscriptions/$SUBID/resourcegroups/$RESOURCE_GROUP
- az role assignment create --role "Managed Identity Operator" --assignee $clientId --scope /subscriptions/$SUBID/resourcegroups/$NODE_RESOURCE_GROUP
+ az role assignment create --role "Managed Identity Operator" --assignee $clientId --scope /subscriptions/<SUBID>/resourcegroups/$RESOURCE_GROUP
- az role assignment create --role "Virtual Machine Contributor" --assignee $clientId --scope /subscriptions/$SUBID/resourcegroups/$NODE_RESOURCE_GROUP
+ az role assignment create --role "Virtual Machine Contributor" --assignee $clientId --scope /subscriptions/<SUBID>/resourcegroups/$RESOURCE_GROUP
``` 1. Install the Azure Active Directory (Azure AD) identity into AKS.
@@ -238,7 +181,7 @@ If you're using managed identities, assign specific roles to the AKS cluster you
1. Assign the *Reader* role to the Azure AD identity that you created in the preceding step for your key vault, and then grant the identity permissions to get secrets from your key vault. Use the **clientId** and **principalId** from the Azure AD identity. ```azurecli
- az role assignment create --role "Reader" --assignee $principalId --scope /subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/contosoResourceGroup/providers/Microsoft.KeyVault/vaults/contosoKeyVault5
+ az role assignment create --role "Reader" --assignee $principalId --scope /subscriptions/<SUBID>/resourceGroups/contosoResourceGroup/providers/Microsoft.KeyVault/vaults/contosoKeyVault5
az keyvault set-policy -n contosoKeyVault5 --secret-permissions get --spn $clientId az keyvault set-policy -n contosoKeyVault5 --key-permissions get --spn $clientId
@@ -251,16 +194,6 @@ To configure your SecretProviderClass object, run the following command:
kubectl apply -f secretProviderClass.yaml ```
-### Use a service principal
-
-If you're using a service principal, use the following command to deploy your Kubernetes pods with the SecretProviderClass and the secrets-store-creds that you configured earlier. Here are the deployment templates:
-* For [Linux](https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/examples/nginx-pod-inline-volume-service-principal.yaml)
-* For [Windows](https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/examples/windows-pod-secrets-store-inline-volume-secret-providerclass.yaml)
-
-```azurecli
-kubectl apply -f updateDeployment.yaml
-```
- ### Use managed identities If you're using managed identities, create an *AzureIdentity* in your cluster that references the identity that you created earlier. Then, create an *AzureIdentityBinding* that references the AzureIdentity you created. Fill out the parameters in the following template, and then save it as *podIdentityAndBinding.yaml*.
@@ -314,8 +247,6 @@ spec:
readOnly: true volumeAttributes: secretProviderClass: azure-kvname
- nodePublishSecretRef: # Only required when using service principal mode
- name: secrets-store-creds # Only required when using service principal mode
``` Run the following command to deploy your pod:
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/connect-virtual-network-vnet-isolated-environment-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md
@@ -3,14 +3,14 @@ title: Access to Azure virtual networks
description: Overview about how integration service environments (ISEs) help logic apps access Azure virtual networks (VNETs) services: logic-apps ms.suite: integration
-ms.reviewer: jonfan, logicappspm
+ms.reviewer: estfan, logicappspm, azla
ms.topic: conceptual
-ms.date: 11/12/2020
+ms.date: 01/11/2021
--- # Access to Azure Virtual Network resources from Azure Logic Apps by using integration service environments (ISEs)
-Sometimes, your logic apps need access to secured resources, such as virtual machines (VMs) and other systems or services, that are inside or connected to an [Azure virtual network](../virtual-network/virtual-networks-overview.md). To set up this access, you can [create an *integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment.md). An ISE is an instance of the Logic Apps service that uses dedicated resources and runs separately from the "global" multi-tenant Logic Apps service.
+Sometimes, your logic apps need access to secured resources, such as virtual machines (VMs) and other systems or services, that are inside or connected to an [Azure virtual network](../virtual-network/virtual-networks-overview.md). To set up this access, you can [create an *integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment.md). An ISE is an instance of the Logic Apps service that uses dedicated resources and runs separately from the "global" multi-tenant Logic Apps service. Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/).
For example, some Azure virtual networks use private endpoints, which you can set up through [Azure Private Link](../private-link/private-link-overview.md), to provide access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database, partner services, or customer services that are hosted on Azure. If your logic apps need access to virtual networks that use private endpoints, you must create, deploy, and run those logic apps inside an ISE.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/designer-error-codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/designer-error-codes.md
@@ -1589,4 +1589,9 @@ To get more help, we recommend that you post the detailed message that accompani
|------------------------| |Library exception.| |Library exception: {exception}.|
-|Unknown library exception: {exception}. {customer_support_guidance}.|
\ No newline at end of file
+|Unknown library exception: {exception}. {customer_support_guidance}.|
++
+## Execute Python Script Module
+
+Search for **in azureml_main** in the **70_driver_log** of the **Execute Python Script** module to find which line raised the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred on line 17 of your Python script.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/execute-python-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/execute-python-script.md
@@ -10,7 +10,7 @@ ms.custom: devx-track-python
author: likebupt ms.author: keli19
-ms.date: 12/02/2020
+ms.date: 01/02/2021
--- # Execute Python Script module
@@ -55,7 +55,7 @@ if spec is None:
> [!WARNING] > Execute Python Script module does not support installing packages that depend on extra native libraries installed with commands like "apt-get", such as Java, PyODBC, and so on. This is because this module is executed in a simple environment that has only Python pre-installed and non-admin permission.
-## Access to registered datasets
+## Access to current workspace and registered datasets
You can refer to the following sample code to access the [registered datasets](../how-to-create-register-datasets.md) in your workspace:
@@ -66,8 +66,10 @@ def azureml_main(dataframe1 = None, dataframe2 = None):
print(f'Input pandas.DataFrame #1: {dataframe1}') from azureml.core import Run run = Run.get_context(allow_offline=True)
+ #access to current workspace
ws = run.experiment.workspace
+ #access to registered dataset of current workspace
from azureml.core import Dataset dataset = Dataset.get_by_name(ws, name='test-register-tabular-in-designer') dataframe1 = dataset.to_pandas_dataframe()
@@ -214,7 +216,9 @@ The Execute Python Script module contains sample Python code that you can use as
6. Submit the pipeline.
- All of the data and code is loaded into a virtual machine, and run using the specified Python environment.
+ If the module completes, check whether the output is as expected.
+
+ If the module fails, you need to troubleshoot. Select the module, and open **Outputs+logs** in the right pane. Open **70_driver_log.txt** and search for **in azureml_main** to find which line caused the error. For example, "File "/tmp/tmp01_ID/user_script.py", line 17, in azureml_main" indicates that the error occurred on line 17 of your Python script.
## Results
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
@@ -6,8 +6,8 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: reference
-ms.author: jmartens
-author: j-martens
+ms.author: larryfr
+author: BlackMist
ms.date: 09/10/2020 ---
@@ -15,6 +15,26 @@ ms.date: 09/10/2020
In this article, learn about Azure Machine Learning releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro?preserve-view=true&view=azure-ml-py) reference page. +
+ ## 2021-01-11
+
+### Azure Machine Learning SDK for Python v1.20.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + framework_version added in OptimizationConfig. It will be used when the model is registered with framework MULTI.
+ + **azureml-automl-runtime**
+ + In this update, we added Holt-Winters exponential smoothing to the forecasting toolbox of the AutoML SDK. Given a time series, the best model is selected by [AICc (Corrected Akaike's Information Criterion)](https://otexts.com/fpp3/selecting-predictors.html#selecting-predictors) and returned.
+ + **azureml-contrib-optimization**
+ + framework_version added in OptimizationConfig. It will be used when the model is registered with framework MULTI.
+ + **azureml-pipeline-steps**
+ + Introducing CommandStep, which takes a command to run. The command can include executables, shell commands, scripts, and so on.
+ + **azureml-core**
+ + Workspace creation now supports user-assigned identity, adding UAI support to the SDK/CLI.
+ + Fixed an issue with service.reload() so that it picks up changes to score.py in local deployment.
+ + `run.get_details()` has an extra field named "submittedBy" which displays the author's name for this run.
+ + Edited the Model.register method documentation to mention how to register a model from a run directly
+
+ ## 2020-12-31 ### Azure Machine Learning Studio Notebooks Experience (December Update) + **New features**
@@ -26,6 +46,7 @@ In this article, learn about Azure Machine Learning releases. For the full SDK
+ Improved page load times + Improved performance + Improved speed and kernel reliability+ ## 2020-12-07
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/classic/ai-gallery-control-personal-data-dsr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/ai-gallery-control-personal-data-dsr.md
@@ -10,7 +10,7 @@ author: likebupt
ms.author: keli19 ms.custom: seodec18 ms.date: 05/25/2018
-ms.reviewer: jmartens, mldocs
+ms.reviewer: mldocs
--- # View and delete in-product user data from Azure AI Gallery
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-automated-ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-automated-ml.md
@@ -6,7 +6,6 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
-ms.reviewer: jmartens
author: cartacioS ms.author: sacartac ms.date: 10/27/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-model-management-and-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-model-management-and-deployment.md
@@ -6,7 +6,6 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
-ms.reviewer: jmartens
author: jpe316 ms.author: jordane ms.date: 03/17/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-onnx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-onnx.md
@@ -7,7 +7,6 @@ ms.service: machine-learning
ms.subservice: core ms.topic: conceptual
-ms.reviewer: jmartens
ms.author: prasantp author: prasanthpul ms.date: 06/18/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-register-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
@@ -177,6 +177,39 @@ titanic_ds.take(3).to_pandas_dataframe()
To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets). +
+## Explore data
+
+After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data exploration prior to model training. If you don't need to do any data exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).
+
+For FileDatasets, you can either **mount** or **download** your dataset, and apply the python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
+
+```python
+# download the dataset
+dataset.download(target_path='.', overwrite=False)
+
+# mount dataset to the temp directory at `mounted_path`
+
+import tempfile
+mounted_path = tempfile.mkdtemp()
+mount_context = dataset.mount(mounted_path)
+
+mount_context.start()
+```
+
+For TabularDatasets, use the [`to_pandas_dataframe()`](/python/api/azureml-core/azureml.data.tabulardataset?preserve-view=true&view=azure-ml-py#to-pandas-dataframe-on-error--null---out-of-range-datetime--null--) method to view your data in a dataframe.
+
+```python
+# preview the first 3 rows of titanic_ds
+titanic_ds.take(3).to_pandas_dataframe()
+```
+
+|(Index)|PassengerId|Survived|Pclass|Name|Sex|Age|SibSp|Parch|Ticket|Fare|Cabin|Embarked
+-|-----------|--------|------|----|---|---|-----|-----|------|----|-----|--------|
+0|1|False|3|Braund, Mr. Owen Harris|male|22.0|1|0|A/5 21171|7.2500||S
+1|2|True|1|Cumings, Mrs. John Bradley (Florence Briggs Th...|female|38.0|1|0|PC 17599|71.2833|C85|C
+2|3|True|3|Heikkinen, Miss. Laina|female|26.0|0|0|STON/O2. 3101282|7.9250||S
+ ## Create a dataset from pandas dataframe To create a TabularDataset from an in-memory pandas dataframe, write the data to a local file, like a CSV, and create your dataset from that file. The following code demonstrates this workflow.
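A minimal sketch of that workflow (not the documented snippet itself), assuming the azureml-core SDK, a workspace config file, and placeholder folder and file names:

```python
import os
import pandas as pd
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# An in-memory dataframe to persist as a local CSV file.
df = pd.DataFrame({"name": ["Alice", "Bob"], "age": [34, 29]})
os.makedirs("data", exist_ok=True)
df.to_csv("data/prepared.csv", index=False)

# Upload the local file to the datastore, then create a TabularDataset from it.
datastore.upload(src_dir="data", target_path="data", overwrite=True)
dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, "data/prepared.csv")])
```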
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-parallel-run-step https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-parallel-run-step.md
@@ -7,7 +7,7 @@ ms.service: machine-learning
ms.subservice: core ms.topic: troubleshooting ms.custom: troubleshooting
-ms.reviewer: jmartens, larryfr, vaidyas, laobri, tracych
+ms.reviewer: larryfr, vaidyas, laobri, tracych
ms.author: trmccorm author: tmccrmck ms.date: 09/23/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-pipelines.md
@@ -29,6 +29,7 @@ The following table contains common problems during pipeline development, with p
| Pipeline not reusing steps | Step reuse is enabled by default, but ensure you haven't disabled it in a pipeline step. If reuse is disabled, the `allow_reuse` parameter in the step will be set to `False`. | | Pipeline is rerunning unnecessarily | To ensure that steps only rerun when their underlying data or scripts change, decouple your source-code directories for each step. If you use the same source directory for multiple steps, you may experience unnecessary reruns. Use the `source_directory` parameter on a pipeline step object to point to your isolated directory for that step, and ensure you aren't using the same `source_directory` path for multiple steps. | | Step slowing down over training epochs or other looping behavior | Try switching any file writes, including logging, from `as_mount()` to `as_upload()`. The **mount** mode uses a remote virtualized filesystem and uploads the entire file each time it is appended to. |
+| Compute target takes a long time to start | Docker images for compute targets are loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and load images. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). |
### Authentication errors
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
@@ -317,6 +317,8 @@ The following table describes the different service states:
| Failed | The service has failed to deploy due to an error or crash. | Yes | | Healthy | The service is healthy and the endpoint is available. | Yes |
+> [!TIP]
+> When deploying, Docker images for compute targets are built and loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and deploy images to your compute targets. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md).
### <a id="azuremlcompute"></a> Batch inference Azure Machine Learning Compute targets are created and managed by Azure Machine Learning. They can be used for batch prediction from Azure Machine Learning pipelines.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-app-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-app-insights.md
@@ -5,7 +5,6 @@ description: Learn how to collect data from models deployed to web service endpo
services: machine-learning ms.service: machine-learning ms.subservice: core
-ms.reviewer: jmartens
ms.author: larryfr author: blackmist ms.date: 09/15/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-export-delete-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-export-delete-data.md
@@ -5,7 +5,6 @@ description: Learn how to export or delete your workspace with the Azure Machine
services: machine-learning ms.service: machine-learning ms.subservice: core
-ms.reviewer: jmartens
author: lobrien ms.author: laobri ms.date: 04/24/2020
@@ -14,8 +13,6 @@ ms.custom: how-to
--- # Export or delete your Machine Learning service workspace data -- In Azure Machine Learning, you can export or delete your workspace data using either the portal's graphical interface or the Python SDK. This article describes both options. [!INCLUDE [GDPR-related guidance](../../includes/gdpr-dsr-and-stp-note.md)]
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-quotas.md
@@ -5,7 +5,6 @@ description: Learn about the quotas and limits on resources for Azure Machine Le
services: machine-learning ms.service: machine-learning ms.subservice: core
-ms.reviewer: jmartens
author: nishankgu ms.author: nigup ms.date: 12/1/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-web-service.md
@@ -5,7 +5,6 @@ description: Learn how to enable HTTPS with TLS version 1.2 to secure a web serv
services: machine-learning ms.service: machine-learning ms.subservice: core
-ms.reviewer: jmartens
ms.author: aashishb author: aashishb ms.date: 01/04/2021
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-training-targets.md
@@ -215,6 +215,8 @@ method, or from the Experiment tab view in Azure Machine Learning studio client
If you are submitting a user-created environment with your run, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
+* **Compute target takes a long time to start**: The Docker images for compute targets are loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and load images. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md).
+ ## Next steps * [Tutorial: Train a model](tutorial-train-models-with-aml.md) uses a managed compute target to train a model.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-vs-code-remote https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-vs-code-remote.md
@@ -7,8 +7,8 @@ ms.service: machine-learning
ms.subservice: core ms.topic: conceptual ms.custom: how-to
-ms.author: jmartens
-author: j-martens
+ms.author: luquinta
+author: luisquintanilla
ms.date: 11/16/2020 # As a data scientist, I want to connect to an Azure Machine Learning compute instance in Visual Studio Code to access my resources and run my code. ---
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-track-designer-experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-track-designer-experiments.md
@@ -8,7 +8,7 @@ ms.author: keli19
ms.reviewer: peterlu ms.service: machine-learning ms.subservice: core
-ms.date: 11/25/2020
+ms.date: 01/11/2021
ms.topic: conceptual ms.custom: designer ---
@@ -22,7 +22,7 @@ For more information on logging metrics using the SDK authoring experience, see
## Enable logging with Execute Python Script
-Use the __Execute Python Script__ module to enable logging in designer pipelines. Although you can log any value with this workflow, it's especially useful to log metrics from the __Evaluate Model__ module to track model performance across runs.
+Use the [Execute Python Script](./algorithm-module-reference/execute-python-script.md) module to enable logging in designer pipelines. Although you can log any value with this workflow, it's especially useful to log metrics from the __Evaluate Model__ module to track model performance across runs.
The following example shows you how to log the mean squared error of two trained models using the Evaluate Model and Execute Python Script modules.
@@ -48,7 +48,7 @@ The following example shows you how to log the mean squared error of two trained
# Log left output port result of Evaluate Model. This also works when evaluate only 1 model. parent_run.log(name='Mean_Absolute_Error (left port)', value=dataframe1['Mean_Absolute_Error'][0])
- # Log right output port result of Evaluate Model.
+ # Log right output port result of Evaluate Model. The following line should be deleted if you only connect one Score Module to the left port of the Evaluate Model module.
parent_run.log(name='Mean_Absolute_Error (right port)', value=dataframe1['Mean_Absolute_Error'][1]) return dataframe1,
@@ -76,3 +76,4 @@ In this article, you learned how to use logs in the designer. For next steps, se
* Learn how to troubleshoot designer pipelines, see [Debug & troubleshoot ML pipelines](how-to-debug-pipelines.md#azure-machine-learning-designer). * Learn how to use the Python SDK to log metrics in the SDK authoring experience, see [Enable logging in Azure ML training runs](how-to-track-experiments.md).
+* Learn how to use [Execute Python Script](./algorithm-module-reference/execute-python-script.md) in the designer.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-with-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-datasets.md
@@ -20,10 +20,12 @@ ms.custom: how-to, devx-track-python, data4ml
# Train with datasets in Azure Machine Learning
-In this article, you learn how to work with [Azure Machine Learning datasets](/python/api/azureml-core/azureml.core.dataset%28class%29?preserve-view=true&view=azure-ml-py) in your training experiments. You can use datasets in your local or remote compute target without worrying about connection strings or data paths.
+In this article, you learn how to work with [Azure Machine Learning datasets](/python/api/azureml-core/azureml.core.dataset%28class%29?preserve-view=true&view=azure-ml-py) to train machine learning models. You can use datasets in your local or remote compute target without worrying about connection strings or data paths.
Azure Machine Learning datasets provide a seamless integration with Azure Machine Learning training functionality like [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig?preserve-view=true&view=azure-ml-py), [HyperDrive](/python/api/azureml-train-core/azureml.train.hyperdrive?preserve-view=true&view=azure-ml-py) and [Azure Machine Learning pipelines](how-to-create-your-first-pipeline.md).
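For example, here is a hedged sketch of feeding a registered dataset to a training run through ScriptRunConfig; the dataset name, source directory, script, compute target, and environment are assumptions, not values from the article.

```python
from azureml.core import Workspace, Dataset, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, name="titanic_ds")           # placeholder: a registered dataset

config = ScriptRunConfig(
    source_directory="./src",                                  # placeholder folder
    script="train_titanic.py",                                 # placeholder training script
    arguments=["--input-data", dataset.as_named_input("titanic")],
    compute_target="cpu-cluster",                              # placeholder compute target
    environment=Environment.get(ws, name="AzureML-Tutorial"),  # placeholder curated environment
)

run = Experiment(ws, "train-with-datasets").submit(config)
run.wait_for_completion(show_output=True)
```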
+If you are not ready to make your data available for model training, but want to load your data to your notebook for data exploration, see how to [explore the data in your dataset](how-to-create-register-datasets.md#explore-data).
+ ## Prerequisites To create and train with datasets, you need:
@@ -32,7 +34,7 @@ To create and train with datasets, you need:
* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py) (>= 1.13.0), which includes the azureml-datasets package.
+* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py) (>= 1.13.0), which includes the `azureml-datasets` package.
> [!Note] > Some Dataset classes have dependencies on the [azureml-dataprep](/python/api/azureml-dataprep/?preserve-view=true&view=azure-ml-py) package. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux, Ubuntu, Fedora, and CentOS.
@@ -63,7 +65,7 @@ The following code configures a script argument `--input-data` that you will spe
> [!Note] > If your original data source contains NaN, empty strings, or blank values, those values are replaced with a *Null* value when you use `to_pandas_dataframe()`.
-If you need to load the prepared data into a new dataset from an in-memory pandas dataframe, write the data to a local file, like a parquet, and create a new dataset from that file. You can also create datasets from local files or paths in datastores. Learn more about [how to create datasets](how-to-create-register-datasets.md).
+If you need to load the prepared data into a new dataset from an in-memory pandas dataframe, write the data to a local file, like a parquet, and create a new dataset from that file. Learn more about [how to create datasets](how-to-create-register-datasets.md).
```Python %%writefile $script_folder/train_titanic.py
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-troubleshoot-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-deployment.md
@@ -7,7 +7,6 @@ ms.service: machine-learning
ms.subservice: core author: gvashishtha ms.author: gopalv
-ms.reviewer: jmartens
ms.date: 11/25/2020 ms.topic: troubleshooting ms.custom: contperf-fy20q4, devx-track-python, deploy, contperf-fy21q2
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-happened-to-workbench https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-happened-to-workbench.md
@@ -7,9 +7,8 @@ ms.service: machine-learning
ms.subservice: core ms.topic: conceptual ms.custom: how-to
-ms.reviewer: jmartens
-author: j-martens
-ms.author: jmartens
+ms.author: larryfr
+author: BlackMist
ms.date: 03/05/2020 --- # What happened to Azure Machine Learning Workbench?
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-azure-ml.md
@@ -5,8 +5,8 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: overview
-author: j-martens
-ms.author: jmartens
+ms.author: larryfr
+author: BlackMist
ms.date: 11/04/2019 ms.custom: devx-track-python ---
@@ -42,7 +42,7 @@ Azure Machine Learning provides all the tools developers and data scientists nee
+ R scripts or notebooks in which you use the <a href="https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html" target="_blank">SDK for R</a> to write your own code, or use the R modules in the designer.
-+ + The [Many Models Solution Accelerator](https://aka.ms/many-models) (preview) builds on Azure Machine Learning and enables you to train, operate, and manage hundreds or even thousands of machine learning models.
++ The [Many Models Solution Accelerator](https://aka.ms/many-models) (preview) builds on Azure Machine Learning and enables you to train, operate, and manage hundreds or even thousands of machine learning models. + [Machine learning extension for Visual Studio Code users](tutorial-setup-vscode-extension.md)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/policy-reference.md
@@ -2,8 +2,8 @@
title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. ms.date: 01/08/2021
-author: j-martens
-ms.author: jmartens
+ms.author: larryfr
+author: BlackMist
ms.topic: reference ms.service: machine-learning ms.custom: subject-policy-reference
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/reference-azure-machine-learning-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-azure-machine-learning-cli.md
@@ -6,7 +6,6 @@ ms.service: machine-learning
ms.subservice: core ms.topic: reference
-ms.reviewer: jmartens
ms.author: jordane author: jpe316 ms.date: 06/22/2020
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-bring-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-bring-data.md
@@ -35,9 +35,6 @@ In this tutorial, you:
## Prerequisites * Completion of [part 3](tutorial-1st-experiment-sdk-train.md) of the series.
-* Introductory knowledge of the Python language and machine learning workflows.
-* Local development environment, such as Visual Studio Code, Jupyter, or PyCharm.
-* Python (version 3.5 to 3.7).
## Adjust the training script
@@ -126,7 +123,7 @@ The `target_path` value specifies the path on the datastore where the CIFAR10 da
>[!TIP] > While you're using Azure Machine Learning to upload the data, you can use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to upload ad hoc files. If you need an ETL tool, you can use [Azure Data Factory](../data-factory/introduction.md) to ingest your data into Azure.
-Run the Python file to upload the data. (The upload should be quick, less than 60 seconds.)
+In the window that has the activated *tutorial1* conda environment, run the Python file to upload the data. (The upload should be quick, less than 60 seconds.)
```bash python 05-upload-data.py
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-hello-world https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-hello-world.md
@@ -31,9 +31,6 @@ In this tutorial, you will:
## Prerequisites - Completion of [part 1](tutorial-1st-experiment-sdk-setup-local.md) if you don't already have an Azure Machine Learning workspace.-- Introductory knowledge of the Python language and machine learning workflows.-- Local development environment, such as Visual Studio Code, Jupyter, or PyCharm.-- Python (version 3.5 to 3.7). ## Create and run a Python script locally
@@ -59,7 +56,7 @@ tutorial
### <a name="test"></a>Test your script locally
-You can run your code locally, by using your favorite IDE or a terminal. Running code locally has the benefit of interactive debugging of code.
+You can run your code locally, by using your favorite IDE or a terminal. Running code locally has the benefit of interactive debugging of code. In the window that has the activated *tutorial1* conda environment, run the Python file:
```bash cd <path/to/tutorial>
@@ -89,8 +86,6 @@ aml_url = run.get_portal_url()
print(aml_url) ``` -- ### Understand the code Here's a description of how the control script works:
@@ -143,13 +138,6 @@ Here's a description of how the control script works:
Run your control script, which in turn runs `hello.py` on the compute cluster that you created in the [setup tutorial](tutorial-1st-experiment-sdk-setup-local.md).
-The very first run will take 5-10 minutes to complete. This is because the following occurs:
-
-* A docker image is built in the cloud
-* The compute cluster is resized from 0 to 1 node
-* The docker image is downloaded to the compute.
-
-Subsequent runs are much quicker (~15 seconds) as the docker image is cached on the compute - you can test this by resubmitting the code below after the first run has completed.
```bash python 03-run-hello.py
@@ -163,11 +151,18 @@ python 03-run-hello.py
## <a name="monitor"></a>Monitor your code in the cloud by using the studio
-The output will contain a link to the studio that looks something like this:
+The output from your script will contain a link to the studio that looks something like this:
`https://ml.azure.com/experiments/hello-world/runs/<run-id>?wsid=/subscriptions/<subscription-id>/resourcegroups/<resource-group>/workspaces/<workspace-name>`.
-Follow the link and go to the **Outputs + logs** tab. There you can see a
-`70_driver_log.txt` file that looks like this:
+Follow the link. At first, you'll see a status of **Preparing**. The very first run will take 5-10 minutes to complete. This is because the following occurs:
+
+* A Docker image is built in the cloud.
+* The compute cluster is resized from 0 to 1 node.
+* The Docker image is downloaded to the compute.
+
+Subsequent runs are much quicker (~15 seconds) because the Docker image is cached on the compute. You can test this by resubmitting the code below after the first run has completed.
+
+Once the job completes, go to the **Outputs + logs** tab. There you can see a `70_driver_log.txt` file that looks like this:
```txt 1: [2020-08-04T22:15:44.407305] Entering context manager injector.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-setup-local.md
@@ -27,30 +27,47 @@ In part 1 of this tutorial series, you will:
> * Set up a compute cluster. > [!NOTE]
-> This tutorial series focuses the Azure Machine Learning concepts suited to Python *jobs-based* machine learning tasks that are compute-intensive and/or require reproducibility. If you are more interested in an exploratory workflow, you could instead use [Jupyter or RStudio on an Azure Machine Learning compute instance](tutorial-1st-experiment-sdk-setup.md).
+> This tutorial series focuses on the Azure Machine Learning concepts required to submit **batch jobs**: code that is submitted to the cloud and runs in the background without any user interaction. This is useful for finished scripts or code you want to run repeatedly, or for compute-intensive machine learning tasks. If you are more interested in an exploratory workflow, you could instead use [Jupyter or RStudio on an Azure Machine Learning compute instance](tutorial-1st-experiment-sdk-setup.md).
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try [Azure Machine Learning](https://aka.ms/AMLFree).-- Familiarity with Python and [Machine Learning concepts](concept-azure-machine-learning-architecture.md). Examples include environments, training, and scoring.-- Local development environment, such as Visual Studio Code, Jupyter, or PyCharm.-- Python (version 3.5 to 3.7).-
+- [Anaconda](https://www.anaconda.com/download/) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) to manage Python virtual environments and install packages.
## Install the Azure Machine Learning SDK
-Throughout this tutorial, we make use of the Azure Machine Learning SDK for Python.
+Throughout this tutorial, you'll use the Azure Machine Learning SDK for Python. To avoid Python dependency issues, you'll create an isolated environment. This tutorial series uses Conda to create that environment. If you prefer to use other solutions, such as `venv`, `virtualenv`, or Docker, make sure you use a Python version >=3.5 and <3.9.
-You can use the tools most familiar to you (for example, Conda and pip) to set up a Python environment to use throughout this tutorial. Install into your Python environment the Azure Machine Learning SDK for Python via pip:
+Check if you have Conda installed on your system:
+
+```bash
+conda --version
+```
+
+If this command returns a `conda not found` error, [download and install Miniconda](https://docs.conda.io/en/latest/miniconda.html).
+
+Once you have installed Conda, use a terminal or Anaconda Prompt window to create a new environment:
```bash
+conda create -n tutorial python=3.7
+```
+
+Next, install the Azure Machine Learning SDK into the conda environment you created:
+
+```bash
+conda activate tutorial
pip install azureml-sdk
```
+
+> [!NOTE]
+> It takes approximately 5 minutes for the Azure Machine Learning SDK install to complete.
+ > [!div class="nextstepaction"] > [I installed the SDK](?success=install-sdk#dir) [I ran into an issue](https://www.research.net/r/7C8Z3DN?issue=install-sdk) ## <a name="dir"></a>Create a directory structure for code+ We recommend that you set up the following simple directory structure for this tutorial: ```markdown
@@ -63,8 +80,9 @@ tutorial
> [!TIP] > You can create the hidden .azureml subdirectory in a terminal window. Or use the following:
+>
> * In a Mac Finder window use **Command + Shift + .** to toggle the ability to see and create directories that begin with a dot.
-> * In Windows 10, see [how to view hidden files and folders](https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-10-97fbc472-c603-9d90-91d0-1166d1d9f4b5).
+> * In a Windows 10 File Explorer, see [how to view hidden files and folders](https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-10-97fbc472-c603-9d90-91d0-1166d1d9f4b5).
> * In the Linux Graphical Interface, use **Ctrl + h** or the **View** menu and check the box to **Show hidden files**. > [!div class="nextstepaction"]
@@ -99,7 +117,7 @@ ws = Workspace.create(name='<my_workspace_name>', # provide a name for your work
ws.write_config(path='.azureml')
```
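Pieced together, a workspace-creation script like *01-create-workspace.py* might look like the following sketch. The subscription ID, resource group, and region values are placeholders you'd replace with your own.

```python
# Hedged sketch of a workspace-creation script. All <...> values are placeholders.
from azureml.core import Workspace

ws = Workspace.create(name="<my_workspace_name>",             # provide a name for your workspace
                      subscription_id="<azure-subscription-id>",
                      resource_group="<myresourcegroup>",
                      create_resource_group=True,
                      location="<NAME_OF_REGION>")             # for example, 'westeurope'

# Write the workspace details to .azureml/config.json so later scripts
# can load the workspace with Workspace.from_config().
ws.write_config(path=".azureml")
```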
-Run this code from the `tutorial` directory:
+In the window that has the activated *tutorial* conda environment, run this code from the `tutorial` directory.
```bash cd <path/to/tutorial>
@@ -159,7 +177,7 @@ except ComputeTargetException:
cpu_cluster.wait_for_completion(show_output=True)
```
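Assembled into one place, a compute-creation script like *02-create-compute.py* could look like this sketch; the cluster name, VM size, and scale settings are assumptions you can change.

```python
# Hedged sketch of a compute-cluster creation script. The cluster name and
# VM size are assumptions; the cluster is reused if it already exists.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

ws = Workspace.from_config()
cluster_name = "cpu-cluster"   # assumed name, referenced later by the control scripts

try:
    cpu_cluster = ComputeTarget(workspace=ws, name=cluster_name)
    print("Found existing cluster, using it.")
except ComputeTargetException:
    config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
                                                   idle_seconds_before_scaledown=2400,
                                                   min_nodes=0,
                                                   max_nodes=4)
    cpu_cluster = ComputeTarget.create(ws, cluster_name, config)

cpu_cluster.wait_for_completion(show_output=True)
```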
-Run the Python file:
+In the window that has the activated *tutorial* conda environment, run the Python file:
```bash
python ./02-create-compute.py
```
@@ -182,6 +200,19 @@ tutorial
> [!div class="nextstepaction"] > [I created a compute cluster](?success=create-compute-cluster#next-steps) [I ran into an issue](https://www.research.net/r/7C8Z3DN?issue=create-compute-cluster)
+## View in the studio
+
+Sign in to [Azure Machine Learning studio](https://ml.azure.com) to view the workspace and compute cluster you created.
+
+1. Select the **Subscription** you used to create the workspace.
+1. Select the **Machine Learning workspace** you created, *tutorial-ws*.
+1. Once the workspace loads, on the left side, select **Compute**.
+1. At the top, select the **Compute clusters** tab.
+
+:::image type="content" source="media/tutorial-1st-experiment-sdk-local/compute-instance-in-studio.png" alt-text="Screenshot: View the compute cluster in your workspace.":::
+
+This view shows the provisioned compute cluster, along with the number of idle nodes, busy nodes, and unprovisioned nodes. Since you haven't used the cluster yet, all the nodes are currently unprovisioned.
+ ## Next steps In this setup tutorial, you have:
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
@@ -36,9 +36,6 @@ In this tutorial, you:
## Prerequisites * Completion of [part 2](tutorial-1st-experiment-hello-world.md) of the series.
-* Introductory knowledge of the Python language and machine learning workflows.
-* Local development environment, such as Visual Studio Code, Jupyter, or PyCharm.
-* Python (version 3.5 to 3.7).
## Create training scripts
@@ -48,8 +45,7 @@ The following code is taken from [this introductory example](https://pytorch.org
:::code language="python" source="~/MachineLearningNotebooks/tutorials/get-started-day1/IDE-users/src/model.py":::
-Next you define the training script. This script downloads the CIFAR10 dataset by using PyTorch `torchvision.dataset` APIs, sets up the network defined in
-`model.py`, and trains it for two epochs by using standard SGD and cross-entropy loss.
+Next you define the training script. This script downloads the CIFAR10 dataset by using PyTorch `torchvision.dataset` APIs, sets up the network defined in `model.py`, and trains it for two epochs by using standard SGD and cross-entropy loss.
Create a `train.py` script in the `src` subdirectory:
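The full script is included from the linked sample repository; purely as a hedged illustration, a training script with the behavior described above (download CIFAR10, build the network from `model.py`, train for two epochs with SGD and cross-entropy loss) could look roughly like this:

```python
# Hedged sketch of a CIFAR10 training script; assumes model.py defines a class Net.
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

from model import Net

if __name__ == "__main__":
    # Download CIFAR10 into ./data and normalize the images.
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])
    trainset = torchvision.datasets.CIFAR10(root="./data", train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)

    net = Net()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

    # Train for two epochs with standard SGD and cross-entropy loss.
    for epoch in range(2):
        running_loss = 0.0
        for i, (inputs, labels) in enumerate(trainloader):
            optimizer.zero_grad()
            loss = criterion(net(inputs), labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            if i % 2000 == 1999:
                print(f"epoch {epoch + 1}, batch {i + 1:5d}: loss {running_loss / 2000:.3f}")
                running_loss = 0.0
```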
@@ -73,9 +69,7 @@ tutorial
> [!div class="nextstepaction"] > [I created the training scripts](?success=create-scripts#environment) [I ran into an issue](https://www.research.net/r/7CTJQQN?issue=create-scripts)
-## <a name="environment"></a> Create a Python environment
-
-For demonstration purposes, we're going to use a Conda environment. (The steps for a pip virtual environment are almost identical.)
+## <a name="environment"></a> Create a new Python environment
Create a file called `pytorch-env.yml` in the `.azureml` hidden directory:
@@ -88,18 +82,19 @@ This environment has all the dependencies that your model and training script re
## <a name="test-local"></a> Test locally
-Use the following code to test your script runs locally in this environment:
+Use the following code to test your script locally in the new environment.
```bash
-conda env create -f .azureml/pytorch-env.yml # create conda environment
-conda activate pytorch-env # activate conda environment
+conda deactivate # If you are still using the tutorial environment, exit it
+conda env create -f .azureml/pytorch-env.yml # create the new Conda environment
+conda activate pytorch-env # activate new Conda environment
python src/train.py # train model ``` After you run this script, you'll see the data downloaded into a directory called `tutorial/data`. > [!div class="nextstepaction"]
-> [I created the environment file](?success=test-local#create-local) [I ran into an issue](https://www.research.net/r/7CTJQQN?issue=test-local)
+> [I ran the code locally](?success=test-local#create-local) [I ran into an issue](https://www.research.net/r/7CTJQQN?issue=test-local)
## <a name="create-local"></a> Create the control script
@@ -159,11 +154,11 @@ if __name__ == "__main__":
## <a name="submit"></a> Submit the run to Azure Machine Learning
-If you switched local environments, be sure to switch back to an environment that has the Azure Machine Learning SDK for Python installed.
-
-Then run:
+Switch back to the *tutorial* environment that has the Azure Machine Learning SDK for Python installed. Since the training code isn't running on your computer, you don't need to have PyTorch installed. But you do need the `azureml-sdk`, which is in the *tutorial* environment.
```bash
+conda deactivate
+conda activate tutorial
python 04-run-pytorch.py
```
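For reference, a control script like *04-run-pytorch.py* could combine the pieces above roughly as follows; the experiment name and cluster name are assumptions that mirror the earlier steps.

```python
# Hedged sketch of a control script that submits src/train.py with the Conda
# environment defined in .azureml/pytorch-env.yml. Names below are assumptions.
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name="day1-experiment-train")   # assumed name

config = ScriptRunConfig(source_directory="./src",
                         script="train.py",
                         compute_target="cpu-cluster")                # assumed cluster name

# Attach the PyTorch environment so the cloud run has the same dependencies
# you tested locally.
env = Environment.from_conda_specification(name="pytorch-env",
                                           file_path=".azureml/pytorch-env.yml")
config.run_config.environment = env

run = experiment.submit(config)
print(run.get_portal_url())
```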
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-power-bi-custom-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-power-bi-custom-model.md
@@ -1,7 +1,7 @@
---
-title: "Tutorial: Create the predictive model by using a notebook (part 1 of 2)"
+title: "Tutorial: Create the predictive model with a notebook (part 1 of 2)"
titleSuffix: Azure Machine Learning
-description: Learn how to build and deploy a machine learning model by using code in a Jupyter Notebook. You can use the model to predict outcomes in Microsoft Power BI.
+description: Learn how to build and deploy a machine learning model by using code in a Jupyter Notebook. Also create a scoring script that defines input and output for easy integration into Microsoft Power BI.
services: machine-learning ms.service: machine-learning ms.subservice: core
@@ -12,9 +12,9 @@ ms.reviewer: sdgilley
ms.date: 12/11/2020 ---
-# Tutorial: Power BI integration - Create the predictive model by using a Jupyter Notebook (part 1 of 2)
+# Tutorial: Power BI integration - Create the predictive model with a Jupyter Notebook (part 1 of 2)
-In part 1 of this tutorial, you train and deploy a predictive machine learning model by using code in a Jupyter Notebook. In part 2, you'll use the model to predict outcomes in Microsoft Power BI.
+In part 1 of this tutorial, you train and deploy a predictive machine learning model by using code in a Jupyter Notebook. You will also create a scoring script to define the input and output schema of the model for integration into Power BI. In part 2, you'll use the model to predict outcomes in Microsoft Power BI.
In this tutorial, you:
@@ -22,6 +22,7 @@ In this tutorial, you:
> * Create a Jupyter Notebook. > * Create an Azure Machine Learning compute instance. > * Train a regression model by using scikit-learn.
+> * Write a scoring script that defines the input and output for easy integration into Microsoft Power BI.
> * Deploy the model to a real-time scoring endpoint. There are three ways to create and deploy the model that you'll use in Power BI. This article covers "Option A: Train and deploy models by using notebooks." This option is a code-first authoring experience. It uses Jupyter notebooks that are hosted in Azure Machine Learning Studio.
@@ -152,7 +153,7 @@ You can also view the model in Azure Machine Learning Studio. In the menu on the
:::image type="content" source="media/tutorial-power-bi/model.png" alt-text="Screenshot showing how to view a model.":::
-### Define the scoring script
+## Define the scoring script
When you deploy a model that will be integrated into Power BI, you need to define a Python *scoring script* and custom environment. The scoring script contains two functions:
@@ -160,7 +161,7 @@ When you deploy a model that will be integrated into Power BI, you need to defin
- The `run(data)` function runs when a call to the service includes input data that needs to be scored. >[!NOTE]
-> This article uses Python decorators to define the schema of the input and output data. This setup is important for the Power BI integration.
+> The Python decorators in the code below define the schema of the input and output data, which is important for integration into Power BI.
Copy the following code and paste it into a new *code cell* in your notebook. The following code snippet has cell magic that writes the code to a file named *score.py*.
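As a hedged sketch only, a *score.py* of the kind described here could look like the following. The registered model name, feature columns, and sample values are hypothetical, and in the notebook a `%%writefile score.py` cell magic writes the real script to disk.

```python
# Hedged sketch of a scoring script. The model name and feature columns are hypothetical.
import joblib
import numpy as np
import pandas as pd
from azureml.core.model import Model
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType

def init():
    # Runs once when the web service starts: load the registered model.
    global model
    model_path = Model.get_model_path("my-regression-model")   # hypothetical model name
    model = joblib.load(model_path)

# Sample input and output; the decorators use these to publish the schema
# that Power BI reads when it discovers the endpoint.
input_sample = pd.DataFrame({"feature_1": [1.0], "feature_2": [2.0]})   # hypothetical columns
output_sample = np.array([0.0])

@input_schema("data", PandasParameterType(input_sample))
@output_schema(NumpyParameterType(output_sample))
def run(data):
    # Runs on every scoring request: score the incoming DataFrame.
    return model.predict(data).tolist()
```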
marketplace https://docs.microsoft.com/en-us/azure/marketplace/determine-your-listing-type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/determine-your-listing-type.md
@@ -6,7 +6,7 @@ ms.subservice: partnercenter-marketplace-publisher
ms.topic: conceptual author: trkeya ms.author: trkeya
-ms.date: 11/16/2020
+ms.date: 12/18/2020
--- # Introduction to listing options
@@ -21,22 +21,24 @@ When you create an offer type, you choose one or more listing options. These opt
This table shows which listing options are available for each offer type.
-| Offer type | Free Trial | Test Drive | Contact Me | Get It Now (Transactable) |
+| Offer type | Free Trial | Test Drive | Contact Me | Get It Now `*` |
| ------------ | ------------- | ------------- | ------------- | ------------- | | Azure Application (Managed app) | | &#10004; | | &#10004; |
-| Azure Application (Solution template) | | | | |
+| Azure Application (Solution template) | | | | &#10004; |
| Consulting service | | | &#10004; | |
-| Azure Container | | | | |
-| Dynamics 365 business central | &#10004; | &#10004; | &#10004; | |
-| Dynamics 365 Customer Engagement & PowerApps | &#10004; | &#10004; | &#10004; | |
-| Dynamics 365 for operations | &#10004; | &#10004; | &#10004; | |
-| IoT Edge module | | | | |
-| Managed Service | | | | |
-| Power BI App | | | | |
+| Azure Container | | | | &#10004; |
+| Dynamics 365 business central | &#10004; | &#10004; | &#10004; | &#10004; |
+| Dynamics 365 Customer Engagement & PowerApps | &#10004; | &#10004; | &#10004; | &#10004; |
+| Dynamics 365 for operations | &#10004; | &#10004; | &#10004; | &#10004; |
+| IoT Edge module | | | | &#10004; |
+| Managed Service | | | | &#10004; |
+| Power BI App | | | | &#10004; |
| Azure Virtual Machine | &#10004; | &#10004; | | &#10004; |
| Software as a service | &#10004; | &#10004; | &#10004; | &#10004; |
||||||
+`*` The Get It Now listing option includes Get It Now (Free), bring your own license (BYOL), Subscription, and Usage-based pricing. For details, see [Get It Now](#get-it-now).
+ ### Free Trial Use the commercial marketplace to enhance discoverability and automate provisioning of your solution's trial experience. This enables prospective customers to use your software as a service (SaaS), IaaS or Microsoft in-app experience at no cost from 30 days to six months, depending on the offer type.
migrate https://docs.microsoft.com/en-us/azure/migrate/server-migrate-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/server-migrate-overview.md
@@ -28,7 +28,7 @@ Use these selected comparisons to help you decide which method to use. You can a
**Appliance deployment** | The [Azure Migrate appliance](migrate-appliance.md) is deployed on-premises. | The [Azure Migrate Replication appliance](migrate-replication-appliance.md) is deployed on-premises. **Site Recovery compatibility** | Compatible. | You can't replicate with Azure Migrate Server Migration if you've set up replication for a machine using Site Recovery. **Target disk** | Managed disks | Managed disks
-**Disk limits** | OS disk: 2 TB<br/><br/> Data disk: 32 TB<br/><br/> Maximum disks: 60 | OS disk: 2 TB<br/><br/> Data disk: 8 TB<br/><br/> Maximum disks: 63
+**Disk limits** | OS disk: 2 TB<br/><br/> Data disk: 32 TB<br/><br/> Maximum disks: 60 | OS disk: 2 TB<br/><br/> Data disk: 32 TB<br/><br/> Maximum disks: 63
**Passthrough disks** | Not supported | Supported **UEFI boot** | Supported. | Supported.
migrate https://docs.microsoft.com/en-us/azure/migrate/troubleshoot-changed-block-tracking-replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-changed-block-tracking-replication.md
@@ -293,6 +293,24 @@ This is a known VMware issue in which the disk size indicated by snapshot become
This happens when the NFC host buffer is out of memory. To resolve this issue, you need to move the VM (compute vMotion) to a different host that has free resources.
+## Replication cycle failed
+
+**Error ID:** 181008
+
+**Error Message:** VM: 'VMName'. Error: No disksnapshots were found for the snapshot replication with snapshot Id : 'SnapshotID'.
+
+**Possible Causes:**
+
+Possible reasons are:
+1. The path of one or more included disks changed because of Storage vMotion.
+2. One or more included disks are no longer attached to the VM.
+
+**Recommendation:**
+
+Try the following recommendations:
+1. Restore the included disks to their original path by using Storage vMotion, and then disable Storage vMotion.
+2. If Storage vMotion is enabled, disable it, stop replication on the virtual machine, and replicate the virtual machine again. If the issue persists, contact support.
+ ## Next Steps Continue VM replication, and perform [test migration](./tutorial-migrate-vmware.md#run-a-test-migration).
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/traffic-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics.md
@@ -265,7 +265,7 @@ Some of the insights you might want to gain after Traffic Analytics is fully con
- Statistics of blocked traffic. - Why is a host blocking a significant volume of benign traffic? This behavior requires further investigation and probably optimization of configuration - Statistics of malicious allowed/blocked traffic
- - Why is a host receiving malicious traffic and why flows from malicious source is allowed? This behavior requires further investigation and probably optimization of configuration.
+ - Why is a host receiving malicious traffic and why are flows from malicious sources allowed? This behavior requires further investigation and probably optimization of configuration.
Select **See all**, under **Host**, as shown in the following picture:
@@ -427,4 +427,4 @@ To get answers to frequently asked questions, see [Traffic analytics FAQ](traffi
## Next steps - To learn how to enable flow logs, see [Enabling NSG flow logging](network-watcher-nsg-flow-logging-portal.md).-- To understand the schema and processing details of Traffic Analytics, see [Traffic analytics schema](traffic-analytics-schema.md).\ No newline at end of file
+- To understand the schema and processing details of Traffic Analytics, see [Traffic analytics schema](traffic-analytics-schema.md).
security-center https://docs.microsoft.com/en-us/azure/security-center/alerts-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 01/05/2021
+ms.date: 01/11/2021
ms.author: memildin ---
@@ -208,41 +208,44 @@ At the bottom of this page, there's a table describing the Azure Security Center
[Further details and notes](defender-for-app-service-introduction.md)
-| Alert (Alert Type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
-|------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------:|----------|
-| **An attempt to run Linux commands on a Windows App Service**<br>(AppServices_LinuxCommandOnWindows) | Analysis of App Service processes detected an attempt to run a Linux command on a Windows App Service. This action was running by the web application. This behavior is often seen during campaigns that exploit a vulnerability in a common web application. | - | Medium |
-| **An IP that connected to your Azure App Service FTP Interface was found in Threat Intelligence**<br>(AppServices_IncomingTiClientIpFtp) | Azure App Service FTP log indicates a connection from a source address that was found in the threat intelligence feed. During this connection, a user accessed the pages listed. | InitialAccess | Medium |
-| **Anomalous requests pattern detected**<br>(AppServices_HttpAnomalies) | Azure App Service activity log indicates an anomalous HTTP activity to the App Service from %{Source IP.<br>This activity resembles a pattern of fuzzing or brute force activity. | - | Medium |
-| **Attempt to run high privilege command detected**<br>(AppServices_HighPrivilegeCommand) | Analysis of App Service processes detected an attempt to run a command that requires high privileges.<br>The command ran in the web application context. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities. | - | Medium |
-| **Azure Security Center test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Azure Security Center. No further action is needed. | - | High |
-| **Connection to web page from anomalous IP address detected**<br>(AppServices_AnomalousPageAccess) | Azure App Service activity log indicates a connection to a sensitive web page from a source IP address that hasn't connected to it before. This might indicate that someone is attempting a brute force attack into your web app administration pages. It might also be the result of a new IP address being used by a legitimate user. | InitialAccess | Medium |
-| **Detected encoded executable in command line data**<br>(AppServices_Base64EncodedExecutableInCommandLineParams) | Analysis of host data on {Compromised host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on-the-fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. | DefenseEvasion, Execution | High |
-| **Digital currency mining related behavior detected**<br>(AppServices_DigitalCurrencyMining) | Analysis of host data on Inn-Flow-WebJobs detected the execution of a process or command normally associated with digital currency mining. | Execution | High |
-| **Executable decoded using certutil**<br>(AppServices_ExecutableDecodedUsingCertutil) | Analysis of host data on [Compromised entity] detected that certutil.exe, a built-in administrator utility, was being used to decode an executable instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that will then be subsequently executed. | DefenseEvasion, Execution | High |
-| **Fileless Attack Behavior Detected**<br>(AppServices_FilelessAttackBehaviorDetection) | The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors} | Execution | Medium |
-| **Fileless Attack Technique Detected**<br>(AppServices_FilelessAttackTechniqueDetection) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors} | Execution | High |
-| **Fileless Attack Toolkit Detected**<br>(AppServices_FilelessAttackToolkitDetection) | The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically do not have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors} | DefenseEvasion, Execution | High |
-| **Phishing content hosted on Azure Webapps**<br>(AppServices_PhishingContent) | URL used for phishing attack found on the Azure AppServices website. This URL was part of a phishing attack sent to Microsoft 365 customers. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website. | Collection | High |
-| **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder does not usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities. | Execution | Medium |
-| **Raw data download detected**<br>(AppServices_DownloadCodeFromWebsite) | Analysis of App Service processes detected an attempt to download code from raw-data websites such as Pastebin. This action was run by a PHP process. This behavior is associated with attempts to download web shells or other malicious components to the App Service. | Execution | Medium |
-| **Saving curl output to disk detected**<br>(AppServices_CurlToDisk) | Analysis of App Service processes detected the running of a curl command in which the output was saved to the disk. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells. | - | Low |
-| **Spam folder referrer detected**<br>(AppServices_SpamReferrer) | Azure App Service activity log indicates web activity that was identified as originating from a web site associated with spam activity. This can occur if your website is compromised and used for spam activity. | - | Low |
-| **Suspicious access to possibly vulnerable web page detected**<br>(AppServices_ScanSensitivePage) | Azure App Service activity log indicates a web page that seems to be sensitive was accessed. This suspicious activity originated from a source IP address whose access pattern resembles that of a web scanner.<br>This activity is often associated with an attempt by an attacker to scan your network to try and gain access to sensitive or vulnerable web pages. | - | Low |
-| **Suspicious download using Certutil detected**<br>(AppServices_DownloadUsingCertutil) | Analysis of host data on {NAME} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. | Execution | Medium |
-| **Suspicious PHP execution detected**<br>(AppServices_SuspectPhp) | Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run operating system commands or PHP code from the command line, by using the PHP process. While this behavior can be legitimate, in web applications this behavior might indicate malicious activities, such as attempts to infect websites with web shells. | Execution | Medium |
-| **Suspicious PowerShell cmdlets executed**<br>(AppServices_PowerShellPowerSploitScriptExecution) | Analysis of host data indicates execution of known malicious PowerShell PowerSploit cmdlets. | Execution | Medium |
-| **Suspicious process executed**<br>(AppServices_KnownCredentialAccessTools) | Machine logs indicate that the suspicious process: '%{process path}' was running on the machine, often associated with attacker attempts to access credentials. | CredentialAccess | High |
-| **Suspicious process name detected**<br>(AppServices_ProcessWithKnownSuspiciousExtension) | Analysis of host data on {NAME} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. | Persistence, DefenseEvasion | Medium |
-| **Suspicious SVCHOST process executed**<br>(AppServices_SVCHostFromInvalidPath) | The system process SVCHOST was observed running in an abnormal context. Malware often use SVCHOST to mask its malicious activity. | DefenseEvasion, Execution | High |
-| **Suspicious User Agent detected**<br>(AppServices_UserAgentInjection) | Azure App Service activity log indicates requests with suspicious user agent. This behavior can indicate on attempts to exploit a vulnerability in your App Service application. | InitialAccess | Medium |
-| **Suspicious WordPress theme invocation detected**<br>(AppServices_WpThemeInjection) | Azure App Service activity log indicates a possible code injection activity on your App Service resource.<br>The suspicious activity detected resembles that of a manipulation of WordPress theme to support server side execution of code, followed by a direct web request to invoke the manipulated theme file.<br>This type of activity was seen in the past as part of an attack campaign over WordPress. | Execution | High |
-| **Vulnerability scanner detected**<br>(AppServices_DrupalScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting a content management system (CMS). | PreAttack | Medium |
-| **Vulnerability scanner detected**<br>(AppServices_JoomlaScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting Joomla applications. | PreAttack | Medium |
-| **Vulnerability scanner detected**<br>(AppServices_WpScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting WordPress applications. | PreAttack | Medium |
-| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities. | PreAttack | Medium |
-| **Web fingerprinting detected**<br>(AppServices_WebFingerprinting) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with a tool called Blind Elephant. The tool fingerprint web servers and tries to detect the installed applications and version.<br>Attackers often use this tool for probing the web application to find vulnerabilities. | PreAttack | Medium |
-| **Website is tagged as malicious in threat intelligence feed**<br>(AppServices_SmartScreen) | Your website as described below is marked as a malicious site by Windows SmartScreen. If you think this is a false positive, contact Windows SmartScreen via report feedback link provided. | Collection | Medium |
-| | |
+| Alert (Alert Type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+|------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------:|----------|
+| **An attempt to run Linux commands on a Windows App Service**<br>(AppServices_LinuxCommandOnWindows) | Analysis of App Service processes detected an attempt to run a Linux command on a Windows App Service. This action was run by the web application. This behavior is often seen during campaigns that exploit a vulnerability in a common web application. <br>(Applies to: App Service on Windows) | - | Medium |
+| **An IP that connected to your Azure App Service FTP Interface was found in Threat Intelligence**<br>(AppServices_IncomingTiClientIpFtp) | Azure App Service FTP log indicates a connection from a source address that was found in the threat intelligence feed. During this connection, a user accessed the pages listed. <br>(Applies to: App Service on Windows and App Service on Linux) | InitialAccess | Medium |
+| **Attempt to run high privilege command detected**<br>(AppServices_HighPrivilegeCommand) | Analysis of App Service processes detected an attempt to run a command that requires high privileges.<br>The command ran in the web application context. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities. <br>(Applies to: App Service on Windows) | - | Medium |
+| **Azure Security Center test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Azure Security Center. No further action is needed. <br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
+| **Connection to web page from anomalous IP address detected**<br>(AppServices_AnomalousPageAccess) | Azure App Service activity log indicates a connection to a sensitive web page from a source IP address that hasn't connected to it before. This might indicate that someone is attempting a brute force attack into your web app administration pages. It might also be the result of a new IP address being used by a legitimate user. <br>(Applies to: App Service on Windows and App Service on Linux) | InitialAccess | Medium |
+| **Detected encoded executable in command line data**<br>(AppServices_Base64EncodedExecutableInCommandLineParams) | Analysis of host data on {Compromised host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on-the-fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. <br>(Applies to: App Service on Windows) | DefenseEvasion, Execution | High |
+| **Detected file download from a known malicious source**<br>(AppServices_SuspectDownload) | Analysis of host data has detected the download of a file from a known malware source on your host <br>(Applies to: App Service on Linux) | PrivilegeEscalation, Execution, Exfiltration, CommandAndControl | Medium |
+| **Digital currency mining related behavior detected**<br>(AppServices_DigitalCurrencyMining) | Analysis of host data on Inn-Flow-WebJobs detected the execution of a process or command normally associated with digital currency mining. <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High |
+| **Executable decoded using certutil**<br>(AppServices_ExecutableDecodedUsingCertutil) | Analysis of host data on [Compromised entity] detected that certutil.exe, a built-in administrator utility, was being used to decode an executable instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that will then be subsequently executed. <br>(Applies to: App Service on Windows) | DefenseEvasion, Execution | High |
+| **Fileless Attack Behavior Detected**<br>(AppServices_FilelessAttackBehaviorDetection) | The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors} <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium |
+| **Fileless Attack Technique Detected**<br>(AppServices_FilelessAttackTechniqueDetection) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors} <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High |
+| **Fileless Attack Toolkit Detected**<br>(AppServices_FilelessAttackToolkitDetection) | The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically do not have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors} <br>(Applies to: App Service on Windows and App Service on Linux) | DefenseEvasion, Execution | High |
+| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities. <br>(Applies to: App Service on Windows) | PreAttack | Medium |
+| **Phishing content hosted on Azure Webapps**<br>(AppServices_PhishingContent) | URL used for phishing attack found on the Azure AppServices website. This URL was part of a phishing attack sent to Microsoft 365 customers. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website. <br>(Applies to: App Service on Windows and App Service on Linux) | Collection | High |
+| **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder does not usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities. <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium |
+| **Possible Cryptocoinminer download detected**<br>(AppServices_CryptoCoinMinerDownload) | Analysis of host data has detected the download of a file normally associated with digital currency mining <br>(Applies to: App Service on Linux) | DefenseEvasion, CommandAndControl, Exploitation | Medium |
+| **Potential reverse shell detected**<br>(AppServices_ReverseShell) | Analysis of host data detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. <br>(Applies to: App Service on Linux) | Exfiltration, Exploitation | Medium |
+| **Raw data download detected**<br>(AppServices_DownloadCodeFromWebsite) | Analysis of App Service processes detected an attempt to download code from raw-data websites such as Pastebin. This action was run by a PHP process. This behavior is associated with attempts to download web shells or other malicious components to the App Service. <br>(Applies to: App Service on Windows) | Execution | Medium |
+| **Saving curl output to disk detected**<br>(AppServices_CurlToDisk) | Analysis of App Service processes detected the running of a curl command in which the output was saved to the disk. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells. <br>(Applies to: App Service on Windows) | - | Low |
+| **Spam folder referrer detected**<br>(AppServices_SpamReferrer) | Azure App Service activity log indicates web activity that was identified as originating from a web site associated with spam activity. This can occur if your website is compromised and used for spam activity. <br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
+| **Suspicious access to possibly vulnerable web page detected**<br>(AppServices_ScanSensitivePage) | Azure App Service activity log indicates a web page that seems to be sensitive was accessed. This suspicious activity originated from a source IP address whose access pattern resembles that of a web scanner.<br>This activity is often associated with an attempt by an attacker to scan your network to try and gain access to sensitive or vulnerable web pages. <br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
+| **Suspicious domain name reference**<br>(AppServices_CommandlineSuspectDomain) | Analysis of host data detected a reference to a suspicious domain name. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. <br>(Applies to: App Service on Linux) | Exfiltration | Low |
+| **Suspicious download using Certutil detected**<br>(AppServices_DownloadUsingCertutil) | Analysis of host data on {NAME} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. <br>(Applies to: App Service on Windows) | Execution | Medium |
+| **Suspicious PHP execution detected**<br>(AppServices_SuspectPhp) | Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run operating system commands or PHP code from the command line, by using the PHP process. While this behavior can be legitimate, in web applications this behavior might indicate malicious activities, such as attempts to infect websites with web shells. <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium |
+| **Suspicious PowerShell cmdlets executed**<br>(AppServices_PowerShellPowerSploitScriptExecution) | Analysis of host data indicates execution of known malicious PowerShell PowerSploit cmdlets. <br>(Applies to: App Service on Windows) | Execution | Medium |
+| **Suspicious process executed**<br>(AppServices_KnownCredentialAccessTools) | Machine logs indicate that the suspicious process: '%{process path}' was running on the machine, often associated with attacker attempts to access credentials. <br>(Applies to: App Service on Windows) | CredentialAccess | High |
+| **Suspicious process name detected**<br>(AppServices_ProcessWithKnownSuspiciousExtension) | Analysis of host data on {NAME} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. <br>(Applies to: App Service on Windows) | Persistence, DefenseEvasion | Medium |
+| **Suspicious SVCHOST process executed**<br>(AppServices_SVCHostFromInvalidPath) | The system process SVCHOST was observed running in an abnormal context. Malware often use SVCHOST to mask its malicious activity. <br>(Applies to: App Service on Windows) | DefenseEvasion, Execution | High |
+| **Suspicious User Agent detected**<br>(AppServices_UserAgentInjection) | Azure App Service activity log indicates requests with a suspicious user agent. This behavior can indicate attempts to exploit a vulnerability in your App Service application. <br>(Applies to: App Service on Windows and App Service on Linux) | InitialAccess | Medium |
+| **Suspicious WordPress theme invocation detected**<br>(AppServices_WpThemeInjection) | Azure App Service activity log indicates a possible code injection activity on your App Service resource.<br>The suspicious activity detected resembles that of a manipulation of WordPress theme to support server side execution of code, followed by a direct web request to invoke the manipulated theme file.<br>This type of activity was seen in the past as part of an attack campaign over WordPress. <br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High |
+| **Vulnerability scanner detected**<br>(AppServices_DrupalScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting a content management system (CMS). <br>(Applies to: App Service on Windows) | PreAttack | Medium |
+| **Vulnerability scanner detected**<br>(AppServices_JoomlaScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting Joomla applications. <br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
+| **Vulnerability scanner detected**<br>(AppServices_WpScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting WordPress applications. <br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
+| **Web fingerprinting detected**<br>(AppServices_WebFingerprinting) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with a tool called Blind Elephant. The tool fingerprints web servers and tries to detect the installed applications and versions.<br>Attackers often use this tool for probing the web application to find vulnerabilities. <br>(Applies to: App Service on Windows) | PreAttack | Medium |
+| **Website is tagged as malicious in threat intelligence feed**<br>(AppServices_SmartScreen) | Your website as described below is marked as a malicious site by Windows SmartScreen. If you think this is a false positive, contact Windows SmartScreen via report feedback link provided. <br>(Applies to: App Service on Windows and App Service on Linux) | Collection | Medium |
+| | |
@@ -490,7 +493,7 @@ Security Center's supported kill chain intents are based on [version 7 of the MI
| Tactic | Description | |-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| **PreAttack** | PreAttack could be either an attempt to access a certain resource regardless of a malicious intent, or a failed attempt to gain access to a target system to gather information prior to exploitation. This step is usually detected as an attempt, originating from outside the network, to scan the target system and identify an entry point. </br>Further details on the PreAttack stage can be read in [MITRE's page](https://attack.mitre.org/matrices/pre/). |
+| **PreAttack** | PreAttack could be either an attempt to access a certain resource regardless of a malicious intent, or a failed attempt to gain access to a target system to gather information prior to exploitation. This step is usually detected as an attempt, originating from outside the network, to scan the target system and identify an entry point. |
| **InitialAccess** | InitialAccess is the stage where an attacker manages to get a foothold on the attacked resource. This stage is relevant for compute hosts and resources such as user accounts, certificates etc. Threat actors will often be able to control the resource after this stage. |
| **Persistence** | Persistence is any access, action, or configuration change to a system that gives a threat actor a persistent presence on that system. Threat actors will often need to maintain access to systems through interruptions such as system restarts, loss of credentials, or other failures that would require a remote access tool to restart or provide an alternate backdoor for them to regain access. |
| **PrivilegeEscalation** | Privilege escalation is the result of actions that allow an adversary to obtain a higher level of permissions on a system or network. Certain tools or actions require a higher level of privilege to work and are likely necessary at many points throughout an operation. User accounts with permissions to access specific systems or perform specific functions necessary for adversaries to achieve their objective may also be considered an escalation of privilege. |
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-adaptive-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-adaptive-application.md
@@ -154,11 +154,6 @@ To edit the rules for a group of machines:
:::image type="content" source="./media/security-center-adaptive-application/adaptive-application-group-settings.png" alt-text="The group settings page for adaptive application controls" lightbox="./media/security-center-adaptive-application/adaptive-application-group-settings.png":::
- > [!IMPORTANT]
- > The **Enforce** option, in the file type protection mode settings, is greyed out in **all** scenarios. No enforcement options are available at this time.
- >
- > :::image type="content" source="./media/security-center-adaptive-application/adaptive-application-modes.png" alt-text="The enforce mode for file protection is permanently grayed out. No enforcement options are available.":::
- 1. Optionally, modify the group's name or file type protection modes. 1. Select **Apply** and **Save**.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-pricing.md
@@ -67,6 +67,7 @@ Below is the pricing page for an example subscription. You'll notice that each p
- [What are the plans offered by Security Center?](#what-are-the-plans-offered-by-security-center) - [How do I enable Azure Defender for my subscription?](#how-do-i-enable-azure-defender-for-my-subscription) - [Can I enable Azure Defender for servers on a subset of servers in my subscription?](#can-i-enable-azure-defender-for-servers-on-a-subset-of-servers-in-my-subscription)
+- [If I already have a license for Microsoft Defender for Endpoint can I get a discount for Azure Defender?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-azure-defender)
- [My subscription has Azure Defender for servers enabled, do I pay for not-running servers?](#my-subscription-has-azure-defender-for-servers-enabled-do-i-pay-for-not-running-servers) - [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed) - [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)
@@ -106,6 +107,10 @@ No. When you enable [Azure Defender for servers](defender-for-servers-introducti
An alternative is to enable Azure Defender for servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include just-in-time VM access, network detections, regulatory compliance, adaptive network hardening, adaptive application control, and more.
+### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Azure Defender?
+If you already have a license for Microsoft Defender for Endpoint, you won't have to pay for that part of your Azure Defender license.
+
+To confirm your discount, contact Security Center's support team and provide the relevant workspace ID, region, and license information.
### My subscription has Azure Defender for servers enabled, do I pay for not-running servers? No. When you enable [Azure Defender for servers](defender-for-servers-introduction.md) on a subscription, you'll be billed hourly for running servers only. You won't be charged for any server that's turned off, during the time it's off.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
@@ -116,13 +116,19 @@ To generate a benign Microsoft Defender for Endpoint test alert:
## FAQ for Security Center's integrated Microsoft Defender for Endpoint
-### What are the licensing requirements for Microsoft Defender for Endpoint?
+- [What are the licensing requirements for Microsoft Defender for Endpoint?](#what-are-the-licensing-requirements-for-microsoft-defender-for-endpoint)
+- [If I already have a license for Microsoft Defender for Endpoint can I get a discount for Azure Defender?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-azure-defender)
+- [How do I switch from a third-party EDR tool?](#how-do-i-switch-from-a-third-party-edr-tool)
+### What are the licensing requirements for Microsoft Defender for Endpoint?
Defender for Endpoint is included at no additional cost with **Azure Defender for servers**. Alternatively, it can be purchased separately for 50 machines or more.
+### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Azure Defender?
+If you already have a license for Microsoft Defender for Endpoint, you won't have to pay for that part of your Azure Defender license.
-### How do I switch from a third-party EDR tool?
+To confirm your discount, contact Security Center's support team and provide the relevant workspace ID, region, and license information.
+### How do I switch from a third-party EDR tool?
Full instructions for switching from a non-Microsoft endpoint solution are available in the Microsoft Defender for Endpoint documentation: [Migration overview](/windows/security/threat-protection/microsoft-defender-atp/switch-to-microsoft-defender-migration).
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-azure-sql-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-sql-logs.md new file mode 100644
@@ -0,0 +1,105 @@
+---
+title: Connect Azure SQL database diagnostics and auditing logs to Azure Sentinel
+description: Learn how to connect Azure SQL database diagnostics logs and security auditing logs to Azure Sentinel.
+author: yelevin
+manager: rkarlin
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.topic: how-to
+ms.date: 01/06/2021
+ms.author: yelevin
+---
+# Connect Azure SQL database diagnostics and auditing logs
+
+Azure SQL is a fully managed, Platform-as-a-Service (PaaS) database engine that handles most database management functions, such as upgrading, patching, backups, and monitoring, without user involvement.
+
+The Azure SQL database connector lets you stream your databases' auditing and diagnostic logs into Sentinel, allowing you to continuously monitor activity in all your instances.
+
+- Connecting diagnostics logs allows you to send database diagnostics logs of different data types to your Sentinel workspace.
+
+- Connecting auditing logs allows you to stream security audit logs from all your Azure SQL databases at the server level.
+
+Learn more about [monitoring Azure SQL Databases](../azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md).
+
+## Prerequisites
+
+- You must have read and write permissions on the Azure Sentinel workspace.
+
+- To connect auditing logs, you must have read and write permissions to Azure SQL Server audit settings.
+
+## Connect to Azure SQL database
+
+1. From the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. Select **Azure SQL Database** from the data connectors gallery, and then select **Open Connector Page** on the preview pane.
+
+1. In the **Configuration** section of the connector page, note the two categories of logs you can connect.
+
+### Connect diagnostics logs
+
+1. Under **Diagnostics logs**, expand **Enable diagnostics logs on each of your Azure SQL databases manually**.
+
+1. Select the **Open Azure SQL >** link to open the **Azure SQL** resources blade.
+
+1. **(Optional)** To find your database resource easily, select **Add filter** on the filters bar at the top.
+ 1. In the **Filter** drop-down list, select **Resource type**.
+ 1. In the **Value** drop-down list, deselect **Select all**, then select **SQL database**.
+ 1. Click **Apply**.
+
+1. Select the database resource whose diagnostics logs you want to send to Azure Sentinel.
+
+ > [!NOTE]
+ > For each database resource whose logs you want to collect, you must repeat this process, starting from this step.
+
+1. From the resource page of the database you selected, under **Monitoring** on the navigation menu, select **Diagnostic settings**.
+
+    1. Select the **+ Add diagnostic setting** link at the bottom of the table.
+
+ 1. In the **Diagnostic setting** screen, enter a name in the **Diagnostic setting name** field.
+
+    1. In the **Destination details** column, mark the **Send to Log Analytics workspace** check box. Two new fields will be displayed below it. Choose the relevant **Subscription** and **Log Analytics workspace** (where Azure Sentinel resides).
+
+    1. In the **Category details** column, mark the check boxes of the log and metric types you want to ingest. We recommend selecting all available types under both **log** and **metric**.
+
+ 1. Select **Save** at the top of the screen.
+
+- Alternatively, you can use the supplied **PowerShell script** to connect your diagnostics logs.
+ 1. Under **Diagnostics logs**, expand **Enable by PowerShell script**.
+
+    1. Copy the code block and paste it into PowerShell.
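+
+If you prefer the Azure CLI, an equivalent diagnostic setting can be created with a single `az monitor diagnostic-settings create` command instead of the supplied PowerShell script. The command below is only a sketch: the setting name and resource IDs are placeholders, and the log and metric categories shown are examples of categories you might choose to ingest.
+
+```azurecli
+# Sketch only - replace the placeholder resource IDs with your database and workspace IDs.
+az monitor diagnostic-settings create \
+    --name "sentinel-sql-diagnostics" \
+    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server>/databases/<database>" \
+    --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
+    --logs '[{"category": "SQLInsights", "enabled": true, "retentionPolicy": {"enabled": false, "days": 0}}]' \
+    --metrics '[{"category": "Basic", "enabled": true, "retentionPolicy": {"enabled": false, "days": 0}}]'
+```
+
+Whichever method you use, the diagnostic setting must send its data to the Log Analytics workspace where Azure Sentinel resides.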
+
+### Connect audit logs
+
+1. Under **Auditing logs (preview)**, expand **Enable auditing logs on all Azure SQL databases (at the server level)**.
+
+1. Select the **Open Azure SQL >** link to open the **SQL servers** resource blade.
+
+1. Select the SQL server whose auditing logs you want to send to Azure Sentinel.
+
+ > [!NOTE]
+ > For each server resource whose logs you want to collect, you must repeat this process, starting from this step.
+
+1. From the resource page of the server you selected, under **Security** on the navigation menu, select **Auditing**.
+
+    1. Move the **Enable Azure SQL Auditing** toggle to **ON**.
+
+ 1. Under **Audit log destination**, select **Log Analytics (Preview)**.
+
+    1. From the list of workspaces that appears, choose your workspace (where Azure Sentinel resides).
+
+ 1. Select **Save** at the top of the screen.
+
+- Alternatively, you can use the supplied **PowerShell script** to connect your auditing logs.
+ 1. Under **Auditing logs**, expand **Enable by PowerShell script**.
+
+    1. Copy the code block and paste it into PowerShell.
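+
+As with the diagnostics logs, the Azure CLI offers an alternative to the supplied PowerShell script. The command below is a sketch with placeholder names; it enables server-level auditing and points it at the Log Analytics workspace where Azure Sentinel resides.
+
+```azurecli
+# Sketch only - replace the placeholder names and IDs with your own.
+az sql server audit-policy update \
+    --resource-group <resource-group> \
+    --name <sql-server-name> \
+    --state Enabled \
+    --log-analytics-target-state Enabled \
+    --log-analytics-workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
+```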
+
+> [!NOTE]
+>
+> With this particular data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past two weeks. Once two weeks have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
+
+## Next steps
+In this document, you learned how to connect Azure SQL database diagnostics and auditing logs to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
\ No newline at end of file
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-dynamics-365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-dynamics-365.md new file mode 100644
@@ -0,0 +1,69 @@
+---
+title: Connect Dynamics 365 logs to Azure Sentinel | Microsoft Docs
+description: Learn to use the Dynamics 365 Common Data Service (CDS) activities connector to bring in information about ongoing admin, user, and support activities.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 12/13/2020
+ms.author: yelevin
+
+---
+# Connect Dynamics 365 activity logs to Azure Sentinel
+
+The [Dynamics 365](/office365/servicedescriptions/microsoft-dynamics-365-online-service-description) Common Data Service (CDS) activities connector provides insight into admin, user, and support activities, as well as Microsoft Social Engagement logging events. By connecting Dynamics 365 CRM logs into Azure Sentinel, you can view this data in workbooks, use it to create custom alerts, and leverage it to improve your investigation process. This new Azure Sentinel connector collects the Dynamics CDS data from the Office Management API. To learn more about the Dynamics CDS activities audited in Power Platform, visit [Enable and Use Activity Logging](/power-platform/admin/enable-use-comprehensive-auditing).
+
+> [!IMPORTANT]
+>
+> The Dynamics 365 Common Data Service (CDS) activities connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- You must have read and write permissions on your Azure Sentinel workspace.
+
+- You must have a [Microsoft Dynamics 365 production license](/office365/servicedescriptions/microsoft-dynamics-365-online-service-description). This connector is not available for sandbox environments.
+ - A Microsoft 365 Enterprise [E3 or E5](/power-platform/admin/enable-use-comprehensive-auditing#requirements) subscription is required to do Activity Logging.
+
+- To pull data from the Office Management API:
+ - You must be a global administrator on your tenant.
+
+ - [Office audit logging](/office365/servicedescriptions/office-365-platform-service-description/office-365-securitycompliance-center) must be enabled in [Office Security and Compliance Center](/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance).
+
+## Enable the Dynamics 365 activities data connector
+
+1. From the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **Dynamics 365 (Preview)**, and then select **Open connector page** on the preview pane.
+
+1. On the **Instructions** tab, under **Configuration**, click **Connect**.
+
+    Once the connector is activated, it can take around 30 minutes before you see data arriving in the graph.
+
+ According to the [Office audit log in the compliance center](/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance#requirements-to-search-the-audit-log), it can take up to 30 minutes or up to 24 hours after an event occurs for the corresponding audit log record to be returned in the results.
+
+1. The Microsoft Dynamics audit logs can be found in the `Dynamics365Activity` table. See the table's [schema reference](/azure/azure-monitor/reference/tables/dynamics365activity).
+
+## Querying the data
+
+1. From the Azure Sentinel navigation menu, select **Logs**.
+
+1. Run the following query to verify that logs arrive:
+
+ ```kusto
+ Dynamics365Activity
+ | take 10
+ ```
+
+## Next steps
+In this document, you learned how to connect Dynamics 365 activities data to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
+- Get started detecting threats with Azure Sentinel, using [built-in](tutorial-detect-threats-built-in.md) or [custom](tutorial-detect-threats-custom.md) rules.
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/how-to-managed-identity-service-fabric-app-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-identity-service-fabric-app-code.md
@@ -10,12 +10,65 @@ ms.date: 10/09/2019
Service Fabric applications can leverage managed identities to access other Azure resources which support Azure Active Directory-based authentication. An application can obtain an [access token](../active-directory/develop/developer-glossary.md#access-token) representing its identity, which may be system-assigned or user-assigned, and use it as a 'bearer' token to authenticate itself to another service - also known as a [protected resource server](../active-directory/develop/developer-glossary.md#resource-server). The token represents the identity assigned to the Service Fabric application, and will only be issued to Azure resources (including SF applications) which share that identity. Refer to the [managed identity overview](../active-directory/managed-identities-azure-resources/overview.md) documentation for a detailed description of managed identities, as well as the distinction between system-assigned and user-assigned identities. We will refer to a managed-identity-enabled Service Fabric application as the [client application](../active-directory/develop/developer-glossary.md#client-application) throughout this article.
+See a companion sample application that demonstrates using system-assigned and user-assigned [Service Fabric application managed identities](https://github.com/Azure-Samples/service-fabric-managed-identity) with Reliable Services and containers.
+ > [!IMPORTANT] > A managed identity represents the association between an Azure resource and a service principal in the corresponding Azure AD tenant associated with the subscription containing the resource. As such, in the context of Service Fabric, managed identities are only supported for applications deployed as Azure resources. > [!IMPORTANT] > Prior to using the managed identity of a Service Fabric application, the client application must be granted access to the protected resource. Please refer to the list of [Azure services which support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources) to check for support, and then to the respective service's documentation for specific steps to grant an identity access to resources of interest.
+
+
+## Leverage a managed identity using Azure.Identity
+
+The Azure Identity SDK now supports Service Fabric. Using Azure.Identity makes it easier to write code that uses Service Fabric application managed identities, because it handles token fetching, token caching, and server authentication for you. When you access most Azure resources, the concept of a token is hidden from your code.
+
+Service Fabric support is available in the following versions for these languages:
+- [C# in version 1.3.0](https://www.nuget.org/packages/Azure.Identity). See a [C# sample](https://github.com/Azure-Samples/service-fabric-managed-identity).
+- [Python in version 1.5.0](https://pypi.org/project/azure-identity/). See a [Python sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/identity/azure-identity/tests/managed-identity-live/service-fabric/service_fabric.md).
+- [Java in version 1.2.0](https://docs.microsoft.com/java/api/overview/azure/identity-readme?view=azure-java-stable).
+
+The following C# sample initializes credentials and uses them to fetch a secret from Azure Key Vault:
+
+```csharp
+using System;
+using System.Fabric;
+using System.Threading;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+using Microsoft.ServiceFabric.Services.Runtime;
+
+namespace MyMIService
+{
+    internal sealed class MyMIService : StatelessService
+    {
+        public MyMIService(StatelessServiceContext context)
+            : base(context)
+        { }
+
+        protected override async Task RunAsync(CancellationToken cancellationToken)
+        {
+            try
+            {
+                // Load the Service Fabric application managed identity assigned to the service
+                ManagedIdentityCredential creds = new ManagedIdentityCredential();
+
+                // Create a client to Key Vault using that identity
+                SecretClient client = new SecretClient(new Uri("https://mykv.vault.azure.net/"), creds);
+
+                // Fetch a secret
+                KeyVaultSecret secret = (await client.GetSecretAsync("mysecret", cancellationToken: cancellationToken)).Value;
+            }
+            catch (CredentialUnavailableException)
+            {
+                // Handle errors with loading the managed identity
+            }
+            catch (RequestFailedException)
+            {
+                // Handle errors with fetching the secret
+            }
+            catch (Exception)
+            {
+                // Handle generic errors
+            }
+        }
+    }
+}
+```
## Acquiring an access token using REST API In clusters enabled for managed identity, the Service Fabric runtime exposes a localhost endpoint which applications can use to obtain access tokens. The endpoint is available on every node of the cluster, and is accessible to all entities on that node. Authorized callers may obtain access tokens by calling this endpoint and presenting an authentication code; the code is generated by the Service Fabric runtime for each distinct service code package activation, and is bound to the lifetime of the process hosting that service code package.
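As a minimal sketch of this flow, the example below assumes that the endpoint address and authentication code are surfaced to the service through the `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables, that the request is authenticated with a `Secret` header, and that the `2019-07-01-preview` API version applies; treat those names as assumptions and confirm them against your cluster's documented endpoint details. A real client should also validate the endpoint's TLS certificate against the thumbprint the runtime provides, rather than skipping verification as shown here.

```bash
# Sketch only: the environment variable names, header name, and API version are assumptions.
curl --insecure \
     --header "Secret: $IDENTITY_HEADER" \
     "$IDENTITY_ENDPOINT?api-version=2019-07-01-preview&resource=https://vault.azure.net/"
```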
@@ -374,3 +427,4 @@ See [Azure services that support Azure AD authentication](../active-directory/ma
* [Deploy an Azure Service Fabric application with a system-assigned managed identity](./how-to-deploy-service-fabric-application-system-assigned-managed-identity.md) * [Deploy an Azure Service Fabric application with a user-assigned managed identity](./how-to-deploy-service-fabric-application-user-assigned-managed-identity.md) * [Grant an Azure Service Fabric application access to other Azure resources](./how-to-grant-access-other-resources.md)
+* [Explore a sample application using Service Fabric Managed Identity](https://github.com/Azure-Samples/service-fabric-managed-identity)
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/failover-failback-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/failover-failback-overview.md
@@ -40,7 +40,7 @@ To connect to the Azure VMs created after failover using RDP/SSH, there are a nu
**Failover** | **Location** | **Actions** --- | --- | ---
-**Azure VM (Windows(** | On the on-premises machine before failover | **Access over the internet**: Enable RDP. Make sure that TCP and UDP rules are added for **Public**, and that RDP is allowed for all profiles in **Windows Firewall** > **Allowed Apps**.<br/><br/> **Access over site-to-site VPN**: Enable RDP on the machine. Check that RDP is allowed in the **Windows Firewall** -> **Allowed apps and features**, for **Domain and Private** networks.<br/><br/> Make sure the operating system SAN policy is set to **OnlineAll**. [Learn more](https://support.microsoft.com/kb/3031135).<br/><br/> Make sure there are no Windows updates pending on the VM when you trigger a failover. Windows Update might start when you fail over, and you won't be able to log onto the VM until updates are done.
+**Azure VM running Windows** | On the on-premises machine before failover | **Access over the internet**: Enable RDP. Make sure that TCP and UDP rules are added for **Public**, and that RDP is allowed for all profiles in **Windows Firewall** > **Allowed Apps**.<br/><br/> **Access over site-to-site VPN**: Enable RDP on the machine. Check that RDP is allowed in the **Windows Firewall** -> **Allowed apps and features**, for **Domain and Private** networks.<br/><br/> Make sure the operating system SAN policy is set to **OnlineAll**. [Learn more](https://support.microsoft.com/kb/3031135).<br/><br/> Make sure there are no Windows updates pending on the VM when you trigger a failover. Windows Update might start when you fail over, and you won't be able to log onto the VM until updates are done.
**Azure VM running Windows** | On the Azure VM after failover | [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> The network security group rules on the failed over VM (and the Azure subnet to which it is connected) must allow incoming connections to the RDP port.<br/><br/> Check **Boot diagnostics** to verify a screenshot of the VM. If you can't connect, check that the VM is running, and review [troubleshooting tips](https://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx). **Azure VM running Linux** | On the on-premises machine before failover | Ensure that the Secure Shell service on the VM is set to start automatically on system boot.<br/><br/> Check that firewall rules allow an SSH connection to it. **Azure VM running Linux** | On the Azure VM after failover | The network security group rules on the failed over VM (and the Azure subnet to which it is connected) need to allow incoming connections to the SSH port.<br/><br/> [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> Check **Boot diagnostics** for a screenshot of the VM.<br/><br/>
@@ -158,4 +158,4 @@ After failing back to the on-premises site, you enable **Reverse Replicate** to
- [Create](site-recovery-create-recovery-plans.md) a recovery plan. - Fail over [VMs in a recovery plan](site-recovery-failover.md). - [Prepare for](vmware-azure-failback.md) VMware reprotection and failback.-- Fail back [Hyper-V VMs](hyper-v-azure-failback.md).\ No newline at end of file
+- Fail back [Hyper-V VMs](hyper-v-azure-failback.md).
spring-cloud https://docs.microsoft.com/en-us/azure/spring-cloud/spring-cloud-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/spring-cloud-faq.md
@@ -32,7 +32,7 @@ Security and privacy are among the top priorities for Azure and Azure Spring Clo
### In which regions is Azure Spring Cloud available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central and UAE North.
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central and East Asia. [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
### Is any customer data stored outside of the specified region?
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-directory-file-acl-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-directory-file-acl-java.md
@@ -3,7 +3,7 @@ title: Azure Data Lake Storage Gen2 Java SDK for files & ACLs
description: Use Azure Storage libraries for Java to manage directories and file and directory access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled. author: normesta ms.service: storage
-ms.date: 09/10/2020
+ms.date: 01/11/2021
ms.custom: devx-track-java ms.author: normesta ms.topic: how-to
@@ -32,7 +32,6 @@ If you plan to authenticate your client application by using Azure Active Direct
Next, add these imports statements to your code file. ```java
-import com.azure.core.credential.TokenCredential;
import com.azure.storage.common.StorageSharedKeyCredential; import com.azure.storage.file.datalake.DataLakeDirectoryClient; import com.azure.storage.file.datalake.DataLakeFileClient;
@@ -40,11 +39,16 @@ import com.azure.storage.file.datalake.DataLakeFileSystemClient;
import com.azure.storage.file.datalake.DataLakeServiceClient; import com.azure.storage.file.datalake.DataLakeServiceClientBuilder; import com.azure.storage.file.datalake.models.ListPathsOptions;
+import com.azure.storage.file.datalake.models.PathItem;
+import com.azure.storage.file.datalake.models.AccessControlChangeCounters;
+import com.azure.storage.file.datalake.models.AccessControlChangeResult;
+import com.azure.storage.file.datalake.models.AccessControlType;
import com.azure.storage.file.datalake.models.PathAccessControl; import com.azure.storage.file.datalake.models.PathAccessControlEntry;
-import com.azure.storage.file.datalake.models.PathItem;
import com.azure.storage.file.datalake.models.PathPermissions;
+import com.azure.storage.file.datalake.models.PathRemoveAccessControlEntry;
import com.azure.storage.file.datalake.models.RolePermissions;
+import com.azure.storage.file.datalake.options.PathSetAccessControlRecursiveOptions;
``` ## Connect to the account
@@ -57,22 +61,7 @@ This is the easiest way to connect to an account.
This example creates a **DataLakeServiceClient** instance by using an account key.
-```java
-
-static public DataLakeServiceClient GetDataLakeServiceClient
-(String accountName, String accountKey){
-
- StorageSharedKeyCredential sharedKeyCredential =
- new StorageSharedKeyCredential(accountName, accountKey);
-
- DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();
-
- builder.credential(sharedKeyCredential);
- builder.endpoint("https://" + accountName + ".dfs.core.windows.net");
-
- return builder.buildClient();
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/Authorize_DataLake.java" id="Snippet_AuthorizeWithKey":::
### Connect by using Azure Active Directory (Azure AD)
@@ -80,22 +69,7 @@ You can use the [Azure identity client library for Java](https://github.com/Azur
This example creates a **DataLakeServiceClient** instance by using a client ID, a client secret, and a tenant ID. To get these values, see [Acquire a token from Azure AD for authorizing requests from a client application](../common/storage-auth-aad-app.md).
-```java
-static public DataLakeServiceClient GetDataLakeServiceClient
- (String accountName, String clientId, String ClientSecret, String tenantID){
-
- String endpoint = "https://" + accountName + ".dfs.core.windows.net";
-
- ClientSecretCredential clientSecretCredential = new ClientSecretCredentialBuilder()
- .clientId(clientId)
- .clientSecret(ClientSecret)
- .tenantId(tenantID)
- .build();
-
- DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();
- return builder.credential(clientSecretCredential).endpoint(endpoint).buildClient();
- }
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/Authorize_DataLake.java" id="Snippet_AuthorizeWithAzureAD":::
> [!NOTE] > For more examples, see the [Azure identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity) documentation.
@@ -107,13 +81,7 @@ A container acts as a file system for your files. You can create one by calling
This example creates a container named `my-file-system`.
-```java
-static public DataLakeFileSystemClient CreateFileSystem
-(DataLakeServiceClient serviceClient){
-
- return serviceClient.createFileSystem("my-file-system");
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_CreateFileSystem":::
## Create a directory
@@ -121,19 +89,7 @@ Create a directory reference by calling the **DataLakeFileSystemClient.createDir
This example adds a directory named `my-directory` to a container, and then adds a sub-directory named `my-subdirectory`.
-```java
-static public DataLakeDirectoryClient CreateDirectory
-(DataLakeServiceClient serviceClient, String fileSystemName){
-
- DataLakeFileSystemClient fileSystemClient =
- serviceClient.getFileSystemClient(fileSystemName);
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.createDirectory("my-directory");
-
- return directoryClient.createSubDirectory("my-subdirectory");
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_CreateDirectory":::
## Rename or move a directory
@@ -141,31 +97,11 @@ Rename or move a directory by calling the **DataLakeDirectoryClient.rename** met
This example renames a sub-directory to the name `my-subdirectory-renamed`.
-```java
-static public DataLakeDirectoryClient
- RenameDirectory(DataLakeFileSystemClient fileSystemClient){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory/my-subdirectory");
-
- return directoryClient.rename(
- fileSystemClient.getFileSystemName(),"my-subdirectory-renamed");
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_RenameDirectory":::
This example moves a directory named `my-subdirectory-renamed` to a sub-directory of a directory named `my-directory-2`.
-```java
-static public DataLakeDirectoryClient MoveDirectory
-(DataLakeFileSystemClient fileSystemClient){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory/my-subdirectory-renamed");
-
- return directoryClient.rename(
- fileSystemClient.getFileSystemName(),"my-directory-2/my-subdirectory-renamed");
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_MoveDirectory":::
## Delete a directory
@@ -173,42 +109,15 @@ Delete a directory by calling the **DataLakeDirectoryClient.deleteWithResponse**
This example deletes a directory named `my-directory`.
-```java
-static public void DeleteDirectory(DataLakeFileSystemClient fileSystemClient){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory");
-
- directoryClient.deleteWithResponse(true, null, null, null);
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_DeleteDirectory":::
## Upload a file to a directory First, create a file reference in the target directory by creating an instance of the **DataLakeFileClient** class. Upload a file by calling the **DataLakeFileClient.append** method. Make sure to complete the upload by calling the **DataLakeFileClient.flush** method.
-This example uploads a text file to a directory named `my-directory`.`
-
-```java
-static public void UploadFile(DataLakeFileSystemClient fileSystemClient)
- throws FileNotFoundException{
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory");
-
- DataLakeFileClient fileClient = directoryClient.createFile("uploaded-file.txt");
-
- File file = new File("C:\\mytestfile.txt");
-
- InputStream targetStream = new BufferedInputStream(new FileInputStream(file));
-
- long fileSize = file.length();
+This example uploads a text file to a directory named `my-directory`.
- fileClient.append(targetStream, 0, fileSize);
-
- fileClient.flush(fileSize);
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_UploadFile":::
> [!TIP] > If your file size is large, your code will have to make multiple calls to the **DataLakeFileClient.append** method. Consider using the **DataLakeFileClient.uploadFromFile** method instead. That way, you can upload the entire file in a single call.
@@ -219,79 +128,19 @@ static public void UploadFile(DataLakeFileSystemClient fileSystemClient)
Use the **DataLakeFileClient.uploadFromFile** method to upload large files without having to make multiple calls to the **DataLakeFileClient.append** method.
-```java
-static public void UploadFileBulk(DataLakeFileSystemClient fileSystemClient)
- throws FileNotFoundException{
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory");
-
- DataLakeFileClient fileClient = directoryClient.getFileClient("uploaded-file.txt");
-
- fileClient.uploadFromFile("C:\\mytestfile.txt");
-
- }
-
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_UploadFileBulk":::
## Download from a directory First, create a **DataLakeFileClient** instance that represents the file that you want to download. Use the **DataLakeFileClient.read** method to read the file. Use any Java file processing API to save bytes from the stream to a file.
-```java
-static public void DownloadFile(DataLakeFileSystemClient fileSystemClient)
- throws FileNotFoundException, java.io.IOException{
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory");
-
- DataLakeFileClient fileClient =
- directoryClient.getFileClient("uploaded-file.txt");
-
- File file = new File("C:\\downloadedFile.txt");
-
- OutputStream targetStream = new FileOutputStream(file);
-
- fileClient.read(targetStream);
-
- targetStream.close();
-
-}
-
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_DownloadFile":::
## List directory contents This example prints the names of each file that is located in a directory named `my-directory`.
-```java
-static public void ListFilesInDirectory(DataLakeFileSystemClient fileSystemClient){
-
- ListPathsOptions options = new ListPathsOptions();
- options.setPath("my-directory");
-
- PagedIterable<PathItem> pagedIterable =
- fileSystemClient.listPaths(options, null);
-
- java.util.Iterator<PathItem> iterator = pagedIterable.iterator();
-
- PathItem item = iterator.next();
-
- while (item != null)
- {
- System.out.println(item.getName());
--
- if (!iterator.hasNext())
- {
- break;
- }
-
- item = iterator.next();
- }
-
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/CRUD_DataLake.java" id="Snippet_ListFilesInDirectory":::
## Manage access control lists (ACLs)
@@ -307,43 +156,7 @@ This example gets and then sets the ACL of a directory named `my-directory`. Thi
> [!NOTE] > If your application authorizes access by using Azure Active Directory (Azure AD), then make sure that the security principal that your application uses to authorize access has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md).
-```java
-static public void ManageDirectoryACLs(DataLakeFileSystemClient fileSystemClient){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory");
-
- PathAccessControl directoryAccessControl =
- directoryClient.getAccessControl();
-
- List<PathAccessControlEntry> pathPermissions = directoryAccessControl.getAccessControlList();
-
- System.out.println(PathAccessControlEntry.serializeList(pathPermissions));
-
- RolePermissions groupPermission = new RolePermissions();
- groupPermission.setExecutePermission(true).setReadPermission(true);
-
- RolePermissions ownerPermission = new RolePermissions();
- ownerPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(true);
-
- RolePermissions otherPermission = new RolePermissions();
- otherPermission.setReadPermission(true);
-
- PathPermissions permissions = new PathPermissions();
-
- permissions.setGroup(groupPermission);
- permissions.setOwner(ownerPermission);
- permissions.setOther(otherPermission);
-
- directoryClient.setPermissions(permissions, null, null);
-
- pathPermissions = directoryClient.getAccessControl().getAccessControlList();
-
- System.out.println(PathAccessControlEntry.serializeList(pathPermissions));
-
-}
-
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/ACL_DataLake.java" id="Snippet_ManageDirectoryACLs":::
You can also get and set the ACL of the root directory of a container. To get the root directory, pass an empty string (`""`) into the **DataLakeFileSystemClient.getDirectoryClient** method.
@@ -354,45 +167,7 @@ This example gets and then sets the ACL of a file named `upload-file.txt`. This
> [!NOTE] > If your application authorizes access by using Azure Active Directory (Azure AD), then make sure that the security principal that your application uses to authorize access has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md).
-```java
-static public void ManageFileACLs(DataLakeFileSystemClient fileSystemClient){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-directory");
-
- DataLakeFileClient fileClient =
- directoryClient.getFileClient("uploaded-file.txt");
-
- PathAccessControl fileAccessControl =
- fileClient.getAccessControl();
-
- List<PathAccessControlEntry> pathPermissions = fileAccessControl.getAccessControlList();
-
- System.out.println(PathAccessControlEntry.serializeList(pathPermissions));
-
- RolePermissions groupPermission = new RolePermissions();
- groupPermission.setExecutePermission(true).setReadPermission(true);
-
- RolePermissions ownerPermission = new RolePermissions();
- ownerPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(true);
-
- RolePermissions otherPermission = new RolePermissions();
- otherPermission.setReadPermission(true);
-
- PathPermissions permissions = new PathPermissions();
-
- permissions.setGroup(groupPermission);
- permissions.setOwner(ownerPermission);
- permissions.setOther(otherPermission);
-
- fileClient.setPermissions(permissions, null, null);
-
- pathPermissions = fileClient.getAccessControl().getAccessControlList();
-
- System.out.println(PathAccessControlEntry.serializeList(pathPermissions));
-
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/ACL_DataLake.java" id="Snippet_ManageFileACLs":::
### Set an ACL recursively
storage https://docs.microsoft.com/en-us/azure/storage/blobs/recursive-access-control-lists https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/recursive-access-control-lists.md
@@ -5,7 +5,7 @@ author: normesta
ms.subservice: data-lake-storage-gen2 ms.service: storage ms.topic: how-to
-ms.date: 11/17/2020
+ms.date: 01/11/2021
ms.author: normesta ms.reviewer: prishet ms.custom: devx-track-csharp, devx-track-azurecli
@@ -253,22 +253,7 @@ This is the easiest way to connect to an account.
This example creates a **DataLakeServiceClient** instance by using an account key.
-```java
-
-static public DataLakeServiceClient GetDataLakeServiceClient
-(String accountName, String accountKey){
-
- StorageSharedKeyCredential sharedKeyCredential =
- new StorageSharedKeyCredential(accountName, accountKey);
-
- DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();
-
- builder.credential(sharedKeyCredential);
- builder.endpoint("https://" + accountName + ".dfs.core.windows.net");
-
- return builder.buildClient();
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/Authorize_DataLake.java" id="Snippet_AuthorizeWithKey":::
#### Connect by using Azure Active Directory (Azure AD)
@@ -276,22 +261,7 @@ You can use the [Azure identity client library for Java](https://github.com/Azur
This example creates a **DataLakeServiceClient** instance by using a client ID, a client secret, and a tenant ID. To get these values, see [Acquire a token from Azure AD for authorizing requests from a client application](../common/storage-auth-aad-app.md).
-```java
-static public DataLakeServiceClient GetDataLakeServiceClient
- (String accountName, String clientId, String ClientSecret, String tenantID){
-
- String endpoint = "https://" + accountName + ".dfs.core.windows.net";
-
- ClientSecretCredential clientSecretCredential = new ClientSecretCredentialBuilder()
- .clientId(clientId)
- .clientSecret(ClientSecret)
- .tenantId(tenantID)
- .build();
-
- DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();
- return builder.credential(clientSecretCredential).endpoint(endpoint).buildClient();
- }
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/Authorize_DataLake.java" id="Snippet_AuthorizeWithAzureAD":::
> [!NOTE] > For more examples, see the [Azure identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity) documentation.
@@ -418,68 +388,7 @@ If you want to set a **default** ACL entry, then you can call the **setDefaultSc
This example sets the ACL of a directory named `my-parent-directory`. This method accepts a boolean parameter named `isDefaultScope` that specifies whether to set the default ACL. That parameter is used in each call to the **setDefaultScope** method of the [PathAccessControlEntry](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-storage-file-datalake/12.3.0-beta.1/index.html). The entries of the ACL give the owning user read, write, and execute permissions, give the owning group only read and execute permissions, and give all others no access. The last ACL entry in this example gives a specific user with the object ID "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" read and execute permissions.
-```java
-static public void SetACLRecursively(DataLakeFileSystemClient fileSystemClient, Boolean isDefaultScope){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-parent-directory");
-
- List<PathAccessControlEntry> pathAccessControlEntries =
- new ArrayList<PathAccessControlEntry>();
-
- // Create owner entry.
- PathAccessControlEntry ownerEntry = new PathAccessControlEntry();
-
- RolePermissions ownerPermission = new RolePermissions();
- ownerPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(true);
-
- ownerEntry.setDefaultScope(isDefaultScope);
- ownerEntry.setAccessControlType(AccessControlType.USER);
- ownerEntry.setPermissions(ownerPermission);
-
- pathAccessControlEntries.add(ownerEntry);
-
- // Create group entry.
- PathAccessControlEntry groupEntry = new PathAccessControlEntry();
-
- RolePermissions groupPermission = new RolePermissions();
- groupPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(false);
-
- groupEntry.setDefaultScope(isDefaultScope);
- groupEntry.setAccessControlType(AccessControlType.GROUP);
- groupEntry.setPermissions(groupPermission);
-
- pathAccessControlEntries.add(groupEntry);
-
- // Create other entry.
- PathAccessControlEntry otherEntry = new PathAccessControlEntry();
-
- RolePermissions otherPermission = new RolePermissions();
- otherPermission.setExecutePermission(false).setReadPermission(false).setWritePermission(false);
-
- otherEntry.setDefaultScope(isDefaultScope);
- otherEntry.setAccessControlType(AccessControlType.OTHER);
- otherEntry.setPermissions(otherPermission);
-
- pathAccessControlEntries.add(otherEntry);
-
- // Create named user entry.
- PathAccessControlEntry userEntry = new PathAccessControlEntry();
-
- RolePermissions userPermission = new RolePermissions();
- userPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(false);
-
- userEntry.setDefaultScope(isDefaultScope);
- userEntry.setAccessControlType(AccessControlType.USER);
- userEntry.setEntityId("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");
- userEntry.setPermissions(userPermission);
-
- pathAccessControlEntries.add(userEntry);
-
- directoryClient.setAccessControlRecursive(pathAccessControlEntries);
-
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/ACL_DataLake.java" id="Snippet_SetACLRecursively":::
### [Python](#tab/python)
@@ -579,31 +488,7 @@ If you want to update a **default** ACL entry, then you can the **setDefaultScop
This example updates an ACL entry with write permission. This method accepts a boolean parameter named `isDefaultScope` that specifies whether to update the default ACL. That parameter is used in the call to the **setDefaultScope** method of the [PathAccessControlEntry](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-storage-file-datalake/12.3.0-beta.1/index.html).
-```java
-static public void UpdateACLRecursively(DataLakeFileSystemClient fileSystemClient, Boolean isDefaultScope){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-parent-directory");
-
- List<PathAccessControlEntry> pathAccessControlEntries =
- new ArrayList<PathAccessControlEntry>();
-
- // Create named user entry.
- PathAccessControlEntry userEntry = new PathAccessControlEntry();
-
- RolePermissions userPermission = new RolePermissions();
- userPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(true);
-
- userEntry.setDefaultScope(isDefaultScope);
- userEntry.setAccessControlType(AccessControlType.USER);
- userEntry.setEntityId("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");
- userEntry.setPermissions(userPermission);
-
- pathAccessControlEntries.add(userEntry);
-
- directoryClient.updateAccessControlRecursive(pathAccessControlEntries);
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/ACL_DataLake.java" id="Snippet_UpdateACLRecursively":::
### [Python](#tab/python)
@@ -699,32 +584,7 @@ If you want to remove a **default** ACL entry, then you can the **setDefaultScop
This example removes an ACL entry from the ACL of the directory named `my-parent-directory`. This method accepts a boolean parameter named `isDefaultScope` that specifies whether to remove the entry from the default ACL. That parameter is used in the call to the **setDefaultScope** method of the [PathAccessControlEntry](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-storage-file-datalake/12.3.0-beta.1/index.html). -
-```java
-static public void RemoveACLRecursively(DataLakeFileSystemClient fileSystemClient, Boolean isDefaultScope){
-
- DataLakeDirectoryClient directoryClient =
- fileSystemClient.getDirectoryClient("my-parent-directory");
-
- List<PathRemoveAccessControlEntry> pathRemoveAccessControlEntries =
- new ArrayList<PathRemoveAccessControlEntry>();
-
- // Create named user entry.
- PathRemoveAccessControlEntry userEntry = new PathRemoveAccessControlEntry();
-
- RolePermissions userPermission = new RolePermissions();
- userPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(true);
-
- userEntry.setDefaultScope(isDefaultScope);
- userEntry.setAccessControlType(AccessControlType.USER);
- userEntry.setEntityId("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");
-
- pathRemoveAccessControlEntries.add(userEntry);
-
- directoryClient.removeAccessControlRecursive(pathRemoveAccessControlEntries);
-
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/ACL_DataLake.java" id="Snippet_RemoveACLRecursively":::
### [Python](#tab/python)
@@ -799,36 +659,7 @@ To see an example that sets ACLs recursively in batches by specifying a batch si
This example returns a continuation token in the event of a failure. The application can call this example method again after the error has been addressed, and pass in the continuation token. If this example method is called for the first time, the application can pass in a value of `null` for the continuation token parameter.
-```java
-static public String ResumeSetACLRecursively(DataLakeFileSystemClient fileSystemClient,
-DataLakeDirectoryClient directoryClient,
-List<PathAccessControlEntry> accessControlList,
-String continuationToken){
-
- try{
- PathSetAccessControlRecursiveOptions options = new PathSetAccessControlRecursiveOptions(accessControlList);
-
- options.setContinuationToken(continuationToken);
-
- Response<AccessControlChangeResult> accessControlChangeResult =
- directoryClient.setAccessControlRecursiveWithResponse(options, null, null);
-
- if (accessControlChangeResult.getValue().getCounters().getFailedChangesCount() > 0)
- {
- continuationToken =
- accessControlChangeResult.getValue().getContinuationToken();
- }
-
- return continuationToken;
-
- }
- catch(Exception ex){
-
- System.out.println(ex.toString());
- return continuationToken;
- }
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/ACL_DataLake.java" id="Snippet_ResumeSetACLRecursively":::
### [Python](#tab/python)
@@ -901,31 +732,7 @@ To ensure that the process completes uninterrupted, call the **setContinueOnFail
This example sets ACL entries recursively. If this code encounters a permission error, it records that failure and continues execution. This example prints the number of failures to the console.
-```java
-static public void ContinueOnFailure(DataLakeFileSystemClient fileSystemClient,
-DataLakeDirectoryClient directoryClient,
-List<PathAccessControlEntry> accessControlList){
-
- PathSetAccessControlRecursiveOptions options =
- new PathSetAccessControlRecursiveOptions(accessControlList);
-
- options.setContinueOnFailure(true);
-
- Response<AccessControlChangeResult> accessControlChangeResult =
- directoryClient.setAccessControlRecursiveWithResponse(options, null, null);
-
- AccessControlChangeCounters counters = accessControlChangeResult.getValue().getCounters();
-
- System.out.println("Number of directories changes: " +
- counters.getChangedDirectoriesCount());
-
- System.out.println("Number of files changed: " +
- counters.getChangedDirectoriesCount());
-
- System.out.println("Number of failures: " +
- counters.getChangedDirectoriesCount());
-}
-```
+:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/ACL_DataLake.java" id="Snippet_ContinueOnFailure":::
### [Python](#tab/python)
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-create.md
@@ -7,7 +7,7 @@ author: tamram
ms.service: storage ms.topic: how-to
-ms.date: 12/11/2020
+ms.date: 01/11/2021
ms.author: tamram ms.subservice: common ms.custom: devx-track-azurecli, devx-track-azurepowershell
@@ -211,7 +211,7 @@ read resourceGroupName &&
echo "Enter the location (i.e. centralus):" && read location && az group create --name $resourceGroupName --location "$location" &&
-az deployment group create --resource-group $resourceGroupName --template-file "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json"
+az deployment group create --resource-group $resourceGroupName --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json"
``` > [!NOTE]
@@ -282,4 +282,4 @@ Alternately, you can delete the resource group, which deletes the storage accoun
- [Storage account overview](storage-account-overview.md) - [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md) - [Move an Azure Storage account to another region](storage-account-move.md)-- [Recover a deleted storage account](storage-account-recover.md)\ No newline at end of file
+- [Recover a deleted storage account](storage-account-recover.md)
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-network-security.md
@@ -16,7 +16,7 @@ ms.subservice: common
Azure Storage provides a layered security model. This model enables you to secure and control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks used. When network rules are configured, only applications requesting data over the specified set of networks can access a storage account. You can limit access to your storage account to requests originating from specified IP addresses, IP ranges or from a list of subnets in an Azure Virtual Network (VNet).
-Storage accounts have a public endpoint that is accessible through the internet. You can also create [Private Endpoints for your storage account](storage-private-endpoints.md), which assigns a private IP address from your VNet to the storage account, and secures all traffic between your VNet and the storage account over a private link. The Azure storage firewall provides access control access for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when using private endpoints. Your storage firewall configuration also enables select trusted Azure platform services to access the storage account securely.
+Storage accounts have a public endpoint that is accessible through the internet. You can also create [Private Endpoints for your storage account](storage-private-endpoints.md), which assigns a private IP address from your VNet to the storage account, and secures all traffic between your VNet and the storage account over a private link. The Azure storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when using private endpoints. Your storage firewall configuration also enables select trusted Azure platform services to access the storage account securely.
An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with an SAS token.
@@ -29,13 +29,13 @@ An application that accesses a storage account when network rules are in effect
## Scenarios
-To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific VNets. You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration enables you to build a secure network boundary for your applications.
+To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific VNets. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration enables you to build a secure network boundary for your applications.
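For example, with the Azure CLI this sequence might look like the following sketch. The account, network, and address values are placeholders, and a virtual network rule also requires the **Microsoft.Storage** service endpoint to be enabled on the subnet you allow.

```azurecli
# Sketch only - placeholder names; deny all public-endpoint traffic by default.
az storage account update \
    --resource-group <resource-group> \
    --name <storage-account> \
    --default-action Deny

# Allow a specific virtual network subnet.
az storage account network-rule add \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --vnet-name <vnet-name> \
    --subnet <subnet-name>

# Allow a public IP address range, such as an on-premises NAT range.
az storage account network-rule add \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --ip-address 203.0.113.0/24
```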
You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. Storage firewall rules can be applied to existing storage accounts, or when creating new storage accounts. Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules to allow traffic for private endpoints of a storage account. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
-Network rules are enforced on all network protocols to Azure storage, including REST and SMB. To access data using tools such as the Azure portal, Storage Explorer, and AZCopy, explicit network rules must be configured.
+Network rules are enforced on all network protocols for Azure storage, including REST and SMB. To access data using tools such as the Azure portal, Storage Explorer, and AZCopy, explicit network rules must be configured.
Once network rules are applied, they're enforced for all requests. SAS tokens that grant access to a specific IP address serve to limit the access of the token holder, but don't grant new access beyond configured network rules.
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
@@ -151,6 +151,7 @@ The following table describes key parameters for each redundancy option:
| Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) | | Availability for read requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) for GRS<br /><br />At least 99.99% (99.9% for cool access tier) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS<br /><br />At least 99.99% (99.9% for cool access tier) for RA-GZRS | | Availability for write requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) |
+| Number of copies of data maintained on separate nodes | 3 | 3 | 6 | 6 |
### Durability and availability by outage scenario
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-v10.md
@@ -100,7 +100,7 @@ To find example commands, see any of these articles.
| Service | Article | |--------|-----------|
-|Azure Blob storage |[Upload files to Azure Blob storage](storage-use-azcopy-blobs-upload.md)<br><br>[Download blobs from Azure Blob storage](storage-use-azcopy-blobs-download.md)<br><br>[Copy blobs between Azure storage accounts](storage-use-azcopy-blobs-download.md)<br><br>[Synchronize with Azure Blob storage](storage-use-azcopy-blobs-download.md)|
+|Azure Blob storage |[Upload files to Azure Blob storage](storage-use-azcopy-blobs-upload.md)<br><br>[Download blobs from Azure Blob storage](storage-use-azcopy-blobs-download.md)<br><br>[Copy blobs between Azure storage accounts](storage-use-azcopy-blobs-copy.md)<br><br>[Synchronize with Azure Blob storage](storage-use-azcopy-blobs-synchronize.md)|
|Azure Files |[Transfer data with AzCopy and file storage](storage-use-azcopy-files.md)| |Amazon S3|[Transfer data with AzCopy and Amazon S3 buckets](storage-use-azcopy-s3.md)| |Azure Stack storage|[Transfer data with AzCopy and Azure Stack storage](/azure-stack/user/azure-stack-storage-transfer#azcopy)|
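For orientation, a rough sketch of the upload, copy, and sync operations those articles cover; the account names, container names, and SAS tokens are placeholders:

```azurecli
# Upload a local directory to a blob container (SAS token elided).
azcopy copy './mydir' 'https://mystorageaccount.blob.core.windows.net/mycontainer?<SAS>' --recursive

# Copy blobs between two storage accounts.
azcopy copy 'https://srcaccount.blob.core.windows.net/srccontainer?<SAS>' \
            'https://dstaccount.blob.core.windows.net/dstcontainer?<SAS>' --recursive

# Synchronize a local directory with a container (one-way, local to cloud).
azcopy sync './mydir' 'https://mystorageaccount.blob.core.windows.net/mycontainer?<SAS>' --recursive
```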
@@ -164,4 +164,4 @@ If you need to use the previous version of AzCopy, see either of the following l
## Next steps
-If you have questions, issues, or general feedback, submit them [on GitHub](https://github.com/Azure/azure-storage-azcopy) page.
\ No newline at end of file
+If you have questions, issues, or general feedback, submit them on the [GitHub](https://github.com/Azure/azure-storage-azcopy) page.
storage https://docs.microsoft.com/en-us/azure/storage/common/transport-layer-security-configure-minimum-version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/transport-layer-security-configure-minimum-version.md
@@ -33,9 +33,8 @@ To log requests to your Azure Storage account and determine the TLS version used
Azure Storage logging in Azure Monitor supports using log queries to analyze log data. To query logs, you can use an Azure Log Analytics workspace. To learn more about log queries, see [Tutorial: Get started with Log Analytics queries](../../azure-monitor/log-query/log-analytics-tutorial.md).
-To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analytics, you must first create a diagnostic setting that indicates what types of requests and for which storage services you want to log data. To create a diagnostic setting in the Azure portal, follow these steps:
+To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analytics, you must first create a diagnostic setting that indicates what types of requests and for which storage services you want to log data. Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud regions. This preview enables logs for blobs (including Azure Data Lake Storage Gen2), files, queues, and tables. To create a diagnostic setting in the Azure portal, follow these steps:
-1. Enroll in the [Azure Storage logging in Azure Monitor preview](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRxW65f1VQyNCuBHMIMBV8qlUM0E0MFdPRFpOVTRYVklDSE1WUTcyTVAwOC4u).
1. Create a new Log Analytics workspace in the subscription that contains your Azure Storage account. After you configure logging for your storage account, the logs will be available in the Log Analytics workspace. For more information, see [Create a Log Analytics workspace in the Azure portal](../../azure-monitor/learn/quick-create-workspace.md). 1. Navigate to your storage account in the Azure portal. 1. In the Monitoring section, select **Diagnostic settings (preview)**.
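A hedged sketch of creating the same diagnostic setting with the Azure CLI instead of the portal steps above; the setting name, resource IDs, and log categories shown are assumptions:

```azurecli
# Send blob-service read/write/delete logs to a Log Analytics workspace (IDs are placeholders).
az monitor diagnostic-settings create \
    --name tls-audit \
    --resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default" \
    --workspace "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
    --logs '[{"category":"StorageRead","enabled":true},{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]'
```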
@@ -363,4 +362,4 @@ When a client sends a request to storage account, the client establishes a conne
## Next steps - [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md)-- [Security recommendations for Blob storage](../blobs/security-recommendations.md)\ No newline at end of file
+- [Security recommendations for Blob storage](../blobs/security-recommendations.md)
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-introduction.md
@@ -46,7 +46,7 @@ Azure file shares can be used to:
## Key benefits * **Shared access**. Azure file shares support the industry standard SMB and NFS protocols, meaning you can seamlessly replace your on-premises file shares with Azure file shares without worrying about application compatibility. Being able to share a file system across multiple machines, applications/instances is a significant advantage with Azure Files for applications that need shareability. * **Fully managed**. Azure file shares can be created without the need to manage hardware or an OS. This means you don't have to deal with patching the server OS with critical security upgrades or replacing faulty hard disks.
-* **Scripting and tooling**. PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications.You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.
+* **Scripting and tooling**. PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications. You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.
* **Resiliency**. Azure Files has been built from the ground up to be always available. Replacing on-premises file shares with Azure Files means you no longer have to wake up to deal with local power outages or network issues. * **Familiar programmability**. Applications running in Azure can access data in the share via file [system I/O APIs](/dotnet/api/system.io.file). Developers can therefore leverage their existing code and skills to migrate existing applications. In addition to System IO APIs, you can use [Azure Storage Client Libraries](/previous-versions/azure/dn261237(v=azure.100)) or the [Azure Storage REST API](/rest/api/storageservices/file-service-rest-api).
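As a small sketch of the scripting-and-tooling point above, a share can be created from the CLI; the resource group, account, share name, and quota are placeholder assumptions:

```azurecli
# Create a 100-GiB SMB file share in an existing storage account.
az storage share-rm create \
    --resource-group myResourceGroup \
    --storage-account mystorageaccount \
    --name myshare \
    --quota 100
```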
@@ -56,4 +56,4 @@ Azure file shares can be used to:
* [Connect and mount an SMB share on Windows](storage-how-to-use-files-windows.md) * [Connect and mount an SMB share on Linux](storage-how-to-use-files-linux.md) * [Connect and mount an SMB share on macOS](storage-how-to-use-files-mac.md)
-* [How to create an NFS share](storage-files-how-to-create-nfs-shares.md)
\ No newline at end of file
+* [How to create an NFS share](storage-files-how-to-create-nfs-shares.md)
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/workspace-connected-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
@@ -53,5 +53,8 @@ The following steps must be completed to ensure that your existing dedicated SQL
5. Open the **Data hub** and expand the dedicated SQL pool in the Object explorer to ensure that you have access and can query your data warehouse.
+ > [!NOTE]
+ > A connected workspace can be deleted at any time. Deleting the workspace will not delete the connected dedicated SQL pool (formerly SQL DW). The workspace feature can be re-enabled on the dedicated SQL pool (formerly SQL DW) when the delete operation has completed.
+ ## Next steps Getting started with [Synapse Workspace and Studio](../get-started.md).
time-series-insights https://docs.microsoft.com/en-us/azure/time-series-insights/concepts-streaming-ingress-throughput-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-streaming-ingress-throughput-limits.md
@@ -30,7 +30,7 @@ By default, Azure Time Series Insights Gen2 can ingest incoming data at a rate o
> [!TIP] >
-> * Environment support for ingesting speeds up to 8 MBps can be provided by request.
+> * Environment support for ingestion speeds up to 2 MBps can be provided by request.
> * Contact us if you require higher throughput by submitting a support ticket through the Azure portal. * **Example 1:**
@@ -43,10 +43,10 @@ By default, Azure Time Series Insights Gen2 can ingest incoming data at a rate o
* **Example 2:**
- Contoso Fleet Analytics has 40,000 devices that emit an event every second. They are using an Event Hub with a partition count of 2 as the Azure Time Series Insights Gen2 event source. The size of an event is 200 bytes.
+ Contoso Fleet Analytics has 10,000 devices that emit an event every second. They are using an Event Hub with a partition count of 2 as the Azure Time Series Insights Gen2 event source. The size of an event is 200 bytes.
- * The environment ingestion rate would be: **40,000 devices * 200 bytes/event * 1 event/sec = 8 MBps**.
- * Assuming balanced partitions, their per partition rate would be 4 MBps.
+ * The environment ingestion rate would be: **10,000 devices * 200 bytes/event * 1 event/sec = 2 MBps**.
+ * Assuming balanced partitions, their per partition rate would be 1 MBps.
* Contoso Fleet Analytics' ingestion rate is over the environment and partition limits. They can submit a request to Azure Time Series Insights Gen2 through the Azure portal to increase the ingestion rate for their environment, and create an Event Hub with more partitions to be within the limits. ## Hub partitions and per partition limits
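A quick shell check of the revised arithmetic, assuming the figures in the example above (10,000 devices, 200-byte events, one event per second, 2 partitions):

```bash
# 10,000 devices * 200 bytes/event * 1 event/sec, spread over 2 Event Hub partitions.
devices=10000; bytes_per_event=200; events_per_sec=1; partitions=2
total_bytes=$(( devices * bytes_per_event * events_per_sec ))
echo "environment: $(( total_bytes / 1000000 )) MBps, per partition: $(( total_bytes / partitions / 1000000 )) MBps"
```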
time-series-insights https://docs.microsoft.com/en-us/azure/time-series-insights/how-to-diagnose-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-diagnose-troubleshoot.md
@@ -66,7 +66,7 @@ You might be sending data without the Time Series ID.
- This problem might occur because your environment is being throttled. > [!NOTE]
- > At this time, Time Series Insights supports a maximum ingestion rate of 6 Mbps.
+ > At this time, Time Series Insights supports a maximum ingestion rate of 1 MBps.
## Problem: Data was showing, but now ingestion has stopped
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/boot-diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/boot-diagnostics.md
@@ -16,11 +16,15 @@ Boot diagnostics is a debugging feature for Azure virtual machines (VM) that all
## Boot diagnostics storage account When creating a VM in Azure portal, boot diagnostics is enabled by default. The recommended boot diagnostics experience is to use a managed storage account, as it yields significant performance improvements in the time to create an Azure VM. This is because an Azure managed storage account will be used, removing the time it takes to create a new user storage account to store the boot diagnostics data.
-An alternative boot diagnostics experience is to use a user managed storage account. A user can either create a new storage account or use an existing one.
- > [!IMPORTANT] > The boot diagnostics data blobs (which consist of logs and snapshot images) are stored in a managed storage account. Customers will be charged only on used GiBs by the blobs, not on the disk's provisioned size. The snapshot meters will be used for billing of the managed storage account. Because the managed accounts are created on either Standard LRS or Standard ZRS, customers will be charged at $0.05/GB per month for the size of their diagnostic data blobs only. For more information on this pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Customers will see this charge tied to their VM resource URI.
+An alternative boot diagnostic experience is to use a user managed storage account. A user can either create a new storage account or use an existing one.
+> [!NOTE]
+> User managed storage accounts associated with boot diagnostics require that the storage account and the associated virtual machines reside in the same subscription.
+++ ## Boot diagnostics view Located in the virtual machine blade, the boot diagnostics option is under the *Support and Troubleshooting* section in the Azure portal. Selecting boot diagnostics will display a screenshot and serial log information. The serial log contains kernel messaging and the screenshot is a snapshot of your VM's current state. Whether the VM is running Windows or Linux determines what the expected screenshot looks like. For Windows, users will see a desktop background and for Linux, users will see a login prompt.
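A hedged CLI sketch of switching an existing VM to the managed boot diagnostics experience; the resource group and VM names are placeholders, and the behavior of omitting `--storage` (which selects the managed account) assumes a recent Azure CLI version:

```azurecli
# Enable managed boot diagnostics on an existing VM (no user storage account needed).
az vm boot-diagnostics enable \
    --resource-group myResourceGroup \
    --name myVM

# Retrieve the serial console log captured by boot diagnostics.
az vm boot-diagnostics get-boot-log \
    --resource-group myResourceGroup \
    --name myVM
```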
@@ -102,4 +106,4 @@ Everything after API version 2020-06-01 supports managed boot diagnostics. For m
## Next steps
-Learn more about the [Azure Serial Console](./troubleshooting/serial-console-overview.md) and how to use boot diagnostics to [troubleshoot virtual machines in Azure](./troubleshooting/boot-diagnostics.md).
\ No newline at end of file
+Learn more about the [Azure Serial Console](./troubleshooting/serial-console-overview.md) and how to use boot diagnostics to [troubleshoot virtual machines in Azure](./troubleshooting/boot-diagnostics.md).
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/dav4-dasv4-series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dav4-dasv4-series.md
@@ -27,14 +27,14 @@ Dav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs | Expected network bandwidth (Mbps) | |-----|-----|-----|-----|-----|-----|-----|-----|
-| Standard_D2a_v4 | 2 | 8 | 50 | 4 | 3000 / 46 / 23 | 2 | 1000 |
-| Standard_D4a_v4 | 4 | 16 | 100 | 8 | 6000 / 93 / 46 | 2 | 2000 |
-| Standard_D8a_v4 | 8 | 32 | 200 | 16 | 12000 / 187 / 93 | 4 | 4000 |
-| Standard_D16a_v4| 16 | 64 | 400 |32 | 24000 / 375 / 187 |8 | 8000 |
-| Standard_D32a_v4| 32 | 128| 800 | 32 | 48000 / 750 / 375 |8 | 16000 |
-| Standard_D48a_v4| 48 | 192| 1200 | 32 | 96000 / 1000 / 500 | 8 | 24000 |
-| Standard_D64a_v4| 64 | 256 | 1600 | 32 | 96000 / 1000 / 500 | 8 | 30000 |
-| Standard_D96a_v4| 96 | 384 | 2400 | 32 | 96000 / 1000 / 500 | 8 | 30000 |
+| Standard_D2a_v4 | 2 | 8 | 50 | 4 | 3000 / 46 / 23 | 2 | 800 |
+| Standard_D4a_v4 | 4 | 16 | 100 | 8 | 6000 / 93 / 46 | 2 | 1600 |
+| Standard_D8a_v4 | 8 | 32 | 200 | 16 | 12000 / 187 / 93 | 4 | 3200 |
+| Standard_D16a_v4| 16 | 64 | 400 |32 | 24000 / 375 / 187 |8 | 6400 |
+| Standard_D32a_v4| 32 | 128| 800 | 32 | 48000 / 750 / 375 |8 | 12800 |
+| Standard_D48a_v4| 48 | 192| 1200 | 32 | 96000 / 1000 / 500 | 8 | 19200 |
+| Standard_D64a_v4| 64 | 256 | 1600 | 32 | 96000 / 1000 / 500 | 8 | 25600 |
+| Standard_D96a_v4| 96 | 384 | 2400 | 32 | 96000 / 1000 / 500 | 8 | 32000 |
## Dasv4-series
@@ -50,14 +50,14 @@ Dasv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs | Expected network bandwidth (Mbps) | |-----|-----|-----|-----|-----|-----|-----|-----|-----|
-| Standard_D2as_v4|2|8|16|4|4000 / 32 (50)|3200 / 48|2 | 1000 |
-| Standard_D4as_v4|4|16|32|8|8000 / 64 (100)|6400 / 96|2 | 2000 |
-| Standard_D8as_v4|8|32|64|16|16000 / 128 (200)|12800 / 192|4 | 4000 |
-| Standard_D16as_v4|16|64|128|32|32000 / 255 (400)|25600 / 384|8 | 8000 |
-| Standard_D32as_v4|32|128|256|32|64000 / 510 (800)|51200 / 768|8 | 16000 |
-| Standard_D48as_v4|48|192|384|32|96000 / 1020 (1200)|76800 / 1148|8 | 24000 |
-| Standard_D64as_v4|64|256|512|32|128000 / 1020 (1600)|80000 / 1200|8 | 30000 |
-| Standard_D96as_v4|96|384|768|32|192000 / 1020 (2400)|80000 / 1200|8 | 30000 |
+| Standard_D2as_v4|2|8|16|4|4000 / 32 (50)|3200 / 48|2 | 800 |
+| Standard_D4as_v4|4|16|32|8|8000 / 64 (100)|6400 / 96|2 | 1600 |
+| Standard_D8as_v4|8|32|64|16|16000 / 128 (200)|12800 / 192|4 | 3200 |
+| Standard_D16as_v4|16|64|128|32|32000 / 255 (400)|25600 / 384|8 | 6400 |
+| Standard_D32as_v4|32|128|256|32|64000 / 510 (800)|51200 / 768|8 | 12800 |
+| Standard_D48as_v4|48|192|384|32|96000 / 1020 (1200)|76800 / 1148|8 | 19200 |
+| Standard_D64as_v4|64|256|512|32|128000 / 1020 (1600)|80000 / 1200|8 | 25600 |
+| Standard_D96as_v4|96|384|768|32|192000 / 1020 (2400)|80000 / 1200|8 | 32000 |
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
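For reference, a hedged sketch of provisioning one of these sizes with the CLI; the resource group, VM name, and image are placeholder assumptions:

```azurecli
# Create a VM using the Standard_D4as_v4 size (4 vCPU, 16 GiB RAM per the table above).
az vm create \
    --resource-group myResourceGroup \
    --name myDasVM \
    --size Standard_D4as_v4 \
    --image UbuntuLTS \
    --generate-ssh-keys
```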
@@ -76,4 +76,4 @@ More information on Disks Types : [Disk Types](./disks-types.md#ultra-disk)
## Next steps
-Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
\ No newline at end of file
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/eav4-easv4-series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/eav4-easv4-series.md
@@ -27,15 +27,15 @@ Eav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs | Expected network bandwidth (Mbps) | | -----|-----|-----|-----|-----|-----|-----|-----|
-| Standard\_E2a\_v4|2|16|50|4|3000 / 46 / 23|2 | 1000 |
-| Standard\_E4a\_v4|4|32|100|8|6000 / 93 / 46|2 | 2000 |
-| Standard\_E8a\_v4|8|64|200|16|12000 / 187 / 93|4 | 4000 |
-| Standard\_E16a\_v4|16|128|400|32|24000 / 375 / 187|8 | 8000 |
-| Standard\_E20a\_v4|20|160|500|32|30000 / 468 / 234|8 | 10000 |
-| Standard\_E32a\_v4|32|256|800|32|48000 / 750 / 375|8 | 16000 |
-| Standard\_E48a\_v4|48|384|1200|32|96000 / 1000 (500)|8 | 24000 |
-| Standard\_E64a\_v4|64|512|1600|32|96000 / 1000 (500)|8 | 30000 |
-| Standard\_E96a\_v4|96|672|2400|32|96000 / 1000 (500)|8 | 30000 |
+| Standard\_E2a\_v4|2|16|50|4|3000 / 46 / 23|2 | 800 |
+| Standard\_E4a\_v4|4|32|100|8|6000 / 93 / 46|2 | 1600 |
+| Standard\_E8a\_v4|8|64|200|16|12000 / 187 / 93|4 | 3200 |
+| Standard\_E16a\_v4|16|128|400|32|24000 / 375 / 187|8 | 6400 |
+| Standard\_E20a\_v4|20|160|500|32|30000 / 468 / 234|8 | 8000 |
+| Standard\_E32a\_v4|32|256|800|32|48000 / 750 / 375|8 | 12800 |
+| Standard\_E48a\_v4|48|384|1200|32|96000 / 1000 (500)|8 | 19200 |
+| Standard\_E64a\_v4|64|512|1600|32|96000 / 1000 (500)|8 | 25600 |
+| Standard\_E96a\_v4|96|672|2400|32|96000 / 1000 (500)|8 | 32000 |
## Easv4-series
@@ -51,15 +51,15 @@ Easv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs | Expected network bandwidth (Mbps) | |-----|-----|-----|-----|-----|-----|-----|-----|-----|
-| Standard_E2as_v4|2|16|32|4|4000 / 32 (50)|3200 / 48|2 | 1000 |
-| Standard_E4as_v4|4|32|64|8|8000 / 64 (100)|6400 / 96|2 | 2000 |
-| Standard_E8as_v4|8|64|128|16|16000 / 128 (200)|12800 / 192|4 | 4000 |
-| Standard_E16as_v4|16|128|256|32|32000 / 255 (400)|25600 / 384|8 | 8000 |
-| Standard_E20as_v4|20|160|320|32|40000 / 320 (500)|32000 / 480|8 | 10000 |
-| Standard_E32as_v4|32|256|512|32|64000 / 510 (800)|51200 / 768|8 | 16000 |
-| Standard_E48as_v4|48|384|768|32|96000 / 1020 (1200)|76800 / 1148|8 | 24000 |
-| Standard_E64as_v4|64|512|1024|32|128000 / 1020 (1600)|80000 / 1200|8 | 30000 |
-| Standard_E96as_v4 <sup>1</sup>|96|672|1344|32|192000 / 1020 (2400)|80000 / 1200|8 | 30000 |
+| Standard_E2as_v4|2|16|32|4|4000 / 32 (50)|3200 / 48|2 | 800 |
+| Standard_E4as_v4|4|32|64|8|8000 / 64 (100)|6400 / 96|2 | 1600 |
+| Standard_E8as_v4|8|64|128|16|16000 / 128 (200)|12800 / 192|4 | 3200 |
+| Standard_E16as_v4|16|128|256|32|32000 / 255 (400)|25600 / 384|8 | 6400 |
+| Standard_E20as_v4|20|160|320|32|40000 / 320 (500)|32000 / 480|8 | 8000 |
+| Standard_E32as_v4|32|256|512|32|64000 / 510 (800)|51200 / 768|8 | 12800 |
+| Standard_E48as_v4|48|384|768|32|96000 / 1020 (1200)|76800 / 1148|8 | 19200 |
+| Standard_E64as_v4|64|512|1024|32|128000 / 1020 (1600)|80000 / 1200|8 | 25600 |
+| Standard_E96as_v4 <sup>1</sup>|96|672|1344|32|192000 / 1020 (2400)|80000 / 1200|8 | 32000 |
<sup>1</sup> [Constrained core sizes available](./constrained-vcpu.md).
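A small sketch for checking where these sizes are available (and any restrictions) before deploying; the region below is a placeholder:

```azurecli
# List Easv4/Eav4 SKUs and any restrictions in a given region.
az vm list-skus \
    --location eastus \
    --size Standard_E \
    --resource-type virtualMachines \
    --output table
```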
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mysql-on-opensuse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/mysql-on-opensuse.md
@@ -32,7 +32,7 @@ Create the VM. In this example, the VM is named *myVM* and the VM size is *Stand
```azurecli-interactive az vm create --resource-group mySQLSUSEResourceGroup \ --name myVM \
- --image openSUSE-Leap \
+ --image SUSE:openSUSE-Leap:15-2:latest \
--size Standard_D2s_v3 \ --generate-ssh-keys ```
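The corrected image URN above can be verified, or alternatives discovered, with an image query; a sketch that assumes the SUSE publisher and offer names are unchanged:

```azurecli
# List openSUSE Leap image URNs published by SUSE.
az vm image list \
    --publisher SUSE \
    --offer openSUSE-Leap \
    --all \
    --output table
```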
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-automate-vm-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
@@ -5,13 +5,8 @@ services: virtual-machines-linux
documentationcenter: virtual-machines author: cynthn manager: gwallace-
-tags: azure-resource-manager
-
-ms.assetid:
ms.service: virtual-machines-linux ms.topic: tutorial
-ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure ms.date: 09/12/2019 ms.author: cynthn
@@ -38,17 +33,7 @@ If you choose to install and use the CLI locally, this tutorial requires that yo
Cloud-init also works across distributions. For example, you don't use **apt-get install** or **yum install** to install a package. Instead you can define a list of packages to install. Cloud-init automatically uses the native package management tool for the distro you select.
-We are working with our partners to get cloud-init included and working in the images that they provide to Azure. The following table outlines the current cloud-init availability on Azure platform images:
-
-| Publisher | Offer | SKU | Version | cloud-init ready |
-|:--- |:--- |:--- |:--- |:--- |
-|Canonical |UbuntuServer |18.04-LTS |latest |yes |
-|Canonical |UbuntuServer |16.04-LTS |latest |yes |
-|Canonical |UbuntuServer |14.04.5-LTS |latest |yes |
-|CoreOS |CoreOS |Stable |latest |yes |
-|OpenLogic 7.6 |CentOS |7-CI |latest |preview |
-|RedHat 7.6 |RHEL |7-RAW-CI |7.6.2019072418 |yes |
-|RedHat 7.7 |RHEL |7-RAW-CI |7.7.2019081601 |preview |
+We are working with our partners to get cloud-init included and working in the images that they provide to Azure. For detailed information about cloud-init support for each distribution, see [Cloud-init support for VMs in Azure](using-cloud-init.md).
## Create cloud-init config file
@@ -146,7 +131,7 @@ The following steps show how you can:
- Create a VM and inject the certificate ### Create an Azure Key Vault
-First, create a Key Vault with [az keyvault create](/cli/azure/keyvault#az-keyvault-create) and enable it for use when you deploy a VM. Each Key Vault requires a unique name, and should be all lower case. Replace *mykeyvault* in the following example with your own unique Key Vault name:
+First, create a Key Vault with [az keyvault create](/cli/azure/keyvault#az-keyvault-create) and enable it for use when you deploy a VM. Each Key Vault requires a unique name, and should be all lower case. Replace `mykeyvault` in the following example with your own unique Key Vault name:
```azurecli-interactive keyvault_name=mykeyvault
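# Hypothetical continuation of the truncated example above: create the vault and
# enable it for deployment so certificates can later be injected into the VM.
# The resource group name and location are placeholder assumptions.
az keyvault create \
    --resource-group myResourceGroup \
    --location eastus \
    --name $keyvault_name \
    --enabled-for-deployment true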
vmware-cloudsimple https://docs.microsoft.com/en-us/azure/vmware-cloudsimple/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vmware-cloudsimple/index.md
@@ -1,8 +1,8 @@
--- title: Azure VMware Solution by CloudSimple description: Learn about Azure VMware Solutions by CloudSimple, including an overview, quickstarts, concepts, tutorials, and how-to guides.
-author: sharaths-cs
-ms.author: b-mashar
+author: Ajayan1008
+ms.author: v-hborys
ms.date: 08/20/2019 ms.topic: article ms.service: azure-vmware-cloudsimple