Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Add Captcha | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-captcha.md | zone_pivot_groups: b2c-policy-type [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] -Azure Active Directory B2C (Azure AD B2C) allows you to enable CAPTCHA prevent to automated attacks on your consumer-facing applications. Azure AD B2C's CAPTCHA supports both audio and visual CAPTCHA challenges. You can enable this security feature in both sign-up and sign-in flows for your local accounts. CAPTCHA isn't applicable for social identity providers' sign-in. +Azure Active Directory B2C (Azure AD B2C) allows you to enable CAPTCHA to prevent automated attacks on your consumer-facing applications. Azure AD B2C's CAPTCHA supports both audio and visual CAPTCHA challenges. You can enable this security feature in both sign-up and sign-in flows for your local accounts. CAPTCHA isn't applicable for social identity providers' sign-in. > [!NOTE] > This feature is in public preview To enable CAPTCHA in MFA flow, you need to make an update in two technical profi ... </TechnicalProfile> ```--> [!NOTE] -> - You can't add CAPTCHA to an MFA step in a sign-up only user flow. -> - In an MFA flow, CAPTCHA is applicable where the MFA method you select is SMS or phone call, SMS only or Phone call only. - ## Upload the custom policy files Use the steps in [Upload the policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy&branch=pr-en-us-260336#upload-the-policies) to upload your custom policy files. Use the steps in [Upload the policies](tutorial-create-user-flows.md?pivots=b2c- ## Test the custom policy Use the steps in [Test the custom policy](tutorial-create-user-flows.md?pivots=b2c-custom-policy#test-the-custom-policy) to test and confirm that CAPTCHA is enabled for your chosen flow. You should be prompted to enter the characters you see or hear depending on the CAPTCHA type, visual or audio, you choose.+ ::: zone-end +> [!NOTE] +> - You can't add CAPTCHA to an MFA step in a sign-up only user flow. +> - In an MFA flow, CAPTCHA is applicable where the MFA method you select is SMS or phone call, SMS only or Phone call only. + ## Next steps - Learn how to [Define a CAPTCHA technical profile](captcha-technical-profile.md). |
active-directory-b2c | Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md | When using custom domains, consider the following: - You can set up multiple custom domains. For the maximum number of supported custom domains, see [Microsoft Entra service limits and restrictions](/entra/identity/users/directory-service-limits-restrictions) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-front-door-classic-limits) for Azure Front Door. - Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).-- If you've multiple applications, migrate all oft them to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.+- If you've multiple applications, migrate all of them to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used. - After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com*. You need to block access to the default domain so that attackers can't use it to access your apps or run distributed denial-of-service (DDoS) attacks. [Submit a support ticket](find-help-open-support-ticket.md) to request for the blocking of access to the default domain. > [!WARNING] |
active-directory-b2c | Custom Policies Series Sign Up Or Sign In Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in-federation.md | Notice the claims transformations we defined in [step 3.2](#step-32define-cla Just like in sign-in with a local account, you need to configure the [Microsoft Entra Technical Profiles](active-directory-technical-profile.md), which you use to connect to Microsoft Entra ID storage, to store or read a user social account. -1. In the `ContosoCustomPolicy.XML` file, locate the `AAD-UserUpdate` technical profile and then add a new technical profile by using the following code: +1. In the `ContosoCustomPolicy.XML` file, locate the `AAD-UserRead` technical profile and then add a new technical profile by using the following code: ```xml <TechnicalProfile Id="AAD-UserWriteUsingAlternativeSecurityId"> |
active-directory-b2c | Custom Policies Series Store User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-store-user.md | To configure a display control, use the following steps: You can configure a Microsoft Entra ID technical profile to update a user account instead of attempting to create a new one. To do so, set the Microsoft Entra ID technical profile to throw an error if the specified user account doesn't already exist in the `Metadata` collection by using the following code. The *Operation* needs to be set to *Write*: ```xml- <!--<Item Key="Operation">Write</Item>--> + <Item Key="Operation">Write</Item> <Item Key="RaiseErrorIfClaimsPrincipalDoesNotExist">true</Item> ``` |
active-directory-b2c | Implicit Flow Single Page Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md | -#Customer intent: As a developer building a single-page application (SPA) with a JavaScript framework, I want to implement OAuth 2.0 implicit flow for sign-in using Azure Active Directory B2C, so that I can securely authenticate users without server-to-server exchange and handle user flows like sign-up and profile management. +#Customer intent: As a developer building a single-page application (SPA) with a JavaScript framework, I want to implement OAuth 2.0 implicit flow for sign-in using Azure AD B2C, so that I can securely authenticate users without server-to-server exchange and handle user flows like sign-up and profile management. |
active-directory-b2c | Phone Based Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md | description: Learn tips for securing phone-based multifactor authentication in y - - Previously updated : 01/11/2024 Last updated : 03/01/2024 Take the following actions to help mitigate fraudulent sign-ups. - [Enable the email one-time passcode feature (OTP)](phone-authentication-user-flows.md) for MFA (applies to both sign-up and sign-in flows). - [Configure a Conditional Access policy](conditional-access-user-flow.md) to block sign-ins based on location (applies to sign-in flows only, not sign-up flows).- - Use API connectors to [integrate with an anti-bot solution like reCAPTCHA](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-captcha) (applies to sign-up flows). + - To prevent automated attacks on your consumer-facing apps, [enable CAPTCHA](add-captcha.md). Azure AD B2C's CAPTCHA supports both audio and visual CAPTCHA challenges, and applies to both sign-up and sign-in flows for your local accounts. - Remove country codes that aren't relevant to your organization from the drop-down menu where the user verifies their phone number (this change will apply to future sign-ups): |
ai-services | How To Store User Preferences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-store-user-preferences.md | This functionality can be used as an alternate means to storing user preferences ## Enable storing user preferences -The Immersive Reader SDK [launchAsync](./reference.md#launchasync) `options` parameter contains the `-onPreferencesChanged` callback. This function is called anytime the user changes their preferences. The `value` parameter contains a string, which represents the user's current preferences. This string is then stored, for that user, by the host application. +The Immersive Reader SDK [launchAsync](reference.md#function-launchasync) `options` parameter contains the `-onPreferencesChanged` callback. This function will be called anytime the user changes their preferences. The `value` parameter contains a string, which represents the user's current preferences. This string is then stored, for that user, by the host application. ```typescript const options = { |
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/reference.md | Title: "Immersive Reader SDK Reference" + Title: Immersive Reader SDK Javascript reference -description: The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application. +description: Learn about the Immersive Reader JavaScript library that allows you to integrate Immersive Reader into your application. #-+ Previously updated : 11/15/2021- Last updated : 02/28/2024+ -# Immersive Reader JavaScript SDK Reference (v1.4) +# Immersive Reader JavaScript SDK reference (v1.4) The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application. -You may use `npm`, `yarn`, or an `HTML` `<script>` element to include the library of the latest stable build in your web application: +You can use `npm`, `yarn`, or an HTML `<script>` element to include the library of the latest stable build in your web application: ```html <script type='text/javascript' src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.4.0.js'></script> yarn add @microsoft/immersive-reader-sdk ## Functions -The SDK exposes the functions: +The SDK exposes these functions: -- [`ImmersiveReader.launchAsync(token, subdomain, content, options)`](#launchasync)+- [ImmersiveReader.launchAsync(token, subdomain, content, options)](#function-launchasync) +- [ImmersiveReader.close()](#function-close) +- [ImmersiveReader.renderButtons(options)](#function-renderbuttons) -- [`ImmersiveReader.close()`](#close)+### Function: `launchAsync` -- [`ImmersiveReader.renderButtons(options)`](#renderbuttons)--<br> --## launchAsync --Launches the Immersive Reader within an `HTML` `iframe` element in your web application. The size of your content is limited to a maximum of 50 MB. +`ImmersiveReader.launchAsync(token, subdomain, content, options)` launches the Immersive Reader within an HTML `iframe` element in your web application. The size of your content is limited to a maximum of 50 MB. ```typescript launchAsync(token: string, subdomain: string, content: Content, options?: Options): Promise<LaunchResponse>; ``` -#### launchAsync Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | |-| `token` | string | The Microsoft Entra authentication token. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). | -| `subdomain` | string | The custom subdomain of your Immersive Reader resource in Azure. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). | -| `content` | [Content](#content) | An object containing the content to be shown in the Immersive Reader. | -| `options` | [Options](#options) | Options for configuring certain behaviors of the Immersive Reader. Optional. | +| token | string | The Microsoft Entra authentication token. To learn more, see [How to create an Immersive Reader resource](how-to-create-immersive-reader.md). | +| subdomain | string | The custom subdomain of your [Immersive Reader resource](how-to-create-immersive-reader.md) in Azure. | +| content | [Content](#content) | An object that contains the content to be shown in the Immersive Reader. | +| options | [Options](#options) | Options for configuring certain behaviors of the Immersive Reader. Optional. 
| #### Returns -Returns a `Promise<LaunchResponse>`, which resolves when the Immersive Reader is loaded. The `Promise` resolves to a [`LaunchResponse`](#launchresponse) object. +Returns a `Promise<LaunchResponse>`, which resolves when the Immersive Reader is loaded. The `Promise` resolves to a [LaunchResponse](#launchresponse) object. #### Exceptions -The returned `Promise` will be rejected with an [`Error`](#error) object if the Immersive Reader fails to load. For more information, see the [error codes](#error-codes). --<br> +If the Immersive Reader fails to load, the returned `Promise` is rejected with an [Error](#error) object. -## close +### Function: `close` -Closes the Immersive Reader. +`ImmersiveReader.close()` closes the Immersive Reader. -An example use case for this function is if the exit button is hidden by setting ```hideExitButton: true``` in [options](#options). Then, a different button (for example a mobile header's back arrow) can call this ```close``` function when it's clicked. +An example use case for this function is if the exit button is hidden by setting `hideExitButton: true` in [options](#options). Then, a different button (for example, a mobile header's back arrow) can call this `close` function when it's clicked. ```typescript close(): void; ``` -<br> --## Immersive Reader Launch Button --The SDK provides default styling for the button for launching the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling. For more information, see [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md). --```html -<div class='immersive-reader-button'></div> -``` --#### Optional attributes --Use the following attributes to configure the look and feel of the button. --| Attribute | Description | -| | -- | -| `data-button-style` | Sets the style of the button. Can be `icon`, `text`, or `iconAndText`. Defaults to `icon`. | -| `data-locale` | Sets the locale. For example, `en-US` or `fr-FR`. Defaults to English `en`. | -| `data-icon-px-size` | Sets the size of the icon in pixels. Defaults to 20px. | --<br> +### Function: `renderButtons` -## renderButtons +The `ImmersiveReader.renderButtons(options)` function isn't necessary if you use the [How to customize the Immersive Reader button](how-to-customize-launch-button.md) guidance. -The ```renderButtons``` function isn't necessary if you're using the [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md) guidance. --This function styles and updates the document's Immersive Reader button elements. If ```options.elements``` is provided, then the buttons will be rendered within each element provided in ```options.elements```. Using the ```options.elements``` parameter is useful when you have multiple sections in your document on which to launch the Immersive Reader, and want a simplified way to render multiple buttons with the same styling, or want to render the buttons with a simple and consistent design pattern. To use this function with the [renderButtons options](#renderbuttons-options) parameter, call ```ImmersiveReader.renderButtons(options: RenderButtonsOptions);``` on page load as demonstrated in the below code snippet. Otherwise, the buttons will be rendered within the document's elements that have the class ```immersive-reader-button``` as shown in [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md). +This function styles and updates the document's Immersive Reader button elements. 
If `options.elements` is provided, then the buttons are rendered within each element provided in `options.elements`. Using the `options.elements` parameter is useful when you have multiple sections in your document on which to launch the Immersive Reader, and want a simplified way to render multiple buttons with the same styling, or want to render the buttons with a simple and consistent design pattern. To use this function with the [renderButtons options](#renderbuttons-options) parameter, call `ImmersiveReader.renderButtons(options: RenderButtonsOptions);` on page load as demonstrated in the following code snippet. Otherwise, the buttons are rendered within the document's elements that have the class `immersive-reader-button` as shown in [How to customize the Immersive Reader button](how-to-customize-launch-button.md). ```typescript // This snippet assumes there are two empty div elements in const btns: HTMLDivElement[] = [btn1, btn2]; ImmersiveReader.renderButtons({elements: btns}); ``` -See the above [Optional Attributes](#optional-attributes) for more rendering options. To use these options, add any of the option attributes to each ```HTMLDivElement``` in your page HTML. +See the [launch button](#launch-button) optional attributes for more rendering options. To use these options, add any of the option attributes to each `HTMLDivElement` in your page HTML. ```typescript renderButtons(options?: RenderButtonsOptions): void; ``` -#### renderButtons Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | |-| `options` | [renderButtons options](#renderbuttons-options) | Options for configuring certain behaviors of the renderButtons function. Optional. | +| options | [renderButtons options](#renderbuttons-options) | Options for configuring certain behaviors of the renderButtons function. Optional. | -### renderButtons Options +#### renderButtons options Options for rendering the Immersive Reader buttons. Options for rendering the Immersive Reader buttons. } ``` -#### renderButtons Options Parameters --| Setting | Type | Description | +| Parameter | Type | Description | | - | - | -- | | elements | HTMLDivElement[] | Elements to render the Immersive Reader buttons in. | -##### `elements` ```Parameters Type: HTMLDivElement[] Required: false ``` -<br> +## Launch button ++The SDK provides default styling for the Immersive Reader launch button. Use the `immersive-reader-button` class attribute to enable this styling. For more information, see [How to customize the Immersive Reader button](how-to-customize-launch-button.md). ++```html +<div class='immersive-reader-button'></div> +``` ++Use the following optional attributes to configure the look and feel of the button. ++| Attribute | Description | +| | -- | +| data-button-style | Sets the style of the button. Can be `icon`, `text`, or `iconAndText`. Defaults to `icon`. | +| data-locale | Sets the locale. For example, `en-US` or `fr-FR`. Defaults to English `en`. | +| data-icon-px-size | Sets the size of the icon in pixels. Defaults to 20 px. | ## LaunchResponse -Contains the response from the call to `ImmersiveReader.launchAsync`. A reference to the `HTML` `iframe` element that contains the Immersive Reader can be accessed via `container.firstChild`. +Contains the response from the call to `ImmersiveReader.launchAsync`. A reference to the HTML `iframe` element that contains the Immersive Reader can be accessed via `container.firstChild`. 
```typescript { Contains the response from the call to `ImmersiveReader.launchAsync`. A referenc } ``` -#### LaunchResponse Parameters --| Setting | Type | Description | +| Parameter | Type | Description | | - | - | -- | | container | HTMLDivElement | HTML element that contains the Immersive Reader `iframe` element. | | sessionId | String | Globally unique identifier for this session, used for debugging. | | charactersProcessed | number | Total number of characters processed |- + ## Error Contains information about an error. Contains information about an error. } ``` -#### Error Parameters --| Setting | Type | Description | +| Parameter | Type | Description | | - | - | -- |-| code | String | One of a set of error codes. For more information, see [Error codes](#error-codes). | +| code | String | One of a set of error codes. | | message | String | Human-readable representation of the error. | -#### Error codes --| Code | Description | +| Error code | Description | | - | -- |-| BadArgument | Supplied argument is invalid, see `message` parameter of the [Error](#error). | +| BadArgument | Supplied argument is invalid. See `message` parameter of the error. | | Timeout | The Immersive Reader failed to load within the specified timeout. | | TokenExpired | The supplied token is expired. | | Throttled | The call rate limit has been exceeded. | -<br> - ## Types ### Content Contains the content to be shown in the Immersive Reader. } ``` -#### Content Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | | | title | String | Title text shown at the top of the Immersive Reader (optional) | | chunks | [Chunk[]](#chunk) | Array of chunks | Required: true Default value: null ``` -<br> - ### Chunk -A single chunk of data, which will be passed into the Content of the Immersive Reader. +A single chunk of data, which is passed into the content of the Immersive Reader. ```typescript { A single chunk of data, which will be passed into the Content of the Immersive R } ``` -#### Chunk Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | | | content | String | The string that contains the content sent to the Immersive Reader. |-| lang | String | Language of the text, the value is in IETF BCP 47-language tag format, for example, en, es-ES. Language will be detected automatically if not specified. For more information, see [Supported Languages](#supported-languages). | +| lang | String | Language of the text, the value is in *IETF BCP 47-language* tag format, for example, en, es-ES. Language is detected automatically if not specified. For more information, see [Supported languages](#supported-languages). | | mimeType | string | Plain text, MathML, HTML & Microsoft Word DOCX formats are supported. For more information, see [Supported MIME types](#supported-mime-types). | ##### `content` Default value: "text/plain" #### Supported MIME types -| MIME Type | Description | +| MIME type | Description | | | -- | | text/plain | Plain text. |-| text/html | HTML content. [Learn more](#html-support)| -| application/mathml+xml | Mathematical Markup Language (MathML). [Learn more](./how-to/display-math.md). -| application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Word .docx format document. ---<br> +| text/html | [HTML content](#html-support). | +| application/mathml+xml | [Mathematical Markup Language (MathML)](how-to/display-math.md). 
| +| application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Word .docx format document. | ## Options Contains properties that configure certain behaviors of the Immersive Reader. } ``` -#### Options Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | |-| uiLang | String | Language of the UI, the value is in IETF BCP 47-language tag format, for example, en, es-ES. Defaults to browser language if not specified. | -| timeout | Number | Duration (in milliseconds) before [launchAsync](#launchasync) fails with a timeout error (default is 15,000 ms). This timeout only applies to the initial launch of the Reader page, when the Reader page opens successfully and the spinner starts. Adjustment of the timeout should'nt be necessary. | -| uiZIndex | Number | Z-index of the `HTML` `iframe` element that will be created (default is 1000). | -| useWebview | Boolean| Use a webview tag instead of an `HTML` `iframe` element, for compatibility with Chrome Apps (default is false). | +| uiLang | String | Language of the UI, the value is in *IETF BCP 47-language* tag format, for example, en, es-ES. Defaults to browser language if not specified. | +| timeout | Number | Duration (in milliseconds) before [launchAsync](#function-launchasync) fails with a timeout error (default is 15,000 ms). This timeout only applies to the initial launch of the Reader page, when the Reader page opens successfully and the spinner starts. Adjustment of the timeout shouldn't be necessary. | +| uiZIndex | Number | Z-index of the HTML `iframe` element that is created (default is 1000). | +| useWebview | Boolean| Use a webview tag instead of an HTML `iframe` element, for compatibility with Chrome Apps (default is false). | | onExit | Function | Executes when the Immersive Reader exits. | | customDomain | String | Reserved for internal use. Custom domain where the Immersive Reader webapp is hosted (default is null). | | allowFullscreen | Boolean | The ability to toggle fullscreen (default is true). |-| parent | Node | Node in which the `HTML` `iframe` element or `Webview` container is placed. If the element doesn't exist, iframe is placed in `body`. | -| hideExitButton | Boolean | Hides the Immersive Reader's exit button arrow (default is false). This value should only be true if there's an alternative mechanism provided to exit the Immersive Reader (e.g a mobile toolbar's back arrow). | -| cookiePolicy | [CookiePolicy](#cookiepolicy-options) | Setting for the Immersive Reader's cookie usage (default is *CookiePolicy.Disable*). It's the responsibility of the host application to obtain any necessary user consent following EU Cookie Compliance Policy. For more information, see [Cookie Policy Options](#cookiepolicy-options). | +| parent | Node | Node in which the HTML `iframe` element or `Webview` container is placed. If the element doesn't exist, iframe is placed in `body`. | +| hideExitButton | Boolean | Hides the Immersive Reader's exit button arrow (default is false). This value should only be true if there's an alternative mechanism provided to exit the Immersive Reader (for example, a mobile toolbar's back arrow). | +| cookiePolicy | [CookiePolicy](#cookiepolicy-options) | Setting for the Immersive Reader's cookie usage (default is *CookiePolicy.Disable*). It's the responsibility of the host application to obtain any necessary user consent following EU Cookie Compliance Policy. For more information, see [Cookie Policy options](#cookiepolicy-options). 
| | disableFirstRun | Boolean | Disable the first run experience. | | readAloudOptions | [ReadAloudOptions](#readaloudoptions) | Options to configure Read Aloud. | | translationOptions | [TranslationOptions](#translationoptions) | Options to configure translation. | | displayOptions | [DisplayOptions](#displayoptions) | Options to configure text size, font, theme, and so on. |-| preferences | String | String returned from onPreferencesChanged representing the user's preferences in the Immersive Reader. For more information, see [Settings Parameters](#settings-parameters) and [How-To Store User Preferences](./how-to-store-user-preferences.md). | -| onPreferencesChanged | Function | Executes when the user's preferences have changed. For more information, see [How-To Store User Preferences](./how-to-store-user-preferences.md). | +| preferences | String | String returned from onPreferencesChanged representing the user's preferences in the Immersive Reader. For more information, see [How to store user preferences](how-to-store-user-preferences.md). | +| onPreferencesChanged | Function | Executes when the user's preferences have changed. For more information, see [How to store user preferences](how-to-store-user-preferences.md). | | disableTranslation | Boolean | Disable the word and document translation experience. |-| disableGrammar | Boolean | Disable the Grammar experience. This option will also disable Syllables, Parts of Speech and Picture Dictionary, which depends on Parts of Speech. | -| disableLanguageDetection | Boolean | Disable Language Detection to ensure the Immersive Reader only uses the language that is explicitly specified on the [Content](#content)/[Chunk[]](#chunk). This option should be used sparingly, primarily in situations where language detection isn't working, for instance, this issue is more likely to happen with short passages of fewer than 100 characters. You should be certain about the language you're sending, as text-to-speech won't have the correct voice. Syllables, Parts of Speech and Picture Dictionary won't work correctly if the language isn't correct. | +| disableGrammar | Boolean | Disable the Grammar experience. This option also disables Syllables, Parts of Speech, and Picture Dictionary, which depends on Parts of Speech. | +| disableLanguageDetection | Boolean | Disable Language Detection to ensure the Immersive Reader only uses the language that is explicitly specified on the [Content](#content)/[Chunk[]](#chunk). This option should be used sparingly, primarily in situations where language detection isn't working. For instance, this issue is more likely to happen with short passages of fewer than 100 characters. You should be certain about the language you're sending, as text-to-speech won't have the correct voice. Syllables, Parts of Speech, and Picture Dictionary don't work correctly if the language isn't correct. | ##### `uiLang` ```Parameters Default value: null ``` ##### `preferences`--> [!CAUTION] -> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option. 
- ```Parameters Type: String Required: false Default value: null ``` +> [!CAUTION] +> Don't attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application because this might cause unexpected behavior resulting in a degraded user experience. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option. + ##### `onPreferencesChanged` ```Parameters Type: Function Required: false Default value: null ``` -<br> - ## ReadAloudOptions ```typescript type ReadAloudOptions = { }; ``` -#### ReadAloudOptions Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | |-| voice | String | Voice, either "Female" or "Male". Not all languages support both genders. | -| speed | Number | Playback speed, must be between 0.5 and 2.5, inclusive. | +| voice | String | Voice, either *Female* or *Male*. Not all languages support both genders. | +| speed | Number | Playback speed. Must be between 0.5 and 2.5, inclusive. | | autoPlay | Boolean | Automatically start Read Aloud when the Immersive Reader loads. | +> [!NOTE] +> Due to browser limitations, autoplay is not supported in Safari. + ##### `voice` ```Parameters Type: String Default value: 1 Values available: 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5 ``` -> [!NOTE] -> Due to browser limitations, autoplay is not supported in Safari. --<br> - ## TranslationOptions ```typescript type TranslationOptions = { }; ``` -#### TranslationOptions Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | |-| language | String | Sets the translation language, the value is in IETF BCP 47-language tag format, for example, fr-FR, es-MX, zh-Hans-CN. Required to automatically enable word or document translation. | +| language | String | Sets the translation language, the value is in *IETF BCP 47-language* tag format, for example, fr-FR, es-MX, zh-Hans-CN. Required to automatically enable word or document translation. | | autoEnableDocumentTranslation | Boolean | Automatically translate the entire document. | | autoEnableWordTranslation | Boolean | Automatically enable word translation. | type TranslationOptions = { Type: String Required: true Default value: null -Values available: For more information, see the Supported Languages section +Values available: For more information, see the Supported languages section ``` -<br> - ## ThemeOption ```typescript type DisplayOptions = { }; ``` -#### DisplayOptions Parameters --| Name | Type | Description | +| Parameter | Type | Description | | - | - | | | textSize | Number | Sets the chosen text size. | | increaseSpacing | Boolean | Sets whether text spacing is toggled on or off. |-| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). | -| themeOption | ThemeOption | Sets the chosen Theme of the reader ("Light", "Dark"). | +| fontFamily | String | Sets the chosen font (*Calibri*, *ComicSans*, or *Sitka*). | +| themeOption | ThemeOption | Sets the chosen theme of the reader (*Light*, *Dark*). | ##### `textSize` ```Parameters Default value: "Calibri" Values available: "Calibri", "Sitka", "ComicSans" ``` -<br> --## CookiePolicy Options +## CookiePolicy options ```typescript enum CookiePolicy { Disable, Enable } ``` -**The settings listed below are for informational purposes only**. 
The Immersive Reader stores its settings, or user preferences, in cookies. This *cookiePolicy* option **disables** the use of cookies by default to follow EU Cookie Compliance laws. If you want to re-enable cookies and restore the default functionality for Immersive Reader user preferences, your website or application will need proper consent from the user to enable cookies. Then, to re-enable cookies in the Immersive Reader, you must explicitly set the *cookiePolicy* option to *CookiePolicy.Enable* when launching the Immersive Reader. The table below describes what settings the Immersive Reader stores in its cookie when the *cookiePolicy* option is enabled. +**The following settings are for informational purposes only**. The Immersive Reader stores its settings, or user preferences, in cookies. This *cookiePolicy* option **disables** the use of cookies by default to follow EU Cookie Compliance laws. If you want to re-enable cookies and restore the default functionality for Immersive Reader user preferences, your website or application needs proper consent from the user to enable cookies. Then, to re-enable cookies in the Immersive Reader, you must explicitly set the *cookiePolicy* option to *CookiePolicy.Enable* when launching the Immersive Reader. -#### Settings Parameters +The following table describes what settings the Immersive Reader stores in its cookie when the *cookiePolicy* option is enabled. | Setting | Type | Description | | - | - | -- | | textSize | Number | Sets the chosen text size. |-| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). | +| fontFamily | String | Sets the chosen font (*Calibri*, *ComicSans*, or *Sitka*). | | textSpacing | Number | Sets whether text spacing is toggled on or off. | | formattingEnabled | Boolean | Sets whether HTML formatting is toggled on or off. |-| theme | String | Sets the chosen theme (e.g "Light", "Dark"...). | +| theme | String | Sets the chosen theme (*Light*, *Dark*). | | syllabificationEnabled | Boolean | Sets whether syllabification toggled on or off. | | nounHighlightingEnabled | Boolean | Sets whether noun-highlighting is toggled on or off. | | nounHighlightingColor | String | Sets the chosen noun-highlighting color. | enum CookiePolicy { Disable, Enable } | pictureDictionaryEnabled | Boolean | Sets whether Picture Dictionary is toggled on or off. | | posLabelsEnabled | Boolean | Sets whether the superscript text label of each highlighted Part of Speech is toggled on or off. | -<br> +## Supported languages -## Supported Languages --The translation feature of Immersive Reader supports many languages. For more information, see [Language Support](./language-support.md). --<br> +The translation feature of Immersive Reader supports many languages. For more information, see [Language support](language-support.md). ## HTML support -When formatting is enabled, the following content will be rendered as HTML in the Immersive Reader. +When formatting is enabled, the following content is rendered as HTML in the Immersive Reader. 
-| HTML | Supported Content | +| HTML | Supported content | | | -- |-| Font Styles | Bold, Italic, Underline, Code, Strikethrough, Superscript, Subscript | -| Unordered Lists | Disc, Circle, Square | -| Ordered Lists | Decimal, Upper-Alpha, Lower-Alpha, Upper-Roman, Lower-Roman | +| Font styles | Bold, italic, underline, code, strikethrough, superscript, subscript | +| Unordered lists | Disc, circle, square | +| Ordered lists | Decimal, upper-Alpha, lower-Alpha, upper-Roman, lower-Roman | -Unsupported tags will be rendered comparably. Images and tables are currently not supported. --<br> +Unsupported tags are rendered comparably. Images and tables are currently not supported. ## Browser support Use the most recent versions of the following browsers for the best experience with the Immersive Reader. * Microsoft Edge-* Internet Explorer 11 * Google Chrome * Mozilla Firefox * Apple Safari -<br> --## Next steps +## Next step -* Explore the [Immersive Reader SDK on GitHub](https://github.com/microsoft/immersive-reader-sdk) -* [Quickstart: Create a web app that launches the Immersive Reader (C#)](./quickstarts/client-libraries.md?pivots=programming-language-csharp) +> [!div class="nextstepaction"] +> [Explore the Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) |
ai-services | Security How To Update Role Assignment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/security-how-to-update-role-assignment.md | Title: "Security Advisory: Update Role Assignment for Microsoft Entra authentication permissions" + Title: "Update role assignment for Microsoft Entra authentication" -description: This article will show you how to update the role assignment on existing Immersive Reader resources due to a security bug discovered in November 2021 +description: Learn how to update the role assignment on existing Immersive Reader resources due to a security bug. #-+ Previously updated : 01/06/2022- Last updated : 02/28/2024+ -# Security Advisory: Update Role Assignment for Microsoft Entra authentication permissions +# Security advisory: Update role assignment for Microsoft Entra authentication -A security bug has been discovered with Immersive Reader Microsoft Entra authentication configuration. We are advising that you change the permissions on your Immersive Reader resources as described below. +A security bug was discovered that affects Microsoft Entra authentication for Immersive Reader. We advise that you change the permissions on your Immersive Reader resources. ## Background -A security bug was discovered that relates to Microsoft Entra authentication for Immersive Reader. When initially creating your Immersive Reader resources and configuring them for Microsoft Entra authentication, it is necessary to grant permissions for the Microsoft Entra application identity to access your Immersive Reader resource. This is known as a Role Assignment. The Azure role that was previously used for permissions was the [Cognitive Services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role. +When you initially create your Immersive Reader resources and configure them for Microsoft Entra authentication, it's necessary to grant permissions for the Microsoft Entra application identity to access your Immersive Reader resource. This is known as a *role assignment*. The Azure role that was previously used for permissions was the [Cognitive Services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role. -During a security audit, it was discovered that this Cognitive Services User role has permissions to [List Keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Microsoft Entra access token in client web apps and browsers, and if the access token were to be stolen by a bad actor or attacker, there is a concern that this access token could be used to `list keys` of your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill. 
+During a security audit, it was discovered that this Cognitive Services User role has permissions to [list keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Microsoft Entra access token in client web apps and browsers. If the access token were stolen by a bad actor or attacker, there's a concern that this access token could be used to `list keys` for your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill. -In practice however, this attack or exploit is not likely to occur or may not even be possible. For Immersive Reader scenarios, customers obtain Microsoft Entra access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Microsoft Entra access token would need to have an audience of `https://management.azure.com`. Generally speaking, this is not too much of a concern, since the access tokens used for Immersive Reader scenarios would not work to `list keys`, as they do not have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Microsoft Entra ID to acquire the token. Again, this is not likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Microsoft Entra access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that attacker could compromise that process and change the audience. +In practice, however, this attack or exploit isn't likely to occur or might not even be possible. For Immersive Reader scenarios, customers obtain Microsoft Entra access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Microsoft Entra access token would need to have an audience of `https://management.azure.com`. Generally speaking, this isn't much of a concern, since the access tokens used for Immersive Reader scenarios wouldn't work to `list keys`, as they don't have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Microsoft Entra ID to acquire the token. Again, this isn't likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Microsoft Entra access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that an attacker could compromise that process and change the audience. 
-The real concern comes when or if any customer were to acquire tokens from Microsoft Entra ID directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it is possible that some customers are doing this. +The real concern comes when or if any customer were to acquire tokens from Microsoft Entra ID directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it's possible that some customers are doing this. -To mitigate the concerns about any possibility of using the Microsoft Entra access token to `list keys`, we have created a new built-in Azure role called `Cognitive Services Immersive Reader User` that does not have the permissions to `list keys`. This new role is not a shared role for the Azure AI services platform like `Cognitive Services User` role is. This new role is specific to Immersive Reader and will only allow calls to Immersive Reader APIs. +To mitigate the concerns about any possibility of using the Microsoft Entra access token to `list keys`, we created a new built-in Azure role called `Cognitive Services Immersive Reader User` that doesn't have the permissions to `list keys`. This new role isn't a shared role for the Azure AI services platform like `Cognitive Services User` role is. This new role is specific to Immersive Reader and only allows calls to Immersive Reader APIs. -We are advising that ALL customers migrate to using the new `Cognitive Services Immersive Reader User` role instead of the original `Cognitive Services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions. +We advise ALL customers to use the new `Cognitive Services Immersive Reader User` role instead of the original `Cognitive Services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions. This recommendation applies to ALL customers, to ensure that this vulnerability is patched for everyone, no matter what the implementation scenario or likelihood of attack. -If you do NOT do this, nothing will break. The old role will continue to function. The security impact for most customers is minimal. However, it is advised that you migrate to the new role to mitigate the security concerns discussed above. Applying this update is a security advisory recommendation; it is not a mandate. +If you do NOT do this, nothing will break. The old role will continue to function. The security impact for most customers is minimal. However, we advise that you migrate to the new role to mitigate the security concerns discussed. Applying this update is a security advisory recommendation; it's not a mandate. -Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) will automatically use the new role. +Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) automatically use the new role. +## Update role and rotate your subscription keys -## Call to action +If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) before February 2022, we advise that you perform the following operation to update the role assignment permissions on ALL of your Immersive Reader resources. 
The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource. -If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) prior to February 2022, it is advised that you perform the operation below to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource. +After you update the role using the following script, we also advise that you rotate the subscription keys on your resource. This is in case your keys were compromised by the exploit, and somebody is actually using your resource with subscription key authentication without your consent. Rotating the keys renders the previous keys invalid and denies any further access. For customers using Microsoft Entra authentication, which should be everyone per current Immersive Reader SDK implementation, rotating the keys has no effect on the Immersive Reader service, since Microsoft Entra access tokens are used for authentication, not the subscription key. Rotating the subscription keys is just another precaution. -After you have updated the role using the script below, it is also advised that you rotate the subscription keys on your resource. This is in case your keys have been compromised by the exploit above, and somebody is actually using your resource with subscription key authentication without your consent. Rotating the keys will render the previous keys invalid and deny any further access. For customers using Microsoft Entra authentication, which should be everyone per current Immersive Reader SDK implementation, rotating the keys will have no impact on the Immersive Reader service, since Microsoft Entra access tokens are used for authentication, not the subscription key. Rotating the subscription keys is just another precaution. +You can rotate the subscription keys in the [Azure portal](https://portal.azure.com). Navigate to your resource and then to the `Keys and Endpoint` section. At the top, there are buttons to `Regenerate Key1` and `Regenerate Key2`. -You can rotate the subscription keys on the [Azure portal](https://portal.azure.com). Navigate to your resource and then to the `Keys and Endpoint` blade. At the top, there are buttons to `Regenerate Key1` and `Regenerate Key2`. -----### Use Azure PowerShell environment to update your Immersive Reader resource Role assignment +### Use Azure PowerShell to update your role assignment 1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`. You can rotate the subscription keys on the [Azure portal](https://portal.azure. 
throw "Error: Failed to find Immersive Reader resource" } - # Get the Azure AD application service principal + # Get the Microsoft Entra application service principal $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv if (-not $principalId) {- throw "Error: Failed to find Azure AD application service principal" + throw "Error: Failed to find Microsoft Entra application service principal" } $newRoleName = "Cognitive Services Immersive Reader User" You can rotate the subscription keys on the [Azure portal](https://portal.azure. } ``` -1. Run the function `Update-ImmersiveReaderRoleAssignment`, supplying the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate. +1. Run the function `Update-ImmersiveReaderRoleAssignment`, replacing the `<PARAMETER_VALUES>` placeholders with your own values as appropriate. - ```azurepowershell-interactive - Update-ImmersiveReaderRoleAssignment -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceName '<RESOURCE_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' + ```azurepowershell + Update-ImmersiveReaderRoleAssignment -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceName '<RESOURCE_NAME>' -AADAppIdentifierUri '<MICROSOFT_ENTRA_APP_IDENTIFIER_URI>' ``` - The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or use this command as-is. Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values. + The full command looks something like the following. Here we put each parameter on its own line for clarity, so you can see the whole command. Don't copy or use this command as-is. Copy and use the command with your own values. This example has dummy values for the `<PARAMETER_VALUES>`. Yours will be different, as you come up with your own names for these values. - ```Update-ImmersiveReaderRoleAssignment```<br> - ``` -SubscriptionName 'MyOrganizationSubscriptionName'```<br> - ``` -ResourceGroupName 'MyResourceGroupName'```<br> - ``` -ResourceName 'MyOrganizationImmersiveReader'```<br> - ``` -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'```<br> + ```azurepowershell + Update-ImmersiveReaderRoleAssignment + -SubscriptionName 'MyOrganizationSubscriptionName' + -ResourceGroupName 'MyResourceGroupName' + -ResourceName 'MyOrganizationImmersiveReader' + -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp' + ``` | Parameter | Comments | | | | | SubscriptionName |The name of your Azure subscription. |- | ResourceGroupName |The name of the Resource Group that contains your Immersive Reader resource. | + | ResourceGroupName |The name of the resource group that contains your Immersive Reader resource. | | ResourceName |The name of your Immersive Reader resource. |- | AADAppIdentifierUri |The URI for your Azure AD app. | -+ | AADAppIdentifierUri |The URI for your Microsoft Entra app. 
| -## Next steps +## Next step -* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js -* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android -* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS -* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python -* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md) +> [!div class="nextstepaction"] +> [Quickstart: Get started with Immersive Reader](quickstarts/client-libraries.md) |
ai-services | Tutorial Ios Picture Immersive Reader | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md | Title: "Tutorial: Create an iOS app that takes a photo and launches it in the Immersive Reader (Swift)" -description: In this tutorial, you will build an iOS app from scratch and add the Picture to Immersive Reader functionality. +description: Learn how to build an iOS app from scratch and add the Picture to Immersive Reader functionality. #-+ Previously updated : 01/14/2020- Last updated : 02/28/2024+ #Customer intent: As a developer, I want to integrate two Azure AI services, the Immersive Reader and the Read API into my iOS application so that I can view any text from a photo in the Immersive Reader. The [Immersive Reader](https://www.onenote.com/learningtools) is an inclusively The [Azure AI Vision Read API](../../ai-services/computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream. -In this tutorial, you will build an iOS app from scratch and integrate the Read API, and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [here](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios). --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. +In this tutorial, you build an iOS app from scratch and integrate the Read API and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [on GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios). ## Prerequisites -* [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12) -* An Immersive Reader resource configured for Microsoft Entra authentication. Follow [these instructions](./how-to-create-immersive-reader.md) to get set up. You will need some of the values created here when configuring the sample project properties. Save the output of your session into a text file for future reference. -* Usage of this sample requires an Azure subscription to the Azure AI Vision service. [Create an Azure AI Vision resource in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision). +* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/ai-services/). +* MacOS and [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12). +* An Immersive Reader resource configured for Microsoft Entra authentication. Follow [these instructions](how-to-create-immersive-reader.md) to get set up. +* A subscription to the Azure AI Vision service. Create an [Azure AI Vision resource in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision). ## Create an Xcode project Create a new project in Xcode. -![New Project](./media/ios/xcode-create-project.png) Choose **Single View App**. -![New Single View App](./media/ios/xcode-single-view-app.png) ## Get the SDK CocoaPod The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via Cocoapods: -1. [Install CocoaPods](http://guides.cocoapods.org/using/getting-started.html) - Follow the getting started guide to install Cocoapods. +1. Follow the [guide to install Cocoapods](http://guides.cocoapods.org/using/getting-started.html). 2. 
Create a Podfile by running `pod init` in your Xcode project's root directory. The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via ## Acquire a Microsoft Entra authentication token -You need some values from the Microsoft Entra authentication configuration prerequisite step above for this part. Refer back to the text file you saved of that session. +You need some values from the Microsoft Entra authentication configuration step in the prerequisites section. Refer back to the text file you saved from that session. ````text TenantId => Azure subscription TenantId-ClientId => Azure AD ApplicationId -ClientSecret => Azure AD Application Service Principal password +ClientId => Microsoft Entra ApplicationId +ClientSecret => Microsoft Entra Application Service Principal password Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI PowerShell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/') ```` -In the main project folder, which contains the ViewController.swift file, create a Swift class file called Constants.swift. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control, as it contains secrets that should not be made public. It is recommended that you do not keep secrets in your app. Instead, we recommend using a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial. +In the main project folder, which contains the *ViewController.swift* file, create a Swift class file called `Constants.swift`. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control because it contains secrets that shouldn't be made public. We recommend that you don't keep secrets in your app. Instead, use a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial. ## Set up the app to run without a storyboard -Open AppDelegate.swift and replace the file with the following code. +Open *AppDelegate.swift* and replace the file with the following code. ```swift import UIKit class AppDelegate: UIResponder, UIApplicationDelegate { ## Add functionality for taking and uploading photos -Rename ViewController.swift to PictureLaunchViewController.swift and replace the file with the following code. 
```swift import UIKit class PictureLaunchViewController: UIViewController, UINavigationControllerDeleg }) } - /// Retrieves the token for the Immersive Reader using Azure Active Directory authentication + /// Retrieves the token for the Immersive Reader using Microsoft Entra authentication /// /// - Parameters:- /// -onSuccess: A closure that gets called when the token is successfully recieved using Azure Active Directory authentication. - /// -theToken: The token for the Immersive Reader recieved using Azure Active Directory authentication. - /// -onFailure: A closure that gets called when the token fails to be obtained from the Azure Active Directory Authentication. - /// -theError: The error that occurred when the token fails to be obtained from the Azure Active Directory Authentication. + /// -onSuccess: A closure that gets called when the token is successfully received using Microsoft Entra authentication. + /// -theToken: The token for the Immersive Reader received using Microsoft Entra authentication. + /// -onFailure: A closure that gets called when the token fails to be obtained from the Microsoft Entra authentication. + /// -theError: The error that occurred when the token fails to be obtained from the Microsoft Entra authentication. func getToken(onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) { let tokenForm = "grant_type=client_credentials&resource=https://cognitiveservices.azure.com/&client_id=" + Constants.clientId + "&client_secret=" + Constants.clientSecret class PictureLaunchViewController: UIViewController, UINavigationControllerDeleg ## Build and run the app Set the archive scheme in Xcode by selecting a simulator or device target.-![Archive scheme](./media/ios/xcode-archive-scheme.png)<br/> -![Select Target](./media/ios/xcode-select-target.png) -In Xcode, press Ctrl + R or select the play button to run the project and the app should launch on the specified simulator or device. +++In Xcode, press **Ctrl+R** or select the play button to run the project. The app should launch on the specified simulator or device. In your app, you should see: -![Sample app](./media/ios/picture-to-immersive-reader-ipad-app.png) -Inside the app, take or upload a photo of text by pressing the 'Take Photo' button or 'Choose Photo from Library' button and the Immersive Reader will then launch displaying the text from the photo. +Take or upload a photo of text by pressing the **Take Photo** button or **Choose Photo from Library** button. The Immersive Reader then launches and displays the text from the photo. -![Immersive Reader](./media/ios/picture-to-immersive-reader-ipad.png) -## Next steps +## Next step -* Explore the [Immersive Reader SDK Reference](./reference.md) +> [!div class="nextstepaction"] +> [Explore the Immersive Reader SDK reference](reference.md) |
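The Swift sample above builds a client-credentials form (`grant_type`, `resource`, `client_id`, `client_secret`) to obtain the Immersive Reader token directly from Microsoft Entra ID. If you follow the tutorial's advice to move that exchange into a backend service, the request shape stays the same; the sketch below shows it in Python with the `requests` package, assuming the standard `login.microsoftonline.com` token endpoint and placeholder tenant, client, and secret values taken from your saved configuration.

```python
import requests

# Placeholders from the Microsoft Entra configuration step; keep these server-side only.
TENANT_ID = "<YOUR_TENANT_ID>"
CLIENT_ID = "<YOUR_CLIENT_ID>"
CLIENT_SECRET = "<YOUR_CLIENT_SECRET>"


def get_immersive_reader_token() -> str:
    """Exchange client credentials for an Immersive Reader access token."""
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token"
    form = {
        "grant_type": "client_credentials",
        "resource": "https://cognitiveservices.azure.com/",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }
    response = requests.post(url, data=form, timeout=10)
    response.raise_for_status()
    return response.json()["access_token"]


if __name__ == "__main__":
    # Print only a prefix of the token to avoid leaking it into logs.
    print(get_immersive_reader_token()[:20] + "...")
```

Your mobile app would then call this backend endpoint (protected by your own authentication) instead of holding the client secret on the device.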
ai-services | Conversation Summarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/conversation-summarization.md | -## Conversation summarization types +## Conversation summarization aspects -- Chapter title and narrative (general conversation) are designed to summarize a conversation into chapter titles, and a summarization of the conversation's contents. This summarization type works on conversations with any number of parties. +- Chapter title and narrative (general conversation) are designed to summarize a conversation into chapter titles, and a summarization of the conversation's contents. This summarization aspect works on conversations with any number of parties. - Issue and resolution (call center focused) is designed to summarize text chat logs between customers and customer-service agents. This feature is capable of providing both issues and resolutions present in these logs, which occur between two parties. +- Narrative is designed to generate detailed call notes, meeting notes, or chat summaries of the input conversation. + - Recap is designed to condense lengthy meetings or conversations into a concise one-paragraph summary to provide a quick overview. - Follow-up tasks is designed to summarize action items and tasks that arise during a meeting.+For easier navigation, here are links to the corresponding sections for each aspect: ++|Aspect |Section | +|--|| +|Issue and Resolution |[Issue and Resolution](#get-summaries-from-text-chats)| +|Chapter Title |[Chapter Title](#get-chapter-titles) | +|Narrative |[Narrative](#get-narrative-summarization) | +|Recap and Follow-up |[Recap and follow-up](#get-narrative-summarization) | + ## Features The conversation summarization API uses natural language processing techniques to summarize conversations into shorter summaries per request. Conversation summarization can summarize for issues and resolutions discussed in a two-party conversation or summarize a long conversation into chapters and a short narrative for each chapter. |
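For orientation only, here is a minimal Python sketch of requesting one of these aspects with the `azure-ai-language-conversations` package. The endpoint and key are placeholders, and the task `kind` and `summaryAspects` values shown ("issue", "resolution") are assumptions inferred from the aspect names above; confirm the exact request and response shapes against the sections linked in the table.

```python
import json

from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Placeholder Language resource endpoint and key.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
client = ConversationAnalysisClient(endpoint, AzureKeyCredential("<your-key>"))

poller = client.begin_conversation_analysis(
    task={
        "displayName": "Issue and resolution summary",
        "analysisInput": {
            "conversations": [{
                "id": "1",
                "language": "en",
                "modality": "text",
                "conversationItems": [
                    {"id": "1", "participantId": "Customer", "text": "My smart TV won't connect to Wi-Fi."},
                    {"id": "2", "participantId": "Agent", "text": "Resetting the network settings fixed it."},
                ],
            }]
        },
        "tasks": [{
            "taskName": "Issue and resolution",
            "kind": "ConversationalSummarizationTask",          # assumed task kind
            "parameters": {"summaryAspects": ["issue", "resolution"]},  # assumed aspect names
        }],
    }
)

# Print the raw job result; the summaries appear under the task results.
print(json.dumps(poller.result(), indent=2))
```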
ai-services | Document Summarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/document-summarization.md | Document summarization is designed to shorten content that users consider too lo **Abstractive summarization**: Produces a summary by generating summarized sentences from the document that capture the main idea. -Both of these capabilities are able to summarize around specific items of interest when specified. +**Query-focused summarization**: Allows you to use a query when summarizing. ++Each of these capabilities is able to summarize around specific items of interest when specified. The AI models used by the API are provided by the service; you just have to send content for analysis. +For easier navigation, here are links to the corresponding sections for each aspect: ++|Aspect |Section | +|-|-| +|Extractive |[Extractive Summarization](#try-document-extractive-summarization) | +|Abstractive |[Abstractive Summarization](#try-document-abstractive-summarization)| +|Query-focused|[Query-focused Summarization](#query-based-summarization) | ++ ## Features > [!TIP] |
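As a companion to the sections linked above, here is a minimal Python sketch of extractive document summarization using the `azure-ai-textanalytics` package (version 5.3.0 or later). The endpoint and key are placeholders, and the sample document text is illustrative only.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder Language resource endpoint and key.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "The extractive summarization feature uses natural language processing "
    "to pick the sentences that best represent a document. It returns a rank "
    "score for each extracted sentence so you can order them by relevance."
]

# Ask for at most three of the highest-ranked sentences.
poller = client.begin_extract_summary(documents, max_sentence_count=3)
for result in poller.result():
    if result.is_error:
        print(result.error)
        continue
    # Join the extracted sentences into one short summary string.
    print(" ".join(sentence.text for sentence in result.sentences))
```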
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md | Summarization is one of the features offered by [Azure AI Language](../overview. Though the services are labeled document and conversation summarization, document summarization only accepts plain text blocks, and conversation summarization accepts various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use document summarization for that scenario. -Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md). - # [Document summarization](#tab/document-summarization) This documentation contains the following article types: This documentation contains the following article types: * **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways. -Document summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive. +Document summarization uses natural language processing techniques to generate a summary for documents. There are three supported API approaches to automatic summarization: extractive, abstractive, and query-focused. Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that aren't verbatim extracts of sentences from the original document. These features are designed to shorten content that could be considered too long to read. For more information, *see* [**Use native documents for language processing**](. ## Key features -There are two types of document summarization this API provides: +The API provides the following document summarization aspects: -* **Extractive summarization**: Produces a summary by extracting salient sentences within the document. +* [**Extractive summarization**](how-to/document-summarization.md#try-document-extractive-summarization): Produces a summary by extracting salient sentences within the document. * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content. * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank. * Multiple returned sentences: Determine the maximum number of sentences to be returned. 
For example, if you request a three-sentence summary extractive summarization returns the three highest scored sentences. * Positional information: The start position and length of extracted sentences. -* **Abstractive summarization**: Generates a summary that doesn't use the same words as in the document, but captures the main idea. +* [**Abstractive summarization**](how-to/document-summarization.md#try-document-abstractive-summarization): Generates a summary that doesn't use the same words as in the document, but captures the main idea. * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document can be segmented so multiple groups of summary texts can be returned with their contextual input range. * Contextual input range: The range within the input document that was used to generate the summary text. +* [**Query-focused summarization**](how-to/document-summarization.md#query-based-summarization): Generates a summary based on a query + As an example, consider the following paragraph of text: *"At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we achieve human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."* This documentation contains the following article types: Conversation summarization supports the following features: -* **Issue/resolution summarization**: A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers. -* **Chapter title summarization**: Segments a conversation into chapters based on the topics discussed in the conversation, and gives suggested chapter titles of the input conversation. -* **Recap**: Summarizes a conversation into a brief paragraph. -* **Narrative summarization**: Generates detail call notes, meeting notes or chat summaries of the input conversation. -* **Follow-up tasks**: Gives a list of follow-up tasks discussed in the input conversation. 
+* [**Issue/resolution summarization**](how-to/conversation-summarization.md#get-summaries-from-text-chats): A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers. +* [**Chapter title summarization**](how-to/conversation-summarization.md#get-chapter-titles): Segments a conversation into chapters based on the topics discussed in the conversation, and gives suggested chapter titles of the input conversation. +* [**Recap**](how-to/conversation-summarization.md#get-narrative-summarization): Summarizes a conversation into a brief paragraph. +* [**Narrative summarization**](how-to/conversation-summarization.md#get-narrative-summarization): Generates detailed call notes, meeting notes, or chat summaries of the input conversation. +* [**Follow-up tasks**](how-to/conversation-summarization.md#get-narrative-summarization): Gives a list of follow-up tasks discussed in the input conversation. ## When to use issue and resolution summarization |
ai-services | Api Version Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md | This version contains support for all the latest Azure OpenAI features including - [Fine-tuning](./how-to/fine-tuning.md) `gpt-35-turbo`, `babbage-002`, and `davinci-002` models.[**Added in 2023-10-01-preview**] - [Whisper](./whisper-quickstart.md). [**Added in 2023-09-01-preview**] - [Function calling](./how-to/function-calling.md) [**Added in 2023-07-01-preview**]-- [DALL-E](./dall-e-quickstart.md) [**Added in 2023-06-01-preview**] - [Retrieval augmented generation with the on your data feature](./use-your-data-quickstart.md). [**Added in 2023-06-01-preview**] ## Retiring soon This version contains support for all the latest Azure OpenAI features including On April 2, 2024 the following API preview releases will be retired and will stop accepting API requests: - 2023-03-15-preview-- 2023-06-01-preview - 2023-07-01-preview - 2023-08-01-preview+- 2023-09-01-preview +- 2023-10-01-preview +- 2023-12-01-preview To avoid service disruptions, you must update to use the latest preview version before the retirement date. |
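When you move off a retiring preview release, the only change in most client code is the `api-version` value. For example, with the `openai` Python package the version is set once on the client; the deployment name and environment variable names below are placeholders.

```python
import os
from openai import AzureOpenAI

# Pin a supported API version here; update this value before a preview release retires.
client = AzureOpenAI(
    api_version="2024-02-15-preview",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model="my-gpt-35-turbo-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```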
ai-services | Assistants Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference.md | curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id +## File upload API reference ++Assistants use the [same API for file upload as fine-tuning](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP). When uploading a file, you have to specify an appropriate value for the [purpose parameter](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP#purpose). ++ ## Assistant object | Field | Type | Description | |
ai-services | Assistants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/assistants.md | We provide a walkthrough of the Assistants playground in our [quickstart guide]( |**Run** | Activation of an Assistant to begin running based on the contents of the Thread. The Assistant uses its configuration and the Thread's Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread.| |**Run Step** | A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during its run. Examining Run Steps allows you to understand how the Assistant is getting to its final results. | +## Assistants data access ++Currently, assistants, threads, messages, and files created for Assistants are scoped at the Azure OpenAI resource level. Therefore, anyone with access to the Azure OpenAI resource or its API keys is able to read/write assistants, threads, messages, and files. ++We strongly recommend the following data access controls: ++- Implement authorization. Before performing reads or writes on assistants, threads, messages, and files, ensure that the end-user is authorized to do so. +- Restrict Azure OpenAI resource and API key access. Carefully consider who has access to Azure OpenAI resources where assistants are being used and associated API keys. +- Routinely audit which accounts/individuals have access to the Azure OpenAI resource. API keys and resource level access enable a wide range of operations including reading and modifying messages and files. +- Enable [diagnostic settings](../how-to/monitoring.md#configure-diagnostic-settings) to allow long-term tracking of certain aspects of the Azure OpenAI resource's activity log. + ## See also * Learn more about Assistants and [Code Interpreter](../how-to/code-interpreter.md) * Learn more about Assistants and [function calling](../how-to/assistant-functions.md) * [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants)--- |
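The first recommendation above (application-level authorization) can be as simple as mapping your own end users to the thread IDs they created and checking that mapping before any Assistants call. A minimal sketch, assuming a hypothetical in-memory ownership store and the `openai` Python package; in a real app the mapping would live in your own database and the user ID would come from your authentication layer.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_version="2024-02-15-preview",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

# Hypothetical ownership store: which end user created which thread.
thread_owners = {"thread_abc123": "user-42"}


def list_messages_for_user(user_id: str, thread_id: str):
    """Return thread messages only if the calling user owns the thread."""
    if thread_owners.get(thread_id) != user_id:
        raise PermissionError("User isn't authorized to read this thread.")
    return client.beta.threads.messages.list(thread_id=thread_id)


# An authorized read succeeds; any other user is rejected before the API is called.
messages = list_messages_for_user("user-42", "thread_abc123")
```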
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | When you chat with a model, providing a history of the chat will help the model ## Token usage estimation for Azure OpenAI On Your Data +Azure OpenAI On Your Data is a Retrieval Augmented Generation (RAG) service that leverages both a search service (such as Azure AI Search) and generation (Azure OpenAI models) to let users get answers for their questions based on provided data. +As part of this RAG pipeline, there are three steps at a high level: ++1. Reformulate the user query into a list of search intents. This is done by making a call to the model with a prompt that includes instructions, the user question, and conversation history. Let's call this an *intent prompt*. ++1. For each intent, multiple document chunks are retrieved from the search service. After filtering out irrelevant chunks based on the user-specified threshold of strictness and reranking/aggregating the chunks based on internal logic, the user-specified number of document chunks are chosen. ++1. These document chunks, along with the user question, conversation history, role information, and instructions are sent to the model to generate the final model response. Let's call this the *generation prompt*. ++In total, there are two calls made to the model: ++* For processing the intent: The token estimate for the *intent prompt* includes those for the user question, conversation history and the instructions sent to the model for intent generation. ++* For generating the response: The token estimate for the *generation prompt* includes those for the user question, conversation history, the retrieved list of document chunks, role information and the instructions sent to it for generation. ++The model-generated output tokens (both intents and response) need to be taken into account for total token estimation. Summing up all four columns below gives the average total tokens used for generating a response. ++| Model | Generation prompt token count | Intent prompt token count | Response token count | Intent token count | +|--|--|--|--|--| +| gpt-35-turbo-16k | 4297 | 1366 | 111 | 25 | +| gpt-4-0613 | 3997 | 1385 | 118 | 18 | +| gpt-4-1106-preview | 4538 | 811 | 119 | 27 | +| gpt-35-turbo-1106 | 4854 | 1372 | 110 | 26 | ++The above numbers are based on testing on a data set with: ++* 191 conversations +* 250 questions +* 10 average tokens per question +* 4 conversational turns per conversation on average ++And the following [parameters](#runtime-parameters). ++|Setting |Value | +||| +|Number of retrieved documents | 5 | +|Strictness | 3 | +|Chunk size | 1024 | +|Limit responses to ingested data? | True | ++These estimates will vary based on the values set for the above parameters. For example, if the number of retrieved documents is set to 10 and strictness is set to 1, the token count will go up. If returned responses aren't limited to the ingested data, there are fewer instructions given to the model and the number of tokens will go down. ++The estimates also depend on the nature of the documents and questions being asked. For example, if the questions are open-ended, the responses are likely to be longer. Similarly, a longer system message would contribute to a longer prompt that consumes more tokens, and if the conversation history is long, the prompt will be longer. 
| Model | Max tokens for system message | Max tokens for model response | |--|--|--| When you chat with a model, providing a history of the chat will help the model | GPT-4-0613-8K | 400 | 1500 | | GPT-4-0613-32K | 2000 | 6400 | -The table above shows the total number of tokens available for each model type. It also determines the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens: +The table above shows the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens: -* The meta prompt (MP): if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is 4,036 tokens. Otherwise (for example if `inScope=False`) the maximum is 3,444 tokens. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt and the query rewriting prompts for retrieval. +* The meta prompt: if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is higher. Otherwise (for example if `inScope=False`) the maximum is lower. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt and the query rewriting prompts for retrieval. * User question and history: Variable but capped at 2,000 tokens. * Retrieved documents (chunks): The number of tokens used by the retrieved document chunks depends on multiple factors. The upper bound for this is the number of retrieved document chunks multiplied by the chunk size. It will, however, be truncated based on the available tokens for the specific model being used after counting the rest of the fields. 20% of the available tokens are reserved for the model response. The remaining 80% of available tokens include the meta prompt, the user question and conversation history, and the system message. The remaining token budget is used by the retrieved document chunks. +In order to compute the number of tokens consumed by your input (such as your question, the system message/role information), use the following code sample. + ```python import tiktoken class TokenEstimator(object): token_output = TokenEstimator.estimate_tokens(input_text) ``` + ## Troubleshooting ### Failed ingestion jobs |
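If you want to check your own inputs against the budgets described above, a rough count with `tiktoken` is usually enough. The sketch below assumes the `cl100k_base` encoding used by the GPT-3.5 Turbo and GPT-4 model families and uses illustrative sample strings; treat the result as an estimate rather than an exact billing figure.

```python
import tiktoken

# cl100k_base is the encoding used by the gpt-35-turbo and gpt-4 model families.
encoding = tiktoken.get_encoding("cl100k_base")


def count_tokens(text: str) -> int:
    """Return an approximate token count for a single piece of text."""
    return len(encoding.encode(text))


system_message = "You are a helpful assistant that answers only from the retrieved documents."
user_question = "What is included in my Northwind Health Plus plan that is not in standard?"

print("System message tokens:", count_tokens(system_message))
print("User question tokens:", count_tokens(user_question))
```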
ai-services | Assistant Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant-functions.md | After you submit tool outputs, the **Run** will enter the `queued` state before ## See also +* [Assistants API Reference](../assistants-reference.md) * Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). * [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
ai-services | Code Interpreter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/code-interpreter.md | We recommend using assistants with the latest models to take advantage of the ne |.xml|application/xml or "text/xml"| |.zip|application/zip| +### File upload API reference ++Assistants use the [same API for file upload as fine-tuning](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP). When uploading a file, you have to specify an appropriate value for the [purpose parameter](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP#purpose). + ## Enable Code Interpreter # [Python 1.x](#tab/python) curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2 { "type": "code_interpreter" } ], "model": "gpt-4-1106-preview",- "file_ids": ["file_123abc456"] + "file_ids": ["assistant-123abc456"] }' ``` curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/<YOUR-THREAD-ID> -d '{ "role": "user", "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?",- "file_ids": ["file_123abc456"] + "file_ids": ["assistant-123abc456"] }' ``` Files generated by Code Interpreter can be found in the Assistant message respon "content": [ { "image_file": {- "file_id": "file-1YGVTvNzc2JXajI5JU9F0HMD" + "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD" }, "type": "image_file" }, client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") ) -image_data = client.files.content("file-abc123") +image_data = client.files.content("assistant-abc123") image_data_bytes = image_data.read() with open("./my-image.png", "wb") as file: curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files/<YOUR-FILE-ID>/con ## See also +* [File Upload API reference](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP) +* [Assistants API Reference](../assistants-reference.md) * Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). * [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
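To show the `purpose` parameter mentioned in both Assistants articles in context, here's a small Python sketch that uploads a file for use with Assistants and passes the returned file ID when creating an assistant with Code Interpreter enabled. The file name and deployment name are placeholders, and the API version must be one that supports Assistants.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_version="2024-02-15-preview",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

# Upload a local file; "assistants" is the purpose value for Assistants/Code Interpreter files.
uploaded = client.files.create(
    file=open("data.csv", "rb"),  # placeholder file name
    purpose="assistants",
)

# Reference the returned file ID when creating the assistant.
assistant = client.beta.assistants.create(
    name="Data analyst",
    instructions="Analyze the uploaded CSV file when asked.",
    model="gpt-4-1106-preview",  # placeholder deployment name
    tools=[{"type": "code_interpreter"}],
    file_ids=[uploaded.id],
)
print(assistant.id)
```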
ai-services | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md | -In the following sections, you'll use the Azure CLI to assign roles, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI. +In the following sections, you'll use the Azure CLI to sign in, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI. ## Prerequisites In the following sections, you'll use the Azure CLI to assign roles, and obtain ../../cognitive-services-custom-subdomains.md) - Azure CLI - [Installation Guide](/cli/azure/install-azure-cli)-- The following Python libraries: os, requests, json+- The following Python libraries: os, requests, json, openai, azure-identity ++## Assign yourself to the Cognitive Services User role ++Assign yourself the [Cognitive Services User](role-based-access-control.md#cognitive-services-contributor) role to allow you to use your account to make Azure OpenAI API calls rather than having to use key-based auth. After you make this change it can take up to 5 minutes before the change takes effect. ## Sign into the Azure CLI -To sign-in to the Azure CLI, run the following command and complete the sign-in. You may need to do it again if your session has been idle for too long. +To sign-in to the Azure CLI, run the following command and complete the sign-in. You might need to do it again if your session has been idle for too long. ```azurecli az login ``` -## Assign yourself to the Cognitive Services User role --Assigning yourself to the "Cognitive Services User" role will allow you to use your account for access to the specific Azure AI services resource. --1. Get your user information -- ```azurecli - export user=$(az account show --query "user.name" -o tsv) - ``` +## Chat Completions ++```python +from azure.identity import DefaultAzureCredential, get_bearer_token_provider +from openai import AzureOpenAI ++token_provider = get_bearer_token_provider( + DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default" +) ++client = AzureOpenAI( + api_version="2024-02-15-preview", + azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/", + azure_ad_token_provider=token_provider +) ++response = client.chat.completions.create( + model="gpt-35-turbo-0125", # model = "deployment_name". + messages=[ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"}, + {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."}, + {"role": "user", "content": "Do other Azure AI services support this too?"} + ] +) ++print(response.choices[0].message.content) +``` -2. Assign yourself to ΓÇ£Cognitive Services UserΓÇ¥ role. +## Querying Azure OpenAI with the control plane API - ```azurecli - export resourceId=$(az group show -g $RG --query "id" -o tsv) - az role assignment create --role "Cognitive Services User" --assignee $user --scope $resourceId - ``` +```python +import requests +import json +from azure.identity import DefaultAzureCredential - > [!NOTE] - > Role assignment change will take ~5 mins to become effective. +region = "eastus" +token_credential = DefaultAzureCredential() +subscriptionId = "{YOUR-SUBSCRIPTION-ID}" -3. 
Acquire a Microsoft Entra access token. Access tokens expire in one hour. you'll then need to acquire another one. - ```azurecli - export accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com --query "accessToken" -o tsv) - ``` +token = token_credential.get_token('https://management.azure.com/.default') +headers = {'Authorization': 'Bearer ' + token.token} -4. Make an API call +url = f"https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{region}/models?api-version=2023-05-01" -Use the access token to authorize your API call by setting the `Authorization` header value. +response = requests.get(url, headers=headers) +data = json.loads(response.text) -```bash -curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \ --H "Content-Type: application/json" \--H "Authorization: Bearer $accessToken" \--d '{ "prompt": "Once upon a time" }'+print(json.dumps(data, indent=4)) ``` ## Authorize access to managed identities |
ai-services | How To Migrate To Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-custom-neural-voice.md | -> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021 then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource. -> -> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsable "Deprecated" section. Custom voice (non-neural training) is referred as **Custom**. +> The standard non-neural training tier of custom voice is retired as of February 29, 2024. You could have used a non-neural custom voice with your Speech resource prior to February 29, 2024. Now you can only use custom neural voice with your Speech resources. If you have a non-neural custom voice, you must migrate to custom neural voice. The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users benefit from the latest Text to speech technology, in a responsible way. |
ai-services | Migrate V2 To V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v2-to-v3.md | -> The Speech to text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the Speech to text REST API v3.2. Complete the steps in this article and then see the Speech to text REST API [v3.0 to v3.1](migrate-v3-0-to-v3-1.md) and [v3.1 to v3.2](migrate-v3-1-to-v3-2.md) migration guides for additional requirements. +> The Speech to text REST API v2.0 is retired as of February 29, 2024. Please migrate your applications to the Speech to text REST API v3.2. Complete the steps in this article and then see the Speech to text REST API [v3.0 to v3.1](migrate-v3-0-to-v3-1.md) and [v3.1 to v3.2](migrate-v3-1-to-v3-2.md) migration guides for additional requirements. ## Forward compatibility |
ai-services | Migration Overview Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migration-overview-neural-voice.md | We're retiring two features from [text to speech](index-text-to-speech.yml) capa ## Custom voice (non-neural training) > [!IMPORTANT]-> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021 then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource. -> -> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsable "Deprecated" section. Custom voice (non-neural training) is referred as **Custom**. +> The standard non-neural training tier of custom voice is retired as of February 29, 2024. You could have used a non-neural custom voice with your Speech resource prior to February 29, 2024. Now you can only use custom neural voice with your Speech resources. If you have a non-neural custom voice, you must migrate to custom neural voice. Go to [this article](how-to-migrate-to-custom-neural-voice.md) to learn how to migrate to custom neural voice. |
ai-services | Quickstart Text Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-rest-api.md | Add the following code sample to your `index.js` file. **Make sure you update th params: { 'api-version': '3.0', 'from': 'en',- 'to': ['fr', 'zu'] + 'to': 'fr,zu' }, data: [{ 'text': 'I would really like to drive your car around the block a few times!' |
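The fix above passes multiple target languages as a single comma-separated string because of how axios serializes array parameters. For comparison, here's a small Python sketch of the same call with `requests`, which serializes a list as repeated `to` parameters that the Translator v3 endpoint also accepts; the key and region values are placeholders.

```python
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": ["fr", "zu"]}  # sent as to=fr&to=zu
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",      # placeholder
    "Ocp-Apim-Subscription-Region": "<your-resource-region>",  # placeholder
    "Content-Type": "application/json",
}
body = [{"text": "I would really like to drive your car around the block a few times!"}]

response = requests.post(endpoint, params=params, headers=headers, json=body, timeout=10)
response.raise_for_status()
for translation in response.json()[0]["translations"]:
    print(translation["to"], "->", translation["text"])
```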
ai-studio | Create Manage Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md | To create a compute instance in Azure AI Studio: - **Assign to another user**: You can create a compute instance on behalf of another user. Note that a compute instance can't be shared. It can only be used by a single assigned user. By default, it will be assigned to the creator and you can change this to a different user. - **Assign a managed identity**: You can attach system assigned or user assigned managed identities to grant access to resources. The name of the created system managed identity will be in the format `/workspace-name/computes/compute-instance-name` in your Microsoft Entra ID. - **Enable SSH access**: Enter credentials for an administrator user account that will be created on each compute node. These can be used to SSH to the compute nodes.-Note that disabling SSH prevents SSH access from the public internet. But when a private virtual network is used, users can still SSH from within the virtual network. +Note that disabling SSH prevents SSH access from the public internet. When a private virtual network is used, users can still SSH from within the virtual network. - **Enable virtual network**: - If you're using an Azure Virtual Network, specify the Resource group, Virtual network, and Subnet to create the compute instance inside an Azure Virtual Network. You can also select No public IP to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these network requirements for virtual network setup. - If you're using a managed virtual network, the compute instance is created inside the managed virtual network. You can also select No public IP to prevent the creation of a public IP address. For more information, see managed compute with a managed network. You can start or stop a compute instance from the Azure AI Studio. ## Next steps - [Create and manage prompt flow runtimes](./create-manage-runtime.md)-- [Vulnerability management](../concepts/vulnerability-management.md)+- [Vulnerability management](../concepts/vulnerability-management.md) |
aks | Ai Toolchain Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ai-toolchain-operator.md | + + Title: Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (preview) +description: Learn how to enable the AI toolchain operator add-on on Azure Kubernetes Service (AKS) to simplify OSS AI model management and deployment. ++ Last updated : 02/28/2024+++# Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (preview) ++The AI toolchain operator (KAITO) is a managed add-on for AKS that simplifies the experience of running OSS AI models on your AKS clusters. The AI toolchain operator automatically provisions the necessary GPU nodes and sets up the associated inference server as an endpoint server to your AI models. Using this add-on reduces your onboarding time and enables you to focus on AI model usage and development rather than infrastructure setup. ++This article shows you how to enable the AI toolchain operator add-on and deploy an AI model on AKS. +++## Before you begin ++* This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for AKS](./concepts-clusters-workloads.md). +* For ***all hosted model inference images*** and recommended infrastructure setup, see the [KAITO GitHub repository](https://github.com/Azure/kaito). ++## Prerequisites ++* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + * If you have multiple Azure subscriptions, make sure you select the correct subscription in which the resources will be created and charged using the [az account set][az-account-set] command. ++ > [!NOTE] + > The subscription you use must have GPU VM quota. ++* Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +* The Kubernetes command-line client, kubectl, installed and configured. For more information, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). +* [Install the Azure CLI AKS preview extension](#install-the-azure-cli-preview-extension). +* [Register the AI toolchain operator add-on feature flag](#register-the-ai-toolchain-operator-add-on-feature-flag). ++### Install the Azure CLI preview extension ++1. Install the Azure CLI preview extension using the [az extension add][az-extension-add] command. ++ ```azurecli-interactive + az extension add --name aks-preview + ``` ++2. Update the extension to make sure you have the latest version using the [az extension update][az-extension-update] command. ++ ```azurecli-interactive + az extension update --name aks-preview + ``` ++### Register the AI toolchain operator add-on feature flag ++1. Register the AIToolchainOperatorPreview feature flag using the [az feature register][az-feature-register] command. ++ ```azurecli-interactive + az feature register --namespace "Microsoft.ContainerService" --name "AIToolchainOperatorPreview" + ``` ++ It takes a few minutes for the registration to complete. ++2. Verify the registration using the [az feature show][az-feature-show] command. 
++ ```azurecli-interactive + az feature show --namespace "Microsoft.ContainerService" --name "AIToolchainOperatorPreview" + ``` ++### Export environment variables ++* To simplify the configuration steps in this article, you can define environment variables using the following commands. Make sure to replace the placeholder values with your own. ++ ```azurecli-interactive + export AZURE_SUBSCRIPTION_ID="mySubscriptionID" + export AZURE_RESOURCE_GROUP="myResourceGroup" + export AZURE_LOCATION="myLocation" + export CLUSTER_NAME="myClusterName" + ``` ++## Enable the AI toolchain operator add-on on an AKS cluster ++The following sections describe how to create an AKS cluster with the AI toolchain operator add-on enabled and deploy a default hosted AI model. ++### Create an AKS cluster with the AI toolchain operator add-on enabled ++1. Create an Azure resource group using the [az group create][az-group-create] command. ++ ```azurecli-interactive + az group create --name ${AZURE_RESOURCE_GROUP} --location ${AZURE_LOCATION} + ``` ++2. Create an AKS cluster with the AI toolchain operator add-on enabled using the [az aks create][az-aks-create] command with the `--enable-ai-toolchain-operator` and `--enable-oidc-issuer` flags. ++ ```azurecli-interactive + az aks create --location ${AZURE_LOCATION} \ + --resource-group ${AZURE_RESOURCE_GROUP} \ + --name ${CLUSTER_NAME} \ + --enable-oidc-issuer \ + --enable-ai-toolchain-operator + ``` ++ > [!NOTE] + > AKS creates a managed identity once you enable the AI toolchain operator add-on. The managed identity is used to create GPU node pools in the managed AKS cluster. Proper permissions need to be set for it manually following the steps introduced in the following sections. + > + > AI toolchain operator enablement requires the enablement of OIDC issuer. ++3. On an existing AKS cluster, you can enable the AI toolchain operator add-on using the [az aks update][az-aks-update] command. ++ ```azurecli-interactive + az aks update --name ${CLUSTER_NAME} \ + --resource-group ${AZURE_RESOURCE_GROUP} \ + --enable-oidc-issuer \ + --enable-ai-toolchain-operator + ``` ++## Connect to your cluster ++1. Configure `kubectl` to connect to your cluster using the [az aks get-credentials][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials --resource-group ${AZURE_RESOURCE_GROUP} --name ${CLUSTER_NAME} + ``` ++2. Verify the connection to your cluster using the `kubectl get` command. 
++ ```azurecli-interactive + kubectl get nodes + ``` ++## Export environment variables ++* Export environment variables for the MC resource group, principal ID identity, and KAITO identity using the following commands: ++ ```azurecli-interactive + export MC_RESOURCE_GROUP=$(az aks show --resource-group ${AZURE_RESOURCE_GROUP} \ + --name ${CLUSTER_NAME} \ + --query nodeResourceGroup \ + -o tsv) + export PRINCIPAL_ID=$(az identity show --name "ai-toolchain-operator-${CLUSTER_NAME}" \ + --resource-group "${MC_RESOURCE_GROUP}" \ + --query 'principalId' + -o tsv) + export KAITO_IDENTITY_NAME="ai-toolchain-operator-${CLUSTER_NAME}" + ``` ++## Get the AKS OpenID Connect (OIDC) Issuer ++* Get the AKS OIDC Issuer URL and export it as an environment variable: ++ ```azurecli-interactive + export AKS_OIDC_ISSUER=$(az aks show --resource-group "${AZURE_RESOURCE_GROUP}" \ + --name "${CLUSTER_NAME}" \ + --query "oidcIssuerProfile.issuerUrl" \ + -o tsv) + ``` ++## Create role assignment for the service principal ++* Create a new role assignment for the service principal using the [az role assignment create][az-role-assignment-create] command. ++ ```azurecli-interactive + az role assignment create --role "Contributor" \ + --assignee "${PRINCIPAL_ID}" \ + --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}" + ``` ++## Establish a federated identity credential ++* Create the federated identity credential between the managed identity, AKS OIDC issuer, and subject using the [az identity federated-credential create][az-identity-federated-credential-create] command. ++ ```azurecli-interactive + az identity federated-credential create --name "kaito-federated-identity" \ + --identity-name "${KAITO_IDENTITY_NAME}" \ + -g "${MC_RESOURCE_GROUP}" \ + --issuer "${AKS_OIDC_ISSUER}" \ + --subject system:serviceaccount:"kube-system:kaito-gpu-provisioner" \ + --audience api://AzureADTokenExchange + ``` ++## Verify that your deployment is running ++1. Restart the KAITO GPU provisioner deployment on your pods using the `kubectl rollout restart` command: ++ ```azurecli-interactive + kubectl rollout restart deployment/kaito-gpu-provisioner -n kube-system + ``` ++2. Verify that the deployment is running using the `kubectl get` command: ++ ```azurecli-interactive + kubectl get deployment -n kube-system | grep kaito + ``` ++## Deploy a default hosted AI model ++1. Deploy the Falcon 7B model YAML file from the GitHub repository using the `kubectl apply` command. ++ ```azurecli-interactive + kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/kaito_workspace_falcon_7b.yaml + ``` ++2. Track the live resource changes in your workspace using the `kubectl get` command. ++ ```azurecli-interactive + kubectl get workspace workspace-falcon-7b -w + ``` ++ > [!NOTE] + > As you track the live resource changes in your workspace, note that machine readiness can take up to 10 minutes, and workspace readiness up to 20 minutes. ++3. Check your service and get the service IP address using the `kubectl get svc` command. ++ ```azurecli-interactive + export SERVICE_IP=$(kubectl get svc workspace-falcon-7b -o jsonpath='{.spec.clusterIP}') + ``` ++4. 
Run the Falcon 7B model with a sample input of your choice using the following `curl` command: ++ ```azurecli-interactive + kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -X POST http://$SERVICE_IP/chat -H "accept: application/json" -H "Content-Type: application/json" -d '{"prompt":"YOUR QUESTION HERE"}' + ``` ++## Clean up resources ++If you no longer need these resources, you can delete them to avoid incurring extra Azure charges. ++* Delete the resource group and its associated resources using the [az group delete][az-group-delete] command. ++ ```azurecli-interactive + az group delete --name "${AZURE_RESOURCE_GROUP}" --yes --no-wait + ``` ++## Next steps ++For more inference model options, see the [KAITO GitHub repository](https://github.com/Azure/kaito). ++<!-- LINKS --> +[az-group-create]: /cli/azure/group#az_group_create +[az-group-delete]: /cli/azure/group#az_group_delete +[az-aks-create]: /cli/azure/aks#az_aks_create +[az-aks-update]: /cli/azure/aks#az_aks_update +[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials +[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create +[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az_identity_federated_credential_create +[az-account-set]: /cli/azure/account#az_account_set +[az-extension-add]: /cli/azure/extension#az_extension_add +[az-extension-update]: /cli/azure/extension#az_extension_update +[az-feature-register]: /cli/azure/feature#az_feature_register +[az-feature-show]: /cli/azure/feature#az_feature_show +[az-provider-register]: /cli/azure/provider#az_provider_register |
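If you'd rather exercise the KAITO inference endpoint from code instead of `curl`, the sketch below mirrors the same request in Python. It assumes the `/chat` route and `prompt` field shown in the `curl` example above, and that it runs somewhere that can reach the cluster-internal service IP (for example, from a pod in the cluster or through `kubectl port-forward`); the fallback address is a placeholder.

```python
import os

import requests

# The workspace service IP exported earlier (SERVICE_IP); it's reachable only from inside
# the cluster unless you port-forward the service to a local address.
service_ip = os.environ.get("SERVICE_IP", "127.0.0.1:8080")  # placeholder fallback

payload = {"prompt": "What is Kubernetes?"}
response = requests.post(f"http://{service_ip}/chat", json=payload, timeout=120)
response.raise_for_status()
print(response.json())
```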
aks | Istio About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md | Title: Istio-based service mesh add-on for Azure Kubernetes Service (preview) + Title: Istio-based service mesh add-on for Azure Kubernetes Service description: Istio-based service mesh add-on for Azure Kubernetes Service. Last updated 04/09/2023 + -# Istio-based service mesh add-on for Azure Kubernetes Service (preview) +# Istio-based service mesh add-on for Azure Kubernetes Service [Istio][istio-overview] addresses the challenges developers and operators face with a distributed or microservices architecture. The Istio-based service mesh add-on provides an officially supported and tested integration for Azure Kubernetes Service (AKS). - ## What is a Service Mesh? Modern applications are typically architected as distributed collections of microservices, with each collection of microservices performing some discrete business function. A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term **service mesh** describes both the type of software you use to implement this pattern, and the security or network domain that is created when you use that software. This service mesh add-on uses and builds on top of open-source Istio. The add-on Istio-based service mesh add-on for AKS has the following limitations: * The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about]. * The add-on doesn't work on AKS clusters that have Istio installed on them already outside the add-on installation.-* Managed lifecycle of mesh on how Istio versions are installed and later made available for upgrades. +* The add-on doesn't support adding pods associated with virtual nodes under the mesh. * Istio doesn't support Windows Server containers. * Customization of mesh based on the following custom resources is blocked for now - `EnvoyFilter, ProxyConfig, WorkloadEntry, WorkloadGroup, Telemetry, IstioOperator, WasmPlugin`+* Gateway API for Istio ingress gateway or managing mesh traffic (GAMMA) isn't currently supported with the Istio add-on. ## Next steps |
aks | Istio Deploy Addon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md | Title: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview) -description: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview) + Title: Deploy Istio-based service mesh add-on for Azure Kubernetes Service +description: Deploy Istio-based service mesh add-on for Azure Kubernetes Service Last updated 04/09/2023 + -# Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview) +# Deploy Istio-based service mesh add-on for Azure Kubernetes Service This article shows you how to install the Istio-based service mesh add-on for Azure Kubernetes Service (AKS) cluster. For more information on Istio and the service mesh add-on, see [Istio-based service mesh add-on for Azure Kubernetes Service][istio-about]. - ## Before you begin ### Set environment variables export RESOURCE_GROUP=<resource-group-name> export LOCATION=<location> ``` -### Verify Azure CLI and aks-preview extension versions -The add-on requires: -* Azure CLI version 2.49.0 or later installed. To install or upgrade, see [Install Azure CLI][azure-cli-install]. -* `aks-preview` Azure CLI extension of version 0.5.163 or later installed --You can run `az --version` to verify above versions. --To install the aks-preview extension, run the following command: --```azurecli-interactive -az extension add --name aks-preview -``` -Run the following command to update to the latest version of the extension released: +### Verify Azure CLI version -```azurecli-interactive -az extension update --name aks-preview -``` +The add-on requires Azure CLI version 2.57.0 or later installed. You can run `az --version` to verify version. To install or upgrade, see [Install Azure CLI][azure-cli-install]. ## Install Istio add-on at the time of cluster creation Confirm the `istiod` pod has a status of `Running`. For example: ``` NAME READY STATUS RESTARTS AGE-istiod-asm-1-17-74f7f7c46c-xfdtl 1/1 Running 0 2m +istiod-asm-1-18-74f7f7c46c-xfdtl 1/1 Running 0 2m ``` ## Enable sidecar injection istiod-asm-1-17-74f7f7c46c-xfdtl 1/1 Running 0 2m To automatically install sidecar to any new pods, annotate your namespaces: ```bash-kubectl label namespace default istio.io/rev=asm-1-17 +kubectl label namespace default istio.io/rev=asm-1-18 ``` > [!IMPORTANT]-> The default `istio-injection=enabled` labeling doesn't work. Explicit versioning (`istio.io/rev=asm-1-17`) is required. +> The default `istio-injection=enabled` labeling doesn't work. Explicit versioning (`istio.io/rev=asm-1-18`) is required. For manual injection of sidecar using `istioctl kube-inject`, you need to specify extra parameters for `istioNamespace` (`-i`) and `revision` (`-r`). Example: ```bash-kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r asm-1-17) -n foo +kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r asm-1-18) -n foo ``` ## Deploy sample application kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r as Use `kubectl apply` to deploy the sample application on the cluster: ```bash-kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml +kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/bookinfo/platform/kube/bookinfo.yaml ``` Confirm several deployments and services are created on your cluster. 
For example: To test this sample application against ingress, check out [next-steps](#next-st Use `kubectl delete` to delete the sample application: ```bash-kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml +kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/bookinfo/platform/kube/bookinfo.yaml ``` If you don't intend to enable Istio ingress on your cluster and want to disable the Istio add-on, run the following command: az group delete --name ${RESOURCE_GROUP} --yes --no-wait [uninstall-osm-addon]: open-service-mesh-uninstall-add-on.md [uninstall-istio-oss]: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio -[istio-deploy-ingress]: istio-deploy-ingress.md +[istio-deploy-ingress]: istio-deploy-ingress.md |
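A condensed sketch of the enable-and-verify flow this row documents, assuming the add-on is turned on for an existing cluster with `az aks mesh enable` (the article's own enable commands are truncated in the diff above):

```bash
# Enable the Istio-based service mesh add-on on an existing AKS cluster.
az aks mesh enable --resource-group "$RESOURCE_GROUP" --name "$CLUSTER"

# Confirm the managed istiod control plane is running.
kubectl get pods -n aks-istio-system

# Opt a namespace into sidecar injection with the revision label;
# the plain istio-injection=enabled label doesn't work with the add-on.
kubectl label namespace default istio.io/rev=asm-1-18
```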
aks | Istio Deploy Ingress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md | Title: Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on (preview) -description: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview) + Title: Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on +description: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service -+ Last updated 08/07/2023-+ -# Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on deployment (preview) +# Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on deployment This article shows you how to deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (AKS) cluster. - ## Prerequisites This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster, deploy a sample application and set environment variables. |
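For the ingress article above, a brief sketch of enabling an external ingress gateway through the add-on; treat the ingress namespace name as an assumption:

```bash
# Enable an externally reachable Istio ingress gateway managed by the add-on.
az aks mesh enable-ingress-gateway \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER" \
  --ingress-gateway-type external

# The gateway's LoadBalancer service should appear in the add-on ingress namespace.
kubectl get svc -n aks-istio-ingress
```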
aks | Istio Meshconfig | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-meshconfig.md | Title: Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview) -description: Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview) + Title: Configure Istio-based service mesh add-on for Azure Kubernetes Service +description: Configure Istio-based service mesh add-on for Azure Kubernetes Service Last updated 02/14/2024 + -# Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview) +# Configure Istio-based service mesh add-on for Azure Kubernetes Service Open-source Istio uses [MeshConfig][istio-meshconfig] to define mesh-wide settings for the Istio service mesh. Istio-based service mesh add-on for AKS builds on top of MeshConfig and classifies different properties as supported, allowed, and blocked. This article walks through how to configure Istio-based service mesh add-on for Azure Kubernetes Service and the support policy applicable for such configuration. - ## Prerequisites This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster. |
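The MeshConfig article above applies mesh-wide settings through a shared ConfigMap in the add-on's system namespace. A small sketch for finding and editing it; the ConfigMap name is revision-suffixed, so list before editing rather than assuming a name:

```bash
# List the ConfigMaps the add-on manages for mesh-wide settings.
kubectl get configmap -n aks-istio-system

# Edit the shared ConfigMap for your revision to set supported MeshConfig properties.
kubectl edit configmap <shared-configmap-name> -n aks-istio-system
```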
aks | Istio Plugin Ca | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md | Title: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview) -description: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview) + Title: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service +description: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service Last updated 12/04/2023++ -# Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview) +# Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service -In the Istio-based service mesh addon for Azure Kubernetes Service (preview), by default the Istio certificate authority (CA) generates a self-signed root certificate and key and uses them to sign the workload certificates. To protect the root CA key, you should use a root CA, which runs on a secure machine offline. You can use the root CA to issue intermediate certificates to the Istio CAs that run in each cluster. An Istio CA can sign workload certificates using the administrator-specified certificate and key, and distribute an administrator-specified root certificate to the workloads as the root of trust. This article addresses how to bring your own certificates and keys for Istio CA in the Istio-based service mesh add-on for Azure Kubernetes Service. +In the Istio-based service mesh addon for Azure Kubernetes Service, by default the Istio certificate authority (CA) generates a self-signed root certificate and key and uses them to sign the workload certificates. To protect the root CA key, you should use a root CA, which runs on a secure machine offline. You can use the root CA to issue intermediate certificates to the Istio CAs that run in each cluster. An Istio CA can sign workload certificates using the administrator-specified certificate and key, and distribute an administrator-specified root certificate to the workloads as the root of trust. This article addresses how to bring your own certificates and keys for Istio CA in the Istio-based service mesh add-on for Azure Kubernetes Service. [ ![Diagram that shows root and intermediate CA with Istio.](./media/istio/istio-byo-ca.png) ](./media/istio/istio-byo-ca.png#lightbox) This article addresses how you can configure the Istio certificate authority with a root certificate, signing certificate and key provided as inputs using Azure Key Vault to the Istio-based service mesh add-on. - ## Before you begin -### Verify Azure CLI and aks-preview extension versions --The add-on requires: -* Azure CLI version 2.49.0 or later installed. To install or upgrade, see [Install Azure CLI][install-azure-cli]. -* `aks-preview` Azure CLI extension of version 0.5.163 or later installed --You can run `az --version` to verify above versions. --To install the aks-preview extension, run the following command: --```azurecli-interactive -az extension add --name aks-preview -``` --Run the following command to update to the latest version of the extension released: +### Verify Azure CLI version -```azurecli-interactive -az extension update --name aks-preview -``` +The add-on requires Azure CLI version 2.57.0 or later installed. You can run `az --version` to verify version. To install or upgrade, see [Install Azure CLI][azure-cli-install]. ### Set up Azure Key Vault |
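The plug-in CA article above feeds the intermediate CA material to the add-on through Azure Key Vault. A hedged sketch of staging those inputs; the vault and secret names are illustrative placeholders, not the names the add-on requires:

```bash
# Create a Key Vault to hold the intermediate CA material produced by your offline root CA.
az keyvault create --resource-group "$RESOURCE_GROUP" --name "$AKV_NAME" --location "$LOCATION"

# Import the certificates and signing key as secrets (placeholder names).
az keyvault secret set --vault-name "$AKV_NAME" --name root-cert  --file root-cert.pem
az keyvault secret set --vault-name "$AKV_NAME" --name ca-cert    --file ca-cert.pem
az keyvault secret set --vault-name "$AKV_NAME" --name ca-key     --file ca-key.pem
az keyvault secret set --vault-name "$AKV_NAME" --name cert-chain --file cert-chain.pem
```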
aks | Istio Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md | Title: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview) -description: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview). + Title: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service +description: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service Last updated 05/04/2023-++ -# Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview) +# Upgrade Istio-based service mesh add-on for Azure Kubernetes Service This article addresses upgrade experiences for Istio-based service mesh add-on for Azure Kubernetes Service (AKS). |
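For the upgrade article above, a short sketch of the revision upgrade flow with the `az aks mesh` commands; the target revision is an assumption:

```bash
# See which Istio revisions are available for this cluster.
az aks mesh get-upgrades --resource-group "$RESOURCE_GROUP" --name "$CLUSTER"

# Start a canary upgrade to a newer revision, relabel and restart workloads, then complete it.
az aks mesh upgrade start --resource-group "$RESOURCE_GROUP" --name "$CLUSTER" --revision asm-1-18
az aks mesh upgrade complete --resource-group "$RESOURCE_GROUP" --name "$CLUSTER"
```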
aks | Quick Kubernetes Deploy Azd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-azd.md | Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the AZD CLI. Last updated 02/06/2024-+ #Customer intent: As a developer or cluster operator, I want to deploy an AKS cluster and deploy an application so I can see how to run applications using the managed Kubernetes service in Azure. The Azure Development Template contains all the code needed to create the servic Run `azd auth login` 1. Copy the device code that appears.-2. Hit enter to open in a new tab the auth portal. -3. Enter in your Microsoft Credentials in the new page. -4. Confirm that it's you trying to connect to Azure CLI. If you encounter any issues, skip to the Troubleshooting section. -5. Verify the message "Device code authentication completed. Logged in to Azure." appears in your original terminal. +1. Hit enter to open in a new tab the auth portal. +1. Enter in your Microsoft Credentials in the new page. +1. Confirm that it's you trying to connect to Azure CLI. If you encounter any issues, skip to the Troubleshooting section. +1. Verify the message "Device code authentication completed. Logged in to Azure." appears in your original terminal. -### Troubleshooting: Can't Connect to Localhost --Certain Azure security policies cause conflicts when trying to sign in. As a workaround, you can perform a curl request to the localhost url you were redirected to after you logged in. --The workaround requires the Azure CLI for authentication. If you don't have it or aren't using GitHub Codespaces, install the [Azure CLI][install-azure-cli]. --1. Inside a terminal, run `az login --scope https://graph.microsoft.com/.default` -2. Copy the "localhost" URL from the failed redirect -3. In a new terminal window, type `curl` and paste your url -4. If it works, code for a webpage saying "You have logged into Microsoft Azure!" appears -5. Close the terminal and go back to the old terminal -6. Copy and note down which subscription_id you want to use -7. Paste in the subscription_ID to the command `az account set -n {sub}` - If you have multiple Azure subscriptions, select the appropriate subscription for billing using the [az account set](/cli/azure/account#az-account-set) command. The workaround requires the Azure CLI for authentication. If you don't have it o The step can take longer depending on your internet speed. 1. Create all your resources with the `azd up` command.-2. Select which Azure subscription and region for your AKS Cluster. -3. Wait as azd automatically runs the commands for pre-provision and post-provision steps. -4. At the end, your output shows the newly created deployments and services. +1. Select which Azure subscription and region for your AKS Cluster. +1. Wait as azd automatically runs the commands for pre-provision and post-provision steps. +1. At the end, your output shows the newly created deployments and services. ```output deployment.apps/rabbitmq created The step can take longer depending on your internet speed. When your application is created, a Kubernetes service exposes the application's front end service to the internet. This process can take a few minutes to complete. Once completed, follow these steps verify and test the application by opening up the store-front page. +1. 
Set your namespace as the demo namespace `pets` with the `kubectl config set-context` command. ++ ```console + kubectl config set-context --current --namespace=pets + ``` + 1. View the status of the deployed pods with the [kubectl get pods][kubectl-get] command. Check that all pods are in the `Running` state before proceeding: When your application is created, a Kubernetes service exposes the application's Once on the store page, you can add new items to your cart and check them out. To verify, visit the Azure Service in your portal to view the records of the transactions for your store app. -<!-- Image of Storefront Checkout --> - ## Delete the cluster Once you're finished with the quickstart, remember to clean up all your resources to avoid Azure charges. |
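Putting the quickstart's azd steps together, a minimal end-to-end sketch, assuming the environment was initialized from the store-demo template:

```bash
# Authenticate, then provision and deploy everything in one pass.
azd auth login
azd up

# Inspect the sample workloads in the demo namespace.
kubectl config set-context --current --namespace=pets
kubectl get pods
kubectl get service store-front
```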
aks | Quick Kubernetes Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md | Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the A description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the Azure portal. Previously updated : 01/11/2024 Last updated : 03/01/2024 #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure. This quickstart assumes a basic understanding of Kubernetes concepts. For more i :::image type="content" source="media/quick-kubernetes-deploy-portal/create-node-pool-linux.png" alt-text="Screenshot showing how to create a node pool running Ubuntu Linux." lightbox="media/quick-kubernetes-deploy-portal/create-node-pool-linux.png"::: -1. Leave all settings on the other tabs set to their defaults. +1. Leave all settings on the other tabs set to their defaults, except for the settings on the **Monitoring** tab. By default, the [Azure Monitor features][azure-monitor-features-containers] Container insights, Azure Monitor managed service for Prometheus, and Azure Managed Grafana are enabled. You can save costs by disabling them. 1. Select **Review + create** to run validation on the cluster configuration. After validation completes, select **Create** to create the AKS cluster. To learn more about AKS and walk through a complete code-to-deployment example, [intro-azure-linux]: ../../azure-linux/intro-azure-linux.md [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+[azure-monitor-features-containers]: ../../azure-monitor/containers/container-insights-overview.md |
aks | Tutorial Kubernetes Deploy Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md | Title: Kubernetes on Azure tutorial - Deploy an application to Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using images stored in Azure Container Registry. Previously updated : 11/02/2023- Last updated : 02/20/2023+ #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications. In this tutorial, part four of seven, you deploy a sample application into a Kub ## Before you begin -In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, and created a Kubernetes cluster. To complete this tutorial, you need the pre-created `aks-store-quickstart.yaml` Kubernetes manifest file. This file download was included with the application source code in a previous tutorial. Make sure you cloned the repo and changed directories into the cloned repo. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app]. +In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, and created a Kubernetes cluster. To complete this tutorial, you need the precreated `aks-store-quickstart.yaml` Kubernetes manifest file. This file was downloaded in the application source code from [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app]. ### [Azure CLI](#tab/azure-cli) -This tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. +This tutorial requires Azure CLI version 2.0.53 or later. Check your version with `az --version`. To install or upgrade, see [Install Azure CLI][azure-cli-install]. ### [Azure PowerShell](#tab/azure-powershell) -This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install]. +This tutorial requires Azure PowerShell version 5.9.0 or later. Check your version with `Get-InstalledModule -Name Az`. To install or upgrade, see [Install Azure PowerShell][azure-powershell-install]. ++### [Azure Developer CLI](#tab/azure-azd) ++This tutorial requires Azure Developer CLI (AZD) version 1.5.1 or later. Check your version with `azd version`. To install or upgrade, see [Install Azure Developer CLI][azure-azd-install]. In these tutorials, your Azure Container Registry (ACR) instance stores the cont az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table ``` -2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`: +2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`. ```azurecli-interactive vi aks-store-quickstart.yaml In these tutorials, your Azure Container Registry (ACR) instance stores the cont (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer ``` -2. 
Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`: +2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`. ```azurepowershell-interactive vi aks-store-quickstart.yaml In these tutorials, your Azure Container Registry (ACR) instance stores the cont 4. Save and close the file. In `vi`, use `:wq`. ++### [Azure Developer CLI](#tab/azure-azd) ++AZD doesn't require a container registry step since it's in the template. + -## Deploy the application +## Run the application ++### [Azure CLI](#tab/azure-cli) ++1. Deploy the application using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects. ++ ```console + kubectl apply -f aks-store-quickstart.yaml + ``` ++ The following example output shows the resources successfully created in the AKS cluster: ++ ```output + deployment.apps/rabbitmq created + service/rabbitmq created + deployment.apps/order-service created + service/order-service created + deployment.apps/product-service created + service/product-service created + deployment.apps/store-front created + service/store-front created + ``` ++2. Check the deployment is successful by viewing the pods with `kubectl` ++ ```console + kubectl get pods + ``` ++### [Azure PowerShell](#tab/azure-powershell) -* Deploy the application using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects. +1. Deploy the application using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects. ```console kubectl apply -f aks-store-quickstart.yaml In these tutorials, your Azure Container Registry (ACR) instance stores the cont service/store-front created ``` +2. Check the deployment is successful by viewing the pods with `kubectl` ++ ```console + kubectl get pods + ``` ++### [Azure Developer CLI](#tab/azure-azd) ++Deployment in AZD in broken down into multiple stages represented by hooks. Run `azd up` as an all-in-one command. ++When you first run azd up, you're prompted to select which Subscription and Region to host your Azure resources. ++You can update these variables for `AZURE_LOCATION` and `AZURE_SUBSCRIPTION_ID` from inside the `.azure/<your-env-name>/.env` file. +++ ## Test the application When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. +### Command Line + 1. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument. ```console kubectl get service store-front --watch ``` - Initially, the `EXTERNAL-IP` for the *store-front* service shows as *pending*. + Initially, the `EXTERNAL-IP` for the *store-front* service shows as *pending*: ```output store-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s When the application runs, a Kubernetes service exposes the application front en 3. View the application in action by opening a web browser to the external IP address of your service. + :::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png"::: + If the application doesn't load, it might be an authorization problem with your image registry. 
To view the status of your containers, use the `kubectl get pods` command. If you can't pull the container images, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](cluster-container-registry-integration.md). +### Azure portal ++Navigate to your Azure portal to find your deployment information. ++1. Open your [Resource Group][azure-rg] on the Azure portal +1. Navigate to the Kubernetes service for your cluster +1. Select `Services and Ingress` under `Kubernetes Resources` +1. Copy the External IP shown in the column for store-front +1. Paste the IP into your browser and visit your store page ++ :::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png"::: + ## Next steps In this tutorial, you deployed a sample Azure application to a Kubernetes cluster in AKS. You learned how to: > [!div class="checklist"]-> +> > * Update a Kubernetes manifest file. > * Run an application in Kubernetes. > * Test the application. In the next tutorial, you learn how to use PaaS services for stateful workloads > [Use PaaS services for stateful workloads in AKS][aks-tutorial-paas] <!-- LINKS - external -->+[azure-rg]:https://ms.portal.azure.com/#browse/resourcegroups [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get <!-- LINKS - internal --> [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md [az-acr-list]: /cli/azure/acr+[azure-azd-install]: /azure/developer/azure-developer-cli/install-azd [azure-cli-install]: /cli/azure/install-azure-cli [azure-powershell-install]: /powershell/azure/install-az-ps [get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry |
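As an alternative to watching the service until an external IP appears, the lookup can be scripted with a jsonpath query; the deployment and service names come from the sample output recorded above:

```bash
# Wait for the front-end rollout, then read its public IP without --watch.
kubectl rollout status deployment/store-front
kubectl get service store-front -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```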
aks | Tutorial Kubernetes Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md | Title: Kubernetes on Azure tutorial - Deploy an Azure Kubernetes Service (AKS) cluster -description: In this Azure Kubernetes Service (AKS) tutorial, you create an AKS cluster and use kubectl to connect to the Kubernetes main node. + Title: Kubernetes on Azure tutorial - Create an Azure Kubernetes Service (AKS) cluster +description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to create an AKS cluster and use kubectl to connect to the Kubernetes main node. Previously updated : 10/23/2023- Last updated : 02/14/2024+ #Customer intent: As a developer or IT pro, I want to learn how to create an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications. -# Tutorial - Deploy an Azure Kubernetes Service (AKS) cluster +# Tutorial - Create an Azure Kubernetes Service (AKS) cluster Kubernetes provides a distributed platform for containerized applications. With Azure Kubernetes Service (AKS), you can quickly create a production ready Kubernetes cluster. In this tutorial, part three of seven, you deploy a Kubernetes cluster in AKS. Y ## Before you begin -In previous tutorials, you created a container image and uploaded it to an ACR instance. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app]. +In previous tutorials, you created a container image and uploaded it to an ACR instance. Start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app] to follow along. -* If you're using Azure CLI, this tutorial requires that you're running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. -* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install]. +* If you're using Azure CLI, this tutorial requires that you're running the Azure CLI version 2.0.53 or later. Check your version with `az --version`. To install or upgrade, see [Install Azure CLI][azure-cli-install]. +* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Check your version with `Get-InstalledModule -Name Az`. To install or upgrade, see [Install Azure PowerShell][azure-powershell-install]. +* If you're using Azure Developer CLI, this tutorial requires that you're running the Azure Developer CLI version 1.5.1 or later. Check your version with `azd version`. To install or upgrade, see [Install Azure Developer CLI][azure-azd-install]. To learn more about AKS and Kubernetes RBAC, see [Control access to cluster reso ### [Azure CLI](#tab/azure-cli) -This tutorial requires Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. +This tutorial requires Azure CLI version 2.0.53 or later. Check your version with `az --version`. To install or upgrade, see [Install Azure CLI][azure-cli-install]. ### [Azure PowerShell](#tab/azure-powershell) -This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. 
If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install]. +This tutorial requires Azure PowerShell version 5.9.0 or later. Check your version with `Get-InstalledModule -Name Az`. To install or upgrade, see [Install Azure PowerShell][azure-powershell-install]. ---## Create an AKS cluster --AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8s-rbac], which allows you to define access to resources based on roles assigned to users. Permissions are combined when users are assigned multiple roles. Permissions can be scoped to either a single namespace or across the whole cluster. For more information, see [Control access to cluster resources using Kubernetes RBAC and Azure Active Directory identities in AKS][aks-k8s-rbac]. --For information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions]. --> [!NOTE] -> To ensure your cluster operates reliably, you should run at least two nodes. --### [Azure CLI](#tab/azure-cli) --To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription. --* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. -- ```azurecli-interactive - az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --node-count 2 \ - --generate-ssh-keys \ - --attach-acr <acrName> - ``` -- > [!NOTE] - > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter. --### [Azure PowerShell](#tab/azure-powershell) +### [Azure Developer CLI](#tab/azure-azd) -To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription. --* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. -- ```azurepowershell-interactive - New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName> - ``` -- > [!NOTE] - > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter. +This tutorial requires Azure Developer CLI version 1.5.1 or later. Check your version with `azd version`. 
To install or upgrade, see [Install Azure Developer CLI][azure-azd-install]. -To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management. --After a few minutes, the deployment completes and returns JSON-formatted information about the AKS deployment. - ## Install the Kubernetes CLI You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes cluster. If you use the Azure Cloud Shell, `kubectl` is already installed. If you're running the commands locally, you can use the Azure CLI or Azure PowerShell to install `kubectl`. You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes Install-AzAksCliTool ``` +### [Azure Developer CLI](#tab/azure-azd) ++AZD Environments in a codespace automatically download all dependencies found in `./devcontainer/devcontainer.json`. The Kubernetes CLI is in the file, along with any ACR images. ++* To install `kubectl` locally, use the [`az aks install-cli`][az aks install-cli] command. ++ ```azurecli + az aks install-cli + ``` + ## Connect to cluster using kubectl You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes kubectl get nodes ``` - The following example output shows a list of the cluster nodes: + The following example output shows a list of the cluster nodes. ```output NAME STATUS ROLES AGE VERSION You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes kubectl get nodes ``` - The following example output shows a list of the cluster nodes: + The following example output shows a list of the cluster nodes. ```output NAME STATUS ROLES AGE VERSION You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6 ``` +### [Azure Developer CLI](#tab/azure-azd) ++Sign in to your Azure Account through AZD configures your credentials. ++1. Authenticate using AZD. ++ ```azurecli-interactive + azd auth login + ``` ++2. Follow the directions for your auth method. ++3. Verify the connection to your cluster using the [`kubectl get nodes`][kubectl-get] command. ++ ```azurecli-interactive + kubectl get nodes + ``` ++ The following example output shows a list of the cluster nodes. ++ ```output + NAME STATUS ROLES AGE VERSION + aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6 + aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6 + ``` +++++## Create an AKS cluster ++AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8s-rbac], which allows you to define access to resources based on roles assigned to users. Permissions are combined when users are assigned multiple roles. Permissions can be scoped to either a single namespace or across the whole cluster. For more information, see [Control access to cluster resources using Kubernetes RBAC and Microsoft Entra ID in AKS][aks-k8s-rbac]. ++For information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions]. 
++> [!NOTE] +> To ensure your cluster operates reliably, you should run at least two nodes. ++### [Azure CLI](#tab/azure-cli) ++To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription. ++* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. ++ ```azurecli-interactive + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --node-count 2 \ + --generate-ssh-keys \ + --attach-acr <acrName> + ``` ++ > [!NOTE] + > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter. ++To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management. ++### [Azure PowerShell](#tab/azure-powershell) ++To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription. ++* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. ++ ```azurepowershell-interactive + New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName> + ``` ++ > [!NOTE] + > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter. ++To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management. 
++### [Azure Developer CLI](#tab/azure-azd) ++AZD packages the deployment of clusters with the application itself using `azd up`. This command is covered in the next tutorial. + ## Next steps In the next tutorial, you learn how to deploy an application to your cluster. [az aks create]: /cli/azure/aks#az_aks_create [az aks install-cli]: /cli/azure/aks#az_aks_install_cli [az aks get-credentials]: /cli/azure/aks#az_aks_get_credentials+[azure-azd-install]: /azure/developer/azure-developer-cli/install-azd [azure-cli-install]: /cli/azure/install-azure-cli [container-registry-integration]: ./cluster-container-registry-integration.md [quotas-skus-regions]: quotas-skus-regions.md |
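After the cluster is created as shown above, the kubeconfig still needs to be merged before `kubectl get nodes` succeeds. A short sketch using the tutorial's example names:

```bash
# Merge credentials for the new cluster into ~/.kube/config, then verify the nodes.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```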
aks | Tutorial Kubernetes Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md | Title: Kubernetes on Azure tutorial - Prepare an application for Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS. Previously updated : 10/23/2023- Last updated : 02/15/2023+ #Customer intent: As a developer, I want to learn how to build a container-based application so that I can deploy the app to Azure Kubernetes Service. To complete this tutorial, you need a local Docker development environment runni > [!NOTE] > Azure Cloud Shell doesn't include the Docker components required to complete every step in these tutorials. Therefore, we recommend using a full Docker development environment. ++ ## Get application code The [sample application][sample-application] used in this tutorial is a basic store front app including the following Kubernetes deployments and The [sample application][sample-application] used in this tutorial is a basic st * **Order service**: Places orders. * **Rabbit MQ**: Message queue for an order queue. ++### [Git](#tab/azure-cli) + 1. Use [git][] to clone the sample application to your development environment. ```console The [sample application][sample-application] used in this tutorial is a basic st cd aks-store-demo ``` ++### [Azure Developer CLI](#tab/azure-azd) ++1. If you are using AZD locally, create an empty directory named `aks-store-demo` to host the azd template files. ++ ```azurecli + mkdir aks-store-demo + ``` ++1. Change into the new directory to load all the files from the azd template. ++ ```azurecli + cd aks-store-demo + ``` ++1. Run the Azure Developer CLI ([azd][]) init command which clones the sample application into your empty directory. ++ Here, the `--template` flag is specified to point to the aks-store-demo application. ++ ```azurecli + azd init --template aks-store-demo + ``` +++ ## Review Docker Compose file -The sample application you create in this tutorial uses the [*docker-compose-quickstart* YAML file](https://github.com/Azure-Samples/aks-store-demo/blob/main/docker-compose-quickstart.yml) in the [repository](https://github.com/Azure-Samples/aks-store-demo/tree/main) you cloned in the previous step. +The sample application you create in this tutorial uses the [*docker-compose-quickstart* YAML file](https://github.com/Azure-Samples/aks-store-demo/blob/main/docker-compose-quickstart.yml) from the [repository](https://github.com/Azure-Samples/aks-store-demo/tree/main) you cloned. ```yaml version: "3.7" networks: driver: bridge ``` -## Create container images and run application +++## Create container images and run application ++### [Docker](#tab/azure-cli) You can use [Docker Compose][docker-compose] to automate building container images and the deployment of multi-container applications. -1. Create the container image, download the Redis image, and start the application using the `docker compose` command. +### Docker ++1. Create the container image, download the Redis image, and start the application using the `docker compose` command: ```console docker compose -f docker-compose-quickstart.yml up -d Since you validated the application's functionality, you can stop and remove the docker compose down ``` ++### [Azure Developer CLI](#tab/azure-azd) ++When you use AZD, there are no manual container image dependencies. 
AZD handles the provisioning, deployment, and cleans up of your applications and clusters with the `azd up` and `azd down` commands, similar to Docker. ++You can customize the preparation steps to use either Terraform or Bicep before deploying the cluster. ++1. This is selected within your `azure.yaml` infra section. By default, this project uses terraform. ++ ```yml + infra: + provider: terraform + path: infra/terraform ++2. To select Bicep change the provider and path from terraform to bicep ++ ```yml + infra: + provider: bicep + path: infra/bicep + ``` ++ ## Next steps +### [Azure CLI](#tab/azure-cli) + In this tutorial, you created a sample application, created container images for the application, and then tested the application. You learned how to: > [!div class="checklist"] In the next tutorial, you learn how to store container images in an ACR. > [!div class="nextstepaction"] > [Push images to Azure Container Registry][aks-tutorial-prepare-acr] +### [Azure Developer CLI](#tab/azure-azd) ++In this tutorial, you cloned a sample application using AZD. You learned how to: ++> [!div class="checklist"] +> * Clone a sample azd template from GitHub. +> * View where container images are used from the sample application source. ++In the next tutorial, you learn how to create a cluster using the azd template you cloned. ++> [!div class="nextstepaction"] +> [Create an AKS Cluster][aks-tutorial-deploy-cluster] +++ <!-- LINKS - external --> [docker-compose]: https://docs.docker.com/compose/ [docker-for-linux]: https://docs.docker.com/engine/installation/#supported-platforms In the next tutorial, you learn how to store container images in an ACR. [sample-application]: https://github.com/Azure-Samples/aks-store-demo <!-- LINKS - internal -->-[aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md +[aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md +[aks-tutorial-deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md +[azd]: /azure/developer/azure-developer-cli/install-azd |
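The azd flow above leans on `azd up` as an all-in-one command. If you want the stages separately (for example, after switching the infra provider between Terraform and Bicep), azd also exposes them individually; a hedged sketch:

```bash
# Clone the template, provision infrastructure, then deploy the services.
azd init --template aks-store-demo
azd provision
azd deploy

# Tear everything down when you're finished.
azd down
```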
api-center | Enable Api Center Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-center-portal.md | If the user is assigned the role, there might be a problem with the registration az provider register --namespace Microsoft.ApiCenter ``` +### Unable to sign in to portal ++If users who have been assigned the **Azure API Center Data Reader** role can't complete the sign-in flow after selecting **Sign in** in the API Center portal, there might be a problem with the configuration of the Microsoft Entra ID identity provider. ++In the Microsoft Entra app registration, review and, if needed, update the **Redirect URI** settings: ++* Platform: **Single-page application (SPA)** +* URI: `https://<api-center-name>.portal.<region>.azure-apicenter.ms`. This value must be the URI shown for the Microsoft Entra ID provider for your API Center portal. + ### Unable to select Azure API Center permissions in Microsoft Entra app registration If you're unable to request API permissions to Azure API Center in your Microsoft Entra app registration for the API Center portal, check that you are searching for **Azure API Center** (or application ID `c3ca1a77-7a87-4dba-b8f8-eea115ae4573`). |
api-management | Api Management Howto Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md | In [multi-regional deployments](api-management-howto-deploy-multi-region.md), ea If your API Management service is inside a virtual network, it will have two types of IP addresses: public and private. -* Public IP addresses are used for internal communication on port `3443` - for managing configuration (for example, through Azure Resource Manager). In the external VNet configuration, they are also used for runtime API traffic. +* Public IP addresses are used for internal communication on port `3443` - for managing configuration (for example, through Azure Resource Manager). In the *external* VNet configuration, they are also used for runtime API traffic. In the *internal* VNet configuration, public IP addresses are only used for Azure internal management operations and don't expose your instance to the internet. * Private virtual IP (VIP) addresses, available **only** in the [internal VNet mode](api-management-using-with-internal-vnet.md), are used to connect from within the network to API Management endpoints - gateways, the developer portal, and the management plane for direct API access. You can use them for setting up DNS records within the network. In the Developer, Basic, Standard, and Premium tiers of API Management, the publ * The service subscription is disabled or warned (for example, for nonpayment) and then reinstated. [Learn more about subscription states](/azure/cost-management-billing/manage/subscription-states) * (Developer and Premium tiers) Azure Virtual Network is added to or removed from the service. * (Developer and Premium tiers) API Management service is switched between external and internal VNet deployment mode.-* (Developer and Premium tiers) API Management service is moved to a different subnet, or [migrated](migrate-stv1-to-stv2.md) from the `stv1` to the `stv2` compute platform.. +* (Developer and Premium tiers) API Management service is moved to a different subnet, or [migrated](migrate-stv1-to-stv2.md) from the `stv1` to the `stv2` compute platform. * (Premium tier) [Availability zones](../reliability/migrate-api-mgt.md) are enabled, added, or removed. * (Premium tier) In [multi-regional deployments](api-management-howto-deploy-multi-region.md), the regional IP address changes if a region is vacated and then reinstated. |
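For the API Management row above, a hedged sketch of reading an instance's public and private VIPs from the CLI; the output property names are assumptions based on the resource's ARM properties and may differ in casing:

```bash
# Show the public and (internal VNet mode only) private IP addresses of the instance.
az apim show --resource-group "$RESOURCE_GROUP" --name "$APIM_NAME" \
  --query "{public: publicIpAddresses, private: privateIpAddresses}"
```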
api-management | Developer Portal Extend Custom Functionality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md | |
app-service | Configure Basic Auth Disable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-basic-auth-disable.md | Title: Disable basic authentication for deployment description: Learn how to secure App Service deployment by disabling basic authentication. keywords: azure app service, security, deployment, FTP, MsDeploy Previously updated : 01/26/2024 Last updated : 02/29/2024 App Service provides basic authentication for FTP and WebDeploy clients to conne ## Disable basic authentication +Two different controls for basic authentication are available. Specifically: ++- For [FTP deployment](deploy-ftp.md), basic authentication is controlled by the `basicPublishingCredentialsPolicies/ftp` flag (**FTP Basic Auth Publishing Credentials** option in the portal). +- For other deployment methods that use basic authentication, such as Visual Studio, local Git, and GitHub, basic authentication is controlled by the `basicPublishingCredentialsPolicies/scm` flag (**SCM Basic Auth Publishing Credentials** option in the portal). + ### [Azure portal](#tab/portal) -1. In the [Azure portal], search for and select **App Services**, and then select your app. +1. In the [Azure portal](https://portal.azure.com), search for and select **App Services**, and then select your app. -1. In the app's left menu, select **Configuration**. +1. In the app's left menu, select **Configuration** > **General settings**. -1. For **Basic Auth Publishing Credentials**, select **Off**, then select **Save**. +1. For **SCM Basic Auth Publishing Credentials** or **FTP Basic Auth Publishing Credentials**, select **Off**, then select **Save**. :::image type="content" source="media/configure-basic-auth-disable/basic-auth-disable.png" alt-text="A screenshot showing how to disable basic authentication for Azure App Service in the Azure portal."::: To confirm that Git access is blocked, try [local Git deployment](deploy-local-g ## Deployment without basic authentication -When you disable basic authentication, deployment methods that depend on basic authentication stop working. The following table shows how various deployment methods behave when basic authentication is disabled, and if there's any fallback mechanism. For more information, see [Authentication types by deployment methods in Azure App Service](deploy-authentication-types.md). +When you disable basic authentication, deployment methods that depend on basic authentication stop working. ++The following table shows how various deployment methods behave when basic authentication is disabled, and if there's any fallback mechanism. For more information, see [Authentication types by deployment methods in Azure App Service](deploy-authentication-types.md). | Deployment method | When basic authentication is disabled | |-|-| |
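The two controls described above can also be set from the CLI using the generic `az resource update` pattern against the `basicPublishingCredentialsPolicies` child resources; app and group names are placeholders:

```bash
# Disable basic auth for SCM deployments (Visual Studio, local Git, Kudu/WebDeploy).
az resource update --resource-group <group-name> --name scm --namespace Microsoft.Web \
  --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> \
  --set properties.allow=false

# Disable basic auth for FTP/FTPS deployments.
az resource update --resource-group <group-name> --name ftp --namespace Microsoft.Web \
  --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> \
  --set properties.allow=false
```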
app-service | Deploy Continuous Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-continuous-deployment.md | Title: Configure continuous deployment description: Learn how to enable CI/CD to Azure App Service from GitHub, Bitbucket, Azure Repos, or other repos. Select the build pipeline that fits your needs. ms.assetid: 6adb5c84-6cf3-424e-a336-c554f23b4000 Previously updated : 01/26/2024 Last updated : 02/29/2024 You can customize the GitHub Actions build provider in these ways: # [App Service Build Service](#tab/appservice) > [!NOTE]-> App Service Build Service requires [basic authentication to be enabled](configure-basic-auth-disable.md) for the webhook to work. For more information, see [Deployment without basic authentication](configure-basic-auth-disable.md#deployment-without-basic-authentication). +> App Service Build Service requires [SCM basic authentication to be enabled](configure-basic-auth-disable.md) for the webhook to work. For more information, see [Deployment without basic authentication](configure-basic-auth-disable.md#deployment-without-basic-authentication). App Service Build Service is the deployment and build engine native to App Service, otherwise known as Kudu. When this option is selected, App Service adds a webhook into the repository you authorized. Any code push to the repository triggers the webhook, and App Service pulls the changes into its repository and performs any deployment tasks. For more information, see [Deploying from GitHub (Kudu)](https://github.com/projectkudu/kudu/wiki/Deploying-from-GitHub). |
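For the continuous-deployment row above, a minimal sketch of wiring a repository to the App Service (Kudu) build service from the CLI; this path needs SCM basic auth enabled, and the repository URL is a placeholder:

```bash
# Configure continuous deployment from a Git repository using the App Service build service.
az webapp deployment source config \
  --resource-group <group-name> --name <app-name> \
  --repo-url https://github.com/<owner>/<repo> --branch main
```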
app-service | Deploy Ftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ftp.md | description: Learn how to deploy your app to Azure App Service using FTP or FTPS ms.assetid: ae78b410-1bc0-4d72-8fc4-ac69801247ae Previously updated : 01/26/2024 Last updated : 02/29/2024 or API app to [Azure App Service](./overview.md). The FTP/S endpoint for your app is already active. No configuration is necessary to enable FTP/S deployment. > [!NOTE]-> When [basic authentication is disabled](configure-basic-auth-disable.md), FTP/S deployment doesn't work, and you can't view or configure FTP credentials in the app's Deployment Center. +> When [FTP basic authentication is disabled](configure-basic-auth-disable.md), FTP/S deployment doesn't work, and you can't view or configure FTP credentials in the app's Deployment Center. ## Get deployment credentials |
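To go with the FTP row above, a short sketch for pulling the FTPS endpoint and credentials from the publishing profile (only available while FTP basic auth is enabled); the JMESPath filter assumes the usual profile field names:

```bash
# List publishing profiles and keep only the FTP entry.
az webapp deployment list-publishing-profiles \
  --resource-group <group-name> --name <app-name> \
  --query "[?publishMethod=='FTP'].{url:publishUrl, user:userName}"
```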
app-service | Deploy Local Git | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md | Title: Deploy from local Git repo description: Learn how to enable local Git deployment to Azure App Service. One of the simplest ways to deploy code from your local machine. ms.assetid: ac50a623-c4b8-4dfd-96b2-a09420770063 Previously updated : 01/26/2024 Last updated : 02/29/2024 -> When [basic authentication is disabled](configure-basic-auth-disable.md), Local Git deployment doesn't work, and you can't configure Local Git deployment in the app's Deployment Center. +> When [SCM basic authentication is disabled](configure-basic-auth-disable.md), Local Git deployment doesn't work, and you can't configure Local Git deployment in the app's Deployment Center. ## Prerequisites |
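For the local Git row above, a compact sketch of enabling the Local Git endpoint and pushing to it (again, requires SCM basic auth); names and the returned remote URL are placeholders:

```bash
# Configure the app for Local Git deployment and print the remote URL.
az webapp deployment source config-local-git \
  --resource-group <group-name> --name <app-name>

# Add the returned URL as a remote and push your branch.
git remote add azure <deployment-local-git-url>
git push azure main
```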
app-service | How To Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md | zone_pivot_groups: app-service-cli-portal # Use the in-place migration feature to migrate App Service Environment v1 and v2 to App Service Environment v3 > [!NOTE]-> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side by side migration feature, see [Migrate to App Service Environment v3 by using the side by side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). +> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side-by-side migration feature, see [Migrate to App Service Environment v3 by using the side-by-side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). > You can automatically migrate App Service Environment v1 and v2 to [App Service Environment v3](overview.md) by using the in-place migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the in-place migration feature](migrate.md). |
app-service | How To Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md | Title: Use the side by side migration feature to migrate your App Service Environment v2 to App Service Environment v3 -description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 by using the side by side migration feature. + Title: Use the side-by-side migration feature to migrate your App Service Environment v2 to App Service Environment v3 +description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 by using the side-by-side migration feature. Last updated 2/21/2024 # zone_pivot_groups: app-service-cli-portal -# Use the side by side migration feature to migrate App Service Environment v2 to App Service Environment v3 (Preview) +# Use the side-by-side migration feature to migrate App Service Environment v2 to App Service Environment v3 (Preview) > [!NOTE]-> The migration feature described in this article is used for side by side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**. +> The migration feature described in this article is used for side-by-side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**. > > If you're looking for information on the in-place migration feature, see [Migrate to App Service Environment v3 by using the in-place migration feature](migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). > -You can automatically migrate App Service Environment v2 to [App Service Environment v3](overview.md) by using the side by side migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the side by side migration feature](side-by-side-migrate.md). +You can automatically migrate App Service Environment v2 to [App Service Environment v3](overview.md) by using the side-by-side migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the side-by-side migration feature](side-by-side-migrate.md). > [!IMPORTANT] > We recommend that you use this feature for development environments before migrating any production environments, to avoid unexpected problems. Please provide any feedback related to this article or the feature by using the buttons at the bottom of the page. Follow the steps described here in order and as written, because you're making A For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use [Azure Cloud Shell](https://shell.azure.com/). +> [!IMPORTANT] +> During the migration, the Azure portal might show incorrect information about your App Service Environment and your apps. Don't go to the Migration experience in the Azure portal since the side-by-side migration feature isn't available there. We recommend that you use the Azure CLI to check the status of your migration. 
If you have any questions about the status of your migration or your apps, contact support. +> + ## 1. Select the subnet for your new App Service Environment v3 Select a subnet for your new App Service Environment v3 that meets the [subnet requirements for App Service Environment v3](./networking.md#subnet-requirements). Note the name of the subnet you select. This subnet must be different from the subnet your existing App Service Environment is in. ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --quer ## 3. Validate migration is supported -The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](side-by-side-migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration using the side by side migration feature](side-by-side-migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the side by side migration feature, see the [manual migration options](migration-alternatives.md). +The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](side-by-side-migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration using the side-by-side migration feature](side-by-side-migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the side-by-side migration feature, see the [manual migration options](migration-alternatives.md). ```azurecli az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=Validation&api-version=2022-03-01" |
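# Hedged sketch, not part of the original article: example values for the variables the commands above assume.
# ASE_NAME and ASE_RG are placeholders for your existing App Service Environment v2 and its resource group.
ASE_NAME="my-asev2"
ASE_RG="my-asev2-resource-group"

# Resolve the resource ID that the migration REST calls reference as ${ASE_ID}.
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv)
echo $ASE_ID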
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | Title: Migrate to App Service Environment v3 by using the in-place migration fea description: Overview of the in-place migration feature for migration to App Service Environment v3. Previously updated : 02/22/2024 Last updated : 03/1/2024 # Migration to App Service Environment v3 using the in-place migration feature > [!NOTE]-> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side by side migration feature, see [Migrate to App Service Environment v3 by using the side by side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). +> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side-by-side migration feature, see [Migrate to App Service Environment v3 by using the side-by-side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). > App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue. If your App Service Environment doesn't pass the validation checks or you try to |Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the in-place migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the in-place migration feature to be available in your region. | |Migration cannot be called on this ASE, please contact support for help migrating. |Support needs to be engaged for migrating this App Service Environment. This issue is potentially due to custom settings used by this environment. |Open a support case to engage support to resolve your issue. 
|-|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. | +|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature. |Remove the IP SSL from all of your apps in the App Service Environment to enable the migration feature. | |Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). | |Migration to ASEv3 is not allowed for this ASE. |You can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. | Migration requires a three to six hour service window for App Service Environmen - The existing App Service Environment is shut down and replaced by the new App Service Environment v3. - All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 tier. - All of the apps that are on your App Service Environment are temporarily down. **You should expect about one hour of downtime during this period**.- - If you can't support downtime, see the [side by side migration feature](side-by-side-migrate.md) or the [migration-alternatives](migration-alternatives.md#migrate-manually). + - If you can't support downtime, see the [side-by-side migration feature](side-by-side-migrate.md) or the [migration-alternatives](migration-alternatives.md#migrate-manually). - The public addresses that are used by the App Service Environment change to the IPs generated during the IP generation step. As in the IP generation step, you can't scale, modify your App Service Environment, or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment are running on the new App Service Environment v3. |
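To complement the process description above, here's a minimal, hedged sketch (the environment and resource group names are placeholders, not from the article) for confirming the version of an App Service Environment from the CLI after migration completes; the `kind` property should report the v3 value (for example, `ASEV3`).

```azurecli
# Placeholder names - replace with your App Service Environment and its resource group.
az appservice ase show \
  --name my-ase \
  --resource-group my-ase-rg \
  --query "{name:name, kind:kind}" \
  --output table
```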
app-service | Migration Alternatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md | You can [deploy ARM templates](../deploy-complex-application-predictably.md) by ## Migrate manually -The [in-place migration feature](migrate.md) automates the migration to App Service Environment v3 and transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If your apps can't have any downtime, we recommend that you use the [side by side migration feature](side-by-side-migrate.md), which is a zero-downtime migration option since the new environment is created in a different subnet. If you also choose not to use the side by side migration feature, you can use one of the manual options to re-create your apps in App Service Environment v3. +The [in-place migration feature](migrate.md) automates the migration to App Service Environment v3 and transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If your apps can't have any downtime, we recommend that you use the [side-by-side migration feature](side-by-side-migrate.md), which is a zero-downtime migration option since the new environment is created in a different subnet. If you also choose not to use the side-by-side migration feature, you can use one of the manual options to re-create your apps in App Service Environment v3. You can distribute traffic between your old and new environments by using [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an internal load balancer (ILB) App Service Environment, [create an Azure Application Gateway instance](integrate-with-application-gateway.md) with an extra back-end pool to distribute traffic between your environments. For information about ILB App Service Environments and internet-facing App Service Environments, see [Application Gateway integration](../overview-app-gateway-integration.md). |
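As a companion to the manual options above, the following is a hedged sketch (all names, the virtual network, and the subnet are placeholders; confirm current parameters with `az appservice ase create --help`) of creating the new App Service Environment v3 into which you re-create your apps.

```azurecli
# Placeholder values - adjust the names, virtual network, and subnet for your environment.
az appservice ase create \
  --name my-new-asev3 \
  --resource-group my-asev3-rg \
  --vnet-name my-vnet \
  --subnet my-asev3-subnet \
  --kind asev3 \
  --virtual-ip-type Internal
```

An internal (ILB) virtual IP is shown here; use `External` if your existing environment is internet facing.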
app-service | Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md | Title: Migrate to App Service Environment v3 by using the side by side migration feature -description: Overview of the side by side migration feature for migration to App Service Environment v3. + Title: Migrate to App Service Environment v3 by using the side-by-side migration feature +description: Overview of the side-by-side migration feature for migration to App Service Environment v3. Previously updated : 2/22/2024 Last updated : 3/1/2024 -# Migration to App Service Environment v3 using the side by side migration feature (Preview) +# Migration to App Service Environment v3 using the side-by-side migration feature (Preview) > [!NOTE]-> The migration feature described in this article is used for side by side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**. +> The migration feature described in this article is used for side-by-side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**. > > If you're looking for information on the in-place migration feature, see [Migrate to App Service Environment v3 by using the in-place migration feature](migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). > App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue. -The side by side migration feature automates your migration to App Service Environment v3. The side by side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md). +The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. 
Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md). > [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page. The side by side migration feature automates your migration to App Service Envir ## Supported scenarios -At this time, the side by side migration feature supports migrations to App Service Environment v3 in the following regions: +At this time, the side-by-side migration feature doesn't support migrations to App Service Environment v3 in the following regions: ### Azure Public -- East Asia-- North Europe-- West Central US-- West US 2+- UAE Central -The following App Service Environment configurations can be migrated using the side by side migration feature. The table gives the App Service Environment v3 configuration when using the side by side migration feature based on your existing App Service Environment. +### Azure Government ++- US DoD Central +- US DoD East +- US Gov Arizona +- US Gov Texas +- US Gov Virginia ++### Microsoft Azure operated by 21Vianet ++- China East 2 +- China North 2 ++The following App Service Environment configurations can be migrated using the side-by-side migration feature. The table gives the App Service Environment v3 configuration when using the side-by-side migration feature based on your existing App Service Environment. |Configuration |App Service Environment v3 Configuration | ||--| App Service Environment v3 can be deployed as [zone redundant](../../availabilit If you want your new App Service Environment v3 to use a custom domain suffix and you aren't using one currently, custom domain suffix can be configured during the migration set-up or at any time once migration is complete. For more information, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). If your existing environment has a custom domain suffix and you no longer want to use it, don't configure a custom domain suffix during the migration set-up. -## Side by side migration feature limitations +## Side-by-side migration feature limitations -The following are limitations when using the side by side migration feature: +The following are limitations when using the side-by-side migration feature: - Your new App Service Environment v3 is in a different subnet but the same virtual network as your existing environment. - You can't change the region your App Service Environment is located in. - ELB App Service Environment canΓÇÖt be migrated to ILB App Service Environment v3 and vice versa.+- The side-by-side migration feature is only available using the CLI or via REST API. The feature isn't available in the Azure portal. 
App Service Environment v3 doesn't support the following features that you might be using with your current App Service Environment v2. - Configuring an IP-based TLS/SSL binding with your apps. - App Service Environment v3 doesn't fall back to Azure DNS if your configured custom DNS servers in the virtual network aren't able to resolve a given name. If this behavior is required, ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers. -The side by side migration feature doesn't support the following scenarios. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories. +The side-by-side migration feature doesn't support the following scenarios. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories. - App Service Environment v1 - You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment. The side by side migration feature doesn't support the following scenarios. See - ELB App Service Environment v2 with IP SSL addresses - [Zone pinned](zone-redundancy.md) App Service Environment v2 -The App Service platform reviews your App Service Environment to confirm side by side migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the side by side migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates. +The App Service platform reviews your App Service Environment to confirm side-by-side migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the side-by-side migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates. > [!NOTE] > App Service Environment v3 doesn't support IP SSL. If you use IP SSL, you must remove all IP SSL bindings before migrating to App Service Environment v3. The migration feature will support your environment once all IP SSL bindings are removed. If your App Service Environment doesn't pass the validation checks or you try to |Error message |Description |Recommendation | |||-|-|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic virtual networks can't migrate using the side by side migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | -|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the side by side migration feature to be available in your region. | +|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic virtual networks can't migrate using the side-by-side migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | +|ASEv3 Migration is not yet ready. 
|The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the side-by-side migration feature to be available in your region. | |Cannot enable zone redundancy for this ASE. |The region the App Service Environment is in doesn't support zone redundancy. |If you need to enable zone redundancy, use one of the manual migration options to migrate to a [region that supports zone redundancy](overview.md#regions). | |Migrate cannot be called on this custom DNS suffix ASE at this time. |Custom domain suffix migration is blocked. |Open a support case to engage support to resolve your issue. | |Zone redundant ASE migration cannot be called at this time. |Zone redundant App Service Environment migration is blocked. |Open a support case to engage support to resolve your issue. |-|Migrate cannot be called on ASEv2 that is zone-pinned. |App Service Environment v2 that's zone pinned can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. | +|Migrate cannot be called on ASEv2 that is zone-pinned. |App Service Environment v2 that's zone pinned can't be migrated using the side-by-side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. | |Existing revert migration operation ongoing, please try again later. |A previous migration attempt is being reverted. |Wait until the revert that's in progress completes before attempting to start migration again. | |Properties.VirtualNetwork.Id should contain the subnet resource ID. |The error appears if you attempt to migrate without providing a new subnet for the placement of your App Service Environment v3. |Ensure you follow the guidance and complete the step to identify the subnet you'll use for your App Service Environment v3. | |Unable to move to `<requested phase>` from the current phase `<previous phase>` of No Downtime Migration. |This error appears if you attempt to do a migration step in the incorrect order. |Ensure you follow the migration steps in order. | |Failed to start revert operation on ASE in hybrid state, please try again later. |This error appears if you try to revert the migration but something goes wrong. This error doesn't affect either your old or your new environment. |Open a support case to engage support to resolve your issue. |-|This ASE cannot be migrated without downtime. |This error appears if you try to use the side by side migration feature on an App Service Environment v1. |The side by side migration feature doesn't support App Service Environment v1. Migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md). | +|This ASE cannot be migrated without downtime. |This error appears if you try to use the side-by-side migration feature on an App Service Environment v1. |The side-by-side migration feature doesn't support App Service Environment v1. Migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md). | |Migrate is not available for this subscription. 
|Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.| |Zone redundant migration cannot be called since the IP addresses created during pre-migrate are not zone redundant. |This error appears if you attempt a zone redundant migration but didn't create zone redundant IPs during the IP generation step. |Open a support case to engage support if you need to enable zone redundancy. Otherwise, you can migrate without enabling zone redundancy. |-|Migrate cannot be called if IP SSL is enabled on any of the sites. |App Service Environments that have sites with IP SSL enabled can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, you can disable the IP SSL on all sites in the App Service Environment and attempt migration again. | +|Migrate cannot be called if IP SSL is enabled on any of the sites. |App Service Environments that have sites with IP SSL enabled can't be migrated using the side-by-side migration feature. |Remove the IP SSL from all of your apps in the App Service Environment to enable the migration feature. | |Cannot migrate within the same subnet. |The error appears if you specify the same subnet that your current environment is in for placement of your App Service Environment v3. |You must specify a different subnet for your App Service Environment v3. If you need to use the same subnet, migrate using the [in-place migration feature](migrate.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. | |Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. | If your App Service Environment doesn't pass the validation checks or you try to |Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade is started. Your App Service Environment won't be impacted, but you won't be able to scale or make changes to your App Service Environment while the upgrade is in progress. You won't be able to migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. | |Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-side-by-side-migrate.md). 
| -## Overview of the migration process using the side by side migration feature +## Overview of the migration process using the side-by-side migration feature -Side by side migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what happens during these steps and how your environment and apps are impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-side-by-side-migrate.md). +Side-by-side migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what happens during these steps and how your environment and apps are impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-side-by-side-migrate.md). ### Select and prepare the subnet for your new App Service Environment v3 There's no application downtime during the migration, but as in the IP generatio > Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. > -Side by side migration requires a three to six hour service window for App Service Environment v2 to v3 migrations. During migration, scaling and environment configurations are blocked and the following events occur: +Side-by-side migration requires a three to six hour service window for App Service Environment v2 to v3 migrations. During migration, scaling and environment configurations are blocked and the following events occur: - The new App Service Environment v3 is created in the subnet you selected. - Your new App Service plans are created in the new App Service Environment v3 with the corresponding Isolated v2 tier. The final step is to redirect traffic to your new App Service Environment v3 and Once you're ready to redirect traffic, you can complete the final step of the migration. This step updates internal DNS records to point to the load balancer IP address of your new App Service Environment v3 and the frontends that were created during the migration. Changes are effective immediately. This step also shuts down your old App Service Environment and deletes it. Your new App Service Environment v3 is now your production environment. > [!IMPORTANT]-> During the preview, in some cases there may be up to 20 minutes of downtime when you complete the final step of the migration. This downtime is due to the DNS change. The downtime is expected to be removed once the feature is generally available. If you have a requirement for zero downtime, you should wait until the side by side migration feature is generally available. During preview, however, you can still use the side by side migration feature to migrate your dev environments to App Service Environment v3 to learn about the migration process and see how it impacts your workloads. +> During the preview, in some cases there may be up to 20 minutes of downtime when you complete the final step of the migration. This downtime is due to the DNS change. The downtime is expected to be removed once the feature is generally available. If you have a requirement for zero downtime, you should wait until the side-by-side migration feature is generally available. 
During preview, however, you can still use the side-by-side migration feature to migrate your dev environments to App Service Environment v3 to learn about the migration process and see how it impacts your workloads. > If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, you can revert all changes and return to your old App Service Environment v2. The revert process takes 3 to 6 hours to complete. There's no downtime associated with this process. Once the revert process completes, your old App Service Environment is back online and your new App Service Environment v3 is deleted. You can then attempt the migration again once you resolve any issues. The App Service plan SKUs available for App Service Environment v3 run on the Is ## Frequently asked questions - **What if migrating my App Service Environment is not currently supported?** - You can't migrate using the side by side migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). + You can't migrate using the side-by-side migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). - **How do I choose which migration option is right for me?** Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case.-- **How do I know if I should use the side by side migration feature?** - The side by side migration feature is best for customers who want to migrate to App Service Environment v3 but can't support application downtime. Since a new subnet is used for your new environment, there are networking considerations to be aware of, including new IPs. If you can support downtime, see the [in-place migration feature](migrate.md), which results in minimal configuration changes, or the [manual migration options](migration-alternatives.md). The in-place migration feature creates your App Service Environment v3 in the same subnet as your existing environment and uses the same networking infrastructure. +- **How do I know if I should use the side-by-side migration feature?** + The side-by-side migration feature is best for customers who want to migrate to App Service Environment v3 but can't support application downtime. Since a new subnet is used for your new environment, there are networking considerations to be aware of, including new IPs. If you can support downtime, see the [in-place migration feature](migrate.md), which results in minimal configuration changes, or the [manual migration options](migration-alternatives.md). The in-place migration feature creates your App Service Environment v3 in the same subnet as your existing environment and uses the same networking infrastructure. - **Will I experience downtime during the migration?** - No, there's no downtime during the side by side migration process. Your apps continue to run on your existing App Service Environment until you complete the final step of the migration where DNS changes are effective immediately. Once you complete the final step, your old App Service Environment is shut down and deleted. Your new App Service Environment v3 is now your production environment. + No, there's no downtime during the side-by-side migration process. 
Your apps continue to run on your existing App Service Environment until you complete the final step of the migration where DNS changes are effective immediately. Once you complete the final step, your old App Service Environment is shut down and deleted. Your new App Service Environment v3 is now your production environment. - **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?** No, all of your apps running on the old environment are automatically migrated to the new environment and run like before. No user input is needed. - **What if my App Service Environment has a custom domain suffix?** - The side by side migration feature supports this [migration scenario](#supported-scenarios). + The side-by-side migration feature supports this [migration scenario](#supported-scenarios). - **What if my App Service Environment is zone pinned?** - The side by side migration feature doesn't support this [migration scenario](#supported-scenarios) at this time. If you have a zone pinned App Service Environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). + The side-by-side migration feature doesn't support this [migration scenario](#supported-scenarios) at this time. If you have a zone pinned App Service Environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). - **What if my App Service Environment has IP SSL addresses?** - IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the side by side migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration. + IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the side-by-side migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration. - **What properties of my App Service Environment will change?** - You're on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. Both your inbound and outbound IPs change when using the side by side migration feature. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md). + You're on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. Both your inbound and outbound IPs change when using the side-by-side migration feature. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md). 
- **What happens if migration fails or there is an unexpected issue during the migration?** - If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. With the side by side migration feature, you can revert all changes if there's any issues. + If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. With the side-by-side migration feature, you can revert all changes if there are any issues. 
- **What happens to my old App Service Environment?** - If you decide to migrate an App Service Environment using the side by side migration feature, your old environment is used up until the final step in the migration process. Once you complete the final step, the old environment and all of the apps hosted on it get shutdown and deleted. Your old environment is no longer accessible. A rollback to the old environment at this point isn't possible. + If you decide to migrate an App Service Environment using the side-by-side migration feature, your old environment is used up until the final step in the migration process. Once you complete the final step, the old environment and all of the apps hosted on it are shut down and deleted. Your old environment is no longer accessible. A rollback to the old environment at this point isn't possible. 
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you don't migrate to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain. ## Next steps > [!div class="nextstepaction"]-> [Migrate your App Service Environment to App Service Environment v3 using the side by side migration feature](how-to-side-by-side-migrate.md) +> [Migrate your App Service Environment to App Service Environment v3 using the side-by-side migration feature](how-to-side-by-side-migrate.md) > [!div class="nextstepaction"] > [App Service Environment v3 Networking](networking.md) |
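To make the IP SSL guidance above more concrete, here's a hedged sketch (the app name, resource group, and certificate thumbprint are placeholders) for finding IP-based SSL bindings and rebinding a certificate as SNI before you migrate.

```azurecli
# Placeholder names - replace with your app and resource group.
APP_NAME="my-ase-app"
RG_NAME="my-ase-rg"

# List hostname SSL states; an sslState of IpBasedEnabled indicates an IP SSL binding.
az webapp show --name $APP_NAME --resource-group $RG_NAME \
  --query "hostNameSslStates[].{host:name, sslState:sslState}" --output table

# Rebind the certificate as SNI instead of IP-based SSL (the thumbprint is a placeholder).
az webapp config ssl bind --name $APP_NAME --resource-group $RG_NAME \
  --certificate-thumbprint <certificate-thumbprint> --ssl-type SNI
```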
app-service | Upgrade To Asev3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md | This page is your one-stop shop for guidance and resources to help you upgrade s |Step|Action|Resources| |-|||-|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using one of the automated migration features. Decide whether an in-place or side by side migration is right for your use case.<br><br>- [Migration path decision tree](#migration-path-decision-tree)<br>- [Automated upgrade using the in-place migration feature](migrate.md)<br>- [Automated upgrade using the side by side migration feature](side-by-side-migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)| -|**2**|**Migrate**|Based on results of your review, either upgrade using one of the automated migration features or follow the manual steps.<br><br>- [Use the in-place automated migration feature](how-to-migrate.md)<br>- [Use the side by side automated migration feature](how-to-side-by-side-migrate.md)<br>- [Migrate manually](migration-alternatives.md)| -|**3**|**Testing and troubleshooting**|Upgrading using one of the automated migration features requires a 3-6 hour service window. If you use the side by side migration feature, you have the opportunity to [test and validate your App Service Environment v3](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration) before completing the upgrade. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).| +|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using one of the automated migration features. Decide whether an in-place or side-by-side migration is right for your use case.<br><br>- [Migration path decision tree](#migration-path-decision-tree)<br>- [Automated upgrade using the in-place migration feature](migrate.md)<br>- [Automated upgrade using the side-by-side migration feature](side-by-side-migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)| +|**2**|**Migrate**|Based on results of your review, either upgrade using one of the automated migration features or follow the manual steps.<br><br>- [Use the in-place automated migration feature](how-to-migrate.md)<br>- [Use the side-by-side automated migration feature](how-to-side-by-side-migrate.md)<br>- [Migrate manually](migration-alternatives.md)| +|**3**|**Testing and troubleshooting**|Upgrading using one of the automated migration features requires a 3-6 hour service window. If you use the side-by-side migration feature, you have the opportunity to [test and validate your App Service Environment v3](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration) before completing the upgrade. Support teams are monitoring upgrades to ensure success. 
If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).| |**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Explore reserved instance pricing, savings plans, and check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [How reservation discounts apply to Isolated v2 instances](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)| |**5**|**Learn more**|On-demand: [Learn Live webinar with Azure FastTrack Architects](https://www.youtube.com/watch?v=lI9TK_v-dkg&ab_channel=MicrosoftDeveloper).<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)| App Service Environment v3 is the latest version of App Service Environment. It' There are two automated migration features available to help you upgrade to App Service Environment v3. - **In-place migration feature** migrates your App Service Environment to App Service Environment v3 in-place. In-place means that your App Service Environment v3 replaces your existing App Service Environment in the same subnet. There's application downtime during the migration because a subnet can only have a single App Service Environment at a given time. For more information about this feature, see [Automated upgrade using the in-place migration feature](migrate.md).-- **Side by side migration feature** creates a new App Service Environment v3 in a different subnet that you choose and recreates all of your App Service plans and apps in that new environment. Your existing environment is up and running during the entire migration. Once the new App Service Environment v3 is ready, you can redirect traffic to the new environment and complete the migration. There's no application downtime during the migration. For more information about this feature, see [Automated upgrade using the side by side migration feature](side-by-side-migrate.md).+- **Side-by-side migration feature** creates a new App Service Environment v3 in a different subnet that you choose and recreates all of your App Service plans and apps in that new environment. Your existing environment is up and running during the entire migration. Once the new App Service Environment v3 is ready, you can redirect traffic to the new environment and complete the migration. There's no application downtime during the migration. For more information about this feature, see [Automated upgrade using the side-by-side migration feature](side-by-side-migrate.md). - **Manual migration options** are available if you can't use the automated migration features. For more information about these options, see [Migration alternatives](migration-alternatives.md). 
### Migration path decision tree Got 2 minutes? We'd love to hear about your upgrade experience in this quick, an > [Migration to App Service Environment v3 using the in-place migration feature](migrate.md) > [!div class="nextstepaction"]-> [Migration to App Service Environment v3 using the side by side migration feature](side-by-side-migrate.md) +> [Migration to App Service Environment v3 using the side-by-side migration feature](side-by-side-migrate.md) > [!div class="nextstepaction"] > [Manually migrate to App Service Environment v3](migration-alternatives.md) |
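To illustrate the plan optimization step called out in the table above, here's a minimal, hedged sketch (the plan and resource group names are placeholders) of resizing an App Service plan to a different Isolated v2 SKU after the upgrade.

```azurecli
# Placeholder names - replace with your App Service plan and resource group.
az appservice plan update \
  --name my-isolated-v2-plan \
  --resource-group my-asev3-rg \
  --sku I1V2
```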
application-gateway | Tcp Tls Proxy Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tcp-tls-proxy-overview.md | -## Application Gateway Layer 4 capabilities +## TLS/TCP proxy capabilities on Application Gateway As a reverse proxy service, the Layer 4 operations of Application Gateway work similarly to its Layer 7 proxy operations. A client establishes a TCP connection with Application Gateway, and Application Gateway itself initiates a new TCP connection to a backend server from the backend pool. The following figure shows the typical operation. 
Process flow: 1. A client initiates a TCP or TLS connection with the application gateway using its frontend listener's IP address and port number. This establishes the frontend connection. Once the connection is established, the client sends a request using the required application layer protocol. 2. The application gateway establishes a new connection with one of the backend targets from the associated backend pool (forming the backend connection) and sends the client request to that backend server. 3. The response from the backend server is sent back to the client by the application gateway. -4. The same frontend TCP connection is used for subsequent requests from the client unless the TCP idle timeout closes that connection. +4. The same frontend TCP connection is used for subsequent requests from the client unless the TCP idle timeout closes that connection. ++### Comparing Azure Load Balancer with Azure Application Gateway +| Product | Type | +| - | - | +| [**Azure Load Balancer**](../load-balancer/load-balancer-overview.md) | A pass-through load balancer where a client directly establishes a connection with a backend server selected by the Load Balancer's distribution algorithm. | +| **Azure Application Gateway** | A terminating load balancer where a client directly establishes a connection with Application Gateway and a separate connection is initiated with a backend server selected by Application Gateway's distribution algorithm. | + ## Features Process flow: ## Next steps -[Configure Azure Application Gateway TCP/TLS proxy](how-to-tcp-tls-proxy.md) +- [Configure Azure Application Gateway TCP/TLS proxy](how-to-tcp-tls-proxy.md) +- Visit [frequently asked questions (FAQs)](application-gateway-faq.yml#configurationtls-tcp-proxy) |
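As a quick, hedged illustration of the frontend connection described in the process flow (the hostname and port are placeholders, not values from the article), standard client tooling can be used to verify a layer 4 listener.

```bash
# Placeholder frontend values - substitute your gateway's frontend DNS name or IP address and listener port.
GATEWAY_HOST="appgw-frontend.example.com"
GATEWAY_PORT=8443

# For a TLS listener: confirm that the TLS handshake is terminated by the gateway.
openssl s_client -connect ${GATEWAY_HOST}:${GATEWAY_PORT} -brief </dev/null

# For a plain TCP listener: a simple connectivity check.
nc -vz ${GATEWAY_HOST} ${GATEWAY_PORT}
```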
azure-app-configuration | Reference Kubernetes Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md | -The following reference outlines the properties supported by the Azure App Configuration Kubernetes Provider. +The following reference outlines the properties supported by the Azure App Configuration Kubernetes Provider `v1.2.0`. See [release notes](https://github.com/Azure/AppConfiguration/blob/main/releaseNotes/KubernetesProvider.md) for more information on the change. ## Properties An `AzureAppConfigurationProvider` resource has the following top-level child pr |auth|The authentication method to access Azure App Configuration.|false|object| |configuration|The settings for querying and processing key-values in Azure App Configuration.|false|object| |secret|The settings for Key Vault references in Azure App Configuration.|conditional|object|+|featureFlag|The settings for feature flags in Azure App Configuration.|false|object| The `spec.target` property has the following child property. The `spec.target` property has the following child property. |configMapName|The name of the ConfigMap to be created.|true|string| |configMapData|The setting that specifies how the retrieved data should be populated in the generated ConfigMap.|false|object| -If the `spec.target.configMapData` property is not set, the generated ConfigMap will be populated with the list of key-values retrieved from Azure App Configuration, which allows the ConfigMap to be consumed as environment variables. Update this property if you wish to consume the ConfigMap as a mounted file. This property has the following child properties. +If the `spec.target.configMapData` property is not set, the generated ConfigMap is populated with the list of key-values retrieved from Azure App Configuration, which allows the ConfigMap to be consumed as environment variables. Update this property if you wish to consume the ConfigMap as a mounted file. This property has the following child properties. |Name|Description|Required|Type| ||||| If the `spec.target.configMapData` property is not set, the generated ConfigMap |key|The key name of the retrieved data when the `type` is set to `json`, `yaml` or `properties`. Set it to the file name if the ConfigMap is set up to be consumed as a mounted file.|conditional|string| |separator|The delimiter that is used to output the ConfigMap data in hierarchical format when the type is set to `json` or `yaml`. The separator is empty by default and the generated ConfigMap contains key-values in their original form. Configure this setting only if the configuration file loader used in your application can't load key-values without converting them to the hierarchical format.|optional|string| -The `spec.auth` property isn't required if the connection string of your App Configuration store is provided by setting the `spec.connectionStringReference` property. Otherwise, one of the identities, service principal, workload identity, or managed identity, will be used for authentication. The `spec.auth` has the following child properties. Only one of them should be specified. If none of them are set, the system-assigned managed identity of the virtual machine scale set will be used. +The `spec.auth` property isn't required if the connection string of your App Configuration store is provided by setting the `spec.connectionStringReference` property. 
Otherwise, one of the identities, service principal, workload identity, or managed identity, is used for authentication. The `spec.auth` has the following child properties. Only one of them should be specified. If none of them are set, the system-assigned managed identity of the virtual machine scale set is used. |Name|Description|Required|Type| ||||| The `spec.configuration` has the following child properties. ||||| |selectors|The list of selectors for key-value filtering.|false|object array| |trimKeyPrefixes|The list of key prefixes to be trimmed.|false|string array|-|refresh|The settings for refreshing data from Azure App Configuration. If the property is absent, data from Azure App Configuration will not be refreshed.|false|object| +|refresh|The settings for refreshing key-values from Azure App Configuration. If the property is absent, key-values from Azure App Configuration are not refreshed.|false|object| -If the `spec.configuration.selectors` property isn't set, all key-values with no label will be downloaded. It contains an array of *selector* objects, which have the following child properties. +If the `spec.configuration.selectors` property isn't set, all key-values with no label are downloaded. It contains an array of *selector* objects, which have the following child properties. |Name|Description|Required|Type| ||||| The `spec.configuration.refresh` property has the following child properties. |Name|Description|Required|Type| |||||-|enabled|The setting that determines whether data from Azure App Configuration is automatically refreshed. If the property is absent, a default value of `false` will be used.|false|bool| -|monitoring|The key-values monitored for change detection, aka sentinel keys. The data from Azure App Configuration will be refreshed only if at least one of the monitored key-values is changed.|true|object| -|interval|The interval at which the data will be refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds will be used.|false|duration string| +|enabled|The setting that determines whether key-values from Azure App Configuration is automatically refreshed. If the property is absent, a default value of `false` is used.|false|bool| +|monitoring|The key-values monitored for change detection, aka sentinel keys. The key-values from Azure App Configuration are refreshed only if at least one of the monitored key-values is changed.|true|object| +|interval|The interval at which the key-values are refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds is used.|false|duration string| The `spec.configuration.refresh.monitoring.keyValues` is an array of objects, which have the following child properties. The `spec.secret` property has the following child properties. It is required if ||||| |target|The destination of the retrieved secrets in Kubernetes.|true|object| |auth|The authentication method to access Key Vaults.|false|object|-|refresh|The settings for refreshing data from Key Vaults. If the property is absent, data from Key Vaults will not be refreshed unless the corresponding Key Vault references are reloaded.|false|object| +|refresh|The settings for refreshing data from Key Vaults. If the property is absent, data from Key Vaults is not refreshed unless the corresponding Key Vault references are reloaded.|false|object| The `spec.secret.target` property has the following child property. 
The authentication method of each *Key Vault* can be specified with the followin |workloadIdentity|The settings of the workload identity used for authentication with a Key Vault. It has the same child properties as `spec.auth.workloadIdentity`.|false|object| |managedIdentityClientId|The client ID of a user-assigned managed identity of virtual machine scale set used for authentication with a Key Vault.|false|string| -The `spec.secret.refresh` property has the following child property. +The `spec.secret.refresh` property has the following child properties. |Name|Description|Required|Type| |||||-|enabled|The setting that determines whether data from Key Vaults is automatically refreshed. If the property is absent, a default value of `false` will be used.|false|bool| -|interval|The interval at which the data will be refreshed from Key Vault. It must be greater than or equal to 1 minute. The Key Vault refresh is independent of the App Configuration refresh configured via `spec.configuration.refresh`.|true|duration string| +|enabled|The setting that determines whether data from Key Vaults is automatically refreshed. If the property is absent, a default value of `false` is used.|false|bool| +|interval|The interval at which the data is refreshed from Key Vault. It must be greater than or equal to 1 minute. The Key Vault refresh is independent of the App Configuration refresh configured via `spec.configuration.refresh`.|true|duration string| ++The `spec.featureFlag` property has the following child properties. It is required if any feature flags are expected to be downloaded. ++|Name|Description|Required|Type| +||||| +|selectors|The list of selectors for feature flag filtering.|false|object array| +|refresh|The settings for refreshing feature flags from Azure App Configuration. If the property is absent, feature flags from Azure App Configuration are not refreshed.|false|object| ++If the `spec.featureFlag.selectors` property isn't set, feature flags are not downloaded. It contains an array of *selector* objects, which have the following child properties. ++|Name|Description|Required|Type| +||||| +|keyFilter|The key filter for querying feature flags.|true|string| +|labelFilter|The label filter for querying feature flags.|false|string| ++The `spec.featureFlag.refresh` property has the following child properties. ++|Name|Description|Required|Type| +||||| +|enabled|The setting that determines whether feature flags from Azure App Configuration are automatically refreshed. If the property is absent, a default value of `false` is used.|false|bool| +|interval|The interval at which the feature flags are refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds is used.|false|duration string| ++## Installation ++Use the following `helm install` command to install the Azure App Configuration Kubernetes Provider. See [helm-values.yaml](https://github.com/Azure/AppConfiguration-KubernetesProvider/blob/main/deploy/parameter/helm-values.yaml) for the complete list of parameters and their default values. You can override the default values by passing the `--set` flag to the command. + +```bash +helm install azureappconfiguration.kubernetesprovider \ + oci://mcr.microsoft.com/azure-app-configuration/helmchart/kubernetes-provider \ + --namespace azappconfig-system \ + --create-namespace +``` ++### Autoscaling ++By default, autoscaling is disabled. 
However, if you have multiple `AzureAppConfigurationProvider` resources to produce multiple ConfigMaps/Secrets, you can enable horizontal pod autoscaling by setting `autoscaling.enabled` to `true`. ## Examples spec: interval: 1h ``` +### Feature Flags ++In the following sample, feature flags with keys starting with `app1` and labels equivalent to `common` are downloaded and refreshed every 10 minutes. ++``` yaml +apiVersion: azconfig.io/v1 +kind: AzureAppConfigurationProvider +metadata: + name: appconfigurationprovider-sample +spec: + endpoint: <your-app-configuration-store-endpoint> + target: + configMapName: configmap-created-by-appconfig-provider + featureFlag: + selectors: + - keyFilter: app1* + labelFilter: common + refresh: + enabled: true + interval: 10m +``` + ### ConfigMap Consumption Applications running in Kubernetes typically consume the ConfigMap either as environment variables or as configuration files. If the `configMapData.type` property is absent or is set to default, the ConfigMap is populated with the itemized list of data retrieved from Azure App Configuration, which can be easily consumed as environment variables. If the `configMapData.type` property is set to json, yaml or properties, data retrieved from Azure App Configuration is grouped into one item with key name specified by the `configMapData.key` property in the generated ConfigMap, which can be consumed as a mounted file. Assuming an App Configuration store has these key-values: #### [default](#tab/default) -and the `configMapData.type` property is absent or set to `default`, +And the `configMapData.type` property is absent or set to `default`, ``` yaml apiVersion: azconfig.io/v1 spec: configMapName: configmap-created-by-appconfig-provider ``` -the generated ConfigMap will be populated with the following data: +The generated ConfigMap is populated with the following data: ``` yaml data: data: #### [json](#tab/json) -and the `configMapData.type` property is set to `json`, +And the `configMapData.type` property is set to `json`, ``` yaml apiVersion: azconfig.io/v1 spec: key: appSettings.json ``` -the generated ConfigMap will be populated with the following data: +The generated ConfigMap is populated with the following data: ``` yaml data: data: #### [yaml](#tab/yaml) -and the `configMapData.type` property is set to `yaml`, +And the `configMapData.type` property is set to `yaml`, ``` yaml apiVersion: azconfig.io/v1 spec: key: appSettings.yaml ``` -the generated ConfigMap will be populated with the following data: +The generated ConfigMap is populated with the following data: ``` yaml data: data: #### [properties](#tab/properties) -and the `configMapData.type` property is set to `properties`, +And the `configMapData.type` property is set to `properties`, ``` yaml apiVersion: azconfig.io/v1 spec: key: app.properties ``` -the generated ConfigMap will be populated with the following data: +The generated ConfigMap is populated with the following data: ``` yaml data: |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md | To Arc-enable a System Center VMM management server, deploy [Azure Arc resource The following image shows the architecture for the Arc-enabled SCVMM: ## How is Arc-enabled SCVMM different from Arc-enabled Servers? |
azure-functions | Functions Bindings Cache Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-input.md | + + Title: Azure Cache for Redis input binding for Azure Functions (preview) +description: Learn how to use input bindings to connect to Azure Cache for Redis from Azure Functions. +++++ Last updated : 02/27/2024+zone_pivot_groups: programming-languages-set-functions-lang-workers +++# Azure Cache for Redis input binding for Azure Functions (preview) ++When a function runs, the Azure Cache for Redis input binding retrieves data from a cache and passes it to your function as an input parameter. ++For information on setup and configuration details, see the [overview](functions-bindings-cache.md). ++<! Replace with the following when Node.js v4 is supported: +--> +<! Replace with the following when Python v2 is supported: +--> ++## Example +++> [!IMPORTANT] +> +>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions. ++The following code uses the key from the pub/sub trigger to obtain and log the value from an input binding using a `GET` command: ++### [Isolated process](#tab/isolated-process) ++```csharp +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisInputBinding +{ + public class SetGetter + { + private readonly ILogger<SetGetter> logger; ++ public SetGetter(ILogger<SetGetter> logger) + { + this.logger = logger; + } ++ [Function(nameof(SetGetter))] + public void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key, + [RedisInput(Common.connectionStringSetting, "GET {Message}")] string value) + { + logger.LogInformation($"Key '{key}' was set to value '{value}'"); + } + } +} ++``` ++### [In-process](#tab/in-process) ++```csharp +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger +{ + internal class SetGetter + { + [FunctionName(nameof(SetGetter))] + public static void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key, + [Redis(Common.connectionStringSetting, "GET {Message}")] string value, + ILogger logger) + { + logger.LogInformation($"Key '{key}' was set to value '{value}'"); + } + } +} +``` ++++More samples for the Azure Cache for Redis input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-redis-extension). 
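The `Common.connectionStringSetting` reference in these samples is a constant defined in the sample project rather than in this article. The following is a minimal sketch of such a helper, assuming the cache connection string is stored in an application setting named `Redis`; both the class and the setting name are illustrative:

```csharp
// Hypothetical helper referenced as Common.connectionStringSetting in the samples above.
// It only names the application setting that holds the cache connection string;
// place it alongside your function classes.
public static class Common
{
    public const string connectionStringSetting = "Redis";
}
```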
+<!-- link to redis samples --> +The following code uses the key from the pub/sub trigger to obtain and log the value from an input binding using a `GET` command: ++```java +package com.function.RedisInputBinding; ++import com.microsoft.azure.functions.*; +import com.microsoft.azure.functions.annotation.*; +import com.microsoft.azure.functions.redis.annotation.*; ++public class SetGetter { + @FunctionName("SetGetter") + public void run( + @RedisPubSubTrigger( + name = "key", + connection = "redisConnectionString", + channel = "__keyevent@0__:set") + String key, + @RedisInput( + name = "value", + connection = "redisConnectionString", + command = "GET {Message}") + String value, + final ExecutionContext context) { + context.getLogger().info("Key '" + key + "' was set to value '" + value + "'"); + } +} +``` ++### [Model v3](#tab/nodejs-v3) ++This function.json defines both a pub/sub trigger and an input binding to the GET message on an Azure Cache for Redis instance: ++```json +{ + "bindings": [ + { + "type": "redisPubSubTrigger", + "connection": "redisConnectionString", + "channel": "__keyevent@0__:set", + "name": "key", + "direction": "in" + }, + { + "type": "redis", + "connection": "redisConnectionString", + "command": "GET {Message}", + "name": "value", + "direction": "in" + } + ], + "scriptFile": "index.js" +} +``` ++This JavaScript code (from index.js) retrives and logs the cached value related to the key provided by the pub/sub trigger. ++```nodejs ++module.exports = async function (context, key, value) { + context.log("Key '" + key + "' was set to value '" + value + "'"); +} ++``` ++### [Model v4](#tab/nodejs-v4) ++<! Replace with the following when Node.js v4 is supported: +--> +++++This function.json defines both a pub/sub trigger and an input binding to the GET message on an Azure Cache for Redis instance: +<!Note: it might be confusing that the binding `name` and the parameter name are the same in these examples. > +```json +{ + "bindings": [ + { + "type": "redisPubSubTrigger", + "connection": "redisConnectionString", + "channel": "__keyevent@0__:set", + "name": "key", + "direction": "in" + }, + { + "type": "redis", + "connection": "redisConnectionString", + "command": "GET {Message}", + "name": "value", + "direction": "in" + } + ], + "scriptFile": "run.ps1" +} +``` ++This PowerShell code (from run.ps1) retrieves and logs the cached value related to the key provided by the pub/sub trigger. ++```powershell +param($key, $value, $TriggerMetadata) +Write-Host "Key '$key' was set to value '$value'" +``` +++The following example uses a pub/sub trigger with an input binding to the GET message on an Azure Cache for Redis instance. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md). 
++### [v1](#tab/python-v1) ++This function.json defines both a pub/sub trigger and an input binding to the GET message on an Azure Cache for Redis instance: ++```json +{ + "bindings": [ + { + "type": "redisPubSubTrigger", + "connection": "redisConnectionString", + "channel": "__keyevent@0__:set", + "name": "key", + "direction": "in" + }, + { + "type": "redis", + "connection": "redisConnectionString", + "command": "GET {Message}", + "name": "value", + "direction": "in" + } + ] +} +``` ++This Python code (from \_\_init\_\_.py) retrives and logs the cached value related to the key provided by the pub/sub trigger: ++```python ++import logging ++def main(key: str, value: str): + logging.info("Key '" + key + "' was set to value '" + value + "'") ++``` ++The [configuration](#configuration) section explains these properties. ++### [v2](#tab/python-v2) ++<! Replace with the following when Python v2 is supported: +--> ++++## Attributes + +> [!NOTE] +> Not all commands are supported for this binding. At the moment, only read commands that return a single output are supported. The full list can be found [here](https://github.com/Azure/azure-functions-redis-extension/blob/main/src/Microsoft.Azure.WebJobs.Extensions.Redis/Bindings/RedisAsyncConverter.cs#L63) ++|Attribute property | Description | +|-|--| +| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | +| `Command` | The redis-cli command to be executed on the cache with all arguments separated by spaces, such as: `GET key`, `HGET key field`. | ++## Annotations ++The `RedisInput` annotation supports these properties: ++| Property | Description | +|-|| +| `name` | The name of the specific input binding. | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | +| `command` | The redis-cli command to be executed on the cache with all arguments separated by spaces, such as: `GET key` or `HGET key field`. | +## Configuration ++The following table explains the binding configuration properties that you set in the function.json file. ++| function.json property | Description | +||-| +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | +| `command` | The redis-cli command to be executed on the cache with all arguments separated by spaces, such as: `GET key`, `HGET key field`. | ++> [!NOTE] +> Python v2 and Node.js v4 for Functions don't use function.json to define the function. Both of these new language versions aren't currently supported by Azure Redis Cache bindings. +++See the [Example section](#example) for complete examples. ++## Usage ++The input binding expects to receive a string from the cache. +When you use a custom type as the binding parameter, the extension tries to deserialize a JSON-formatted string into the custom type of this parameter. 
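For example, the following is a minimal sketch (isolated worker model) of binding the cached value to a custom type. The `OrderData` class, the `Redis` application setting name, and the assumption that the cached value is stored as a JSON string are illustrative and not part of the extension itself:

```csharp
using Microsoft.Extensions.Logging;

namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisInputBinding
{
    // Hypothetical type; the value stored at the key is assumed to be JSON,
    // for example {"Name":"Contoso","Quantity":3}.
    public class OrderData
    {
        public string? Name { get; set; }
        public int Quantity { get; set; }
    }

    public class CustomTypeGetter
    {
        private readonly ILogger<CustomTypeGetter> logger;

        public CustomTypeGetter(ILogger<CustomTypeGetter> logger)
        {
            this.logger = logger;
        }

        [Function(nameof(CustomTypeGetter))]
        public void Run(
            [RedisPubSubTrigger("Redis", "__keyevent@0__:set")] string key,
            [RedisInput("Redis", "GET {Message}")] OrderData order)
        {
            // The extension deserializes the JSON string returned by GET into OrderData.
            logger.LogInformation($"Key '{key}' contains order '{order.Name}' with quantity {order.Quantity}");
        }
    }
}
```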
++## Related content ++- [Introduction to Azure Functions](functions-overview.md) +- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started) +- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind) +- [Redis connection string](functions-bindings-cache.md#redis-connection-string) |
azure-functions | Functions Bindings Cache Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-output.md | + + Title: Using Redis Output bindings with Azure Functions for Azure Cache for Redis (preview) +description: Learn how to use Redis output binding on an Azure Functions. ++zone_pivot_groups: programming-languages-set-functions-lang-workers ++++ Last updated : 02/27/2024+++# Azure Cache for Redis output binding for Azure Functions (preview) ++The Azure Cache for Redis output bindings lets you change the keys in a cache based on a set of available trigger on the cache. ++For information on setup and configuration details, see the [overview](functions-bindings-cache.md). ++<! Replace with the following when Node.js v4 is supported: +--> +<! Replace with the following when Python v2 is supported: +--> ++## Example +++The following example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the cache and the output binding returns a delete command for the key that triggered the function. ++> [!IMPORTANT] +> +>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions. ++### [In-process](#tab/in-process) ++```c# +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisOutputBinding +{ + internal class SetDeleter + { + [FunctionName(nameof(SetDeleter))] + public static void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key, + [Redis(Common.connectionStringSetting, "DEL")] out string[] arguments, + ILogger logger) + { + logger.LogInformation($"Deleting recently SET key '{key}'"); + arguments = new string[] { key }; + } + } +} +``` ++### [Isolated process](#tab/isolated-process) ++```csharp +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisOutputBinding +{ + internal class SetDeleter + { + [FunctionName(nameof(SetDeleter))] + [return: Redis(Common.connectionStringSetting, "DEL")] + public static string Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key, + ILogger logger) + { + logger.LogInformation($"Deleting recently SET key '{key}'"); + return key; + } + } +} +``` ++++The following example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the cache and the output binding returns a delete command for the key that triggered the function. +<!Note: it might be confusing that the binding `name` and the parameter name are the same in these examples. 
> +```java +package com.function.RedisOutputBinding; ++import com.microsoft.azure.functions.*; +import com.microsoft.azure.functions.annotation.*; +import com.microsoft.azure.functions.redis.annotation.*; ++public class SetDeleter { + @FunctionName("SetDeleter") + @RedisOutput( + name = "value", + connection = "redisConnectionString", + command = "DEL") + public String run( + @RedisPubSubTrigger( + name = "key", + connection = "redisConnectionString", + channel = "__keyevent@0__:set") + String key, + final ExecutionContext context) { + context.getLogger().info("Deleting recently SET key '" + key + "'"); + return key; + } +} ++``` ++This example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the cache and the output binding returns a delete command for the key that triggered the function. ++### [Model v3](#tab/nodejs-v3) ++The bindings are defined in this `function.json`` file: ++```json +{ + "bindings": [ + { + "type": "redisPubSubTrigger", + "connection": "redisConnectionString", + "channel": "__keyevent@0__:set", + "name": "key", + "direction": "in" + }, + { + "type": "redis", + "connection": "redisConnectionString", + "command": "DEL", + "name": "$return", + "direction": "out" + } + ], + "scriptFile": "index.js" +} +``` ++This code from the `index.js` file takes the key from the trigger and returns it to the output binding to delete the cached item. ++```javascript +module.exports = async function (context, key) { + context.log("Deleting recently SET key '" + key + "'"); + return key; +} ++``` ++### [Model v4](#tab/nodejs-v4) ++<! Replace with the following when Node.js v4 is supported: +--> +++This example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the cache and the output binding returns a delete command for the key that triggered the function. ++The bindings are defined in this `function.json` file: ++```json +{ + "bindings": [ + { + "type": "redisPubSubTrigger", + "connection": "redisLocalhost", + "channel": "__keyevent@0__:set", + "name": "key", + "direction": "in" + }, + { + "type": "redis", + "connection": "redisLocalhost", + "command": "DEL", + "name": "retVal", + "direction": "out" + } + ], + "scriptFile": "run.ps1" +} ++``` ++This code from the `run.ps1` file takes the key from the trigger and passes it to the output binding to delete the cached item. ++```powershell +param($key, $TriggerMetadata) +Write-Host "Deleting recently SET key '$key'" +Push-OutputBinding -Name retVal -Value $key +``` ++This example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the cache and the output binding returns a delete command for the key that triggered the function. ++### [v1](#tab/python-v1) ++The bindings are defined in this `function.json` file: ++```json +{ + "bindings": [ + { + "type": "redisPubSubTrigger", + "connection": "redisLocalhost", + "channel": "__keyevent@0__:set", + "name": "key", + "direction": "in" + }, + { + "type": "redis", + "connection": "redisLocalhost", + "command": "DEL", + "name": "$return", + "direction": "out" + } + ], + "scriptFile": "__init__.py" +} +``` ++This code from the `__init__.py` file takes the key from the trigger and passes it to the output binding to delete the cached item. ++```python +import logging ++def main(key: str) -> str: + logging.info("Deleting recently SET key '" + key + "'") + return key +``` ++### [v2](#tab/python-v2) ++<! 
Replace with the following when Node.js v4 is supported: +--> ++++## Attributes ++> [!NOTE] +> All commands are supported for this binding. ++The way in which you define an output binding parameter depends on whether your C# functions runs [in-process](functions-dotnet-class-library.md) or in an [isolated worker process](dotnet-isolated-process-guide.md). ++The output binding is defined this way: ++| Definition | Example | Description | +| -- | -- | -- | +| On an `out` parameter | `[Redis(<Connection>, <Command>)] out string <Return_Variable>` | The string variable returned by the method is a key value that the binding uses to execute the command against the specific cache. | ++In this case, the type returned by the method is a key value that the binding uses to execute the command against the specific cache. ++When your function has multiple output bindings, you can instead apply the binding attribute to the property of a type that is a key value, which the binding uses to execute the command against the specific cache. For more information, see [Multiple output bindings](dotnet-isolated-process-guide.md#multiple-output-bindings). ++++Regardless of the C# process mode, the same properties are supported by the output binding attribute: ++| Attribute property | Description | +|--| -| +| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | +| `Command` | The redis-cli command to be executed on the cache, such as: `DEL`. | +++## Annotations ++The `RedisOutput` annotation supports these properties: ++| Property | Description | +|-|| +| `name` | The name of the specific input binding. | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | +| `command` | The redis-cli command to be executed on the cache, such as: `DEL`. | +++## Configuration ++The following table explains the binding configuration properties that you set in the _function.json_ file. ++| Property | Description | +|-|| +| `name` | The name of the specific input binding. | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | +| `command` | The redis-cli command to be executed on the cache, such as: `DEL`. | +++See the [Example section](#example) for complete examples. ++## Usage ++The output returns a string, which is the key of the cache entry on which apply the specific command. ++There are three types of connections that are allowed from an Azure Functions instance to a Redis Cache in your deployments. For local development, you can also use service principal secrets. Use the `appsettings` to configure each of the following types of client authentication, assuming the `Connection` was set to `Redis` in the function. 
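For local development with a connection string, a minimal sketch of a `local.settings.json` might look like the following; the worker runtime, the setting name `Redis`, and the placeholder values are illustrative and should be replaced with your own cache name and access key:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "Redis": "<cacheName>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False"
  }
}
```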
++## Related content ++- [Introduction to Azure Functions](functions-overview.md) +- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started) +- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind) +- [Redis connection string](functions-bindings-cache.md#redis-connection-string) +- [Multiple output bindings](dotnet-isolated-process-guide.md#multiple-output-bindings) |
azure-functions | Functions Bindings Cache Trigger Redislist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md | Title: Using RedisListTrigger Azure Function (preview) -description: Learn how to use RedisListTrigger Azure Functions + Title: RedisListTrigger for Azure Functions (preview) +description: Learn how to use the RedisListTrigger in Azure Functions for Azure Cache for Redis. zone_pivot_groups: programming-languages-set-functions-lang-workers -# RedisListTrigger Azure Function (preview) +# RedisListTrigger for Azure Functions (preview) The `RedisListTrigger` pops new elements from a list and surfaces those entries to the function. +For more information about Azure Cache for Redis triggers and bindings, see [Redis Extension for Azure Functions](https://github.com/Azure/azure-functions-redis-extension/tree/main). + ## Scope of availability for functions triggers |Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash | The `RedisListTrigger` pops new elements from a list and surfaces those entries | Lists | Yes | Yes | Yes | > [!IMPORTANT]-> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). +> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). > +<! Replace with the following when Node.js v4 is supported: +--> +<! Replace with the following when Python v2 is supported: +--> + ## Example ::: zone pivot="programming-language-csharp" -The following sample polls the key `listTest` at a localhost Redis instance at `127.0.0.1:6379`: +> [!IMPORTANT] +> +>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions. ++The following sample polls the key `listTest`: ### [Isolated worker model](#tab/isolated-process) -The isolated process examples aren't available in preview.
+```csharp +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisListTrigger +{ + public class SimpleListTrigger + { + private readonly ILogger<SimpleListTrigger> logger; ++ public SimpleListTrigger(ILogger<SimpleListTrigger> logger) + { + this.logger = logger; + } ++ [Function(nameof(SimpleListTrigger))] + public void Run( + [RedisListTrigger(Common.connectionStringSetting, "listTest")] string entry) + { + logger.LogInformation(entry); + } + } +} ++``` ### [In-process model](#tab/in-process) ```csharp-[FunctionName(nameof(ListsTrigger))] -public static void ListsTrigger( - [RedisListTrigger("Redis", "listTest")] string entry, - ILogger logger) +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisListTrigger {- logger.LogInformation($"The entry pushed to the list listTest: '{entry}'"); + internal class SimpleListTrigger + { + [FunctionName(nameof(SimpleListTrigger))] + public static void Run( + [RedisListTrigger(Common.connectionStringSetting, "listTest")] string entry, + ILogger logger) + { + logger.LogInformation(entry); + } + } } ``` public static void ListsTrigger( The following sample polls the key `listTest` at a localhost Redis instance at `redisLocalhost`: ```java- @FunctionName("ListTrigger") - public void ListTrigger( +package com.function.RedisListTrigger; ++import com.microsoft.azure.functions.*; +import com.microsoft.azure.functions.annotation.*; +import com.microsoft.azure.functions.redis.annotation.*; ++public class SimpleListTrigger { + @FunctionName("SimpleListTrigger") + public void run( @RedisListTrigger(- name = "entry", - connectionStringSetting = "redisLocalhost", + name = "req", + connection = "redisConnectionString", key = "listTest",- pollingIntervalInMs = 100, - messagesPerWorker = 10, - count = 1, - listPopFromBeginning = false) - String entry, + pollingIntervalInMs = 1000, + maxBatchSize = 1) + String message, final ExecutionContext context) {- context.getLogger().info(entry); + context.getLogger().info(message); }+} ``` ::: zone-end ::: zone pivot="programming-language-javascript" -### [v3](#tab/node-v3) +### [Model v3](#tab/node-v3) This sample uses the same `index.js` file, with binding data in the `function.json` file. module.exports = async function (context, entry) { From `function.json`, here's the binding data: -```javascript +```json {- "bindings": [ - { - "type": "redisListTrigger", - "listPopFromBeginning": true, - "connectionStringSetting": "redisLocalhost", - "key": "listTest", - "pollingIntervalInMs": 1000, - "messagesPerWorker": 100, - "count": 10, - "name": "entry", - "direction": "in" - } - ], - "scriptFile": "index.js" + "bindings": [ + { + "type": "redisListTrigger", + "listPopFromBeginning": true, + "connection": "redisConnectionString", + "key": "listTest", + "pollingIntervalInMs": 1000, + "maxBatchSize": 16, + "name": "entry", + "direction": "in" + } + ], + "scriptFile": "index.js" } ``` -### [v4](#tab/node-v4) +### [Model v4](#tab/node-v4) -The JavaScript v4 programming model example isn't available in preview. +<! 
Replace with the following when Node.js v4 is supported: +--> Write-Host $entry From `function.json`, here's the binding data: -```powershell +```json {- "bindings": [ - { - "type": "redisListTrigger", - "listPopFromBeginning": true, - "connectionStringSetting": "redisLocalhost", - "key": "listTest", - "pollingIntervalInMs": 1000, - "messagesPerWorker": 100, - "count": 10, - "name": "entry", - "direction": "in" - } - ], - "scriptFile": "run.ps1" + "bindings": [ + { + "type": "redisListTrigger", + "listPopFromBeginning": true, + "connection": "redisConnectionString", + "key": "listTest", + "pollingIntervalInMs": 1000, + "maxBatchSize": 16, + "name": "entry", + "direction": "in" + } + ], + "scriptFile": "run.ps1" } ``` From `function.json`, here's the binding data: ```json {- "bindings": [ - { - "type": "redisListTrigger", - "listPopFromBeginning": true, - "connectionStringSetting": "redisLocalhost", - "key": "listTest", - "pollingIntervalInMs": 1000, - "messagesPerWorker": 100, - "count": 10, - "name": "entry", - "direction": "in" - } - ], - "scriptFile": "__init__.py" + "bindings": [ + { + "type": "redisListTrigger", + "listPopFromBeginning": true, + "connection": "redisConnectionString", + "key": "listTest", + "pollingIntervalInMs": 1000, + "maxBatchSize": 16, + "name": "entry", + "direction": "in" + } + ], + "scriptFile": "__init__.py" } ``` ### [v2](#tab/python-v2) -The Python v2 programming model example isn't available in preview. +<! Replace with the following when Python v2 is supported: +--> The Python v2 programming model example isn't available in preview. | Parameter | Description | Required | Default | |||:--:|--:|-| `ConnectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`). | Yes | | +| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | | | `Key` | Key to read from. This field can be resolved using `INameResolver`. | Yes | | | `PollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` | | `MessagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Optional | `100` |-| `Count` | Number of entries to pop from Redis at one time. These are processed in parallel. Only supported on Redis 6.2+ using the `COUNT` argument in [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/). | Optional | `10` | +| `Count` | Number of entries to pop from Redis at one time. Entries are processed in parallel. Only supported on Redis 6.2+ using the `COUNT` argument in [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/). | Optional | `10` | | `ListPopFromBeginning` | Determines whether to pop entries from the beginning using [`LPOP`](https://redis.io/commands/lpop/), or to pop entries from the end using [`RPOP`](https://redis.io/commands/rpop/). | Optional | `true` | ::: zone-end The Python v2 programming model example isn't available in preview. | Parameter | Description | Required | Default | ||-|:--:|--:| | `name` | "entry" | | |-| `connectionStringSetting` | The name of the setting in the `appsettings` that contains the cache connection string. 
For example: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...`| Yes | | | `key` | This field can be resolved using INameResolver. | Yes | | | `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` | | `messagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Optional | `100` | The following table explains the binding configuration properties that you set i ||-|:--:|--:| | `type` | Name of the trigger. | No | | | `listPopFromBeginning` | Whether to delete the stream entries after the function has run. Set to `true`. | Yes | `true` |-| `connectionString` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password...` | No | | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | No | | | `key` | This field can be resolved using `INameResolver`. | No | | | `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Yes | `1000` | | `messagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Yes | `100` |-| `count` | Number of entries to read from the cache at one time. These are processed in parallel. | Yes | `10` | +| `count` | Number of entries to read from the cache at one time. Entries are processed in parallel. | Yes | `10` | | `name` | ? | Yes | | | `direction` | Set to `in`. | No | | See the Example section for complete examples. The `RedisListTrigger` pops new elements from a list and surfaces those entries to the function. The trigger polls Redis at a configurable fixed interval, and uses [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/) to pop entries from the lists. -### Output ---> [!NOTE] -> Once the `RedisListTrigger` becomes generally available, the following information will be moved to a dedicated Output page. --StackExchange.Redis.RedisValue --| Output Type | Description | -||| -| [`StackExchange.Redis.RedisValue`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/RedisValue.cs) | `string`, `byte[]`, `ReadOnlyMemory<byte>`: The entry from the list. | -| `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` to a custom type. | -- -> [!NOTE] -> Once the `RedisListTrigger` becomes generally available, the following information will be moved to a dedicated Output page. --| Output Type | Description | +| Type | Description | |-|--| | `byte[]` | The message from the channel. | | `string` | The message from the channel. | | `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. 
| ---- ::: zone-end ## Related content StackExchange.Redis.RedisValue - [Introduction to Azure Functions](functions-overview.md) - [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started) - [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)+- [Redis connection string](functions-bindings-cache.md#redis-connection-string) - [Redis lists](https://redis.io/docs/data-types/lists/) |
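As a companion to the RedisListTrigger article above, the following minimal sketch (isolated worker model) binds each popped list entry to a custom type, which the trigger's type table describes as deserialized with Json.NET. The `WorkItem` class, the `Redis` application setting name, and the assumption that list entries are JSON strings are illustrative:

```csharp
using Microsoft.Extensions.Logging;

namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisListTrigger
{
    // Hypothetical payload type; list entries are assumed to be JSON strings,
    // for example {"Id":42,"Status":"queued"}.
    public class WorkItem
    {
        public int Id { get; set; }
        public string? Status { get; set; }
    }

    public class CustomTypeListTrigger
    {
        private readonly ILogger<CustomTypeListTrigger> logger;

        public CustomTypeListTrigger(ILogger<CustomTypeListTrigger> logger)
        {
            this.logger = logger;
        }

        [Function(nameof(CustomTypeListTrigger))]
        public void Run(
            [RedisListTrigger("Redis", "listTest")] WorkItem item)
        {
            // Each popped entry is deserialized into WorkItem before the function runs.
            logger.LogInformation($"Popped work item {item.Id} with status '{item.Status}'");
        }
    }
}
```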
azure-functions | Functions Bindings Cache Trigger Redispubsub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md | Title: Using RedisPubSubTrigger Azure Function (preview) -description: Learn how to use RedisPubSubTrigger Azure Function + Title: RedisPubSubTrigger for Azure Functions (preview) +description: Learn how to use RedisPubSubTrigger Azure Function with Azure Cache for Redis. zone_pivot_groups: programming-languages-set-functions-lang-workers -# RedisPubSubTrigger Azure Function (preview) +# RedisPubSubTrigger for Azure Functions (preview) Redis features [publish/subscribe functionality](https://redis.io/docs/interact/pubsub/) that enables messages to be sent to Redis and broadcast to subscribers. +For more information about Azure Cache for Redis triggers and bindings, [Redis Extension for Azure Functions](https://github.com/Azure/azure-functions-redis-extension/tree/main). + ## Scope of availability for functions triggers |Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash | Redis features [publish/subscribe functionality](https://redis.io/docs/interact/ > This trigger isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel. > +<! Replace with the following when Node.js v4 is supported: +--> +<! Replace with the following when Node.js v4 is supported: +--> + ## Examples ::: zone pivot="programming-language-csharp" [!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)] +> [!IMPORTANT] +> +>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions. + ### [Isolated worker model](#tab/isolated-process) -The isolated process examples aren't available in preview. +This sample listens to the channel `pubsubTest`. ```csharp-//TBD +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisPubSubTrigger +{ + internal class SimplePubSubTrigger + { + private readonly ILogger<SimplePubSubTrigger> logger; ++ public SimplePubSubTrigger(ILogger<SimplePubSubTrigger> logger) + { + this.logger = logger; + } ++ [Function(nameof(SimplePubSubTrigger))] + public void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "pubsubTest")] string message) + { + logger.LogInformation(message); + } + } +} ++``` ++This sample listens to any keyspace notifications for the key `keyspaceTest`. ++```csharp +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisPubSubTrigger +{ + internal class KeyspaceTrigger + { + private readonly ILogger<KeyspaceTrigger> logger; ++ public KeyspaceTrigger(ILogger<KeyspaceTrigger> logger) + { + this.logger = logger; + } + + [Function(nameof(KeyspaceTrigger))] + public void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyspace@0__:keyspaceTest")] string message) + { + logger.LogInformation(message); + } + } +} ++``` ++This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/). 
++```csharp +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisPubSubTrigger +{ + internal class KeyeventTrigger + { + private readonly ILogger<KeyeventTrigger> logger; ++ public KeyeventTrigger(ILogger<KeyeventTrigger> logger) + { + this.logger = logger; + } + + [Function(nameof(KeyeventTrigger))] + public void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:del")] string message) + { + logger.LogInformation($"Key '{message}' deleted."); + } + } +} + ``` ### [In-process model](#tab/in-process) The isolated process examples aren't available in preview. This sample listens to the channel `pubsubTest`. ```csharp-[FunctionName(nameof(PubSubTrigger))] -public static void PubSubTrigger( - [RedisPubSubTrigger("redisConnectionString", "pubsubTest")] string message, - ILogger logger) +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger {- logger.LogInformation(message); + internal class SimplePubSubTrigger + { + [FunctionName(nameof(SimplePubSubTrigger))] + public static void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "pubsubTest")] string message, + ILogger logger) + { + logger.LogInformation(message); + } + } } ``` -This sample listens to any keyspace notifications for the key `myKey`. +This sample listens to any keyspace notifications for the key `keyspaceTest`. ```csharp -[FunctionName(nameof(KeyspaceTrigger))] -public static void KeyspaceTrigger( - [RedisPubSubTrigger("redisConnectionString", "__keyspace@0__:myKey")] string message, - ILogger logger) +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger {- logger.LogInformation(message); + internal class KeyspaceTrigger + { + [FunctionName(nameof(KeyspaceTrigger))] + public static void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyspace@0__:keyspaceTest")] string message, + ILogger logger) + { + logger.LogInformation(message); + } + } } ``` This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/). ```csharp-[FunctionName(nameof(KeyeventTrigger))] -public static void KeyeventTrigger( - [RedisPubSubTrigger("redisConnectionString", "__keyevent@0__:del")] string message, - ILogger logger) +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger {- logger.LogInformation(message); + internal class KeyeventTrigger + { + [FunctionName(nameof(KeyeventTrigger))] + public static void Run( + [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:del")] string message, + ILogger logger) + { + logger.LogInformation($"Key '{message}' deleted."); + } + } } ``` public static void KeyeventTrigger( This sample listens to the channel `pubsubTest`. 
```java-@FunctionName("PubSubTrigger") - public void PubSubTrigger( +package com.function.RedisPubSubTrigger; ++import com.microsoft.azure.functions.*; +import com.microsoft.azure.functions.annotation.*; +import com.microsoft.azure.functions.redis.annotation.*; ++public class SimplePubSubTrigger { + @FunctionName("SimplePubSubTrigger") + public void run( @RedisPubSubTrigger(- name = "message", - connectionStringSetting = "redisConnectionString", + name = "req", + connection = "redisConnectionString", channel = "pubsubTest") String message, final ExecutionContext context) { context.getLogger().info(message); }+} ``` This sample listens to any keyspace notifications for the key `myKey`. ```java-@FunctionName("KeyspaceTrigger") - public void KeyspaceTrigger( +package com.function.RedisPubSubTrigger; ++import com.microsoft.azure.functions.*; +import com.microsoft.azure.functions.annotation.*; +import com.microsoft.azure.functions.redis.annotation.*; ++public class KeyspaceTrigger { + @FunctionName("KeyspaceTrigger") + public void run( @RedisPubSubTrigger(- name = "message", - connectionStringSetting = "redisConnectionString", - channel = "__keyspace@0__:myKey") + name = "req", + connection = "redisConnectionString", + channel = "__keyspace@0__:keyspaceTest") String message, final ExecutionContext context) { context.getLogger().info(message); }+} ``` This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/). ```java- @FunctionName("KeyeventTrigger") - public void KeyeventTrigger( +package com.function.RedisPubSubTrigger; ++import com.microsoft.azure.functions.*; +import com.microsoft.azure.functions.annotation.*; +import com.microsoft.azure.functions.redis.annotation.*; ++public class KeyeventTrigger { + @FunctionName("KeyeventTrigger") + public void run( @RedisPubSubTrigger(- name = "message", - connectionStringSetting = "redisConnectionString", + name = "req", + connection = "redisConnectionString", channel = "__keyevent@0__:del") String message, final ExecutionContext context) { context.getLogger().info(message); }+} ``` ::: zone-end ::: zone pivot="programming-language-javascript" -### [v3](#tab/node-v3) +### [Model v3](#tab/node-v3) This sample uses the same `index.js` file, with binding data in the `function.json` file determining on which channel the trigger occurs. Here's binding data to listen to the channel `pubsubTest`. "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", + "connection": "redisConnectionString", "channel": "pubsubTest", "name": "message", "direction": "in" Here's binding data to listen to the channel `pubsubTest`. } ``` -Here's binding data to listen to keyspace notifications for the key `myKey`. +Here's binding data to listen to keyspace notifications for the key `keyspaceTest`. 
```json { "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", - "channel": "__keyspace@0__:myKey", + "connection": "redisConnectionString", + "channel": "__keyspace@0__:keyspaceTest", "name": "message", "direction": "in" } Here's binding data to listen to `keyevent` notifications for the delete command "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", + "connection": "redisConnectionString", "channel": "__keyevent@0__:del", "name": "message", "direction": "in" Here's binding data to listen to `keyevent` notifications for the delete command ], "scriptFile": "index.js" }+ ```-### [v4](#tab/node-v4) -The JavaScript v4 programming model example isn't available in preview. +### [Model v4](#tab/node-v4) ++<! Replace with the following when Node.js v4 is supported: +--> ::: zone-end Here's binding data to listen to the channel `pubsubTest`. "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", + "connection": "redisConnectionString", "channel": "pubsubTest", "name": "message", "direction": "in" Here's binding data to listen to the channel `pubsubTest`. } ``` -Here's binding data to listen to keyspace notifications for the key `myKey`. +Here's binding data to listen to keyspace notifications for the key `keyspaceTest`. ```json { "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", - "channel": "__keyspace@0__:myKey", + "connection": "redisConnectionString", + "channel": "__keyspace@0__:keyspaceTest", "name": "message", "direction": "in" } Here's binding data to listen to `keyevent` notifications for the delete command "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", + "connection": "redisConnectionString", "channel": "__keyevent@0__:del", "name": "message", "direction": "in" Here's binding data to listen to the channel `pubsubTest`. "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", + "connection": "redisConnectionString", "channel": "pubsubTest", "name": "message", "direction": "in" Here's binding data to listen to the channel `pubsubTest`. } ``` -Here's binding data to listen to keyspace notifications for the key `myKey`. +Here's binding data to listen to keyspace notifications for the key `keyspaceTest`. ```json { "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", - "channel": "__keyspace@0__:myKey", + "connection": "redisConnectionString", + "channel": "__keyspace@0__:keyspaceTest", "name": "message", "direction": "in" } Here's binding data to listen to `keyevent` notifications for the delete command "bindings": [ { "type": "redisPubSubTrigger",- "connectionStringSetting": "redisConnectionString", + "connection": "redisConnectionString", "channel": "__keyevent@0__:del", "name": "message", "direction": "in" Here's binding data to listen to `keyevent` notifications for the delete command ### [v2](#tab/python-v2) -The Python v2 programming model example isn't available in preview. +<! Replace with the following when Python v2 is supported: +--> The Python v2 programming model example isn't available in preview. | Parameter | Description | Required | Default | ||--|:--:| --:|-| `ConnectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string. For example,`<cacheName>.redis.cache.windows.net:6380,password=...`. 
| Yes | | +| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | | | `Channel` | The pub sub channel that the trigger should listen to. Supports glob-style channel patterns. This field can be resolved using `INameResolver`. | Yes | | ::: zone-end The Python v2 programming model example isn't available in preview. | Parameter | Description | Required | Default | ||--|: --:| --:| | `name` | Name of the variable holding the value returned by the function. | Yes | |-| `connectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`) | Yes | | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...`| Yes | | | `channel` | The pub sub channel that the trigger should listen to. Supports glob-style channel patterns. | Yes | | ::: zone-end The Python v2 programming model example isn't available in preview. | function.json property | Description | Required | Default | ||--| :--:| --:|-| `type` | Trigger type. For the pub sub trigger, this is `redisPubSubTrigger`. | Yes | | -| `connectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`) | Yes | | +| `type` | Trigger type. For the pub sub trigger, the type is `redisPubSubTrigger`. | Yes | | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...`| Yes | | | `channel` | Name of the pub sub channel that is being subscribed to | Yes | | | `name` | Name of the variable holding the value returned by the function. | Yes | | | `direction` | Must be set to `in`. | Yes | | The Python v2 programming model example isn't available in preview. ::: zone-end >[!IMPORTANT]->The `connectionStringSetting` parameter does not hold the Redis cache connection string itself. Instead, it points to the name of the environment variable that holds the connection string. This makes the application more secure. For more information, see [Redis connection string](functions-bindings-cache.md#redis-connection-string). +>The `connection` parameter does not hold the Redis cache connection string itself. Instead, it points to the name of the environment variable that holds the connection string. This makes the application more secure. For more information, see [Redis connection string](functions-bindings-cache.md#redis-connection-string). > ## Usage Redis features [publish/subscribe functionality](https://redis.io/docs/interact/ - The `RedisPubSubTrigger` isn't capable of listening to [keyspace notifications](https://redis.io/docs/manual/keyspace-notifications/) on clustered caches. - Basic tier functions don't support triggering on `keyspace` or `keyevent` notifications through the `RedisPubSubTrigger`. - The `RedisPubSubTrigger` isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. 
For consumption plans, your function might miss certain messages published to the channel.-- Functions with the `RedisPubSubTrigger` shouldn't be scaled out to multiple instances. Each instance listens and processes each pub sub message, resulting in duplicate processing+- Functions with the `RedisPubSubTrigger` shouldn't be scaled out to multiple instances. Each instance listens and processes each pub sub message, resulting in duplicate processing. > [!WARNING] > This trigger isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel. Because these events are published on pub/sub channels, the `RedisPubSubTrigger` > [!IMPORTANT] > In Azure Cache for Redis, `keyspace` events must be enabled before notifications are published. For more information, see [Advanced Settings](/azure/azure-cache-for-redis/cache-configure#keyspace-notifications-advanced-settings). -## Output - ::: zone pivot="programming-language-csharp" -> [!NOTE] -> Once the `RedisPubSubTrigger` becomes generally available, the following information will be moved to a dedicated Output page. +| Type | Description| +||| +| `string` | The channel message serialized as JSON (UTF-8 encoded for byte types) in the format that follows. | +| `Custom`| The trigger uses Json.NET serialization to map the message from the channel into the given custom type. | +JSON string format -| Output Type | Description| -||| -| [`StackExchange.Redis.ChannelMessage`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/ChannelMessageQueue.cs)| The value returned by `StackExchange.Redis`. | -| [`StackExchange.Redis.RedisValue`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/RedisValue.cs)| `string`, `byte[]`, `ReadOnlyMemory<byte>`: The message from the channel. | -| `Custom`| The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. | +```json +{ + "SubscriptionChannel":"__keyspace@0__:*", + "Channel":"__keyspace@0__:mykey", + "Message":"set" +} ++``` ::: zone-end ::: zone pivot="programming-language-java,programming-language-javascript,programming-language-powershell,programming-language-python" -> [!NOTE] -> Once the `RedisPubSubTrigger` becomes generally available, the following information will be moved to a dedicated Output page. --| Output Type | Description | +| Type | Description | |-|--|-| `byte[]` | The message from the channel. | -| `string` | The message from the channel. | +| `string` | The channel message serialized as JSON (UTF-8 encoded for byte types) in the format that follows. | | `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. 
| --+```json +{ + "SubscriptionChannel":"__keyspace@0__:*", + "Channel":"__keyspace@0__:mykey", + "Message":"set" +} +``` ::: zone-end Because these events are published on pub/sub channels, the `RedisPubSubTrigger` - [Introduction to Azure Functions](functions-overview.md) - [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started) - [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)+- [Redis connection string](functions-bindings-cache.md#redis-connection-string) - [Redis pub sub messages](https://redis.io/docs/manual/pubsub/) |
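As a companion to the `redisPubSubTrigger` property tables above, here is a minimal `function.json` sketch that wires the documented properties together. The setting name `redisConnectionString` matches the samples in this digest, while the channel `pubsubTest` and parameter name `message` are illustrative placeholders rather than values taken from the article.

```json
{
  "bindings": [
    {
      "type": "redisPubSubTrigger",
      "connection": "redisConnectionString",
      "channel": "pubsubTest",
      "name": "message",
      "direction": "in"
    }
  ]
}
```

With a binding like this, the `message` parameter receives the channel message serialized as JSON in the `SubscriptionChannel`/`Channel`/`Message` format shown earlier.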
azure-functions | Functions Bindings Cache Trigger Redisstream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md | Title: Using RedisStreamTrigger Azure Function (preview) -description: Learn how to use RedisStreamTrigger Azure Function + Title: RedisStreamTrigger for Azure Functions (preview) +description: Learn how to use RedisStreamTrigger Azure Function for Azure Cache for Redis. zone_pivot_groups: programming-languages-set-functions-lang-workers -# RedisStreamTrigger Azure Function (preview) +# RedisStreamTrigger for Azure Functions (preview) The `RedisStreamTrigger` reads new entries from a stream and surfaces those elements to the function. +For more information, see [RedisStreamTrigger](https://github.com/Azure/azure-functions-redis-extension/tree/mapalan/UpdateReadMe/samples/dotnet/RedisStreamTrigger). + | Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash | ||:--:|:--:|:-:| | Streams | Yes | Yes | Yes | > [!IMPORTANT]-> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). +> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). > +<! Replace with the following when Node.js v4 is supported: +--> +<! Replace with the following when Python v2 is supported: +--> + ## Example +> [!IMPORTANT] +> +>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions. + ::: zone pivot="programming-language-csharp" [!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)] ### [Isolated worker model](#tab/isolated-process) -The isolated process examples aren't available in preview. 
```csharp-//TBD +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisStreamTrigger +{ + internal class SimpleStreamTrigger + { + private readonly ILogger<SimpleStreamTrigger> logger; ++ public SimpleStreamTrigger(ILogger<SimpleStreamTrigger> logger) + { + this.logger = logger; + } ++ [Function(nameof(SimpleStreamTrigger))] + public void Run( + [RedisStreamTrigger(Common.connectionStringSetting, "streamKey")] string entry) + { + logger.LogInformation(entry); + } + } +} ``` ### [In-process model](#tab/in-process) ```csharp -[FunctionName(nameof(StreamsTrigger))] -public static void StreamsTrigger( - [RedisStreamTrigger("Redis", "streamTest")] string entry, - ILogger logger) +using Microsoft.Extensions.Logging; ++namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisStreamTrigger {- logger.LogInformation($"The entry pushed to the list listTest: '{entry}'"); + internal class SimpleStreamTrigger + { + [FunctionName(nameof(SimpleStreamTrigger))] + public static void Run( + [RedisStreamTrigger(Common.connectionStringSetting, "streamKey")] string entry, + ILogger logger) + { + logger.LogInformation(entry); + } + } }+ ``` public static void StreamsTrigger( ```java - @FunctionName("StreamTrigger") - public void StreamTrigger( +package com.function.RedisStreamTrigger; ++import com.microsoft.azure.functions.*; +import com.microsoft.azure.functions.annotation.*; +import com.microsoft.azure.functions.redis.annotation.*; ++public class SimpleStreamTrigger { + @FunctionName("SimpleStreamTrigger") + public void run( @RedisStreamTrigger(- name = "entry", - connectionStringSetting = "redisLocalhost", + name = "req", + connection = "redisConnectionString", key = "streamTest",- pollingIntervalInMs = 100, - messagesPerWorker = 10, - count = 1, - deleteAfterProcess = true) - String entry, + pollingIntervalInMs = 1000, + maxBatchSize = 1) + String message, final ExecutionContext context) {- context.getLogger().info(entry); + context.getLogger().info(message); }+} ``` ::: zone-end ::: zone pivot="programming-language-javascript" -### [v3](#tab/node-v3) +### [Model v3](#tab/node-v3) This sample uses the same `index.js` file, with binding data in the `function.json` file. From `function.json`, here's the binding data: "bindings": [ { "type": "redisStreamTrigger",- "deleteAfterProcess": false, - "connectionStringSetting": "redisLocalhost", + "connection": "redisConnectionString", "key": "streamTest", "pollingIntervalInMs": 1000,- "messagesPerWorker": 100, - "count": 10, + "maxBatchSize": 16, "name": "entry", "direction": "in" } From `function.json`, here's the binding data: } ``` -### [v4](#tab/node-v4) +### [Model v4](#tab/node-v4) -The JavaScript v4 programming model example isn't available in preview. + <! 
Replace with the following when Node.js v4 is supported: + [!INCLUDE [functions-nodejs-model-tabs-description](../../includes/functions-nodejs-model-tabs-description.md)] + --> + [!INCLUDE [functions-nodejs-model-tabs-redis-preview](../../includes/functions-nodejs-model-tabs-redis-preview.md)] Write-Host ($entry | ConvertTo-Json) From `function.json`, here's the binding data: -```powershell +```json { "bindings": [ { "type": "redisStreamTrigger",- "deleteAfterProcess": false, - "connectionStringSetting": "redisLocalhost", + "connection": "redisConnectionString", "key": "streamTest", "pollingIntervalInMs": 1000,- "messagesPerWorker": 100, - "count": 10, + "maxBatchSize": 16, "name": "entry", "direction": "in" } From `function.json`, here's the binding data: "bindings": [ { "type": "redisStreamTrigger",- "deleteAfterProcess": false, - "connectionStringSetting": "redisLocalhost", + "connection": "redisConnectionString", "key": "streamTest", "pollingIntervalInMs": 1000,- "messagesPerWorker": 100, - "count": 10, + "maxBatchSize": 16, "name": "entry", "direction": "in" } From `function.json`, here's the binding data: ### [v2](#tab/python-v2) -The Python v2 programming model example isn't available in preview. +<! Replace with the following when Python v2 is supported: +--> The Python v2 programming model example isn't available in preview. | Parameters | Description | Required | Default | ||-|:--:|--:|-| `ConnectionStringSetting` | The name of the setting in the `appsettings` that contains cache connection string For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | | +| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...`| Yes | | | `Key` | Key to read from. | Yes | | | `PollingIntervalInMs` | How often to poll the Redis server in milliseconds. | Optional | `1000` | | `MessagesPerWorker` | The number of messages each functions worker should process. Used to determine how many workers the function should scale to. | Optional | `100` | The Python v2 programming model example isn't available in preview. | Parameter | Description | Required | Default | ||-|:--:|--:| | `name` | `entry` | Yes | |-| `connectionStringSetting` | The name of the setting in the `appsettings` that contains cache connection string For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | | | `key` | Key to read from. | Yes | | | `pollingIntervalInMs` | How frequently to poll Redis, in milliseconds. | Optional | `1000` |-| `messagesPerWorker` | The number of messages each functions worker should process. It's used to determine how many workers the function should scale to | Optional | `100` | -| `count` | Number of entries to read from Redis at one time. These are processed in parallel. | Optional | `10` | +| `messagesPerWorker` | The number of messages each functions worker should process. It's used to determine how many workers the function should scale to. | Optional | `100` | +| `count` | Number of entries to read from Redis at one time. Entries are processed in parallel. | Optional | `10` | | `deleteAfterProcess` | Whether to delete the stream entries after the function has run. 
| Optional | `false` | ::: zone-end The following table explains the binding configuration properties that you set i ||-|:--:|--:| | `type` | | Yes | | | `deleteAfterProcess` | | Optional | `false` |-| `connectionStringSetting` | The name of the setting in the `appsettings` that contains cache connection string For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | | +| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | | | `key` | The key to read from. | Yes | | | `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` | | `messagesPerWorker` | (optional) The number of messages each functions worker should process. Used to determine how many workers the function should scale | Optional | `100` | The `RedisStreamTrigger` Azure Function reads new entries from a stream and surf The trigger polls Redis at a configurable fixed interval, and uses [`XREADGROUP`](https://redis.io/commands/xreadgroup/) to read elements from the stream. -The consumer group for all function instances is the `ID` of the function. For example, `Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisSamples.StreamTrigger` for the `StreamTrigger` sample. Each function creates a new random GUID to use as its consumer name within the group to ensure that scaled out instances of the function don't read the same messages from the stream. +The consumer group for all instances of a function is the name of the function, that is, `SimpleStreamTrigger` for the [StreamTrigger sample](https://github.com/Azure/azure-functions-redis-extension/blob/main/samples/dotnet/RedisStreamTrigger/SimpleStreamTrigger.cs). -### Output +Each functions instance uses the [`WEBSITE_INSTANCE_ID`](/azure/app-service/reference-app-settings?tabs=kudu%2Cdotnet#scaling) or generates a random GUID to use as its consumer name within the group to ensure that scaled out instances of the function don't read the same messages from the stream. --> [!NOTE] -> Once the `RedisStreamTrigger` becomes generally available, the following information will be moved to a dedicated Output page. +<!-- ::: zone pivot="programming-language-csharp" -| Output Type | Description | +| Type | Description | |-|--| | [`StackExchange.Redis.ChannelMessage`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/ChannelMessageQueue.cs) | The value returned by `StackExchange.Redis`. | | `StackExchange.Redis.NameValueEntry[]`, `Dictionary<string, string>` | The values contained within the entry. | | `string, byte[], ReadOnlyMemory<byte>` | The stream entry serialized as JSON (UTF-8 encoded for byte types) in the following format: `{"Id":"1658354934941-0","Values":{"field1":"value1","field2":"value2","field3":"value3"}}` | | `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. | - -> [!NOTE] -> Once the `RedisStreamTrigger` becomes generally available, the following information will be moved to a dedicated Output page. -| Output Type | Description | +| Type | Description | |-|--| | `byte[]` | The message from the channel. | | `string` | The message from the channel. | | `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. 
| --- ::: zone-end ## Related content The consumer group for all function instances is the `ID` of the function. For e - [Introduction to Azure Functions](functions-overview.md) - [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started) - [Using Azure Functions and Azure Cache for Redis to create a write-behind cache](/azure/azure-cache-for-redis/cache-tutorial-write-behind)+- [Redis connection string](functions-bindings-cache.md#redis-connection-string) - [Redis streams](https://redis.io/docs/data-types/streams/) |
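To make the stream trigger's type table concrete, the example entry it references is reproduced below, pretty-printed as the JSON a `string` parameter would receive; binding to `Dictionary<string, string>` instead surfaces only the contents of `Values`. The field names are the illustrative ones used in that table, not real stream data.

```json
{
  "Id": "1658354934941-0",
  "Values": {
    "field1": "value1",
    "field2": "value2",
    "field3": "value3"
  }
}
```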
azure-functions | Functions Bindings Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md | Azure Cache for Redis can be used as a trigger for Azure Functions, allowing you You can integrate Azure Cache for Redis and Azure Functions to build functions that react to events from Azure Cache for Redis or external systems. -| Action | Direction | Type | Preview | -||--||| -| Triggers on Redis pub sub messages | N/A | [RedisPubSubTrigger](functions-bindings-cache-trigger-redispubsub.md) | Yes| -| Triggers on Redis lists | N/A | [RedisListsTrigger](functions-bindings-cache-trigger-redislist.md) | Yes | -| Triggers on Redis streams | N/A | [RedisStreamsTrigger](functions-bindings-cache-trigger-redisstream.md) | Yes | +| Action | Direction | Support level | +||--|--| +| [Trigger on Redis pub sub messages](functions-bindings-cache-trigger-redispubsub.md) | Trigger | Preview | +| [Trigger on Redis lists](functions-bindings-cache-trigger-redislist.md) | Trigger | Preview | +| [Trigger on Redis streams](functions-bindings-cache-trigger-redisstream.md) | Trigger | Preview | +| [Read a cached value](functions-bindings-cache-input.md) | Input | Preview | +| [Write values to cache](functions-bindings-cache-output.md) | Output | Preview | -## Scope of availability for functions triggers +## Scope of availability for functions triggers and bindings |Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash | ||::|::|::| |Pub/Sub | Yes | Yes | Yes | |Lists | Yes | Yes | Yes | |Streams | Yes | Yes | Yes |+|Bindings | Yes | Yes | Yes | > [!IMPORTANT]-> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). -> +> Redis triggers are currently only supported for functions running in either an [Elastic Premium plan](functions-premium-plan.md) or a dedicated [App Service plan](./dedicated-plan.md). ::: zone pivot="programming-language-csharp" dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell" -1. Add the extension bundle by adding or replacing the following code in your _host.json_ file: +Add the extension bundle by adding or replacing the following code in your _host.json_ file: - <!-- I don't see this in the samples. --> - ```json + ```json { "version": "2.0", "extensionBundle": { dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease "version": "[4.11.*, 5.0.0)" } }+ ``` - >[!WARNING] - >The Redis extension is currently only available in a preview bundle release. - > +>[!WARNING] +>The Redis extension is currently only available in a preview bundle release. +> ::: zone-end ## Redis connection string -Azure Cache for Redis triggers and bindings have a required property for the cache connection string. The connection string can be found on the [**Access keys**](/azure/azure-cache-for-redis/cache-configure#access-keys) menu in the Azure Cache for Redis portal. The Redis trigger or binding looks for an environmental variable holding the connection string with the name passed to the `ConnectionStringSetting` parameter. In local development, the `ConnectionStringSetting` can be defined using the [local.settings.json](/azure/azure-functions/functions-develop-local#local-settings-file) file. When deployed to Azure, [application settings](/azure/azure-functions/functions-how-to-use-azure-function-app-settings) can be used. 
+Azure Cache for Redis triggers and bindings have a required property for the cache connection string. The connection string can be found on the [**Access keys**](/azure/azure-cache-for-redis/cache-configure#access-keys) menu in the Azure Cache for Redis portal. The Redis trigger or binding looks for an environment variable holding the connection string with the name passed to the `Connection` parameter. ++In local development, the `Connection` can be defined using the [local.settings.json](/azure/azure-functions/functions-develop-local#local-settings-file) file. When deployed to Azure, [application settings](/azure/azure-functions/functions-how-to-use-azure-function-app-settings) can be used. ++When connecting to a cache instance with an Azure function, you can use three types of connections in your deployments: Connection string, System-assigned managed identity, and User-assigned managed identity. ++For local development, you can also use service principal secrets. ++Use the `appsettings` to configure each of the following types of client authentication, assuming the `Connection` was set to `Redis` in the function. ++### Connection string ++```JSON +"Redis": "<cacheName>.redis.cache.windows.net:6380,password=..." +``` ++### System-assigned managed identity ++```JSON +"Redis:redisHostName": "<cacheName>.redis.cache.windows.net", +"Redis:principalId": "<principalId>" +``` ++### User-assigned managed identity ++```JSON +"Redis:redisHostName": "<cacheName>.redis.cache.windows.net", +"Redis:principalId": "<principalId>", +"Redis:clientId": "<clientId>" +``` ++### Service Principal Secret ++Connections using Service Principal Secrets are only available during local development. ++```JSON +"Redis:redisHostName": "<cacheName>.redis.cache.windows.net", +"Redis:principalId": "<principalId>", +"Redis:clientId": "<clientId>", +"Redis:tenantId": "<tenantId>", +"Redis:clientSecret": "<clientSecret>" +``` ## Related content |
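For local development, the same `Redis` connection setting shown in the `appsettings` examples above can be placed in `local.settings.json`. A minimal sketch, assuming the connection-string form of authentication and a .NET isolated worker; adjust `FUNCTIONS_WORKER_RUNTIME` for your language and substitute your own cache values.

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "Redis": "<cacheName>.redis.cache.windows.net:6380,password=..."
  }
}
```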
azure-health-insights | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/get-started.md | The Service URL to access your service is: https://```YOUR-NAME```.cognitiveserv To send an API request, you need your Azure AI services account endpoint and key. -You can find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job). ++<!-- You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/radiology-insights/create-job). --> ++ ![[Screenshot of the Keys and Endpoints for the Radiology Insights.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox) Ocp-Apim-Subscription-Key: {cognitive-services-account-key} } ``` -You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job). +<!-- You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/radiology-insights/create-job). --> ++++ ### Evaluating a response that contains a case http://{cognitive-services-account-endpoint}/health-insights/radiology-insights/ "status": "succeeded" } ```-You can find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/get-job). +<!-- You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/radiology-insights/get-job). --> ## Data limits |
azure-health-insights | Inferences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/inferences.md | -This document describes details of all inferences generated by application of RI to a radiology document. +This document describes the details of all inferences generated by application of RI to a radiology document. The Radiology Insights feature of Azure Health Insights uses natural language processing techniques to process unstructured medical radiology documents. It adds several types of inferences that help the user to effectively monitor, understand, and improve financial and clinical outcomes in a radiology workflow context. The types of inferences currently supported by the system are: AgeMismatch, SexM -To interact with the Radiology-Insights model, you can provide several model configuration parameters that modify the outcome of the responses. One of the configurations is ΓÇ£inferenceTypesΓÇ¥, which can be used if only part of the Radiology Insights inferences is required. If this list is omitted or empty, the model returns all the inference types. +To interact with the Radiology-Insights model, you can provide several model configuration parameters that modify the outcome of the responses. One of the configurations is "inferenceTypes", which can be used if only part of the Radiology Insights inferences is required. If this list is omitted or empty, the model returns all the inference types. ```json "configuration" : { To interact with the Radiology-Insights model, you can provide several model con An age mismatch occurs when the document gives a certain age for the patient, which differs from the age that is calculated based on the patientΓÇÖs info birthdate and the encounter period in the request. - kind: RadiologyInsightsInferenceType.AgeMismatch; -<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-inference-age-mismatch-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-age-mismatch-json-response.md)]-</details> + **Laterality Discrepancy** A laterality mismatch is mostly flagged when the orderedProcedure is for a body part with a laterality and the text refers to the opposite laterality.-Example: ΓÇ£x-ray right footΓÇ¥, ΓÇ£left foot is normalΓÇ¥ +Example: "x-ray right foot", "left foot is normal" - kind: RadiologyInsightsInferenceType.LateralityDiscrepancy - LateralityIndication: FHIR.R4.CodeableConcept - DiscrepancyType: LateralityDiscrepancyType There are three possible discrepancy types:-- ΓÇ£orderLateralityMismatchΓÇ¥ means that the laterality in the text conflicts with the one in the order.-- ΓÇ£textLateralityContradictionΓÇ¥ means that there's a body part with left or right in the finding section, and the same body part occurs with the opposite laterality in the impression section.-- ΓÇ£textLateralityMissingΓÇ¥ means that the laterality mentioned in the order never occurs in the text.+- "orderLateralityMismatch" means that the laterality in the text conflicts with the one in the order. +- "textLateralityContradiction" means that there's a body part with left or right in the finding section, and the same body part occurs with the opposite laterality in the impression section. +- "textLateralityMissing" means that the laterality mentioned in the order never occurs in the text. The lateralityIndication is a FHIR.R4.CodeableConcept. 
There are two possible values (SNOMED codes): The lateralityIndication is a FHIR.R4.CodeableConcept. There are two possible va The meaning of this field is as follows: - For orderLateralityMismatch: concept in the text that the laterality was flagged for. - For textLateralityContradiction: concept in the impression section that the laterality was flagged for.-- For ΓÇ£textLateralityMissingΓÇ¥, this field isn't filled in.+- For "textLateralityMissing", this field isn't filled in. ++A mismatch with discrepancy type "textLaterityMissing" has no token extensions. -A mismatch with discrepancy type ΓÇ£textLaterityMissingΓÇ¥ has no token extensions. +Examples request/response json: -<details><summary>Examples request/response json</summary> [!INCLUDE [Example input json](../includes/example-inference-laterality-discrepancy-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-laterality-discrepancy-json-response.md)]-</details> + A mismatch with discrepancy type ΓÇ£textLaterityMissingΓÇ¥ has no token extensio This mismatch occurs when the document gives a different sex for the patient than stated in the patientΓÇÖs info in the request. If the patient info contains no sex, then the mismatch can also be flagged when there's contradictory language about the patientΓÇÖs sex in the text. - kind: RadiologyInsightsInferenceType.SexMismatch - sexIndication: FHIR.R4.CodeableConcept -Field ΓÇ£sexIndicationΓÇ¥ contains one coding with a SNOMED concept for either MALE (FINDING) if the document refers to a male or FEMALE (FINDING) if the document refers to a female: +Field "sexIndication" contains one coding with a SNOMED concept for either MALE (FINDING) if the document refers to a male or FEMALE (FINDING) if the document refers to a female: - 248153007: MALE (FINDING) - 248152002: FEMALE (FINDING) -<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-inference-sex-mismatch-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-sex-mismatch-json-response.md)]-</details> + CompleteOrderDiscrepancy is created if there's a complete orderedProcedure - mea - MissingBodyParts: Array FHIR.R4.CodeableConcept - missingBodyPartMeasurements: Array FHIR.R4.CodeableConcept -Field ΓÇ£ordertypeΓÇ¥ contains one Coding, with one of the following Loinc codes: +Field "ordertype" contains one Coding, with one of the following Loinc codes: - 24558-9: US Abdomen - 24869-0: US Pelvis - 24531-6: US Retroperitoneum - 24601-7: US breast -Fields ΓÇ£missingBodyPartsΓÇ¥ and/or ΓÇ£missingBodyPartsMeasurementsΓÇ¥ contain body parts (radlex codes) that are missing or whose measurements are missing. The token extensions refer to body parts or measurements that are present (or words that imply them). +Fields "missingBodyParts" and/or "missingBodyPartsMeasurements" contain body parts (radlex codes) that are missing or whose measurements are missing. The token extensions refer to body parts or measurements that are present (or words that imply them). 
-<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-inference-complete-order-discrepancy-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-complete-order-discrepancy-json-response.md)]-</details> + This inference is created if there's a limited order, meaning that not all body - PresentBodyParts: Array FHIR.R4.CodeableConcept - PresentBodyPartMeasurements: Array FHIR.R4.CodeableConcept -Field ΓÇ£ordertypeΓÇ¥ contains one Coding, with one of the following Loinc codes: +Field "ordertype" contains one Coding, with one of the following Loinc codes: - 24558-9: US Abdomen - 24869-0: US Pelvis - 24531-6: US Retroperitoneum - 24601-7: US breast -Fields ΓÇ£presentBodyPartsΓÇ¥ and/or ΓÇ£presentBodyPartsMeasurementsΓÇ¥ contain body parts (radlex codes) that are present or whose measurements are present. The token extensions refer to body parts or measurements that are present (or words that imply them). +Fields "presentBodyParts" and/or "presentBodyPartsMeasurements" contain body parts (radlex codes) that are present or whose measurements are present. The token extensions refer to body parts or measurements that are present (or words that imply them). -<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-inference-limited-order-discrepancy-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-limited-order-discrepancy-json-response.md)]-</details> + **Finding** -This inference is created for a medical problem (for example ΓÇ£acute infection of the lungsΓÇ¥) or for a characteristic or a nonpathologic finding of a body part (for example ΓÇ£stomach normalΓÇ¥). +This inference is created for a medical problem (for example "acute infection of the lungs") or for a characteristic or a nonpathologic finding of a body part (for example "stomach normal"). - kind: RadiologyInsightsInferenceType.finding - finding: FHIR.R4.Observation Finding: Section and ci_sentence-Next to the token extensions, there can be an extension with url ΓÇ£sectionΓÇ¥. This extension has an inner extension with a display name that describes the section. The inner extension can also have a LOINC code. -There can also be an extension with url ΓÇ£ci_sentenceΓÇ¥. This extension refers to the sentence containing the first token of the clinical indicator (that is, the medical problem), if any. The generation of such a sentence is switchable. +Next to the token extensions, there can be an extension with url "section". This extension has an inner extension with a display name that describes the section. The inner extension can also have a LOINC code. +There can also be an extension with url "ci_sentence". This extension refers to the sentence containing the first token of the clinical indicator (that is, the medical problem), if any. The generation of such a sentence is switchable. 
-Finding: fields within field ΓÇ£findingΓÇ¥ -list of fields within field ΓÇ£findingΓÇ¥, except ΓÇ£componentΓÇ¥: -- status: is always set to ΓÇ£unknownΓÇ¥-- resourceType: is always set to "ObservationΓÇ¥+Finding: fields within field "finding" +list of fields within field "finding", except "component": +- status: is always set to "unknown" +- resourceType: is always set to "Observation" - interpretation: contains a sublist of the following SNOMED codes: - 7147002: NEW (QUALIFIER VALUE) - 36692007: KNOWN (QUALIFIER VALUE) list of fields within field ΓÇ£findingΓÇ¥, except ΓÇ£componentΓÇ¥: - 263730007: CONTINUAL (QUALIFIER VALUE) In this list, the string before the colon is the code, and the string after the colon is the display name.-If the value is ΓÇ£NONE (QUALIFIER VALUE)ΓÇ¥, the finding is absent. This value is, for example, ΓÇ£no sepsisΓÇ¥. -category: if filled, this field contains an array with one element. It contains one of the following SNOMED concepts: -- 439401001: DIAGNOSIS (OBSERVABLE ENTITY)-- 404684003: CLINICAL FINDING (FINDING)-- 162432007: SYMPTOM: GENERALIZED (FINDING)-- 246501002: TECHNIQUE (ATTRIBUTE)-- 91722005: PHYSICAL ANATOMICAL ENTITY (BODY STRUCTURE)+If the value is "NONE (QUALIFIER VALUE)", the finding is absent. This value is, for example, "no sepsis". code: - SNOMED code 404684003: CLINICAL FINDING (FINDING) (meaning that the finding has a clinical indicator) or - SNOMED code 123037004: BODY STRUCTURE (BODY STRUCTURE) (no clinical indicator.) -Finding: field ΓÇ£componentΓÇ¥ -Much relevant information is in the components. The componentΓÇÖs ΓÇ£codeΓÇ¥ field contains one CodeableConcept with one SNOMED code. +Finding: field "component" +Much relevant information is in the components. The componentΓÇÖs "code" field contains one CodeableConcept with one SNOMED code. Component description: (some of the components are optional) -Finding: component ΓÇ£subject of informationΓÇ¥ -This component has SNOMED code 131195008: SUBJECT OF INFORMATION (ATTRIBUTE). It also has the ΓÇ£valueCodeableConceptΓÇ¥ field filled. The value is a SNOMED code describing the medical problem that the finding pertains to. -At least one ΓÇ£subject of informationΓÇ¥ component is present if and only if the ΓÇ£finding.codeΓÇ¥ field has 404684003: CLINICAL FINDING (FINDING). There can be several "subject of informationΓÇ¥ components, with different concepts in the ΓÇ£valueCodeableConceptΓÇ¥ field. +Finding: component "subject of information" +This component has SNOMED code 131195008: SUBJECT OF INFORMATION (ATTRIBUTE). It also has the "valueCodeableConcept" field filled. The value is a SNOMED code describing the medical problem that the finding pertains to. +At least one "subject of information" component is present if and only if the "finding.code" field has 404684003: CLINICAL FINDING (FINDING). There can be several "subject of information" components, with different concepts in the "valueCodeableConcept" field. -Finding: component ΓÇ£anatomyΓÇ¥ -Zero or more components with SNOMED code ΓÇ£722871000000108: ANATOMY (QUALIFIER VALUE)ΓÇ¥. This component has field ΓÇ£valueCodeConceptΓÇ¥ filled with a SNOMED or radlex code. For example, for ΓÇ£lung infectionΓÇ¥ this component contains a code for the lungs. +Finding: component "anatomy" +Zero or more components with SNOMED code "722871000000108: ANATOMY (QUALIFIER VALUE)". This component has field "valueCodeConcept" filled with a SNOMED or radlex code. For example, for "lung infection" this component contains a code for the lungs. 
-Finding: component ΓÇ£regionΓÇ¥ -Zero or more components with SNOMED code 45851105: REGION (ATTRIBUTE). Like anatomy, this component has field ΓÇ£valueCodeableConceptΓÇ¥ filled with a SNOMED or radlex code. Such a concept refers to the body region of the anatomy. For example, if the anatomy is a code for the vagina, the region may be a code for the female reproductive system. +Finding: component "region" +Zero or more components with SNOMED code 45851105: REGION (ATTRIBUTE). Like anatomy, this component has field "valueCodeableConcept" filled with a SNOMED or radlex code. Such a concept refers to the body region of the anatomy. For example, if the anatomy is a code for the vagina, the region may be a code for the female reproductive system. -Finding: component ΓÇ£lateralityΓÇ¥ -Zero or more components with code 45651917: LATERALITY (ATTRIBUTE). Each has field ΓÇ£valueCodeableConceptΓÇ¥ set to a SNOMED concept pertaining to the laterality of the finding. For example, this component is filled for a finding pertaining to the right arm. +Finding: component "laterality" +Zero or more components with code 45651917: LATERALITY (ATTRIBUTE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to the laterality of the finding. For example, this component is filled for a finding pertaining to the right arm. -Finding: component ΓÇ£change valuesΓÇ¥ -Zero or more components with code 288533004: CHANGE VALUES (QUALIFIER VALUE). Each has field ΓÇ£valueCodeableConceptΓÇ¥ set to a SNOMED concept pertaining to a size change in the finding (for example, a nodule that is growing or decreasing). +Finding: component "change values" +Zero or more components with code 288533004: CHANGE VALUES (QUALIFIER VALUE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to a size change in the finding (for example, a nodule that is growing or decreasing). -Finding: component ΓÇ£percentageΓÇ¥ -At most one component with code 45606679: PERCENT (PROPERTY) (QUALIFIER VALUE). It has field ΓÇ£valueStringΓÇ¥ set with either a value or a range consisting of a lower and upper value, separated by ΓÇ£-ΓÇ£. +Finding: component "percentage" +At most one component with code 45606679: PERCENT (PROPERTY) (QUALIFIER VALUE). It has field "valueString" set with either a value or a range consisting of a lower and upper value, separated by "-". -Finding: component ΓÇ£severityΓÇ¥ -At most one component with code 272141005: SEVERITIES (QUALIFIER VALUE), indicating how severe the medical problem is. It has field ΓÇ£valueCodeableConceptΓÇ¥ set with a SNOMED code from the following list: +Finding: component "severity" +At most one component with code 272141005: SEVERITIES (QUALIFIER VALUE), indicating how severe the medical problem is. It has field "valueCodeableConcept" set with a SNOMED code from the following list: - 255604002: MILD (QUALIFIER VALUE) - 6736007: MODERATE (SEVERITY MODIFIER) (QUALIFIER VALUE) - 24484000: SEVERE (SEVERITY MODIFIER) (QUALIFIER VALUE) - 371923003: MILD TO MODERATE (QUALIFIER VALUE) - 371924009: MODERATE TO SEVERE (QUALIFIER VALUE) -Finding: component ΓÇ£chronicityΓÇ¥ -At most one component with code 246452003: CHRONICITY (ATTRIBUTE), indicating whether the medical problem is chronic or acute. It has field ΓÇ£valueCodeableConceptΓÇ¥ set with a SNOMED code from the following list: +Finding: component "chronicity" +At most one component with code 246452003: CHRONICITY (ATTRIBUTE), indicating whether the medical problem is chronic or acute. 
It has field "valueCodeableConcept" set with a SNOMED code from the following list: - 255363002: SUDDEN (QUALIFIER VALUE) - 90734009: CHRONIC (QUALIFIER VALUE) - 19939008: SUBACUTE (QUALIFIER VALUE) - 255212004: ACUTE-ON-CHRONIC (QUALIFIER VALUE) -Finding: component ΓÇ£causeΓÇ¥ -At most one component with code 135650694: CAUSES OF HARM (QUALIFIER VALUE), indicating what the cause is of the medical problem. It has field ΓÇ£valueStringΓÇ¥ set to the strings of one or more tokens from the text, separated by ΓÇ£;;ΓÇ¥. +Finding: component "cause" +At most one component with code 135650694: CAUSES OF HARM (QUALIFIER VALUE), indicating what the cause is of the medical problem. It has field "valueString" set to the strings of one or more tokens from the text, separated by ";;". -Finding: component ΓÇ£qualifier valueΓÇ¥ +Finding: component "qualifier value" Zero or more components with code 362981000: QUALIFIER VALUE (QUALIFIER VALUE). This component refers to a feature of the medical problem. Every component has either:-- Field ΓÇ£valueStringΓÇ¥ set with token strings from the text, separated by ΓÇ£;;ΓÇ¥-- Or field ΓÇ£valueCodeableConceptΓÇ¥ set to a SNOMED code+- Field "valueString" set with token strings from the text, separated by ";;" +- Or field "valueCodeableConcept" set to a SNOMED code - Or no field set (then the meaning can be retrieved from the token extensions (rare occurrence)) -Finding: component ΓÇ£multipleΓÇ¥ -Exactly one component with code 46150521: MULTIPLE (QUALIFIER VALUE). It has field ΓÇ£valueBooleanΓÇ¥ set to true or false. This component indicates the difference between, for example, one nodule (multiple is false) or several nodules (multiple is true). This component has no token extensions. +Finding: component "multiple" +Exactly one component with code 46150521: MULTIPLE (QUALIFIER VALUE). It has field "valueBoolean" set to true or false. This component indicates the difference between, for example, one nodule (multiple is false) or several nodules (multiple is true). This component has no token extensions. -Finding: component ΓÇ£sizeΓÇ¥ -Zero or more components with code 246115007, "SIZE (ATTRIBUTE)". Even if there's just one size for a finding, there are several components if the size has two or three dimensions, for example, ΓÇ£2.1 x 3.3 cmΓÇ¥ or ΓÇ£1.2 x 2.2 x 1.5 cmΓÇ¥. There's a size component for every dimension. -Every component has field ΓÇ£interpretationΓÇ¥ set to either SNOMED code 15240007: CURRENT or 9130008: PREVIOUS, depending on whether the size was measured during this visit or in the past. -Every component has either field ΓÇ£valueQuantityΓÇ¥ or ΓÇ£valueRangeΓÇ¥ set. -If ΓÇ£valueQuantityΓÇ¥ is set, then ΓÇ£valueQuantity.valueΓÇ¥ is always set. In most cases, ΓÇ£valueQuantity.unitΓÇ¥ is set. It's possible that ΓÇ£valueQuantity.comparatorΓÇ¥ is also set, to either ΓÇ£>ΓÇ¥, ΓÇ£<ΓÇ¥, ΓÇ£>=ΓÇ¥ or ΓÇ£<=ΓÇ¥. For example, the component is set to ΓÇ£<=ΓÇ¥ for ΓÇ£the tumor is up to 2 cmΓÇ¥. -If ΓÇ£valueRangeΓÇ¥ is set, then ΓÇ£valueRange.lowΓÇ¥ and ΓÇ£valueRange.highΓÇ¥ are set to quantities with the same data as described in the previous paragraph. This field contains, for example, ΓÇ£The tumor is between 2.5 cm and 2.6 cm in size". +Finding: component "size" +Zero or more components with code 246115007, "SIZE (ATTRIBUTE)". Even if there's just one size for a finding, there are several components if the size has two or three dimensions, for example, "2.1 x 3.3 cm" or "1.2 x 2.2 x 1.5 cm". There's a size component for every dimension. 
+Every component has field "interpretation" set to either SNOMED code 15240007: CURRENT or 9130008: PREVIOUS, depending on whether the size was measured during this visit or in the past. +Every component has either field "valueQuantity" or "valueRange" set. +If "valueQuantity" is set, then "valueQuantity.value" is always set. In most cases, "valueQuantity.unit" is set. It's possible that "valueQuantity.comparator" is also set, to either ">", "<", ">=" or "<=". For example, the component is set to "<=" for "the tumor is up to 2 cm". +If "valueRange" is set, then "valueRange.low" and "valueRange.high" are set to quantities with the same data as described in the previous paragraph. This field contains, for example, "The tumor is between 2.5 cm and 2.6 cm in size". -<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-inference-finding-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-finding-json-response.md)]-</details> + This inference is made for a new medical problem that requires attention within - kind: RadiologyInsightsInferenceType.criticalResult - result: CriticalResult -Field ΓÇ£result.descriptionΓÇ¥ gives a description of the medical problem, for example ΓÇ£MALIGNANCYΓÇ¥. -Field ΓÇ£result.findingΓÇ¥, if set, contains the same information as the ΓÇ£findingΓÇ¥ field in a finding inference. +Field "result.description" gives a description of the medical problem, for example "MALIGNANCY". +Field "result.finding", if set, contains the same information as the "finding" field in a finding inference. Next to token extensions, there can be an extension for a section. This field contains the most specific section that the first token of the critical result is in (or to be precise, the first token that is in a section). This section is in the same format as a section for a finding. -<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-inference-critical-result-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-critical-result-json-response.md)]-</details> + recommendedProcedure: ProcedureRecommendation - follow up Recommendation: sentences Next to the token extensions, there can be an extension containing sentences. This behavior is switchable. - follow up Recommendation: boolean fields-ΓÇ£isHedgingΓÇ¥ mean that the recommendation is uncertain, for example, ΓÇ£a follow-up could be doneΓÇ¥. ΓÇ£isConditionalΓÇ¥ is for input like ΓÇ£If the patient continues having pain, an MRI should be performed.ΓÇ¥ -ΓÇ£isOptionsΓÇ¥: is also for conditional input. -ΓÇ£isGuidelineΓÇ¥ means that the recommendation is in a general guideline like the following: +"isHedging" mean that the recommendation is uncertain, for example, "a follow-up could be done". "isConditional" is for input like "If the patient continues having pain, an MRI should be performed." +"isOptions": is also for conditional input. +"isGuideline" means that the recommendation is in a general guideline like the following: BI-RADS CATEGORIES: - (0) Incomplete: Needs more imaging evaluation BI-RADS CATEGORIES: - (6) Known biopsy-proven malignancy - follow up Recommendation: effectiveDateTime and effectivePeriod-Field ΓÇ£effectiveDateTimeΓÇ¥ will be set when the procedure needs to be done (recommended) at a specific point in time. For example, ΓÇ£next WednesdayΓÇ¥. 
Field ΓÇ£effectivePeriodΓÇ¥ will be set if a specific period is mentioned, with a start and end datetime. For example, for ΓÇ£within six monthsΓÇ¥, the start datetime will be the date of service, and the end datetime will be the day six months after that. +Field "effectiveDateTime" will be set when the procedure needs to be done (recommended) at a specific point in time. For example, "next Wednesday". Field "effectivePeriod" will be set if a specific period is mentioned, with a start and end datetime. For example, for "within six months", the start datetime will be the date of service, and the end datetime will be the day six months after that. - follow up Recommendation: findings-If set, field ΓÇ£findingsΓÇ¥ contains one or more findings that have to do with the recommendation. For example, a leg scan (procedure) can be recommended because of leg pain (finding). -Every array element of field ΓÇ£findingsΓÇ¥ is a RecommendationFinding. Field RecommendationFinding.finding has the same information as a FindingInference.finding field. -For field ΓÇ£RecommendationFinding.RecommendationFindingStatusΓÇ¥, see the OpenAPI specification for the possible values. -Field ΓÇ£RecommendationFinding.criticalFindingΓÇ¥ is set if a critical result is associated with the finding. It then contains the same information as described for a critical result inference. +If set, field "findings" contains one or more findings that have to do with the recommendation. For example, a leg scan (procedure) can be recommended because of leg pain (finding). +Every array element of field "findings" is a RecommendationFinding. Field RecommendationFinding.finding has the same information as a FindingInference.finding field. +For field "RecommendationFinding.RecommendationFindingStatus", see the OpenAPI specification for the possible values. +Field "RecommendationFinding.criticalFinding" is set if a critical result is associated with the finding. It then contains the same information as described for a critical result inference. - follow up Recommendation: recommended procedure-Field ΓÇ£recommendedProcedureΓÇ¥ is either a GenericProcedureRecommendation, or an ImagingProcedureRecommendation. (Type ΓÇ£procedureRecommendationΓÇ¥ is a supertype for these two types.) +Field "recommendedProcedure" is either a GenericProcedureRecommendation, or an ImagingProcedureRecommendation. (Type "procedureRecommendation" is a supertype for these two types.) A GenericProcedureRecommendation has the following:-- Field ΓÇ£kindΓÇ¥ has value ΓÇ£genericProcedureRecommendationΓÇ¥-- Field ΓÇ£descriptionΓÇ¥ has either value ΓÇ£MANAGEMENT PROCEDURE (PROCEDURE)ΓÇ¥ or ΓÇ£CONSULTATION (PROCEDURE)ΓÇ¥-- Field ΓÇ£codeΓÇ¥ only contains an extension with tokens+- Field "kind" has value "genericProcedureRecommendation" +- Field "description" has either value "MANAGEMENT PROCEDURE (PROCEDURE)" or "CONSULTATION (PROCEDURE)" +- Field "code" only contains an extension with tokens An ImagingProcedureRecommendation has the following:-- Field ΓÇ£kindΓÇ¥ has value ΓÇ£imagingProcedureRecommendationΓÇ¥-- Field ΓÇ£imagingProceduresΓÇ¥ contains an array with one element of type ImagingProcedure. +- Field "kind" has value "imagingProcedureRecommendation" +- Field "imagingProcedures" contains an array with one element of type ImagingProcedure. 
This type has the following fields, the first 2 of which are always filled:-- ΓÇ£modalityΓÇ¥: a CodeableConcept containing at most one coding with a SNOMED code.-- ΓÇ£anatomyΓÇ¥: a CodeableConcept containing at most one coding with a SNOMED code.-- ΓÇ£laterality: a CodeableConcept containing at most one coding with a SNOMED code.-- ΓÇ£contrastΓÇ¥: not set.-- ΓÇ£viewΓÇ¥: not set.+- "modality": a CodeableConcept containing at most one coding with a SNOMED code. +- "anatomy": a CodeableConcept containing at most one coding with a SNOMED code. +- "laterality: a CodeableConcept containing at most one coding with a SNOMED code. +- "contrast": not set. +- "view": not set. -<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-1-inference-follow-up-recommendation-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-1-inference-follow-up-recommendation-json-response.md)]-</details> + This inference is created when findings or test results were communicated to a m - recipient: Array MedicalProfessionalType - wasAcknowledged: boolean -Field ΓÇ£wasAcknowledgedΓÇ¥ is set to true if the communication was verbal (nonverbal communication might not have reached the recipient yet and cannot be considered acknowledged). Field ΓÇ£dateTimeΓÇ¥ is set if the date-time of the communication is known. Field ΓÇ£recipientΓÇ¥ is set if the recipient(s) are known. See the OpenAPI spec for its possible values. +Field "wasAcknowledged" is set to true if the communication was verbal (nonverbal communication might not have reached the recipient yet and cannot be considered acknowledged). Field "dateTime" is set if the date-time of the communication is known. Field "recipient" is set if the recipient(s) are known. See the OpenAPI spec for its possible values. ++Examples request/response json: -<details><summary>Examples request/response json</summary> [!INCLUDE [Example input json](../includes/example-inference-follow-up-communication-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-follow-up-communication-json-response.md)]-</details> + This inference is for the ordered radiology procedure(s). - imagingProcedures: Array ImagingProcedure - orderedProcedure: OrderedProcedure -Field ΓÇ£imagingProceduresΓÇ¥ contains one or more instances of an imaging procedure, as documented for the follow up recommendations. -Field ΓÇ£procedureCodesΓÇ¥, if set, contains LOINC codes. -Field ΓÇ£orderedProcedureΓÇ¥ contains the description(s) and the code(s) of the ordered procedure(s) as given by the client. The descriptions are in field ΓÇ£orderedProcedure.descriptionΓÇ¥, separated by ΓÇ£;;ΓÇ¥. The codes are in ΓÇ£orderedProcedure.code.codingΓÇ¥. In every coding in the array, only field ΓÇ£codingΓÇ¥ is set. +Field "imagingProcedures" contains one or more instances of an imaging procedure, as documented for the follow up recommendations. +Field "procedureCodes", if set, contains LOINC codes. +Field "orderedProcedure" contains the description(s) and the code(s) of the ordered procedure(s) as given by the client. The descriptions are in field "orderedProcedure.description", separated by ";;". The codes are in "orderedProcedure.code.coding". In every coding in the array, only field "coding" is set. 
-<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-inference-radiology-procedure-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-radiology-procedure-json-response.md)]-</details> + |
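To put the `inferenceTypes` configuration described at the top of this article into context, the following sketch asks the model to return only finding and critical result inferences with evidence. The enum strings follow the kind names listed above, but the exact casing and surrounding request schema are assumptions for illustration; check the OpenAPI specification for the authoritative names.

```json
{
  "configuration": {
    "inferenceTypes": ["finding", "criticalResult"],
    "includeEvidence": true
  }
}
```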
azure-health-insights | Model Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/model-configuration.md | false | No Evidence is returned **FollowupRecommendationOptions** - includeRecommendationsWithNoSpecifiedModality - type: boolean- - description: Include/Exclude follow-up recommendations with no specific radiologic modality, default is false. + - description: To include or exclude follow-up recommendations with no specific radiologic modality. Default is false. - includeRecommendationsInReferences - type: boolean- - description: Include/Exclude follow-up recommendations in references to a guideline or article, default is false. + - description: To include or exclude follow-up recommendations in references to a guideline or article. Default is false. - provideFocusedSentenceEvidence - type: boolean - description: Provide a single focused sentence as evidence for the recommendation, default is false. -When includeEvidence is false, no evidence is returned. -This configuration overrules includeRecommendationsWithNoSpecifiedModality and provideFocusedSentenceEvidence and no evidence is shown. +IncludeEvidence ++- IncludeEvidence +- type: boolean +- Provide evidence for the inference, default is false with no evidence returned. + -When includeEvidence is true, it depends on the value set on the two other configurations whether the evidence of the inference or a single focused sentence is given as evidence. ## Examples When includeEvidence is true, it depends on the value set on the two other confi CDARecommendation_GuidelineFalseUnspecTrueLimited -The includeRecommendationsWithNoSpecifiedModality is true, includeRecommendationsInReferences is false, provideFocusedSentenceEvidence for recommendations is true and includeEvidence is true. +- includeRecommendationsWithNoSpecifiedModality is true +- includeRecommendationsInReferences are false +- provideFocusedSentenceEvidence for recommendations is true +- includeEvidence is true As a result, the model includes evidence for all inferences. - The model checks for follow-up recommendations with a specified modality. - The model checks for follow-up recommendations with no specific radiologic modality. - The model provides a single focused sentence as evidence for the recommendation. -<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-2-inference-follow-up-recommendation-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-2-inference-follow-up-recommendation-json-response.md)]-</details> + As a result, the model includes evidence for all inferences. CDARecommendation_GuidelineTrueUnspecFalseLimited -The includeRecommendationsWithNoSpecifiedModality is false, includeRecommendationsInReferences is true, provideFocusedSentenceEvidence for findings is true and includeEvidence is true. +- includeRecommendationsWithNoSpecifiedModality is false +- includeRecommendationsInReferences are true +- provideFocusedSentenceEvidence for findings is true +- includeEvidence is true As a result, the model includes evidence for all inferences. - The model checks for follow-up recommendations with a specified modality. As a result, the model includes evidence for all inferences. - The model provides a single focused sentence as evidence for the finding. 
-<details><summary>Examples request/response json</summary> +Examples request/response json: + [!INCLUDE [Example input json](../includes/example-1-inference-follow-up-recommendation-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-1-inference-follow-up-recommendation-json-response.md)]-</details> + |
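To tie the two examples above back to concrete settings, here is a sketch of how the first configuration (CDARecommendation_GuidelineFalseUnspecTrueLimited) might be expressed in a request. The nesting under `inferenceOptions` and `followupRecommendationOptions` is an assumption based on the option names in this article; verify it against the OpenAPI specification.

```json
{
  "configuration": {
    "inferenceOptions": {
      "followupRecommendationOptions": {
        "includeRecommendationsWithNoSpecifiedModality": true,
        "includeRecommendationsInReferences": false,
        "provideFocusedSentenceEvidence": true
      }
    },
    "includeEvidence": true
  }
}
```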
azure-maps | Azure Maps Qps Rate Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md | The following list shows the QPS usage limits for each Azure Maps service by Pri | Copyright service | 10 | 10 | 10 | | Creator - Alias, TilesetDetails | 10 | Not Available | Not Available | | Creator - Conversion, Dataset, Feature State, Features, Map Configuration, Style, Routeset, Wayfinding | 50 | Not Available | Not Available |-| Data service (Deprecated<sup>1</sup>) | 50 | 50 | Not Available | -| Data registry service | 50 | 50 | Not Available | +| Data registry service | 50 | 50 | Not Available | +| Data service (Deprecated<sup>1</sup>) | 50 | 50 | Not Available | | Geolocation service | 50 | 50 | 50 |-| Render service - Traffic tiles and Static maps | 50 | 50 | 50 | | Render service - Road tiles | 500 | 500 | 50 | | Render service - Satellite tiles | 250 | 250 | Not Available |+| Render service - Static maps | 50 | 50 | 50 | +| Render service - Traffic tiles | 50 | 50 | 50 | | Render service - Weather tiles | 100 | 100 | 50 | | Route service - Batch | 10 | 10 | Not Available | | Route service - Non-Batch | 50 | 50 | 50 | | Search service - Batch | 10 | 10 | Not Available | | Search service - Non-Batch | 500 | 500 | 50 | | Search service - Non-Batch Reverse | 250 | 250 | 50 |-| Spatial service | 50 | 50 | Not Available | +| Spatial service | 50 | 50 | Not Available | | Timezone service | 50 | 50 | 50 | | Traffic service | 50 | 50 | 50 | | Weather service | 50 | 50 | 50 | |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | -This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on premise servers with Azure Arc agent installed). +This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on-premises servers with Azure Arc agent installed). We strongly recommended to always update to the latest version, or opt in to the [Automatic Extension Update](../../virtual-machines/automatic-extension-upgrade.md) feature. We strongly recommended to always update to the latest version, or opt in to the ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|+| February 2024 | **Windows**<ul><li>Fix memory leak in IIS log collection</li><li>Fix json parsing with unicode characters for some ingestion endpoints</li><li>Allow Client installer to run on AVD DevBox partner</li><li>Enable TLS 1.3 on supported Windows versions</li><li>Enable Agent Side Aggregation for Private Preview</li><li>Update MetricsExtension package to 2.2024.202.2043</li><li>Update AzureSecurityPack.Geneva package to 4.31</li></ul>**Linux**<ul><li></li></ul> | 1.24.0 | Coming soon | | January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 won't install on Arc enabled servers. **This issue was fixed in 1.29.6**</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li><li>Reduced noise generated by AMAs' use of semanage when SELinux is enabled"</li></ul> | 1.23.0 | 1.29.5, 1.29.6 | | December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 won't install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. 
feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4| | October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multitenant mode</li><li>AMA installer won't install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | The following features and services now have an Azure Monitor Agent version (som | Service or feature | Migration recommendation | Current state | More information | | : | : | : | : |-| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally Available | [Enable VM Insights](../vm/vminsights-enable-overview.md) | -| [Container insights](../containers/container-insights-overview.md) | Migrate to Azure Monitor Agent | **Linux**: Generally available<br>**Windows**:Public preview | [Enable Container Insights](../containers/container-insights-onboard.md) | -| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | See [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). Only CEF and Firewall collection remain for GA status | -| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally Available | [Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) | -| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally Available | [Monitor network connectivity using Azure Monitor agent with connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | -| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally Available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) | -| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally Available | [Use Azure Virtual Desktop Insights to monitor your deployment](../../virtual-desktop/insights.md#session-host-data-settings) | +| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally Available | [Enable VM Insights](../vm/vminsights-enable-overview.md) | +| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). 
| +| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally Available | [Migration for Change Tracking and inventory](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) | +| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally Available | [Monitor network connectivity using connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | +| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally Available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) | +| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally Available | [Azure Virtual Desktop Insights](../../virtual-desktop/insights.md#session-host-data-settings) | | [Container Monitoring Solution](../containers/containers.md) | Migrate to new service called Container Insights with Azure Monitor Agent | Generally Available | [Enable Container Insights](../containers/container-insights-transition-solution.md) | | [DNS Collector](../../sentinel/connect-dns-ama.md) | Use new Sentinel Connector | Generally Available | [Enable DNS Connector](../../sentinel/connect-dns-ama.md)| -> [!NOTE] -> Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available. - When you migrate the following services, which currently use Log Analytics agent, to their respective replacements (v2), you no longer need either of the monitoring agents: | Service | Migration recommendation | Current state | More information | | : | : | : | : |-| [Microsoft Defender for Cloud, Servers, SQL, and Endpoint](../../security-center/security-center-introduction.md) | Migrate to Microsoft Defender for Cloud (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](../../defender-for-cloud/upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation)| +| [Microsoft Defender for Cloud, Servers, SQL, and Endpoint](../../security-center/security-center-introduction.md) | Migrate to Microsoft Defender for Cloud (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Defender for Cloud plan for Log Analytics agent deprecation](../../defender-for-cloud/upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation)| | [Update Management](../../automation/update-management/overview.md) | Migrate to Azure Update Manager (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Update Manager documentation](../../update-manager/update-manager-faq.md#la-agent-also-known-as-mma-is-retiring-and-will-be-replaced-with-ama-is-it-necessary-to-move-to-update-manager-or-can-i-continue-to-use-automation-update-management-with-ama) |-| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Migrate an existing Agent based to Extension based Hybrid 
Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) | +| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Migrate to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) | ++## Known parity gaps for solutions that may impact your migration +- ***Sentinel***: CEF and Windows firewall logs are not yet generally available (GA). +- ***SQL Assessment Solution***: This is now part of SQL best practice assessment. The deployment policies require one Log Analytics workspace per subscription, which isn't the best practice recommended by the AMA team. +- ***Microsoft Defender for Cloud***: Some features of the new agentless solution are still in development. Your migration may be impacted if you use FIM, Endpoint protection discovery recommendations, OS Misconfigurations (ASB recommendations), or Adaptive Application Controls. +- ***Container Insights***: The Windows version is in public preview. ## Frequently asked questions |
azure-monitor | Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md | public class HomeController : Controller For more information about custom data reporting in Application Insights, see [Application Insights custom metrics API reference](./api-custom-events-metrics.md). A similar approach can be used for sending custom metrics to Application Insights by using the [GetMetric API](./get-metric.md). +### How do I capture Request and Response body in my telemetry? ++ASP.NET Core has [built-in support](https://learn.microsoft.com/aspnet/core/fundamentals/http-logging) for logging HTTP Request/Response information (including body) via [`ILogger`](#ilogger-logs), and we recommend using it. Be aware that logging request and response bodies can expose personally identifiable information (PII) in telemetry and can significantly increase costs (both performance overhead and Application Insights billing), so evaluate the risks carefully before enabling it. + ### How do I customize ILogger logs collection? The default setting for Application Insights is to only capture **Warning** and more severe logs. If the SDK is installed at build time as shown in this article, you don't need t Yes. Feature support for the SDK is the same in all platforms, with the following exceptions: * The SDK collects [event counters](./eventcounters.md) on Linux because [performance counters](./performance-counters.md) are only supported in Windows. Most metrics are the same.-* Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel. --### [ASP.NET Core 6.0](#tab/netcore6) --```csharp -using Microsoft.ApplicationInsights.Channel; -using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel; --var builder = WebApplication.CreateBuilder(args); --// The following will configure the channel to use the given folder to temporarily -// store telemetry items during network or Application Insights server issues. -// User should ensure that the given folder already exists -// and that the application has read/write permissions. -builder.Services.AddSingleton(typeof(ITelemetryChannel), - new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); -builder.Services.AddApplicationInsightsTelemetry(); --var app = builder.Build(); -``` --### [ASP.NET Core 3.1](#tab/netcore3) --```csharp -using Microsoft.ApplicationInsights.Channel; -using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel; --public void ConfigureServices(IServiceCollection services) -{ - // The following will configure the channel to use the given folder to temporarily - // store telemetry items during network or Application Insights server issues. - // User should ensure that the given folder already exists - // and that the application has read/write permissions. - services.AddSingleton(typeof(ITelemetryChannel), - new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); - services.AddApplicationInsightsTelemetry(); -} -``` --> [!NOTE] -> This .NET version is no longer supported. ----This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later. 
-### Is this SDK supported for the new .NET Core 3.X Worker Service template applications? +### Is this SDK supported for Worker Services? -This SDK requires `HttpContext`. It doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications by using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md). +No. Use [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md) instead. ### How can I uninstall the SDK? |
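The new FAQ entry above leans on ASP.NET Core's built-in HTTP logging rather than an Application Insights-specific API. The following is a minimal sketch of that approach, assuming .NET 6+ minimal hosting and the Microsoft.ApplicationInsights.AspNetCore package; the body-size limits and route are illustrative only:

```csharp
using Microsoft.AspNetCore.HttpLogging;

var builder = WebApplication.CreateBuilder(args);

// Application Insights forwards ILogger output, so HTTP logs surface as trace telemetry.
builder.Services.AddApplicationInsightsTelemetry();

// Opt in to body logging explicitly and cap the captured size; bodies can contain PII.
builder.Services.AddHttpLogging(logging =>
{
    logging.LoggingFields = HttpLoggingFields.RequestPath
                          | HttpLoggingFields.RequestBody
                          | HttpLoggingFields.ResponseBody;
    logging.RequestBodyLogLimit = 4096;
    logging.ResponseBodyLogLimit = 4096;
});

var app = builder.Build();

app.UseHttpLogging();
app.MapGet("/", () => "Hello");

app.Run();
```

HTTP logging writes at Information level under the `Microsoft.AspNetCore.HttpLogging` category, while Application Insights captures only Warning and above by default (see the ILogger question above), so that category's minimum level likely needs to be lowered in logging configuration before the bodies reach Application Insights.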
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | Application Insights SDKs for .NET and .NET Core ship with `DependencyTrackingTe |[Azure Blob Storage, Table Storage, or Queue Storage](https://www.nuget.org/packages/WindowsAzure.Storage/) | Calls made with the Azure Storage client. | |[Azure Event Hubs client SDK](https://nuget.org/packages/Azure.Messaging.EventHubs) | Use the latest package: https://nuget.org/packages/Azure.Messaging.EventHubs. | |[Azure Service Bus client SDK](https://nuget.org/packages/Azure.Messaging.ServiceBus)| Use the latest package: https://nuget.org/packages/Azure.Messaging.ServiceBus. |-|[Azure Cosmos DB](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | Tracked automatically if HTTP/HTTPS is used. TCP will also be captured automatically using preview package >= [3.33.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview). | +|[Azure Cosmos DB](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | Tracked automatically if HTTP/HTTPS is used. Tracing for operations in direct mode with TCP will also be captured automatically using preview package >= [3.33.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview). For more information, see the [documentation](../../cosmos-db/nosql/sdk-observability.md). | If you're missing a dependency or using a different SDK, make sure it's in the list of [autocollected dependencies](#dependency-auto-collection). If the dependency isn't autocollected, you can track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency). |
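Where a dependency isn't autocollected, you can wrap the outbound call with the manual track-dependency call mentioned above. A small sketch follows; the wrapper class, dependency type name, and store being called are hypothetical:

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

public class ProprietaryStoreClient
{
    private readonly TelemetryClient _telemetry;

    public ProprietaryStoreClient(TelemetryClient telemetry) => _telemetry = telemetry;

    public void Query(string commandText)
    {
        var startTime = DateTimeOffset.UtcNow;
        var timer = Stopwatch.StartNew();
        var success = false;
        try
        {
            // ... call the dependency that the SDK doesn't autocollect ...
            success = true;
        }
        finally
        {
            timer.Stop();
            // Emits a dependency record that appears alongside the autocollected ones.
            _telemetry.TrackDependency("MyStore", "Query", commandText, startTime, timer.Elapsed, success);
        }
    }
}
```

Taking `TelemetryClient` from dependency injection, rather than constructing a new one, keeps the manual records on the same configured telemetry pipeline as the autocollected data.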
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | Links are provided to more information for each supported scenario. |Azure App Service on Linux - Publish as Docker | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | |Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | |Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |-|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: | +|Azure Spring Apps | :x: | :x: | [ :white_check_mark: :link: ](../../spring-apps/enterprise/how-to-application-insights.md) | :x: | :x: | |Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | |Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) ² ³ | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) ² ³ | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | |On-premises VMs Windows | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) ³ | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) ² ³ | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | |
azure-monitor | Java Get Started Supplemental | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md | For more information, see [Use Application Insights Java In-Process Agent in Azu ### Docker entry point -If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: +If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: ```-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.19.jar", "-jar", "<myapp.jar>"] +ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.5.0.jar", "-jar", "<myapp.jar>"] ``` -If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` somewhere before `-jar`, for example: +If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` somewhere before `-jar`, for example: ```-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.19.jar" -jar <myapp.jar> +ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.5.0.jar" -jar <myapp.jar> ``` FROM ... COPY target/*.jar app.jar -COPY agent/applicationinsights-agent-3.4.19.jar applicationinsights-agent-3.4.19.jar +COPY agent/applicationinsights-agent-3.5.0.jar applicationinsights-agent-3.5.0.jar COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING" -ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.19.jar", "-jar", "app.jar"] +ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.5.0.jar", "-jar", "app.jar"] ``` -In this example we have copied the `applicationinsights-agent-3.4.19.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container. +In this example we have copied the `applicationinsights-agent-3.5.0.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container. ### Third-party container images For information on setting up the Application Insights Java agent, see [Enabling If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.19.jar" +JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.5.0.jar" ``` #### Tomcat installed via download and unzip JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.19.jar" If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.19.jar" +CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.5.0.jar" ``` -If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to `CATALINA_OPTS`. 
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to `CATALINA_OPTS`. ### Tomcat 8 (Windows) If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `- Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.19.jar +set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.5.0.jar ``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.19.jar" +set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.5.0.jar" ``` -If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to `CATALINA_OPTS`. +If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to `CATALINA_OPTS`. #### Run Tomcat as a Windows service -Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the `Java Options` under the `Java` tab. +Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the `Java Options` under the `Java` tab. ### JBoss EAP 7 #### Standalone server -Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows): +Add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows): ```java ...- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.19.jar -Xms1303m -Xmx1303m ..." + JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.5.0.jar -Xms1303m -Xmx1303m ..." ... ``` #### Domain server -Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`: +Add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`: ```xml ... Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `j <jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->- <option value="-javaagent:path/to/applicationinsights-agent-3.4.19.jar"/> + <option value="-javaagent:path/to/applicationinsights-agent-3.5.0.jar"/> <option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options> Add these lines to `start.ini`: ``` --exec--javaagent:path/to/applicationinsights-agent-3.4.19.jar+-javaagent:path/to/applicationinsights-agent-3.5.0.jar ``` ### Payara 5 -Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`: +Add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`: ```xml ... 
<java-config ...> <!--Edit the JVM options here--> <jvm-options>- -javaagent:path/to/applicationinsights-agent-3.4.19.jar> + -javaagent:path/to/applicationinsights-agent-3.5.0.jar> </jvm-options> ... </java-config> Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `j 1. In `Generic JVM arguments`, add the following JVM argument: ```- -javaagent:path/to/applicationinsights-agent-3.4.19.jar + -javaagent:path/to/applicationinsights-agent-3.5.0.jar ``` 1. Save and restart the application server. Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `j Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.19.jar+-javaagent:path/to/applicationinsights-agent-3.5.0.jar ``` ### Others |
azure-monitor | Java Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md | There are two options for enabling Application Insights Java with Spring Boot: J ## Enabling with JVM argument -Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` somewhere before `-jar`, for example: +Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` somewhere before `-jar`, for example: ```-java -javaagent:"path/to/applicationinsights-agent-3.4.19.jar" -jar <myapp.jar> +java -javaagent:"path/to/applicationinsights-agent-3.5.0.jar" -jar <myapp.jar> ``` ### Spring Boot via Docker entry point To enable Application Insights Java programmatically, you must add the following <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>- <version>3.4.19</version> + <version>3.5.0</version> </dependency> ``` First, add the `applicationinsights-core` dependency: <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>- <version>3.4.19</version> + <version>3.5.0</version> </dependency> ``` |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | More information and configuration options are provided in the following section ## Configuration file path -By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.19.jar`. +By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.5.0.jar`. You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property -If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.4.19.jar` is located. +If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.5.0.jar` is located. Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`. Or you can set the connection string by using the Java system property `applicat You can also set the connection string by specifying a file to load the connection string from. -If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.4.19.jar` is located. +If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.5.0.jar` is located. ```json { and add `applicationinsights-core` to your application: <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>- <version>3.4.19</version> + <version>3.5.0</version> </dependency> ``` In the preceding configuration example: * `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where-`applicationinsights-agent-3.4.19.jar` is located. +`applicationinsights-agent-3.5.0.jar` is located. Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration. |
azure-monitor | Java Standalone Upgrade From 2X | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md | There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.19.jar+-javaagent:path/to/applicationinsights-agent-3.5.0.jar ``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example. |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | dotnet add package Azure.Monitor.OpenTelemetry.Exporter #### [Java](#tab/java) -Download the [applicationinsights-agent-3.4.19.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.19/applicationinsights-agent-3.4.19.jar) file. +Download the [applicationinsights-agent-3.5.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.5.0/applicationinsights-agent-3.5.0.jar) file. > [!WARNING] > var loggerFactory = LoggerFactory.Create(builder => Java autoinstrumentation is enabled through configuration changes; no code changes are required. -Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` to your application's JVM args. +Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` to your application's JVM args. > [!TIP] > Sampling is enabled by default at a rate of 5 requests per second, aiding in cost management. Telemetry data may be missing in scenarios exceeding this rate. For more information on modifying sampling configuration, see [sampling overrides](./java-standalone-sampling-overrides.md). To paste your Connection String, select from the following options: B. Set via Configuration File - Java Only (Recommended) - Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.19.jar` with the following content: + Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.5.0.jar` with the following content: ```json { |
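For the .NET path that this diff also touches (`dotnet add package Azure.Monitor.OpenTelemetry.Exporter`), the exporter plugs into an OpenTelemetry tracer provider roughly as sketched below; the activity source name and connection string are placeholders, and this is only one of the supported wirings:

```csharp
using Azure.Monitor.OpenTelemetry.Exporter;
using OpenTelemetry;
using OpenTelemetry.Trace;

// Export spans from the named ActivitySource to Application Insights via Azure Monitor.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyCompany.MyProduct.MyLibrary")
    .AddAzureMonitorTraceExporter(options =>
    {
        options.ConnectionString = "<your-connection-string>";
    })
    .Build();
```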
azure-vmware | Concepts Private Clouds Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md | Title: Concepts - Private clouds and clusters description: Understand the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 1/16/2024 Last updated : 3/1/2024 A private cloud includes clusters with: - Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor - VMware vCenter Server for managing ESXi and vSAN-- VMware NSX-T Data Center software-defined networking for vSphere workload VMs+- VMware NSX software-defined networking for vSphere workload VMs - VMware vSAN datastore for vSphere workload VMs - VMware HCX for workload mobility - Resources in the Azure underlay (required for connectivity and to operate the private cloud) Each Azure VMware Solution architectural component has the following function: - Azure Subscription: Provides controlled access, budget, and quota management for the Azure VMware Solution. - Azure Region: Groups data centers into Availability Zones (AZs) and then groups AZs into regions. - Azure Resource Group: Places Azure services and resources into logical groups.-- Azure VMware Solution Private Cloud: Offers compute, networking, and storage resources using VMware software, including vCenter Server, NSX-T Data Center software-defined networking, vSAN software-defined storage, and Azure bare-metal ESXi hosts. Azure NetApp Files, Azure Elastic SAN, and Pure Cloud Block Store are also supported.+- Azure VMware Solution Private Cloud: Offers compute, networking, and storage resources using VMware software, including vCenter Server, NSX software-defined networking, vSAN software-defined storage, and Azure bare-metal ESXi hosts. Azure NetApp Files, Azure Elastic SAN, and Pure Cloud Block Store are also supported. - Azure VMware Solution Resource Cluster: Provides compute, networking, and storage resources for customer workloads by scaling out the Azure VMware Solution private cloud using VMware software, including vSAN software-defined storage and Azure bare-metal ESXi hosts. Azure NetApp Files, Azure Elastic SAN, and Pure Cloud Block Store are also supported. - VMware HCX: Delivers mobility, migration, and network extension services. - VMware Site Recovery: Automates disaster recovery and storage replication services with VMware vSphere Replication. Third-party disaster recovery solutions Zerto Disaster Recovery and JetStream Software Disaster Recovery are also supported. Azure VMware Solution monitors the following conditions on the host: ## Backup and restore -Azure VMware Solution private cloud vCenter Server, NSX-T Data Center, and HCX Manager (if enabled) configurations are on a daily backup schedule. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration. +Azure VMware Solution private cloud vCenter Server, NSX, and HCX Manager (if enabled) configurations are on a daily backup schedule. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration. > [!NOTE] > Restorations are intended for catastrophic situations only. |
batch | Batch Automatic Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md | Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 08/23/2023 Last updated : 02/29/2024 You can get the value of these service-defined variables to make adjustments tha | $TaskSlotsPerNode |The number of task slots that can be used to run concurrent tasks on a single compute node in the pool. | | $CurrentDedicatedNodes |The current number of dedicated compute nodes. | | $CurrentLowPriorityNodes |The current number of Spot compute nodes, including any nodes that have been preempted. |+| $UsableNodeCount | The number of usable compute nodes. | | $PreemptedNodeCount | The number of nodes in the pool that are in a preempted state. | > [!WARNING] You can get the value of these service-defined variables to make adjustments tha > date, these service-defined variables will no longer be populated with sample data. Please discontinue use of these variables > before this date. -> [!WARNING] -> `$PreemptedNodeCount` is currently not available and returns `0` valued data. - > [!NOTE] > Use `$RunningTasks` when scaling based on the number of tasks running at a point in time, and `$ActiveTasks` when scaling based on the number of tasks that are queued up to run. $runningTasksSample = $RunningTasks.GetSample(60 * TimeInterval_Second, 120 * Ti Because there might be a delay in sample availability, you should always specify a time range with a look-back start time that's older than one minute. It takes approximately one minute for samples to propagate through the system, so samples in the range `(0 * TimeInterval_Second, 60 * TimeInterval_Second)` might not be available. Again, you can use the percentage parameter of `GetSample()` to force a particular sample percentage requirement. > [!IMPORTANT]-> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." Since it's only a single sample, and it might be an older sample, it might not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on. +> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you had, no matter how long ago you retrieved it." Since it's only a single sample, and it might be an older sample, it might not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on. ## Write an autoscale formula |
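As a concrete illustration of the `GetSample` guidance above, the formula below uses `GetSample(1)` only as a fallback inside a larger statement and otherwise averages a 15-minute window that must contain at least 70 percent of the expected samples. The C# wrapper is a sketch that assumes the Batch .NET SDK (`Microsoft.Azure.Batch`); the account values, pool ID, and 25-node cap are placeholders:

```csharp
using System;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// GetSample(1) appears only as a fallback when fewer than 70% of samples exist;
// otherwise the 15-minute average drives the target node count.
const string formula = @"
$samples = $ActiveTasks.GetSamplePercent(TimeInterval_Minute * 15);
$tasks = $samples < 70 ? max(0, $ActiveTasks.GetSample(1)) :
         max($ActiveTasks.GetSample(1), avg($ActiveTasks.GetSample(TimeInterval_Minute * 15)));
$targetVMs = $tasks > 0 ? $tasks : max(0, $TargetDedicatedNodes / 2);
$TargetDedicatedNodes = max(0, min($targetVMs, 25));
$NodeDeallocationOption = taskcompletion;";

using BatchClient client = BatchClient.Open(
    new BatchSharedKeyCredentials("<batch-account-url>", "<account-name>", "<account-key>"));

// Apply the formula to an existing pool and re-evaluate it every 15 minutes.
await client.PoolOperations.EnableAutoScaleAsync("mypool", formula, TimeSpan.FromMinutes(15));
```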
batch | Batch Pool No Public Ip Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md | - Title: Create an Azure Batch pool without public IP addresses (preview) -description: Learn how to create an Azure Batch pool without public IP addresses. - Previously updated : 05/30/2023----# Create a Batch pool without public IP addresses (preview) --> [!WARNING] -> This preview version will be retired on **31 March 2023**, and will be replaced by -> [Simplified node communication pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md). -> For more information, see the [Retirement Migration Guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md). --> [!IMPORTANT] -> - Support for pools without public IP addresses in Azure Batch is currently in public preview for the following regions: France Central, East Asia, West Central US, South Central US, West US 2, East US, North Europe, East US 2, Central US, West Europe, North Central US, West US, Australia East, Japan East, Japan West. -> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). --When you create an Azure Batch pool, you can provision the virtual machine configuration pool without a public IP address. This article explains how to set up a Batch pool without public IP addresses. --## Why use a pool without public IP addresses? --By default, all the compute nodes in an Azure Batch virtual machine configuration pool are assigned a public IP address. This address is used by the Batch service to schedule tasks and for communication with compute nodes, including outbound access to the internet. --To restrict access to these nodes and reduce the discoverability of these nodes from the internet, you can provision the pool without public IP addresses. --## Prerequisites --- **Authentication**. To use a pool without public IP addresses inside a [virtual network](./batch-virtual-network.md), the Batch client API must use Microsoft Entra authentication. Azure Batch support for Microsoft Entra ID is documented in [Authenticate Azure Batch services with Microsoft Entra ID](batch-aad-auth.md). If you aren't creating your pool within a virtual network, either Microsoft Entra authentication or key-based authentication can be used.--- **An Azure VNet**. If you're creating your pool in a [virtual network](batch-virtual-network.md), follow these requirements and configurations. To prepare a VNet with one or more subnets in advance, you can use the Azure portal, Azure PowerShell, the Azure CLI, or other methods.-- - The VNet must be in the same subscription and region as the Batch account you use to create your pool. -- - The subnet specified for the pool must have enough unassigned IP addresses to accommodate the number of VMs targeted for the pool; that is, the sum of the `targetDedicatedNodes` and `targetLowPriorityNodes` properties of the pool. If the subnet doesn't have enough unassigned IP addresses, the pool partially allocates the compute nodes, and a resize error occurs. -- - You must disable private link service and endpoint network policies. 
This action can be done by using Azure CLI: -- `az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resource-group <resourcegroup> --disable-private-endpoint-network-policies --disable-private-link-service-network-policies` --> [!IMPORTANT] -> For each 100 dedicated or Spot nodes, Batch allocates one private link service and one load balancer. These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). For large pools, you might need to [request a quota increase](batch-quota-limit.md#increase-a-quota) for one or more of these resources. Additionally, no resource locks should be applied to any resource created by Batch, since this prevent cleanup of resources as a result of user-initiated actions such as deleting a pool or resizing to zero. --## Current limitations --1. Pools without public IP addresses must use Virtual Machine Configuration and not Cloud Services Configuration. -1. [Custom endpoint configuration](pool-endpoint-configuration.md) to Batch compute nodes doesn't work with pools without public IP addresses. -1. Because there are no public IP addresses, you can't [use your own specified public IP addresses](create-pool-public-ip.md) with this type of pool. -1. [Basic VM size](../virtual-machines/sizes-previous-gen.md#basic-a) doesn't work with pools without public IP addresses. --## Create a pool without public IP addresses in the Azure portal --1. Navigate to your Batch account in the Azure portal. -1. In the **Settings** window on the left, select **Pools**. -1. In the **Pools** window, select **Add**. -1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown. -1. Select the correct **Publisher/Offer/Sku** of your image. -1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, and any desired optional settings. -1. Optionally select a virtual network and subnet you wish to use. This virtual network must be in the same resource group as the pool you're creating. -1. In **IP address provisioning type**, select **NoPublicIPAddresses**. --![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/batch-pool-no-public-ip-address/create-pool-without-public-ip-address.png) --## Use the Batch REST API to create a pool without public IP addresses --The example below shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool that uses public IP addresses. 
--### REST API URI --```http -POST {batchURL}/pools?api-version=2020-03-01.11.0 -client-request-id: 00000000-0000-0000-0000-000000000000 -``` --### Request body --```json -"pool": { - "id": "pool2", - "vmSize": "standard_a1", - "virtualMachineConfiguration": { - "imageReference": { - "publisher": "Canonical", - "offer": "UbuntuServer", - "sku": "20.04-lts" - }, - "nodeAgentSKUId": "batch.node.ubuntu 20.04" - } - "networkConfiguration": { - "subnetId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Network/virtualNetworks/<your_vnet_name>/subnets/<your_subnet_name>", - "publicIPAddressConfiguration": { - "provision": "NoPublicIPAddresses" - } - }, - "resizeTimeout": "PT15M", - "targetDedicatedNodes": 5, - "targetLowPriorityNodes": 0, - "taskSlotsPerNode": 3, - "taskSchedulingPolicy": { - "nodeFillType": "spread" - }, - "enableAutoScale": false, - "enableInterNodeCommunication": true, - "metadata": [ - { - "name": "myproperty", - "value": "myvalue" - } - ] -} -``` --> [!Important] -> This document references a release version of Linux that is nearing or at, End of Life(EOL). Please consider updating to a more current version. --## Outbound access to the internet --In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated. --Another way to provide outbound connectivity is to use a user-defined route (UDR). This method lets you route traffic to a proxy machine that has public internet access. --## Next steps --- Learn more about [creating pools in a virtual network](batch-virtual-network.md).-- Learn how to [use private endpoints with Batch accounts](private-connectivity.md). |
batch | Batch Pool Vm Sizes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-vm-sizes.md | Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 02/13/2023 Last updated : 02/29/2024 # Choose a VM size and image for compute nodes in an Azure Batch pool az batch location list-skus --location <azure-region> ``` > [!TIP]-> Batch **does not** support any VM SKU sizes that have only remote storage. A local temporary disk is required for Batch. -> For example, Batch supports [ddv4 and ddsv4](../virtual-machines/ddv4-ddsv4-series.md), but does not support -> [dv4 and dsv4](../virtual-machines/dv4-dsv4-series.md). +> It's recommended to avoid VM SKUs/families with impending Batch support end of life (EOL) dates. These dates can be discovered +> via the [`ListSupportedVirtualMachineSkus` API](/rest/api/batchmanagement/location/list-supported-virtual-machine-skus), +> [PowerShell](/powershell/module/az.batch/get-azbatchsupportedvirtualmachinesku), +> or [Azure CLI](/cli/azure/batch/location#az-batch-location-list-skus). +> For more information, see the [Batch best practices guide](best-practices.md) regarding Batch pool VM SKU selection. ++Batch **doesn't** support any VM SKU sizes that have only remote storage. A local temporary disk is required for Batch. +For example, Batch supports [ddv4 and ddsv4](../virtual-machines/ddv4-ddsv4-series.md), but does not support +[dv4 and dsv4](../virtual-machines/dv4-dsv4-series.md). ### Using Generation 2 VM Images For example, using the Azure CLI, you can obtain the list of supported VM images az batch pool supported-images list ``` -It's recommended to avoid images with impending Batch support end of life (EOL) dates. These dates can be discovered via -the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), -[PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). -For more information, see the [Batch best practices guide](best-practices.md) regarding Batch pool VM image selection. +> [!TIP] +> It's recommended to avoid images with impending Batch support end of life (EOL) dates. These dates can be discovered via +> the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), +> [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). +> For more information, see the [Batch best practices guide](best-practices.md) regarding Batch pool VM image selection. ## Next steps |
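The EOL guidance above can also be checked from code rather than the CLI. A sketch with the Batch .NET SDK (`Microsoft.Azure.Batch`), assuming the supported-image listing exposes the end-of-life date as `BatchSupportEndOfLife` the way the service API does; the credentials are placeholders:

```csharp
using System;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

using BatchClient client = BatchClient.Open(
    new BatchSharedKeyCredentials("<batch-account-url>", "<account-name>", "<account-key>"));

// Flag images whose Batch support ends within the next 180 days so pools can be moved off them.
foreach (ImageInformation image in client.PoolOperations.ListSupportedImages())
{
    if (image.BatchSupportEndOfLife.HasValue &&
        image.BatchSupportEndOfLife.Value < DateTime.UtcNow.AddDays(180))
    {
        Console.WriteLine(
            $"{image.ImageReference.Publisher}/{image.ImageReference.Offer}/{image.ImageReference.Sku} " +
            $"(agent {image.NodeAgentSkuId}) EOL: {image.BatchSupportEndOfLife.Value:d}");
    }
}
```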
batch | Batch Rendering Application Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-application-reference.md | - Title: Use rendering applications -description: How to use rendering applications with Azure Batch. This article provides a brief description of how to run each rendering application. Previously updated : 08/02/2018----# Rendering applications --Rendering applications are used by creating Batch jobs and tasks. The task command line property specifies the appropriate command line and parameters. The easiest way to create the job tasks is to use the Batch Explorer templates as specified in [this article](./batch-rendering-using.md#using-batch-explorer). The templates can be viewed and modified versions created if necessary. --This article provides a brief description of how to run each rendering application. --## Rendering with Autodesk 3ds Max --### Renderer support --In addition to the renderers built into 3ds Max, the following renderers are available on the rendering VM images and can be referenced by the 3ds Max scene file: --* Autodesk Arnold -* Chaos Group V-Ray --### Task command line --Invoke the `3dsmaxcmdio.exe` application to perform command line rendering on a pool node. This application is on the path when the task is run. The `3dsmaxcmdio.exe` application has the same available parameters as the `3dsmaxcmd.exe` application, which is documented in the [3ds Max help documentation](https://help.autodesk.com/view/3DSMAX/2018/ENU/) (Rendering | Command-Line Rendering section). --For example: --``` -3dsmaxcmdio.exe -v:5 -rfw:0 -start:{0} -end:{0} -bitmapPath:"%AZ_BATCH_JOB_PREP_WORKING_DIR%\sceneassets\images" -outputName:dragon.jpg -w:1280 -h:720 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scenes\dragon.max" -``` --Notes: --* Great care must be taken to ensure the asset files are found. Ensure the paths are correct and relative using the **Asset Tracking** window, or use the `-bitmapPath` parameter on the command line. -* See if there are issues with the render, such as inability to find assets, by checking the `stdout.txt` file written by 3ds Max when the task is run. --### Batch Explorer templates --Pool and job templates can be accessed from the **Gallery** in Batch Explorer. The template source files are available in the [Batch Explorer data repository on GitHub](https://github.com/Azure/BatchExplorer-data/tree/master/ncj/3dsmax). --## Rendering with Autodesk Maya --### Renderer support --In addition to the renderers built into Maya, the following renderers are available on the rendering VM images and can be referenced by the 3ds Max scene file: --* Autodesk Arnold -* Chaos Group V-Ray --### Task command line --The `renderer.exe` command-line renderer is used in the task command line. The command-line renderer is documented in [Maya help](https://help.autodesk.com/view/MAYAUL/2018/ENU/?guid=GUID-EB558BC0-5C2B-439C-9B00-F97BCB9688E4). --In the following example, a job preparation task is used to copy the scene files and assets to the job preparation working directory, an output folder is used to store the rendering image, and frame 10 is rendered. --``` -render -renderer sw -proj "%AZ_BATCH_JOB_PREP_WORKING_DIR%" -verb -rd "%AZ_BATCH_TASK_WORKING_DIR%\output" -s 10 -e 10 -x 1920 -y 1080 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scene-file.ma" -``` --For V-Ray rendering, the Maya scene file would normally specify V-Ray as the renderer. 
It can also be specified on the command line: --``` -render -renderer vray -proj "%AZ_BATCH_JOB_PREP_WORKING_DIR%" -verb -rd "%AZ_BATCH_TASK_WORKING_DIR%\output" -s 10 -e 10 -x 1920 -y 1080 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scene-file.ma" -``` --For Arnold rendering, the Maya scene file would normally specify Arnold as the renderer. It can also be specified on the command line: --``` -render -renderer arnold -proj "%AZ_BATCH_JOB_PREP_WORKING_DIR%" -verb -rd "%AZ_BATCH_TASK_WORKING_DIR%\output" -s 10 -e 10 -x 1920 -y 1080 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scene-file.ma" -``` --### Batch Explorer templates --Pool and job templates can be accessed from the **Gallery** in Batch Explorer. The template source files are available in the [Batch Explorer data repository on GitHub](https://github.com/Azure/BatchExplorer-data/tree/master/ncj/maya). --## Next steps --Use the pool and job templates from the [data repository in GitHub](https://github.com/Azure/BatchExplorer-data/tree/master/ncj) using Batch Explorer. When required, create new templates or modify one of the supplied templates. |
batch | Batch Rendering Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-applications.md | - Title: Rendering applications -description: It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed. Previously updated : 03/12/2021----# Pre-installed applications on Batch rendering VM images --> [!CAUTION] -> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. --It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed. --Where applicable, pay-for-use licensing is available for the pre-installed rendering applications. When a Batch pool is created, the required applications can be specified and both the cost of VM and applications will be billed per minute. Application prices are listed on the [Azure Batch pricing page](https://azure.microsoft.com/pricing/details/batch/#graphic-rendering). --Some applications only support Windows, but most are supported on both Windows and Linux. --> [!WARNING] -> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing) --## Applications on latest CentOS 7 rendering image --The following list applies to the CentOS rendering image, version 1.2.0. --* Autodesk Maya I/O 2020 Update 4.6 -* Autodesk Arnold for Maya 2020 (Arnold version 6.2.0.0) MtoA-4.2.0-2020 -* Chaos Group V-Ray for Maya 2020 (version 5.00.21) -* Blender (2.80) -* AZ 10 --## Applications on latest Windows Server rendering image --The following list applies to the Windows Server rendering image, version 1.5.0. --* Autodesk Maya I/O 2020 Update 4.4 -* Autodesk 3ds Max I/O 2021 Update 3 -* Autodesk Arnold for Maya 2020 (Arnold version 6.1.0.1) MtoA-4.1.1.1-2020 -* Autodesk Arnold for 3ds Max 2021 (Arnold version 6.1.0.1) MAXtoA-4.2.2.20-2021 -* Chaos Group V-Ray for Maya 2020 (version 5.00.21) -* Chaos Group V-Ray for 3ds Max 2021 (version 5.00.05) -* Blender (2.79) -* Blender (2.80) -* AZ 10 --> [!IMPORTANT] -> To run V-Ray with Maya outside of the [Azure Batch extension templates](https://github.com/Azure/batch-extension-templates), start `vrayses.exe` before running the render. To start the vrayses.exe outside of the templates you can use the following command `%MAYA_2020%\vray\bin\vrayses.exe"`. -> -> For an example, see the start task of the [Maya and V-Ray template](https://github.com/Azure/batch-extension-templates/blob/master/templates/maya/render-vray-windows/pool.template.json) on GitHub. --## Applications on previous Windows Server rendering images --The following list applies to Windows Server 2016, version 1.3.8 rendering images. 
--* Autodesk Maya I/O 2017 Update 5 (version 17.4.5459) -* Autodesk Maya I/O 2018 Update 6 (version 18.4.0.7622) -* Autodesk Maya I/O 2019 -* Autodesk 3ds Max I/O 2018 Update 4 (version 20.4.0.4254) -* Autodesk 3ds Max I/O 2019 Update 1 (version 21.2.0.2219) -* Autodesk 3ds Max I/O 2020 Update 2 -* Autodesk Arnold for Maya 2017 (Arnold version 5.3.0.2) MtoA-3.2.0.2-2017 -* Autodesk Arnold for Maya 2018 (Arnold version 5.3.0.2) MtoA-3.2.0.2-2018 -* Autodesk Arnold for Maya 2019 (Arnold version 5.3.0.2) MtoA-3.2.0.2-2019 -* Autodesk Arnold for 3ds Max 2018 (Arnold version 5.3.0.2)(version 1.2.926) -* Autodesk Arnold for 3ds Max 2019 (Arnold version 5.3.0.2)(version 1.2.926) -* Autodesk Arnold for 3ds Max 2020 (Arnold version 5.3.0.2)(version 1.2.926) -* Chaos Group V-Ray for Maya 2017 (version 4.12.01) -* Chaos Group V-Ray for Maya 2018 (version 4.12.01) -* Chaos Group V-Ray for Maya 2019 (version 4.04.03) -* Chaos Group V-Ray for 3ds Max 2018 (version 4.20.01) -* Chaos Group V-Ray for 3ds Max 2019 (version 4.20.01) -* Chaos Group V-Ray for 3ds Max 2020 (version 4.20.01) -* Blender (2.79) -* Blender (2.80) -* AZ 10 --The following list applies to Windows Server 2016, version 1.3.7 rendering images. --* Autodesk Maya I/O 2017 Update 5 (version 17.4.5459) -* Autodesk Maya I/O 2018 Update 4 (version 18.4.0.7622) -* Autodesk 3ds Max I/O 2019 Update 1 (version 21.2.0.2219) -* Autodesk 3ds Max I/O 2018 Update 4 (version 20.4.0.4254) -* Autodesk Arnold for Maya 2017 (Arnold version 5.2.0.1) MtoA-3.1.0.1-2017 -* Autodesk Arnold for Maya 2018 (Arnold version 5.2.0.1) MtoA-3.1.0.1-2018 -* Autodesk Arnold for 3ds Max 2018 (Arnold version 5.0.2.4)(version 1.2.926) -* Autodesk Arnold for 3ds Max 2019 (Arnold version 5.0.2.4)(version 1.2.926) -* Chaos Group V-Ray for Maya 2018 (version 3.52.03) -* Chaos Group V-Ray for 3ds Max 2018 (version 3.60.02) -* Chaos Group V-Ray for Maya 2019 (version 3.52.03) -* Chaos Group V-Ray for 3ds Max 2019 (version 4.10.01) -* Blender (2.79) --> [!NOTE] -> Chaos Group V-Ray for 3ds Max 2019 (version 4.10.01) introduces breaking changes to V-ray. To use the previous version (version 3.60.02), use Windows Server 2016, version 1.3.2 rendering nodes. --## Applications on previous CentOS rendering images --The following list applies to CentOS 7.6, version 1.1.6 rendering images. --* Autodesk Maya I/O 2017 Update 5 (cut 201708032230) -* Autodesk Maya I/O 2018 Update 2 (cut 201711281015) -* Autodesk Maya I/O 2019 Update 1 -* Autodesk Arnold for Maya 2017 (Arnold version 5.3.1.1) MtoA-3.2.1.1-2017 -* Autodesk Arnold for Maya 2018 (Arnold version 5.3.1.1) MtoA-3.2.1.1-2018 -* Autodesk Arnold for Maya 2019 (Arnold version 5.3.1.1) MtoA-3.2.1.1-2019 -* Chaos Group V-Ray for Maya 2017 (version 3.60.04) -* Chaos Group V-Ray for Maya 2018 (version 3.60.04) -* Blender (2.68) -* Blender (2.8) --## Next steps --To use the rendering VM images, they need to be specified in the pool configuration when a pool is created; see the [Batch pool capabilities for rendering](./batch-rendering-functionality.md). |
batch | Batch Rendering Functionality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-functionality.md | Title: Rendering capabilities description: Standard Azure Batch capabilities are used to run rendering workloads and apps. Batch includes specific features to support rendering workloads. Previously updated : 12/13/2021 Last updated : 02/28/2024 The task command line strings will need to reference the applications and paths Most rendering applications will require licenses obtained from a license server. If there's an existing on-premises license server, then both the pool and license server need to be on the same [virtual network](../virtual-network/virtual-networks-overview.md). It is also possible to run a license server on an Azure VM, with the Batch pool and license server VM being on the same virtual network. -## Batch pools using rendering VM images --> [!WARNING] -> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing) --### Rendering application installation --An Azure Marketplace rendering VM image can be specified in the pool configuration if only the pre-installed applications need to be used. --There is a Windows image and a CentOS image. In the [Azure Marketplace](https://azuremarketplace.microsoft.com), the VM images can be found by searching for 'batch rendering'. --The Azure portal and Batch Explorer provide GUI tools to select a rendering VM image when you create a pool. If using a Batch API, then specify the following property values for [ImageReference](/rest/api/batchservice/pool/add#imagereference) when creating a pool: --| Publisher | Offer | Sku | Version | -||||--| -| batch | rendering-centos73 | rendering | latest | -| batch | rendering-windows2016 | rendering | latest | --Other options are available if additional applications are required on the pool VMs: +## Batch pools using custom VM images * A custom image from the Azure Compute Gallery: * Using this option, you can configure your VM with the exact applications and specific versions that you require. For more information, see [Create a pool with the Azure Compute Gallery](batch-sig-images.md). Autodesk and Chaos Group have modified Arnold and V-Ray, respectively, to validate against an Azure Batch licensing service. Make sure you have the versions of these applications with this support, otherwise the pay-per-use licensing won't work. Current versions of Maya or 3ds Max don't require a license server when running headless (in batch/command-line mode). Contact Azure support if you're not sure how to proceed with this option. Other options are available if additional applications are required on the pool * Resource files: * Application files are uploaded to Azure blob storage, and you specify file references in the [pool start task](/rest/api/batchservice/pool/add#starttask). When pool VMs are created, the resource files are downloaded onto each VM. -### Pay-for-use licensing for pre-installed applications --The applications that will be used and have a licensing fee need to be specified in the pool configuration. 
--* Specify the `applicationLicenses` property when [creating a pool](/rest/api/batchservice/pool/add#request-body). The following values can be specified in the array of strings - "vray", "arnold", "3dsmax", "maya". -* When you specify one or more applications, then the cost of those applications is added to the cost of the VMs. Application prices are listed on the [Azure Batch pricing page](https://azure.microsoft.com/pricing/details/batch/#graphic-rendering). --> [!NOTE] -> If instead you connect to a license server to use the rendering applications, do not specify the `applicationLicenses` property. --You can use the Azure portal or Batch Explorer to select applications and show the application prices. --If an attempt is made to use an application, but the application hasn't been specified in the `applicationLicenses` property of the pool configuration or does not reach a license server, then the application execution fails with a licensing error and non-zero exit code. --### Environment variables for pre-installed applications --To be able to create the command line for rendering tasks, the installation location of the rendering application executables must be specified. System environment variables have been created on the Azure Marketplace VM images, which can be used instead of having to specify actual paths. These environment variables are in addition to the [standard Batch environment variables](./batch-compute-node-environment-variables.md) created for each task. --|Application|Application Executable|Environment Variable| -|||| -|Autodesk 3ds Max 2021|3dsmaxcmdio.exe|3DSMAX_2021_EXEC| -|Autodesk Maya 2020|render.exe|MAYA_2020_EXEC| -|Chaos Group V-Ray Standalone|vray.exe|VRAY_4.10.03_EXEC| -|Arnold 2020 command line|kick.exe|ARNOLD_2020_EXEC| -|Blender|blender.exe|BLENDER_2018_EXEC| - ## Azure VM families As with other workloads, rendering application system requirements vary, and performance requirements vary for jobs and projects. A large variety of VM families are available in Azure depending on your requirements – lowest cost, best price/performance, best performance, and so on. When the Azure Marketplace VM images are used, then the best practice is to use ## Next steps -* Learn about [using rendering applications with Batch](batch-rendering-applications.md). +* Learn about [Batch rendering services](batch-rendering-service.md). * Learn about [Storage and data movement options for rendering asset and output files](batch-rendering-storage-data-movement.md). |
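The change above moves the guidance toward pools built from custom VM images. As a rough illustration, the following Batch Python SDK sketch creates a pool from an Azure Compute Gallery image; the image ID, VM size, node count, and node agent SKU are placeholders, and `batch_client` is assumed to be a `BatchServiceClient` authenticated with Microsoft Entra ID (shared-key authentication isn't supported for gallery-image pools, per the Shared Image article later in this log).

```python
# Minimal sketch: create a Batch pool from an Azure Compute Gallery image.
# batch_client construction (Entra ID auth) is omitted; all IDs are placeholders.
import azure.batch.models as batchmodels

image_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Compute/galleries/<gallery>/images/<image-definition>/versions/1.0.0"
)

new_pool = batchmodels.PoolAddParameter(
    id="rendering-pool",
    vm_size="Standard_D4s_v3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(virtual_machine_image_id=image_id),
        node_agent_sku_id="batch.node.windows amd64",  # must match the image's OS
    ),
    target_dedicated_nodes=2,
)

batch_client.pool.add(new_pool)
```

If the rendering applications on the image need a license server, the pool should also sit on the same virtual network as that server, as described above.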
batch | Batch Rendering Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-service.md | Title: Rendering overview description: Introduction of using Azure for rendering and an overview of Azure Batch rendering capabilities Previously updated : 12/13/2021 Last updated : 02/29/2024 # Rendering using Azure -Rendering is the process of taking 3D models and converting them into 2D images. 3D scene files are authored in applications such as Autodesk 3ds Max, Autodesk Maya, and Blender. Rendering applications such as Autodesk Maya, Autodesk Arnold, Chaos Group V-Ray, and Blender Cycles produce 2D images. Sometimes single images are created from the scene files. However, it's common to model and render multiple images, and then combine them in an animation. +Rendering is the process of taking 3D models and converting them into 2D images. 3D scene files are authored in applications such as Autodesk 3ds Max, Autodesk Maya, and Blender. Rendering applications such as Autodesk Maya, Autodesk Arnold, Chaos Group V-Ray, and Blender Cycles produce 2D images. Sometimes single images are created from the scene files. However, it's common to model and render multiple images, and then combine them in an animation. The rendering workload is heavily used for special effects (VFX) in the Media and Entertainment industry. Rendering is also used in many other industries such as advertising, retail, oil and gas, and manufacturing. -The process of rendering is computationally intensive; there can be many frames/images to produce and each image can take many hours to render. Rendering is therefore a perfect batch processing workload that can leverage Azure to run many renders in parallel and utilize a wide range of hardware, including GPUs. +The process of rendering is computationally intensive; there can be many frames/images to produce and each image can take many hours to render. Rendering is therefore a perfect batch processing workload that can use Azure to run many renders in parallel and utilize a wide range of hardware, including GPUs. ## Why use Azure for rendering? For many reasons, rendering is a workload perfectly suited for Azure: * Rendering jobs can be split into many pieces that can be run in parallel using multiple VMs:- * Animations consist of many frames and each frame can be rendered in parallel. The more VMs available to process each frame, the faster all the frames and the animation can be produced. - * Some rendering software allows single frames to be broken up into multiple pieces, such as tiles or slices. Each piece can be rendered separately, then combined into the final image when all pieces have finished. The more VMs that are available, the faster a frame can be rendered. + * Animations consist of many frames and each frame can be rendered in parallel. The more VMs available to process each frame, the faster all the frames and the animation can be produced. + * Some rendering software allows single frames to be broken up into multiple pieces, such as tiles or slices. Each piece can be rendered separately, then combined into the final image when all pieces are finished. The more VMs that are available, the faster a frame can be rendered. * Rendering projects can require huge scale:- * Individual frames can be complex and require many hours to render, even on high-end hardware; animations can consist of hundreds of thousands of frames. A huge amount of compute is required to render high-quality animations in a reasonable amount of time. 
In some cases, over 100,000 cores have been used to render thousands of frames in parallel. + * Individual frames can be complex and require many hours to render, even on high-end hardware; animations can consist of hundreds of thousands of frames. A huge amount of compute is required to render high-quality animations in a reasonable amount of time. In some cases, over 100,000 cores are being used to render thousands of frames in parallel. * Rendering projects are project-based and require varying amounts of compute: * Allocate compute and storage capacity when required, scale it up or down according to load during a project, and remove it when a project is finished.- * Pay for capacity when allocated, but don't pay for it when there is no load, such as between projects. + * Pay for capacity when allocated, but don't pay for it when there's no load, such as between projects. * Cater for bursts due to unexpected changes; scale higher if there are unexpected changes late in a project and those changes need to be processed on a tight schedule. * Choose from a wide selection of hardware according to application, workload, and timeframe: * There's a wide selection of hardware available in Azure that can be allocated and managed with Batch.- * Depending on the project, the requirement may be for the best price/performance or the best overall performance. Different scenes and/or rendering applications will have different memory requirements. Some rendering application can leverage GPUs for the best performance or certain features. + * Depending on the project, the requirement may be for the best price/performance or the best overall performance. Different scenes and/or rendering applications can have different memory requirements. Some rendering applications can use GPUs for the best performance or certain features. * Low-priority or [Azure Spot VMs](https://azure.microsoft.com/pricing/spot/) reduce cost: * Low-priority and Spot VMs are available for a large discount compared to standard VMs and are suitable for some job types. ## Existing on-premises rendering environment -The most common case is for there to be an existing on-premises render farm being managed by a render management application such as PipelineFX Qube, Royal Render, Thinkbox Deadline, or a custom application. The requirement is to extend the on-premises render farm capacity using Azure VMs. +The most common case is for there to be an existing on-premises render farm that's managed by a render management application such as PipelineFX Qube, Royal Render, Thinkbox Deadline, or a custom application. The requirement is to extend the on-premises render farm capacity using Azure VMs. Azure infrastructure and services are used to create a hybrid environment where Azure is used to supplement the on-premises capacity. For example: * Use a [Virtual Network](../virtual-network/virtual-networks-overview.md) to place the Azure resources on the same network as the on-premises render farm. * Use [Avere vFXT for Azure](../avere-vfxt/avere-vfxt-overview.md) or [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) to cache source files in Azure to reduce bandwidth use and latency, maximizing performance.-* Ensure the existing license server is on the virtual network and purchase the additional licenses required to cater for the extra Azure-based capacity. +* Ensure the existing license server is on the virtual network and purchase more licenses as required to cater for the extra Azure-based capacity. 
## No existing render farm -Client workstations may be performing rendering, but the rendering load is increasing and it is taking too long to solely use workstation capacity. +Client workstations may be performing rendering, but the rendering load is increasing and it's taking too long to solely use workstation capacity. There are two main options available: -* Deploy an on-premises render manager, such as Royal Render, and configure a hybrid environment to use Azure when further capacity or performance is required. A render manager is specifically tailored for rendering workloads and will include plug-ins for the popular client applications, enabling easy submission of rendering jobs. +* Deploy an on-premises render manager, such as Royal Render, and configure a hybrid environment to use Azure when further capacity or performance is required. A render manager is specially tailored for rendering workloads and will include plug-ins for the popular client applications, enabling easy submission of rendering jobs. -* A custom solution using Azure Batch to allocate and manage the compute capacity as well as providing the job scheduling to run the render jobs. +* A custom solution using Azure Batch to allocate and manage the compute capacity and providing the job scheduling to run the render jobs. ## Next steps - Learn how to [use Azure infrastructure and services to extend an existing on-premises render farm](https://azure.microsoft.com/solutions/high-performance-computing/rendering/). - Learn more about [Azure Batch rendering capabilities](batch-rendering-functionality.md). |
batch | Batch Rendering Using | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-using.md | - Title: Using rendering capabilities -description: How to use Azure Batch rendering capabilities. Try using the Batch Explorer application, either directly or invoked from a client application plug-in. Previously updated : 03/12/2020----# Using Azure Batch rendering --> [!WARNING] -> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing) --There are several ways to use Azure Batch rendering: --* APIs: - * Write code using any of the Batch APIs. Developers can integrate Azure Batch capabilities into their existing applications or workflow, whether cloud or based on-premises. -* Command line tools: - * The [Azure command line](/cli/azure/) or [PowerShell](/powershell/azure/) can be used to script Batch use. - * In particular, the [Batch CLI template support](./batch-cli-templates.md) makes it much easier to create pools and submit jobs. -* Batch Explorer UI: - * [Batch Explorer](https://github.com/Azure/BatchLabs) is a cross-platform client tool that also allows Batch accounts to be managed and monitored. - * For each of the rendering applications, a number of pool and job templates are provided that can be used to easily create pools and to submit jobs. A set of templates is listed in the application UI, with the template files being accessed from GitHub. - * Custom templates can be authored from scratch or the supplied templates from GitHub can be copied and modified. -* Client application plug-ins: - * Plug-ins are available that allow Batch rendering to be used from directly within the client design and modeling applications. The plug-ins mainly invoke the Batch Explorer application with contextual information about the current 3D model and include features to help manage assets. --The best way to try Azure Batch rendering and simplest way for end-users, who are not developers and not Azure experts, is to use the Batch Explorer application, either directly or invoked from a client application plug-in. --## Using Batch Explorer --Batch Explorer [downloads are available](https://azure.github.io/BatchExplorer/) for Windows, OSX, and Linux. --### Using templates to create pools and run jobs --A comprehensive set of templates is available for use with Batch Explorer that makes it easy to create pools and submit jobs for the various rendering applications without having to specify all the properties required to create pools, jobs, and tasks directly with Batch. The templates available in Batch Explorer are stored and visible in [a GitHub repository](https://github.com/Azure/BatchExplorer-data/tree/master/ncj). --![Batch Explorer Gallery](./media/batch-rendering-using/batch-explorer-gallery.png) --Templates are provided that cater for all the applications present on the Marketplace rendering VM images. For each application multiple templates exist, including pool templates to cater for CPU and GPU pools, Windows and Linux pools; job templates include full frame or tiled Blender rendering and V-Ray distributed rendering. 
The set of supplied templates will be expanded over time to cater for other Batch capabilities, such as pool auto-scaling. --It's also possible for custom templates to be produced, from scratch or by modifying the supplied templates. Custom templates can be used by selecting the 'Local templates' item in the 'Gallery' section of Batch Explorer. --### File system and data movement --The 'Data' section in Batch Explorer allows files to be copied between a local file system and Azure Storage accounts. --## Next steps --* Learn about [using rendering applications with Batch](batch-rendering-applications.md). -* Learn about [Storage and data movement options for rendering asset and output files](batch-rendering-storage-data-movement.md). |
batch | Batch Sig Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md | When you create an Azure Batch pool using the Virtual Machine Configuration, you ## Benefits of the Azure Compute Gallery -When you use the Azure Compute Gallery for your custom image, you have control over the operating system type and configuration, as well as the type of data disks. Your Shared Image can include applications and reference data that become available on all the Batch pool nodes as soon as they are provisioned. +When you use the Azure Compute Gallery for your custom image, you have control over the operating system type and configuration, as well as the type of data disks. Your Shared Image can include applications and reference data that become available on all the Batch pool nodes as soon as they're provisioned. You can also have multiple versions of an image as needed for your environment. When you use an image version to create a VM, the image version is used to create new disks for the VM. Using a Shared Image configured for your scenario can provide several advantages ## Prerequisites -> [!NOTE] -> Currently, Azure Batch does not support the ΓÇÿTrustedLaunchΓÇÖ feature. You must use the standard security type to create a custom image instead. -> -> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error. - - **An Azure Batch account.** To create a Batch account, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md). +> [!NOTE] +> Authentication using Microsoft Entra ID is required. If you use Shared Key Auth, you will get an authentication error. + - **an Azure Compute Gallery image**. To create a Shared Image, you need to have or create a managed image resource. The image should be created from snapshots of the VM's OS disk and optionally its attached data disks. > [!NOTE] The following steps show how to prepare a VM, take a snapshot, and create an ima ### Prepare a VM -If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image. +If you're creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image. To get a full list of current Azure Marketplace image references supported by Azure Batch, use one of the following APIs to return a list of Windows and Linux VM images including the node agent SKU IDs for each image: To get a full list of current Azure Marketplace image references supported by Az Follow these guidelines when creating VMs: - Ensure the VM is created with a managed disk. This is the default storage setting when you create a VM.-- Do not install Azure extensions, such as the Custom Script extension, on the VM. If the image contains a pre-installed extension, Azure may encounter problems when deploying the Batch pool.+- Don't install Azure extensions, such as the Custom Script extension, on the VM. If the image contains a pre-installed extension, Azure may encounter problems when deploying the Batch pool. - When using attached data disks, you need to mount and format the disks from within a VM to use them. - Ensure that the base OS image you provide uses the default temp drive. 
The Batch node agent currently expects the default temp drive.-- Ensure that the OS disk is not encrypted.+- Ensure that the OS disk isn't encrypted. - Once the VM is running, connect to it via RDP (for Windows) or SSH (for Linux). Install any necessary software or copy desired data. - For faster pool provisioning, use the [ReadWrite disk cache setting](../virtual-machines/premium-storage-performance.md#disk-caching) for the VM's OS disk. Once you have successfully created your managed image, you need to create an Azu To create a pool from your Shared Image using the Azure CLI, use the `az batch pool create` command. Specify the Shared Image ID in the `--image` field. Make sure the OS type and SKU match the versions specified by `--node-agent-sku-id` -> [!NOTE] -> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error. - > [!IMPORTANT] > The node agent SKU id must align with the publisher/offer/SKU in order for the node to start. Use the following steps to create a pool from a Shared Image in the Azure portal 1. In the **Image Type** section, select **Azure Compute Gallery**. 1. Complete the remaining sections with information about your managed image. 1. Select **OK**.-1. Once the node is allocated, use **Connect** to generate user and the RDP file for Windows OR use SSH to for Linux to login to the allocated node and verify. +1. Once the node is allocated, use **Connect** to generate a user and the RDP file for Windows, or use SSH for Linux, to log in to the allocated node and verify. ![Create a pool from a Shared Image with the portal.](media/batch-sig-images/create-custom-pool.png) Use the following steps to create a pool from a Shared Image in the Azure portal If you plan to create a pool with hundreds or thousands of VMs or more using a Shared Image, use the following guidance. -- **Azure Compute Gallery replica numbers.** For every pool with up to 300 instances, we recommend you keep at least one replica. For example, if you are creating a pool with 3000 VMs, you should keep at least 10 replicas of your image. We always suggest keeping more replicas than minimum requirements for better performance.+- **Azure Compute Gallery replica numbers.** For every pool with up to 300 instances, we recommend you keep at least one replica. For example, if you're creating a pool with 3,000 VMs, you should keep at least 10 replicas of your image. We always suggest keeping more replicas than minimum requirements for better performance. -- **Resize timeout.** If your pool contains a fixed number of nodes (if it doesn't autoscale), increase the `resizeTimeout` property of the pool depending on the pool size. For every 1000 VMs, the recommended resize timeout is at least 15 minutes. For example, the recommended resize timeout for a pool with 2000 VMs is at least 30 minutes.+- **Resize timeout.** If your pool contains a fixed number of nodes (if it doesn't autoscale), increase the `resizeTimeout` property of the pool depending on the pool size. For every 1,000 VMs, the recommended resize timeout is at least 15 minutes. For example, the recommended resize timeout for a pool with 2,000 VMs is at least 30 minutes. ## Next steps |
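The article above recommends basing custom images on a first-party Azure Marketplace image supported by Batch and lists APIs that return the supported images together with their node agent SKU IDs. A small sketch of the Python equivalent, assuming an already-authenticated `batch_client`, is:

```python
# List the Azure Marketplace images that Batch supports, with the node agent SKU ID
# you'd pass when creating a pool. batch_client is an authenticated BatchServiceClient
# (construction omitted here).
for image in batch_client.account.list_supported_images():
    ref = image.image_reference
    print(
        f"{ref.publisher}/{ref.offer}/{ref.sku} "
        f"-> node agent: {image.node_agent_sku_id}, OS: {image.os_type}, "
        f"verification: {image.verification_type}"
    )
```

The `publisher`/`offer`/`sku` values returned here are the ones to use as the base image for a managed image, and the matching `node_agent_sku_id` is what the pool configuration must specify.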
batch | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md | Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 11/02/2023 Last updated : 02/29/2024 derived or aligned with. An image without a specified `batchSupportEndOfLife` da determined yet by the Batch service. Absence of a date doesn't indicate that the respective image will be supported indefinitely. An EOL date may be added or updated in the future at any time. +- **VM SKUs with impending end-of-life (EOL) dates:** As with VM images, VM SKUs or families may also reach Batch support +end of life (EOL). These dates can be discovered via the +[`ListSupportedVirtualMachineSkus` API](/rest/api/batchmanagement/location/list-supported-virtual-machine-skus), +[PowerShell](/powershell/module/az.batch/get-azbatchsupportedvirtualmachinesku), or +[Azure CLI](/cli/azure/batch/location#az-batch-location-list-skus). +Plan for the migration of your workload to a non-EOL VM SKU by creating a new pool with an appropriate supported VM SKU. +Absence of an associated `batchSupportEndOfLife` date for a VM SKU doesn't indicate that particular VM SKU will be +supported indefinitely. An EOL date may be added or updated in the future at any time. + - **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource. - **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that jobs won't run if something goes wrong with the pool. This principle is especially important for time-sensitive workloads. For example, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool. For the purposes of isolation, if your scenario requires isolating jobs or tasks #### Batch Node Agent updates -Batch node agents aren't automatically upgraded for pools that have non-zero compute nodes. To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions. Checking regularly for updates when they were released enables you to plan upgrades to the latest agent version. +Batch node agents aren't automatically upgraded for pools that have nonzero compute nodes. 
To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions. Checking regularly for updates when they were released enables you to plan upgrades to the latest agent version. Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This process is further discussed in the [Nodes](#nodes) section. Pool lifetime can vary depending upon the method of allocation and options appli - **Pool recreation:** Avoid deleting and recreating pools on a daily basis. Instead, create a new pool and then update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool. -- **Pool efficiency and billing:** Batch itself incurs no extra charges. However, you do incur charges for Azure resources utilized, such as compute, storage, networking and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it's in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).+- **Pool efficiency and billing:** Batch itself incurs no extra charges. However, you do incur charges for Azure resources utilized, such as compute, storage, networking, and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it's in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md). - **Ephemeral OS disks:** Virtual Machine Configuration pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks. A [job](jobs-and-tasks.md#jobs) is a container designed to contain hundreds, tho ### Fewer jobs, more tasks -Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1000 tasks rather than creating 100 jobs that contain 10 tasks each. If you used 1000 jobs, each with a single task that would be the least efficient, slowest, and most expensive approach to take. +Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1,000 tasks rather than creating 100 jobs that contain 10 tasks each. If you used 1,000 jobs, each with a single task that would be the least efficient, slowest, and most expensive approach to take. Avoid designing a Batch solution that requires thousands of simultaneously active jobs. There's no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas). Deleting tasks accomplishes two things: - Cleans up the corresponding task data on the node (provided `retentionTime` hasn't already been hit). This action helps ensure that your nodes don't fill up with task data and run out of disk space. > [!NOTE]-> For tasks just submitted to Batch, the DeleteTask API call takes up to 10 minutes to take effect. Before it takes effect, other tasks might be prevented from being scheduled. 
It's because Batch Scheduler still tries to schedule the tasks just deleted. If you want to delete one task shortly after it's submitted, please terminate the task instead (since the terminate task will take effect immediately). And then delete the task 10 minutes later. +> For tasks just submitted to Batch, the DeleteTask API call takes up to 10 minutes to take effect. Before it takes effect, +> other tasks might be prevented from being scheduled. It's because Batch Scheduler still tries to schedule the tasks just +> deleted. If you wanted to delete one task shortly after it's submitted, please terminate the task instead (since the +> terminate task will take effect immediately). And then delete the task 10 minutes later. ### Submit large numbers of tasks in collection Batch supports oversubscribing tasks on nodes (running more tasks than a node ha ### Design for retries and re-execution -Tasks can be automatically retried by Batch. There are two types of retries: user-controlled and internal. User-controlled retries are specified by the task's [maxTaskRetryCount](/dotnet/api/microsoft.azure.batch.taskconstraints.maxtaskretrycount). When a program specified in the task exits with a non-zero exit code, the task is retried up to the value of the `maxTaskRetryCount`. +Tasks can be automatically retried by Batch. There are two types of retries: user-controlled and internal. User-controlled retries are specified by the task's [maxTaskRetryCount](/dotnet/api/microsoft.azure.batch.taskconstraints.maxtaskretrycount). When a program specified in the task exits with a nonzero exit code, the task is retried up to the value of the `maxTaskRetryCount`. Although rare, a task can be retried internally due to failures on the compute node, such as not being able to update internal state or a failure on the node while the task is running. The task will be retried on the same compute node, if possible, up to an internal limit before giving up on the task and deferring the task to be rescheduled by Batch, potentially on a different compute node. section about attaching and preparing data disks for compute nodes. ### Attaching and preparing data disks Each individual compute node has the exact same data disk specification attached if specified as part of the Batch pool instance. Only-new data disks may be attached to Batch pools. These data disks attached to compute nodes aren't automatically partitioned, formatted or +new data disks may be attached to Batch pools. These data disks attached to compute nodes aren't automatically partitioned, formatted, or mounted. It's your responsibility to perform these operations as part of your [start task](jobs-and-tasks.md#start-task). These start tasks must be crafted to be idempotent. Re-execution of the start tasks on compute nodes is possible. If the start task isn't idempotent, potential data loss can occur on the data disks. Review the following guidance related to connectivity in your Batch solutions. ### Network Security Groups (NSGs) and User Defined Routes (UDRs) -When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you're closely following the guidelines regarding the use of the BatchNodeManagement.*region* service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended; don't use underlying Batch service IP addresses as they can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools. 
+When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you're closely following the guidelines regarding the use of the BatchNodeManagement.*region* service tag, ports, protocols, and direction of the rule. Use of the service tag is highly recommended; don't use underlying Batch service IP addresses as they can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools. For User Defined Routes (UDRs), it's recommended to use BatchNodeManagement.*region* [service tags](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) instead of Batch service IP addresses as they can change over time. |
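Several of the best practices in the row above (unique resource names, fewer jobs with more tasks, submitting tasks in collections, and user-controlled retries) are straightforward to apply from the Python SDK. The sketch below is illustrative only; the job ID scheme, command lines, and retry count are assumptions.

```python
# Sketch: many tasks under one job, submitted in collections of up to 100 (the maximum
# per add_collection call), each with a user-controlled retry count.
import datetime
import azure.batch.models as batchmodels

def add_tasks_in_collections(batch_client, job_id, tasks, chunk_size=100):
    """Submit tasks in chunks so no single add_collection call exceeds 100 tasks."""
    for i in range(0, len(tasks), chunk_size):
        batch_client.task.add_collection(job_id, tasks[i:i + chunk_size])

# Unique job ID embedding the creation time, per the unique-resource-names guidance.
job_id = "render-job-" + datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%S")

tasks = [
    batchmodels.TaskAddParameter(
        id=f"task-{n:05d}",
        command_line=f"/bin/bash -c 'echo frame {n}'",  # placeholder work
        constraints=batchmodels.TaskConstraints(max_task_retry_count=3),  # user-controlled retries
    )
    for n in range(1000)
]

# add_tasks_in_collections(batch_client, job_id, tasks)
```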
batch | Managed Identity Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md | Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 04/03/2023 Last updated : 02/29/2024 ms.devlang: csharp This topic explains how to enable user-assigned managed identities on Batch pool First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) in the same tenant as your Batch account. You can create the identity using the Azure portal, the Azure Command-Line Interface (Azure CLI), PowerShell, Azure Resource Manager, or the Azure REST API. This managed identity doesn't need to be in the same resource group or even in the same subscription. -> [!IMPORTANT] +> [!TIP] > A system-assigned managed identity created for a Batch account for [customer data encryption](batch-customer-managed-key.md) > cannot be used as a user-assigned managed identity on a Batch pool as described in this document. If you wish to use the same > managed identity on both the Batch account and Batch pool, then use a common user-assigned managed identity instead. After you've created one or more user-assigned managed identities, you can creat - [Use the Azure portal to create the Batch pool](#create-batch-pool-in-azure-portal) - [Use the Batch .NET management library to create the Batch pool](#create-batch-pool-with-net) +> [!WARNING] +> In-place updates of pool managed identities are not supported while the pool has active nodes. Existing compute nodes +> will not be updated with changes. It is recommended to scale the pool down to zero compute nodes before modifying the +> identity collection to ensure all VMs have the same set of identities assigned. + ### Create Batch pool in Azure portal To create a Batch pool with a user-assigned managed identity through the Azure portal: var pool = await managementClient.Pool.CreateWithHttpMessagesAsync( cancellationToken: default(CancellationToken)).ConfigureAwait(false); ``` -> [!IMPORTANT] -> Managed identities are not updated on existing VMs once a pool has been started. It is recommended to scale the pool down to zero before modifying the identity collection to ensure all VMs -> have the same set of identities assigned. - ## Use user-assigned managed identities in Batch nodes Many Azure Batch functions that access other Azure resources directly on the compute nodes, such as Azure Storage or |
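Although the row above is truncated, the section it ends on covers using the pool's user-assigned managed identity from code running on the compute nodes, for example to reach Azure Storage. A minimal sketch with the `azure-identity` and `azure-storage-blob` packages follows; the identity's client ID, storage account, and container name are placeholders, and the identity is assumed to have a suitable data-plane role such as Storage Blob Data Contributor.

```python
# Runs on a Batch compute node in a pool configured with a user-assigned managed identity.
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# Client ID of the user-assigned managed identity added to the pool (placeholder).
credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")

blob_service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=credential,
)

container = blob_service.get_container_client("results")
container.upload_blob("output.txt", b"task output", overwrite=True)
```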
batch | Quick Run Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-run-python.md | Title: 'Quickstart: Use Python to create a pool and run a job' description: Follow this quickstart to run an app that uses the Azure Batch client library for Python to create and run Batch pools, nodes, jobs, and tasks. Previously updated : 04/13/2023 Last updated : 03/01/2024 ms.devlang: python After you complete this quickstart, you understand the [key concepts of the Batc - A Batch account with a linked Azure Storage account. You can create the accounts by using any of the following methods: [Azure CLI](quick-create-cli.md) | [Azure portal](quick-create-portal.md) | [Bicep](quick-create-bicep.md) | [ARM template](quick-create-template.md) | [Terraform](quick-create-terraform.md). -- [Python](https://python.org/downloads) version 3.6 or later, which includes the [pip](https://pip.pypa.io/en/stable/installing) package manager.+- [Python](https://python.org/downloads) version 3.8 or later, which includes the [pip](https://pip.pypa.io/en/stable/installing) package manager. ## Run the app |
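For readers following the Python quickstart above, the core pattern is creating a `BatchServiceClient` and then adding pools, jobs, and tasks. A compressed sketch, with placeholder account details and a pool that's assumed to exist already, looks roughly like this:

```python
# Sketch of the quickstart's client setup plus a minimal job submission.
from azure.batch import BatchServiceClient, batch_auth
import azure.batch.models as batchmodels

BATCH_ACCOUNT_NAME = "<batch-account-name>"   # placeholders
BATCH_ACCOUNT_KEY = "<batch-account-key>"
BATCH_ACCOUNT_URL = "https://<batch-account-name>.<region>.batch.azure.com"

credentials = batch_auth.SharedKeyCredentials(BATCH_ACCOUNT_NAME, BATCH_ACCOUNT_KEY)
batch_client = BatchServiceClient(credentials, batch_url=BATCH_ACCOUNT_URL)

# A job must reference the pool its tasks run on.
batch_client.job.add(
    batchmodels.JobAddParameter(
        id="quickstart-job",
        pool_info=batchmodels.PoolInformation(pool_id="quickstart-pool"),
    )
)
```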
batch | Tutorial Parallel Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md | Title: "Tutorial: Run a parallel workload using the Python API" description: Learn how to process media files in parallel using ffmpeg in Azure Batch with the Batch Python client library. ms.devlang: python Previously updated : 05/25/2023 Last updated : 03/01/2024 In this tutorial, you convert MP4 media files to MP3 format, in parallel, by usi ## Prerequisites -* [Python version 3.7 or later](https://www.python.org/downloads/) +* [Python version 3.8 or later](https://www.python.org/downloads/) * [pip package manager](https://pip.pypa.io/en/stable/installation/) |
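The parallel-workload tutorial above converts MP4 files to MP3 with ffmpeg, one task per input file. A simplified sketch of how those tasks might be built with the Python SDK follows; it assumes a Linux pool where ffmpeg is already available (for example via an application package or start task), that each input file is exposed as a `ResourceFile` with a SAS URL, and it omits the output-file upload that the full tutorial configures.

```python
# Sketch: one ffmpeg conversion task per input ResourceFile.
import azure.batch.models as batchmodels

def make_ffmpeg_tasks(input_files):
    """Build one task per MP4 input; each task downloads its file and converts it to MP3."""
    tasks = []
    for idx, resource in enumerate(input_files):  # resource: batchmodels.ResourceFile
        mp3_name = resource.file_path.rsplit(".", 1)[0] + ".mp3"
        tasks.append(
            batchmodels.TaskAddParameter(
                id=f"convert-{idx:03d}",
                command_line=f'/bin/bash -c "ffmpeg -i {resource.file_path} {mp3_name}"',
                resource_files=[resource],
            )
        )
    return tasks
```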
batch | Tutorial Run Python Batch Azure Data Factory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-run-python-batch-azure-data-factory.md | Title: 'Tutorial: Run a Batch job through Azure Data Factory' description: Learn how to use Batch Explorer, Azure Storage Explorer, and a Python script to run a Batch workload through an Azure Data Factory pipeline. ms.devlang: python Previously updated : 04/20/2023 Last updated : 03/01/2024 In this tutorial, you learn how to: - A Data Factory instance. To create the data factory, follow the instructions in [Create a data factory](/azure/data-factory/quickstart-create-data-factory-portal#create-a-data-factory). - [Batch Explorer](https://azure.github.io/BatchExplorer) downloaded and installed. - [Storage Explorer](https://azure.microsoft.com/products/storage/storage-explorer) downloaded and installed.-- [Python 3.7 or above](https://www.python.org/downloads), with the [azure-storage-blob](https://pypi.org/project/azure-storage-blob) package installed by using `pip`.+- [Python 3.8 or above](https://www.python.org/downloads), with the [azure-storage-blob](https://pypi.org/project/azure-storage-blob) package installed by using `pip`. - The [iris.csv input dataset](https://github.com/Azure-Samples/batch-adf-pipeline-tutorial/blob/master/iris.csv) downloaded from GitHub. ## Use Batch Explorer to create a Batch pool and nodes |
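The Data Factory tutorial above runs a Python script on Batch that uses `azure-storage-blob` against the iris.csv dataset. The following is only a sketch of that kind of script, not the tutorial's actual code: the container names, connection-string environment variable, header assumption, and the "setosa" filter are all illustrative.

```python
# Sketch: read iris.csv from an input container, write a filtered copy to an output container.
import csv
import io
import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])

raw = service.get_container_client("input").download_blob("iris.csv").readall()
rows = list(csv.reader(io.StringIO(raw.decode("utf-8"))))

header, data = rows[0], rows[1:]  # assumes the first row is a header
setosa = [row for row in data if row[-1].strip().lower().endswith("setosa")]

out = io.StringIO()
csv.writer(out).writerows([header] + setosa)
service.get_blob_client(container="output", blob="iris_setosa.csv").upload_blob(
    out.getvalue(), overwrite=True
)
```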
communications-gateway | Configure Test Customer Teams Direct Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/configure-test-customer-teams-direct-routing.md | You must be able to sign in to the Microsoft 365 admin center for your test cust ## Choose a DNS subdomain label to use to identify the customer -Azure Communications Gateway's per-region domain names might be as follows, where the `<deployment_id>` subdomain is autogenerated and unique to the deployment: +Azure Communications Gateway has per-region domain names. You need to set up subdomains of these domain names for your test customer. Microsoft Phone System and Azure Communications Gateway use this subdomain to match calls to tenants. -* `r1.<deployment_id>.commsgw.azure.com` -* `r2.<deployment_id>.commsgw.azure.com` +1. Choose a DNS label to identify the test customer. + * The label can be up to 10 characters in length and can only contain letters, numbers, underscores, and dashes. + * You must not use wildcard subdomains or subdomains with multiple labels. + * For example, you could allocate the label `test`. +1. Use this label to create a subdomain of each per-region domain name for your Azure Communications Gateway. +1. Make a note of the label you choose and the corresponding subdomains. -Choose a DNS label to identify the test customer. The label can be up to 10 characters in length and can only contain letters, numbers, underscores, and dashes. You must not use wildcard subdomains or subdomains with multiple labels. For example, you could allocate the label `test`. +> [!TIP] +> To find your deployment's per-region domain names: +> 1. Sign in to the [Azure portal](https://azure.microsoft.com/). +> 1. Search for your Communications Gateway resource and select it. +> 1. Check that you're on the **Overview** of your Azure Communications Gateway resource. +> 1. Select **Properties**. +> 1. In each **Service Location** section, find the **Hostname** field. -You use this label to create a subdomain of each per-region domain name for your Azure Communications Gateway. Microsoft Phone System and Azure Communications Gateway use this subdomain to match calls to tenants. +For example, your per-region domain names might be as follows, where the `<deployment_id>` subdomain is autogenerated and unique to the deployment: -For example, the `test` label combined with the per-region domain names creates the following deployment-specific domain names: +* `r1.<deployment_id>.commsgw.azure.com` +* `r2.<deployment_id>.commsgw.azure.com` ++If you allocate the label `test`, this label combined with the per-region domain names creates the following domain names for your test customer: * `test.r1.<deployment_id>.commsgw.azure.com` * `test.r2.<deployment_id>.commsgw.azure.com` -Make a note of the label you choose and the corresponding subdomains. +++> [!TIP] +> Lab deployments have one per-region domain name. Your test customer therefore also only has one customer-specific per-region domain name. ## Start registering the subdomains in the customer tenant and get DNS TXT values To route calls to a customer tenant, the customer tenant must be configured with 1. Register the first customer-specific per-region domain name (for example `test.r1.<deployment_id>.commsgw.azure.com`). 1. Start the verification process using TXT records. 1. Note the TXT value that Microsoft 365 provides.-1. Repeat the previous step for the second customer-specific per-region domain name. +1. 
(Production deployments only) Repeat the previous step for the second customer-specific per-region domain name. > [!IMPORTANT] > Don't complete the verification process yet. You must carry out [Use Azure Communications Gateway's Provisioning API to configure the customer and generate DNS records](#use-azure-communications-gateways-provisioning-api-to-configure-the-customer-and-generate-dns-records) first. When you have used Azure Communications Gateway to generate the DNS records for 1. Sign into the Microsoft 365 admin center for the customer tenant as a Global Administrator. 1. Select **Settings** > **Domains**.-1. Finish verifying the two customer-specific per-region domain names by following [Add a subdomain to the customer tenant and verify it](/microsoftteams/direct-routing-sbc-multiple-tenants#add-a-subdomain-to-the-customer-tenant-and-verify-it). +1. Finish verifying the customer-specific per-region domain names by following [Add a subdomain to the customer tenant and verify it](/microsoftteams/direct-routing-sbc-multiple-tenants#add-a-subdomain-to-the-customer-tenant-and-verify-it). ## Configure the customer tenant's call routing to use Azure Communications Gateway |
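While the subdomain verification itself happens in the Microsoft 365 admin center, it can be useful to confirm that the customer-specific per-region subdomains and their TXT records are visible in public DNS before you finish verification. This optional helper assumes the `dnspython` package and uses placeholder values for the label and deployment ID; production deployments check two per-region domain names, lab deployments only one.

```python
# Optional check: confirm the verification TXT records are published for each subdomain.
import dns.resolver

label = "test"  # the DNS label chosen for the test customer
per_region_domains = [
    "r1.<deployment_id>.commsgw.azure.com",
    "r2.<deployment_id>.commsgw.azure.com",  # production deployments only
]

for domain in per_region_domains:
    fqdn = f"{label}.{domain}"
    try:
        for record in dns.resolver.resolve(fqdn, "TXT"):
            print(f"{fqdn}: {record.to_text()}")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{fqdn}: no TXT record found yet")
```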
communications-gateway | Connect Operator Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md | You must [deploy Azure Communications Gateway](deploy.md). You must have access to a user account with the Microsoft Entra Global Administrator role. -You must allocate six "service verification" test numbers for each of Operator Connect and Teams Phone Mobile. These numbers are used by the Operator Connect and Teams Phone Mobile programs for continuous call testing. -+You must allocate "service verification" test numbers. These numbers are used by the Operator Connect and Teams Phone Mobile programs for continuous call testing. Production deployments need six numbers for each service. Lab deployments need three numbers for each service. - If you selected the service you're setting up as part of deploying Azure Communications Gateway, you've allocated numbers for the service already. - Otherwise, choose the phone numbers now (in E.164 format and including the country code) and names to identify them. We recommend names of the form OC1 and OC2 (for Operator Connect) and TPM1 and TPM2 (for Teams Phone Mobile). |
communications-gateway | Connect Teams Direct Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-teams-direct-routing.md | Microsoft Teams only sends traffic to domains that you confirm that you own. You 1. Select your Communications Gateway resource. Check that you're on the **Overview** of your Azure Communications Gateway resource. 1. Select **Properties**. 1. Find the field named **Domain**. This name is your deployment's _base domain name_.-1. In each **Service Location** section, find the **Hostname** field. This field provides the _per-region domain name_. Your deployment has two service regions and therefore two per-region domain names. -1. Note down the base domain name and the per-region domain names. You'll need these values in the next steps. +1. In each **Service Location** section, find the **Hostname** field. This field provides the _per-region domain name_. + - A production deployment has two service regions and therefore two per-region domain names. + - A lab deployment has one service region and therefore one per-region domain name. +1. Note down the base domain name and the per-region domain name(s). You'll need these values in the next steps. ## Register the base domain name for Azure Communications Gateway in your tenant To activate the base domain in Microsoft 365, you must have at least one user or ## Connect your tenant to Azure Communications Gateway -You most configure your Microsoft 365 tenant with two SIP trunks to Azure Communications Gateway. Each trunk connects to one of the per-region domain names that you found in [Find your Azure Communication Gateway's domain names](#find-your-azure-communication-gateways-domain-names). +You must configure your Microsoft 365 tenant with SIP trunks to Azure Communications Gateway. Each trunk connects to one of the per-region domain names that you found in [Find your Azure Communication Gateway's domain names](#find-your-azure-communication-gateways-domain-names). -Follow [Connect your Session Border Controller (SBC) to Direct Routing](/microsoftteams/direct-routing-connect-the-sbc), using the following configuration settings. +Use [Connect your Session Border Controller (SBC) to Direct Routing](/microsoftteams/direct-routing-connect-the-sbc) and the following configuration settings to set up the trunks. ++- For a production deployment, set up two trunks. +- For a lab deployment, set up one trunk. | Teams Admin Center setting | PowerShell parameter | Value to use (Admin Center / PowerShell) | | -- | -- | | |
communications-gateway | Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connectivity.md | The following table lists all the available connection types and whether they're |||||| | MAPS Voice |✅ |✅|✅|- Best media quality because of prioritization with Microsoft network<br>- No extra costs<br>- See [Internet peering for Peering Service Voice walkthrough](../internet-peering/walkthrough-communications-services-partner.md)| |ExpressRoute Microsoft Peering |✅|✅|✅|- Easy to deploy<br>- Extra cost<br>- Consult with your onboarding team and ensure that it's available in your region<br>- See [Using ExpressRoute for Microsoft PSTN services](/azure/expressroute/using-expressroute-for-microsoft-pstn)|-|Public internet |❌|✅|✅|- No extra setup<br>- Not recommended for production| +|Public internet |⚠️ Lab deployments only|✅|✅|- No extra setup<br>- Where available, not recommended for production | -Set up your network as in the following diagram and configure it in accordance with any network connectivity specifications for your chosen communications services. Your network must have two sites with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md). +> [!NOTE] +> The Operator Connect and Teams Phone Mobile programs do not allow production deployments to use the public internet. ++Set up your network as in the following diagram and configure it in accordance with any network connectivity specifications for your chosen communications services. For production deployments, your network must have two sites with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md). :::image type="content" source="media/azure-communications-gateway-network.svg" alt-text="Network diagram showing Azure Communications Gateway deployed into two Azure regions within one Azure Geography. The Azure Communications Gateway resource in each region connects to a communications service and both operator sites. Azure Communications Gateway uses MAPS or Express Route as its peering service between Azure and an operators network." lightbox="media/azure-communications-gateway-network.svg"::: +Lab deployments have one Azure service region and must connect to one site in your network. + ## IP addresses and domain names Azure Communications Gateway (ACG) deployments require multiple IP addresses and fully qualified domain names (FQDNs). The following diagram and table describe the IP addresses and FQDNs that you might need to know about. |
communications-gateway | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md | You must have completed [Prepare to deploy Azure Communications Gateway](prepare |The name of the Azure subscription to use to create an Azure Communications Gateway resource. You must use the same subscription for all resources in your Azure Communications Gateway deployment. |**Project details: Subscription**| |The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**| |The name for the deployment. This name can contain alphanumeric characters and `-`. It must be 3-24 characters long. |**Instance details: Name**|- |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region** + |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region** | + |The type of deployment. Choose from **Standard** (for production) or **Lab**. |**Instance details: SKU** | |The voice codecs to use between Azure Communications Gateway and your network. We recommend that you only specify any codecs if you have a strong reason to restrict codecs (for example, licensing of specific codecs) and you can't configure your network or endpoints not to offer specific codecs. Restricting codecs can reduce the overall voice quality due to lower-fidelity codecs being selected. |**Call Handling: Supported codecs**| |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Routing Service Provider (US only; only for Operator Connect or Teams Phone Mobile). |**Call Handling: Emergency call handling**| |A comma-separated list of dial strings used for emergency calls. For Microsoft Teams, specify dial strings as the standard emergency number (for example `999`). For Zoom, specify dial strings in the format `+<country-code><emergency-number>` (for example `+44999`).|**Call Handling: Emergency dial strings**| You must have completed [Prepare to deploy Azure Communications Gateway](prepare Collect all of the values in the following table for both service regions in which you want to deploy Azure Communications Gateway. +> [!NOTE] +> Lab deployments have one Azure region and connect to one site in your network. + |**Value**|**Field name(s) in Azure portal**| |||- |The Azure regions to use for call traffic. |**Service Region One/Two: Region**| - |The IPv4 address used by Azure Communications Gateway to contact your network from this region. |**Service Region One/Two: Operator IP address**| + |The Azure region to use for call traffic. |**Service Region One/Two: Region**| + |The IPv4 address belonging to your network that Azure Communications Gateway should use to contact your network from this region. |**Service Region One/Two: Operator IP address**| |The set of IP addresses/ranges that are permitted as sources for signaling traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). 
You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Signaling Source IP Addresses/CIDR Ranges**| |The set of IP addresses/ranges that are permitted as sources for media traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Media Source IP Address/CIDR Ranges**| |
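The allowed signaling and media source fields above accept an IPv4 address, an IPv4 range in CIDR notation, or a comma-separated mix of both. As an illustration of that format only (this sketch isn't part of the deployment procedure, and the sample addresses are the documentation example ranges from the table), here's a minimal C# check that a candidate value parses as such a list:

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

class AllowedSourceListCheck
{
    // Returns true if every comma-separated entry is an IPv4 address or an IPv4 CIDR range.
    static bool IsValidAllowedSourceList(string value) =>
        value.Split(',', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
             .All(IsIPv4OrCidr);

    static bool IsIPv4OrCidr(string entry)
    {
        var parts = entry.Split('/');
        if (parts.Length > 2) return false;

        // The address part must be IPv4 (for example, 192.0.2.0).
        bool addressOk = IPAddress.TryParse(parts[0], out var address)
                         && address.AddressFamily == AddressFamily.InterNetwork;

        // An optional prefix length must be in the range 0-32 (for example, /24).
        bool prefixOk = parts.Length == 1
                        || (int.TryParse(parts[1], out var prefix) && prefix >= 0 && prefix <= 32);

        return addressOk && prefixOk;
    }

    static void Main()
    {
        // Hypothetical value for an "Allowed Signaling Source IP Addresses/CIDR Ranges" field.
        Console.WriteLine(IsValidAllowedSourceList("192.0.2.0/24, 198.51.100.7")); // True
    }
}
```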
communications-gateway | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/get-started.md | This article summarizes the steps and documentation that you need. Read the following articles to learn about Azure Communications Gateway. - [Your network and Azure Communications Gateway](role-in-network.md), to learn how Azure Communications Gateway fits into your network.-- [Onboarding with Included Benefits for Azure Communications Gateway](onboarding.md), to learn about onboarding to Operator Connect or Teams Phone Mobile and the support we can provide.+- [Onboarding with Included Benefits for Azure Communications Gateway](onboarding.md), to learn about onboarding to your chosen communications services and the support we can provide. +- [Lab Azure Communications Gateway overview](lab.md), to learn about when and how you could use a lab deployment. - [Connectivity for Azure Communications Gateway](connectivity.md) and [Reliability in Azure Communications Gateway](reliability-communications-gateway.md), to create a network design that includes Azure Communications Gateway. - [Overview of security for Azure Communications Gateway](security.md), to learn about how Azure Communications Gateway keeps customer data and your network secure. - [Provisioning API (preview) for Azure Communications Gateway](provisioning-platform.md), to learn about when you might need or want to integrate with the Provisioning API. Use the following procedures to deploy Azure Communications Gateway and connect 1. [Deploy Azure Communications Gateway](deploy.md) describes how to create your Azure Communications Gateway resource in the Azure portal and connect it to your networks. 1. [Integrate with Azure Communications Gateway's Provisioning API (preview)](integrate-with-provisioning-api.md) describes how to integrate with the Provisioning API. Integrating with the API is: - Required for Microsoft Teams Direct Routing and Zoom Phone Cloud Peering.- - Recommended for Operator Connect and Teams Phone Mobile because it enables flow-through API-based provisioning of your customers both on Azure Communications Gateway and in the Operator Connect environment. This enables additional functionality to be provided by Azure Communications Gateway, such as injecting custom SIP headers, while also fulfilling the requirement from the the Operator Connect and Teams Phone Mobile programs for you to use APIs for provisioning customers in the Operator Connect environment. For more information, see [Provisioning and Operator Connect APIs](interoperability-operator-connect.md#provisioning-and-operator-connect-apis). + - Recommended for Operator Connect and Teams Phone Mobile because it enables flow-through API-based provisioning of your customers both on Azure Communications Gateway and in the Operator Connect environment. This enables additional functionality to be provided by Azure Communications Gateway, such as injecting custom SIP headers, while also fulfilling the requirement from the Operator Connect and Teams Phone Mobile programs for you to use APIs for provisioning customers in the Operator Connect environment. For more information, see [Provisioning and Operator Connect APIs](interoperability-operator-connect.md#provisioning-and-operator-connect-apis). ## Integrate with your chosen communications services |
communications-gateway | Lab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/lab.md | + + Title: Lab deployments for Azure Communications Gateway +description: Learn about the benefits of lab deployments for Azure Communications Gateway ++++ Last updated : 01/08/2024++#CustomerIntent: As someone planning a deployment, I want to know about lab deployments so that I can decide if I want one +++# Lab Azure Communications Gateway overview +++You can experiment with and test Azure Communications Gateway by connecting your preproduction networks to a dedicated Azure Communications Gateway _lab deployment_. A lab deployment is separate from the deployment for your production traffic. We call the deployment type that you use for production traffic a _production deployment_ or _standard deployment_. ++You must have deployed a standard deployment or be about to deploy a standard deployment. You can't use a lab deployment as a standalone Azure Communications Gateway deployment. ++## Uses of lab deployments ++Lab deployments allow you to make changes and test them without affecting your production deployment. For example, you can: ++- Test configuration changes to Azure Communications Gateway. +- Test new Azure Communications Gateway features and services (for example, configuring Microsoft Teams Direct Routing or Zoom Phone Cloud Peering). +- Test changes in your preproduction network, before rolling them out to your production networks. ++Lab deployments support all the communications services supported by production deployments. ++## Considerations for lab deployments ++Lab deployments: ++- Use a single Azure region, which means there's no geographic redundancy. +- Don't have an availability service-level agreement (SLA). +- Are limited to 200 users. ++For Operator Connect and Teams Phone Mobile, lab deployments connect to the same Microsoft Entra tenant as production deployments. Microsoft Teams configuration for your tenant shows configuration for your lab deployments and production deployments together. ++You can't automatically apply the same configuration to lab deployments and production deployments. You need to configure each deployment separately. +++## Setting up and using a lab deployment ++You plan for, order, and deploy lab deployments in the same way as production deployments. ++We recommend the following approach. ++1. Integrate your preproduction network with the lab deployment and your chosen communications services. +1. Carry out the acceptance test plan (ATP) and any automated testing for your communications services in your preproduction environment. +1. Integrate your production network with a production deployment and your communications services, by applying the working configuration from your preproduction environment to your production environment. +1. Optionally, carry out the acceptance plan in your production environment. +1. Carry out any automated tests and network failover tests in your production environment. ++You can separate access to lab deployments and production deployments by using Microsoft Entra ID to assign different permissions to the resources. ++## Related content ++- [Learn more about planning a deployment](get-started.md#learn-about-and-plan-for-azure-communications-gateway) +- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) |
communications-gateway | Plan And Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md | -After you've started using Azure Communications Gateway, you can use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. +After you start using Azure Communications Gateway, you can use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. -Costs for Azure Communications Gateway are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Communications Gateway, your Azure bill includes all services and resources used in your Azure subscription, including third-party Azure services. +Costs for Azure Communications Gateway are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Communications Gateway, your Azure bill includes all services and resources used in your Azure subscription, including non-Microsoft Azure services. ## Prerequisites Azure Communications Gateway runs on Azure infrastructure that accrues costs whe ### How you're charged for Azure Communications Gateway -When you deploy Azure Communications Gateway, you're charged for how you use the voice features of the product. The charges are based on the number of users assigned to the platform by a series of Azure Communications Gateway meters. The meters include: +When you deploy Azure Communications Gateway, you're charged for how you use the voice features of the product. The charges are based on a series of Azure Communications Gateway meters and the number of users assigned to the platform. ++The meters for production deployments include: - A "Fixed Network Service Fee" or a "Mobile Network Service Fee" meter. - This meter is charged hourly and includes the use of 999 users for testing and early adoption. When you deploy Azure Communications Gateway, you're charged for how you use the - If your deployment includes fixed networks and mobile networks, you're charged the Mobile Network Service Fee. - A series of tiered per-user meters that charge based on the number of users that are assigned to the deployment. These per-user fees are based on the maximum number of users during your billing cycle, excluding the 999 users included in the service availability fee. -For example, if you have 28,000 users assigned to the deployment each month, you're charged for: -* The service availability fee for each hour in the month -* 24,001 users in the 1000-25000 tier -* 3000 users in the 25000+ tier +For example, if you have 28,000 users assigned to a production deployment each month, you're charged for: +- The service availability fee for each hour in the month +- 24,001 users in the 1000-25000 tier +- 3000 users in the 25000+ tier ++Lab deployments are charged on a "Lab - Fixed or Mobile Fee" service availability meter. The meter includes 200 users. > [!NOTE] > A Microsoft Teams Direct Routing or Zoom Phone Cloud Peering user is any telephone number configured with Direct Routing service or Zoom service on Azure Communications Gateway. Billing for the user starts as soon as you have configured the number. At the end of your billing cycle, the charges for each meter are summed. 
Your bi > [!TIP] > If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed. -If you've arranged any custom work with Microsoft, you might be charged an extra fee for that work. That fee isn't included in these meters. +If you arrange any custom work with Microsoft, you might be charged an extra fee for that work. That fee isn't included in these meters. If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). If you have multiple Azure Communications Gateway deployments and you move users ### Using Azure Prepayment with Azure Communications Gateway -You can pay for Azure Communications Gateway charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace. +You can pay for Azure Communications Gateway charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for non-Microsoft products and services including those from the Azure Marketplace. ## Monitor costs |
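To make the tiered-meter arithmetic in the example above concrete, here's a minimal C# sketch that splits a production deployment's user count across the 999 included users and the two per-user tiers described in that example. It only reproduces the worked example's counts; actual meter names, rates, and tier definitions come from your price list and aren't modeled here.

```csharp
using System;

class ReservationTierExample
{
    // Splits a user count across the tiers described in the example above:
    // 999 users are included in the hourly service availability fee,
    // users 1,000-25,000 fall into the first per-user tier,
    // and users above 25,000 fall into the second per-user tier.
    static (int included, int tier1, int tier2) SplitUsers(int totalUsers)
    {
        int included = Math.Min(totalUsers, 999);
        int tier1 = Math.Clamp(totalUsers - 999, 0, 25000 - 999);
        int tier2 = Math.Max(totalUsers - 25000, 0);
        return (included, tier1, tier2);
    }

    static void Main()
    {
        var (included, tier1, tier2) = SplitUsers(28000);
        Console.WriteLine($"Included: {included}, 1000-25000 tier: {tier1}, 25000+ tier: {tier2}");
        // Included: 999, 1000-25000 tier: 24001, 25000+ tier: 3000
    }
}
```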
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | The following sections describe the information you need to collect and the deci [!INCLUDE [communications-gateway-deployment-prerequisites](includes/communications-gateway-deployment-prerequisites.md)] +If you want to set up a lab deployment, you must have deployed a standard deployment or be about to deploy one. You can't use a lab deployment as a standalone Azure Communications Gateway deployment. + ## Arrange onboarding You need a Microsoft onboarding team to deploy Azure Communications Gateway. Azure Communications Gateway includes an onboarding program called [Included Benefits](onboarding.md). If you're not eligible for Included Benefits or you require more support, discuss your requirements with your Microsoft sales representative. We recommend that you use an existing Microsoft Entra tenant for Azure Communica The Operator Connect and Teams Phone Mobile environments inherit identities and configuration permissions from your Microsoft Entra tenant through a Microsoft application called Project Synergy. You must add this application to your Microsoft Entra tenant as part of [Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile](connect-operator-connect.md) (if your tenant does not already contain this application). +> [!IMPORTANT] +> For Operator Connect and Teams Phone Mobile, production deployments and lab deployments must connect to the same Microsoft Entra tenant. Microsoft Teams configuration for your tenant shows configuration for your lab deployments and production deployments together. ++ ## Get access to Azure Communications Gateway for your Azure subscription -Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details. +Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article: -Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step. +1. Contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details. +2. Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step. ## Create a network design If you plan to route emergency calls through Azure Communications Gateway, read - [Operator Connect and Teams Phone Mobile](emergency-calls-operator-connect.md) - [Zoom Phone Cloud Peering](emergency-calls-zoom.md) -## Configure Microsoft Azure Peering Service Voice or ExpressRoute +## Connect your network to Azure -Connect your network to Azure: +Configure connections between your network and Azure: - To configure Microsoft Azure Peering Service Voice (sometimes called MAPS Voice), follow the instructions in [Internet peering for Peering Service Voice walkthrough](../internet-peering/walkthrough-communications-services-partner.md). - To configure ExpressRoute Microsoft Peering, follow the instructions in [Tutorial: Configure peering for ExpressRoute circuit](../../articles/expressroute/expressroute-howto-routing-portal-resource-manager.md). |
communications-gateway | Reliability Communications Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md | Azure Communications Gateway ensures your service is reliable by using Azure red ## Azure Communications Gateway's redundancy model -Each Azure Communications Gateway deployment consists of three separate regions: a Management Region and two Service Regions. This article describes the two different region types and their distinct redundancy models. It covers both regional reliability with availability zones and cross-region reliability with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). +Production Azure Communications Gateway deployments (also called standard deployments) consist of three separate regions: a _management region_ and two _service regions_. Lab deployments consist of one management region and one service region. ++This article describes the two different region types and their distinct redundancy models. It covers both regional reliability with availability zones and cross-region reliability with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). :::image type="complex" source="media/reliability/azure-communications-gateway-management-and-service-regions.png" alt-text="Diagram of two service regions, a management region and two operator sites."::: Diagram showing two operator sites and the Azure regions for Azure Communications Gateway. Azure Communications Gateway has two service regions and one management region. The service regions connect to the management region and to the operator sites. The management region can be colocated with a service region. Each Azure Communications Gateway deployment consists of three separate regions: ## Service regions -Service regions contain the voice and API infrastructure used for handling traffic between your network and your chosen communications services. Each instance of Azure Communications Gateway consists of two service regions that are deployed in an active-active mode (as required by the Operator Connect and Teams Phone Mobile programs). Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level. +Service regions contain the voice and API infrastructure used for handling traffic between your network and your chosen communications services. ++Production Azure Communications Gateway deployments have two service regions that are deployed in an active-active mode (as required by the Operator Connect and Teams Phone Mobile programs). Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level. The service regions also contain the infrastructure for Azure Communications Gateway's [Provisioning API](provisioning-platform.md). > [!TIP]-> You must always have two service regions, even if one of the service regions chosen is in a single-region Azure Geography (for example, Qatar). If you choose a single-region Azure Geography, choose a second Azure region in a different Azure Geography. +> Production deployments must always have two service regions, even if one of the service regions chosen is in a single-region Azure Geography (for example, Qatar). 
If you choose a single-region Azure Geography, choose a second Azure region in a different Azure Geography. -These service regions are identical in operation and provide resiliency to both Zone and Regional failures. Each service region can carry 100% of the traffic using the Azure Communications Gateway instance. As such, end users should still be able to make and receive calls successfully during any Zone or Regional downtime. +The service regions are identical in operation and provide resiliency to both Zone and Regional failures. Each service region can carry 100% of the traffic using the Azure Communications Gateway instance. As such, end-users should still be able to make and receive calls successfully during any Zone or Regional downtime. ++Lab deployments have one service region. ### Call routing requirements Azure Communications Gateway offers a 'successful redial' redundancy model: calls handled by failing peers are terminated, but new calls are routed to healthy peers. This model mirrors the redundancy model provided by Microsoft Teams. -We expect your network to have two geographically redundant sites. Each site should be paired with an Azure Communications Gateway region. The redundancy model relies on cross-connectivity between your network and Azure Communications Gateway service regions. +For production deployments, we expect your network to have two geographically redundant sites. Each site should be paired with an Azure Communications Gateway region. The redundancy model relies on cross-connectivity between your network and Azure Communications Gateway service regions. :::image type="complex" source="media/reliability/azure-communications-gateway-service-region-redundancy.png" alt-text="Diagram of two operator sites and two service regions. Both service regions connect to both sites, with primary and secondary routes."::: Diagram of two operator sites (operator site A and operator site B) and two service regions (service region A and service region B). Operator site A has a primary route to service region A and a secondary route to service region B. Operator site B has a primary route to service region B and a secondary route to service region A. :::image-end::: +Lab deployments must connect to one site in your network. + Each Azure Communications Gateway service region provides an SRV record. This record contains all the SIP peers providing SBC functionality (for routing calls to communications services) within the region. If your Azure Communications Gateway includes Mobile Control Point (MCP), each service region provides an extra SRV record for MCP. Each per-region MCP record contains MCP within the region at top priority and MCP in the other region at a lower priority. When your network routes calls to Azure Communications Gateway's SIP peers for S If your Azure Communications Gateway deployment includes integrated Mobile Control Point (MCP), your network must do as follows for MCP: > [!div class="checklist"]-> - Detect when MCP in a region is unavailable, mark the targets for that region's SRV record as unavailable, and retry periodically to determine when the region is available again. MCP does not respond to SIP OPTIONS. +> - Detect when MCP in a region is unavailable, mark the targets for that region's SRV record as unavailable, and retry periodically to determine when the region is available again. MCP doesn't respond to SIP OPTIONS. > - Handle 5xx responses from MCP according to your organization's policy. 
For example, you could retry the request, or you could allow the call to continue without passing through Azure Communications Gateway and into Microsoft Phone System. The details of this routing behavior are specific to your network. You must agree them with your onboarding team during your integration project. During a zone-wide outage, calls handled by the affected zone are terminated, wi ## Disaster recovery: fallback to other regions - [!INCLUDE [introduction to disaster recovery](../reliability/includes/reliability-disaster-recovery-description-include.md)] - This section describes the behavior of Azure Communications Gateway during a region-wide outage. ### Disaster recovery: cross-region failover for service regions The SBC function in Azure Communications Gateway provides OPTIONS polling to all Provisioning API clients contact Azure Communications Gateway using the base domain name for your deployment. The DNS record for this domain has a time-to-live (TTL) of 60 seconds. When a region fails, Azure updates the DNS record to refer to another region, so clients making a new DNS lookup receive the details of the new region. We recommend ensuring that clients can make a new DNS lookup and retry a request 60 seconds after a timeout or a 5xx response. +> [!TIP] +> Lab deployments don't offer cross-region failover (because they have only one service region). + ### Disaster recovery: cross-region failover for management regions Voice traffic and provisioning through the Number Management Portal are unaffected by failures in the management region, because the corresponding Azure resources are hosted in service regions. Users of the Number Management Portal might need to sign in again. Monitoring services might be temporarily unavailable until service has been rest ## Choosing management and service regions -A single deployment of Azure Communications Gateway is designed to handle your traffic within a geographic area. Deploy both service regions within the same geographic area (for example North America). This model ensures that latency on voice calls remains within the limits required by the Operator Connect and Teams Phone Mobile programs. +A single deployment of Azure Communications Gateway is designed to handle your traffic within a geographic area. Deploy both service regions in a production deployment within the same geographic area (for example North America). This model ensures that latency on voice calls remains within the limits required by the Operator Connect and Teams Phone Mobile programs. Consider the following points when you choose your service region locations: |
communications-gateway | Request Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md | When you raise a request, we'll investigate. If we think the problem is caused b This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). + ## Prerequisites We strongly recommend a Microsoft support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview). |
communications-gateway | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md | +## March 2024 ++### Lab deployments ++From March 2024, you can set up a dedicated lab deployment of Azure Communications Gateway. Lab deployments allow you to make changes and test them without affecting your production deployment. For example, you can: ++- Test configuration changes to Azure Communications Gateway. +- Test new Azure Communications Gateway features and services (for example, configuring Microsoft Teams Direct Routing or Zoom Phone Cloud Peering). +- Test changes in your preproduction network, before rolling them out to your production networks. ++You plan for, order, and deploy lab deployments in the same way as production deployments. You must have deployed a standard deployment or be about to deploy one. You can't use a lab deployment as a standalone Azure Communications Gateway deployment. ++For more information, see [Lab Azure Communications Gateway overview](lab.md). + ## February 2024 ### Flow-through provisioning for Operator Connect and Teams Phone Mobile |
cosmos-db | Quickstart Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md | |
cosmos-db | Quickstart Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md | |
cosmos-db | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md | |
cosmos-db | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md | |
cosmos-db | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md | |
cosmos-db | Sdk Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-observability.md | Distributed tracing is available in the following SDKs: |SDK |Supported version |Notes | |-|||-|.NET v3 SDK |[>= `3.33.0-preview`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview) |This feature is on by default if you're using a supported preview SDK version. You can disable tracing by setting `IsDistributedTracingEnabled = false` in `CosmosClientOptions`. | +|.NET v3 SDK |[>= `3.36.0`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.36.0) |This feature is available in both preview and non-preview versions. For non-preview versions it's off by default. You can enable tracing by setting `DisableDistributedTracing = false` in `CosmosClientOptions.CosmosClientTelemetryOptions`. | +|.NET v3 SDK preview |[>= `3.33.0-preview`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview) |This feature is available in both preview and non-preview versions. For preview versions it's on by default. You can disable tracing by setting `DisableDistributedTracing = true` in `CosmosClientOptions.CosmosClientTelemetryOptions`. | |Java v4 SDK |[>= `4.43.0`](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.43.0) | | ## Trace attributes If you've configured logs in your trace provider, you can automatically get [dia ### [.NET](#tab/dotnet) -In addition to getting diagnostic logs for failed requests, point operations that take over 100 ms and query operations that take over 500 ms also generate diagnostics. You can configure the log level to control which diagnostics logs you receive. +In addition to getting diagnostic logs for failed requests, you can configure different latency thresholds for when to collect diagnostics from successful requests. The default values are 100 ms for point operations and 500 ms for non-point operations and can be adjusted through client options. ++```csharp +CosmosClientOptions options = new CosmosClientOptions() +{ + CosmosClientTelemetryOptions = new CosmosClientTelemetryOptions() + { + DisableDistributedTracing = false, + CosmosThresholdOptions = new CosmosThresholdOptions() + { + PointOperationLatencyThreshold = TimeSpan.FromMilliseconds(100), + NonPointOperationLatencyThreshold = TimeSpan.FromMilliseconds(500) + } + }, +}; +``` ++You can configure the log level to control which diagnostics logs you receive. |Log Level |Description | |-|| |Error | Logs for errors only. |-|Warning | Logs for errors and high latency requests. | +|Warning | Logs for errors and high latency requests based on configured thresholds. | |Information | There are no specific information level logs. Logs in this level are the same as using Warning. | Depending on your application environment, there are different ways to configure the log level. Here's a sample configuration in `appSettings.json`: |
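The telemetry and threshold options shown above take effect when they're passed to the client at construction time. A minimal sketch of that wiring, with placeholder account values (substitute your own endpoint and key or credential):

```csharp
using System;
using Microsoft.Azure.Cosmos;

class ClientSetup
{
    static CosmosClient CreateClient()
    {
        CosmosClientOptions options = new CosmosClientOptions
        {
            CosmosClientTelemetryOptions = new CosmosClientTelemetryOptions
            {
                // Keep distributed tracing on and only collect diagnostics for slow successful operations.
                DisableDistributedTracing = false,
                CosmosThresholdOptions = new CosmosThresholdOptions
                {
                    PointOperationLatencyThreshold = TimeSpan.FromMilliseconds(100),
                    NonPointOperationLatencyThreshold = TimeSpan.FromMilliseconds(500)
                }
            }
        };

        // Placeholder endpoint and key for illustration only.
        return new CosmosClient("https://<account-name>.documents.azure.com:443/", "<account-key>", options);
    }
}
```

The client then emits traces and diagnostics to whatever tracing and logging providers the application has configured, as described above.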
cost-management-billing | Limited Time Central Sweden | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-central-sweden.md | +> [!NOTE] +> This limited-time offer expired on March 1, 2024. You can still purchase Azure Reserved VM Instances at regular discounted prices. For more information about reservation discount, see [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md). + ## Purchase the limited time offer To take advantage of this limited-time offer, [purchase](https://aka.ms/azure/pricing/SwedenCentral/Purchase1) a one-year term for Azure Reserved Virtual Machine Instances for qualified VM instances in the Sweden Central region. Enterprise Agreement and Microsoft Customer Agreement billing readers can view a These terms and conditions (hereinafter referred to as "terms") govern the limited time offer ("offer") provided by Microsoft to customers purchasing a one-year Azure Reserved VM Instance in Sweden Central between September 1, 2023 (12 AM Pacific Standard Time) – February 29, 2024 (11:59 PM Pacific Standard Time), for any of the following VM series: -- Dadsv5-- Dasv5-- Ddsv5-- Ddv5-- Dldsv5-- Dlsv5-- Dsv5-- Dv5-- Eadsv5-- Easv5-- Ebdsv5-- Ebsv5-- Edsv5-- Edv5-- Esv5-- Ev5--The offer provides them with a discount up to 50% compared to pay-as-you-go pricing. The savings doesn't include operating system costs. Actual savings may vary based on instance type or usage. +- `Dadsv5` +- `Dasv5` +- `Ddsv5` +- `Ddv5` +- `Dldsv5` +- `Dlsv5` +- `Dsv5` +- `Dv5` +- `Eadsv5` +- `Easv5` +- `Ebdsv5` +- `Ebsv5` +- `Edsv5` +- `Edv5` +- `Esv5` +- `Ev5` ++The offer provides them with a discount up to 50% compared to pay-as-you-go pricing. The savings doesn't include operating system costs. Actual savings might vary based on instance type or usage. **Eligibility** - The Offer is open to individuals who meet the following criteria: The offer provides them with a discount up to 50% compared to pay-as-you-go pric **Offer details** - Upon successful purchase and payment for the one-year Azure Reserved VM Instance in Sweden Central for one or more of the qualified VMs during the specified period, the discount applies automatically to the number of running virtual machines in Sweden Central that match the reservation scope and attributes. You don't need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the compute part of your VM usage. For more information about how to pay and save with an Azure Reserved VM Instance, see [Prepay for Azure virtual machines to save money](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3). -- Additional taxes may apply.-- Payment will be processed using the payment method on file for the selected subscriptions.+- Other taxes might apply. +- Payment is processed using the payment method on file for the selected subscriptions. - Estimated savings are calculated based on your current on-demand rate. **Qualifying purchase** - To be eligible for the 50% discount, customers must make a purchase of the one-year Azure Reserved Virtual Machine Instances for one of the following qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024. 
-- Dadsv5-- Dasv5-- Ddsv5-- Ddv5-- Dldsv5-- Dlsv5-- Dsv5-- Dv5-- Eadsv5-- Easv5-- Ebdsv5-- Ebsv5-- Edsv5-- Edv5-- Esv5-- Ev5+- `Dadsv5` +- `Dasv5` +- `Ddsv5` +- `Ddv5` +- `Dldsv5` +- `Dlsv5` +- `Dsv5` +- `Dv5` +- `Eadsv5` +- `Easv5` +- `Ebdsv5` +- `Ebsv5` +- `Edsv5` +- `Edv5` +- `Esv5` +- `Ev5` Instance size flexibility is available for these VMs. For more information about Instance Size Flexibility, see [Virtual machine size flexibility](../../virtual-machines/reserved-vm-instance-size-flexibility.md?source=azlto7). Instance size flexibility is available for these VMs. For more information about - The discount only applies to resources associated with subscriptions purchased through Enterprise, Cloud Solution Provider (CSP), Microsoft Customer Agreement and individual plans with pay-as-you-go rates. - A reservation discount is "use-it-or-lose-it." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours. - When you deallocate, delete, or scale the number of VMs, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.-- Stopped VMs are billed and continue to use reservation hours. Deallocate or delete VM resources or scale-in other VMs to use your available reservation hours with other workloads.+- Stopped VMs are billed and continue to use reservation hours. To use your available reservation hours with other workloads, deallocate or delete VM resources or scale-in other VMs. - For more information about how Azure Reserved VM Instance discounts are applied, see [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md?source=azlto4). **Exchanges and refunds** - The offer follows standard exchange and refund policies for reservations. For more information about exchanges and refunds, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md?source=azlto6). Instance size flexibility is available for these VMs. For more information about **Termination or modification** - Microsoft reserves the right to modify, suspend, or terminate the offer at any time without prior notice. -If you have purchased the one-year Azure Reserved Virtual Machine Instances for the qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024 you'll continue to get the discount throughout the one-year term, even if the offer is canceled. +If you purchased the one-year Azure Reserved Virtual Machine Instances for the qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024 you'll continue to get the discount throughout the one-year term, even if the offer is canceled. By participating in the offer, customers agree to be bound by these terms and the decisions of Microsoft. Microsoft reserves the right to disqualify any customer who violates these terms or engages in any fraudulent or harmful activities related to the offer. |
cost-management-billing | Limited Time Us West | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-us-west.md | +> [!NOTE] +> This limited-time offer expired on March 1, 2024. You can still purchase Azure Reserved VM Instances at regular discounted prices. For more information about reservation discount, see [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md). + ## Purchase the limited time offer To take advantage of this limited-time offer, [purchase](https://aka.ms/azure/pricing/USWest/Purchase) a one-year term for Azure Reserved Virtual Machine Instances for qualified `Dv3s` instances in the US West region. |
data-lake-store | Data Lake Store Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-access-control.md | - Title: Overview of access control in Data Lake Storage Gen1 | Microsoft Docs -description: Learn about the basics of the access control model of Azure Data Lake Storage Gen1, which derives from HDFS. ---- Previously updated : 03/26/2018----# Access control in Azure Data Lake Storage Gen1 --Azure Data Lake Storage Gen1 implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. This article summarizes the basics of the access control model for Data Lake Storage Gen1. --## Access control lists on files and folders --There are two kinds of access control lists (ACLs), **Access ACLs** and **Default ACLs**. --* **Access ACLs**: These control access to an object. Files and folders both have Access ACLs. --* **Default ACLs**: A "template" of ACLs associated with a folder that determine the Access ACLs for any child items that are created under that folder. Files do not have Default ACLs. ---Both Access ACLs and Default ACLs have the same structure. --> [!NOTE] -> Changing the Default ACL on a parent does not affect the Access ACL or Default ACL of child items that already exist. -> -> --## Permissions --The permissions on a filesystem object are **Read**, **Write**, and **Execute**, and they can be used on files and folders as shown in the following table: --| | File | Folder | -||-|-| -| **Read (R)** | Can read the contents of a file | Requires **Read** and **Execute** to list the contents of the folder| -| **Write (W)** | Can write or append to a file | Requires **Write** and **Execute** to create child items in a folder | -| **Execute (X)** | Does not mean anything in the context of Data Lake Storage Gen1 | Required to traverse the child items of a folder | --### Short forms for permissions --**RWX** is used to indicate **Read + Write + Execute**. A more condensed numeric form exists in which **Read=4**, **Write=2**, and **Execute=1**, the sum of which represents the permissions. Following are some examples. --| Numeric form | Short form | What it means | -|--||| -| 7 | `RWX` | Read + Write + Execute | -| 5 | `R-X` | Read + Execute | -| 4 | `R--` | Read | -| 0 | `` | No permissions | ---### Permissions do not inherit --In the POSIX-style model that's used by Data Lake Storage Gen1, permissions for an item are stored on the item itself. In other words, permissions for an item cannot be inherited from the parent items. --## Common scenarios related to permissions --Following are some common scenarios to help you understand which permissions are needed to perform certain operations on a Data Lake Storage Gen1 account. --| Operation | Object | / | Seattle/ | Portland/ | Data.txt | -|--||--||-|-| -| Read | Data.txt | `--X` | `--X` | `--X` | `R--` | -| Append to | Data.txt | `--X` | `--X` | `--X` | `-W-` | -| Delete | Data.txt | `--X` | `--X` | `-WX` | `` | -| Create | Data.txt | `--X` | `--X` | `-WX` | `` | -| List | / | `R-X` | `` | `` | `` | -| List | /Seattle/ | `--X` | `R-X` | `` | `` | -| List | /Seattle/Portland/ | `--X` | `--X` | `R-X` | `` | ---> [!NOTE] -> Write permissions on the file are not required to delete it as long as the previous two conditions are true. 
-> -> ---## Users and identities --Every file and folder has distinct permissions for these identities: --* The owning user -* The owning group -* Named users -* Named groups -* All other users --The identities of users and groups are Microsoft Entra identities. So unless otherwise noted, a "user," in the context of Data Lake Storage Gen1, can either mean a Microsoft Entra user or a Microsoft Entra security group. --### The super-user --A super-user has the most rights of all the users in the Data Lake Storage Gen1 account. A super-user: --* Has RWX Permissions to **all** files and folders. -* Can change the permissions on any file or folder. -* Can change the owning user or owning group of any file or folder. --All users that are part of the **Owners** role for a Data Lake Storage Gen1 account are automatically a super-user. --### The owning user --The user who created the item is automatically the owning user of the item. An owning user can: --* Change the permissions of a file that is owned. -* Change the owning group of a file that is owned, as long as the owning user is also a member of the target group. --> [!NOTE] -> The owning user *cannot* change the owning user of a file or folder. Only super-users can change the owning user of a file or folder. -> -> --### The owning group --**Background** --In the POSIX ACLs, every user is associated with a "primary group." For example, user "alice" might belong to the "finance" group. Alice might also belong to multiple groups, but one group is always designated as her primary group. In POSIX, when Alice creates a file, the owning group of that file is set to her primary group, which in this case is "finance." The owning group otherwise behaves similarly to assigned permissions for other users/groups. --Because there is no ΓÇ£primary groupΓÇ¥ associated to users in Data Lake Storage Gen1, the owning group is assigned as below. --**Assigning the owning group for a new file or folder** --* **Case 1**: The root folder "/". This folder is created when a Data Lake Storage Gen1 account is created. In this case, the owning group is set to an all-zero GUID. This value does not permit any access. It is a placeholder until such time a group is assigned. -* **Case 2** (Every other case): When a new item is created, the owning group is copied from the parent folder. --**Changing the owning group** --The owning group can be changed by: -* Any super-users. -* The owning user, if the owning user is also a member of the target group. --> [!NOTE] -> The owning group *cannot* change the ACLs of a file or folder. -> -> For accounts created on or before September 2018, the owning group was set to the user who created the account in the case of the root folder for **Case 1**, above. A single user account is not valid for providing permissions via the owning group, thus no permissions are granted by this default setting. You can assign this permission to a valid user group. ---## Access check algorithm --The following pseudocode represents the access check algorithm for Data Lake Storage Gen1 accounts. --``` -def access_check( user, desired_perms, path ) : - # access_check returns true if user has the desired permissions on the path, false otherwise - # user is the identity that wants to perform an operation on path - # desired_perms is a simple integer with values from 0 to 7 ( R=4, W=2, X=1). User desires these permissions - # path is the file or folder - # Note: the "sticky bit" is not illustrated in this algorithm - -# Handle super users. 
- if (is_superuser(user)) : - return True -- # Handle the owning user. Note that mask IS NOT used. - entry = get_acl_entry( path, OWNER ) - if (user == entry.identity) - return ( (desired_perms & entry.permissions) == desired_perms ) -- # Handle the named users. Note that mask IS used. - entries = get_acl_entries( path, NAMED_USER ) - for entry in entries: - if (user == entry.identity ) : - mask = get_mask( path ) - return ( (desired_perms & entry.permmissions & mask) == desired_perms) -- # Handle named groups and owning group - member_count = 0 - perms = 0 - entries = get_acl_entries( path, NAMED_GROUP | OWNING_GROUP ) - for entry in entries: - if (user_is_member_of_group(user, entry.identity)) : - member_count += 1 - perms | = entry.permissions - if (member_count>0) : - return ((desired_perms & perms & mask ) == desired_perms) - - # Handle other - perms = get_perms_for_other(path) - mask = get_mask( path ) - return ( (desired_perms & perms & mask ) == desired_perms) -``` --### The mask --As illustrated in the Access Check Algorithm, the mask limits access for **named users**, the **owning group**, and **named groups**. --> [!NOTE] -> For a new Data Lake Storage Gen1 account, the mask for the Access ACL of the root folder ("/") defaults to RWX. -> -> --### The sticky bit --The sticky bit is a more advanced feature of a POSIX filesystem. In the context of Data Lake Storage Gen1, it is unlikely that the sticky bit will be needed. In summary, if the sticky bit is enabled on a folder, a child item can only be deleted or renamed by the child item's owning user. --The sticky bit is not shown in the Azure portal. --## Default permissions on new files and folders --When a new file or folder is created under an existing folder, the Default ACL on the parent folder determines: --- A child folderΓÇÖs Default ACL and Access ACL.-- A child file's Access ACL (files do not have a Default ACL).--### umask --When creating a file or folder, umask is used to modify how the default ACLs are set on the child item. umask is a 9-bit value on parent folders that contains an RWX value for **owning user**, **owning group**, and **other**. --The umask for Azure Data Lake Storage Gen1 is a constant value set to 007. This value translates to --| umask component | Numeric form | Short form | Meaning | -||--||| -| umask.owning_user | 0 | `` | For owning user, copy the parent's Default ACL to the child's Access ACL | -| umask.owning_group | 0 | `` | For owning group, copy the parent's Default ACL to the child's Access ACL | -| umask.other | 7 | `RWX` | For other, remove all permissions on the child's Access ACL | --The umask value used by Azure Data Lake Storage Gen1 effectively means that the value for other is never transmitted by default on new children - regardless of what the Default ACL indicates. --The following pseudocode shows how the umask is applied when creating the ACLs for a child item. --``` -def set_default_acls_for_new_child(parent, child): - child.acls = [] - for entry in parent.acls : - new_entry = None - if (entry.type == OWNING_USER) : - new_entry = entry.clone(perms = entry.perms & (~umask.owning_user)) - elif (entry.type == OWNING_GROUP) : - new_entry = entry.clone(perms = entry.perms & (~umask.owning_group)) - elif (entry.type == OTHER) : - new_entry = entry.clone(perms = entry.perms & (~umask.other)) - else : - new_entry = entry.clone(perms = entry.perms ) - child_acls.add( new_entry ) -``` --## Common questions about ACLs in Data Lake Storage Gen1 --### Do I have to enable support for ACLs? 
--No. Access control via ACLs is always on for a Data Lake Storage Gen1 account. --### Which permissions are required to recursively delete a folder and its contents? --* The parent folder must have **Write + Execute** permissions. -* The folder to be deleted, and every folder within it, requires **Read + Write + Execute** permissions. --> [!NOTE] -> You do not need Write permissions to delete files in folders. Also, the root folder "/" can **never** be deleted. -> -> --### Who is the owner of a file or folder? --The creator of a file or folder becomes the owner. --### Which group is set as the owning group of a file or folder at creation? --The owning group is copied from the owning group of the parent folder under which the new file or folder is created. --### I am the owning user of a file but I donΓÇÖt have the RWX permissions I need. What do I do? --The owning user can change the permissions of the file to give themselves any RWX permissions they need. --### When I look at ACLs in the Azure portal I see user names but through APIs, I see GUIDs, why is that? --Entries in the ACLs are stored as GUIDs that correspond to users in Microsoft Entra ID. The APIs return the GUIDs as is. The Azure portal tries to make ACLs easier to use by translating the GUIDs into friendly names when possible. --### Why do I sometimes see GUIDs in the ACLs when I'm using the Azure portal? --A GUID is shown when the user doesn't exist in Microsoft Entra anymore. Usually this happens when the user has left the company or if their account has been deleted in Microsoft Entra ID. Also, ensure that you're using the right ID for setting ACLs (details in question below). --### When using service principal, what ID should I use to set ACLs? --On the Azure Portal, go to **Microsoft Entra ID -> Enterprise applications** and select your application. The **Overview** tab should display an Object ID and this is what should be used when adding ACLs for data access (and not Application Id). --### Does Data Lake Storage Gen1 support inheritance of ACLs? --No, but Default ACLs can be used to set ACLs for child files and folder newly created under the parent folder. --### What are the limits for ACL entries on files and folders? --32 ACLs can be set per file and per directory. Access and default ACLs each have their own 32 ACL entry limit. Use security groups for ACL assignments if possible. By using groups, you're less likely to exceed the maximum number of ACL entries per file or directory. --### Where can I learn more about POSIX access control model? --* [POSIX Access Control Lists on Linux](https://www.linux.com/news/posix-acls-linux) -* [HDFS permission guide](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html) -* [POSIX FAQ](https://www.opengroup.org/austin/papers/posix_faq.html) -* [POSIX 1003.1 2008](https://standards.ieee.org/wp-content/uploads/import/documents/interpretations/1003.1-2008_interp.pdf) -* [POSIX 1003.1 2013](https://pubs.opengroup.org/onlinepubs/9699919799.2013edition/) -* [POSIX 1003.1 2016](https://pubs.opengroup.org/onlinepubs/9699919799.2016edition/) -* [POSIX ACL on Ubuntu](https://help.ubuntu.com/community/FilePermissionsACLs) --## See also --* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md) |
data-lake-store | Data Lake Store Archive Eventhub Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-archive-eventhub-capture.md | - Title: Capture data from Event Hubs to Azure Data Lake Storage Gen1 -description: Learn how to use Azure Data Lake Storage Gen1 to capture data received by Azure Event Hubs. Begin by verifying the prerequisites. ---- Previously updated : 05/29/2018----# Use Azure Data Lake Storage Gen1 to capture data from Event Hubs --Learn how to use Azure Data Lake Storage Gen1 to capture data received by Azure Event Hubs. --## Prerequisites --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md). --* **An Event Hubs namespace**. For instructions, see [Create an Event Hubs namespace](../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace). Make sure the Data Lake Storage Gen1 account and the Event Hubs namespace are in the same Azure subscription. ---## Assign permissions to Event Hubs --In this section, you create a folder within the account where you want to capture the data from Event Hubs. You also assign permissions to Event Hubs so that it can write data into a Data Lake Storage Gen1 account. --1. Open the Data Lake Storage Gen1 account where you want to capture data from Event Hubs and then click on **Data Explorer**. -- ![Data Lake Storage Gen1 data explorer](./media/data-lake-store-archive-eventhub-capture/data-lake-store-open-data-explorer.png "Data Lake Storage Gen1 data explorer") --1. Click **New Folder** and then enter a name for folder where you want to capture the data. -- ![Create a new folder in Data Lake Storage Gen1](./media/data-lake-store-archive-eventhub-capture/data-lake-store-create-new-folder.png "Create a new folder in Data Lake Storage Gen1") --1. Assign permissions at the root of Data Lake Storage Gen1. -- a. Click **Data Explorer**, select the root of the Data Lake Storage Gen1 account, and then click **Access**. -- ![Screenshot of the Data explorer with the root of the account and the Access option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-permissions-to-root.png "Assign permissions for the Data Lake Storage Gen1 root") -- b. Under **Access**, click **Add**, click **Select User or Group**, and then search for `Microsoft.EventHubs`. -- ![Screenshot of the Access page with the Add option, Select User or Group option, and Microsoft Eventhubs option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp.png "Assign permissions for the Data Lake Storage Gen1 root") - - Click **Select**. -- c. Under **Assign Permissions**, click **Select Permissions**. Set **Permissions** to **Execute**. Set **Add to** to **This folder and all children**. Set **Add as** to **An access permission entry and a default permission entry**. -- > [!IMPORTANT] - > When creating a new folder hierarchy for capturing data received by Azure Event Hubs, this is an easy way to ensure access to the destination folder. However, adding permissions to all children of a top level folder with many child files and folders may take a long time. 
If your root folder contains a large number of files and folders, it may be faster to add **Execute** permissions for `Microsoft.EventHubs` individually to each folder in the path to your final destination folder. -- ![Screenshot of the Assign Permissions section with the Select Permissions option called out. The Select Permissions section is next to it with the Execute option, Add to option, and Add as option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp1.png "Assign permissions for the Data Lake Storage Gen1 root") -- Click **OK**. --1. Assign permissions for the folder under the Data Lake Storage Gen1 account where you want to capture data. -- a. Click **Data Explorer**, select the folder in the Data Lake Storage Gen1 account, and then click **Access**. -- ![Screenshot of the Data explorer with a folder in the account and the Access option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-permissions-to-folder.png "Assign permissions for the Data Lake Storage Gen1 folder") -- b. Under **Access**, click **Add**, click **Select User or Group**, and then search for `Microsoft.EventHubs`. -- ![Screenshot of the Data explorer Access page with the Add option, Select User or Group option, and Microsoft Eventhubs option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp.png "Assign permissions for the Data Lake Storage Gen1 folder") - - Click **Select**. -- c. Under **Assign Permissions**, click **Select Permissions**. Set **Permissions** to **Read, Write,** and **Execute**. Set **Add to** to **This folder and all children**. Finally, set **Add as** to **An access permission entry and a default permission entry**. -- ![Screenshot of the Assign Permissions section with the Select Permissions option called out. The Select Permissions section is next to it with the Read, Write, and Execute options, the Add to option, and the Add as option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp-folder.png "Assign permissions for the Data Lake Storage Gen1 folder") - - Click **OK**. --## Configure Event Hubs to capture data to Data Lake Storage Gen1 --In this section, you create an Event Hub within an Event Hubs namespace. You also configure the Event Hub to capture data to an Azure Data Lake Storage Gen1 account. This section assumes that you have already created an Event Hubs namespace. --1. From the **Overview** pane of the Event Hubs namespace, click **+ Event Hub**. -- ![Screenshot of the Overview pane with the Event Hub option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-create-event-hub.png "Create Event Hub") --1. Provide the following values to configure Event Hubs to capture data to Data Lake Storage Gen1. -- ![Screenshot of the Create Event Hub dialog box with the Name text box, the Capture option, the Capture Provider option, the Select Data Lake Store option, and the Data Lake Path option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-configure-eventhub.png "Create Event Hub") -- a. Provide a name for the Event Hub. - - b. For this tutorial, set **Partition Count** and **Message Retention** to the default values. - - c. Set **Capture** to **On**. Set the **Time Window** (how frequently to capture) and **Size Window** (data size to capture). - - d. 
For **Capture Provider**, select **Azure Data Lake Store** and then select the Data Lake Storage Gen1 account you created earlier. For **Data Lake Path**, enter the name of the folder you created in the Data Lake Storage Gen1 account. You only need to provide the relative path to the folder. -- e. Leave **Sample capture file name formats** set to the default value. This option governs the folder structure that is created under the capture folder. -- f. Click **Create**. --## Test the setup --You can now test the solution by sending data to the Azure event hub. Follow the instructions at [Send events to Azure Event Hubs](../event-hubs/event-hubs-dotnet-framework-getstarted-send.md). Once you start sending the data, you see the data reflected in Data Lake Storage Gen1 using the folder structure you specified. For example, you see a folder structure, as shown in the following screenshot, in your Data Lake Storage Gen1 account. --![Sample EventHub data in Data Lake Storage Gen1](./media/data-lake-store-archive-eventhub-capture/data-lake-store-eventhub-data-sample.png "Sample EventHub data in Data Lake Storage Gen1") --> [!NOTE] -> Even if you do not have messages coming into Event Hubs, Event Hubs writes empty files with just the headers into the Data Lake Storage Gen1 account. The files are written at the same time interval that you provided while creating the event hub. -> -> --## Analyze data in Data Lake Storage Gen1 --Once the data is in Data Lake Storage Gen1, you can run analytical jobs to process and crunch the data. See the [USQL Avro Example](https://github.com/Azure/usql/tree/master/Examples/AvroExamples) to learn how to do this using Azure Data Lake Analytics. A minimal sketch of inspecting a captured file locally also appears after the **See also** links below. - --## See also -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) -* [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md) |
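Event Hubs Capture writes the captured data as Avro files under the folder structure you configured. As a rough, hedged illustration of inspecting one captured file outside of Data Lake Analytics, the following Python sketch assumes the third-party `fastavro` package (`pip install fastavro`) and a capture file that has already been downloaded locally; the file name is a hypothetical placeholder, and the field names reflect the standard capture schema:

```python
# Minimal sketch: print a few fields from a locally downloaded capture file.
# Assumes `pip install fastavro`; the file name below is a placeholder.
from fastavro import reader

CAPTURE_FILE = "12.avro"  # hypothetical local copy of one captured file

with open(CAPTURE_FILE, "rb") as avro_file:
    for record in reader(avro_file):
        # Capture records typically include metadata such as SequenceNumber,
        # Offset, and EnqueuedTimeUtc alongside the event Body (bytes).
        body = record.get("Body")
        print(record.get("SequenceNumber"), body[:80] if body else body)
```

For production-scale processing, prefer the Data Lake Analytics approach linked in the article above.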
data-lake-store | Data Lake Store Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-best-practices.md | - Title: Best practices for using Azure Data Lake Storage Gen1 | Microsoft Docs -description: Learn the best practices about data ingestion, data security, and performance related to using Azure Data Lake Storage Gen1 (previously known as Azure Data Lake Store) ------ Previously updated : 06/27/2018----# Best practices for using Azure Data Lake Storage Gen1 ---In this article, you learn about best practices and considerations for working with Azure Data Lake Storage Gen1. This article provides information around security, performance, resiliency, and monitoring for Data Lake Storage Gen1. Before Data Lake Storage Gen1, working with truly big data in services like Azure HDInsight was complex. You had to shard data across multiple Blob storage accounts so that petabyte storage and optimal performance at that scale could be achieved. With Data Lake Storage Gen1, most of the hard limits for size and performance are removed. However, there are still some considerations that this article covers so that you can get the best performance with Data Lake Storage Gen1. --## Security considerations --Azure Data Lake Storage Gen1 offers POSIX access controls and detailed auditing for Microsoft Entra users, groups, and service principals. These access controls can be set on existing files and folders. The access controls can also be used to create defaults that can be applied to new files or folders. When permissions are set on existing folders and child objects, the permissions need to be propagated recursively on each object. If there are a large number of files, propagating the permissions can take a long time; propagation typically processes between 30 and 50 objects per second. Hence, plan the folder structure and user groups appropriately. Otherwise, you can run into unanticipated delays and issues when you work with your data. --Assume you have a folder with 100,000 child objects. If you take the lower bound of 30 objects processed per second, updating the permissions for the whole folder could take about an hour. More details on Data Lake Storage Gen1 ACLs are available at [Access control in Azure Data Lake Storage Gen1](data-lake-store-access-control.md). For improved performance on assigning ACLs recursively, you can use the Azure Data Lake Command-Line Tool. The tool uses multiple threads and recursive navigation logic to quickly apply ACLs to millions of files. The tool is available for Linux and Windows, and the [documentation](https://github.com/Azure/data-lake-adlstool) and [downloads](https://aka.ms/adlstool-download) for this tool can be found on GitHub. These same performance improvements can be enabled by your own tools written with the Data Lake Storage Gen1 [.NET](data-lake-store-data-operations-net-sdk.md) and [Java](data-lake-store-get-started-java-sdk.md) SDKs. --### Use security groups versus individual users --When working with big data in Data Lake Storage Gen1, most likely a service principal is used to allow services such as Azure HDInsight to work with the data. However, there might be cases where individual users need access to the data as well. In such cases, you must use Microsoft Entra ID [security groups](data-lake-store-secure-data.md#create-security-groups-in-azure-active-directory) instead of assigning individual users to folders and files. 
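As a hedged illustration of the group-based approach (not part of the original article), the following sketch creates a security group and grants it read and execute access on a folder with the Azure CLI; the account, folder, and group names are placeholders, and it assumes your CLI version includes the `az dls` commands for Data Lake Storage Gen1:

```console
# Create a security group (placeholder names).
az ad group create --display-name ReadOnlyUsers --mail-nickname ReadOnlyUsers

# Grant the group read and execute access on a folder in the account.
# Prefixing the ACL spec with "default:" would also set it as a default entry for new children.
az dls fs access set-entry --account mydatalakestorage --path /data --acl-spec "group:<object-id-of-ReadOnlyUsers>:r-x"
```
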
--Once a security group is assigned permissions, adding or removing users from the group doesn't require any updates to Data Lake Storage Gen1. This also helps ensure you don't exceed the limit of [32 Access and Default ACLs](../azure-resource-manager/management/azure-subscription-service-limits.md#data-lake-storage-limits) (this includes the four POSIX-style ACLs that are always associated with every file and folder: [the owning user](data-lake-store-access-control.md#the-owning-user), [the owning group](data-lake-store-access-control.md#the-owning-group), [the mask](data-lake-store-access-control.md#the-mask), and other). --### Security for groups --As discussed, when users need access to Data Lake Storage Gen1, it's best to use Microsoft Entra security groups. Some recommended groups to start with might be **ReadOnlyUsers**, **WriteAccessUsers**, and **FullAccessUsers** for the root of the account, and even separate ones for key subfolders. If there are any other anticipated groups of users that might be added later, but have not been identified yet, you might consider creating dummy security groups that have access to certain folders. Using security groups ensures that you can avoid long processing times later when assigning new permissions to thousands of files. --### Security for service principals --Microsoft Entra service principals are typically used by services like Azure HDInsight to access data in Data Lake Storage Gen1. Depending on the access requirements across multiple workloads, there might be some considerations to ensure security inside and outside of the organization. For many customers, a single Microsoft Entra service principal might be adequate, and it can have full permissions at the root of the Data Lake Storage Gen1 account. Other customers might require multiple clusters with different service principals where one cluster has full access to the data, and another cluster with only read access. As with the security groups, you might consider making a service principal for each anticipated scenario (read, write, full) once a Data Lake Storage Gen1 account is created. --### Enable the Data Lake Storage Gen1 firewall with Azure service access --Data Lake Storage Gen1 supports the option of turning on a firewall and limiting access only to Azure services, which is recommended to reduce the attack surface from outside intrusions. The firewall can be enabled on the Data Lake Storage Gen1 account in the Azure portal via the **Firewall** > **Enable Firewall (ON)** > **Allow access to Azure services** options. --![Firewall settings in Data Lake Storage Gen1](./media/data-lake-store-best-practices/data-lake-store-firewall-setting.png "Firewall settings in Data Lake Storage Gen1") --Once the firewall is enabled, only Azure services such as HDInsight, Data Factory, and Azure Synapse Analytics have access to Data Lake Storage Gen1. Due to the internal network address translation used by Azure, the Data Lake Storage Gen1 firewall does not support restricting specific services by IP and is only intended for restrictions of endpoints outside of Azure, such as on-premises. --## Performance and scale considerations --One of the most powerful features of Data Lake Storage Gen1 is that it removes the hard limits on data throughput. Removing the limits enables customers to grow their data size and the accompanying performance requirements without needing to shard the data. 
One of the most important considerations for optimizing Data Lake Storage Gen1 performance is that it performs best when given parallelism. --### Improve throughput with parallelism --Consider giving 8-12 threads per core for the most optimal read/write throughput. Reads and writes block on a single thread, so more threads allow higher concurrency on the VM. To ensure that levels are healthy and parallelism can be increased, be sure to monitor the VM's CPU utilization. --### Avoid small file sizes --POSIX permissions and auditing in Data Lake Storage Gen1 come with an overhead that becomes apparent when working with numerous small files. As a best practice, batch your data into larger files instead of writing thousands or millions of small files to Data Lake Storage Gen1. Avoiding small file sizes can have multiple benefits, such as: --* Lowering the authentication checks across multiple files -* Reduced open file connections -* Faster copying/replication -* Fewer files to process when updating Data Lake Storage Gen1 POSIX permissions --Depending on what services and workloads are using the data, a good size to consider for files is 256 MB or greater. If the file sizes cannot be batched when landing in Data Lake Storage Gen1, you can have a separate compaction job that combines these files into larger ones. For more information and recommendations on file sizes and organizing the data in Data Lake Storage Gen1, see [Structure your data set](data-lake-store-performance-tuning-guidance.md#structure-your-data-set). --### Large file sizes and potential performance impact --Although Data Lake Storage Gen1 supports large files up to petabytes in size, for optimal performance and depending on the process reading the data, it might not be ideal to go above 2 GB on average. For example, when using **Distcp** to copy data between locations or different storage accounts, files are the finest level of granularity used to determine map tasks. So, if you are copying 10 files that are 1 TB each, at most 10 mappers are allocated. Also, if you have lots of files with mappers assigned, initially the mappers work in parallel to move large files. However, as the job starts to wind down, only a few mappers remain allocated and you can be stuck with a single mapper assigned to a large file. Microsoft has submitted improvements to Distcp to address this issue in future Hadoop versions. --Another example to consider is when using Azure Data Lake Analytics with Data Lake Storage Gen1. Depending on the processing done by the extractor, some files that cannot be split (for example, XML, JSON) could suffer in performance when greater than 2 GB. In cases where files can be split by an extractor (for example, CSV), large files are preferred. --### Capacity plan for your workload --Azure Data Lake Storage Gen1 removes the hard IO throttling limits that are placed on Blob storage accounts. However, there are still soft limits that need to be considered. The default ingress/egress throttling limits meet the needs of most scenarios. If your workload needs to have the limits increased, work with Microsoft support. Also, look at the limits during the proof-of-concept stage so that IO throttling limits are not hit during production. If that happens, it might require waiting for a manual increase from the Microsoft engineering team. If IO throttling occurs, Azure Data Lake Storage Gen1 returns a 429 error code; such requests should ideally be retried with an appropriate exponential backoff policy. 
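To make the retry guidance concrete, here is a minimal, generic sketch of exponential backoff with jitter; it is not tied to any particular SDK, and `ThrottledError` is a hypothetical placeholder for however your client surfaces an HTTP 429 response:

```python
import random
import time


class ThrottledError(Exception):
    """Hypothetical placeholder: raised by your I/O wrapper when the service returns HTTP 429."""


def retry_with_backoff(operation, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Run `operation`, retrying throttled calls with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except ThrottledError:
            if attempt == max_retries:
                raise
            # Double the wait on each retry, cap it, and add jitter to avoid retry storms.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay / 2))
```

The same pattern applies whether you call the REST API directly or go through one of the SDKs.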
--### Optimize "writes" with the Data Lake Storage Gen1 driver buffer --To optimize performance and reduce IOPS when writing to Data Lake Storage Gen1 from Hadoop, perform write operations as close to the Data Lake Storage Gen1 driver buffer size as possible. Try not to exceed the buffer size before flushing, such as when streaming using Apache Storm or Spark streaming workloads. When writing to Data Lake Storage Gen1 from HDInsight/Hadoop, it is important to know that Data Lake Storage Gen1 has a driver with a 4-MB buffer. Like many file system drivers, this buffer can be manually flushed before reaching the 4-MB size. If not, it is immediately flushed to storage if the next write exceeds the buffer's maximum size. Where possible, avoid overrunning or significantly underrunning the buffer when your sync/flush policy is based on count or time window. --## Resiliency considerations --When architecting a system with Data Lake Storage Gen1 or any cloud service, you must consider your availability requirements and how to respond to potential interruptions in the service. An issue could be localized to the specific instance or even region-wide, so having a plan for both is important. Depending on the **recovery time objective** and the **recovery point objective** SLAs for your workload, you might choose a more or less aggressive strategy for high availability and disaster recovery. --### High availability and disaster recovery --High availability (HA) and disaster recovery (DR) can sometimes be combined, although each has a slightly different strategy, especially when it comes to data. Data Lake Storage Gen1 already handles 3x replication under the hood to guard against localized hardware failures. However, since replication across regions is not built in, you must manage this yourself. When building a plan for HA, in the event of a service interruption the workload needs access to the latest data as quickly as possible by switching over to a separately replicated instance locally or in a new region. --In a DR strategy, to prepare for the unlikely event of a catastrophic failure of a region, it is also important to have data replicated to a different region. This data might initially be the same as the replicated HA data. However, you must also consider your requirements for edge cases such as data corruption, where you may want to create periodic snapshots to fall back to. Depending on the importance and size of the data, consider rolling delta snapshots of 1-, 6-, and 24-hour periods on the local and/or secondary store, according to risk tolerances. --For data resiliency with Data Lake Storage Gen1, it is recommended to geo-replicate your data to a separate region with a frequency that satisfies your HA/DR requirements, ideally every hour. This replication frequency minimizes massive data movements that would compete for throughput with the main system, and provides a better recovery point objective (RPO). Additionally, you should consider ways for the application using Data Lake Storage Gen1 to automatically fail over to the secondary account through monitoring triggers or length of failed attempts, or at least send a notification to admins for manual intervention. Keep in mind that there is a tradeoff between failing over and waiting for the service to come back online. If the data hasn't finished replicating, a failover could cause potential data loss, inconsistency, or complex merging of the data. 
--Below are the top three recommended options for orchestrating replication between Data Lake Storage Gen1 accounts, and key differences between each of them. --| |Distcp |Azure Data Factory |AdlCopy | -||||| -|**Scale limits** | Bounded by worker nodes | Limited by Max Cloud Data Movement units | Bound by Analytics units | -|**Supports copying deltas** | Yes | No | No | -|**Built-in orchestration** | No (use Oozie Airflow or cron jobs) | Yes | No (Use Azure Automation or Windows Task Scheduler) | -|**Supported file systems** | ADL, HDFS, WASB, S3, GS, CFS |Numerous, see [Connectors](../data-factory/connector-azure-blob-storage.md). | ADL to ADL, WASB to ADL (same region only) | -|**OS support** |Any OS running Hadoop | N/A | Windows 10 | --### Use Distcp for data movement between two locations --Short for distributed copy, Distcp is a Linux command-line tool that comes with Hadoop and provides distributed data movement between two locations. The two locations can be Data Lake Storage Gen1, HDFS, WASB, or S3. This tool uses MapReduce jobs on a Hadoop cluster (for example, HDInsight) to scale out on all the nodes. Distcp is considered the fastest way to move big data without special network compression appliances. Distcp also provides an option to only update deltas between two locations, handles automatic retries, as well as dynamic scaling of compute. This approach is incredibly efficient when it comes to replicating things like Hive/Spark tables that can have many large files in a single directory and you only want to copy over the modified data. For these reasons, Distcp is the most recommended tool for copying data between big data stores. --Copy jobs can be triggered by Apache Oozie workflows using frequency or data triggers, as well as Linux cron jobs. For intensive replication jobs, it is recommended to spin up a separate HDInsight Hadoop cluster that can be tuned and scaled specifically for the copy jobs. This ensures that copy jobs do not interfere with critical jobs. If running replication on a wide enough frequency, the cluster can even be taken down between each job. If failing over to secondary region, make sure that another cluster is also spun up in the secondary region to replicate new data back to the primary Data Lake Storage Gen1 account once it comes back up. For examples of using Distcp, see [Use Distcp to copy data between Azure Storage Blobs and Data Lake Storage Gen1](data-lake-store-copy-data-wasb-distcp.md). --### Use Azure Data Factory to schedule copy jobs --Azure Data Factory can also be used to schedule copy jobs using a **Copy Activity**, and can even be set up on a frequency via the **Copy Wizard**. Keep in mind that Azure Data Factory has a limit of cloud data movement units (DMUs), and eventually caps the throughput/compute for large data workloads. Additionally, Azure Data Factory currently does not offer delta updates between Data Lake Storage Gen1 accounts, so folders like Hive tables would require a complete copy to replicate. Refer to the [Copy Activity tuning guide](../data-factory/copy-activity-performance.md) for more information on copying with Data Factory. --### AdlCopy --AdlCopy is a Windows command-line tool that allows you to copy data between two Data Lake Storage Gen1 accounts only within the same region. The AdlCopy tool provides a standalone option or the option to use an Azure Data Lake Analytics account to run your copy job. 
Though it was originally built for on-demand copies as opposed to robust replication, it provides another option to do distributed copying across Data Lake Storage Gen1 accounts within the same region. For reliability, it's recommended to use the premium Data Lake Analytics option for any production workload. The standalone version can return busy responses and has limited scale and monitoring. --Like Distcp, AdlCopy needs to be orchestrated by something like Azure Automation or Windows Task Scheduler. As with Data Factory, AdlCopy does not support copying only updated files, but recopies and overwrites existing files. For more information and examples of using AdlCopy, see [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md). --## Monitoring considerations --Data Lake Storage Gen1 provides detailed diagnostic logs and auditing. Data Lake Storage Gen1 provides some basic metrics in the Azure portal under the Data Lake Storage Gen1 account and in Azure Monitor. Availability of Data Lake Storage Gen1 is displayed in the Azure portal. However, this metric is refreshed every seven minutes and cannot be queried through a publicly exposed API. To get the most up-to-date availability of a Data Lake Storage Gen1 account, you must run your own synthetic tests to validate availability. Other metrics such as total storage utilization, read/write requests, and ingress/egress can take up to 24 hours to refresh. So, more up-to-date metrics must be calculated manually through Hadoop command-line tools or by aggregating log information. The quickest way to get the most recent storage utilization is to run this HDFS command from a Hadoop cluster node (for example, the head node): --```console -hdfs dfs -du -s -h adl://<adlsg1_account_name>.azuredatalakestore.net:443/ -``` --### Export Data Lake Storage Gen1 diagnostics --One of the quickest ways to get access to searchable logs from Data Lake Storage Gen1 is to enable log shipping to **Log Analytics** under the **Diagnostics** blade for the Data Lake Storage Gen1 account. This provides immediate access to incoming logs with time and content filters, along with alerting options (email/webhook) triggered within 15-minute intervals. For instructions, see [Accessing diagnostic logs for Azure Data Lake Storage Gen1](data-lake-store-diagnostic-logs.md). --For more real-time alerting and more control over where to land the logs, consider exporting logs to Azure Event Hubs, where content can be analyzed individually or over a time window in order to submit real-time notifications to a queue. A separate application such as a [Logic App](../connectors/connectors-create-api-azure-event-hubs.md) can then consume and communicate the alerts to the appropriate channel, as well as submit metrics to monitoring tools like NewRelic, Datadog, or AppDynamics. Alternatively, if you are using a third-party tool such as Elasticsearch, you can export the logs to Blob Storage and use the [Azure Logstash plugin](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-input-azureblob) to consume the data into your Elasticsearch, Kibana, and Logstash (ELK) stack. --### Turn on debug-level logging in HDInsight --If Data Lake Storage Gen1 log shipping is not turned on, Azure HDInsight also provides a way to turn on [client-side logging for Data Lake Storage Gen1](data-lake-store-performance-tuning-mapreduce.md) via log4j. 
You must set the following property in **Ambari** > **YARN** > **Config** > **Advanced yarn-log4j configurations**: --`log4j.logger.com.microsoft.azure.datalake.store=DEBUG` --Once the property is set and the nodes are restarted, Data Lake Storage Gen1 diagnostic information is written to the YARN logs on the nodes (/tmp/\<user\>/yarn.log), and important details like errors or throttling (HTTP 429 error code) can be monitored. This same information can also be monitored in Azure Monitor logs or wherever logs are shipped to in the [Diagnostics](data-lake-store-diagnostic-logs.md) blade of the Data Lake Storage Gen1 account. It is recommended to at least have client-side logging turned on or utilize the log shipping option with Data Lake Storage Gen1 for operational visibility and easier debugging. --### Run synthetic transactions --Currently, the service availability metric for Data Lake Storage Gen1 in the Azure portal has a 7-minute refresh window. Also, it cannot be queried using a publicly exposed API. Hence, it is recommended to build a basic application that performs synthetic transactions against Data Lake Storage Gen1 and can provide up-to-the-minute availability. An example might be creating a WebJob, Logic App, or Azure Function App to perform a read, create, and update against Data Lake Storage Gen1 and send the results to your monitoring solution. The operations can be done in a temporary folder and then deleted after the test, which might be run every 30-60 seconds, depending on requirements. --## Directory layout considerations --When landing data into a data lake, it's important to pre-plan the structure of the data so that security, partitioning, and processing can be utilized effectively. Many of the following recommendations can be used whether it's with Azure Data Lake Storage Gen1, Blob Storage, or HDFS. Every workload has different requirements on how the data is consumed, but below are some common layouts to consider when working with IoT and batch scenarios. --### IoT structure --In IoT workloads, there can be a great deal of data being landed in the data store that spans numerous products, devices, organizations, and customers. It's important to pre-plan the directory layout for organization, security, and efficient processing of the data for downstream consumers. A general template to consider might be the following layout: --```console -{Region}/{SubjectMatter(s)}/{yyyy}/{mm}/{dd}/{hh}/ -``` --For example, landing telemetry for an airplane engine within the UK might look like the following structure: --```console -UK/Planes/BA1293/Engine1/2017/08/11/12/ -``` --There's an important reason to put the date at the end of the folder structure. If you want to lock down certain regions or subject matters to users/groups, then you can easily do so with the POSIX permissions. Otherwise, if there was a need to restrict a certain security group to viewing just the UK data or certain planes, with the date structure in front a separate permission would be required for numerous folders under every hour folder. Additionally, having the date structure in front would exponentially increase the number of folders as time went on. --### Batch jobs structure --At a high level, a commonly used approach in batch processing is to land data in an "in" folder. Then, once the data is processed, put the new data into an "out" folder for downstream processes to consume. 
This directory structure is sometimes used for jobs that require processing on individual files and might not require massively parallel processing over large datasets. Like the IoT structure recommended above, a good directory structure has the parent-level folders for things such as region and subject matters (for example, organization, product/producer). This structure helps with securing the data across your organization and better management of the data in your workloads. Furthermore, consider date and time in the structure to allow better organization, filtered searches, security, and automation in the processing. The level of granularity for the date structure is determined by the interval on which the data is uploaded or processed, such as hourly, daily, or even monthly. --Sometimes file processing is unsuccessful due to data corruption or unexpected formats. In such cases, the directory structure might benefit from a **/bad** folder where the files can be moved for further inspection. The batch job might also handle the reporting or notification of these *bad* files for manual intervention. Consider the following template structure: --```console -{Region}/{SubjectMatter(s)}/In/{yyyy}/{mm}/{dd}/{hh}/ -{Region}/{SubjectMatter(s)}/Out/{yyyy}/{mm}/{dd}/{hh}/ -{Region}/{SubjectMatter(s)}/Bad/{yyyy}/{mm}/{dd}/{hh}/ -``` --For example, a marketing firm receives daily data extracts of customer updates from their clients in North America. It might look like the following snippet before and after being processed: --```console -NA/Extracts/ACMEPaperCo/In/2017/08/14/updates_08142017.csv -NA/Extracts/ACMEPaperCo/Out/2017/08/14/processed_updates_08142017.csv -``` --In the common case of batch data being processed directly into databases such as Hive or traditional SQL databases, there isn't a need for an **/in** or **/out** folder since the output already goes into a separate folder for the Hive table or external database. For example, daily extracts from customers would land into their respective folders, and orchestration by something like Azure Data Factory, Apache Oozie, or Apache Airflow would trigger a daily Hive or Spark job to process and write the data into a Hive table. A minimal sketch of creating this kind of layout with HDFS commands appears after the **Next steps** links below. --## Next steps --* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md) -* [Access Control in Azure Data Lake Storage Gen1](data-lake-store-access-control.md) -* [Security in Azure Data Lake Storage Gen1](data-lake-store-security-overview.md) -* [Tuning Azure Data Lake Storage Gen1 for performance](data-lake-store-performance-tuning-guidance.md) -* [Performance tuning guidance for using HDInsight Spark with Azure Data Lake Storage Gen1](data-lake-store-performance-tuning-spark.md) -* [Performance tuning guidance for using HDInsight Hive with Azure Data Lake Storage Gen1](data-lake-store-performance-tuning-hive.md) -* [Create HDInsight clusters with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md) |
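As referenced above, here is a minimal sketch of creating the example batch layout from a Hadoop client that already has the `adl://` connector configured; the account name is a placeholder:

```console
hdfs dfs -mkdir -p adl://<adlsg1_account_name>.azuredatalakestore.net:443/NA/Extracts/ACMEPaperCo/In/2017/08/14/
hdfs dfs -mkdir -p adl://<adlsg1_account_name>.azuredatalakestore.net:443/NA/Extracts/ACMEPaperCo/Out/2017/08/14/
hdfs dfs -mkdir -p adl://<adlsg1_account_name>.azuredatalakestore.net:443/NA/Extracts/ACMEPaperCo/Bad/2017/08/14/
```

In practice, the date portion of the path would typically be generated by the ingestion or orchestration job rather than created by hand.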
data-lake-store | Data Lake Store Comparison With Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-comparison-with-blob-storage.md | - Title: Comparison of Azure Data Lake Storage Gen1 with Blob storage -description: Learn about the differences between Azure Data Lake Storage Gen1 and Azure Blob Storage regarding some key aspects of big data processing. ---- Previously updated : 03/26/2018----# Comparing Azure Data Lake Storage Gen1 and Azure Blob Storage ---The table in this article summarizes the differences between Azure Data Lake Storage Gen1 and Azure Blob Storage along some key aspects of big data processing. Azure Blob Storage is a general purpose, scalable object store that is designed for a wide variety of storage scenarios. Azure Data Lake Storage Gen1 is a hyper-scale repository that is optimized for big data analytics workloads. --| Category | Azure Data Lake Storage Gen1 | Azure Blob Storage | -| -- | - | | -| Purpose |Optimized storage for big data analytics workloads |General purpose object store for a wide variety of storage scenarios, including big data analytics | -| Use Cases |Batch, interactive, streaming analytics and machine learning data such as log files, IoT data, click streams, large datasets |Any type of text or binary data, such as application back end, backup data, media storage for streaming and general purpose data. Additionally, full support for analytics workloads; batch, interactive, streaming analytics and machine learning data such as log files, IoT data, click streams, large datasets | -| Key Concepts |Data Lake Storage Gen1 account contains folders, which in turn contains data stored as files |Storage account has containers, which in turn has data in the form of blobs | -| Structure |Hierarchical file system |Object store with flat namespace | -| API |REST API over HTTPS |REST API over HTTP/HTTPS | -| Server-side API |[WebHDFS-compatible REST API](/rest/api/datalakestore/) |[Azure Blob Storage REST API](/rest/api/storageservices/Blob-Service-REST-API) | -| Hadoop File System Client |Yes |Yes | -| Data Operations - Authentication |Based on [Microsoft Entra identities](../active-directory/develop/authentication-vs-authorization.md) |Based on shared secrets - [Account Access Keys](../storage/common/storage-account-keys-manage.md) and [Shared Access Signature Keys](../storage/common/storage-sas-overview.md). | -| Data Operations - Authentication Protocol |[OpenID Connect](https://openid.net/connect/). Calls must contain a valid JWT (JSON web token) issued by Microsoft Entra ID.|Hash-based Message Authentication Code (HMAC). Calls must contain a Base64-encoded SHA-256 hash over a part of the HTTP request. | -| Data Operations - Authorization |POSIX Access Control Lists (ACLs). ACLs based on Microsoft Entra identities can be set at the file and folder level. |For account-level authorization ΓÇô Use [Account Access Keys](../storage/common/storage-account-keys-manage.md)<br>For account, container, or blob authorization - Use [Shared Access Signature Keys](../storage/common/storage-sas-overview.md) | -| Data Operations - Auditing |Available. See [here](data-lake-store-diagnostic-logs.md) for information. 
|Available | -| Encryption data at rest |<ul><li>Transparent, Server side</li> <ul><li>With service-managed keys</li><li>With customer-managed keys in Azure KeyVault</li></ul></ul> |<ul><li>Transparent, Server side</li> <ul><li>With service-managed keys</li><li>With customer-managed keys in Azure KeyVault (preview)</li></ul><li>Client-side encryption</li></ul> | -| Management operations (for example, Account Create) |[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) for account management |[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) for account management | -| Developer SDKs |.NET, Java, Python, Node.js |.NET, Java, Python, Node.js, C++, Ruby, PHP, Go, Android, iOS | -| Analytics Workload Performance |Optimized performance for parallel analytics workloads. High Throughput and IOPS. |Optimized performance for parallel analytics workloads. | -| Size limits |No limits on account sizes, file sizes, or number of files |For specific limits, see [Scalability targets for standard storage accounts](../storage/common/scalability-targets-standard-account.md) and [Scalability and performance targets for Blob storage](../storage/blobs/scalability-targets.md). Larger account limits available by contacting [Azure Support](https://azure.microsoft.com/support/faq/) | -| Geo-redundancy |Locally redundant (multiple copies of data in one Azure region) |Locally redundant (LRS), zone redundant (ZRS), globally redundant (GRS), read-access globally redundant (RA-GRS). See [here](../storage/common/storage-redundancy.md) for more information | -| Service state |Generally available |Generally available | -| Regional availability |See [here](https://azure.microsoft.com/regions/#services) |Available in all Azure regions | -| Price |See [Pricing](https://azure.microsoft.com/pricing/details/data-lake-store/) |See [Pricing](https://azure.microsoft.com/pricing/details/storage/) | |
data-lake-store | Data Lake Store Compatible Oss Other Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-compatible-oss-other-applications.md | - Title: Big data applications compatible with Data Lake Storage Gen1 | Microsoft Docs -description: List of open source applications that work with Azure Data Lake Storage Gen1 (previously known as Azure Data Lake Store) ---- Previously updated : 06/27/2018----# Open Source Big Data applications that work with Azure Data Lake Storage Gen1 ---This article lists the open source big data applications that work with Azure Data Lake Storage Gen1. For the applications in the table below, only the versions available with the listed distribution are supported. For information on what versions of these applications are available with HDInsight, see [HDInsight component versioning](../hdinsight/hdinsight-component-versioning.md). --| Open Source Software | Distribution | -| | | -| [Apache Sqoop](https://sqoop.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [MapReduce](https://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Storm](https://storm.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Hive](https://hive.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [HCatalog](https://cwiki.apache.org/confluence/display/Hive/HCatalog) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Mahout](https://mahout.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Pig/Pig Latin](https://pig.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Oozie](https://oozie.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Zookeeper](https://zookeeper.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Tez](https://tez.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 | -| [Apache Spark](https://spark.apache.org/) |HDInsight 3.4, 3.5, and 3.6 | ---## See also -* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md) - |
data-lake-store | Data Lake Store Connectivity From Vnets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-connectivity-from-vnets.md | - Title: Connect to Azure Data Lake Storage Gen1 from VNETs | Microsoft Docs -description: Learn how to enable access to Azure Data Lake Storage Gen1 from Azure virtual machines that have restricted access to resources. --- Previously updated : 01/31/2018-----# Access Azure Data Lake Storage Gen1 from VMs within an Azure VNET -Azure Data Lake Storage Gen1 is a PaaS service that runs on public Internet IP addresses. Any server that can connect to the public Internet can typically connect to Azure Data Lake Storage Gen1 endpoints as well. By default, all VMs that are in Azure VNETs can access the Internet and hence can access Azure Data Lake Storage Gen1. However, it is possible to configure VMs in a VNET to not have access to the Internet. For such VMs, access to Azure Data Lake Storage Gen1 is restricted as well. Blocking public Internet access for VMs in Azure VNETs can be done using any of the following approaches: --* By configuring Network Security Groups (NSG) -* By configuring User Defined Routes (UDR) -* By exchanging routes via BGP (industry standard dynamic routing protocol), when ExpressRoute is used, that block access to the Internet --In this article, you will learn how to enable access to Azure Data Lake Storage Gen1 from Azure VMs, which have been restricted to access resources using one of the three methods listed previously. --## Enabling connectivity to Azure Data Lake Storage Gen1 from VMs with restricted connectivity -To access Azure Data Lake Storage Gen1 from such VMs, you must configure them to access the IP address for the region where the Azure Data Lake Storage Gen1 account is available. You can identify the IP addresses for your Data Lake Storage Gen1 account regions by resolving the DNS names of your accounts (`<account>.azuredatalakestore.net`). To resolve DNS names of your accounts, you can use tools such as **nslookup**. Open a command prompt on your computer and run the following command: --```console -nslookup mydatastore.azuredatalakestore.net -``` --The output resembles the following. The value against **Address** property is the IP address associated with your Data Lake Storage Gen1 account. --```output -Non-authoritative answer: -Name: 1434ceb1-3a4b-4bc0-9c69-a0823fd69bba-mydatastore.projectcabostore.net -Address: 104.44.88.112 -Aliases: mydatastore.azuredatalakestore.net -``` ---### Enabling connectivity from VMs restricted by using NSG -When an NSG rule is used to block access to the Internet, then you can create another NSG that allows access to the Data Lake Storage Gen1 IP Address. For more information about NSG rules, see [Network security groups overview](../virtual-network/network-security-groups-overview.md). For instructions on how to create NSGs, see [How to create a network security group](../virtual-network/tutorial-filter-network-traffic.md). --### Enabling connectivity from VMs restricted by using UDR or ExpressRoute -When routes, either UDRs or BGP-exchanged routes, are used to block access to the Internet, a special route needs to be configured so that VMs in such subnets can access Data Lake Storage Gen1 endpoints. For more information, see [User-defined routes overview](../virtual-network/virtual-networks-udr-overview.md). 
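For example, assuming a route table that is already associated with the restricted subnet, a hedged Azure CLI sketch of a route that sends traffic for the resolved address from the example above (104.44.88.112) to the Internet next hop might look like the following; the resource group, route table, and route names are placeholders:

```console
az network route-table route create --resource-group myResourceGroup --route-table-name myRouteTable --name AllowADLSGen1 --address-prefix 104.44.88.112/32 --next-hop-type Internet
```

Because the address is specific to your account's region, resolve it for your own account as shown above before creating the route.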
For instructions on creating UDRs, see [Create UDRs in Resource Manager](../virtual-network/tutorial-create-route-table-powershell.md). --### Enabling connectivity from VMs restricted by using ExpressRoute -When an ExpressRoute circuit is configured, the on-premises servers can access Data Lake Storage Gen1 through public peering. More details on configuring ExpressRoute for public peering are available at [ExpressRoute FAQs](../expressroute/expressroute-faqs.md). --## See also -* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md) -* [Securing data stored in Azure Data Lake Storage Gen1](data-lake-store-security-overview.md) |
data-lake-store | Data Lake Store Copy Data Azure Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-copy-data-azure-storage-blob.md | - Title: Copy data from Azure Storage blobs to Data Lake Storage Gen1 -description: Use AdlCopy tool to copy data from Azure Storage Blobs to Azure Data Lake Storage Gen1 ---- Previously updated : 05/29/2018----# Copy data from Azure Storage Blobs to Azure Data Lake Storage Gen1 --> [!div class="op_single_selector"] -> * [Using DistCp](data-lake-store-copy-data-wasb-distcp.md) -> * [Using AdlCopy](data-lake-store-copy-data-azure-storage-blob.md) -> -> --Data Lake Storage Gen1 provides a command-line tool, AdlCopy, to copy data from the following sources: --* From Azure Storage blobs into Data Lake Storage Gen1. You can't use AdlCopy to copy data from Data Lake Storage Gen1 to Azure Storage blobs. -* Between two Data Lake Storage Gen1 accounts. --Also, you can use the AdlCopy tool in two different modes: --* **Standalone**, where the tool uses Data Lake Storage Gen1 resources to perform the task. -* **Using a Data Lake Analytics account**, where the units assigned to your Data Lake Analytics account are used to perform the copy operation. You might want to use this option when you are looking to perform the copy tasks in a predictable manner. --## Prerequisites --Before you begin this article, you must have the following: --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -* **Azure Storage blobs** container with some data. -* **A Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md) -* **Data Lake Analytics account (optional)** - See [Get started with Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-get-started-portal.md) for instructions on how to create a Data Lake Analytics account. -* **AdlCopy tool**. Install the AdlCopy tool. --## Syntax of the AdlCopy tool --Use the following syntax to work with the AdlCopy tool --```console -AdlCopy /Source <Blob or Data Lake Storage Gen1 source> /Dest <Data Lake Storage Gen1 destination> /SourceKey <Key for Blob account> /Account <Data Lake Analytics account> /Units <Number of Analytics units> /Pattern -``` --The parameters in the syntax are described below: --| Option | Description | -| | | -| Source |Specifies the location of the source data in the Azure storage blob. The source can be a blob container, a blob, or another Data Lake Storage Gen1 account. | -| Dest |Specifies the Data Lake Storage Gen1 destination to copy to. | -| SourceKey |Specifies the storage access key for the Azure storage blob source. This is required only if the source is a blob container or a blob. | -| Account |**Optional**. Use this if you want to use Azure Data Lake Analytics account to run the copy job. If you use the /Account option in the syntax but do not specify a Data Lake Analytics account, AdlCopy uses a default account to run the job. Also, if you use this option, you must add the source (Azure Storage Blob) and destination (Azure Data Lake Storage Gen1) as data sources for your Data Lake Analytics account. | -| Units |Specifies the number of Data Lake Analytics units that will be used for the copy job. This option is mandatory if you use the **/Account** option to specify the Data Lake Analytics account. 
| -| Pattern |Specifies a regex pattern that indicates which blobs or files to copy. AdlCopy uses case-sensitive matching. The default pattern when no pattern is specified is to copy all items. Specifying multiple file patterns is not supported. | --## Use AdlCopy (as standalone) to copy data from an Azure Storage blob --1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`. -1. Run the following command to copy a specific blob from the source container to a Data Lake Storage Gen1 folder: -- ```console - AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/<blob name> /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container> - ``` -- For example: -- ```console - AdlCopy /source https://mystorage.blob.core.windows.net/mycluster/HdiSamples/HdiSamples/WebsiteLogSampleData/SampleLog/909f2b.log /dest swebhdfs://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ== - ``` -- >[!NOTE] - >The syntax above specifies the file to be copied to a folder in the Data Lake Storage Gen1 account. AdlCopy tool creates a folder if the specified folder name does not exist. -- You will be prompted to enter the credentials for the Azure subscription under which you have your Data Lake Storage Gen1 account. You will see an output similar to the following: -- ```output - Initializing Copy. - Copy Started. - 100% data copied. - Finishing Copy. - Copy Completed. 1 file copied. - ``` --1. You can also copy all the blobs from one container to the Data Lake Storage Gen1 account using the following command: -- ```console - AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/ /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container> - ``` -- For example: -- ```console - AdlCopy /Source https://mystorage.blob.core.windows.net/mycluster/example/data/gutenberg/ /dest adl://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ== - ``` --### Performance considerations --If you are copying from an Azure Blob Storage account, you may be throttled during copy on the blob storage side. This will degrade the performance of your copy job. To learn more about the limits of Azure Blob Storage, see Azure Storage limits at [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md). --## Use AdlCopy (as standalone) to copy data from another Data Lake Storage Gen1 account --You can also use AdlCopy to copy data between two Data Lake Storage Gen1 accounts. --1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`. -1. Run the following command to copy a specific file from one Data Lake Storage Gen1 account to another. 
-- ```console - AdlCopy /Source adl://<source_adlsg1_account>.azuredatalakestore.net/<path_to_file> /dest adl://<dest_adlsg1_account>.azuredatalakestore.net/<path>/ - ``` -- For example: -- ```console - AdlCopy /Source adl://mydatastorage.azuredatalakestore.net/mynewfolder/909f2b.log /dest adl://mynewdatalakestorage.azuredatalakestore.net/mynewfolder/ - ``` -- > [!NOTE] - > The syntax above specifies the file to be copied to a folder in the destination Data Lake Storage Gen1 account. AdlCopy tool creates a folder if the specified folder name does not exist. - > - > -- You will be prompted to enter the credentials for the Azure subscription under which you have your Data Lake Storage Gen1 account. You will see an output similar to the following: -- ```output - Initializing Copy. - Copy Started.| - 100% data copied. - Finishing Copy. - Copy Completed. 1 file copied. - ``` -1. The following command copies all files from a specific folder in the source Data Lake Storage Gen1 account to a folder in the destination Data Lake Storage Gen1 account. -- ```console - AdlCopy /Source adl://mydatastorage.azuredatalakestore.net/mynewfolder/ /dest adl://mynewdatalakestorage.azuredatalakestore.net/mynewfolder/ - ``` --### Performance considerations --When using AdlCopy as a standalone tool, the copy is run on shared, Azure-managed resources. The performance you may get in this environment depends on system load and available resources. This mode is best used for small transfers on an ad hoc basis. No parameters need to be tuned when using AdlCopy as a standalone tool. --## Use AdlCopy (with Data Lake Analytics account) to copy data --You can also use your Data Lake Analytics account to run the AdlCopy job to copy data from Azure storage blobs to Data Lake Storage Gen1. You would typically use this option when the data to be moved is in the range of gigabytes and terabytes, and you want better and predictable performance throughput. --To use your Data Lake Analytics account with AdlCopy to copy from an Azure Storage Blob, the source (Azure Storage Blob) must be added as a data source for your Data Lake Analytics account. For instructions on adding additional data sources to your Data Lake Analytics account, see [Manage Data Lake Analytics account data sources](../data-lake-analytics/data-lake-analytics-manage-use-portal.md#manage-data-sources). --> [!NOTE] -> If you are copying from an Azure Data Lake Storage Gen1 account as the source using a Data Lake Analytics account, you do not need to associate the Data Lake Storage Gen1 account with the Data Lake Analytics account. The requirement to associate the source store with the Data Lake Analytics account is only when the source is an Azure Storage account. 
-> -> --Run the following command to copy from an Azure Storage blob to a Data Lake Storage Gen1 account using Data Lake Analytics account: --```console -AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/<blob name> /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container> /Account <data_lake_analytics_account> /Units <number_of_data_lake_analytics_units_to_be_used> -``` --For example: --```console -AdlCopy /Source https://mystorage.blob.core.windows.net/mycluster/example/data/gutenberg/ /dest swebhdfs://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ== /Account mydatalakeanalyticaccount /Units 2 -``` --Similarly, run the following command to copy all files from a specific folder in the source Data Lake Storage Gen1 account to a folder in the destination Data Lake Storage Gen1 account using Data Lake Analytics account: --```console -AdlCopy /Source adl://mysourcedatalakestorage.azuredatalakestore.net/mynewfolder/ /dest adl://mydestdatastorage.azuredatalakestore.net/mynewfolder/ /Account mydatalakeanalyticaccount /Units 2 -``` --### Performance considerations --When copying data in the range of terabytes, using AdlCopy with your own Azure Data Lake Analytics account provides better and more predictable performance. The parameter that should be tuned is the number of Azure Data Lake Analytics Units to use for the copy job. Increasing the number of units will increase the performance of your copy job. Each file to be copied can use maximum one unit. Specifying more units than the number of files being copied will not increase performance. --## Use AdlCopy to copy data using pattern matching --In this section, you learn how to use AdlCopy to copy data from a source (in our example below we use Azure Storage Blob) to a destination Data Lake Storage Gen1 account using pattern matching. For example, you can use the steps below to copy all files with .csv extension from the source blob to the destination. --1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`. -1. Run the following command to copy all files with *.csv extension from a specific blob from the source container to a Data Lake Storage Gen1 folder: -- ```console - AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/<blob name> /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container> /Pattern *.csv - ``` -- For example: -- ```console - AdlCopy /source https://mystorage.blob.core.windows.net/mycluster/HdiSamples/HdiSamples/FoodInspectionData/ /dest adl://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ== /Pattern *.csv - ``` --## Billing --* If you use the AdlCopy tool as standalone you will be billed for egress costs for moving data, if the source Azure Storage account is not in the same region as the Data Lake Storage Gen1 account. -* If you use the AdlCopy tool with your Data Lake Analytics account, standard [Data Lake Analytics billing rates](https://azure.microsoft.com/pricing/details/data-lake-analytics/) will apply. 
--## Considerations for using AdlCopy --* AdlCopy (as of version 1.0.5) supports copying data from sources that collectively contain thousands of files and folders. However, if you encounter issues copying a large dataset, you can distribute the files/folders into different subfolders and use the path to those subfolders as the source instead. --## Performance considerations for using AdlCopy --AdlCopy supports copying data containing thousands of files and folders. However, if you encounter issues copying a large dataset, you can distribute the files/folders into smaller subfolders. AdlCopy was built for ad hoc copies. If you are trying to copy data on a recurring basis, you should consider using [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md), which provides full management around the copy operations. --## Release notes --* 1.0.13 - If you are copying data to the same Azure Data Lake Storage Gen1 account across multiple AdlCopy commands, you no longer need to reenter your credentials for each run. AdlCopy now caches that information across multiple runs. --## Next steps --* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) -* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md) -* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md) |
data-lake-store | Data Lake Store Copy Data Wasb Distcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-copy-data-wasb-distcp.md | - Title: Copy data between WASB and Azure Data Lake Storage Gen1 using DistCp -description: Use the DistCp tool to copy data between Azure Storage blobs and Azure Data Lake Storage Gen1 ---- Previously updated : 01/03/2020-----# Use DistCp to copy data between Azure Storage blobs and Azure Data Lake Storage Gen1 --> [!div class="op_single_selector"] -> * [Using DistCp](data-lake-store-copy-data-wasb-distcp.md) -> * [Using AdlCopy](data-lake-store-copy-data-azure-storage-blob.md) -> -> --If you have an HDInsight cluster with access to Azure Data Lake Storage Gen1, you can use Hadoop ecosystem tools like DistCp to copy data between HDInsight cluster storage (WASB) and a Data Lake Storage Gen1 account. This article shows how to use the DistCp tool. --## Prerequisites --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md). -* **Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). Make sure you enable Remote Desktop for the cluster. --## Use DistCp from an HDInsight Linux cluster --An HDInsight cluster comes with the DistCp tool, which can be used to copy data from different sources into an HDInsight cluster. If you've configured the HDInsight cluster to use Data Lake Storage Gen1 as additional storage, you can use DistCp out-of-the-box to copy data to and from a Data Lake Storage Gen1 account. In this section, we look at how to use the DistCp tool. --1. From your desktop, use SSH to connect to the cluster. See [Connect to a Linux-based HDInsight cluster](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md). Run the commands from the SSH prompt. --1. Verify whether you can access the Azure Storage blobs (WASB). Run the following command: -- ``` - hdfs dfs -ls wasb://<container_name>@<storage_account_name>.blob.core.windows.net/ - ``` -- The output provides a list of contents in the storage blob. --1. Similarly, verify whether you can access the Data Lake Storage Gen1 account from the cluster. Run the following command: -- ``` - hdfs dfs -ls adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/ - ``` -- The output provides a list of files and folders in the Data Lake Storage Gen1 account. --1. Use DistCp to copy data from WASB to a Data Lake Storage Gen1 account. -- ``` - hadoop distcp wasb://<container_name>@<storage_account_name>.blob.core.windows.net/example/data/gutenberg adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/myfolder - ``` -- The command copies the contents of the **/example/data/gutenberg/** folder in WASB to **/myfolder** in the Data Lake Storage Gen1 account. --1. Similarly, use DistCp to copy data from a Data Lake Storage Gen1 account to WASB. -- ``` - hadoop distcp adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/myfolder wasb://<container_name>@<storage_account_name>.blob.core.windows.net/example/data/gutenberg - ``` -- The command copies the contents of **/myfolder** in the Data Lake Storage Gen1 account to the **/example/data/gutenberg/** folder in WASB. 
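If you rerun a copy like this on a schedule, DistCp's `-update` option copies only the files that are missing or different at the destination. A hedged sketch reusing the placeholders above (the `-m` option is discussed in the performance section that follows):

```console
hadoop distcp -update -m 50 wasb://<container_name>@<storage_account_name>.blob.core.windows.net/example/data/gutenberg adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/myfolder
```
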
--## Performance considerations while using DistCp --Because the DistCp tool's lowest granularity is a single file, the maximum number of simultaneous copies is the most important parameter for optimizing it against Data Lake Storage Gen1. You can control the number of simultaneous copies by setting the number of mappers (**m**) parameter on the command line. This parameter specifies the maximum number of mappers that are used to copy data. The default value is 20. --Example: --``` - hadoop distcp wasb://<container_name>@<storage_account_name>.blob.core.windows.net/example/data/gutenberg adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/myfolder -m 100 -``` --### How to determine the number of mappers to use --Here's some guidance that you can use. --* **Step 1: Determine total YARN memory** - The first step is to determine the YARN memory available to the cluster where you run the DistCp job. This information is available in the Ambari portal associated with the cluster. Navigate to YARN and view the **Configs** tab to see the YARN memory. To get the total YARN memory, multiply the YARN memory per node by the number of nodes you have in your cluster. --* **Step 2: Calculate the number of mappers** - The value of **m** is equal to the quotient of total YARN memory divided by the YARN container size. The YARN container size information is also available in the Ambari portal. Navigate to YARN and view the **Configs** tab. The YARN container size is displayed in this window. The equation to arrive at the number of mappers (**m**) is: -- `m = (number of nodes * YARN memory for each node) / YARN container size` --Example: --Let's assume that you have four D14v2 nodes in the cluster and you want to transfer 10 TB of data from 10 different folders. Each of the folders contains varying amounts of data and the file sizes within each folder are different. --* Total YARN memory - From the Ambari portal you determine that the YARN memory is 96 GB for a D14 node. So, the total YARN memory for a four-node cluster is: -- `YARN memory = 4 * 96GB = 384GB` --* Number of mappers - From the Ambari portal you determine that the YARN container size is 3072 MB for a D14 cluster node. So, the number of mappers is: -- `m = (4 nodes * 96GB) / 3072MB = 128 mappers` --If other applications are using memory, you can choose to only use a portion of your cluster's YARN memory for DistCp. (A short sketch of this calculation appears at the end of this section.) --### Copying large datasets --When the size of the dataset to be moved is large (for example, > 1 TB) or if you have many different folders, consider using multiple DistCp jobs. There's likely no performance gain, but it spreads out the jobs so that if any job fails, you need to only restart that specific job instead of the entire job. --### Limitations --* DistCp tries to create mappers that are similar in size to optimize performance. Increasing the number of mappers may not always increase performance. --* DistCp is limited to only one mapper per file. Therefore, you shouldn't have more mappers than you have files. Because DistCp can assign only one mapper to a file, this limits the amount of concurrency that can be used to copy large files. --* If you have a small number of large files, split them into 256-MB file chunks to give you more potential concurrency. --* If you're copying from an Azure Blob storage account, your copy job may be throttled on the Blob storage side. This degrades the performance of your copy job. To learn more about the limits of Azure Blob storage, see Azure Storage limits at [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
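The mapper calculation is straightforward arithmetic. Here's a minimal sketch of it, using the example figures from this section (four D14v2 nodes, 96 GB of YARN memory per node, and a 3,072-MB YARN container size); read the actual values for your cluster from the Ambari portal.

```python
def distcp_mapper_count(nodes: int, yarn_memory_per_node_mb: int, yarn_container_size_mb: int) -> int:
    """Return the -m value: total YARN memory divided by the YARN container size."""
    total_yarn_memory_mb = nodes * yarn_memory_per_node_mb
    return total_yarn_memory_mb // yarn_container_size_mb


# Example from this section: 4 nodes * 96 GB per node = 384 GB of YARN memory,
# divided by a 3,072 MB container size = 128 mappers.
print(distcp_mapper_count(nodes=4, yarn_memory_per_node_mb=96 * 1024, yarn_container_size_mb=3072))  # 128
```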
--## See also --* [Copy data from Azure Storage blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md) -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) -* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md) -* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md) |
data-lake-store | Data Lake Store Data Operations Net Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-operations-net-sdk.md | - Title: .NET SDK - Filesystem operations on Data Lake Storage Gen1 - Azure -description: Use the Azure Data Lake Storage Gen1 .NET SDK for filesystem operations on Data Lake Storage Gen1 such as creating folders. ---- Previously updated : 01/03/2020-----# Filesystem operations on Data Lake Storage Gen1 using the .NET SDK --> [!div class="op_single_selector"] -> * [.NET SDK](data-lake-store-data-operations-net-sdk.md) -> * [Java SDK](data-lake-store-get-started-java-sdk.md) -> * [REST API](data-lake-store-data-operations-rest-api.md) -> * [Python](data-lake-store-data-operations-python.md) -> -> --In this article, you learn how to perform filesystem operations on Data Lake Storage Gen1 using the .NET SDK. Filesystem operations include creating folders in a Data Lake Storage Gen1 account, uploading files, downloading files, etc. --For instructions on how to do account management operations on Data Lake Storage Gen1 using the .NET SDK, see [Account management operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md). --## Prerequisites --* **Visual Studio 2013 or later**. The instructions in this article use Visual Studio 2019. --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Azure Data Lake Storage Gen1 account**. For instructions on how to create an account, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md). --## Create a .NET application --The code sample available [on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted) walks you through the process of creating files in the store, concatenating files, downloading a file, and deleting some files in the store. This section of the article walks you through the main parts of the code. --1. In Visual Studio, select the **File** menu, **New**, and then **Project**. -1. Choose **Console App (.NET Framework)**, and then select **Next**. -1. In **Project name**, enter `CreateADLApplication`, and then select **Create**. -1. Add the NuGet packages to your project. -- 1. Right-click the project name in the Solution Explorer and click **Manage NuGet Packages**. - 1. In the **NuGet Package Manager** tab, make sure that **Package source** is set to **nuget.org**. Also, make sure the **Include prerelease** check box is selected. - 1. Search for and install the following NuGet packages: -- * `Microsoft.Azure.DataLake.Store` - This article uses v1.0.0. - * `Microsoft.Rest.ClientRuntime.Azure.Authentication` - This article uses v2.3.1. -- Close the **NuGet Package Manager**. --1. Open **Program.cs**, delete the existing code, and then include the following statements to add references to namespaces. -- ``` - using System; - using System.IO; - using System.Threading; - using System.Linq; - using System.Text; - using System.Collections.Generic; - using System.Security.Cryptography.X509Certificates; // Required only if you're using an Azure AD application created with certificates - - using Microsoft.Rest; - using Microsoft.Rest.Azure.Authentication; - using Microsoft.Azure.DataLake.Store; - using Microsoft.IdentityModel.Clients.ActiveDirectory; - ``` --1. Declare the variables as shown below, and provide the values for the placeholders.
Also, make sure the local path and file name you provide here exist on the computer. -- ``` - namespace SdkSample - { - class Program - { - private static string _adlsg1AccountName = "<DATA-LAKE-STORAGE-GEN1-NAME>.azuredatalakestore.net"; - } - } - ``` --In the remaining sections of the article, you can see how to use the available .NET methods to do operations such as authentication, file upload, etc. --## Authentication --* For end-user authentication for your application, see [End-user authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md). -* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md). --## Create client object --The following snippet creates the Data Lake Storage Gen1 filesystem client object, which is used to issue requests to the service. --``` -// Create client objects -AdlsClient client = AdlsClient.CreateClient(_adlsg1AccountName, adlCreds); -``` --## Create a file and directory --Add the following snippet to your application. This snippet adds a file and any parent directory that does not exist. --``` -// Create a file - automatically creates any parent directories that don't exist -// The AdlsOutputStream preserves record boundaries - it does not break records while writing to the store --using (var stream = client.CreateFile(fileName, IfExists.Overwrite)) -{ - byte[] textByteArray = Encoding.UTF8.GetBytes("This is test data to write.\r\n"); - stream.Write(textByteArray, 0, textByteArray.Length); -- textByteArray = Encoding.UTF8.GetBytes("This is the second line.\r\n"); - stream.Write(textByteArray, 0, textByteArray.Length); -} -``` --## Append to a file --The following snippet appends data to an existing file in Data Lake Storage Gen1 account. --``` -// Append to existing file --using (var stream = client.GetAppendStream(fileName)) -{ - byte[] textByteArray = Encoding.UTF8.GetBytes("This is the added line.\r\n"); - stream.Write(textByteArray, 0, textByteArray.Length); -} -``` --## Read a file --The following snippet reads the contents of a file in Data Lake Storage Gen1. --``` -//Read file contents --using (var readStream = new StreamReader(client.GetReadStream(fileName))) -{ - string line; - while ((line = readStream.ReadLine()) != null) - { - Console.WriteLine(line); - } -} -``` --## Get file properties --The following snippet returns the properties associated with a file or a directory. --``` -// Get file properties -var directoryEntry = client.GetDirectoryEntry(fileName); -PrintDirectoryEntry(directoryEntry); -``` --The definition of the `PrintDirectoryEntry` method is available as part of the sample [on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted). --## Rename a file --The following snippet renames an existing file in a Data Lake Storage Gen1 account. --``` -// Rename a file -string destFilePath = "/Test/testRenameDest3.txt"; -client.Rename(fileName, destFilePath, true); -``` --## Enumerate a directory --The following snippet enumerates directories in a Data Lake Storage Gen1 account. 
--``` -// Enumerate directory -foreach (var entry in client.EnumerateDirectory("/Test")) -{ - PrintDirectoryEntry(entry); -} -``` --The definition of the `PrintDirectoryEntry` method is available as part of the sample [on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted). --## Delete directories recursively --The following snippet deletes a directory, and all its subdirectories, recursively. --``` -// Delete a directory and all its subdirectories and files -client.DeleteRecursive("/Test"); -``` --## Samples --Here are a few samples that show how to use the Data Lake Storage Gen1 Filesystem SDK. --* [Basic sample on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted) -* [Advanced sample on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-samples) --## See also --* [Account management operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md) -* [Data Lake Storage Gen1 .NET SDK Reference](/dotnet/api/overview/azure/data-lake-store) --## Next steps --* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) |
data-lake-store | Data Lake Store Data Operations Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-operations-python.md | - Title: 'Python: Filesystem operations on Azure Data Lake Storage Gen1 | Microsoft Docs' -description: Learn how to use the Python SDK to work with the Data Lake Storage Gen1 file system. ---- Previously updated : 05/29/2018------# Filesystem operations on Azure Data Lake Storage Gen1 using Python -> [!div class="op_single_selector"] -> * [.NET SDK](data-lake-store-data-operations-net-sdk.md) -> * [Java SDK](data-lake-store-get-started-java-sdk.md) -> * [REST API](data-lake-store-data-operations-rest-api.md) -> * [Python](data-lake-store-data-operations-python.md) -> -> --In this article, you learn how to use the Python SDK to perform filesystem operations on Azure Data Lake Storage Gen1. For instructions on how to perform account management operations on Data Lake Storage Gen1 using Python, see [Account management operations on Data Lake Storage Gen1 using Python](data-lake-store-get-started-python.md). --## Prerequisites --* **Python**. You can download Python from [here](https://www.python.org/downloads/). This article uses Python 3.6.2. --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Azure Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). --## Install the modules --To work with Data Lake Storage Gen1 using Python, you need to install three modules. --* The `azure-mgmt-resource` module, which includes Azure modules for Active Directory, etc. -* The `azure-mgmt-datalake-store` module, which includes the Azure Data Lake Storage Gen1 account management operations. For more information on this module, see the [azure-mgmt-datalake-store module reference](/python/api/azure-mgmt-datalake-store/). -* The `azure-datalake-store` module, which includes the Azure Data Lake Storage Gen1 filesystem operations. For more information on this module, see the [azure-datalake-store file-system module reference](/python/api/azure-datalake-store/azure.datalake.store.core/). --Use the following commands to install the modules. --```console -pip install azure-mgmt-resource -pip install azure-mgmt-datalake-store -pip install azure-datalake-store -``` --## Create a new Python application --1. In the IDE of your choice, create a new Python application, for example, **mysample.py**. --2.
Add the following lines to import the required modules. -- ```python - ## Use this only for Azure AD service-to-service authentication - from azure.common.credentials import ServicePrincipalCredentials -- ## Use this only for Azure AD end-user authentication - from azure.common.credentials import UserPassCredentials -- ## Use this only for Azure AD multi-factor authentication - from msrestazure.azure_active_directory import AADTokenCredentials -- ## Required for Azure Data Lake Storage Gen1 account management - from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient - from azure.mgmt.datalake.store.models import DataLakeStoreAccount -- ## Required for Azure Data Lake Storage Gen1 filesystem management - from azure.datalake.store import core, lib, multithread -- ## Common Azure imports - from azure.mgmt.resource.resources import ResourceManagementClient - from azure.mgmt.resource.resources.models import ResourceGroup -- ## Use these as needed for your application - import logging, getpass, pprint, uuid, time - ``` --3. Save changes to mysample.py. --## Authentication --In this section, we talk about the different ways to authenticate with Microsoft Entra ID. The options available are: --* For end-user authentication for your application, see [End-user authentication with Data Lake Storage Gen1 using Python](data-lake-store-end-user-authenticate-python.md). -* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using Python](data-lake-store-service-to-service-authenticate-python.md). --## Create filesystem client --The following snippet declares the required variables and then creates a Data Lake Storage Gen1 filesystem client object. The filesystem client uses the credentials object (`adlCreds`) that you created during authentication. --```python -## Declare variables -subscriptionId = 'FILL-IN-HERE' -adlsAccountName = 'FILL-IN-HERE' --## Create a filesystem client object -adlsFileSystemClient = core.AzureDLFileSystem(adlCreds, store_name=adlsAccountName) -``` --## Create a directory --```python -## Create a directory -adlsFileSystemClient.mkdir('/mysampledirectory') -``` --## Upload a file --```python -## Upload a file -multithread.ADLUploader(adlsFileSystemClient, lpath='C:\\data\\mysamplefile.txt', rpath='/mysampledirectory/mysamplefile.txt', nthreads=64, overwrite=True, buffersize=4194304, blocksize=4194304) -``` ---## Download a file --```python -## Download a file -multithread.ADLDownloader(adlsFileSystemClient, lpath='C:\\data\\mysamplefile.txt.out', rpath='/mysampledirectory/mysamplefile.txt', nthreads=64, overwrite=True, buffersize=4194304, blocksize=4194304) -``` --## Delete a directory --```python -## Delete a directory -adlsFileSystemClient.rm('/mysampledirectory', recursive=True) -``` --## Next steps -* [Account management operations on Data Lake Storage Gen1 using Python](data-lake-store-get-started-python.md). --## See also --* [Azure Data Lake Storage Gen1 Python (Filesystem) Reference](/python/api/azure-datalake-store/azure.datalake.store.core) -* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md) |
data-lake-store | Data Lake Store Data Operations Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-operations-rest-api.md | - Title: 'REST API: Filesystem operations on Azure Data Lake Storage Gen1 | Microsoft Docs' -description: Use WebHDFS REST APIs to perform filesystem operations on Azure Data Lake Storage Gen1 ---- Previously updated : 05/29/2018----# Filesystem operations on Azure Data Lake Storage Gen1 using REST API -> [!div class="op_single_selector"] -> * [.NET SDK](data-lake-store-data-operations-net-sdk.md) -> * [Java SDK](data-lake-store-get-started-java-sdk.md) -> * [REST API](data-lake-store-data-operations-rest-api.md) -> * [Python](data-lake-store-data-operations-python.md) -> -> --In this article, you learn how to use WebHDFS REST APIs and Data Lake Storage Gen1 REST APIs to perform filesystem operations on Azure Data Lake Storage Gen1. For instructions on how to perform account management operations on Data Lake Storage Gen1 using REST API, see [Account management operations on Data Lake Storage Gen1 using REST API](data-lake-store-get-started-rest-api.md). --## Prerequisites -* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Azure Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). --* **[cURL](https://curl.haxx.se/)**. This article uses cURL to demonstrate how to make REST API calls against a Data Lake Storage Gen1 account. --<a name='how-do-i-authenticate-using-azure-active-directory'></a> --## How do I authenticate using Microsoft Entra ID? -You can use two approaches to authenticate using Microsoft Entra ID. --* For end-user authentication for your application (interactive), see [End-user authentication with Data Lake Storage Gen1 using REST API](data-lake-store-end-user-authenticate-rest-api.md). -* For service-to-service authentication for your application (non-interactive), see [Service-to-service authentication with Data Lake Storage Gen1 using REST API](data-lake-store-service-to-service-authenticate-rest-api.md). ---## Create folders -This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Make_a_Directory). --Use the following cURL command. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name. --```console -curl -i -X PUT -H "Authorization: Bearer <REDACTED>" -d "" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/?op=MKDIRS' -``` --In the preceding command, replace \<`REDACTED`\> with the authorization token you retrieved earlier. This command creates a directory called **mytempdir** under the root folder of your Data Lake Storage Gen1 account. --If the operation completes successfully, you should see a response like the following snippet: --```output -{"boolean":true} -``` --## List folders -This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#List_a_Directory). --Use the following cURL command. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
--```console -curl -i -X GET -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS' -``` --In the preceding command, replace \<`REDACTED`\> with the authorization token you retrieved earlier. --If the operation completes successfully, you should see a response like the following snippet: --```output -{ -"FileStatuses": { - "FileStatus": [{ - "length": 0, - "pathSuffix": "mytempdir", - "type": "DIRECTORY", - "blockSize": 268435456, - "accessTime": 1458324719512, - "modificationTime": 1458324719512, - "replication": 0, - "permission": "777", - "owner": "<GUID>", - "group": "<GUID>" - }] -} -} -``` --## Upload data -This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File). --Use the following cURL command. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name. --```console -curl -i -X PUT -L -T 'C:\temp\list.txt' -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/list.txt?op=CREATE' -``` --In the preceding syntax, the **-T** parameter specifies the location of the file that you're uploading. --The output is similar to the following snippet: - -```output -HTTP/1.1 307 Temporary Redirect -... -Location: https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/list.txt?op=CREATE&write=true -... -Content-Length: 0 --HTTP/1.1 100 Continue --HTTP/1.1 201 Created -... -``` --## Read data -This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File). --Reading data from a Data Lake Storage Gen1 account is a two-step process. --* You first submit a GET request against the endpoint `https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=OPEN`. This call returns a location to submit the next GET request to. -* You then submit the GET request against the endpoint `https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=OPEN&read=true`. This call displays the contents of the file. --However, because there is no difference in the input parameters between the first and the second step, you can use the `-L` parameter to submit the first request. The `-L` option essentially combines the two requests into one and makes cURL redo the request at the new location. Finally, the output from all the request calls is displayed, as shown in the following snippet. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name. --```console -curl -i -L -X GET -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=OPEN' -``` --You should see an output similar to the following snippet: --```output -HTTP/1.1 307 Temporary Redirect -... -Location: https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/somerandomfile.txt?op=OPEN&read=true -... --HTTP/1.1 200 OK -... --Hello, Data Lake Store user! -``` --## Rename a file -This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Rename_a_FileDirectory). --Use the following cURL command to rename a file. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
--```console -curl -i -X PUT -H "Authorization: Bearer <REDACTED>" -d "" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=RENAME&destination=/mytempdir/myinputfile1.txt' -``` --You should see an output similar to the following snippet: --```output -HTTP/1.1 200 OK -... --{"boolean":true} -``` --## Delete a file -This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Delete_a_FileDirectory). --Use the following cURL command to delete a file. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name. --```console -curl -i -X DELETE -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile1.txt?op=DELETE' -``` --You should see an output like the following: --```output -HTTP/1.1 200 OK -... --{"boolean":true} -``` --## Next steps -* [Account management operations on Data Lake Storage Gen1 using REST API](data-lake-store-get-started-rest-api.md). --## See also -* [Azure Data Lake Storage Gen1 REST API Reference](/rest/api/datalakestore/) -* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md) |
data-lake-store | Data Lake Store Data Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-scenarios.md | - Title: Data scenarios involving Data Lake Storage Gen1 | Microsoft Docs -description: Understand the different scenarios and tools that you can use to ingest, process, download, and visualize data in Data Lake Storage Gen1 (previously known as Azure Data Lake Store) ---- Previously updated : 08/05/2022----# Using Azure Data Lake Storage Gen1 for big data requirements ---There are four key stages in big data processing: --* Ingesting large amounts of data into a data store, in real time or in batches -* Processing the data -* Downloading the data -* Visualizing the data --In this article, we look at these stages with respect to Azure Data Lake Storage Gen1 to understand the options and tools available to meet your big data needs. --## Ingest data into Data Lake Storage Gen1 -This section highlights the different sources of data and the different ways in which that data can be ingested into a Data Lake Storage Gen1 account. --![Ingest data into Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/ingest-data.png "Ingest data into Data Lake Storage Gen1") --### Ad hoc data -This represents smaller data sets that are used for prototyping a big data application. There are different ways of ingesting ad hoc data depending on the source of the data. --| Data Source | Ingest it using | -| | | -| Local computer |<ul> <li>[Azure portal](data-lake-store-get-started-portal.md)</li> <li>[Azure PowerShell](data-lake-store-get-started-powershell.md)</li> <li>[Azure CLI](data-lake-store-get-started-cli-2.0.md)</li> <li>[Using Data Lake Tools for Visual Studio](../data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md) </li></ul> | -| Azure Storage Blob |<ul> <li>[Azure Data Factory](../data-factory/connector-azure-data-lake-store.md)</li> <li>[AdlCopy tool](data-lake-store-copy-data-azure-storage-blob.md)</li><li>[DistCp running on HDInsight cluster](data-lake-store-copy-data-wasb-distcp.md)</li> </ul> | --### Streamed data -This represents data that can be generated by various sources such as applications, devices, sensors, etc. This data can be ingested into Data Lake Storage Gen1 by a variety of tools. These tools will usually capture and process the data on an event-by-event basis in real time, and then write the events in batches into Data Lake Storage Gen1 so that they can be further processed. --Following are tools that you can use: --* [Azure Stream Analytics](../stream-analytics/stream-analytics-define-outputs.md) - Events ingested into Event Hubs can be written to Azure Data Lake Storage Gen1 using an Azure Data Lake Storage Gen1 output. -* [EventProcessorHost](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) - You can receive events from Event Hubs and then write them to Data Lake Storage Gen1 using the [Data Lake Storage Gen1 .NET SDK](data-lake-store-get-started-net-sdk.md). --### Relational data -You can also source data from relational databases. Over a period of time, relational databases collect huge amounts of data that can provide key insights if processed through a big data pipeline. You can use the following tools to move such data into Data Lake Storage Gen1.
--* [Apache Sqoop](data-lake-store-data-transfer-sql-sqoop.md) -* [Azure Data Factory](../data-factory/copy-activity-overview.md) --### Web server log data (upload using custom applications) -This type of dataset is specifically called out because analysis of web server log data is a common use case for big data applications and requires large volumes of log files to be uploaded to Data Lake Storage Gen1. You can use any of the following tools to write your own scripts or applications to upload such data. --* [Azure CLI](data-lake-store-get-started-cli-2.0.md) -* [Azure PowerShell](data-lake-store-get-started-powershell.md) -* [Azure Data Lake Storage Gen1 .NET SDK](data-lake-store-get-started-net-sdk.md) -* [Azure Data Factory](../data-factory/copy-activity-overview.md) --For uploading web server log data, and also for uploading other kinds of data (for example, social sentiment data), it is a good approach to write your own custom scripts/applications because it gives you the flexibility to include your data uploading component as part of your larger big data application. In some cases, this code may take the form of a script or simple command line utility. In other cases, the code may be used to integrate big data processing into a business application or solution. --### Data associated with Azure HDInsight clusters -Most HDInsight cluster types (Hadoop, HBase, Storm) support Data Lake Storage Gen1 as a data storage repository. HDInsight clusters access data from Azure Storage Blobs (WASB). For better performance, you can copy the data from WASB into a Data Lake Storage Gen1 account associated with the cluster. You can use the following tools to copy the data. --* [Apache DistCp](data-lake-store-copy-data-wasb-distcp.md) -* [AdlCopy Service](data-lake-store-copy-data-azure-storage-blob.md) -* [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md) --### Data stored in on-premises or IaaS Hadoop clusters -Large amounts of data may be stored in existing Hadoop clusters, locally on machines using HDFS. The Hadoop clusters may be in an on-premises deployment or may be within an IaaS cluster on Azure. You might need to copy such data to Azure Data Lake Storage Gen1, either as a one-time operation or on a recurring basis. There are various options that you can use to achieve this. Below is a list of alternatives and the associated trade-offs. --| Approach | Details | Advantages | Considerations | -| | | | | -| Use Azure Data Factory (ADF) to copy data directly from Hadoop clusters to Azure Data Lake Storage Gen1 |[ADF supports HDFS as a data source](../data-factory/connector-hdfs.md) |ADF provides out-of-the-box support for HDFS and first class end-to-end management and monitoring |Requires Data Management Gateway to be deployed on-premises or in the IaaS cluster | -| Export data from Hadoop as files. Then copy the files to Azure Data Lake Storage Gen1 using an appropriate mechanism. |You can copy files to Azure Data Lake Storage Gen1 using: <ul><li>[Azure PowerShell for Windows OS](data-lake-store-get-started-powershell.md)</li><li>[Azure CLI](data-lake-store-get-started-cli-2.0.md)</li><li>Custom app using any Data Lake Storage Gen1 SDK</li></ul> |Quick to get started. Can do customized uploads |Multi-step process that involves multiple technologies. Management and monitoring will grow to be a challenge over time given the customized nature of the tools | -| Use Distcp to copy data from Hadoop to Azure Storage.
Then copy data from Azure Storage to Data Lake Storage Gen1 using an appropriate mechanism. |You can copy data from Azure Storage to Data Lake Storage Gen1 using: <ul><li>[Azure Data Factory](../data-factory/copy-activity-overview.md)</li><li>[AdlCopy tool](data-lake-store-copy-data-azure-storage-blob.md)</li><li>[Apache DistCp running on HDInsight clusters](data-lake-store-copy-data-wasb-distcp.md)</li></ul> |You can use open-source tools. |Multi-step process that involves multiple technologies | --### Really large datasets -For datasets that span several terabytes, the methods described above can sometimes be slow and costly. In such cases, you can use the options below. --* **Using Azure ExpressRoute**. Azure ExpressRoute lets you create private connections between Azure datacenters and infrastructure on your premises. This provides a reliable option for transferring large amounts of data. For more information, see [Azure ExpressRoute documentation](../expressroute/expressroute-introduction.md). -* **"Offline" upload of data**. If using Azure ExpressRoute is not feasible for any reason, you can use [Azure Import/Export service](../import-export/storage-import-export-service.md) to ship hard disk drives with your data to an Azure data center. Your data is first uploaded to Azure Storage Blobs. You can then use [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md) or [AdlCopy tool](data-lake-store-copy-data-azure-storage-blob.md) to copy data from Azure Storage Blobs to Data Lake Storage Gen1. -- > [!NOTE] - > While using the Import/Export service, the file sizes on the disks that you ship to the Azure datacenter should not be greater than 195 GB. - > - > --## Process data stored in Data Lake Storage Gen1 -Once the data is available in Data Lake Storage Gen1, you can run analysis on that data using the supported big data applications. Currently, you can use Azure HDInsight and Azure Data Lake Analytics to run data analysis jobs on the data stored in Data Lake Storage Gen1. --![Analyze data in Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/analyze-data.png "Analyze data in Data Lake Storage Gen1") --You can look at the following examples. --* [Create an HDInsight cluster with Data Lake Storage Gen1 as storage](data-lake-store-hdinsight-hadoop-use-portal.md) -* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md) --## Download data from Data Lake Storage Gen1 -You might also want to download or move data from Azure Data Lake Storage Gen1 for scenarios such as: --* Move data to other repositories to interface with your existing data processing pipelines. For example, you might want to move data from Data Lake Storage Gen1 to Azure SQL Database or SQL Server. -* Download data to your local computer for processing in IDE environments while building application prototypes. --![Egress data from Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/egress-data.png "Egress data from Data Lake Storage Gen1") --In such cases, you can use any of the following options: --* [Apache Sqoop](data-lake-store-data-transfer-sql-sqoop.md) -* [Azure Data Factory](../data-factory/copy-activity-overview.md) -* [Apache DistCp](data-lake-store-copy-data-wasb-distcp.md) --You can also use the following methods to write your own script/application to download data from Data Lake Storage Gen1.
--* [Azure CLI](data-lake-store-get-started-cli-2.0.md) -* [Azure PowerShell](data-lake-store-get-started-powershell.md) -* [Azure Data Lake Storage Gen1 .NET SDK](data-lake-store-get-started-net-sdk.md) --## Visualize data in Data Lake Storage Gen1 -You can use a mix of services to create visual representations of data stored in Data Lake Storage Gen1. --![Visualize data in Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/visualize-data.png "Visualize data in Data Lake Storage Gen1") --* You can start by using [Azure Data Factory to move data from Data Lake Storage Gen1 to Azure Synapse Analytics](../data-factory/copy-activity-overview.md). -* After that, you can [integrate Power BI with Azure Synapse Analytics](/power-bi/connect-data/service-azure-sql-data-warehouse-with-direct-connect) to create visual representations of the data. |
data-lake-store | Data Lake Store Data Transfer Sql Sqoop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-transfer-sql-sqoop.md | - Title: Copy data between Data Lake Storage Gen1 and Azure SQL - Sqoop | Microsoft Docs -description: Use Sqoop to copy data between Azure SQL Database and Azure Data Lake Storage Gen1 ---- Previously updated : 07/30/2019----# Copy data between Data Lake Storage Gen1 and Azure SQL Database using Sqoop --Learn how to use Apache Sqoop to import and export data between Azure SQL Database and Azure Data Lake Storage Gen1. --## What is Sqoop? --Big data applications are a natural choice for processing unstructured and semi-structured data, such as logs and files. However, you may also have a need to process structured data that's stored in relational databases. --[Apache Sqoop](https://sqoop.apache.org/docs/1.4.4/SqoopUserGuide.html) is a tool designed to transfer data between relational databases and a big data repository, such as Data Lake Storage Gen1. You can use it to import data from a relational database management system (RDBMS) such as Azure SQL Database into Data Lake Storage Gen1. You can then transform and analyze the data using big data workloads, and then export the data back into an RDBMS. In this article, you use a database in Azure SQL Database as your relational database to import/export from. --## Prerequisites --Before you begin, you must have the following: --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create the account, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md) -* **Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). This article assumes you have an HDInsight Linux cluster with Data Lake Storage Gen1 access. -* **Azure SQL Database**. For instructions on how to create a database in Azure SQL Database, see [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) --## Create sample tables in the database --1. To start, create two sample tables in the database. Use [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms) or Visual Studio to connect to the database and then run the following queries. -- **Create Table1** -- ```sql - CREATE TABLE [dbo].[Table1]( - [ID] [int] NOT NULL, - [FName] [nvarchar](50) NOT NULL, - [LName] [nvarchar](50) NOT NULL, - CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED - ( - [ID] ASC - ) - ) ON [PRIMARY] - GO - ``` -- **Create Table2** -- ```sql - CREATE TABLE [dbo].[Table2]( - [ID] [int] NOT NULL, - [FName] [nvarchar](50) NOT NULL, - [LName] [nvarchar](50) NOT NULL, - CONSTRAINT [PK_Table_2] PRIMARY KEY CLUSTERED - ( - [ID] ASC - ) - ) ON [PRIMARY] - GO - ``` --1. Run the following command to add some sample data to **Table1**. Leave **Table2** empty. Later, you'll import data from **Table1** into Data Lake Storage Gen1. Then, you'll export data from Data Lake Storage Gen1 into **Table2**. -- ```sql - INSERT INTO [dbo].[Table1] VALUES (1,'Neal','Kell'), (2,'Lila','Fulton'), (3, 'Erna','Myers'), (4,'Annette','Simpson'); - ``` --## Use Sqoop from an HDInsight cluster with access to Data Lake Storage Gen1 --An HDInsight cluster already has the Sqoop packages available. 
If you've configured the HDInsight cluster to use Data Lake Storage Gen1 as additional storage, you can use Sqoop (without any configuration changes) to import/export data between a relational database such as Azure SQL Database and a Data Lake Storage Gen1 account. --1. For this article, we assume you created a Linux cluster, so you should use SSH to connect to the cluster. See [Connect to a Linux-based HDInsight cluster](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md). --1. Verify whether you can access the Data Lake Storage Gen1 account from the cluster. Run the following command from the SSH prompt: -- ```console - hdfs dfs -ls adl://<data_lake_storage_gen1_account>.azuredatalakestore.net/ - ``` -- This command provides a list of files/folders in the Data Lake Storage Gen1 account. --### Import data from Azure SQL Database into Data Lake Storage Gen1 --1. Navigate to the directory where Sqoop packages are available. Typically, this location is `/usr/hdp/<version>/sqoop/bin`. --1. Import the data from **Table1** into the Data Lake Storage Gen1 account. Use the following syntax: -- ```console - sqoop-import --connect "jdbc:sqlserver://<sql-database-server-name>.database.windows.net:1433;username=<username>@<sql-database-server-name>;password=<password>;database=<sql-database-name>" --table Table1 --target-dir adl://<data-lake-storage-gen1-name>.azuredatalakestore.net/Sqoop/SqoopImportTable1 - ``` -- The **sql-database-server-name** placeholder represents the name of the server where the database is running. The **sql-database-name** placeholder represents the actual database name. -- For example, -- ```console - sqoop-import --connect "jdbc:sqlserver://mysqoopserver.database.windows.net:1433;username=user1@mysqoopserver;password=<password>;database=mysqoopdatabase" --table Table1 --target-dir adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1 - ``` --1. Verify that the data has been transferred to the Data Lake Storage Gen1 account. Run the following command: -- ```console - hdfs dfs -ls adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/ - ``` -- You should see the following output. -- ```console - -rwxrwxrwx 0 sshuser hdfs 0 2016-02-26 21:09 adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/_SUCCESS - -rwxrwxrwx 0 sshuser hdfs 12 2016-02-26 21:09 adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00000 - -rwxrwxrwx 0 sshuser hdfs 14 2016-02-26 21:09 adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00001 - -rwxrwxrwx 0 sshuser hdfs 13 2016-02-26 21:09 adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00002 - -rwxrwxrwx 0 sshuser hdfs 18 2016-02-26 21:09 adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00003 - ``` -- Each **part-m-*** file corresponds to a row in the source table, **Table1**. You can view the contents of the part-m-* files to verify. --### Export data from Data Lake Storage Gen1 into Azure SQL Database --1. Export the data from the Data Lake Storage Gen1 account to the empty table, **Table2**, in the Azure SQL Database. Use the following syntax.
-- ```console - sqoop-export --connect "jdbc:sqlserver://<sql-database-server-name>.database.windows.net:1433;username=<username>@<sql-database-server-name>;password=<password>;database=<sql-database-name>" --table Table2 --export-dir adl://<data-lake-storage-gen1-name>.azuredatalakestore.net/Sqoop/SqoopImportTable1 --input-fields-terminated-by "," - ``` -- For example, -- ```console - sqoop-export --connect "jdbc:sqlserver://mysqoopserver.database.windows.net:1433;username=user1@mysqoopserver;password=<password>;database=mysqoopdatabase" --table Table2 --export-dir adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1 --input-fields-terminated-by "," - ``` --1. Verify that the data was uploaded to the SQL Database table. Use [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms) or Visual Studio to connect to the Azure SQL Database and then run the following query. -- ```sql - SELECT * FROM TABLE2 - ``` -- This command should have the following output. -- ```output - ID FName LName - - - 1 Neal Kell - 2 Lila Fulton - 3 Erna Myers - 4 Annette Simpson - ``` --## Performance considerations while using Sqoop --For information about performance tuning your Sqoop job to copy data to Data Lake Storage Gen1, see the [Sqoop performance blog post](/archive/blogs/shanyu/performance-tuning-for-hdinsight-storm-and-microsoft-azure-eventhubs). --## Next steps --* [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md) -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) -* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md) -* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md) |
data-lake-store | Data Lake Store Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-diagnostic-logs.md | - Title: Viewing diagnostic logs for Azure Data Lake Storage Gen1 | Microsoft Docs -description: 'Understand how to set up and access diagnostic logs for Azure Data Lake Storage Gen1 ' ---- Previously updated : 03/26/2018----# Accessing diagnostic logs for Azure Data Lake Storage Gen1 -Learn how to enable diagnostic logging for your Azure Data Lake Storage Gen1 account and how to view the logs collected for your account. --Organizations can enable diagnostic logging for their Azure Data Lake Storage Gen1 account to collect data access audit trails that provide information such as a list of users accessing the data, how frequently the data is accessed, and how much data is stored in the account. When enabled, the diagnostics and/or requests are logged on a best-effort basis. Both Requests and Diagnostics log entries are created only if there are requests made against the service endpoint. --## Prerequisites -* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). -* **Azure Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). --## Enable diagnostic logging for your Data Lake Storage Gen1 account -1. Sign in to the [Azure portal](https://portal.azure.com). -2. Open your Data Lake Storage Gen1 account, and from your Data Lake Storage Gen1 account blade, click **Diagnostic settings**. -3. In the **Diagnostics settings** blade, click **Turn on diagnostics**. -- ![Screenshot of the Data Lake Storage Gen 1 account with the Diagnostic setting option and the Turn on diagnostics option called out.](./media/data-lake-store-diagnostic-logs/turn-on-diagnostics.png "Enable diagnostic logs") --4. In the **Diagnostics settings** blade, make the following changes to configure diagnostic logging. - - ![Screenshot of the Diagnostic setting section with the Name text box and the Save option called out.](./media/data-lake-store-diagnostic-logs/enable-diagnostic-logs.png "Enable diagnostic logs") - - * For **Name**, enter a value for the diagnostic log configuration. - * You can choose to store/process the data in different ways. - - * Select the option to **Archive to a storage account** to store logs to an Azure Storage account. You use this option if you want to archive the data that will be batch-processed at a later date. If you select this option, you must provide an Azure Storage account to save the logs to. - - * Select the option to **Stream to an event hub** to stream log data to an Azure Event Hub. Most likely you will use this option if you have a downstream processing pipeline to analyze incoming logs in real time. If you select this option, you must provide the details for the Azure Event Hub you want to use. -- * Select the option to **Send to Log Analytics** to use the Azure Monitor service to analyze the generated log data. If you select this option, you must provide the details for the Log Analytics workspace that you would use to perform log analysis. See [View or analyze data collected with Azure Monitor logs search](../azure-monitor/logs/log-analytics-tutorial.md) for details on using Azure Monitor logs. - - * Specify whether you want to get audit logs, request logs, or both. - * Specify the number of days for which the data must be retained.
Retention is only applicable if you are using an Azure Storage account to archive log data. - * Click **Save**. --Once you have enabled diagnostic settings, you can view the logs in the **Diagnostic Logs** tab. --## View diagnostic logs for your Data Lake Storage Gen1 account -There are two ways to view the log data for your Data Lake Storage Gen1 account. --* From the Data Lake Storage Gen1 account settings view -* From the Azure Storage account where the data is stored --### Using the Data Lake Storage Gen1 Settings view -1. From your Data Lake Storage Gen1 account **Settings** blade, click **Diagnostic Logs**. - - ![View diagnostic logs](./media/data-lake-store-diagnostic-logs/view-diagnostic-logs.png "View diagnostic logs") -2. In the **Diagnostics Logs** blade, you should see the logs categorized by **Audit Logs** and **Request Logs**. - - * Request logs capture every API request made on the Data Lake Storage Gen1 account. - * Audit logs are similar to request logs but provide a much more detailed breakdown of the operations being performed on the Data Lake Storage Gen1 account. For example, a single upload API call in request logs might result in multiple "Append" operations in the audit logs. -3. To download the logs, click the **Download** link next to each log entry. --### From the Azure Storage account that contains log data -1. Open the Azure Storage account blade associated with Data Lake Storage Gen1 for logging, and then click **Blobs**. The **Blob service** blade lists two containers. - - ![Screenshot of the Data Lake Storage Gen 1 blade with the Blobs option selected and the Blob service blade with the names of the two blob containers called out.](./media/data-lake-store-diagnostic-logs/view-diagnostic-logs-storage-account.png "View diagnostic logs") - - * The container **insights-logs-audit** contains the audit logs. - * The container **insights-logs-requests** contains the request logs. -2. Within these containers, the logs are stored under the following structure. - - ![Screenshot of the log structure as it is stored in the container.](./media/data-lake-store-diagnostic-logs/view-diagnostic-logs-storage-account-structure.png "View diagnostic logs") - - As an example, the complete path to an audit log could be `https://adllogs.blob.core.windows.net/insights-logs-audit/resourceId=/SUBSCRIPTIONS/<sub-id>/RESOURCEGROUPS/myresourcegroup/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/mydatalakestorage/y=2016/m=07/d=18/h=04/m=00/PT1H.json` - - Similarly, the complete path to a request log could be `https://adllogs.blob.core.windows.net/insights-logs-requests/resourceId=/SUBSCRIPTIONS/<sub-id>/RESOURCEGROUPS/myresourcegroup/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/mydatalakestorage/y=2016/m=07/d=18/h=14/m=00/PT1H.json` --## Understand the structure of the log data -The audit and request logs are in a JSON format. In this section, we look at the structure of JSON for request and audit logs. --### Request logs -Here's a sample entry in the JSON-formatted request log. Each blob has one root object called **records** that contains an array of log objects. --```json -{ -"records": - [ - . . . .
- , - { - "time": "2016-07-07T21:02:53.456Z", - "resourceId": "/SUBSCRIPTIONS/<subscription_id>/RESOURCEGROUPS/<resource_group_name>/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/<data_lake_storage_gen1_account_name>", - "category": "Requests", - "operationName": "GETCustomerIngressEgress", - "resultType": "200", - "callerIpAddress": "::ffff:1.1.1.1", - "correlationId": "4a11c709-05f5-417c-a98d-6e81b3e29c58", - "identity": "1808bd5f-62af-45f4-89d8-03c5e81bac30", - "properties": {"HttpMethod":"GET","Path":"/webhdfs/v1/Samples/Outputs/Drivers.csv","RequestContentLength":0,"StoreIngressSize":0 ,"StoreEgressSize":4096,"ClientRequestId":"3b7adbd9-3519-4f28-a61c-bd89506163b8","StartTime":"2016-07-07T21:02:52.472Z","EndTime":"2016-07-07T21:02:53.456Z","QueryParameters":"api-version=<version>&op=<operationName>"} - } - , - . . . . - ] -} -``` --#### Request log schema -| Name | Type | Description | -| | | | -| time |String |The timestamp (in UTC) of the log | -| resourceId |String |The ID of the resource that the operation took place on | -| category |String |The log category. For example, **Requests**. | -| operationName |String |Name of the operation that is logged. For example, getfilestatus. | -| resultType |String |The status of the operation. For example, 200. | -| callerIpAddress |String |The IP address of the client making the request | -| correlationId |String |The ID of the log that can be used to group together a set of related log entries | -| identity |Object |The identity that generated the log | -| properties |JSON |See below for details | --#### Request log properties schema -| Name | Type | Description | -| | | | -| HttpMethod |String |The HTTP method used for the operation. For example, GET. | -| Path |String |The path the operation was performed on | -| RequestContentLength |int |The content length of the HTTP request | -| ClientRequestId |String |The ID that uniquely identifies this request | -| StartTime |String |The time at which the server received the request | -| EndTime |String |The time at which the server sent a response | -| StoreIngressSize |Long |Size in bytes ingressed to Data Lake Store | -| StoreEgressSize |Long |Size in bytes egressed from Data Lake Store | -| QueryParameters |String |The HTTP query parameters. Example 1: api-version=2014-01-01&op=getfilestatus. Example 2: op=APPEND&append=true&syncFlag=DATA&filesessionid=bee3355a-4925-4435-bb4d-ceea52811aeb&leaseid=bee3355a-4925-4435-bb4d-ceea52811aeb&offset=28313319&api-version=2017-08-01 | --### Audit logs -Here's a sample entry in the JSON-formatted audit log. Each blob has one root object called **records** that contains an array of log objects. --```json -{ -"records": - [ - . . . . - , - { - "time": "2016-07-08T19:08:59.359Z", - "resourceId": "/SUBSCRIPTIONS/<subscription_id>/RESOURCEGROUPS/<resource_group_name>/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/<data_lake_storage_gen1_account_name>", - "category": "Audit", - "operationName": "SeOpenStream", - "resultType": "0", - "resultSignature": "0", - "correlationId": "381110fc03534e1cb99ec52376ceebdf;Append_BrEKAmg;25.66.9.145", - "identity": "A9DAFFAF-FFEE-4BB5-A4A0-1B6CBBF24355", - "properties": {"StreamName":"adl://<data_lake_storage_gen1_account_name>.azuredatalakestore.net/logs.csv"} - } - , - . . . . - ] -} -``` --#### Audit log schema -| Name | Type | Description | -| | | | -| time |String |The timestamp (in UTC) of the log | -| resourceId |String |The ID of the resource that the operation took place on | -| category |String |The log category.
For example, **Audit**. | -| operationName |String |Name of the operation that is logged. For example, getfilestatus. | -| resultType |String |The status of the operation. For example, 200. | -| resultSignature |String |Additional details on the operation. | -| correlationId |String |The ID of the log that can be used to group together a set of related log entries | -| identity |Object |The identity that generated the log | -| properties |JSON |See below for details | --#### Audit log properties schema -| Name | Type | Description | -| | | | -| StreamName |String |The path the operation was performed on | --## Samples to process the log data -When sending logs from Azure Data Lake Storage Gen1 to Azure Monitor logs (see [View or analyze data collected with Azure Monitor logs search](../azure-monitor/logs/log-analytics-tutorial.md) for details on using Azure Monitor logs), the following query returns a table containing a list of user display names, the times of the events, and a count of events per event time, along with a visual chart. You can easily modify it to show the user GUID or other attributes: --``` -search * -| where ( Type == "AzureDiagnostics" ) -| summarize count(TimeGenerated) by identity_s, TimeGenerated -``` ---Azure Data Lake Storage Gen1 provides a sample on how to process and analyze the log data. You can find the sample at [https://github.com/Azure/AzureDataLake/tree/master/Samples/AzureDiagnosticsSample](https://github.com/Azure/AzureDataLake/tree/master/Samples/AzureDiagnosticsSample). --## See also -* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md) -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) |
data-lake-store | Data Lake Store Disaster Recovery Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-disaster-recovery-guidance.md | - Title: Disaster recovery guidance for Azure Data Lake Storage Gen1 | Microsoft Docs -description: Learn how to further protect your data from region-wide outages or accidental deletions beyond the locally redundant storage of Azure Data Lake Storage Gen1. ---- Previously updated : 02/21/2018----# High availability and disaster recovery guidance for Data Lake Storage Gen1 --Data Lake Storage Gen1 provides locally redundant storage (LRS). Therefore, the data in your Data Lake Storage Gen1 account is resilient to transient hardware failures within a datacenter through automated replicas. This ensures durability and high availability, meeting the Data Lake Storage Gen1 SLA. This article provides guidance on how to further protect your data from rare region-wide outages or accidental deletions. --## Disaster recovery guidance --It's critical for you to prepare a disaster recovery plan. Review the information in this article and these additional resources to help you create your own plan. --* [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery) -* [Azure resiliency technical guidance](/azure/architecture/framework/resiliency/app-design) --### Best practice recommendations --We recommend that you copy your critical data to another Data Lake Storage Gen1 account in another region with a frequency aligned to the needs of your disaster recovery plan. There are a variety of methods to copy data including [ADLCopy](data-lake-store-copy-data-azure-storage-blob.md), [Azure PowerShell](data-lake-store-get-started-powershell.md), or [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md). Azure Data Factory is a useful service for creating and deploying data movement pipelines on a recurring basis. --If a regional outage occurs, you can then access your data in the region where the data was copied. You can monitor the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to determine the Azure service status across the globe. --## Data corruption or accidental deletion recovery guidance --While Data Lake Storage Gen1 provides data resiliency through automated replicas, this does not prevent your application (or developers/users) from corrupting data or accidentally deleting it. --To prevent accidental deletion, we recommend that you first set the correct access policies for your Data Lake Storage Gen1 account. This includes applying [Azure resource locks](../azure-resource-manager/management/lock-resources.md) to lock down important resources and applying account and file level access control using the available [Data Lake Storage Gen1 security features](data-lake-store-security-overview.md). We also recommend that you routinely create copies of your critical data using [ADLCopy](data-lake-store-copy-data-azure-storage-blob.md), [Azure PowerShell](data-lake-store-get-started-powershell.md) or [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md) in another Data Lake Storage Gen1 account, folder, or Azure subscription. This can be used to recover from a data corruption or deletion incident. Azure Data Factory is a useful service for creating and deploying data movement pipelines on a recurring basis. 
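For a one-off or scripted copy, the Azure CLI commands covered later in this article set can be combined into a simple sketch like the following. This is illustrative only; the account names, paths, and local staging location are placeholders, and for recurring copies Azure Data Factory or ADLCopy is usually a better fit.

```azurecli
# Sketch only: copy one critical file from the primary account to a secondary
# account in another region via a local staging file (placeholder names/paths).
az dls fs download --account contosoadlsprimary --source-path /critical/sales.csv --destination-path ./staging/sales.csv
az dls fs upload --account contosoadlssecondary --source-path ./staging/sales.csv --destination-path /critical/sales.csv
```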
--You can also enable [diagnostic logging](data-lake-store-diagnostic-logs.md) for a Data Lake Storage Gen1 account to collect data access audit trails. The audit trails provide information about who might have deleted or updated a file. --## Next steps --* [Get started with Data Lake Storage Gen1](data-lake-store-get-started-portal.md) -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) |
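As a hedged example of wiring up that logging from the command line (the resource IDs are placeholders and the exact flags can vary by Azure CLI version), a diagnostic setting that forwards the **Audit** and **Requests** categories described in the diagnostic logs article might look like this:

```azurecli
# Sketch only: send the Audit and Requests log categories for a Data Lake
# Storage Gen1 account to a Log Analytics workspace (placeholder resource IDs).
az monitor diagnostic-settings create \
  --name adls-gen1-audit \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataLakeStore/accounts/<account-name>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category": "Audit", "enabled": true}, {"category": "Requests", "enabled": true}]'
```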
data-lake-store | Data Lake Store Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-encryption.md | - Title: Encryption in Azure Data Lake Storage Gen1 | Microsoft Docs -description: Encryption in Azure Data Lake Storage Gen1 helps you protect your data, implement enterprise security policies, and meet regulatory compliance requirements. This article provides an overview of the design, and discusses some of the technical aspects of implementation. ---- Previously updated : 03/26/2018----# Encryption of data in Azure Data Lake Storage Gen1 --Encryption in Azure Data Lake Storage Gen1 helps you protect your data, implement enterprise security policies, and meet regulatory compliance requirements. This article provides an overview of the design, and discusses some of the technical aspects of implementation. --Data Lake Storage Gen1 supports encryption of data both at rest and in transit. For data at rest, Data Lake Storage Gen1 supports "on by default," transparent encryption. Here is what these terms mean in a bit more detail: --* **On by default**: When you create a new Data Lake Storage Gen1 account, the default setting enables encryption. Thereafter, data that is stored in Data Lake Storage Gen1 is always encrypted prior to storing on persistent media. This is the behavior for all data, and it cannot be changed after an account is created. -* **Transparent**: Data Lake Storage Gen1 automatically encrypts data prior to persisting, and decrypts data prior to retrieval. The encryption is configured and managed at the Data Lake Storage Gen1 account level by an administrator. No changes are made to the data access APIs. Thus, no changes are required in applications and services that interact with Data Lake Storage Gen1 because of encryption. --Data in transit (also known as data in motion) is also always encrypted in Data Lake Storage Gen1. In addition to encrypting data prior to storing to persistent media, the data is also always secured in transit by using HTTPS. HTTPS is the only protocol that is supported for the Data Lake Storage Gen1 REST interfaces. The following diagram shows how data becomes encrypted in Data Lake Storage Gen1: --![Diagram of data encryption in Data Lake Storage Gen1](./media/data-lake-store-encryption/fig1.png) ---## Set up encryption with Data Lake Storage Gen1 --Encryption for Data Lake Storage Gen1 is set up during account creation, and it is always enabled by default. You can either manage the keys yourself, or allow Data Lake Storage Gen1 to manage them for you (this is the default). --For more information, see [Getting started](./data-lake-store-get-started-portal.md). --## How encryption works in Data Lake Storage Gen1 --The following information covers how to manage master encryption keys, and it explains the three different types of keys you can use in data encryption for Data Lake Storage Gen1. --### Master encryption keys --Data Lake Storage Gen1 provides two modes for management of master encryption keys (MEKs). For now, assume that the master encryption key is the top-level key. Access to the master encryption key is required to decrypt any data stored in Data Lake Storage Gen1. --The two modes for managing the master encryption key are as follows: --* Service managed keys -* Customer managed keys --In both modes, the master encryption key is secured by storing it in Azure Key Vault. Key Vault is a fully managed, highly secure service on Azure that can be used to safeguard cryptographic keys. 
For more information, see [Key Vault](https://azure.microsoft.com/services/key-vault). --Here is a brief comparison of capabilities provided by the two modes of managing the MEKs. --| Question | Service managed keys | Customer managed keys | -| -- | -- | | -|How is data stored?|Always encrypted prior to being stored.|Always encrypted prior to being stored.| -|Where is the Master Encryption Key stored?|Key Vault|Key Vault| -|Are any encryption keys stored in the clear outside of Key Vault? |No|No| -|Can the MEK be retrieved by Key Vault?|No. After the MEK is stored in Key Vault, it can only be used for encryption and decryption.|No. After the MEK is stored in Key Vault, it can only be used for encryption and decryption.| -|Who owns the Key Vault instance and the MEK?|The Data Lake Storage Gen1 service|You own the Key Vault instance, which belongs in your own Azure subscription. The MEK in Key Vault can be managed by software or hardware.| -|Can you revoke access to the MEK for the Data Lake Storage Gen1 service?|No|Yes. You can manage access control lists in Key Vault, and remove access control entries to the service identity for the Data Lake Storage Gen1 service.| -|Can you permanently delete the MEK?|No|Yes. If you delete the MEK from Key Vault, the data in the Data Lake Storage Gen1 account cannot be decrypted by anyone, including the Data Lake Storage Gen1 service. <br><br> If you have explicitly backed up the MEK prior to deleting it from Key Vault, the MEK can be restored, and the data can then be recovered. However, if you have not backed up the MEK prior to deleting it from Key Vault, the data in the Data Lake Storage Gen1 account can never be decrypted thereafter.| ---Aside from this difference of who manages the MEK and the Key Vault instance in which it resides, the rest of the design is the same for both modes. --It's important to remember the following when you choose the mode for the master encryption keys: --* You can choose whether to use customer managed keys or service managed keys when you provision a Data Lake Storage Gen1 account. -* After a Data Lake Storage Gen1 account is provisioned, the mode cannot be changed. --### Encryption and decryption of data --There are three types of keys that are used in the design of data encryption. The following table provides a summary: --| Key | Abbreviation | Associated with | Storage location | Type | Notes | -|--|--|--|-||| -| Master Encryption Key | MEK | A Data Lake Storage Gen1 account | Key Vault | Asymmetric | It can be managed by Data Lake Storage Gen1 or you. | -| Data Encryption Key | DEK | A Data Lake Storage Gen1 account | Persistent storage, managed by the Data Lake Storage Gen1 service | Symmetric | The DEK is encrypted by the MEK. The encrypted DEK is what is stored on persistent media. | -| Block Encryption Key | BEK | A block of data | None | Symmetric | The BEK is derived from the DEK and the data block. | --The following diagram illustrates these concepts: --![Keys in data encryption](./media/data-lake-store-encryption/fig2.png) --#### Pseudo algorithm when a file is to be decrypted: -1. Check if the DEK for the Data Lake Storage Gen1 account is cached and ready for use. - - If not, then read the encrypted DEK from persistent storage, and send it to Key Vault to be decrypted. Cache the decrypted DEK in memory. It is now ready to use. -2. For every block of data in the file: - - Read the encrypted block of data from persistent storage. - - Generate the BEK from the DEK and the encrypted block of data. 
- - Use the BEK to decrypt data. ---#### Pseudo algorithm when a block of data is to be encrypted: -1. Check if the DEK for the Data Lake Storage Gen1 account is cached and ready for use. - - If not, then read the encrypted DEK from persistent storage, and send it to Key Vault to be decrypted. Cache the decrypted DEK in memory. It is now ready to use. -2. Generate a unique BEK for the block of data from the DEK. -3. Encrypt the data block with the BEK, by using AES-256 encryption. -4. Store the encrypted data block of data on persistent storage. --> [!NOTE] -> The DEK is always stored encrypted by the MEK, whether on persistent media or cached in memory. --## Key rotation --When you are using customer-managed keys, you can rotate the MEK. To learn how to set up a Data Lake Storage Gen1 account with customer-managed keys, see [Getting started](./data-lake-store-get-started-portal.md). --### Prerequisites --When you set up the Data Lake Storage Gen1 account, you have chosen to use your own keys. This option cannot be changed after the account has been created. The following steps assume that you are using customer-managed keys (that is, you have chosen your own keys from Key Vault). --Note that if you use the default options for encryption, your data is always encrypted by using keys managed by Data Lake Storage Gen1. In this option, you don't have the ability to rotate keys, as they are managed by Data Lake Storage Gen1. --### How to rotate the MEK in Data Lake Storage Gen1 --1. Sign in to the [Azure portal](https://portal.azure.com/). -2. Browse to the Key Vault instance that stores your keys associated with your Data Lake Storage Gen1 account. Select **Keys**. -- ![Screenshot of Key Vault](./media/data-lake-store-encryption/keyvault.png) --3. Select the key associated with your Data Lake Storage Gen1 account, and create a new version of this key. Note that Data Lake Storage Gen1 currently only supports key rotation to a new version of a key. It doesn't support rotating to a different key. -- ![Screenshot of Keys window, with New Version highlighted](./media/data-lake-store-encryption/keynewversion.png) --4. Browse to the Data Lake Storage Gen1 account, and select **Encryption**. -- ![Screenshot of Data Lake Storage Gen1 account window, with Encryption highlighted](./media/data-lake-store-encryption/select-encryption.png) --5. A message notifies you that a new key version of the key is available. Click **Rotate Key** to update the key to the new version. -- ![Screenshot of Data Lake Storage Gen1 window with message and Rotate Key highlighted](./media/data-lake-store-encryption/rotatekey.png) --This operation should take less than two minutes, and there is no expected downtime due to key rotation. After the operation is complete, the new version of the key is in use. --> [!IMPORTANT] -> After the key rotation operation is complete, the old version of the key is no longer actively used for encrypting new data. There may be cases however where accessing older data may need the old key. To allow for reading of such older data, do not delete the old key |
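To make the MEK, DEK, and BEK layering described in this article concrete, here is a purely illustrative Python sketch. It is not the Data Lake Storage Gen1 implementation: the HMAC-based BEK derivation and the use of the third-party `cryptography` package are assumptions made only to model the pseudo-algorithms above.

```python
# Illustrative only: models the MEK -> DEK -> BEK layering described above.
# NOT the actual Data Lake Storage Gen1 implementation; the HMAC-based BEK
# derivation is an assumption for illustration. Requires `pip install cryptography`.
import os
import hmac
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

mek = AESGCM.generate_key(bit_length=256)          # in reality, kept in Azure Key Vault
dek = AESGCM.generate_key(bit_length=256)          # generated per account
dek_nonce = os.urandom(12)
encrypted_dek = AESGCM(mek).encrypt(dek_nonce, dek, None)   # only the encrypted DEK persists

def block_key(dek: bytes, block_id: int) -> bytes:
    """Derive a per-block key (BEK) from the DEK and the block identity."""
    return hmac.new(dek, block_id.to_bytes(8, "big"), hashlib.sha256).digest()

def encrypt_block(dek: bytes, block_id: int, data: bytes):
    bek = block_key(dek, block_id)
    nonce = os.urandom(12)
    return nonce, AESGCM(bek).encrypt(nonce, data, None)    # AES-256 per block

def decrypt_block(dek: bytes, block_id: int, nonce: bytes, ciphertext: bytes) -> bytes:
    bek = block_key(dek, block_id)
    return AESGCM(bek).decrypt(nonce, ciphertext, None)

# Decryption first unwraps the DEK with the MEK (via Key Vault in the real service),
# then derives the BEK for each block and decrypts it.
recovered_dek = AESGCM(mek).decrypt(dek_nonce, encrypted_dek, None)
nonce, ct = encrypt_block(recovered_dek, 0, b"example block of data")
assert decrypt_block(recovered_dek, 0, nonce, ct) == b"example block of data"
```

In this model, deleting the MEK makes the encrypted DEK, and therefore every block, unrecoverable, which matches the behavior described for customer-managed keys above.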
data-lake-store | Data Lake Store End User Authenticate Java Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-java-sdk.md | - Title: End-user authentication - Java with Data Lake Storage Gen1 - Azure -description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID with Java ---- Previously updated : 05/29/2018----# End-user authentication with Azure Data Lake Storage Gen1 using Java -> [!div class="op_single_selector"] -> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md) -> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md) -> * [Using Python](data-lake-store-end-user-authenticate-python.md) -> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md) -> -> ---In this article, you learn how to use the Java SDK to do end-user authentication with Azure Data Lake Storage Gen1. For service-to-service authentication with Data Lake Storage Gen1 using the Java SDK, see [Service-to-service authentication with Data Lake Storage Gen1 using Java](data-lake-store-service-to-service-authenticate-java.md). --## Prerequisites -* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md). --* [Maven](https://maven.apache.org/install.html). This tutorial uses Maven for build and project dependencies. Although it is possible to build without using a build system like Maven or Gradle, these systems make it much easier to manage dependencies. --* (Optional) An IDE such as [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) or [Eclipse](https://www.eclipse.org/downloads/). --## End-user authentication -1. Create a Maven project using [mvn archetype](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html) from the command line or using an IDE. For instructions on how to create a Java project using IntelliJ, see [here](https://www.jetbrains.com/help/idea/2016.1/creating-and-running-your-first-java-application.html). For instructions on how to create a project using Eclipse, see [here](https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2FgettingStarted%2Fqs-3.htm). --2. Add the following dependencies to your Maven **pom.xml** file. Add the following snippet before the **\</project>** tag: - - ```xml - <dependencies> - <dependency> - <groupId>com.microsoft.azure</groupId> - <artifactId>azure-data-lake-store-sdk</artifactId> - <version>2.2.3</version> - </dependency> - <dependency> - <groupId>org.slf4j</groupId> - <artifactId>slf4j-nop</artifactId> - <version>1.7.21</version> - </dependency> - </dependencies> - ``` - - The first dependency uses the Data Lake Storage Gen1 SDK (`azure-data-lake-store-sdk`) from the Maven repository. The second dependency specifies the logging framework (`slf4j-nop`) to use for this application. The Data Lake Storage Gen1 SDK uses the [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like Log4j, Java logging, Logback, etc., or no logging. For this example, we disable logging, so we use the **slf4j-nop** binding. 
To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep). --3. Add the following import statements to your application. -- ```java - import com.microsoft.azure.datalake.store.ADLException; - import com.microsoft.azure.datalake.store.ADLStoreClient; - import com.microsoft.azure.datalake.store.DirectoryEntry; - import com.microsoft.azure.datalake.store.IfExists; - import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider; - import com.microsoft.azure.datalake.store.oauth2.DeviceCodeTokenProvider; - ``` --4. Use the following snippet in your Java application to obtain token for the Active Directory native application you created earlier using the `DeviceCodeTokenProvider`. Replace **FILL-IN-HERE** with the actual values for the Microsoft Entra native application. -- ```java - private static String nativeAppId = "FILL-IN-HERE"; - - AccessTokenProvider provider = new DeviceCodeTokenProvider(nativeAppId); - ``` --The Data Lake Storage Gen1 SDK provides convenient methods that let you manage the security tokens needed to talk to the Data Lake Storage Gen1 account. However, the SDK does not mandate that only these methods be used. You can use any other means of obtaining token as well, like using the [Azure AD SDK](https://github.com/AzureAD/azure-activedirectory-library-for-java), or your own custom code. --## Next steps -In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using Java SDK. You can now look at the following articles that talk about how to use the Java SDK to work with Azure Data Lake Storage Gen1. --* [Data operations on Data Lake Storage Gen1 using Java SDK](data-lake-store-get-started-java-sdk.md) |
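For context before moving on, the provider created above is typically handed straight to `ADLStoreClient`, the client used in the data operations article linked in the next steps. The following is a minimal sketch; the application ID and the fully qualified account name are placeholders.

```java
import com.microsoft.azure.datalake.store.ADLStoreClient;
import com.microsoft.azure.datalake.store.DirectoryEntry;
import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider;
import com.microsoft.azure.datalake.store.oauth2.DeviceCodeTokenProvider;

public class EndUserAuthSample {
    public static void main(String[] args) throws Exception {
        // Placeholders: your Microsoft Entra native application ID and account FQDN.
        String nativeAppId = "FILL-IN-HERE";
        String accountFQDN = "FILL-IN-HERE.azuredatalakestore.net";

        // Obtain a token interactively using the device-code flow.
        AccessTokenProvider provider = new DeviceCodeTokenProvider(nativeAppId);

        // Hand the provider to the Data Lake Storage Gen1 client.
        ADLStoreClient client = ADLStoreClient.createClient(accountFQDN, provider);

        // Simple smoke test: read metadata for the root directory.
        DirectoryEntry root = client.getDirectoryEntry("/");
        System.out.println("Connected. Root entry type: " + root.type);
    }
}
```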
data-lake-store | Data Lake Store End User Authenticate Net Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-net-sdk.md | - Title: End-user authentication - .NET with Data Lake Storage Gen1 - Azure -description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID with .NET SDK ---- Previously updated : 09/22/2022----# End-user authentication with Azure Data Lake Storage Gen1 using .NET SDK -> [!div class="op_single_selector"] -> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md) -> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md) -> * [Using Python](data-lake-store-end-user-authenticate-python.md) -> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md) -> -> --In this article, you learn about how to use the .NET SDK to do end-user authentication with Azure Data Lake Storage Gen1. For service-to-service authentication with Data Lake Storage Gen1 using .NET SDK, see [Service-to-service authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md). --## Prerequisites -* **Visual Studio 2013 or above**. The instructions below use Visual Studio 2019. --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md). --## Create a .NET application -1. In Visual Studio, select the **File** menu, **New**, and then **Project**. -2. Choose **Console App (.NET Framework)**, and then select **Next**. -3. In **Project name**, enter `CreateADLApplication`, and then select **Create**. --4. Add the NuGet packages to your project. -- 1. Right-click the project name in the Solution Explorer and click **Manage NuGet Packages**. - 2. In the **NuGet Package Manager** tab, make sure that **Package source** is set to **nuget.org** and that **Include prerelease** check box is selected. - 3. Search for and install the following NuGet packages: -- * `Microsoft.Azure.Management.DataLake.Store` - This tutorial uses v2.1.3-preview. - * `Microsoft.Rest.ClientRuntime.Azure.Authentication` - This tutorial uses v2.2.12. -- ![Add a NuGet source](./media/data-lake-store-get-started-net-sdk/data-lake-store-install-nuget-package.png "Create a new Azure Data Lake account") - 4. Close the **NuGet Package Manager**. --5. Open **Program.cs** -6. Replace the using statements with the following lines: -- ```csharp - using System; - using System.IO; - using System.Linq; - using System.Text; - using System.Threading; - using System.Collections.Generic; - - using Microsoft.Rest; - using Microsoft.Rest.Azure.Authentication; - using Microsoft.Azure.Management.DataLake.Store; - using Microsoft.Azure.Management.DataLake.Store.Models; - using Microsoft.IdentityModel.Clients.ActiveDirectory; - ``` --## End-user authentication -Add this snippet in your .NET client application. Replace the placeholder values with the values retrieved from a Microsoft Entra native application (listed as prerequisite). This snippet lets you authenticate your application **interactively** with Data Lake Storage Gen1, which means you are prompted to enter your Azure credentials. 
--For ease of use, the following snippet uses default values for client ID and redirect URI that are valid for any Azure subscription. In the following snippet, you only need to provide the value for your tenant ID. You can retrieve the tenant ID using the instructions provided at [Get the tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application). - -- Replace the Main() function with the following code:-- ```csharp - private static void Main(string[] args) - { - //User login via interactive popup - string TENANT = "<AAD-directory-domain>"; - string CLIENTID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"; - System.Uri ARM_TOKEN_AUDIENCE = new System.Uri(@"https://management.core.windows.net/"); - System.Uri ADL_TOKEN_AUDIENCE = new System.Uri(@"https://datalake.azure.net/"); - string MY_DOCUMENTS = System.Environment.GetFolderPath(System.Environment.SpecialFolder.MyDocuments); - string TOKEN_CACHE_PATH = System.IO.Path.Combine(MY_DOCUMENTS, "my.tokencache"); - var tokenCache = GetTokenCache(TOKEN_CACHE_PATH); - var armCreds = GetCreds_User_Popup(TENANT, ARM_TOKEN_AUDIENCE, CLIENTID, tokenCache); - var adlCreds = GetCreds_User_Popup(TENANT, ADL_TOKEN_AUDIENCE, CLIENTID, tokenCache); - } - ``` --A couple of things to know about the preceding snippet: --* The preceding snippet uses the helper functions `GetTokenCache` and `GetCreds_User_Popup`. The code for these helper functions is available [here on GitHub](https://github.com/Azure-Samples/data-lake-analytics-dotnet-auth-options#gettokencache). -* To help you complete the tutorial faster, the snippet uses a native application client ID that is available by default for all Azure subscriptions. So, you can **use this snippet as-is in your application**. -* However, if you do want to use your own Microsoft Entra domain and application client ID, you must create a Microsoft Entra native application and then use the Microsoft Entra tenant ID, client ID, and redirect URI for the application you created. See [Create an Active Directory Application for end-user authentication with Data Lake Storage Gen1](data-lake-store-end-user-authenticate-using-active-directory.md) for instructions. -- -## Next steps -In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using the .NET SDK. You can now look at the following articles that talk about how to use the .NET SDK to work with Azure Data Lake Storage Gen1. --* [Account management operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md) -* [Data operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-data-operations-net-sdk.md) |
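For completeness, here is a rough sketch of where `armCreds` and `adlCreds` from the snippet above typically go next. This is illustrative only; the client types come from the `Microsoft.Azure.Management.DataLake.Store` package installed earlier, the subscription ID is a placeholder, and details can vary by SDK version.

```csharp
// Sketch only: hand the credentials from Main() to the two management clients.
private static void CreateClients(ServiceClientCredentials armCreds, ServiceClientCredentials adlCreds)
{
    // Account management operations (create, list, delete accounts) use the ARM token.
    var accountClient = new DataLakeStoreAccountManagementClient(armCreds)
    {
        SubscriptionId = "FILL-IN-HERE"
    };

    // Filesystem operations (folders, files, ACLs) use the Data Lake token.
    var fileSystemClient = new DataLakeStoreFileSystemManagementClient(adlCreds);

    Console.WriteLine("Clients created for account management and filesystem operations.");
}
```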
data-lake-store | Data Lake Store End User Authenticate Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-python.md | - Title: End-user authentication - Python with Data Lake Storage Gen1 - Azure -description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID with Python ---- Previously updated : 05/29/2018----# End-user authentication with Azure Data Lake Storage Gen1 using Python -> [!div class="op_single_selector"] -> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md) -> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md) -> * [Using Python](data-lake-store-end-user-authenticate-python.md) -> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md) -> -> --In this article, you learn about how to use the Python SDK to do end-user authentication with Azure Data Lake Storage Gen1. End-user authentication can further be split into two categories: --* End-user authentication without multi-factor authentication -* End-user authentication with multi-factor authentication --Both these options are discussed in this article. For service-to-service authentication with Data Lake Storage Gen1 using Python, see [Service-to-service authentication with Data Lake Storage Gen1 using Python](data-lake-store-service-to-service-authenticate-python.md). --## Prerequisites --* **Python**. You can download Python from [here](https://www.python.org/downloads/). This article uses Python 3.6.2. --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md). --## Install the modules --To work with Data Lake Storage Gen1 using Python, you need to install three modules. --* The `azure-mgmt-resource` module, which includes Azure modules for Active Directory, etc. -* The `azure-mgmt-datalake-store` module, which includes the Azure Data Lake Storage Gen1 account management operations. For more information on this module, see [Azure Data Lake Storage Gen1 Management module reference](/python/api/azure-mgmt-datalake-store/). -* The `azure-datalake-store` module, which includes the Azure Data Lake Storage Gen1 filesystem operations. For more information on this module, see [azure-datalake-store Filesystem module reference](/python/api/azure-datalake-store/azure.datalake.store.core/). --Use the following commands to install the modules. --```console -pip install azure-mgmt-resource -pip install azure-mgmt-datalake-store -pip install azure-datalake-store -``` --## Create a new Python application --1. In the IDE of your choice, create a new Python application, for example, `mysample.py`. --2. 
Add the following snippet to import the required modules -- ```python - ## Use this for Azure AD authentication - from msrestazure.azure_active_directory import AADTokenCredentials -- ## Required for Azure Data Lake Storage Gen1 account management - from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient - from azure.mgmt.datalake.store.models import DataLakeStoreAccount -- ## Required for Azure Data Lake Storage Gen1 filesystem management - from azure.datalake.store import core, lib, multithread -- # Common Azure imports - import adal - from azure.mgmt.resource.resources import ResourceManagementClient - from azure.mgmt.resource.resources.models import ResourceGroup -- ## Use these as needed for your application - import logging, pprint, uuid, time - ``` --3. Save changes to `mysample.py`. --## End-user authentication with multi-factor authentication --### For account management --Use the following snippet to authenticate with Microsoft Entra ID for account management operations on a Data Lake Storage Gen1 account. The following snippet can be used to authenticate your application using multi-factor authentication. Provide the values below for an existing Microsoft Entra ID **native** application. --```python -authority_host_url = "https://login.microsoftonline.com" -tenant = "FILL-IN-HERE" -authority_url = authority_host_url + '/' + tenant -client_id = 'FILL-IN-HERE' -redirect = 'urn:ietf:wg:oauth:2.0:oob' -RESOURCE = 'https://management.core.windows.net/' --context = adal.AuthenticationContext(authority_url) -code = context.acquire_user_code(RESOURCE, client_id) -print(code['message']) -mgmt_token = context.acquire_token_with_device_code(RESOURCE, code, client_id) -armCreds = AADTokenCredentials(mgmt_token, client_id, resource = RESOURCE) -``` --### For filesystem operations --Use this to authenticate with Microsoft Entra ID for filesystem operations on a Data Lake Storage Gen1 account. The following snippet can be used to authenticate your application using multi-factor authentication. Provide the values below for an existing Microsoft Entra ID **native** application. --```console -adlCreds = lib.auth(tenant_id='FILL-IN-HERE', resource = 'https://datalake.azure.net/') -``` --## End-user authentication without multi-factor authentication --This is deprecated. For more information, see [Azure Authentication using Python SDK](/azure/developer/python/sdk/authentication-overview). --## Next steps -In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using Python. You can now look at the following articles that talk about how to use Python to work with Azure Data Lake Storage Gen1. --* [Account management operations on Data Lake Storage Gen1 using Python](data-lake-store-get-started-python.md) -* [Data operations on Data Lake Storage Gen1 using Python](data-lake-store-data-operations-python.md) |
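For orientation, here is a minimal sketch of how the credentials above are typically passed to the clients imported at the start of the article (the subscription ID and account name are placeholders):

```python
# Sketch only: hand the credentials obtained above to the management and
# filesystem clients imported earlier. The values below are placeholders.
subscription_id = 'FILL-IN-HERE'
adls_account_name = 'FILL-IN-HERE'

# Account management operations use armCreds from the MFA snippet above.
adls_account_client = DataLakeStoreAccountManagementClient(armCreds, subscription_id)

# Filesystem operations use adlCreds from the filesystem snippet above.
adls_fs_client = core.AzureDLFileSystem(adlCreds, store_name=adls_account_name)
print(adls_fs_client.ls('/'))
```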
data-lake-store | Data Lake Store End User Authenticate Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-rest-api.md | - Title: End-user authentication - REST with Data Lake Storage Gen1 - Azure -description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID using REST API ---- Previously updated : 05/29/2018----# End-user authentication with Azure Data Lake Storage Gen1 using REST API -> [!div class="op_single_selector"] -> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md) -> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md) -> * [Using Python](data-lake-store-end-user-authenticate-python.md) -> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md) -> -> --In this article, you learn about how to use the REST API to do end-user authentication with Azure Data Lake Storage Gen1. For service-to-service authentication with Data Lake Storage Gen1 using REST API, see [Service-to-service authentication with Data Lake Storage Gen1 using REST API](data-lake-store-service-to-service-authenticate-rest-api.md). --## Prerequisites --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md). --* **[cURL](https://curl.haxx.se/)**. This article uses cURL to demonstrate how to make REST API calls against a Data Lake Storage Gen1 account. --## End-user authentication -End-user authentication is the recommended approach if you want a user to log in to your application using Microsoft Entra ID. Your application is able to access Azure resources with the same level of access as the logged-in user. The user needs to provide their credentials periodically in order for your application to maintain access. --The result of having the end-user login is that your application is given an access token and a refresh token. The access token gets attached to each request made to Data Lake Storage Gen1 or Data Lake Analytics, and it is valid for one hour by default. The refresh token can be used to obtain a new access token, and it is valid for up to two weeks by default, if used regularly. You can use two different approaches for end-user login. --In this scenario, the application prompts the user to log in and all the operations are performed in the context of the user. Perform the following steps: --1. Through your application, redirect the user to the following URL: -- `https://login.microsoftonline.com/<TENANT-ID>/oauth2/authorize?client_id=<APPLICATION-ID>&response_type=code&redirect_uri=<REDIRECT-URI>` -- > [!NOTE] - > \<REDIRECT-URI> needs to be encoded for use in a URL. So, for https://localhost, use `https%3A%2F%2Flocalhost`) -- For the purpose of this tutorial, you can replace the placeholder values in the URL above and paste it in a web browser's address bar. You will be redirected to authenticate using your Azure login. Once you successfully log in, the response is displayed in the browser's address bar. The response will be in the following format: -- `http://localhost/?code=<AUTHORIZATION-CODE>&session_state=<GUID>` --2. Capture the authorization code from the response. 
For this tutorial, you can copy the authorization code from the address bar of the web browser and pass it in the POST request to the token endpoint, as shown in the following snippet: -- ```console - curl -X POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token \ - -F redirect_uri=<REDIRECT-URI> \ - -F grant_type=authorization_code \ - -F resource=https://management.core.windows.net/ \ - -F client_id=<APPLICATION-ID> \ - -F code=<AUTHORIZATION-CODE> - ``` -- > [!NOTE] - > In this case, the \<REDIRECT-URI> need not be encoded. - > - > --3. The response is a JSON object that contains an access token (for example, `"access_token": "<ACCESS_TOKEN>"`) and a refresh token (for example, `"refresh_token": "<REFRESH_TOKEN>"`). Your application uses the access token when accessing Azure Data Lake Storage Gen1 and the refresh token to get another access token when an access token expires. -- ```json - {"token_type":"Bearer","scope":"user_impersonation","expires_in":"3599","expires_on":"1461865782","not_before": "1461861882","resource":"https://management.core.windows.net/","access_token":"<REDACTED>","refresh_token":"<REDACTED>","id_token":"<REDACTED>"} - ``` --4. When the access token expires, you can request a new access token using the refresh token, as shown in the following snippet: -- ```console - curl -X POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token \ - -F grant_type=refresh_token \ - -F resource=https://management.core.windows.net/ \ - -F client_id=<APPLICATION-ID> \ - -F refresh_token=<REFRESH-TOKEN> - ``` --For more information on interactive user authentication, see [Authorization code grant flow](/previous-versions/azure/dn645542(v=azure.100)). --## Next steps -In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using the REST API. You can now look at the following articles that talk about how to use the REST API to work with Azure Data Lake Storage Gen1. --* [Account management operations on Data Lake Storage Gen1 using REST API](data-lake-store-get-started-rest-api.md) -* [Data operations on Data Lake Storage Gen1 using REST API](data-lake-store-data-operations-rest-api.md) |
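Note that the tokens above were requested for the Azure Resource Manager resource (`https://management.core.windows.net/`), which is what account management REST calls expect. For filesystem (WebHDFS) calls you would request the token with `resource=https://datalake.azure.net/` instead, as the .NET and Python articles in this series do, and then call the account endpoint directly. A hedged sketch (the account name is a placeholder):

```console
curl -i -X GET -H "Authorization: Bearer <ACCESS-TOKEN>" \
  'https://<data_lake_storage_gen1_account_name>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS'
```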
data-lake-store | Data Lake Store End User Authenticate Using Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-using-active-directory.md | - Title: End-user authentication - Data Lake Storage Gen1 with Microsoft Entra ID -description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID ---- Previously updated : 05/29/2018----# End-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID -> [!div class="op_single_selector"] -> * [End-user authentication](data-lake-store-end-user-authenticate-using-active-directory.md) -> * [Service-to-service authentication](data-lake-store-service-to-service-authenticate-using-active-directory.md) -> -> --Azure Data Lake Storage Gen1 uses Microsoft Entra ID for authentication. Before authoring an application that works with Data Lake Storage Gen1 or Azure Data Lake Analytics, you must decide how to authenticate your application with Microsoft Entra ID. The two main options available are: --* End-user authentication (this article) -* Service-to-service authentication (pick this option from the drop-down above) --Both these options result in your application being provided with an OAuth 2.0 token, which gets attached to each request made to Data Lake Storage Gen1 or Azure Data Lake Analytics. --This article talks about how to create an **Microsoft Entra native application for end-user authentication**. For instructions on Microsoft Entra application configuration for service-to-service authentication, see [Service-to-service authentication with Data Lake Storage Gen1 using Microsoft Entra ID](./data-lake-store-service-to-service-authenticate-using-active-directory.md). --## Prerequisites -* An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* Your subscription ID. You can retrieve it from the Azure portal. For example, it's available from the Data Lake Storage Gen1 account blade. -- ![Get subscription ID](./media/data-lake-store-end-user-authenticate-using-active-directory/get-subscription-id.png) --* Your Microsoft Entra domain name. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. From the screenshot below, the domain name is **contoso.onmicrosoft.com**, and the GUID within brackets is the tenant ID. -- ![Get Microsoft Entra domain](./media/data-lake-store-end-user-authenticate-using-active-directory/get-aad-domain.png) --* Your Azure tenant ID. For instructions on how to retrieve the tenant ID, see [Get the tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application). --## End-user authentication -This authentication mechanism is the recommended approach if you want an end user to sign in to your application via Microsoft Entra ID. Your application is then able to access Azure resources with the same level of access as the end user that logged in. Your end user needs to provide their credentials periodically in order for your application to maintain access. --The result of having the end-user sign-in is that your application is given an access token and a refresh token. The access token gets attached to each request made to Data Lake Storage Gen1 or Data Lake Analytics, and it's valid for one hour by default. The refresh token can be used to obtain a new access token, and it's valid for up to two weeks by default. 
You can use two different approaches for end-user sign-in. --### Using the OAuth 2.0 pop-up -Your application can trigger an OAuth 2.0 authorization pop-up, in which the end user can enter their credentials. This pop-up also works with the Microsoft Entra Two-factor Authentication (2FA) process, if necessary. --> [!NOTE] -> This method is not yet supported in the Azure AD Authentication Library (ADAL) for Python or Java. -> -> --### Directly passing in user credentials -Your application can directly provide user credentials to Microsoft Entra ID. This method only works with organizational ID user accounts; it isn't compatible with personal / “live ID” user accounts, including the accounts ending in @outlook.com or @live.com. Furthermore, this method isn't compatible with user accounts that require Microsoft Entra Two-factor Authentication (2FA). --### What do I need for this approach? -* Microsoft Entra domain name. This requirement is already listed in the prerequisite of this article. -* Microsoft Entra tenant ID. This requirement is already listed in the prerequisite of this article. -* Microsoft Entra ID **native application** -* Application ID for the Microsoft Entra native application -* Redirect URI for the Microsoft Entra native application -* Set delegated permissions ---## Step 1: Create an Active Directory native application --Create and configure a Microsoft Entra native application for end-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID. For instructions, see [Create a Microsoft Entra application](../active-directory/develop/howto-create-service-principal-portal.md). --While following the instructions in the link, make sure you select **Native** for application type, as shown in the following screenshot: --![Create web app](./media/data-lake-store-end-user-authenticate-using-active-directory/azure-active-directory-create-native-app.png "Create native app") --## Step 2: Get application ID and redirect URI --See [Get the application ID](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application) to retrieve the application ID. --To retrieve the redirect URI, do the following steps. --1. From the Azure portal, select **Microsoft Entra ID**, select **App registrations**, and then find and select the Microsoft Entra native application that you created. --2. From the **Settings** blade for the application, select **Redirect URIs**. -- ![Get Redirect URI](./media/data-lake-store-end-user-authenticate-using-active-directory/azure-active-directory-redirect-uri.png) --3. Copy the value displayed. ---## Step 3: Set permissions --1. From the Azure portal, select **Microsoft Entra ID**, select **App registrations**, and then find and select the Microsoft Entra native application that you created. --2. From the **Settings** blade for the application, select **Required permissions**, and then select **Add**. -- ![Screenshot of the Settings blade with the Redirect U R I option called out and the Redirect U R I blade with the actual U R I called out.](./media/data-lake-store-end-user-authenticate-using-active-directory/aad-end-user-auth-set-permission-1.png) --3. In the **Add API Access** blade, select **Select an API**, select **Azure Data Lake**, and then select **Select**. 
-- ![Screenshot of the Add API access blade with the Select an API option called out and the Select an API blade with the Azure Data Lake option and the Select option called out.](./media/data-lake-store-end-user-authenticate-using-active-directory/aad-end-user-auth-set-permission-2.png) --4. In the **Add API Access** blade, select **Select permissions**, select the check box to give **Full access to Data Lake Store**, and then select **Select**. -- ![Screenshot of the Add API access blade with the Select permissions option called out and the Enable Access blade with the Have full access to the Azure Data Lake service option and the Select option called out.](./media/data-lake-store-end-user-authenticate-using-active-directory/aad-end-user-auth-set-permission-3.png) -- Select **Done**. --5. Repeat the last two steps to grant permissions for **Windows Azure Service Management API** as well. --## Next steps -In this article, you created a Microsoft Entra native application and gathered the information you need in your client applications that you author using .NET SDK, Java SDK, REST API, etc. You can now proceed to the following articles that talk about how to use the Microsoft Entra web application to first authenticate with Data Lake Storage Gen1 and then perform other operations on the store. --* [End-user-authentication with Data Lake Storage Gen1 using Java SDK](data-lake-store-end-user-authenticate-java-sdk.md) -* [End-user authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md) -* [End-user authentication with Data Lake Storage Gen1 using Python](data-lake-store-end-user-authenticate-python.md) -* [End-user authentication with Data Lake Storage Gen1 using REST API](data-lake-store-end-user-authenticate-rest-api.md) |
data-lake-store | Data Lake Store Get Started Cli 2.0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-cli-2.0.md | - Title: Manage Azure Data Lake Storage Gen1 account - Azure CLI -description: Use the Azure CLI to create a Data Lake Storage Gen1 account and perform basic operations. ----- Previously updated : 06/27/2018---# Get started with Azure Data Lake Storage Gen1 using the Azure CLI ---> [!div class="op_single_selector"] -> * [Portal](data-lake-store-get-started-portal.md) -> * [PowerShell](data-lake-store-get-started-powershell.md) -> * [Azure CLI](data-lake-store-get-started-cli-2.0.md) -> -> --Learn how to use the Azure CLI to create an Azure Data Lake Storage Gen1 account and perform basic operations such as create folders, upload and download data files, delete your account, etc. For more information about Data Lake Storage Gen1, see [Overview of Data Lake Storage Gen1](data-lake-store-overview.md). --The Azure CLI is Azure's command-line experience for managing Azure resources. It can be used on macOS, Linux, and Windows. For more information, see [Overview of Azure CLI](/cli/azure). You can also look at the [Azure Data Lake Storage Gen1 CLI reference](/cli/azure/dls) for a complete list of commands and syntax. ---## Prerequisites -Before you begin this article, you must have the following: --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --* **Azure CLI** - See [Install Azure CLI](/cli/azure/install-azure-cli) for instructions. --## Authentication --This article uses a simpler authentication approach with Data Lake Storage Gen1 where you log in as an end-user user. The access level to the Data Lake Storage Gen1 account and file system is then governed by the access level of the logged in user. However, there are other approaches as well to authenticate with Data Lake Storage Gen1, which are **end-user authentication** or **service-to-service authentication**. For instructions and more information on how to authenticate, see [End-user authentication](data-lake-store-end-user-authenticate-using-active-directory.md) or [Service-to-service authentication](./data-lake-store-service-to-service-authenticate-using-active-directory.md). ---## Log in to your Azure subscription --1. Log into your Azure subscription. -- ```azurecli - az login - ``` -- You get a code to use in the next step. Use a web browser to open the page https://aka.ms/devicelogin and enter the code to authenticate. You are prompted to log in using your credentials. --2. Once you log in, the window lists all the Azure subscriptions that are associated with your account. Use the following command to use a specific subscription. - - ```azurecli - az account set --subscription <subscription id> - ``` --## Create an Azure Data Lake Storage Gen1 account --1. Create a new resource group. In the following command, provide the parameter values you want to use. If the location name contains spaces, put it in quotes. For example "East US 2". - - ```azurecli - az group create --location "East US 2" --name myresourcegroup - ``` --2. Create the Data Lake Storage Gen1 account. - - ```azurecli - az dls account create --account mydatalakestoragegen1 --resource-group myresourcegroup - ``` --## Create folders in a Data Lake Storage Gen1 account --You can create folders under your Azure Data Lake Storage Gen1 account to manage and store data. 
Use the following command to create a folder called **mynewfolder** at the root of the Data Lake Storage Gen1 account. --```azurecli -az dls fs create --account mydatalakestoragegen1 --path /mynewfolder --folder -``` --> [!NOTE] -> The `--folder` parameter ensures that the command creates a folder. If this parameter is not present, the command creates an empty file called mynewfolder at the root of the Data Lake Storage Gen1 account. -> -> --## Upload data to a Data Lake Storage Gen1 account --You can upload data to Data Lake Storage Gen1 directly at the root level or to a folder that you created within the account. The snippets below demonstrate how to upload some sample data to the folder (**mynewfolder**) you created in the previous section. --If you are looking for some sample data to upload, you can get the **Ambulance Data** folder from the [Azure Data Lake Git Repository](https://github.com/MicrosoftBigData/usql/tree/master/Examples/Samples/Data/AmbulanceData). Download the file and store it in a local directory on your computer, such as C:\sampledata\. --```azurecli -az dls fs upload --account mydatalakestoragegen1 --source-path "C:\SampleData\AmbulanceData\vehicle1_09142014.csv" --destination-path "/mynewfolder/vehicle1_09142014.csv" -``` --> [!NOTE] -> For the destination, you must specify the complete path including the file name. -> -> ---## List files in a Data Lake Storage Gen1 account --Use the following command to list the files in a Data Lake Storage Gen1 account. --```azurecli -az dls fs list --account mydatalakestoragegen1 --path /mynewfolder -``` --The output of this should be similar to the following: --```json -[ - { - "accessTime": 1491323529542, - "aclBit": false, - "blockSize": 268435456, - "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20", - "length": 1589881, - "modificationTime": 1491323531638, - "msExpirationTime": 0, - "name": "mynewfolder/vehicle1_09142014.csv", - "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20", - "pathSuffix": "vehicle1_09142014.csv", - "permission": "770", - "replication": 1, - "type": "FILE" - } -] -``` --## Rename, download, and delete data from a Data Lake Storage Gen1 account --* **To rename a file**, use the following command: - - ```azurecli - az dls fs move --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014.csv --destination-path /mynewfolder/vehicle1_09142014_copy.csv - ``` --* **To download a file**, use the following command. Make sure the destination path you specify already exists. - - ```azurecli - az dls fs download --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014_copy.csv --destination-path "C:\mysampledata\vehicle1_09142014_copy.csv" - ``` -- > [!NOTE] - > The command creates the destination folder if it does not exist. - > - > --* **To delete a file**, use the following command: - - ```azurecli - az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014_copy.csv - ``` -- If you want to delete the folder **mynewfolder** and the file **vehicle1_09142014_copy.csv** together in one command, use the --recurse parameter -- ```azurecli - az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder --recurse - ``` --## Work with permissions and ACLs for a Data Lake Storage Gen1 account --In this section you learn about how to manage ACLs and permissions using the Azure CLI. 
For a detailed discussion on how ACLs are implemented in Azure Data Lake Storage Gen1, see [Access control in Azure Data Lake Storage Gen1](data-lake-store-access-control.md). --* **To update the owner of a file/folder**, use the following command: -- ```azurecli - az dls fs access set-owner --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --group 80a3ed5f-959e-4696-ba3c-d3c8b2db6766 --owner 6361e05d-c381-4275-a932-5535806bb323 - ``` --* **To update the permissions for a file/folder**, use the following command: -- ```azurecli - az dls fs access set-permission --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --permission 777 - ``` - -* **To get the ACLs for a given path**, use the following command: -- ```azurecli - az dls fs access show --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv - ``` -- The output should be similar to the following: -- ```output - { - "entries": [ - "user::rwx", - "group::rwx", - "other::" - ], - "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20", - "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20", - "permission": "770", - "stickyBit": false - } - ``` --* **To set an entry for an ACL**, use the following command: -- ```azurecli - az dls fs access set-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323:-w- - ``` --* **To remove an entry for an ACL**, use the following command: -- ```azurecli - az dls fs access remove-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323 - ``` --* **To remove an entire default ACL**, use the following command: -- ```azurecli - az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder --default-acl - ``` --* **To remove an entire non-default ACL**, use the following command: -- ```azurecli - az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder - ``` - -## Delete a Data Lake Storage Gen1 account -Use the following command to delete a Data Lake Storage Gen1 account. --```azurecli -az dls account delete --account mydatalakestoragegen1 -``` --When prompted, enter **Y** to delete the account. --## Next steps -* [Use Azure Data Lake Storage Gen1 for big data requirements](data-lake-store-data-scenarios.md) -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) -* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md) -* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md) |
data-lake-store | Data Lake Store Get Started Java Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-java-sdk.md | - Title: Java SDK - Filesystem operations on Data Lake Storage Gen1 - Azure -description: Use the Java SDK for Azure Data Lake Storage Gen1 to perform filesystem operations on Data Lake Storage Gen1 such as creating folders, and uploading and downloading data files. ---- Previously updated : 02/23/2022----# Filesystem operations on Azure Data Lake Storage Gen1 using Java SDK -> [!div class="op_single_selector"] -> * [.NET SDK](data-lake-store-data-operations-net-sdk.md) -> * [Java SDK](data-lake-store-get-started-java-sdk.md) -> * [REST API](data-lake-store-data-operations-rest-api.md) -> * [Python](data-lake-store-data-operations-python.md) -> -> --Learn how to use the Azure Data Lake Storage Gen1 Java SDK to perform basic operations such as create folders, upload and download data files, etc. For more information about Data Lake Storage Gen1, see [Azure Data Lake Storage Gen1](data-lake-store-overview.md). --You can access the Java SDK API docs for Data Lake Storage Gen1 at [Azure Data Lake Storage Gen1 Java API docs](https://azure.github.io/azure-data-lake-store-java/javadoc/). --## Prerequisites -* Java Development Kit (JDK 7 or higher, using Java version 1.7 or higher) -* Data Lake Storage Gen1 account. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). -* [Maven](https://maven.apache.org/install.html). This tutorial uses Maven for build and project dependencies. Although it is possible to build without using a build system like Maven or Gradle, these systems make is much easier to manage dependencies. -* (Optional) And IDE like [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) or [Eclipse](https://www.eclipse.org/downloads/) or similar. --## Create a Java application -The code sample available [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/) walks you through the process of creating files in the store, concatenating files, downloading a file, and deleting some files in the store. This section of the article walks you through the main parts of the code. --1. Create a Maven project using [mvn archetype](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html) from the command line or using an IDE. For instructions on how to create a Java project using IntelliJ, see [here](https://www.jetbrains.com/help/idea/2016.1/creating-and-running-your-first-java-application.html). For instructions on how to create a project using Eclipse, see [here](https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2FgettingStarted%2Fqs-3.htm). --2. Add the following dependencies to your Maven **pom.xml** file. Add the following snippet before the **\</project>** tag: - - ```xml - <dependencies> - <dependency> - <groupId>com.microsoft.azure</groupId> - <artifactId>azure-data-lake-store-sdk</artifactId> - <version>2.1.5</version> - </dependency> - <dependency> - <groupId>org.slf4j</groupId> - <artifactId>slf4j-nop</artifactId> - <version>1.7.21</version> - </dependency> - </dependencies> - ``` - - The first dependency is to use the Data Lake Storage Gen1 SDK (`azure-data-lake-store-sdk`) from the maven repository. The second dependency is to specify the logging framework (`slf4j-nop`) to use for this application. 
The Data Lake Storage Gen1 SDK uses [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like Log4j, Java logging, Logback, etc., or no logging. For this example, we disable logging, hence we use the **slf4j-nop** binding. To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep). --3. Add the following import statements to your application. -- ```java - import com.microsoft.azure.datalake.store.ADLException; - import com.microsoft.azure.datalake.store.ADLStoreClient; - import com.microsoft.azure.datalake.store.DirectoryEntry; - import com.microsoft.azure.datalake.store.IfExists; - import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider; - import com.microsoft.azure.datalake.store.oauth2.ClientCredsTokenProvider; -- import java.io.*; - import java.util.Arrays; - import java.util.List; - ``` --## Authentication --* For end-user authentication for your application, see [End-user-authentication with Data Lake Storage Gen1 using Java](data-lake-store-end-user-authenticate-java-sdk.md). -* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using Java](data-lake-store-service-to-service-authenticate-java.md). --## Create a Data Lake Storage Gen1 client -Creating an [ADLStoreClient](https://azure.github.io/azure-data-lake-store-java/javadoc/) object requires you to specify the Data Lake Storage Gen1 account name and the token provider you generated when you authenticated with Data Lake Storage Gen1 (see [Authentication](#authentication) section). The Data Lake Storage Gen1 account name needs to be a fully qualified domain name. For example, replace **FILL-IN-HERE** with something like **mydatalakestoragegen1.azuredatalakestore.net**. --```java -private static String accountFQDN = "FILL-IN-HERE"; // full account FQDN, not just the account name -ADLStoreClient client = ADLStoreClient.createClient(accountFQDN, provider); -``` --The code snippets in the following sections contain examples of some common filesystem operations. You can look at the full [Data Lake Storage Gen1 Java SDK API docs](https://azure.github.io/azure-data-lake-store-java/javadoc/) of the **ADLStoreClient** object to see other operations. --## Create a directory --The following snippet creates a directory structure in the root of the Data Lake Storage Gen1 account you specified. --```java -// create directory -client.createDirectory("/a/b/w"); -System.out.println("Directory created."); -``` --## Create a file --The following snippet creates a file (c.txt) in the directory structure and writes some data to the file. --```java -// create file and write some content -String filename = "/a/b/c.txt"; -OutputStream stream = client.createFile(filename, IfExists.OVERWRITE ); -PrintStream out = new PrintStream(stream); -for (int i = 1; i <= 10; i++) { - out.println("This is line #" + i); - out.format("This is the same line (%d), but using formatted output. %n", i); -} -out.close(); -System.out.println("File created."); -``` --You can also create a file (d.txt) using byte arrays. 
--```java -// create file using byte arrays -stream = client.createFile("/a/b/d.txt", IfExists.OVERWRITE); -byte[] buf = getSampleContent(); -stream.write(buf); -stream.close(); -System.out.println("File created using byte array."); -``` --The definition for `getSampleContent` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/). --## Append to a file --The following snippet appends content to an existing file. --```java -// append to file -stream = client.getAppendStream(filename); -stream.write(getSampleContent()); -stream.close(); -System.out.println("File appended."); -``` --The definition for `getSampleContent` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/). --## Read a file --The following snippet reads content from a file in a Data Lake Storage Gen1 account. --```java -// Read File -InputStream in = client.getReadStream(filename); -BufferedReader reader = new BufferedReader(new InputStreamReader(in)); -String line; -while ( (line = reader.readLine()) != null) { - System.out.println(line); -} -reader.close(); -System.out.println(); -System.out.println("File contents read."); -``` --## Concatenate files --The following snippet concatenates two files in a Data Lake Storage Gen1 account. If successful, the concatenated file replaces the two existing files. --```java -// concatenate the two files into one -List<String> fileList = Arrays.asList("/a/b/c.txt", "/a/b/d.txt"); -client.concatenateFiles("/a/b/f.txt", fileList); -System.out.println("Two files concatenated into a new file."); -``` --## Rename a file --The following snippet renames a file in a Data Lake Storage Gen1 account. --```java -//rename the file -client.rename("/a/b/f.txt", "/a/b/g.txt"); -System.out.println("New file renamed."); -``` --## Get metadata for a file --The following snippet retrieves the metadata for a file in a Data Lake Storage Gen1 account. --```java -// get file metadata -DirectoryEntry ent = client.getDirectoryEntry(filename); -printDirectoryInfo(ent); -System.out.println("File metadata retrieved."); -``` --## Set permissions on a file --The following snippet sets permissions on the file that you created in the previous section. --```java -// set file permission -client.setPermission(filename, "744"); -System.out.println("File permission set."); -``` --## List directory contents --The following snippet lists the contents of a directory, recursively. --```java -// list directory contents -List<DirectoryEntry> list = client.enumerateDirectory("/a/b", 2000); -System.out.println("Directory listing for directory /a/b:"); -for (DirectoryEntry entry : list) { - printDirectoryInfo(entry); -} -System.out.println("Directory contents listed."); -``` --The definition for `printDirectoryInfo` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/). --## Delete files and folders --The following snippet deletes the specified files and folders in a Data Lake Storage Gen1 account, recursively. 
--```java -// delete directory along with all the subdirectories and files in it -client.deleteRecursive("/a"); -System.out.println("All files and folders deleted recursively"); -promptEnterKey(); -``` --## Build and run the application -1. To run from within an IDE, locate and press the **Run** button. To run from Maven, use [exec:exec](https://www.mojohaus.org/exec-maven-plugin/exec-mojo.html). -2. To produce a standalone jar that you can run from the command line, build the jar with all dependencies included by using the [Maven assembly plugin](https://maven.apache.org/plugins/maven-assembly-plugin/usage.html). The pom.xml in the [example source code on GitHub](https://github.com/Azure-Samples/data-lake-store-java-upload-download-get-started/blob/master/pom.xml) has an example. A sample build-and-run command sequence is shown at the end of this article. --## Next steps -* [Explore JavaDoc for the Java SDK](https://azure.github.io/azure-data-lake-store-java/javadoc/) -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) |
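For reference, here is a minimal build-and-run sketch for the sample described above. It assumes the Maven assembly plugin in your pom.xml is bound to the `package` phase (if it isn't, invoke the goal explicitly with `mvn package assembly:single`), and the jar file name below is only a placeholder for whatever artifact your own build produces:

```bash
# Build the sample and produce a standalone jar that bundles all dependencies
mvn clean package

# Run the standalone jar; replace the file name with the artifact your pom.xml actually produces
java -jar target/data-lake-store-java-sample-jar-with-dependencies.jar
```

If the build succeeds, the program prints the progress messages shown in the snippets earlier in this article (for example, "Directory created." and "File created.").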
data-lake-store | Data Lake Store Get Started Net Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-net-sdk.md | - Title: Manage an Azure Data Lake Storage Gen1 account with .NET -description: Learn how to use the .NET SDK for Azure Data Lake Storage Gen1 account management operations. ---- Previously updated : 05/29/2018----# Account management operations on Azure Data Lake Storage Gen1 using .NET SDK -> [!div class="op_single_selector"] -> * [.NET SDK](data-lake-store-get-started-net-sdk.md) -> * [REST API](data-lake-store-get-started-rest-api.md) -> * [Python](data-lake-store-get-started-python.md) -> -> --In this article, you learn how to perform account management operations on Azure Data Lake Storage Gen1 using .NET SDK. Account management operations include creating a Data Lake Storage Gen1 account, listing the accounts in an Azure subscription, deleting the accounts, etc. --For instructions on how to perform data management operations on Data Lake Storage Gen1 using .NET SDK, see [Filesystem operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-data-operations-net-sdk.md). --## Prerequisites -* **Visual Studio 2013 or above**. The instructions below use Visual Studio 2019. --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --## Create a .NET application -1. In Visual Studio, select the **File** menu, **New**, and then **Project**. -2. Choose **Console App (.NET Framework)**, and then select **Next**. -3. In **Project name**, enter `CreateADLApplication`, and then select **Create**. --4. Add the NuGet packages to your project. -- 1. Right-click the project name in the Solution Explorer and click **Manage NuGet Packages**. - 2. In the **NuGet Package Manager** tab, make sure that **Package source** is set to **nuget.org** and that **Include prerelease** check box is selected. - 3. Search for and install the following NuGet packages: -- * `Microsoft.Azure.Management.DataLake.Store` - This tutorial uses v2.1.3-preview. - * `Microsoft.Rest.ClientRuntime.Azure.Authentication` - This tutorial uses v2.2.12. -- ![Add a NuGet source](./media/data-lake-store-get-started-net-sdk/data-lake-store-install-nuget-package.png "Create a new Azure Data Lake account") - 4. Close the **NuGet Package Manager**. -5. Open **Program.cs**, delete the existing code, and then include the following statements to add references to namespaces. -- ```csharp - using System; - using System.IO; - using System.Linq; - using System.Text; - using System.Threading; - using System.Collections.Generic; - using System.Security.Cryptography.X509Certificates; // Required only if you are using an Azure AD application created with certificates - - using Microsoft.Rest; - using Microsoft.Rest.Azure.Authentication; - using Microsoft.Azure.Management.DataLake.Store; - using Microsoft.Azure.Management.DataLake.Store.Models; - using Microsoft.IdentityModel.Clients.ActiveDirectory; - ``` --6. Declare the variables and provide the values for placeholders. Also, make sure the local path and file name you provide exist on the computer. 
-- ```csharp - namespace SdkSample - { - class Program - { - private static DataLakeStoreAccountManagementClient _adlsClient; - - private static string _adlsAccountName; - private static string _resourceGroupName; - private static string _location; - private static string _subId; -- private static void Main(string[] args) - { - _adlsAccountName = "<DATA-LAKE-STORAGE-GEN1-NAME>"; - _resourceGroupName = "<RESOURCE-GROUP-NAME>"; - _location = "East US 2"; - _subId = "<SUBSCRIPTION-ID>"; - } - } - } - ``` --In the remaining sections of the article, you can see how to use the available .NET methods to perform operations such as authentication, account creation, listing, and deletion. --## Authentication --* For end-user authentication for your application, see [End-user authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md). -* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md). --## Create client object -The following snippet creates the Data Lake Storage Gen1 account client object, which is used to issue account management requests to the service, such as creating or deleting an account. The `armCreds` credentials object is the outcome of the authentication step in one of the linked articles. --```csharp -// Create client objects and set the subscription ID -_adlsClient = new DataLakeStoreAccountManagementClient(armCreds) { SubscriptionId = _subId }; -``` - -## Create a Data Lake Storage Gen1 account -The following snippet creates a Data Lake Storage Gen1 account in the Azure subscription you provided while creating the Data Lake Storage Gen1 account client object. --```csharp -// Create Data Lake Storage Gen1 account -var adlsParameters = new DataLakeStoreAccount(location: _location); -_adlsClient.Account.Create(_resourceGroupName, _adlsAccountName, adlsParameters); -``` --## List all Data Lake Storage Gen1 accounts within a subscription -Add the following method to your class definition. The following snippet lists all Data Lake Storage Gen1 accounts within a given Azure subscription. --```csharp -// List all Data Lake Storage Gen1 accounts within the subscription -public static List<DataLakeStoreAccountBasic> ListAdlStoreAccounts() -{ - var response = _adlsClient.Account.List(); - var accounts = new List<DataLakeStoreAccountBasic>(response); -- while (response.NextPageLink != null) - { - response = _adlsClient.Account.ListNext(response.NextPageLink); - accounts.AddRange(response); - } -- return accounts; -} -``` --## Delete a Data Lake Storage Gen1 account -The following snippet deletes the Data Lake Storage Gen1 account you created earlier. --```csharp -// Delete Data Lake Storage Gen1 account -_adlsClient.Account.Delete(_resourceGroupName, _adlsAccountName); -``` --## See also -* [Filesystem operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-data-operations-net-sdk.md) -* [Data Lake Storage Gen1 .NET SDK Reference](/dotnet/api/overview/azure/data-lake-store) --## Next steps -* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) |
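If you want to double-check the results of these .NET operations from a terminal, the same management operations are also exposed through the Azure CLI's `az dls account` command group. This is only a hedged cross-check, assuming the Azure CLI is installed and signed in to the same subscription; the account name is a placeholder:

```azurecli
# List the names of the Data Lake Storage Gen1 accounts in the current subscription
az dls account list --query "[].name" --output table

# Show the details of a single account
az dls account show --account mydatalakestoragegen1
```

The list output should match what the `ListAdlStoreAccounts()` method in the code above returns.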
data-lake-store | Data Lake Store Get Started Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-portal.md | - Title: Get started with Azure Data Lake Storage Gen1 - portal -description: Use the Azure portal to create a Data Lake Storage Gen1 account and perform basic operations in the account. ---- Previously updated : 06/27/2018----# Get started with Azure Data Lake Storage Gen1 using the Azure portal --> [!div class="op_single_selector"] -> * [Portal](data-lake-store-get-started-portal.md) -> * [PowerShell](data-lake-store-get-started-powershell.md) -> * [Azure CLI](data-lake-store-get-started-cli-2.0.md) -> -> ---Learn how to use the Azure portal to create a Data Lake Storage Gen1 account and perform basic operations such as creating folders, uploading and downloading data files, deleting your account, and so on. For more information, see [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md). --## Prerequisites --Before you begin this tutorial, you must have the following items: --* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/). --## Create a Data Lake Storage Gen1 account --1. Sign in to the [Azure portal](https://portal.azure.com). -2. Click **Create a resource > Storage > Data Lake Storage Gen1**. -3. In the **New Data Lake Storage Gen1** blade, provide the values as shown in the following screenshot: -- ![Create a new Data Lake Storage Gen1 account](./media/data-lake-store-get-started-portal/ADL.Create.New.Account.png "Create a new Data Lake Storage Gen1 account") -- * **Name**. Enter a unique name for the Data Lake Storage Gen1 account. - * **Subscription**. Select the subscription under which you want to create a new Data Lake Storage Gen1 account. - * **Resource Group**. Select an existing resource group, or select the **Create new** option to create one. A resource group is a container that holds related resources for an application. For more information, see [Resource Groups in Azure](../azure-resource-manager/management/overview.md#resource-groups). - * **Location**: Select a location where you want to create the Data Lake Storage Gen1 account. - * **Encryption Settings**. There are three options: -- * **Do not enable encryption**. - * **Use keys managed by Data Lake Storage Gen1**, if you want Data Lake Storage Gen1 to manage your encryption keys. - * **Use keys from your own Key Vault**. You can select an existing Azure Key Vault or create a new Key Vault. To use the keys from a Key Vault, you must assign permissions for the Data Lake Storage Gen1 account to access the Azure Key Vault. For the instructions, see [Assign permissions to Azure Key Vault](#assign-permissions-to-azure-key-vault). -- ![Screenshot of the New Data Lake Storage Gen 1 blade and the Encryption settings blade.](./media/data-lake-store-get-started-portal/adls-encryption-2.png "Data Lake Storage Gen1 encryption") -- Click **OK** in the **Encryption Settings** blade. -- For more information, see [Encryption of data in Azure Data Lake Storage Gen1](./data-lake-store-encryption.md). --4. Click **Create**. If you chose to pin the account to the dashboard, you are taken back to the dashboard and you can see the progress of your Data Lake Storage Gen1 account provisioning. Once the Data Lake Storage Gen1 account is provisioned, the account blade shows up. 
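If you prefer to script the account creation rather than click through the portal, a minimal Azure CLI sketch follows. The resource group, account name, and location are placeholders, and this assumes the `az dls` command group is available in your Azure CLI installation; the [Azure CLI](data-lake-store-get-started-cli-2.0.md) version of this article covers the CLI workflow in full:

```azurecli
# Create a resource group for the account (skip this if you already have one)
az group create --name myresourcegroup --location eastus2

# Create the Data Lake Storage Gen1 account in that resource group
az dls account create --account mydatalakestoragegen1 --resource-group myresourcegroup --location eastus2

# Verify that the account was provisioned
az dls account show --account mydatalakestoragegen1
```

An account created this way also shows up in the portal, so the remaining steps in this article apply to it unchanged.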
--## <a name="assign-permissions-to-azure-key-vault"></a>Assign permissions to Azure Key Vault --If you used keys from an Azure Key Vault to configure encryption on the Data Lake Storage Gen1 account, you must configure access between the Data Lake Storage Gen1 account and the Azure Key Vault account. Perform the following steps to do so. --1. If you used keys from the Azure Key Vault, the blade for the Data Lake Storage Gen1 account displays a warning at the top. Click the warning to open **Encryption**. -- ![Screenshot of the Data Lake Storage Gen1 account blade showing the warning that says, "Key vault permission configuration needed. Click here to setup.](./media/data-lake-store-get-started-portal/adls-encryption-3.png "Data Lake Storage Gen1 encryption") -2. The blade shows two options to configure access. -- ![Screenshot of the Encryption blade.](./media/data-lake-store-get-started-portal/adls-encryption-4.png "Data Lake Storage Gen1 encryption") -- * In the first option, click **Grant Permissions** to configure access. The first option is enabled only when the user that created the Data Lake Storage Gen1 account is also an admin for the Azure Key Vault. - * The other option is to run the PowerShell cmdlet displayed on the blade. You need to be the owner of the Azure Key Vault or have the ability to grant permissions on the Azure Key Vault. After you have run the cmdlet, come back to the blade and click **Enable** to configure access. --> [!NOTE] -> You can also create a Data Lake Storage Gen1 account using Azure Resource Manager templates. These templates are accessible from [Azure QuickStart Templates](https://azure.microsoft.com/resources/templates/?term=data+lake+store): -> * Without data encryption: [Deploy Azure Data Lake Storage Gen1 account with no data encryption](https://azure.microsoft.com/resources/templates/data-lake-store-no-encryption/). -> * With data encryption using Data Lake Storage Gen1: [Deploy Data Lake Storage Gen1 account with encryption(Data Lake)](https://azure.microsoft.com/resources/templates/data-lake-store-encryption-adls/ |