Updates from: 03/02/2024 02:10:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Captcha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-captcha.md
Previously updated : 01/17/2024 Last updated : 03/01/2024
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-Azure Active Directory B2C (Azure AD B2C) allows you to enable CAPTCHA prevent to automated attacks on your consumer-facing applications. Azure AD B2C's CAPTCHA supports both audio and visual CAPTCHA challenges. You can enable this security feature in both sign-up and sign-in flows for your local accounts. CAPTCHA isn't applicable for social identity providers' sign-in.
+Azure Active Directory B2C (Azure AD B2C) allows you to enable CAPTCHA to prevent automated attacks on your consumer-facing applications. Azure AD B2C's CAPTCHA supports both audio and visual CAPTCHA challenges. You can enable this security feature in both sign-up and sign-in flows for your local accounts. CAPTCHA isn't applicable for social identity providers' sign-in.
> [!NOTE]
> This feature is in public preview.
To enable CAPTCHA in MFA flow, you need to make an update in two technical profi
... </TechnicalProfile> ```-
-> [!NOTE]
-> - You can't add CAPTCHA to an MFA step in a sign-up only user flow.
-> - In an MFA flow, CAPTCHA is applicable where the MFA method you select is SMS or phone call, SMS only or Phone call only.
- ## Upload the custom policy files Use the steps in [Upload the policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy&branch=pr-en-us-260336#upload-the-policies) to upload your custom policy files.
Use the steps in [Upload the policies](tutorial-create-user-flows.md?pivots=b2c-
## Test the custom policy Use the steps in [Test the custom policy](tutorial-create-user-flows.md?pivots=b2c-custom-policy#test-the-custom-policy) to test and confirm that CAPTCHA is enabled for your chosen flow. You should be prompted to enter the characters you see or hear depending on the CAPTCHA type, visual or audio, you choose.+ ::: zone-end
+> [!NOTE]
+> - You can't add CAPTCHA to an MFA step in a sign-up only user flow.
+> - In an MFA flow, CAPTCHA applies when the MFA method you select is SMS or phone call, SMS only, or Phone call only.
+ ## Next steps - Learn how to [Define a CAPTCHA technical profile](captcha-technical-profile.md).
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
Previously updated : 02/14/2024 Last updated : 03/01/2024
When using custom domains, consider the following:
- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Microsoft Entra service limits and restrictions](/entra/identity/users/directory-service-limits-restrictions) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-front-door-classic-limits) for Azure Front Door. - Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).-- If you've multiple applications, migrate all oft them to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
+- If you have multiple applications, migrate all of them to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *&lt;tenant-name&gt;.b2clogin.com*. You need to block access to the default domain so that attackers can't use it to access your apps or run distributed denial-of-service (DDoS) attacks. [Submit a support ticket](find-help-open-support-ticket.md) to request blocking of access to the default domain. > [!WARNING]
active-directory-b2c Custom Policies Series Sign Up Or Sign In Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in-federation.md
Notice the claims transformations we defined in [step 3.2](#step-32define-cla
Just like in sign-in with a local account, you need to configure the [Microsoft Entra Technical Profiles](active-directory-technical-profile.md), which you use to connect to Microsoft Entra ID storage, to store or read a user's social account.
-1. In the `ContosoCustomPolicy.XML` file, locate the `AAD-UserUpdate` technical profile and then add a new technical profile by using the following code:
+1. In the `ContosoCustomPolicy.XML` file, locate the `AAD-UserRead` technical profile and then add a new technical profile by using the following code:
```xml <TechnicalProfile Id="AAD-UserWriteUsingAlternativeSecurityId">
active-directory-b2c Custom Policies Series Store User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-store-user.md
To configure a display control, use the following steps:
You can configure a Microsoft Entra ID technical profile to update a user account instead of attempting to create a new one. To do so, use the following code in the technical profile's `Metadata` collection to make it throw an error if the specified user account doesn't already exist. The *Operation* needs to be set to *Write*: ```xml
- <!--<Item Key="Operation">Write</Item>-->
+ <Item Key="Operation">Write</Item>
<Item Key="RaiseErrorIfClaimsPrincipalDoesNotExist">true</Item> ```
active-directory-b2c Implicit Flow Single Page Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md
-#Customer intent: As a developer building a single-page application (SPA) with a JavaScript framework, I want to implement OAuth 2.0 implicit flow for sign-in using Azure Active Directory B2C, so that I can securely authenticate users without server-to-server exchange and handle user flows like sign-up and profile management.
+#Customer intent: As a developer building a single-page application (SPA) with a JavaScript framework, I want to implement OAuth 2.0 implicit flow for sign-in using Azure AD B2C, so that I can securely authenticate users without server-to-server exchange and handle user flows like sign-up and profile management.
active-directory-b2c Phone Based Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md
description: Learn tips for securing phone-based multifactor authentication in y
- - Previously updated : 01/11/2024 Last updated : 03/01/2024
Take the following actions to help mitigate fraudulent sign-ups.
- [Enable the email one-time passcode feature (OTP)](phone-authentication-user-flows.md) for MFA (applies to both sign-up and sign-in flows). - [Configure a Conditional Access policy](conditional-access-user-flow.md) to block sign-ins based on location (applies to sign-in flows only, not sign-up flows).
- - Use API connectors to [integrate with an anti-bot solution like reCAPTCHA](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-captcha) (applies to sign-up flows).
+ - To prevent automated attacks on your consumer-facing apps, [enable CAPTCHA](add-captcha.md). Azure AD B2C's CAPTCHA supports both audio and visual CAPTCHA challenges, and applies to both sign-up and sign-in flows for your local accounts.
- Remove country codes that aren't relevant to your organization from the drop-down menu where the user verifies their phone number (this change will apply to future sign-ups):
ai-services How To Store User Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-store-user-preferences.md
This functionality can be used as an alternate means to storing user preferences
## Enable storing user preferences
-The Immersive Reader SDK [launchAsync](./reference.md#launchasync) `options` parameter contains the `-onPreferencesChanged` callback. This function is called anytime the user changes their preferences. The `value` parameter contains a string, which represents the user's current preferences. This string is then stored, for that user, by the host application.
+The Immersive Reader SDK [launchAsync](reference.md#function-launchasync) `options` parameter contains the `onPreferencesChanged` callback. This function is called anytime the user changes their preferences. The `value` parameter contains a string, which represents the user's current preferences. This string is then stored, for that user, by the host application.
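For orientation, the following is a minimal sketch of how a host application might wire this up. It assumes the SDK script include (which exposes a global `ImmersiveReader` object) and uses the browser's `localStorage` as the storage mechanism; the storage key and the way `token` and `subdomain` are obtained are illustrative choices, not part of the SDK.

```typescript
// Minimal sketch: persist Immersive Reader preferences per user in localStorage.
// Assumes the SDK <script> include, which exposes a global ImmersiveReader object.
declare const ImmersiveReader: any;

const PREFS_KEY = 'immersive-reader-preferences'; // storage key chosen for this sketch

async function launchWithStoredPreferences(token: string, subdomain: string): Promise<void> {
    const options = {
        // Pass back the exact string previously returned by onPreferencesChanged, if any.
        preferences: window.localStorage.getItem(PREFS_KEY) ?? undefined,
        // Called whenever the user changes a setting inside the Immersive Reader.
        onPreferencesChanged: (value: string) => {
            window.localStorage.setItem(PREFS_KEY, value);
        }
    };

    const content = { title: 'Sample', chunks: [{ content: 'Hello, world!' }] };
    await ImmersiveReader.launchAsync(token, subdomain, content, options);
}
```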
```typescript const options = {
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/reference.md
Title: "Immersive Reader SDK Reference"
+ Title: Immersive Reader SDK JavaScript reference
-description: The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
+description: Learn about the Immersive Reader JavaScript library that allows you to integrate Immersive Reader into your application.
#-+ Previously updated : 11/15/2021- Last updated : 02/28/2024+
-# Immersive Reader JavaScript SDK Reference (v1.4)
+# Immersive Reader JavaScript SDK reference (v1.4)
The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
-You may use `npm`, `yarn`, or an `HTML` `<script>` element to include the library of the latest stable build in your web application:
+You can use `npm`, `yarn`, or an HTML `<script>` element to include the library of the latest stable build in your web application:
```html <script type='text/javascript' src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.4.0.js'></script>
yarn add @microsoft/immersive-reader-sdk
## Functions
-The SDK exposes the functions:
+The SDK exposes these functions:
-- [`ImmersiveReader.launchAsync(token, subdomain, content, options)`](#launchasync)
+- [ImmersiveReader.launchAsync(token, subdomain, content, options)](#function-launchasync)
+- [ImmersiveReader.close()](#function-close)
+- [ImmersiveReader.renderButtons(options)](#function-renderbuttons)
-- [`ImmersiveReader.close()`](#close)
+### Function: `launchAsync`
-- [`ImmersiveReader.renderButtons(options)`](#renderbuttons)-
-<br>
-
-## launchAsync
-
-Launches the Immersive Reader within an `HTML` `iframe` element in your web application. The size of your content is limited to a maximum of 50 MB.
+`ImmersiveReader.launchAsync(token, subdomain, content, options)` launches the Immersive Reader within an HTML `iframe` element in your web application. The size of your content is limited to a maximum of 50 MB.
```typescript launchAsync(token: string, subdomain: string, content: Content, options?: Options): Promise<LaunchResponse>; ```
-#### launchAsync Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
-| `token` | string | The Microsoft Entra authentication token. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). |
-| `subdomain` | string | The custom subdomain of your Immersive Reader resource in Azure. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). |
-| `content` | [Content](#content) | An object containing the content to be shown in the Immersive Reader. |
-| `options` | [Options](#options) | Options for configuring certain behaviors of the Immersive Reader. Optional. |
+| token | string | The Microsoft Entra authentication token. To learn more, see [How to create an Immersive Reader resource](how-to-create-immersive-reader.md). |
+| subdomain | string | The custom subdomain of your [Immersive Reader resource](how-to-create-immersive-reader.md) in Azure. |
+| content | [Content](#content) | An object that contains the content to be shown in the Immersive Reader. |
+| options | [Options](#options) | Options for configuring certain behaviors of the Immersive Reader. Optional. |
#### Returns
-Returns a `Promise<LaunchResponse>`, which resolves when the Immersive Reader is loaded. The `Promise` resolves to a [`LaunchResponse`](#launchresponse) object.
+Returns a `Promise<LaunchResponse>`, which resolves when the Immersive Reader is loaded. The `Promise` resolves to a [LaunchResponse](#launchresponse) object.
#### Exceptions
-The returned `Promise` will be rejected with an [`Error`](#error) object if the Immersive Reader fails to load. For more information, see the [error codes](#error-codes).
-
-<br>
+If the Immersive Reader fails to load, the returned `Promise` is rejected with an [Error](#error) object.
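As a quick orientation, here's a sketch of a typical call. It assumes the script include that exposes the global `ImmersiveReader` object; `getTokenAndSubdomain` and its `/api/immersive-reader/token` route are hypothetical stand-ins for your own backend.

```typescript
// Minimal launch sketch. Assumes the SDK <script> include (global ImmersiveReader object).
declare const ImmersiveReader: any;

// Hypothetical helper: your own backend returns the Microsoft Entra token and subdomain.
async function getTokenAndSubdomain(): Promise<{ token: string, subdomain: string }> {
    const response = await fetch('/api/immersive-reader/token');
    return response.json();
}

async function launchReader(): Promise<void> {
    const { token, subdomain } = await getTokenAndSubdomain();

    const content = {
        title: 'Geography',
        chunks: [{ content: 'Geography is the study of places and the relationships between people and their environments.' }]
    };

    // Resolves to a LaunchResponse once the Immersive Reader has loaded.
    const launchResponse = await ImmersiveReader.launchAsync(token, subdomain, content, { timeout: 15000 });
    console.log('Session:', launchResponse.sessionId);
}
```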
-## close
+### Function: `close`
-Closes the Immersive Reader.
+`ImmersiveReader.close()` closes the Immersive Reader.
-An example use case for this function is if the exit button is hidden by setting ```hideExitButton: true``` in [options](#options). Then, a different button (for example a mobile header's back arrow) can call this ```close``` function when it's clicked.
+An example use case for this function is if the exit button is hidden by setting `hideExitButton: true` in [options](#options). Then, a different button (for example, a mobile header's back arrow) can call this `close` function when it's clicked.
```typescript close(): void; ```
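For example, a sketch of the pattern described above, assuming the script include (global `ImmersiveReader`) and a hypothetical `my-back-button` element provided by the host page:

```typescript
// Hide the built-in exit button and close the reader from the host page's own control instead.
declare const ImmersiveReader: any;

// Launch elsewhere with: ImmersiveReader.launchAsync(token, subdomain, content, { hideExitButton: true });

// Hypothetical host-page element acting as the alternative exit control.
document.getElementById('my-back-button')?.addEventListener('click', () => {
    ImmersiveReader.close();
});
```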
-<br>
-
-## Immersive Reader Launch Button
-
-The SDK provides default styling for the button for launching the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling. For more information, see [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md).
-
-```html
-<div class='immersive-reader-button'></div>
-```
-
-#### Optional attributes
-
-Use the following attributes to configure the look and feel of the button.
-
-| Attribute | Description |
-| | -- |
-| `data-button-style` | Sets the style of the button. Can be `icon`, `text`, or `iconAndText`. Defaults to `icon`. |
-| `data-locale` | Sets the locale. For example, `en-US` or `fr-FR`. Defaults to English `en`. |
-| `data-icon-px-size` | Sets the size of the icon in pixels. Defaults to 20px. |
-
-<br>
+### Function: `renderButtons`
-## renderButtons
+The `ImmersiveReader.renderButtons(options)` function isn't necessary if you use the [How to customize the Immersive Reader button](how-to-customize-launch-button.md) guidance.
-The ```renderButtons``` function isn't necessary if you're using the [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md) guidance.
-
-This function styles and updates the document's Immersive Reader button elements. If ```options.elements``` is provided, then the buttons will be rendered within each element provided in ```options.elements```. Using the ```options.elements``` parameter is useful when you have multiple sections in your document on which to launch the Immersive Reader, and want a simplified way to render multiple buttons with the same styling, or want to render the buttons with a simple and consistent design pattern. To use this function with the [renderButtons options](#renderbuttons-options) parameter, call ```ImmersiveReader.renderButtons(options: RenderButtonsOptions);``` on page load as demonstrated in the below code snippet. Otherwise, the buttons will be rendered within the document's elements that have the class ```immersive-reader-button``` as shown in [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md).
+This function styles and updates the document's Immersive Reader button elements. If `options.elements` is provided, then the buttons are rendered within each element provided in `options.elements`. Using the `options.elements` parameter is useful when you have multiple sections in your document on which to launch the Immersive Reader, and want a simplified way to render multiple buttons with the same styling, or want to render the buttons with a simple and consistent design pattern. To use this function with the [renderButtons options](#renderbuttons-options) parameter, call `ImmersiveReader.renderButtons(options: RenderButtonsOptions);` on page load as demonstrated in the following code snippet. Otherwise, the buttons are rendered within the document's elements that have the class `immersive-reader-button` as shown in [How to customize the Immersive Reader button](how-to-customize-launch-button.md).
```typescript // This snippet assumes there are two empty div elements in
const btns: HTMLDivElement[] = [btn1, btn2];
ImmersiveReader.renderButtons({elements: btns}); ```
-See the above [Optional Attributes](#optional-attributes) for more rendering options. To use these options, add any of the option attributes to each ```HTMLDivElement``` in your page HTML.
+See the [launch button](#launch-button) optional attributes for more rendering options. To use these options, add any of the option attributes to each `HTMLDivElement` in your page HTML.
```typescript renderButtons(options?: RenderButtonsOptions): void; ```
-#### renderButtons Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
-| `options` | [renderButtons options](#renderbuttons-options) | Options for configuring certain behaviors of the renderButtons function. Optional. |
+| options | [renderButtons options](#renderbuttons-options) | Options for configuring certain behaviors of the renderButtons function. Optional. |
-### renderButtons Options
+#### renderButtons options
Options for rendering the Immersive Reader buttons.
Options for rendering the Immersive Reader buttons.
} ```
-#### renderButtons Options Parameters
-
-| Setting | Type | Description |
+| Parameter | Type | Description |
| - | - | -- |
| elements | HTMLDivElement[] | Elements to render the Immersive Reader buttons in. |
-##### `elements`
```Parameters Type: HTMLDivElement[] Required: false ```
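To illustrate, here's a sketch that creates two button containers, applies the optional launch button attributes described in the next section, and then renders them. It assumes the script include (global `ImmersiveReader` object); the `section1` and `section2` element IDs are hypothetical.

```typescript
// Render two Immersive Reader buttons with the same styling.
declare const ImmersiveReader: any;

function createReaderButton(parent: HTMLElement): HTMLDivElement {
    const btn = document.createElement('div');
    btn.setAttribute('data-button-style', 'iconAndText'); // icon, text, or iconAndText
    btn.setAttribute('data-locale', 'en-US');
    parent.appendChild(btn);
    return btn;
}

window.addEventListener('load', () => {
    const section1 = document.getElementById('section1') as HTMLElement; // hypothetical sections
    const section2 = document.getElementById('section2') as HTMLElement;
    const btns: HTMLDivElement[] = [createReaderButton(section1), createReaderButton(section2)];
    ImmersiveReader.renderButtons({ elements: btns });
});
```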
-<br>
+## Launch button
+
+The SDK provides default styling for the Immersive Reader launch button. Use the `immersive-reader-button` class attribute to enable this styling. For more information, see [How to customize the Immersive Reader button](how-to-customize-launch-button.md).
+
+```html
+<div class='immersive-reader-button'></div>
+```
+
+Use the following optional attributes to configure the look and feel of the button.
+
+| Attribute | Description |
+| | -- |
+| data-button-style | Sets the style of the button. Can be `icon`, `text`, or `iconAndText`. Defaults to `icon`. |
+| data-locale | Sets the locale. For example, `en-US` or `fr-FR`. Defaults to English `en`. |
+| data-icon-px-size | Sets the size of the icon in pixels. Defaults to 20 px. |
## LaunchResponse
-Contains the response from the call to `ImmersiveReader.launchAsync`. A reference to the `HTML` `iframe` element that contains the Immersive Reader can be accessed via `container.firstChild`.
+Contains the response from the call to `ImmersiveReader.launchAsync`. A reference to the HTML `iframe` element that contains the Immersive Reader can be accessed via `container.firstChild`.
```typescript {
Contains the response from the call to `ImmersiveReader.launchAsync`. A referenc
} ```
-#### LaunchResponse Parameters
-
-| Setting | Type | Description |
+| Parameter | Type | Description |
| - | - | -- |
| container | HTMLDivElement | HTML element that contains the Immersive Reader `iframe` element. |
| sessionId | String | Globally unique identifier for this session, used for debugging. |
| charactersProcessed | number | Total number of characters processed. |
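For example, a sketch of reading these values after launch, assuming the global `ImmersiveReader` object and that `token`, `subdomain`, and `content` were obtained as shown earlier:

```typescript
// Inspect the LaunchResponse returned by launchAsync.
declare const ImmersiveReader: any;
declare const token: string, subdomain: string, content: any; // obtained as shown earlier

ImmersiveReader.launchAsync(token, subdomain, content).then((launchResponse: any) => {
    // The iframe hosting the Immersive Reader is the container's first child.
    const iframe = launchResponse.container.firstChild as HTMLIFrameElement;
    console.log('Session ID:', launchResponse.sessionId);
    console.log('Characters processed:', launchResponse.charactersProcessed);
    console.log('Reader iframe:', iframe);
});
```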
-
+ ## Error Contains information about an error.
Contains information about an error.
} ```
-#### Error Parameters
-
-| Setting | Type | Description |
+| Parameter | Type | Description |
| - | - | -- |
-| code | String | One of a set of error codes. For more information, see [Error codes](#error-codes). |
+| code | String | One of a set of error codes. |
| message | String | Human-readable representation of the error. |
-#### Error codes
-
-| Code | Description |
+| Error code | Description |
| - | -- |
-| BadArgument | Supplied argument is invalid, see `message` parameter of the [Error](#error). |
+| BadArgument | Supplied argument is invalid. See `message` parameter of the error. |
| Timeout | The Immersive Reader failed to load within the specified timeout. |
| TokenExpired | The supplied token is expired. |
| Throttled | The call rate limit has been exceeded. |
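For example, a sketch of handling a rejected launch, assuming the global `ImmersiveReader` object and previously acquired `token`, `subdomain`, and `content`:

```typescript
// Handle launch failures by inspecting the Error object's code.
declare const ImmersiveReader: any;
declare const token: string, subdomain: string, content: any; // obtained as shown earlier

ImmersiveReader.launchAsync(token, subdomain, content).catch((error: { code: string, message: string }) => {
    switch (error.code) {
        case 'TokenExpired':
            // Acquire a fresh Microsoft Entra token from your backend and retry.
            break;
        case 'Throttled':
            // Back off before retrying.
            break;
        default:
            console.error(`Immersive Reader failed to launch: ${error.code} - ${error.message}`);
    }
});
```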
-<br>
- ## Types ### Content
Contains the content to be shown in the Immersive Reader.
} ```
-#### Content Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
| title | String | Title text shown at the top of the Immersive Reader (optional) |
| chunks | [Chunk[]](#chunk) | Array of chunks |
Required: true
Default value: null ```
-<br>
- ### Chunk
-A single chunk of data, which will be passed into the Content of the Immersive Reader.
+A single chunk of data, which is passed into the content of the Immersive Reader.
```typescript {
A single chunk of data, which will be passed into the Content of the Immersive R
} ```
-#### Chunk Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
| content | String | The string that contains the content sent to the Immersive Reader. |
-| lang | String | Language of the text, the value is in IETF BCP 47-language tag format, for example, en, es-ES. Language will be detected automatically if not specified. For more information, see [Supported Languages](#supported-languages). |
+| lang | String | Language of the text. The value is in IETF BCP 47 language tag format, for example, en, es-ES. Language is detected automatically if not specified. For more information, see [Supported languages](#supported-languages). |
| mimeType | string | Plain text, MathML, HTML & Microsoft Word DOCX formats are supported. For more information, see [Supported MIME types](#supported-mime-types). | ##### `content`
Default value: "text/plain"
#### Supported MIME types
-| MIME Type | Description |
+| MIME type | Description |
| | -- |
| text/plain | Plain text. |
-| text/html | HTML content. [Learn more](#html-support)|
-| application/mathml+xml | Mathematical Markup Language (MathML). [Learn more](./how-to/display-math.md).
-| application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Word .docx format document.
--
-<br>
+| text/html | [HTML content](#html-support). |
+| application/mathml+xml | [Mathematical Markup Language (MathML)](how-to/display-math.md). |
+| application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Word .docx format document. |
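To illustrate, here's a sketch of a `Content` object that mixes MIME types and language tags; the title and text are placeholders.

```typescript
// A Content object with a plain-text chunk and an HTML chunk.
const content = {
    title: 'Biology',
    chunks: [
        {
            content: 'Cells are the basic building blocks of all living things.',
            lang: 'en',
            mimeType: 'text/plain'
        },
        {
            content: '<p>Las células son las unidades <b>básicas</b> de la vida.</p>',
            lang: 'es-ES',
            mimeType: 'text/html'
        }
    ]
};
```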
## Options
Contains properties that configure certain behaviors of the Immersive Reader.
} ```
-#### Options Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
-| uiLang | String | Language of the UI, the value is in IETF BCP 47-language tag format, for example, en, es-ES. Defaults to browser language if not specified. |
-| timeout | Number | Duration (in milliseconds) before [launchAsync](#launchasync) fails with a timeout error (default is 15,000 ms). This timeout only applies to the initial launch of the Reader page, when the Reader page opens successfully and the spinner starts. Adjustment of the timeout should'nt be necessary. |
-| uiZIndex | Number | Z-index of the `HTML` `iframe` element that will be created (default is 1000). |
-| useWebview | Boolean| Use a webview tag instead of an `HTML` `iframe` element, for compatibility with Chrome Apps (default is false). |
+| uiLang | String | Language of the UI. The value is in IETF BCP 47 language tag format, for example, en, es-ES. Defaults to browser language if not specified. |
+| timeout | Number | Duration (in milliseconds) before [launchAsync](#function-launchasync) fails with a timeout error (default is 15,000 ms). This timeout only applies to the initial launch of the Reader page, when the Reader page opens successfully and the spinner starts. Adjustment of the timeout shouldn't be necessary. |
+| uiZIndex | Number | Z-index of the HTML `iframe` element that is created (default is 1000). |
+| useWebview | Boolean| Use a webview tag instead of an HTML `iframe` element, for compatibility with Chrome Apps (default is false). |
| onExit | Function | Executes when the Immersive Reader exits. |
| customDomain | String | Reserved for internal use. Custom domain where the Immersive Reader webapp is hosted (default is null). |
| allowFullscreen | Boolean | The ability to toggle fullscreen (default is true). |
-| parent | Node | Node in which the `HTML` `iframe` element or `Webview` container is placed. If the element doesn't exist, iframe is placed in `body`. |
-| hideExitButton | Boolean | Hides the Immersive Reader's exit button arrow (default is false). This value should only be true if there's an alternative mechanism provided to exit the Immersive Reader (e.g a mobile toolbar's back arrow). |
-| cookiePolicy | [CookiePolicy](#cookiepolicy-options) | Setting for the Immersive Reader's cookie usage (default is *CookiePolicy.Disable*). It's the responsibility of the host application to obtain any necessary user consent following EU Cookie Compliance Policy. For more information, see [Cookie Policy Options](#cookiepolicy-options). |
+| parent | Node | Node in which the HTML `iframe` element or `Webview` container is placed. If the element doesn't exist, iframe is placed in `body`. |
+| hideExitButton | Boolean | Hides the Immersive Reader's exit button arrow (default is false). This value should only be true if there's an alternative mechanism provided to exit the Immersive Reader (for example, a mobile toolbar's back arrow). |
+| cookiePolicy | [CookiePolicy](#cookiepolicy-options) | Setting for the Immersive Reader's cookie usage (default is *CookiePolicy.Disable*). It's the responsibility of the host application to obtain any necessary user consent following EU Cookie Compliance Policy. For more information, see [Cookie Policy options](#cookiepolicy-options). |
| disableFirstRun | Boolean | Disable the first run experience. |
| readAloudOptions | [ReadAloudOptions](#readaloudoptions) | Options to configure Read Aloud. |
| translationOptions | [TranslationOptions](#translationoptions) | Options to configure translation. |
| displayOptions | [DisplayOptions](#displayoptions) | Options to configure text size, font, theme, and so on. |
-| preferences | String | String returned from onPreferencesChanged representing the user's preferences in the Immersive Reader. For more information, see [Settings Parameters](#settings-parameters) and [How-To Store User Preferences](./how-to-store-user-preferences.md). |
-| onPreferencesChanged | Function | Executes when the user's preferences have changed. For more information, see [How-To Store User Preferences](./how-to-store-user-preferences.md). |
+| preferences | String | String returned from onPreferencesChanged representing the user's preferences in the Immersive Reader. For more information, see [How to store user preferences](how-to-store-user-preferences.md). |
+| onPreferencesChanged | Function | Executes when the user's preferences have changed. For more information, see [How to store user preferences](how-to-store-user-preferences.md). |
| disableTranslation | Boolean | Disable the word and document translation experience. |
-| disableGrammar | Boolean | Disable the Grammar experience. This option will also disable Syllables, Parts of Speech and Picture Dictionary, which depends on Parts of Speech. |
-| disableLanguageDetection | Boolean | Disable Language Detection to ensure the Immersive Reader only uses the language that is explicitly specified on the [Content](#content)/[Chunk[]](#chunk). This option should be used sparingly, primarily in situations where language detection isn't working, for instance, this issue is more likely to happen with short passages of fewer than 100 characters. You should be certain about the language you're sending, as text-to-speech won't have the correct voice. Syllables, Parts of Speech and Picture Dictionary won't work correctly if the language isn't correct. |
+| disableGrammar | Boolean | Disable the Grammar experience. This option also disables Syllables, Parts of Speech, and Picture Dictionary, which depends on Parts of Speech. |
+| disableLanguageDetection | Boolean | Disable Language Detection to ensure the Immersive Reader only uses the language that is explicitly specified on the [Content](#content)/[Chunk[]](#chunk). This option should be used sparingly, primarily in situations where language detection isn't working. For instance, this issue is more likely to happen with short passages of fewer than 100 characters. You should be certain about the language you're sending, as text-to-speech won't have the correct voice. Syllables, Parts of Speech, and Picture Dictionary don't work correctly if the language isn't correct. |
##### `uiLang` ```Parameters
Default value: null
``` ##### `preferences`-
-> [!CAUTION]
-> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
- ```Parameters Type: String Required: false Default value: null ```
+> [!CAUTION]
+> Don't attempt to programmatically change the values of the `preferences` string sent to and from the Immersive Reader application because this might cause unexpected behavior resulting in a degraded user experience. Host applications should never assign a custom value to or manipulate the `preferences` string. When using the `preferences` string option, use only the exact value that was returned from the `onPreferencesChanged` callback option.
+ ##### `onPreferencesChanged` ```Parameters Type: Function
Required: false
Default value: null ```
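Putting a few of these together, here's a sketch of an options object that could be passed to `launchAsync`; the values shown are illustrative only.

```typescript
// Illustrative options for launchAsync.
const options = {
    uiLang: 'en',
    timeout: 15000, // milliseconds
    hideExitButton: false,
    disableFirstRun: true,
    onExit: () => {
        console.log('Immersive Reader closed.');
    }
};
```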
-<br>
- ## ReadAloudOptions ```typescript
type ReadAloudOptions = {
}; ```
-#### ReadAloudOptions Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
-| voice | String | Voice, either "Female" or "Male". Not all languages support both genders. |
-| speed | Number | Playback speed, must be between 0.5 and 2.5, inclusive. |
+| voice | String | Voice, either *Female* or *Male*. Not all languages support both genders. |
+| speed | Number | Playback speed. Must be between 0.5 and 2.5, inclusive. |
| autoPlay | Boolean | Automatically start Read Aloud when the Immersive Reader loads. |
+> [!NOTE]
+> Due to browser limitations, autoplay is not supported in Safari.
+ ##### `voice` ```Parameters Type: String
Default value: 1
Values available: 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5 ```
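For example, a sketch of configuring Read Aloud through the options object; the values are illustrative.

```typescript
// Start Read Aloud automatically with a female voice at 1.5x speed.
const options = {
    readAloudOptions: {
        voice: 'Female',
        speed: 1.5,
        autoPlay: true // not supported in Safari
    }
};
```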
-> [!NOTE]
-> Due to browser limitations, autoplay is not supported in Safari.
-
-<br>
- ## TranslationOptions ```typescript
type TranslationOptions = {
}; ```
-#### TranslationOptions Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
-| language | String | Sets the translation language, the value is in IETF BCP 47-language tag format, for example, fr-FR, es-MX, zh-Hans-CN. Required to automatically enable word or document translation. |
+| language | String | Sets the translation language. The value is in IETF BCP 47 language tag format, for example, fr-FR, es-MX, zh-Hans-CN. Required to automatically enable word or document translation. |
| autoEnableDocumentTranslation | Boolean | Automatically translate the entire document. |
| autoEnableWordTranslation | Boolean | Automatically enable word translation. |
type TranslationOptions = {
Type: String Required: true Default value: null
-Values available: For more information, see the Supported Languages section
+Values available: For more information, see the Supported languages section
```
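For example, a sketch that turns on word translation into Mexican Spanish; the values are illustrative.

```typescript
// Enable word translation to Spanish (Mexico).
const options = {
    translationOptions: {
        language: 'es-MX', // IETF BCP 47 language tag
        autoEnableWordTranslation: true,
        autoEnableDocumentTranslation: false
    }
};
```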
-<br>
- ## ThemeOption ```typescript
type DisplayOptions = {
}; ```
-#### DisplayOptions Parameters
-
-| Name | Type | Description |
+| Parameter | Type | Description |
| - | - | |
| textSize | Number | Sets the chosen text size. |
| increaseSpacing | Boolean | Sets whether text spacing is toggled on or off. |
-| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
-| themeOption | ThemeOption | Sets the chosen Theme of the reader ("Light", "Dark"). |
+| fontFamily | String | Sets the chosen font (*Calibri*, *ComicSans*, or *Sitka*). |
+| themeOption | ThemeOption | Sets the chosen theme of the reader (*Light*, *Dark*). |
##### `textSize` ```Parameters
Default value: "Calibri"
Values available: "Calibri", "Sitka", "ComicSans" ```
-<br>
-
-## CookiePolicy Options
+## CookiePolicy options
```typescript enum CookiePolicy { Disable, Enable } ```
-**The settings listed below are for informational purposes only**. The Immersive Reader stores its settings, or user preferences, in cookies. This *cookiePolicy* option **disables** the use of cookies by default to follow EU Cookie Compliance laws. If you want to re-enable cookies and restore the default functionality for Immersive Reader user preferences, your website or application will need proper consent from the user to enable cookies. Then, to re-enable cookies in the Immersive Reader, you must explicitly set the *cookiePolicy* option to *CookiePolicy.Enable* when launching the Immersive Reader. The table below describes what settings the Immersive Reader stores in its cookie when the *cookiePolicy* option is enabled.
+**The following settings are for informational purposes only**. The Immersive Reader stores its settings, or user preferences, in cookies. This *cookiePolicy* option **disables** the use of cookies by default to follow EU Cookie Compliance laws. If you want to re-enable cookies and restore the default functionality for Immersive Reader user preferences, your website or application needs proper consent from the user to enable cookies. Then, to re-enable cookies in the Immersive Reader, you must explicitly set the *cookiePolicy* option to *CookiePolicy.Enable* when launching the Immersive Reader.
-#### Settings Parameters
+The following table describes what settings the Immersive Reader stores in its cookie when the *cookiePolicy* option is enabled.
| Setting | Type | Description |
| - | - | -- |
| textSize | Number | Sets the chosen text size. |
-| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
+| fontFamily | String | Sets the chosen font (*Calibri*, *ComicSans*, or *Sitka*). |
| textSpacing | Number | Sets whether text spacing is toggled on or off. |
| formattingEnabled | Boolean | Sets whether HTML formatting is toggled on or off. |
-| theme | String | Sets the chosen theme (e.g "Light", "Dark"...). |
+| theme | String | Sets the chosen theme (*Light*, *Dark*). |
| syllabificationEnabled | Boolean | Sets whether syllabification is toggled on or off. |
| nounHighlightingEnabled | Boolean | Sets whether noun-highlighting is toggled on or off. |
| nounHighlightingColor | String | Sets the chosen noun-highlighting color. |
enum CookiePolicy { Disable, Enable }
| pictureDictionaryEnabled | Boolean | Sets whether Picture Dictionary is toggled on or off. |
| posLabelsEnabled | Boolean | Sets whether the superscript text label of each highlighted Part of Speech is toggled on or off. |
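For example, a sketch of re-enabling cookies only after the user has consented. It assumes the `CookiePolicy` enum is exposed on the global `ImmersiveReader` object when you use the script include; check how your build surfaces the enum if you install via npm.

```typescript
// Re-enable Immersive Reader preference cookies only after the user has consented.
declare const ImmersiveReader: any;

function buildOptions(userHasConsentedToCookies: boolean) {
    return {
        cookiePolicy: userHasConsentedToCookies
            ? ImmersiveReader.CookiePolicy.Enable
            : ImmersiveReader.CookiePolicy.Disable
    };
}
```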
-<br>
+## Supported languages
-## Supported Languages
-
-The translation feature of Immersive Reader supports many languages. For more information, see [Language Support](./language-support.md).
-
-<br>
+The translation feature of Immersive Reader supports many languages. For more information, see [Language support](language-support.md).
## HTML support
-When formatting is enabled, the following content will be rendered as HTML in the Immersive Reader.
+When formatting is enabled, the following content is rendered as HTML in the Immersive Reader.
-| HTML | Supported Content |
+| HTML | Supported content |
| | -- |
-| Font Styles | Bold, Italic, Underline, Code, Strikethrough, Superscript, Subscript |
-| Unordered Lists | Disc, Circle, Square |
-| Ordered Lists | Decimal, Upper-Alpha, Lower-Alpha, Upper-Roman, Lower-Roman |
+| Font styles | Bold, italic, underline, code, strikethrough, superscript, subscript |
+| Unordered lists | Disc, circle, square |
+| Ordered lists | Decimal, upper-Alpha, lower-Alpha, upper-Roman, lower-Roman |
-Unsupported tags will be rendered comparably. Images and tables are currently not supported.
-
-<br>
+Unsupported tags are rendered comparably. Images and tables are currently not supported.
## Browser support Use the most recent versions of the following browsers for the best experience with the Immersive Reader. * Microsoft Edge
-* Internet Explorer 11
* Google Chrome * Mozilla Firefox * Apple Safari
-<br>
-
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK on GitHub](https://github.com/microsoft/immersive-reader-sdk)
-* [Quickstart: Create a web app that launches the Immersive Reader (C#)](./quickstarts/client-libraries.md?pivots=programming-language-csharp)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk)
ai-services Security How To Update Role Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/security-how-to-update-role-assignment.md
Title: "Security Advisory: Update Role Assignment for Microsoft Entra authentication permissions"
+ Title: "Update role assignment for Microsoft Entra authentication"
-description: This article will show you how to update the role assignment on existing Immersive Reader resources due to a security bug discovered in November 2021
+description: Learn how to update the role assignment on existing Immersive Reader resources due to a security bug.
#-+ Previously updated : 01/06/2022- Last updated : 02/28/2024+
-# Security Advisory: Update Role Assignment for Microsoft Entra authentication permissions
+# Security advisory: Update role assignment for Microsoft Entra authentication
-A security bug has been discovered with Immersive Reader Microsoft Entra authentication configuration. We are advising that you change the permissions on your Immersive Reader resources as described below.
+A security bug was discovered that affects Microsoft Entra authentication for Immersive Reader. We advise that you change the permissions on your Immersive Reader resources.
## Background
-A security bug was discovered that relates to Microsoft Entra authentication for Immersive Reader. When initially creating your Immersive Reader resources and configuring them for Microsoft Entra authentication, it is necessary to grant permissions for the Microsoft Entra application identity to access your Immersive Reader resource. This is known as a Role Assignment. The Azure role that was previously used for permissions was the [Cognitive Services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role.
+When you initially create your Immersive Reader resources and configure them for Microsoft Entra authentication, it's necessary to grant permissions for the Microsoft Entra application identity to access your Immersive Reader resource. This is known as a *role assignment*. The Azure role that was previously used for permissions was the [Cognitive Services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role.
-During a security audit, it was discovered that this Cognitive Services User role has permissions to [List Keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Microsoft Entra access token in client web apps and browsers, and if the access token were to be stolen by a bad actor or attacker, there is a concern that this access token could be used to `list keys` of your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill.
+During a security audit, it was discovered that this Cognitive Services User role has permissions to [list keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Microsoft Entra access token in client web apps and browsers. If the access token were stolen by a bad actor or attacker, there's a concern that this access token could be used to `list keys` for your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill.
-In practice however, this attack or exploit is not likely to occur or may not even be possible. For Immersive Reader scenarios, customers obtain Microsoft Entra access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Microsoft Entra access token would need to have an audience of `https://management.azure.com`. Generally speaking, this is not too much of a concern, since the access tokens used for Immersive Reader scenarios would not work to `list keys`, as they do not have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Microsoft Entra ID to acquire the token. Again, this is not likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Microsoft Entra access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that attacker could compromise that process and change the audience.
+In practice, however, this attack or exploit isn't likely to occur or might not even be possible. For Immersive Reader scenarios, customers obtain Microsoft Entra access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Microsoft Entra access token would need to have an audience of `https://management.azure.com`. Generally speaking, this isn't much of a concern, since the access tokens used for Immersive Reader scenarios wouldn't work to `list keys`, as they don't have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Microsoft Entra ID to acquire the token. Again, this isn't likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Microsoft Entra access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that an attacker could compromise that process and change the audience.
-The real concern comes when or if any customer were to acquire tokens from Microsoft Entra ID directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it is possible that some customers are doing this.
+The real concern comes when or if any customer were to acquire tokens from Microsoft Entra ID directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it's possible that some customers are doing this.
-To mitigate the concerns about any possibility of using the Microsoft Entra access token to `list keys`, we have created a new built-in Azure role called `Cognitive Services Immersive Reader User` that does not have the permissions to `list keys`. This new role is not a shared role for the Azure AI services platform like `Cognitive Services User` role is. This new role is specific to Immersive Reader and will only allow calls to Immersive Reader APIs.
+To mitigate the concerns about any possibility of using the Microsoft Entra access token to `list keys`, we created a new built-in Azure role called `Cognitive Services Immersive Reader User` that doesn't have the permissions to `list keys`. This new role isn't a shared role for the Azure AI services platform like `Cognitive Services User` role is. This new role is specific to Immersive Reader and only allows calls to Immersive Reader APIs.
-We are advising that ALL customers migrate to using the new `Cognitive Services Immersive Reader User` role instead of the original `Cognitive Services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions.
+We advise ALL customers to use the new `Cognitive Services Immersive Reader User` role instead of the original `Cognitive Services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions.
This recommendation applies to ALL customers, to ensure that this vulnerability is patched for everyone, no matter what the implementation scenario or likelihood of attack.
-If you do NOT do this, nothing will break. The old role will continue to function. The security impact for most customers is minimal. However, it is advised that you migrate to the new role to mitigate the security concerns discussed above. Applying this update is a security advisory recommendation; it is not a mandate.
+If you do NOT do this, nothing will break. The old role will continue to function. The security impact for most customers is minimal. However, we advise that you migrate to the new role to mitigate the security concerns discussed. Applying this update is a security advisory recommendation; it's not a mandate.
-Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) will automatically use the new role.
+Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) automatically use the new role.
+## Update role and rotate your subscription keys
-## Call to action
+If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) before February 2022, we advise that you perform the following operation to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource.
-If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) prior to February 2022, it is advised that you perform the operation below to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource.
+After you update the role using the following script, we also advise that you rotate the subscription keys on your resource. This is in case your keys were compromised by the exploit, and somebody is actually using your resource with subscription key authentication without your consent. Rotating the keys renders the previous keys invalid and denies any further access. For customers using Microsoft Entra authentication, which should be everyone per current Immersive Reader SDK implementation, rotating the keys has no effect on the Immersive Reader service, since Microsoft Entra access tokens are used for authentication, not the subscription key. Rotating the subscription keys is just another precaution.
-After you have updated the role using the script below, it is also advised that you rotate the subscription keys on your resource. This is in case your keys have been compromised by the exploit above, and somebody is actually using your resource with subscription key authentication without your consent. Rotating the keys will render the previous keys invalid and deny any further access. For customers using Microsoft Entra authentication, which should be everyone per current Immersive Reader SDK implementation, rotating the keys will have no impact on the Immersive Reader service, since Microsoft Entra access tokens are used for authentication, not the subscription key. Rotating the subscription keys is just another precaution.
+You can rotate the subscription keys in the [Azure portal](https://portal.azure.com). Navigate to your resource and then to the `Keys and Endpoint` section. At the top, there are buttons to `Regenerate Key1` and `Regenerate Key2`.
-You can rotate the subscription keys on the [Azure portal](https://portal.azure.com). Navigate to your resource and then to the `Keys and Endpoint` blade. At the top, there are buttons to `Regenerate Key1` and `Regenerate Key2`.
----
-### Use Azure PowerShell environment to update your Immersive Reader resource Role assignment
+### Use Azure PowerShell to update your role assignment
1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
You can rotate the subscription keys on the [Azure portal](https://portal.azure.
throw "Error: Failed to find Immersive Reader resource" }
- # Get the Azure AD application service principal
+ # Get the Microsoft Entra application service principal
$principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv if (-not $principalId) {
- throw "Error: Failed to find Azure AD application service principal"
+ throw "Error: Failed to find Microsoft Entra application service principal"
} $newRoleName = "Cognitive Services Immersive Reader User"
You can rotate the subscription keys on the [Azure portal](https://portal.azure.
} ```
-1. Run the function `Update-ImmersiveReaderRoleAssignment`, supplying the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate.
+1. Run the function `Update-ImmersiveReaderRoleAssignment`, replacing the `<PARAMETER_VALUES>` placeholders with your own values as appropriate.
- ```azurepowershell-interactive
- Update-ImmersiveReaderRoleAssignment -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceName '<RESOURCE_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>'
+ ```azurepowershell
+ Update-ImmersiveReaderRoleAssignment -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceName '<RESOURCE_NAME>' -AADAppIdentifierUri '<MICROSOFT_ENTRA_APP_IDENTIFIER_URI>'
```
- The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or use this command as-is. Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values.
+ The full command looks something like the following. Here we put each parameter on its own line for clarity, so you can see the whole command. Don't copy or use this command as-is. Copy and use the command with your own values. This example has dummy values for the `<PARAMETER_VALUES>`. Yours will be different, as you come up with your own names for these values.
- ```Update-ImmersiveReaderRoleAssignment```<br>
- ``` -SubscriptionName 'MyOrganizationSubscriptionName'```<br>
- ``` -ResourceGroupName 'MyResourceGroupName'```<br>
- ``` -ResourceName 'MyOrganizationImmersiveReader'```<br>
- ``` -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'```<br>
+ ```azurepowershell
+ Update-ImmersiveReaderRoleAssignment
+ -SubscriptionName 'MyOrganizationSubscriptionName'
+ -ResourceGroupName 'MyResourceGroupName'
+ -ResourceName 'MyOrganizationImmersiveReader'
+ -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'
+ ```
| Parameter | Comments |
| | |
| SubscriptionName |The name of your Azure subscription. |
- | ResourceGroupName |The name of the Resource Group that contains your Immersive Reader resource. |
+ | ResourceGroupName |The name of the resource group that contains your Immersive Reader resource. |
| ResourceName |The name of your Immersive Reader resource. |
- | AADAppIdentifierUri |The URI for your Azure AD app. |
-
+ | AADAppIdentifierUri |The URI for your Microsoft Entra app. |
-## Next steps
+## Next step
-* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [Quickstart: Get started with Immersive Reader](quickstarts/client-libraries.md)
ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
Title: "Tutorial: Create an iOS app that takes a photo and launches it in the Immersive Reader (Swift)"
-description: In this tutorial, you will build an iOS app from scratch and add the Picture to Immersive Reader functionality.
+description: Learn how to build an iOS app from scratch and add the Picture to Immersive Reader functionality.
#-+ Previously updated : 01/14/2020- Last updated : 02/28/2024+ #Customer intent: As a developer, I want to integrate two Azure AI services, the Immersive Reader and the Read API into my iOS application so that I can view any text from a photo in the Immersive Reader.
The [Immersive Reader](https://www.onenote.com/learningtools) is an inclusively
The [Azure AI Vision Read API](../../ai-services/computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream.
-In this tutorial, you will build an iOS app from scratch and integrate the Read API, and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [here](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios).
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+In this tutorial, you build an iOS app from scratch and integrate the Read API and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [on GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios).
## Prerequisites
-* [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12)
-* An Immersive Reader resource configured for Microsoft Entra authentication. Follow [these instructions](./how-to-create-immersive-reader.md) to get set up. You will need some of the values created here when configuring the sample project properties. Save the output of your session into a text file for future reference.
-* Usage of this sample requires an Azure subscription to the Azure AI Vision service. [Create an Azure AI Vision resource in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision).
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/ai-services/).
+* MacOS and [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12).
+* An Immersive Reader resource configured for Microsoft Entra authentication. Follow [these instructions](how-to-create-immersive-reader.md) to get set up.
+* A subscription to the Azure AI Vision service. Create an [Azure AI Vision resource in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision).
## Create an Xcode project Create a new project in Xcode.
-![New Project](./media/ios/xcode-create-project.png)
Choose **Single View App**.
-![New Single View App](./media/ios/xcode-single-view-app.png)
## Get the SDK CocoaPod The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via Cocoapods:
-1. [Install CocoaPods](http://guides.cocoapods.org/using/getting-started.html) - Follow the getting started guide to install Cocoapods.
+1. Follow the [guide to install Cocoapods](http://guides.cocoapods.org/using/getting-started.html).
2. Create a Podfile by running `pod init` in your Xcode project's root directory.
The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via
## Acquire a Microsoft Entra authentication token
-You need some values from the Microsoft Entra authentication configuration prerequisite step above for this part. Refer back to the text file you saved of that session.
+You need some values from the Microsoft Entra authentication configuration step in the prerequisites section. Refer back to the text file you saved from that session.
````text TenantId => Azure subscription TenantId
-ClientId => Azure AD ApplicationId
-ClientSecret => Azure AD Application Service Principal password
+ClientId => Microsoft Entra ApplicationId
+ClientSecret => Microsoft Entra Application Service Principal password
Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI PowerShell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/') ````
-In the main project folder, which contains the ViewController.swift file, create a Swift class file called Constants.swift. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control, as it contains secrets that should not be made public. It is recommended that you do not keep secrets in your app. Instead, we recommend using a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
+In the main project folder, which contains the *ViewController.swift* file, create a Swift class file called `Constants.swift`. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control because it contains secrets that shouldn't be made public. We recommended that you don't keep secrets in your app. Instead, use a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
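If you do move token acquisition to a backend service, a minimal sketch of such an endpoint might look like the following. This is an illustration only, not part of the tutorial's sample code: it assumes Flask and the `requests` library, reads the Microsoft Entra values above from environment variables, and should itself sit behind your own authentication. The request mirrors the client credentials form that `getToken` builds later in this tutorial.

```python
# Minimal sketch of a backend token endpoint (illustrative only).
# Assumes Flask and requests are installed, and that TENANT_ID, CLIENT_ID,
# and CLIENT_SECRET are set as environment variables on the server.
import os
import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/immersive-reader/token")
def get_immersive_reader_token():
    # Microsoft Entra client-credentials token request; adjust the endpoint
    # if your configuration differs.
    response = requests.post(
        f"https://login.windows.net/{os.environ['TENANT_ID']}/oauth2/token",
        data={
            "grant_type": "client_credentials",
            "resource": "https://cognitiveservices.azure.com/",
            "client_id": os.environ["CLIENT_ID"],
            "client_secret": os.environ["CLIENT_SECRET"],
        },
    )
    response.raise_for_status()
    return jsonify({"token": response.json()["access_token"]})
```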
## Set up the app to run without a storyboard
-Open AppDelegate.swift and replace the file with the following code.
+Open *AppDelegate.swift* and replace the file with the following code.
```swift import UIKit
class AppDelegate: UIResponder, UIApplicationDelegate {
## Add functionality for taking and uploading photos
-Rename ViewController.swift to PictureLaunchViewController.swift and replace the file with the following code.
+Rename *ViewController.swift* to *PictureLaunchViewController.swift* and replace the file with the following code.
```swift import UIKit
class PictureLaunchViewController: UIViewController, UINavigationControllerDeleg
}) }
- /// Retrieves the token for the Immersive Reader using Azure Active Directory authentication
+ /// Retrieves the token for the Immersive Reader using Microsoft Entra authentication
/// /// - Parameters:
- /// -onSuccess: A closure that gets called when the token is successfully recieved using Azure Active Directory authentication.
- /// -theToken: The token for the Immersive Reader recieved using Azure Active Directory authentication.
- /// -onFailure: A closure that gets called when the token fails to be obtained from the Azure Active Directory Authentication.
- /// -theError: The error that occurred when the token fails to be obtained from the Azure Active Directory Authentication.
+ /// -onSuccess: A closure that gets called when the token is successfully received using Microsoft Entra authentication.
+ /// -theToken: The token for the Immersive Reader received using Microsoft Entra authentication.
+ /// -onFailure: A closure that gets called when the token fails to be obtained from the Microsoft Entra authentication.
+ /// -theError: The error that occurred when the token fails to be obtained from the Microsoft Entra authentication.
func getToken(onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) { let tokenForm = "grant_type=client_credentials&resource=https://cognitiveservices.azure.com/&client_id=" + Constants.clientId + "&client_secret=" + Constants.clientSecret
class PictureLaunchViewController: UIViewController, UINavigationControllerDeleg
## Build and run the app Set the archive scheme in Xcode by selecting a simulator or device target.
-![Archive scheme](./media/ios/xcode-archive-scheme.png)<br/>
-![Select Target](./media/ios/xcode-select-target.png)
-In Xcode, press Ctrl + R or select the play button to run the project and the app should launch on the specified simulator or device.
++
+In Xcode, press **Ctrl+R** or select the play button to run the project. The app should launch on the specified simulator or device.
In your app, you should see:
-![Sample app](./media/ios/picture-to-immersive-reader-ipad-app.png)
-Inside the app, take or upload a photo of text by pressing the 'Take Photo' button or 'Choose Photo from Library' button and the Immersive Reader will then launch displaying the text from the photo.
+Take or upload a photo of text by pressing the **Take Photo** button or **Choose Photo from Library** button. The Immersive Reader then launches and displays the text from the photo.
-![Immersive Reader](./media/ios/picture-to-immersive-reader-ipad.png)
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](reference.md)
ai-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/conversation-summarization.md
[!INCLUDE [availability](../includes/regional-availability.md)]
-## Conversation summarization types
+## Conversation summarization aspects
-- Chapter title and narrative (general conversation) are designed to summarize a conversation into chapter titles, and a summarization of the conversation's contents. This summarization type works on conversations with any number of parties.
+- Chapter title and narrative (general conversation) are designed to summarize a conversation into chapter titles, and a summarization of the conversation's contents. This summarization aspect works on conversations with any number of parties.
- Issue and resolution (call center focused) is designed to summarize text chat logs between customers and customer-service agents. This feature is capable of providing both issues and resolutions present in these logs, which occur between two parties.
+- Narrative is designed to summarize the narrative of a conversation.
+ - Recap is designed to condense lengthy meetings or conversations into a concise one-paragraph summary to provide a quick overview. - Follow-up tasks is designed to summarize action items and tasks that arise during a meeting.
The AI models used by the API are provided by the service, you just have to send content for analysis.
+For easier navigation, here are links to the corresponding sections for each aspect:
+
+|Aspect |Section |
+|--||
+|Issue and Resolution |[Issue and Resolution](#get-summaries-from-text-chats)|
+|Chapter Title |[Chapter Title](#get-chapter-titles) |
+|Narrative |[Narrative](#get-narrative-summarization) |
+|Recap and Follow-up |[Recap and follow-up](#get-narrative-summarization) |
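Ahead of the detailed sections below, here's a rough Python sketch that submits the issue and resolution aspect through the asynchronous REST API. The endpoint path, API version, task kind, and aspect names are assumptions to verify against the summarization quickstart for your resource; the point is only the overall flow of submitting a job and polling `operation-location`.

```python
# Rough sketch: request issue/resolution summaries for a two-party chat.
# Assumptions to verify: API version, task kind, and aspect names.
import os
import time
import requests

endpoint = os.environ["LANGUAGE_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["LANGUAGE_KEY"]

body = {
    "displayName": "Conversation summarization example",
    "analysisInput": {
        "conversations": [{
            "id": "1",
            "language": "en",
            "modality": "text",
            "conversationItems": [
                {"id": "1", "participantId": "Agent", "text": "How can I help you today?"},
                {"id": "2", "participantId": "Customer", "text": "My coffee machine won't turn on."},
                {"id": "3", "participantId": "Agent", "text": "Try holding the power button for five seconds to reset it."}
            ]
        }]
    },
    "tasks": [{
        "kind": "ConversationalSummarizationTask",
        "taskName": "Issue and resolution",
        "parameters": {"summaryAspects": ["issue", "resolution"]}
    }]
}

# Submit the asynchronous job.
resp = requests.post(
    f"{endpoint}/language/analyze-conversations/jobs?api-version=2023-04-01",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
resp.raise_for_status()
job_url = resp.headers["operation-location"]

# Poll until the job finishes, then print the raw result.
while True:
    job = requests.get(job_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)

print(job)
```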
+ ## Features The conversation summarization API uses natural language processing techniques to summarize conversations into shorter summaries per request. Conversation summarization can summarize for issues and resolutions discussed in a two-party conversation or summarize a long conversation into chapters and a short narrative for each chapter.
ai-services Document Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/document-summarization.md
Document summarization is designed to shorten content that users consider too lo
**Abstractive summarization**: Produces a summary by generating summarized sentences from the document that capture the main idea.
-Both of these capabilities are able to summarize around specific items of interest when specified.
+**Query-focused summarization**: Allows you to use a query when summarizing.
+
+Each of these capabilities can summarize around specific items of interest when specified.
The AI models used by the API are provided by the service, you just have to send content for analysis.
+For easier navigation, here are links to the corresponding sections for each aspect:
+
+|Aspect |Section |
+|-|-|
+|Extractive |[Extractive Summarization](#try-document-extractive-summarization) |
+|Abstractive |[Abstractive Summarization](#try-document-abstractive-summarization)|
+|Query-focused|[Query-focused Summarization](#query-based-summarization) |
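As a quick illustration of the extractive and abstractive aspects, the following sketch uses the `azure-ai-textanalytics` Python client. The method names (`begin_extract_summary`, `begin_abstract_summary`) are assumed from recent versions of that library; check the linked how-to sections for the exact calls and API versions supported by your resource.

```python
# Sketch: extractive and abstractive document summarization with the
# azure-ai-textanalytics client (5.3.0 or later assumed).
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint=os.environ["LANGUAGE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["LANGUAGE_KEY"]),
)

documents = [
    "At Microsoft, we are on a quest to advance AI beyond existing techniques..."
]

# Extractive: returns salient sentences with rank scores.
extract_poller = client.begin_extract_summary(documents, max_sentence_count=3)
for result in extract_poller.result():
    if not result.is_error:
        for sentence in result.sentences:
            print(sentence.rank_score, sentence.text)

# Abstractive: returns newly generated summary text.
abstract_poller = client.begin_abstract_summary(documents)
for result in abstract_poller.result():
    if not result.is_error:
        for summary in result.summaries:
            print(summary.text)
```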
++ ## Features > [!TIP]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md
Summarization is one of the features offered by [Azure AI Language](../overview.
Though the services are labeled document and conversation summarization, document summarization only accepts plain text blocks, and conversation summarization accept various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use document summarization for that scenario.
-Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md).
- # [Document summarization](#tab/document-summarization) This documentation contains the following article types:
This documentation contains the following article types:
* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Document summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive.
+Document summarization uses natural language processing techniques to generate a summary for documents. There are three supported API approaches to automatic summarization: extractive, abstractive, and query-focused.
Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that aren't verbatim extract sentences from the original document. These features are designed to shorten content that could be considered too long to read.
For more information, *see* [**Use native documents for language processing**](.
## Key features
-There are two types of document summarization this API provides:
+This API provides the following aspects of document summarization:
-* **Extractive summarization**: Produces a summary by extracting salient sentences within the document.
+* [**Extractive summarization**](how-to/document-summarization.md#try-document-extractive-summarization): Produces a summary by extracting salient sentences within the document.
* Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content. * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank. * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary extractive summarization returns the three highest scored sentences. * Positional information: The start position and length of extracted sentences.
-* **Abstractive summarization**: Generates a summary that doesn't use the same words as in the document, but captures the main idea.
+* [**Abstractive summarization**](how-to/document-summarization.md#try-document-abstractive-summarization): Generates a summary that doesn't use the same words as in the document, but captures the main idea.
* Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document can be segmented so multiple groups of summary texts can be returned with their contextual input range. * Contextual input range: The range within the input document that was used to generate the summary text.
+* [**Query-focused summarization**](how-to/document-summarization.md#query-based-summarization): Generates a summary based on a query.
+ As an example, consider the following paragraph of text: *"At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we achieve human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
This documentation contains the following article types:
Conversation summarization supports the following features:
-* **Issue/resolution summarization**: A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers.
-* **Chapter title summarization**: Segments a conversation into chapters based on the topics discussed in the conversation, and gives suggested chapter titles of the input conversation.
-* **Recap**: Summarizes a conversation into a brief paragraph.
-* **Narrative summarization**: Generates detail call notes, meeting notes or chat summaries of the input conversation.
-* **Follow-up tasks**: Gives a list of follow-up tasks discussed in the input conversation.
+* [**Issue/resolution summarization**](how-to/conversation-summarization.md#get-summaries-from-text-chats): A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers.
+* [**Chapter title summarization**](how-to/conversation-summarization.md#get-chapter-titles): Segments a conversation into chapters based on the topics discussed in the conversation, and gives suggested chapter titles of the input conversation.
+* [**Recap**](how-to/conversation-summarization.md#get-narrative-summarization): Summarizes a conversation into a brief paragraph.
+* [**Narrative summarization**](how-to/conversation-summarization.md#get-narrative-summarization): Generates detailed call notes, meeting notes, or chat summaries of the input conversation.
+* [**Follow-up tasks**](how-to/conversation-summarization.md#get-narrative-summarization): Gives a list of follow-up tasks discussed in the input conversation.
## When to use issue and resolution summarization
ai-services Api Version Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md
Previously updated : 02/13/2024 Last updated : 02/29/2024 recommendations: false
This version contains support for all the latest Azure OpenAI features including
- [Fine-tuning](./how-to/fine-tuning.md) `gpt-35-turbo`, `babbage-002`, and `davinci-002` models.[**Added in 2023-10-01-preview**] - [Whisper](./whisper-quickstart.md). [**Added in 2023-09-01-preview**] - [Function calling](./how-to/function-calling.md) [**Added in 2023-07-01-preview**]-- [DALL-E](./dall-e-quickstart.md) [**Added in 2023-06-01-preview**] - [Retrieval augmented generation with the on your data feature](./use-your-data-quickstart.md). [**Added in 2023-06-01-preview**] ## Retiring soon
This version contains support for all the latest Azure OpenAI features including
On April 2, 2024 the following API preview releases will be retired and will stop accepting API requests: - 2023-03-15-preview-- 2023-06-01-preview - 2023-07-01-preview - 2023-08-01-preview
+- 2023-09-01-preview
+- 2023-10-01-preview
+- 2023-12-01-preview
To avoid service disruptions, you must update to use the latest preview version before the retirement date.
ai-services Assistants Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference.md
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id
+## File upload API reference
+
+Assistants use the [same API for file upload as fine-tuning](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP). When uploading a file, you have to specify an appropriate value for the [purpose parameter](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP#purpose).
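For example, a minimal sketch with the OpenAI Python client (assuming the usual Azure OpenAI environment variables and a placeholder file name):

```python
# Sketch: upload a file for use with Assistants (note purpose="assistants").
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

uploaded = client.files.create(
    file=open("mydata.csv", "rb"),
    purpose="assistants",  # use "fine-tune" when uploading training data instead
)
print(uploaded.id)
```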
++ ## Assistant object | Field | Type | Description |
ai-services Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/assistants.md
We provide a walkthrough of the Assistants playground in our [quickstart guide](
|**Run** | Activation of an Assistant to begin running based on the contents of the Thread. The Assistant uses its configuration and the ThreadΓÇÖs Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread.| |**Run Step** | A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during itΓÇÖs run. Examining Run Steps allows you to understand how the Assistant is getting to its final results. |
+## Assistants data access
+
+Currently, assistants, threads, messages, and files created for Assistants are scoped at the Azure OpenAI resource level. Therefore, anyone with access to the Azure OpenAI resource or its API keys can read and write assistants, threads, messages, and files.
+
+We strongly recommend the following data access controls:
+
+- Implement authorization. Before performing reads or writes on assistants, threads, messages, and files, ensure that the end-user is authorized to do so (see the sketch after this list).
+- Restrict Azure OpenAI resource and API key access. Carefully consider who has access to Azure OpenAI resources where assistants are being used and associated API keys.
+- Routinely audit which accounts/individuals have access to the Azure OpenAI resource. API keys and resource level access enable a wide range of operations including reading and modifying messages and files.
+- Enable [diagnostic settings](../how-to/monitoring.md#configure-diagnostic-settings) to allow long-term tracking of certain aspects of the Azure OpenAI resource's activity log.
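To make the first recommendation concrete, here's a minimal sketch of an application-level authorization gate. The `user_owns_thread` lookup and ownership map are hypothetical; the point is that your own service checks the end user before any Assistants read or write is made with the shared resource credentials.

```python
# Sketch: gate Assistants reads behind your own authorization check.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

# Hypothetical ownership map; replace with your app's real data store.
THREAD_OWNERS = {"thread_abc123": "user_1"}

def user_owns_thread(user_id: str, thread_id: str) -> bool:
    return THREAD_OWNERS.get(thread_id) == user_id

def list_thread_messages(user_id: str, thread_id: str):
    # Refuse the call before any request reaches the shared Azure OpenAI resource.
    if not user_owns_thread(user_id, thread_id):
        raise PermissionError("User is not authorized to read this thread.")
    return client.beta.threads.messages.list(thread_id=thread_id)
```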
+ ## See also * Learn more about Assistants and [Code Interpreter](../how-to/code-interpreter.md) * Learn more about Assistants and [function calling](../how-to/assistant-functions.md) * [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants)---
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
When you chat with a model, providing a history of the chat will help the model
## Token usage estimation for Azure OpenAI On Your Data
+Azure OpenAI On Your Data is a Retrieval Augmented Generation (RAG) service that uses both a search service (such as Azure AI Search) and generation (Azure OpenAI models) to let users get answers to their questions based on the provided data.
+As part of this RAG pipeline, there are three high-level steps:
+
+1. Reformulate the user query into a list of search intents. This is done by making a call to the model with a prompt that includes instructions, the user question, and conversation history. Let's call this an *intent prompt*.
+
+1. For each intent, multiple document chunks are retrieved from the search service. After filtering out irrelevant chunks based on the user-specified threshold of strictness and reranking/aggregating the chunks based on internal logic, the user-specified number of document chunks are chosen.
+
+1. These document chunks, along with the user question, conversation history, role information, and instructions are sent to the model to generate the final model response. Let's call this the *generation prompt*.
+
+In total, there are two calls made to the model:
+
+* For processing the intent: The token estimate for the *intent prompt* includes those for the user question, conversation history and the instructions sent to the model for intent generation.
+
+* For generating the response: The token estimate for the *generation prompt* includes those for the user question, conversation history, the retrieved list of document chunks, role information and the instructions sent to it for generation.
+
+The model-generated output tokens (both intents and response) need to be taken into account for total token estimation. Summing all four columns below gives the average total tokens used for generating a response.
+
+| Model | Generation prompt token count | Intent prompt token count | Response token count | Intent token count |
+|--|--|--|--|--|
+| gpt-35-turbo-16k | 4297 | 1366 | 111 | 25 |
+| gpt-4-0613 | 3997 | 1385 | 118 | 18 |
+| gpt-4-1106-preview | 4538 | 811 | 119 | 27 |
+| gpt-35-turbo-1106 | 4854 | 1372 | 110 | 26 |
+
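For example, using the `gpt-4-1106-preview` row above, the average total token usage per response in this test set works out as follows:

```python
# Average total tokens per response for gpt-4-1106-preview (values from the table above).
generation_prompt = 4538
intent_prompt = 811
response_tokens = 119
intent_tokens = 27

total = generation_prompt + intent_prompt + response_tokens + intent_tokens
print(total)  # 5495
```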
+The above numbers are based on testing on a data set with:
+
+* 191 conversations
+* 250 questions
+* 10 average tokens per question
+* 4 conversational turns per conversation on average
+
+And the following [parameters](#runtime-parameters).
+
+|Setting |Value |
+|||
+|Number of retrieved documents | 5 |
+|Strictness | 3 |
+|Chunk size | 1024 |
+|Limit responses to ingested data? | True |
+
+These estimates will vary based on the values set for the above parameters. For example, if the number of retrieved documents is set to 10 and strictness is set to 1, the token count will go up. If returned responses aren't limited to the ingested data, there are fewer instructions given to the model and the number of tokens will go down.
+
+The estimates also depend on the nature of the documents and questions being asked. For example, if the questions are open-ended, the responses are likely to be longer. Similarly, a longer system message would contribute to a longer prompt that consumes more tokens, and if the conversation history is long, the prompt will be longer.
| Model | Max tokens for system message | Max tokens for model response | |--|--|--|
When you chat with a model, providing a history of the chat will help the model
| GPT-4-0613-8K | 400 | 1500 | | GPT-4-0613-32K | 2000 | 6400 |
-The table above shows the total number of tokens available for each model type. It also determines the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens:
+The table above shows the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens:
-* The meta prompt (MP): if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is 4,036 tokens. Otherwise (for example if `inScope=False`) the maximum is 3,444 tokens. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt and the query rewriting prompts for retrieval.
+* The meta prompt: if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is higher. Otherwise (for example, if `inScope=False`) the maximum is lower. This number varies depending on the token length of the user question and conversation history. This estimate includes the base prompt and the query rewriting prompts for retrieval.
* User question and history: Variable but capped at 2,000 tokens. * Retrieved documents (chunks): The number of tokens used by the retrieved document chunks depends on multiple factors. The upper bound for this is the number of retrieved document chunks multiplied by the chunk size. It will, however, be truncated based on the tokens available for the specific model being used after counting the rest of the fields. 20% of the available tokens are reserved for the model response. The remaining 80% of available tokens include the meta prompt, the user question and conversation history, and the system message. The remaining token budget is used by the retrieved document chunks.
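As a rough illustration of that budgeting logic (a sketch only; the service's internal accounting may differ):

```python
# Illustrative budgeting sketch for retrieved chunks (not the service's exact logic).
def chunk_token_budget(model_max_tokens: int,
                       meta_prompt_tokens: int,
                       question_and_history_tokens: int,
                       system_message_tokens: int,
                       retrieved_chunks: int,
                       chunk_size: int) -> int:
    reserved_for_response = int(model_max_tokens * 0.20)   # 20% reserved for the response
    available_for_prompt = model_max_tokens - reserved_for_response
    budget = available_for_prompt - (meta_prompt_tokens
                                     + question_and_history_tokens
                                     + system_message_tokens)
    upper_bound = retrieved_chunks * chunk_size            # can't exceed chunks * chunk size
    return max(0, min(budget, upper_bound))

# Example: a 16k model, 5 retrieved chunks of 1,024 tokens each.
print(chunk_token_budget(16384, 2000, 1500, 400, 5, 1024))  # 5120 (capped by the upper bound)
```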
+In order to compute the number of tokens consumed by your input (such as your question, the system message/role information), use the following code sample.
+ ```python import tiktoken
class TokenEstimator(object):
token_output = TokenEstimator.estimate_tokens(input_text) ``` + ## Troubleshooting ### Failed ingestion jobs
ai-services Assistant Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant-functions.md
After you submit tool outputs, the **Run** will enter the `queued` state before
## See also
+* [Assistants API Reference](../assistants-reference.md)
* Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). * [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants)
ai-services Code Interpreter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/code-interpreter.md
We recommend using assistants with the latest models to take advantage of the ne
|.xml|application/xml or "text/xml"| |.zip|application/zip|
+### File upload API reference
+
+Assistants use the [same API for file upload as fine-tuning](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP). When uploading a file, you have to specify an appropriate value for the [purpose parameter](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP#purpose).
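Putting the upload and the assistant together, a compact sketch with the OpenAI Python client might look like the following. It assumes the preview-era SDK shape that accepts `file_ids`, plus placeholder deployment and file names.

```python
# Sketch: upload a file, then attach it to an assistant that uses Code Interpreter.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

uploaded = client.files.create(file=open("mydata.csv", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    instructions="You are an AI assistant that analyzes the attached data file.",
    model="gpt-4-1106-preview",           # replace with your deployment name
    tools=[{"type": "code_interpreter"}],
    file_ids=[uploaded.id],
)
print(assistant.id)
```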
+ ## Enable Code Interpreter # [Python 1.x](#tab/python)
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2
{ "type": "code_interpreter" } ], "model": "gpt-4-1106-preview",
- "file_ids": ["file_123abc456"]
+ "file_ids": ["assistant-123abc456"]
}' ```
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/<YOUR-THREAD-ID>
-d '{ "role": "user", "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?",
- "file_ids": ["file_123abc456"]
+ "file_ids": ["assistant-123abc456"]
}' ```
Files generated by Code Interpreter can be found in the Assistant message respon
"content": [ { "image_file": {
- "file_id": "file-1YGVTvNzc2JXajI5JU9F0HMD"
+ "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD"
}, "type": "image_file" },
client = AzureOpenAI(
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
-image_data = client.files.content("file-abc123")
+image_data = client.files.content("assistant-abc123")
image_data_bytes = image_data.read() with open("./my-image.png", "wb") as file:
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files/<YOUR-FILE-ID>/con
## See also
+* [File Upload API reference](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-02-15-preview&tabs=HTTP)
+* [Assistants API Reference](../assistants-reference.md)
* Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). * [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants)
ai-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md
description: Provides guidance on how to set managed identity with Microsoft Entra ID Previously updated : 06/24/2022-- Last updated : 02/29/2024++ recommendations: false
More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your OpenAI resource using Microsoft Entra ID.
-In the following sections, you'll use the Azure CLI to assign roles, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI.
+In the following sections, you'll use the Azure CLI to sign in, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI.
## Prerequisites
In the following sections, you'll use the Azure CLI to assign roles, and obtain
../../cognitive-services-custom-subdomains.md) - Azure CLI - [Installation Guide](/cli/azure/install-azure-cli)-- The following Python libraries: os, requests, json
+- The following Python libraries: os, requests, json, openai, azure-identity
+
+## Assign yourself to the Cognitive Services User role
+
+Assign yourself the [Cognitive Services User](role-based-access-control.md#cognitive-services-contributor) role so that you can use your account to make Azure OpenAI API calls rather than key-based auth. After you make this change, it can take up to 5 minutes before it takes effect.
## Sign into the Azure CLI
-To sign-in to the Azure CLI, run the following command and complete the sign-in. You may need to do it again if your session has been idle for too long.
+To sign-in to the Azure CLI, run the following command and complete the sign-in. You might need to do it again if your session has been idle for too long.
```azurecli az login ```
-## Assign yourself to the Cognitive Services User role
-
-Assigning yourself to the "Cognitive Services User" role will allow you to use your account for access to the specific Azure AI services resource.
-
-1. Get your user information
-
- ```azurecli
- export user=$(az account show --query "user.name" -o tsv)
- ```
+## Chat Completions
+
+```python
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
+from openai import AzureOpenAI
+
+token_provider = get_bearer_token_provider(
+ DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
+)
+
+client = AzureOpenAI(
+ api_version="2024-02-15-preview",
+ azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
+ azure_ad_token_provider=token_provider
+)
+
+response = client.chat.completions.create(
+ model="gpt-35-turbo-0125", # model = "deployment_name".
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+)
+
+print(response.choices[0].message.content)
+```
-2. Assign yourself to ΓÇ£Cognitive Services UserΓÇ¥ role.
+## Querying Azure OpenAI with the control plane API
- ```azurecli
- export resourceId=$(az group show -g $RG --query "id" -o tsv)
- az role assignment create --role "Cognitive Services User" --assignee $user --scope $resourceId
- ```
+```python
+import requests
+import json
+from azure.identity import DefaultAzureCredential
- > [!NOTE]
- > Role assignment change will take ~5 mins to become effective.
+region = "eastus"
+token_credential = DefaultAzureCredential()
+subscriptionId = "{YOUR-SUBSCRIPTION-ID}"
-3. Acquire a Microsoft Entra access token. Access tokens expire in one hour. you'll then need to acquire another one.
- ```azurecli
- export accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com --query "accessToken" -o tsv)
- ```
+token = token_credential.get_token('https://management.azure.com/.default')
+headers = {'Authorization': 'Bearer ' + token.token}
-4. Make an API call
+url = f"https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{region}/models?api-version=2023-05-01"
-Use the access token to authorize your API call by setting the `Authorization` header value.
+response = requests.get(url, headers=headers)
+data = json.loads(response.text)
-```bash
-curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
--H "Content-Type: application/json" \--H "Authorization: Bearer $accessToken" \--d '{ "prompt": "Once upon a time" }'
+print(json.dumps(data, indent=4))
``` ## Authorize access to managed identities
ai-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-custom-neural-voice.md
# Migrate from custom voice to custom neural voice > [!IMPORTANT]
-> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021 then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
->
-> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsable "Deprecated" section. Custom voice (non-neural training) is referred as **Custom**.
+> The standard non-neural training tier of custom voice is retired as of February 29, 2024. You could have used a non-neural custom voice with your Speech resource prior to February 29, 2024. Now you can only use custom neural voice with your Speech resources. If you have a non-neural custom voice, you must migrate to custom neural voice.
The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users benefit from the latest Text to speech technology, in a responsible way.
ai-services Migrate V2 To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v2-to-v3.md
# Migrate code from v2.0 to v3.0 of the REST API > [!IMPORTANT]
-> The Speech to text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the Speech to text REST API v3.2. Complete the steps in this article and then see the Speech to text REST API [v3.0 to v3.1](migrate-v3-0-to-v3-1.md) and [v3.1 to v3.2](migrate-v3-1-to-v3-2.md) migration guides for additional requirements.
+> The Speech to text REST API v2.0 is retired as of February 29, 2024. Please migrate your applications to the Speech to text REST API v3.2. Complete the steps in this article and then see the Speech to text REST API [v3.0 to v3.1](migrate-v3-0-to-v3-1.md) and [v3.1 to v3.2](migrate-v3-1-to-v3-2.md) migration guides for additional requirements.
## Forward compatibility
ai-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migration-overview-neural-voice.md
We're retiring two features from [text to speech](index-text-to-speech.yml) capa
## Custom voice (non-neural training) > [!IMPORTANT]
-> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021 then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
->
-> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsable "Deprecated" section. Custom voice (non-neural training) is referred as **Custom**.
+> The standard non-neural training tier of custom voice is retired as of February 29, 2024. You could have used a non-neural custom voice with your Speech resource prior to February 29, 2024. Now you can only use custom neural voice with your Speech resources. If you have a non-neural custom voice, you must migrate to custom neural voice.
Go to [this article](how-to-migrate-to-custom-neural-voice.md) to learn how to migrate to custom neural voice.
ai-services Quickstart Text Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-rest-api.md
Add the following code sample to your `index.js` file. **Make sure you update th
params: { 'api-version': '3.0', 'from': 'en',
- 'to': ['fr', 'zu']
+ 'to': 'fr,zu'
}, data: [{ 'text': 'I would really like to drive your car around the block a few times!'
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
To create a compute instance in Azure AI Studio:
- **Assign to another user**: You can create a compute instance on behalf of another user. Note that a compute instance can't be shared. It can only be used by a single assigned user. By default, it will be assigned to the creator and you can change this to a different user. - **Assign a managed identity**: You can attach system assigned or user assigned managed identities to grant access to resources. The name of the created system managed identity will be in the format `/workspace-name/computes/compute-instance-name` in your Microsoft Entra ID. - **Enable SSH access**: Enter credentials for an administrator user account that will be created on each compute node. These can be used to SSH to the compute nodes.
-Note that disabling SSH prevents SSH access from the public internet. But when a private virtual network is used, users can still SSH from within the virtual network.
+Note that disabling SSH prevents SSH access from the public internet. When a private virtual network is used, users can still SSH from within the virtual network.
- **Enable virtual network**: - If you're using an Azure Virtual Network, specify the Resource group, Virtual network, and Subnet to create the compute instance inside an Azure Virtual Network. You can also select No public IP to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these network requirements for virtual network setup. - If you're using a managed virtual network, the compute instance is created inside the managed virtual network. You can also select No public IP to prevent the creation of a public IP address. For more information, see managed compute with a managed network.
You can start or stop a compute instance from the Azure AI Studio.
## Next steps - [Create and manage prompt flow runtimes](./create-manage-runtime.md)-- [Vulnerability management](../concepts/vulnerability-management.md)
+- [Vulnerability management](../concepts/vulnerability-management.md)
aks Ai Toolchain Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ai-toolchain-operator.md
+
+ Title: Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (preview)
+description: Learn how to enable the AI toolchain operator add-on on Azure Kubernetes Service (AKS) to simplify OSS AI model management and deployment.
++ Last updated : 02/28/2024++
+# Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (preview)
+
+The AI toolchain operator (KAITO) is a managed add-on for AKS that simplifies the experience of running OSS AI models on your AKS clusters. The AI toolchain operator automatically provisions the necessary GPU nodes and sets up the associated inference server as an endpoint for your AI models. Using this add-on reduces your onboarding time and enables you to focus on AI model usage and development rather than infrastructure setup.
+
+This article shows you how to enable the AI toolchain operator add-on and deploy an AI model on AKS.
++
+## Before you begin
+
+* This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for AKS](./concepts-clusters-workloads.md).
+* For ***all hosted model inference images*** and recommended infrastructure setup, see the [KAITO GitHub repository](https://github.com/Azure/kaito).
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ * If you have multiple Azure subscriptions, make sure you select the correct subscription, in which the resources will be created and charged, by using the [az account set][az-account-set] command.
+
+ > [!NOTE]
+ > The subscription you use must have GPU VM quota.
+
+* Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* The Kubernetes command-line client, kubectl, installed and configured. For more information, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
+* [Install the Azure CLI AKS preview extension](#install-the-azure-cli-preview-extension).
+* [Register the AI toolchain operator add-on feature flag](#register-the-ai-toolchain-operator-add-on-feature-flag).
+
+### Install the Azure CLI preview extension
+
+1. Install the Azure CLI preview extension using the [az extension add][az-extension-add] command.
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+2. Update the extension to make sure you have the latest version using the [az extension update][az-extension-update] command.
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
+
+### Register the AI toolchain operator add-on feature flag
+
+1. Register the AIToolchainOperatorPreview feature flag using the [az feature register][az-feature-register] command.
+
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "AIToolchainOperatorPreview"
+ ```
+
+ It takes a few minutes for the registration to complete.
+
+2. Verify the registration using the [az feature show][az-feature-show] command.
+
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "AIToolchainOperatorPreview"
+ ```
+
+### Export environment variables
+
+* To simplify the configuration steps in this article, you can define environment variables using the following commands. Make sure to replace the placeholder values with your own.
+
+ ```azurecli-interactive
+ export AZURE_SUBSCRIPTION_ID="mySubscriptionID"
+ export AZURE_RESOURCE_GROUP="myResourceGroup"
+ export AZURE_LOCATION="myLocation"
+ export CLUSTER_NAME="myClusterName"
+ ```
+
+## Enable the AI toolchain operator add-on on an AKS cluster
+
+The following sections describe how to create an AKS cluster with the AI toolchain operator add-on enabled and deploy a default hosted AI model.
+
+### Create an AKS cluster with the AI toolchain operator add-on enabled
+
+1. Create an Azure resource group using the [az group create][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name ${AZURE_RESOURCE_GROUP} --location ${AZURE_LOCATION}
+ ```
+
+2. Create an AKS cluster with the AI toolchain operator add-on enabled using the [az aks create][az-aks-create] command with the `--enable-ai-toolchain-operator` and `--enable-oidc-issuer` flags.
+
+ ```azurecli-interactive
+ az aks create --location ${AZURE_LOCATION} \
+ --resource-group ${AZURE_RESOURCE_GROUP} \
+ --name ${CLUSTER_NAME} \
+ --enable-oidc-issuer \
+ --enable-ai-toolchain-operator
+ ```
+
+ > [!NOTE]
+ > AKS creates a managed identity once you enable the AI toolchain operator add-on. The managed identity is used to create GPU node pools in the managed AKS cluster. You must manually set the proper permissions for it by completing the steps in the following sections.
+ >
+ > AI toolchain operator enablement requires the enablement of OIDC issuer.
+
+3. On an existing AKS cluster, you can enable the AI toolchain operator add-on using the [az aks update][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update --name ${CLUSTER_NAME} \
+ --resource-group ${AZURE_RESOURCE_GROUP} \
+ --enable-oidc-issuer \
+ --enable-ai-toolchain-operator
+ ```
+
+## Connect to your cluster
+
+1. Configure `kubectl` to connect to your cluster using the [az aks get-credentials][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group ${AZURE_RESOURCE_GROUP} --name ${CLUSTER_NAME}
+ ```
+
+2. Verify the connection to your cluster using the `kubectl get` command.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+## Export environment variables
+
+* Export environment variables for the MC resource group, principal ID identity, and KAITO identity using the following commands:
+
+ ```azurecli-interactive
+ export MC_RESOURCE_GROUP=$(az aks show --resource-group ${AZURE_RESOURCE_GROUP} \
+ --name ${CLUSTER_NAME} \
+ --query nodeResourceGroup \
+ -o tsv)
+ export PRINCIPAL_ID=$(az identity show --name "ai-toolchain-operator-${CLUSTER_NAME}" \
+ --resource-group "${MC_RESOURCE_GROUP}" \
+ --query 'principalId' \
+ -o tsv)
+ export KAITO_IDENTITY_NAME="ai-toolchain-operator-${CLUSTER_NAME}"
+ ```
+
+## Get the AKS OpenID Connect (OIDC) Issuer
+
+* Get the AKS OIDC Issuer URL and export it as an environment variable:
+
+ ```azurecli-interactive
+ export AKS_OIDC_ISSUER=$(az aks show --resource-group "${AZURE_RESOURCE_GROUP}" \
+ --name "${CLUSTER_NAME}" \
+ --query "oidcIssuerProfile.issuerUrl" \
+ -o tsv)
+ ```
+
+## Create role assignment for the service principal
+
+* Create a new role assignment for the service principal using the [az role assignment create][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+ az role assignment create --role "Contributor" \
+ --assignee "${PRINCIPAL_ID}" \
+ --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
+ ```
+
+## Establish a federated identity credential
+
+* Create the federated identity credential between the managed identity, AKS OIDC issuer, and subject using the [az identity federated-credential create][az-identity-federated-credential-create] command.
+
+ ```azurecli-interactive
+ az identity federated-credential create --name "kaito-federated-identity" \
+ --identity-name "${KAITO_IDENTITY_NAME}" \
+ -g "${MC_RESOURCE_GROUP}" \
+ --issuer "${AKS_OIDC_ISSUER}" \
+ --subject system:serviceaccount:"kube-system:kaito-gpu-provisioner" \
+ --audience api://AzureADTokenExchange
+ ```
+
+## Verify that your deployment is running
+
+1. Restart the KAITO GPU provisioner deployment on your pods using the `kubectl rollout restart` command:
+
+ ```azurecli-interactive
+ kubectl rollout restart deployment/kaito-gpu-provisioner -n kube-system
+ ```
+
+2. Verify that the deployment is running using the `kubectl get` command:
+
+ ```azurecli-interactive
+ kubectl get deployment -n kube-system | grep kaito
+ ```
+
+## Deploy a default hosted AI model
+
+1. Deploy the Falcon 7B model YAML file from the GitHub repository using the `kubectl apply` command.
+
+ ```azurecli-interactive
+ kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/kaito_workspace_falcon_7b.yaml
+ ```
+
+2. Track the live resource changes in your workspace using the `kubectl get` command.
+
+ ```azurecli-interactive
+ kubectl get workspace workspace-falcon-7b -w
+ ```
+
+ > [!NOTE]
+ > As you track the live resource changes in your workspace, note that machine readiness can take up to 10 minutes, and workspace readiness up to 20 minutes.
+
+3. Check your service and get the service IP address using the `kubectl get svc` command.
+
+ ```azurecli-interactive
+ export SERVICE_IP=$(kubectl get svc workspace-falcon-7b -o jsonpath='{.spec.clusterIP}')
+ ```
+
+4. Run the Falcon 7B model with a sample input of your choice using the following `curl` command:
+
+ ```azurecli-interactive
+ kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -X POST http://$SERVICE_IP/chat -H "accept: application/json" -H "Content-Type: application/json" -d '{"prompt":"YOUR QUESTION HERE"}'
+ ```
+
+## Clean up resources
+
+If you no longer need these resources, you can delete them to avoid incurring extra Azure charges.
+
+* Delete the resource group and its associated resources using the [az group delete][az-group-delete] command.
+
+ ```azurecli-interactive
+ az group delete --name "${AZURE_RESOURCE_GROUP}" --yes --no-wait
+ ```
+
+## Next steps
+
+For more inference model options, see the [KAITO GitHub repository](https://github.com/Azure/kaito).
+
+<!-- LINKS -->
+[az-group-create]: /cli/azure/group#az_group_create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az_identity_federated_credential_create
+[az-account-set]: /cli/azure/account#az_account_set
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-show]: /cli/azure/feature#az_feature_show
+[az-provider-register]: /cli/azure/provider#az_provider_register
aks Istio About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md
Title: Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+ Title: Istio-based service mesh add-on for Azure Kubernetes Service
description: Istio-based service mesh add-on for Azure Kubernetes Service. Last updated 04/09/2023 +
-# Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+# Istio-based service mesh add-on for Azure Kubernetes Service
[Istio][istio-overview] addresses the challenges developers and operators face with a distributed or microservices architecture. The Istio-based service mesh add-on provides an officially supported and tested integration for Azure Kubernetes Service (AKS). - ## What is a Service Mesh? Modern applications are typically architected as distributed collections of microservices, with each collection of microservices performing some discrete business function. A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term **service mesh** describes both the type of software you use to implement this pattern, and the security or network domain that is created when you use that software.
This service mesh add-on uses and builds on top of open-source Istio. The add-on
Istio-based service mesh add-on for AKS has the following limitations: * The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about]. * The add-on doesn't work on AKS clusters that have Istio installed on them already outside the add-on installation.
-* Managed lifecycle of mesh on how Istio versions are installed and later made available for upgrades.
+* The add-on doesn't support adding pods associated with virtual nodes under the mesh.
* Istio doesn't support Windows Server containers. * Customization of mesh based on the following custom resources is blocked for now - `EnvoyFilter, ProxyConfig, WorkloadEntry, WorkloadGroup, Telemetry, IstioOperator, WasmPlugin`
+* The Gateway API for the Istio ingress gateway or for managing mesh traffic (GAMMA) isn't currently supported with the Istio add-on.
## Next steps
aks Istio Deploy Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md
Title: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
-description: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+ Title: Deploy Istio-based service mesh add-on for Azure Kubernetes Service
+description: Deploy Istio-based service mesh add-on for Azure Kubernetes Service
Last updated 04/09/2023 +
-# Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+# Deploy Istio-based service mesh add-on for Azure Kubernetes Service
This article shows you how to install the Istio-based service mesh add-on for Azure Kubernetes Service (AKS) cluster. For more information on Istio and the service mesh add-on, see [Istio-based service mesh add-on for Azure Kubernetes Service][istio-about]. - ## Before you begin ### Set environment variables
export RESOURCE_GROUP=<resource-group-name>
export LOCATION=<location> ```
-### Verify Azure CLI and aks-preview extension versions
-The add-on requires:
-* Azure CLI version 2.49.0 or later installed. To install or upgrade, see [Install Azure CLI][azure-cli-install].
-* `aks-preview` Azure CLI extension of version 0.5.163 or later installed
-
-You can run `az --version` to verify above versions.
-
-To install the aks-preview extension, run the following command:
-
-```azurecli-interactive
-az extension add --name aks-preview
-```
-Run the following command to update to the latest version of the extension released:
+### Verify Azure CLI version
-```azurecli-interactive
-az extension update --name aks-preview
-```
+The add-on requires Azure CLI version 2.57.0 or later. Run `az --version` to check your version. To install or upgrade, see [Install Azure CLI][azure-cli-install].
## Install Istio add-on at the time of cluster creation
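As a minimal sketch of this step (the `--enable-azure-service-mesh` flag, the `az aks mesh` command group, and the `CLUSTER` variable are assumptions for illustration, not taken from this update):

```azurecli-interactive
# Sketch only: cluster-name variable and mesh flag/command group are assumptions
export CLUSTER=<cluster-name>

# Create a new cluster with the Istio add-on enabled
az aks create --resource-group ${RESOURCE_GROUP} --name ${CLUSTER} --enable-azure-service-mesh --generate-ssh-keys

# Or enable the add-on on an existing cluster
az aks mesh enable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
```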
Confirm the `istiod` pod has a status of `Running`. For example:
``` NAME READY STATUS RESTARTS AGE
-istiod-asm-1-17-74f7f7c46c-xfdtl 1/1 Running 0 2m
+istiod-asm-1-18-74f7f7c46c-xfdtl 1/1 Running 0 2m
``` ## Enable sidecar injection
istiod-asm-1-17-74f7f7c46c-xfdtl 1/1 Running 0 2m
To automatically install sidecar to any new pods, annotate your namespaces: ```bash
-kubectl label namespace default istio.io/rev=asm-1-17
+kubectl label namespace default istio.io/rev=asm-1-18
``` > [!IMPORTANT]
-> The default `istio-injection=enabled` labeling doesn't work. Explicit versioning (`istio.io/rev=asm-1-17`) is required.
+> The default `istio-injection=enabled` labeling doesn't work. Explicit versioning (`istio.io/rev=asm-1-18`) is required.
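To confirm the revision label took effect, a standard `kubectl` check (not part of the original article) is:

```bash
# Verify the namespace carries the expected revision label
kubectl get namespace default --show-labels
```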
For manual injection of sidecar using `istioctl kube-inject`, you need to specify extra parameters for `istioNamespace` (`-i`) and `revision` (`-r`). Example: ```bash
-kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r asm-1-17) -n foo
+kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r asm-1-18) -n foo
``` ## Deploy sample application
kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r as
Use `kubectl apply` to deploy the sample application on the cluster: ```bash
-kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
+kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/bookinfo/platform/kube/bookinfo.yaml
``` Confirm several deployments and services are created on your cluster. For example:
To test this sample application against ingress, check out [next-steps](#next-st
Use `kubectl delete` to delete the sample application: ```bash
-kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
+kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/bookinfo/platform/kube/bookinfo.yaml
``` If you don't intend to enable Istio ingress on your cluster and want to disable the Istio add-on, run the following command:
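A minimal sketch of that disable step (the `az aks mesh disable` command and the `CLUSTER` variable are assumptions, not taken from this update):

```azurecli-interactive
# Sketch only: assumes the az aks mesh command group
az aks mesh disable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
```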
az group delete --name ${RESOURCE_GROUP} --yes --no-wait
[uninstall-osm-addon]: open-service-mesh-uninstall-add-on.md [uninstall-istio-oss]: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
-[istio-deploy-ingress]: istio-deploy-ingress.md
+[istio-deploy-ingress]: istio-deploy-ingress.md
aks Istio Deploy Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md
Title: Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on (preview)
-description: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
+ Title: Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on
+description: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service
-+ Last updated 08/07/2023-+
-# Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on deployment (preview)
+# Azure Kubernetes Service (AKS) external or internal ingresses for Istio service mesh add-on deployment
This article shows you how to deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (AKS) cluster. - ## Prerequisites This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster, deploy a sample application and set environment variables.
aks Istio Meshconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-meshconfig.md
Title: Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview)
-description: Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+ Title: Configure Istio-based service mesh add-on for Azure Kubernetes Service
+description: Configure Istio-based service mesh add-on for Azure Kubernetes Service
Last updated 02/14/2024 +
-# Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+# Configure Istio-based service mesh add-on for Azure Kubernetes Service
Open-source Istio uses [MeshConfig][istio-meshconfig] to define mesh-wide settings for the Istio service mesh. Istio-based service mesh add-on for AKS builds on top of MeshConfig and classifies different properties as supported, allowed, and blocked. This article walks through how to configure Istio-based service mesh add-on for Azure Kubernetes Service and the support policy applicable for such configuration. - ## Prerequisites This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster.
aks Istio Plugin Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md
Title: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
-description: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
+ Title: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service
+description: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service
Last updated 12/04/2023++
-# Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
+# Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service
-In the Istio-based service mesh addon for Azure Kubernetes Service (preview), by default the Istio certificate authority (CA) generates a self-signed root certificate and key and uses them to sign the workload certificates. To protect the root CA key, you should use a root CA, which runs on a secure machine offline. You can use the root CA to issue intermediate certificates to the Istio CAs that run in each cluster. An Istio CA can sign workload certificates using the administrator-specified certificate and key, and distribute an administrator-specified root certificate to the workloads as the root of trust. This article addresses how to bring your own certificates and keys for Istio CA in the Istio-based service mesh add-on for Azure Kubernetes Service.
+In the Istio-based service mesh addon for Azure Kubernetes Service, by default the Istio certificate authority (CA) generates a self-signed root certificate and key and uses them to sign the workload certificates. To protect the root CA key, you should use a root CA, which runs on a secure machine offline. You can use the root CA to issue intermediate certificates to the Istio CAs that run in each cluster. An Istio CA can sign workload certificates using the administrator-specified certificate and key, and distribute an administrator-specified root certificate to the workloads as the root of trust. This article addresses how to bring your own certificates and keys for Istio CA in the Istio-based service mesh add-on for Azure Kubernetes Service.
[ ![Diagram that shows root and intermediate CA with Istio.](./media/istio/istio-byo-ca.png) ](./media/istio/istio-byo-ca.png#lightbox) This article addresses how you can configure the Istio certificate authority with a root certificate, signing certificate and key provided as inputs using Azure Key Vault to the Istio-based service mesh add-on. - ## Before you begin
-### Verify Azure CLI and aks-preview extension versions
-
-The add-on requires:
-* Azure CLI version 2.49.0 or later installed. To install or upgrade, see [Install Azure CLI][install-azure-cli].
-* `aks-preview` Azure CLI extension of version 0.5.163 or later installed
-
-You can run `az --version` to verify above versions.
-
-To install the aks-preview extension, run the following command:
-
-```azurecli-interactive
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
+### Verify Azure CLI version
-```azurecli-interactive
-az extension update --name aks-preview
-```
+The add-on requires Azure CLI version 2.57.0 or later. Run `az --version` to check your version. To install or upgrade, see [Install Azure CLI][azure-cli-install].
### Set up Azure Key Vault
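The Key Vault setup steps are unchanged in this update; as a minimal sketch (vault name, secret names, and file names are placeholders and assumptions, not taken from the article), provisioning the vault and loading the CA material might look like this:

```azurecli-interactive
# Sketch only: names and secret layout are illustrative
az keyvault create --resource-group <resource-group-name> --location <location> --name <key-vault-name>

# Store the plug-in CA inputs as secrets (file names are placeholders)
az keyvault secret set --vault-name <key-vault-name> --name root-cert --file root-cert.pem
az keyvault secret set --vault-name <key-vault-name> --name ca-cert --file ca-cert.pem
az keyvault secret set --vault-name <key-vault-name> --name ca-key --file ca-key.pem
az keyvault secret set --vault-name <key-vault-name> --name cert-chain --file cert-chain.pem
```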
aks Istio Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md
Title: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
-description: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview).
+ Title: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service
+description: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service
Last updated 05/04/2023-++
-# Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+# Upgrade Istio-based service mesh add-on for Azure Kubernetes Service
This article addresses upgrade experiences for Istio-based service mesh add-on for Azure Kubernetes Service (AKS).
aks Quick Kubernetes Deploy Azd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-azd.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the AZD CLI. Last updated 02/06/2024-+ #Customer intent: As a developer or cluster operator, I want to deploy an AKS cluster and deploy an application so I can see how to run applications using the managed Kubernetes service in Azure.
The Azure Development Template contains all the code needed to create the servic
Run `azd auth login` 1. Copy the device code that appears.
-2. Hit enter to open in a new tab the auth portal.
-3. Enter in your Microsoft Credentials in the new page.
-4. Confirm that it's you trying to connect to Azure CLI. If you encounter any issues, skip to the Troubleshooting section.
-5. Verify the message "Device code authentication completed. Logged in to Azure." appears in your original terminal.
+1. Press Enter to open the auth portal in a new tab.
+1. Enter your Microsoft credentials on the new page.
+1. Confirm that it's you trying to connect to Azure CLI. If you encounter any issues, skip to the Troubleshooting section.
+1. Verify the message "Device code authentication completed. Logged in to Azure." appears in your original terminal.
-### Troubleshooting: Can't Connect to Localhost
-
-Certain Azure security policies cause conflicts when trying to sign in. As a workaround, you can perform a curl request to the localhost url you were redirected to after you logged in.
-
-The workaround requires the Azure CLI for authentication. If you don't have it or aren't using GitHub Codespaces, install the [Azure CLI][install-azure-cli].
-
-1. Inside a terminal, run `az login --scope https://graph.microsoft.com/.default`
-2. Copy the "localhost" URL from the failed redirect
-3. In a new terminal window, type `curl` and paste your url
-4. If it works, code for a webpage saying "You have logged into Microsoft Azure!" appears
-5. Close the terminal and go back to the old terminal
-6. Copy and note down which subscription_id you want to use
-7. Paste in the subscription_ID to the command `az account set -n {sub}`
- If you have multiple Azure subscriptions, select the appropriate subscription for billing using the [az account set](/cli/azure/account#az-account-set) command.
The workaround requires the Azure CLI for authentication. If you don't have it o
The step can take longer depending on your internet speed. 1. Create all your resources with the `azd up` command.
-2. Select which Azure subscription and region for your AKS Cluster.
-3. Wait as azd automatically runs the commands for pre-provision and post-provision steps.
-4. At the end, your output shows the newly created deployments and services.
+1. Select which Azure subscription and region for your AKS Cluster.
+1. Wait as azd automatically runs the commands for pre-provision and post-provision steps.
+1. At the end, your output shows the newly created deployments and services.
```output deployment.apps/rabbitmq created
The step can take longer depending on your internet speed.
When your application is created, a Kubernetes service exposes the application's front end service to the internet. This process can take a few minutes to complete. Once completed, follow these steps to verify and test the application by opening the store-front page.
+1. Set your namespace to the demo namespace `pets` with the `kubectl config set-context` command.
+
+ ```console
+ kubectl config set-context --current --namespace=pets
+ ```
+ 1. View the status of the deployed pods with the [kubectl get pods][kubectl-get] command. Check that all pods are in the `Running` state before proceeding:
When your application is created, a Kubernetes service exposes the application's
Once on the store page, you can add new items to your cart and check them out. To verify, visit the Azure Service in your portal to view the records of the transactions for your store app.
-<!-- Image of Storefront Checkout -->
- ## Delete the cluster Once you're finished with the quickstart, remember to clean up all your resources to avoid Azure charges.
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the A
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the Azure portal. Previously updated : 01/11/2024 Last updated : 03/01/2024 #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
:::image type="content" source="media/quick-kubernetes-deploy-portal/create-node-pool-linux.png" alt-text="Screenshot showing how to create a node pool running Ubuntu Linux." lightbox="media/quick-kubernetes-deploy-portal/create-node-pool-linux.png":::
-1. Leave all settings on the other tabs set to their defaults.
+1. Leave all settings on the other tabs set to their defaults, except for the settings on the **Monitoring** tab. By default, the [Azure Monitor features][azure-monitor-features-containers] Container insights, Azure Monitor managed service for Prometheus, and Azure Managed Grafana are enabled. You can save costs by disabling them.
1. Select **Review + create** to run validation on the cluster configuration. After validation completes, select **Create** to create the AKS cluster.
To learn more about AKS and walk through a complete code-to-deployment example,
[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[azure-monitor-features-containers]: ../../azure-monitor/containers/container-insights-overview.md
aks Tutorial Kubernetes Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md
Title: Kubernetes on Azure tutorial - Deploy an application to Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using images stored in Azure Container Registry. Previously updated : 11/02/2023- Last updated : 02/20/2023+ #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
In this tutorial, part four of seven, you deploy a sample application into a Kub
## Before you begin
-In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, and created a Kubernetes cluster. To complete this tutorial, you need the pre-created `aks-store-quickstart.yaml` Kubernetes manifest file. This file download was included with the application source code in a previous tutorial. Make sure you cloned the repo and changed directories into the cloned repo. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
+In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, and created a Kubernetes cluster. To complete this tutorial, you need the precreated `aks-store-quickstart.yaml` Kubernetes manifest file. This file was downloaded with the application source code in [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
### [Azure CLI](#tab/azure-cli)
-This tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This tutorial requires Azure CLI version 2.0.53 or later. Check your version with `az --version`. To install or upgrade, see [Install Azure CLI][azure-cli-install].
### [Azure PowerShell](#tab/azure-powershell)
-This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+This tutorial requires Azure PowerShell version 5.9.0 or later. Check your version with `Get-InstalledModule -Name Az`. To install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+
+### [Azure Developer CLI](#tab/azure-azd)
+
+This tutorial requires Azure Developer CLI (AZD) version 1.5.1 or later. Check your version with `azd version`. To install or upgrade, see [Install Azure Developer CLI][azure-azd-install].
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table ```
-2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`:
+2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`.
```azurecli-interactive vi aks-store-quickstart.yaml
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
(Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer ```
-2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`:
+2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`.
```azurepowershell-interactive vi aks-store-quickstart.yaml
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
4. Save and close the file. In `vi`, use `:wq`. +
+### [Azure Developer CLI](#tab/azure-azd)
+
+AZD doesn't require a separate container registry step because the container registry is defined in the template.
+
-## Deploy the application
+## Run the application
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Deploy the application using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects.
+
+ ```console
+ kubectl apply -f aks-store-quickstart.yaml
+ ```
+
+ The following example output shows the resources successfully created in the AKS cluster:
+
+ ```output
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
+ ```
+
+2. Check that the deployment succeeded by viewing the pods with the `kubectl get pods` command:
+
+ ```console
+ kubectl get pods
+ ```
+
+### [Azure PowerShell](#tab/azure-powershell)
-* Deploy the application using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects.
+1. Deploy the application using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects.
```console kubectl apply -f aks-store-quickstart.yaml
In these tutorials, your Azure Container Registry (ACR) instance stores the cont
service/store-front created ```
+2. Check that the deployment succeeded by viewing the pods with the `kubectl get pods` command:
+
+ ```console
+ kubectl get pods
+ ```
+
+### [Azure Developer CLI](#tab/azure-azd)
+
+Deployment in AZD is broken down into multiple stages represented by hooks. Run `azd up` as an all-in-one command.
+
+When you first run `azd up`, you're prompted to select the subscription and region that host your Azure resources.
+
+You can update the `AZURE_LOCATION` and `AZURE_SUBSCRIPTION_ID` variables in the `.azure/<your-env-name>/.env` file.
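For reference, the relevant entries in that file look roughly like this (values are placeholders):

```console
# .azure/<your-env-name>/.env (placeholder values)
AZURE_LOCATION="eastus"
AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
```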
+++ ## Test the application When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+### Command Line
+ 1. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument. ```console kubectl get service store-front --watch ```
- Initially, the `EXTERNAL-IP` for the *store-front* service shows as *pending*.
+ Initially, the `EXTERNAL-IP` for the *store-front* service shows as *pending*:
```output store-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s
When the application runs, a Kubernetes service exposes the application front en
3. View the application in action by opening a web browser to the external IP address of your service.
+ :::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png":::
+ If the application doesn't load, it might be an authorization problem with your image registry. To view the status of your containers, use the `kubectl get pods` command. If you can't pull the container images, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](cluster-container-registry-integration.md).
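A quick way to investigate that situation with standard `kubectl` commands (the pod name is a placeholder):

```console
kubectl get pods
# Inspect a failing pod for ImagePullBackOff or authorization errors
kubectl describe pod <pod-name>
```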
+### Azure portal
+
+Navigate to the Azure portal to find your deployment information.
+
+1. Open your [Resource Group][azure-rg] in the Azure portal.
+1. Navigate to the Kubernetes service for your cluster.
+1. Select **Services and ingresses** under **Kubernetes resources**.
+1. Copy the external IP shown in the column for *store-front*.
+1. Paste the IP into your browser to visit your store page.
+
+ :::image type="content" source="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="./learn/media/quick-kubernetes-deploy-cli/aks-store-application.png":::
+ ## Next steps In this tutorial, you deployed a sample Azure application to a Kubernetes cluster in AKS. You learned how to: > [!div class="checklist"]
->
+>
> * Update a Kubernetes manifest file. > * Run an application in Kubernetes. > * Test the application.
In the next tutorial, you learn how to use PaaS services for stateful workloads
> [Use PaaS services for stateful workloads in AKS][aks-tutorial-paas] <!-- LINKS - external -->
+[azure-rg]: https://ms.portal.azure.com/#browse/resourcegroups
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get <!-- LINKS - internal --> [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md [az-acr-list]: /cli/azure/acr
+[azure-azd-install]: /azure/developer/azure-developer-cli/install-azd
[azure-cli-install]: /cli/azure/install-azure-cli [azure-powershell-install]: /powershell/azure/install-az-ps [get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
Title: Kubernetes on Azure tutorial - Deploy an Azure Kubernetes Service (AKS) cluster
-description: In this Azure Kubernetes Service (AKS) tutorial, you create an AKS cluster and use kubectl to connect to the Kubernetes main node.
+ Title: Kubernetes on Azure tutorial - Create an Azure Kubernetes Service (AKS) cluster
+description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to create an AKS cluster and use kubectl to connect to the Kubernetes main node.
Previously updated : 10/23/2023- Last updated : 02/14/2024+ #Customer intent: As a developer or IT pro, I want to learn how to create an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
-# Tutorial - Deploy an Azure Kubernetes Service (AKS) cluster
+# Tutorial - Create an Azure Kubernetes Service (AKS) cluster
Kubernetes provides a distributed platform for containerized applications. With Azure Kubernetes Service (AKS), you can quickly create a production ready Kubernetes cluster.
In this tutorial, part three of seven, you deploy a Kubernetes cluster in AKS. Y
## Before you begin
-In previous tutorials, you created a container image and uploaded it to an ACR instance. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
+In previous tutorials, you created a container image and uploaded it to an ACR instance. If you haven't completed these steps, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
-* If you're using Azure CLI, this tutorial requires that you're running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* If you're using Azure CLI, this tutorial requires that you're running the Azure CLI version 2.0.53 or later. Check your version with `az --version`. To install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Check your version with `Get-InstalledModule -Name Az`. To install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* If you're using Azure Developer CLI, this tutorial requires that you're running the Azure Developer CLI version 1.5.1 or later. Check your version with `azd version`. To install or upgrade, see [Install Azure Developer CLI][azure-azd-install].
To learn more about AKS and Kubernetes RBAC, see [Control access to cluster reso
### [Azure CLI](#tab/azure-cli)
-This tutorial requires Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This tutorial requires Azure CLI version 2.0.53 or later. Check your version with `az --version`. To install or upgrade, see [Install Azure CLI][azure-cli-install].
### [Azure PowerShell](#tab/azure-powershell)
-This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+This tutorial requires Azure PowerShell version 5.9.0 or later. Check your version with `Get-InstalledModule -Name Az`. To install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
--
-## Create an AKS cluster
-
-AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8s-rbac], which allows you to define access to resources based on roles assigned to users. Permissions are combined when users are assigned multiple roles. Permissions can be scoped to either a single namespace or across the whole cluster. For more information, see [Control access to cluster resources using Kubernetes RBAC and Azure Active Directory identities in AKS][aks-k8s-rbac].
-
-For information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions].
-
-> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least two nodes.
-
-### [Azure CLI](#tab/azure-cli)
-
-To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
-
-* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
-
- ```azurecli-interactive
- az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --generate-ssh-keys \
- --attach-acr <acrName>
- ```
-
- > [!NOTE]
- > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter.
-
-### [Azure PowerShell](#tab/azure-powershell)
+### [Azure Developer CLI](#tab/azure-azd)
-To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
-
-* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
-
- ```azurepowershell-interactive
- New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName>
- ```
-
- > [!NOTE]
- > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter.
+This tutorial requires Azure Developer CLI version 1.5.1 or later. Check your version with `azd version`. To install or upgrade, see [Install Azure Developer CLI][azure-azd-install].
-To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
-
-After a few minutes, the deployment completes and returns JSON-formatted information about the AKS deployment.
- ## Install the Kubernetes CLI You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes cluster. If you use the Azure Cloud Shell, `kubectl` is already installed. If you're running the commands locally, you can use the Azure CLI or Azure PowerShell to install `kubectl`.
You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes
Install-AzAksCliTool ```
+### [Azure Developer CLI](#tab/azure-azd)
+
+AZD environments in a codespace automatically download all dependencies found in `./devcontainer/devcontainer.json`. The Kubernetes CLI is listed in that file, along with any ACR images.
+
+* To install `kubectl` locally, use the [`az aks install-cli`][az aks install-cli] command.
+
+ ```azurecli
+ az aks install-cli
+ ```
+ ## Connect to cluster using kubectl
You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes
kubectl get nodes ```
- The following example output shows a list of the cluster nodes:
+ The following example output shows a list of the cluster nodes.
```output NAME STATUS ROLES AGE VERSION
You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes
kubectl get nodes ```
- The following example output shows a list of the cluster nodes:
+ The following example output shows a list of the cluster nodes.
```output NAME STATUS ROLES AGE VERSION
You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes
aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6 ```
+### [Azure Developer CLI](#tab/azure-azd)
+
+Signing in to your Azure account through AZD configures your credentials.
+
+1. Authenticate using AZD.
+
+ ```azurecli-interactive
+ azd auth login
+ ```
+
+2. Follow the directions for your auth method.
+
+3. Verify the connection to your cluster using the [`kubectl get nodes`][kubectl-get] command.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+ The following example output shows a list of the cluster nodes.
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
+ aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
+ ```
++++
+## Create an AKS cluster
+
+AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8s-rbac], which allows you to define access to resources based on roles assigned to users. Permissions are combined when users are assigned multiple roles. Permissions can be scoped to either a single namespace or across the whole cluster. For more information, see [Control access to cluster resources using Kubernetes RBAC and Microsoft Entra ID in AKS][aks-k8s-rbac].
+
+For information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions].
+
+> [!NOTE]
+> To ensure your cluster operates reliably, you should run at least two nodes.
+
+### [Azure CLI](#tab/azure-cli)
+
+To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
+
+* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --generate-ssh-keys \
+ --attach-acr <acrName>
+ ```
+
+ > [!NOTE]
+ > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter.
+
+To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
+
+* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
+
+ ```azurepowershell-interactive
+ New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName>
+ ```
+
+ > [!NOTE]
+ > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter.
+
+To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
+
+### [Azure Developer CLI](#tab/azure-azd)
+
+AZD packages the deployment of clusters with the application itself using `azd up`. This command is covered in the next tutorial.
+ ## Next steps
In the next tutorial, you learn how to deploy an application to your cluster.
[az aks create]: /cli/azure/aks#az_aks_create [az aks install-cli]: /cli/azure/aks#az_aks_install_cli [az aks get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[azure-azd-install]: /azure/developer/azure-developer-cli/install-azd
[azure-cli-install]: /cli/azure/install-azure-cli [container-registry-integration]: ./cluster-container-registry-integration.md [quotas-skus-regions]: quotas-skus-regions.md
aks Tutorial Kubernetes Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md
Title: Kubernetes on Azure tutorial - Prepare an application for Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS. Previously updated : 10/23/2023- Last updated : 02/15/2023+ #Customer intent: As a developer, I want to learn how to build a container-based application so that I can deploy the app to Azure Kubernetes Service.
To complete this tutorial, you need a local Docker development environment runni
> [!NOTE] > Azure Cloud Shell doesn't include the Docker components required to complete every step in these tutorials. Therefore, we recommend using a full Docker development environment. ++ ## Get application code The [sample application][sample-application] used in this tutorial is a basic store front app including the following Kubernetes deployments and
The [sample application][sample-application] used in this tutorial is a basic st
* **Order service**: Places orders. * **Rabbit MQ**: Message queue for an order queue. +
+### [Git](#tab/azure-cli)
+ 1. Use [git][] to clone the sample application to your development environment. ```console
The [sample application][sample-application] used in this tutorial is a basic st
cd aks-store-demo ``` +
+### [Azure Developer CLI](#tab/azure-azd)
+
+1. If you are using AZD locally, create an empty directory named `aks-store-demo` to host the azd template files.
+
+ ```azurecli
+ mkdir aks-store-demo
+ ```
+
+1. Change into the new directory to load all the files from the azd template.
+
+ ```azurecli
+ cd aks-store-demo
+ ```
+
+1. Run the Azure Developer CLI ([azd][]) `init` command, which clones the sample application into your empty directory.
+
+ Here, the `--template` flag is specified to point to the aks-store-demo application.
+
+ ```azurecli
+ azd init --template aks-store-demo
+ ```
+++ ## Review Docker Compose file
-The sample application you create in this tutorial uses the [*docker-compose-quickstart* YAML file](https://github.com/Azure-Samples/aks-store-demo/blob/main/docker-compose-quickstart.yml) in the [repository](https://github.com/Azure-Samples/aks-store-demo/tree/main) you cloned in the previous step.
+The sample application you create in this tutorial uses the [*docker-compose-quickstart* YAML file](https://github.com/Azure-Samples/aks-store-demo/blob/main/docker-compose-quickstart.yml) from the [repository](https://github.com/Azure-Samples/aks-store-demo/tree/main) you cloned.
```yaml version: "3.7"
networks:
driver: bridge ```
-## Create container images and run application
++
+## Create container images and run application
+
+### [Docker](#tab/azure-cli)
You can use [Docker Compose][docker-compose] to automate building container images and the deployment of multi-container applications.
-1. Create the container image, download the Redis image, and start the application using the `docker compose` command.
+### Docker
+
+1. Create the container image, download the Redis image, and start the application using the `docker compose` command:
```console docker compose -f docker-compose-quickstart.yml up -d
Since you validated the application's functionality, you can stop and remove the
docker compose down ``` +
+### [Azure Developer CLI](#tab/azure-azd)
+
+When you use AZD, there are no manual container image dependencies. AZD handles the provisioning, deployment, and cleanup of your applications and clusters with the `azd up` and `azd down` commands, similar to Docker.
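As a quick reference, the two lifecycle commands look like this (a minimal sketch; run them from the root of the cloned template):

```azurecli
azd up    # provision the infrastructure and deploy the application
azd down  # tear down the provisioned resources when you're finished
```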
+
+You can customize the preparation steps to use either Terraform or Bicep before deploying the cluster.
+
+1. The provider is selected in the `infra` section of your `azure.yaml` file. By default, this project uses Terraform.
+
+ ```yml
+ infra:
+ provider: terraform
+ path: infra/terraform
+    ```
+
+2. To select Bicep, change the provider and path from `terraform` to `bicep`:
+
+ ```yml
+ infra:
+ provider: bicep
+ path: infra/bicep
+ ```
++ ## Next steps
+### [Azure CLI](#tab/azure-cli)
+ In this tutorial, you created a sample application, created container images for the application, and then tested the application. You learned how to: > [!div class="checklist"]
In the next tutorial, you learn how to store container images in an ACR.
> [!div class="nextstepaction"] > [Push images to Azure Container Registry][aks-tutorial-prepare-acr]
+### [Azure Developer CLI](#tab/azure-azd)
+
+In this tutorial, you cloned a sample application using AZD. You learned how to:
+
+> [!div class="checklist"]
+> * Clone a sample azd template from GitHub.
+> * View where container images are used from the sample application source.
+
+In the next tutorial, you learn how to create a cluster using the azd template you cloned.
+
+> [!div class="nextstepaction"]
+> [Create an AKS Cluster][aks-tutorial-deploy-cluster]
+++ <!-- LINKS - external --> [docker-compose]: https://docs.docker.com/compose/ [docker-for-linux]: https://docs.docker.com/engine/installation/#supported-platforms
In the next tutorial, you learn how to store container images in an ACR.
[sample-application]: https://github.com/Azure-Samples/aks-store-demo <!-- LINKS - internal -->
-[aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md
+[aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md
+[aks-tutorial-deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
+[azd]: /azure/developer/azure-developer-cli/install-azd
api-center Enable Api Center Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-center-portal.md
If the user is assigned the role, there might be a problem with the registration
az provider register --namespace Microsoft.ApiCenter ```
+### Unable to sign in to portal
+
+If users who have been assigned the **Azure API Center Data Reader** role can't complete the sign-in flow after selecting **Sign in** in the API Center portal, there might be a problem with the configuration of the Microsoft Entra ID identity provider.
+
+In the Microsoft Entra app registration, review and, if needed, update the **Redirect URI** settings:
+
+* Platform: **Single-page application (SPA)**
+* URI: `https://<api-center-name>.portal.<region>.azure-apicenter.ms`. This value must be the URI shown for the Microsoft Entra ID provider for your API Center portal.
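If you prefer to script that fix, a hedged sketch using Microsoft Graph through `az rest` (the application object ID is a placeholder, and the single-entry body assumes no other SPA redirect URIs are configured):

```azurecli
az rest --method patch \
  --url "https://graph.microsoft.com/v1.0/applications/<app-object-id>" \
  --body '{"spa": {"redirectUris": ["https://<api-center-name>.portal.<region>.azure-apicenter.ms"]}}'
```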
+ ### Unable to select Azure API Center permissions in Microsoft Entra app registration If you're unable to request API permissions to Azure API Center in your Microsoft Entra app registration for the API Center portal, check that you are searching for **Azure API Center** (or application ID `c3ca1a77-7a87-4dba-b8f8-eea115ae4573`).
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
In [multi-regional deployments](api-management-howto-deploy-multi-region.md), ea
If your API Management service is inside a virtual network, it will have two types of IP addresses: public and private.
-* Public IP addresses are used for internal communication on port `3443` - for managing configuration (for example, through Azure Resource Manager). In the external VNet configuration, they are also used for runtime API traffic.
+* Public IP addresses are used for internal communication on port `3443` - for managing configuration (for example, through Azure Resource Manager). In the *external* VNet configuration, they are also used for runtime API traffic. In the *internal* VNet configuration, public IP addresses are only used for Azure internal management operations and don't expose your instance to the internet.
* Private virtual IP (VIP) addresses, available **only** in the [internal VNet mode](api-management-using-with-internal-vnet.md), are used to connect from within the network to API Management endpoints - gateways, the developer portal, and the management plane for direct API access. You can use them for setting up DNS records within the network.
In the Developer, Basic, Standard, and Premium tiers of API Management, the publ
* The service subscription is disabled or warned (for example, for nonpayment) and then reinstated. [Learn more about subscription states](/azure/cost-management-billing/manage/subscription-states) * (Developer and Premium tiers) Azure Virtual Network is added to or removed from the service. * (Developer and Premium tiers) API Management service is switched between external and internal VNet deployment mode.
-* (Developer and Premium tiers) API Management service is moved to a different subnet, or [migrated](migrate-stv1-to-stv2.md) from the `stv1` to the `stv2` compute platform..
+* (Developer and Premium tiers) API Management service is moved to a different subnet, or [migrated](migrate-stv1-to-stv2.md) from the `stv1` to the `stv2` compute platform.
* (Premium tier) [Availability zones](../reliability/migrate-api-mgt.md) are enabled, added, or removed. * (Premium tier) In [multi-regional deployments](api-management-howto-deploy-multi-region.md), the regional IP address changes if a region is vacated and then reinstated.
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
Last updated 10/27/2023 + # Extend the developer portal with custom widgets
app-service Configure Basic Auth Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-basic-auth-disable.md
Title: Disable basic authentication for deployment
description: Learn how to secure App Service deployment by disabling basic authentication. keywords: azure app service, security, deployment, FTP, MsDeploy Previously updated : 01/26/2024 Last updated : 02/29/2024
App Service provides basic authentication for FTP and WebDeploy clients to conne
## Disable basic authentication
+Two separate controls for basic authentication are available (see the CLI sketch after this list). Specifically:
+
+- For [FTP deployment](deploy-ftp.md), basic authentication is controlled by the `basicPublishingCredentialsPolicies/ftp` flag (**FTP Basic Auth Publishing Credentials** option in the portal).
+- For other deployment methods that use basic authentication, such as Visual Studio, local Git, and GitHub, basic authentication is controlled by the `basicPublishingCredentialsPolicies/scm` flag (**SCM Basic Auth Publishing Credentials** option in the portal).
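A minimal Azure CLI sketch of turning both flags off, assuming the `az resource update` approach against the `basicPublishingCredentialsPolicies` child resources (resource group and app names are placeholders):

```azurecli-interactive
# Disable FTP basic auth (placeholder names)
az resource update --resource-group <group-name> --name ftp --namespace Microsoft.Web \
  --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> --set properties.allow=false

# Disable SCM (WebDeploy/local Git/Kudu) basic auth
az resource update --resource-group <group-name> --name scm --namespace Microsoft.Web \
  --resource-type basicPublishingCredentialsPolicies --parent sites/<app-name> --set properties.allow=false
```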
+ ### [Azure portal](#tab/portal)
-1. In the [Azure portal], search for and select **App Services**, and then select your app.
+1. In the [Azure portal](https://portal.azure.com), search for and select **App Services**, and then select your app.
-1. In the app's left menu, select **Configuration**.
+1. In the app's left menu, select **Configuration** > **General settings**.
-1. For **Basic Auth Publishing Credentials**, select **Off**, then select **Save**.
+1. For **SCM Basic Auth Publishing Credentials** or **FTP Basic Auth Publishing Credentials**, select **Off**, then select **Save**.
:::image type="content" source="media/configure-basic-auth-disable/basic-auth-disable.png" alt-text="A screenshot showing how to disable basic authentication for Azure App Service in the Azure portal.":::
To confirm that Git access is blocked, try [local Git deployment](deploy-local-g
## Deployment without basic authentication
-When you disable basic authentication, deployment methods that depend on basic authentication stop working. The following table shows how various deployment methods behave when basic authentication is disabled, and if there's any fallback mechanism. For more information, see [Authentication types by deployment methods in Azure App Service](deploy-authentication-types.md).
+When you disable basic authentication, deployment methods that depend on basic authentication stop working.
+
+The following table shows how various deployment methods behave when basic authentication is disabled, and if there's any fallback mechanism. For more information, see [Authentication types by deployment methods in Azure App Service](deploy-authentication-types.md).
| Deployment method | When basic authentication is disabled | |-|-|
app-service Deploy Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-continuous-deployment.md
Title: Configure continuous deployment
description: Learn how to enable CI/CD to Azure App Service from GitHub, Bitbucket, Azure Repos, or other repos. Select the build pipeline that fits your needs. ms.assetid: 6adb5c84-6cf3-424e-a336-c554f23b4000 Previously updated : 01/26/2024 Last updated : 02/29/2024
You can customize the GitHub Actions build provider in these ways:
# [App Service Build Service](#tab/appservice) > [!NOTE]
-> App Service Build Service requires [basic authentication to be enabled](configure-basic-auth-disable.md) for the webhook to work. For more information, see [Deployment without basic authentication](configure-basic-auth-disable.md#deployment-without-basic-authentication).
+> App Service Build Service requires [SCM basic authentication to be enabled](configure-basic-auth-disable.md) for the webhook to work. For more information, see [Deployment without basic authentication](configure-basic-auth-disable.md#deployment-without-basic-authentication).
App Service Build Service is the deployment and build engine native to App Service, otherwise known as Kudu. When this option is selected, App Service adds a webhook into the repository you authorized. Any code push to the repository triggers the webhook, and App Service pulls the changes into its repository and performs any deployment tasks. For more information, see [Deploying from GitHub (Kudu)](https://github.com/projectkudu/kudu/wiki/Deploying-from-GitHub).
app-service Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ftp.md
description: Learn how to deploy your app to Azure App Service using FTP or FTPS
ms.assetid: ae78b410-1bc0-4d72-8fc4-ac69801247ae Previously updated : 01/26/2024 Last updated : 02/29/2024
or API app to [Azure App Service](./overview.md).
The FTP/S endpoint for your app is already active. No configuration is necessary to enable FTP/S deployment. > [!NOTE]
-> When [basic authentication is disabled](configure-basic-auth-disable.md), FTP/S deployment doesn't work, and you can't view or configure FTP credentials in the app's Deployment Center.
+> When [FTP basic authentication is disabled](configure-basic-auth-disable.md), FTP/S deployment doesn't work, and you can't view or configure FTP credentials in the app's Deployment Center.
## Get deployment credentials
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md
Title: Deploy from local Git repo
description: Learn how to enable local Git deployment to Azure App Service. One of the simplest ways to deploy code from your local machine. ms.assetid: ac50a623-c4b8-4dfd-96b2-a09420770063 Previously updated : 01/26/2024 Last updated : 02/29/2024
This how-to guide shows you how to deploy your app to [Azure App Service](overview.md) from a Git repository on your local computer. > [!NOTE]
-> When [basic authentication is disabled](configure-basic-auth-disable.md), Local Git deployment doesn't work, and you can't configure Local Git deployment in the app's Deployment Center.
+> When [SCM basic authentication is disabled](configure-basic-auth-disable.md), Local Git deployment doesn't work, and you can't configure Local Git deployment in the app's Deployment Center.
## Prerequisites
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
zone_pivot_groups: app-service-cli-portal
# Use the in-place migration feature to migrate App Service Environment v1 and v2 to App Service Environment v3 > [!NOTE]
-> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side by side migration feature, see [Migrate to App Service Environment v3 by using the side by side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
+> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side-by-side migration feature, see [Migrate to App Service Environment v3 by using the side-by-side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
> You can automatically migrate App Service Environment v1 and v2 to [App Service Environment v3](overview.md) by using the in-place migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the in-place migration feature](migrate.md).
app-service How To Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md
Title: Use the side by side migration feature to migrate your App Service Environment v2 to App Service Environment v3
-description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 by using the side by side migration feature.
+ Title: Use the side-by-side migration feature to migrate your App Service Environment v2 to App Service Environment v3
+description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 by using the side-by-side migration feature.
Last updated 2/21/2024
# zone_pivot_groups: app-service-cli-portal
-# Use the side by side migration feature to migrate App Service Environment v2 to App Service Environment v3 (Preview)
+# Use the side-by-side migration feature to migrate App Service Environment v2 to App Service Environment v3 (Preview)
> [!NOTE]
-> The migration feature described in this article is used for side by side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**.
+> The migration feature described in this article is used for side-by-side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**.
> > If you're looking for information on the in-place migration feature, see [Migrate to App Service Environment v3 by using the in-place migration feature](migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). >
-You can automatically migrate App Service Environment v2 to [App Service Environment v3](overview.md) by using the side by side migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the side by side migration feature](side-by-side-migrate.md).
+You can automatically migrate App Service Environment v2 to [App Service Environment v3](overview.md) by using the side-by-side migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the side-by-side migration feature](side-by-side-migrate.md).
> [!IMPORTANT] > We recommend that you use this feature for development environments before migrating any production environments, to avoid unexpected problems. Please provide any feedback related to this article or the feature by using the buttons at the bottom of the page.
Follow the steps described here in order and as written, because you're making A
For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use [Azure Cloud Shell](https://shell.azure.com/).
+> [!IMPORTANT]
+> During the migration, the Azure portal might show incorrect information about your App Service Environment and your apps. Don't go to the Migration experience in the Azure portal since the side-by-side migration feature isn't available there. We recommend that you use the Azure CLI to check the status of your migration. If you have any questions about the status of your migration or your apps, contact support.
+>
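The exact status fields aren't spelled out in this excerpt, but as a rough sketch you can poll the environment resource with the Azure CLI while the migration runs; `$ASE_NAME`, `$ASE_RG`, and `$ASE_ID` are assumed to be set as shown in the steps that follow:

```azurecli
# High-level state of the App Service Environment (for example, Ready).
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query status --output tsv

# Raw ARM view of the resource if you need more detail during the migration.
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.status
```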
+ ## 1. Select the subnet for your new App Service Environment v3 Select a subnet in your App Service Environment v3 that meets the [subnet requirements for App Service Environment v3](./networking.md#subnet-requirements). Note the name of the subnet you select. This subnet must be different than the subnet your existing App Service Environment is in.
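For instance, a minimal sketch of creating a new delegated subnet for App Service Environment v3; the virtual network, subnet name, and address prefix are placeholders, so check the linked subnet requirements for sizing:

```azurecli
az network vnet subnet create \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <new-ase-subnet> \
  --address-prefixes 10.0.2.0/24 \
  --delegations Microsoft.Web/hostingEnvironments
```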
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --quer
## 3. Validate migration is supported
-The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](side-by-side-migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration using the side by side migration feature](side-by-side-migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the side by side migration feature, see the [manual migration options](migration-alternatives.md).
+The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](side-by-side-migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration using the side-by-side migration feature](side-by-side-migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the side-by-side migration feature, see the [manual migration options](migration-alternatives.md).
```azurecli az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=Validation&api-version=2022-03-01"
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the in-place migration fea
description: Overview of the in-place migration feature for migration to App Service Environment v3. Previously updated : 02/22/2024 Last updated : 03/01/2024 # Migration to App Service Environment v3 using the in-place migration feature > [!NOTE]
-> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side by side migration feature, see [Migrate to App Service Environment v3 by using the side by side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
+> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side-by-side migration feature, see [Migrate to App Service Environment v3 by using the side-by-side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
> App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the in-place migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the in-place migration feature to be available in your region. | |Migration cannot be called on this ASE, please contact support for help migrating. |Support needs to be engaged for migrating this App Service Environment. This issue is potentially due to custom settings used by this environment. |Open a support case to engage support to resolve your issue. |
-|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature. |Remove the IP SSL from all of your apps in the App Service Environment to enable the migration feature. |
|Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). | |Migration to ASEv3 is not allowed for this ASE. |You can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. |
Migration requires a three to six hour service window for App Service Environmen
- The existing App Service Environment is shut down and replaced by the new App Service Environment v3. - All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 tier. - All of the apps that are on your App Service Environment are temporarily down. **You should expect about one hour of downtime during this period**.
- - If you can't support downtime, see the [side by side migration feature](side-by-side-migrate.md) or the [migration-alternatives](migration-alternatives.md#migrate-manually).
+ - If you can't support downtime, see the [side-by-side migration feature](side-by-side-migrate.md) or the [migration-alternatives](migration-alternatives.md#migrate-manually).
- The public addresses that are used by the App Service Environment change to the IPs generated during the IP generation step. As in the IP generation step, you can't scale, modify your App Service Environment, or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment are running on the new App Service Environment v3.
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
You can [deploy ARM templates](../deploy-complex-application-predictably.md) by
## Migrate manually
-The [in-place migration feature](migrate.md) automates the migration to App Service Environment v3 and transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If your apps can't have any downtime, we recommend that you use the [side by side migration feature](side-by-side-migrate.md), which is a zero-downtime migration option since the new environment is created in a different subnet. If you also choose not to use the side by side migration feature, you can use one of the manual options to re-create your apps in App Service Environment v3.
+The [in-place migration feature](migrate.md) automates the migration to App Service Environment v3 and transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If your apps can't have any downtime, we recommend that you use the [side-by-side migration feature](side-by-side-migrate.md), which is a zero-downtime migration option since the new environment is created in a different subnet. If you also choose not to use the side-by-side migration feature, you can use one of the manual options to re-create your apps in App Service Environment v3.
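As one illustration of the manual path, you might export a template from the existing resource group and redeploy it against the new App Service Environment v3. This is only a sketch (file and resource group names are placeholders), and exported templates typically need editing before they deploy cleanly:

```azurecli
# Export the current resource group to an ARM template for reference.
az group export --name <source-resource-group> > exported-template.json

# After editing the template to target the new environment, deploy it.
az deployment group create --resource-group <target-resource-group> \
  --template-file exported-template.json
```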
You can distribute traffic between your old and new environments by using [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an internal load balancer (ILB) App Service Environment, [create an Azure Application Gateway instance](integrate-with-application-gateway.md) with an extra back-end pool to distribute traffic between your environments. For information about ILB App Service Environments and internet-facing App Service Environments, see [Application Gateway integration](../overview-app-gateway-integration.md).
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
Title: Migrate to App Service Environment v3 by using the side by side migration feature
-description: Overview of the side by side migration feature for migration to App Service Environment v3.
+ Title: Migrate to App Service Environment v3 by using the side-by-side migration feature
+description: Overview of the side-by-side migration feature for migration to App Service Environment v3.
Previously updated : 02/22/2024 Last updated : 03/01/2024
-# Migration to App Service Environment v3 using the side by side migration feature (Preview)
+# Migration to App Service Environment v3 using the side-by-side migration feature (Preview)
> [!NOTE]
-> The migration feature described in this article is used for side by side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**.
+> The migration feature described in this article is used for side-by-side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**.
> > If you're looking for information on the in-place migration feature, see [Migrate to App Service Environment v3 by using the in-place migration feature](migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md). > App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
-The side by side migration feature automates your migration to App Service Environment v3. The side by side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md).
+The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md).
> [!IMPORTANT] > We recommend that you use this feature for dev environments first before migrating any production environments, to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature by using the buttons at the bottom of the page.
The side by side migration feature automates your migration to App Service Envir
## Supported scenarios
-At this time, the side by side migration feature supports migrations to App Service Environment v3 in the following regions:
+At this time, the side-by-side migration feature doesn't support migrations to App Service Environment v3 in the following regions:
### Azure Public
-- East Asia
-- North Europe
-- West Central US
-- West US 2
+- UAE Central
-The following App Service Environment configurations can be migrated using the side by side migration feature. The table gives the App Service Environment v3 configuration when using the side by side migration feature based on your existing App Service Environment.
+### Azure Government
+
+- US DoD Central
+- US DoD East
+- US Gov Arizona
+- US Gov Texas
+- US Gov Virginia
+
+### Microsoft Azure operated by 21Vianet
+
+- China East 2
+- China North 2
+
+The following App Service Environment configurations can be migrated using the side-by-side migration feature. The table gives the App Service Environment v3 configuration when using the side-by-side migration feature based on your existing App Service Environment.
|Configuration |App Service Environment v3 Configuration | ||--|
App Service Environment v3 can be deployed as [zone redundant](../../availabilit
If you want your new App Service Environment v3 to use a custom domain suffix and you aren't using one currently, a custom domain suffix can be configured during the migration setup or at any time once migration is complete. For more information, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). If your existing environment has a custom domain suffix and you no longer want to use it, don't configure a custom domain suffix during the migration setup.
-## Side by side migration feature limitations
+## Side-by-side migration feature limitations
-The following are limitations when using the side by side migration feature:
+The following are limitations when using the side-by-side migration feature:
- Your new App Service Environment v3 is in a different subnet but the same virtual network as your existing environment. - You can't change the region your App Service Environment is located in. - ELB App Service Environment can't be migrated to ILB App Service Environment v3 and vice versa.
+- The side-by-side migration feature is only available using the CLI or via REST API. The feature isn't available in the Azure portal.
App Service Environment v3 doesn't support the following features that you might be using with your current App Service Environment v2. - Configuring an IP-based TLS/SSL binding with your apps. - App Service Environment v3 doesn't fall back to Azure DNS if your configured custom DNS servers in the virtual network aren't able to resolve a given name. If this behavior is required, ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers.
-The side by side migration feature doesn't support the following scenarios. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories.
+The side-by-side migration feature doesn't support the following scenarios. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories.
- App Service Environment v1 - You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
The side by side migration feature doesn't support the following scenarios. See
- ELB App Service Environment v2 with IP SSL addresses - [Zone pinned](zone-redundancy.md) App Service Environment v2
-The App Service platform reviews your App Service Environment to confirm side by side migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the side by side migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
+The App Service platform reviews your App Service Environment to confirm side-by-side migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the side-by-side migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
> [!NOTE] > App Service Environment v3 doesn't support IP SSL. If you use IP SSL, you must remove all IP SSL bindings before migrating to App Service Environment v3. The migration feature will support your environment once all IP SSL bindings are removed.
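A hedged sketch of finding IP-based bindings and rebinding the same certificates as SNI with the Azure CLI; the app, resource group, and thumbprint values are placeholders:

```azurecli
# List hostnames on the app that currently use IP-based SSL.
az webapp show --name <app-name> --resource-group <resource-group> \
  --query "hostNameSslStates[?sslState=='IpBasedEnabled'].name" --output tsv

# Rebind the certificate for those hostnames as SNI instead of IP SSL.
az webapp config ssl bind --name <app-name> --resource-group <resource-group> \
  --certificate-thumbprint <thumbprint> --ssl-type SNI
```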
If your App Service Environment doesn't pass the validation checks or you try to
|Error message |Description |Recommendation | |||-|
-|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic virtual networks can't migrate using the side by side migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
-|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the side by side migration feature to be available in your region. |
+|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic virtual networks can't migrate using the side-by-side migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
+|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the side-by-side migration feature to be available in your region. |
|Cannot enable zone redundancy for this ASE. |The region the App Service Environment is in doesn't support zone redundancy. |If you need to enable zone redundancy, use one of the manual migration options to migrate to a [region that supports zone redundancy](overview.md#regions). | |Migrate cannot be called on this custom DNS suffix ASE at this time. |Custom domain suffix migration is blocked. |Open a support case to engage support to resolve your issue. | |Zone redundant ASE migration cannot be called at this time. |Zone redundant App Service Environment migration is blocked. |Open a support case to engage support to resolve your issue. |
-|Migrate cannot be called on ASEv2 that is zone-pinned. |App Service Environment v2 that's zone pinned can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Migrate cannot be called on ASEv2 that is zone-pinned. |App Service Environment v2 that's zone pinned can't be migrated using the side-by-side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
|Existing revert migration operation ongoing, please try again later. |A previous migration attempt is being reverted. |Wait until the revert that's in progress completes before attempting to start migration again. | |Properties.VirtualNetwork.Id should contain the subnet resource ID. |The error appears if you attempt to migrate without providing a new subnet for the placement of your App Service Environment v3. |Ensure you follow the guidance and complete the step to identify the subnet you'll use for your App Service Environment v3. | |Unable to move to `<requested phase>` from the current phase `<previous phase>` of No Downtime Migration. |This error appears if you attempt to do a migration step in the incorrect order. |Ensure you follow the migration steps in order. | |Failed to start revert operation on ASE in hybrid state, please try again later. |This error appears if you try to revert the migration but something goes wrong. This error doesn't affect either your old or your new environment. |Open a support case to engage support to resolve your issue. |
-|This ASE cannot be migrated without downtime. |This error appears if you try to use the side by side migration feature on an App Service Environment v1. |The side by side migration feature doesn't support App Service Environment v1. Migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md). |
+|This ASE cannot be migrated without downtime. |This error appears if you try to use the side-by-side migration feature on an App Service Environment v1. |The side-by-side migration feature doesn't support App Service Environment v1. Migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md). |
|Migrate is not available for this subscription. |Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.| |Zone redundant migration cannot be called since the IP addresses created during pre-migrate are not zone redundant. |This error appears if you attempt a zone redundant migration but didn't create zone redundant IPs during the IP generation step. |Open a support case to engage support if you need to enable zone redundancy. Otherwise, you can migrate without enabling zone redundancy. |
-|Migrate cannot be called if IP SSL is enabled on any of the sites. |App Service Environments that have sites with IP SSL enabled can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, you can disable the IP SSL on all sites in the App Service Environment and attempt migration again. |
+|Migrate cannot be called if IP SSL is enabled on any of the sites. |App Service Environments that have sites with IP SSL enabled can't be migrated using the side-by-side migration feature. |Remove the IP SSL from all of your apps in the App Service Environment to enable the migration feature. |
|Cannot migrate within the same subnet. |The error appears if you specify the same subnet that your current environment is in for placement of your App Service Environment v3. |You must specify a different subnet for your App Service Environment v3. If you need to use the same subnet, migrate using the [in-place migration feature](migrate.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. | |Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. |
If your App Service Environment doesn't pass the validation checks or you try to
|Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade is started. Your App Service Environment won't be impacted, but you won't be able to scale or make changes to your App Service Environment while the upgrade is in progress. You won't be able to migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. | |Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-side-by-side-migrate.md). |
-## Overview of the migration process using the side by side migration feature
+## Overview of the migration process using the side-by-side migration feature
-Side by side migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what happens during these steps and how your environment and apps are impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-side-by-side-migrate.md).
+Side-by-side migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what happens during these steps and how your environment and apps are impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-side-by-side-migrate.md).
### Select and prepare the subnet for your new App Service Environment v3
There's no application downtime during the migration, but as in the IP generatio
> Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. >
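Because scaling is locked during the service window, you might first check and adjust instance counts with the Azure CLI; a sketch with placeholder names:

```azurecli
# Review the current plans and instance counts in the environment's resource group.
az appservice plan list --resource-group <resource-group> \
  --query "[].{name:name, sku:sku.name, instances:sku.capacity}" --output table

# Scale a plan out to the size you want before starting the migration.
az appservice plan update --name <plan-name> --resource-group <resource-group> \
  --number-of-workers 3
```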
-Side by side migration requires a three to six hour service window for App Service Environment v2 to v3 migrations. During migration, scaling and environment configurations are blocked and the following events occur:
+Side-by-side migration requires a three to six hour service window for App Service Environment v2 to v3 migrations. During migration, scaling and environment configurations are blocked and the following events occur:
- The new App Service Environment v3 is created in the subnet you selected. - Your new App Service plans are created in the new App Service Environment v3 with the corresponding Isolated v2 tier.
The final step is to redirect traffic to your new App Service Environment v3 and
Once you're ready to redirect traffic, you can complete the final step of the migration. This step updates internal DNS records to point to the load balancer IP address of your new App Service Environment v3 and the frontends that were created during the migration. Changes are effective immediately. This step also shuts down your old App Service Environment and deletes it. Your new App Service Environment v3 is now your production environment. > [!IMPORTANT]
-> During the preview, in some cases there may be up to 20 minutes of downtime when you complete the final step of the migration. This downtime is due to the DNS change. The downtime is expected to be removed once the feature is generally available. If you have a requirement for zero downtime, you should wait until the side by side migration feature is generally available. During preview, however, you can still use the side by side migration feature to migrate your dev environments to App Service Environment v3 to learn about the migration process and see how it impacts your workloads.
+> During the preview, in some cases there may be up to 20 minutes of downtime when you complete the final step of the migration. This downtime is due to the DNS change. The downtime is expected to be removed once the feature is generally available. If you have a requirement for zero downtime, you should wait until the side-by-side migration feature is generally available. During preview, however, you can still use the side-by-side migration feature to migrate your dev environments to App Service Environment v3 to learn about the migration process and see how it impacts your workloads.
> If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, you can revert all changes and return to your old App Service Environment v2. The revert process takes 3 to 6 hours to complete. There's no downtime associated with this process. Once the revert process completes, your old App Service Environment is back online and your new App Service Environment v3 is deleted. You can then attempt the migration again once you resolve any issues.
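After the final step completes, one way to confirm the DNS change is to resolve an app's hostname and compare it with the new environment's inbound addresses. This is a sketch with placeholder names; the default suffix shown applies to ILB environments without a custom domain suffix:

```bash
# Resolve an app's hostname; it should now return the new environment's inbound IP.
nslookup <app-name>.<ase-name>.appserviceenvironment.net

# Compare with the addresses reported for the new App Service Environment v3.
az appservice ase list-addresses --name <new-ase-name> --resource-group <resource-group> --output table
```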
The App Service plan SKUs available for App Service Environment v3 run on the Is
## Frequently asked questions - **What if migrating my App Service Environment is not currently supported?**
- You can't migrate using the side by side migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+ You can't migrate using the side-by-side migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
- **How do I choose which migration option is right for me?** Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case.-- **How do I know if I should use the side by side migration feature?**
- The side by side migration feature is best for customers who want to migrate to App Service Environment v3 but can't support application downtime. Since a new subnet is used for your new environment, there are networking considerations to be aware of, including new IPs. If you can support downtime, see the [in-place migration feature](migrate.md), which results in minimal configuration changes, or the [manual migration options](migration-alternatives.md). The in-place migration feature creates your App Service Environment v3 in the same subnet as your existing environment and uses the same networking infrastructure.
+- **How do I know if I should use the side-by-side migration feature?**
+ The side-by-side migration feature is best for customers who want to migrate to App Service Environment v3 but can't support application downtime. Since a new subnet is used for your new environment, there are networking considerations to be aware of, including new IPs. If you can support downtime, see the [in-place migration feature](migrate.md), which results in minimal configuration changes, or the [manual migration options](migration-alternatives.md). The in-place migration feature creates your App Service Environment v3 in the same subnet as your existing environment and uses the same networking infrastructure.
- **Will I experience downtime during the migration?**
- No, there's no downtime during the side by side migration process. Your apps continue to run on your existing App Service Environment until you complete the final step of the migration where DNS changes are effective immediately. Once you complete the final step, your old App Service Environment is shut down and deleted. Your new App Service Environment v3 is now your production environment.
+ No, there's no downtime during the side-by-side migration process. Your apps continue to run on your existing App Service Environment until you complete the final step of the migration where DNS changes are effective immediately. Once you complete the final step, your old App Service Environment is shut down and deleted. Your new App Service Environment v3 is now your production environment.
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?** No, all of your apps running on the old environment are automatically migrated to the new environment and run like before. No user input is needed. - **What if my App Service Environment has a custom domain suffix?**
- The side by side migration feature supports this [migration scenario](#supported-scenarios).
+ The side-by-side migration feature supports this [migration scenario](#supported-scenarios).
- **What if my App Service Environment is zone pinned?**
- The side by side migration feature doesn't support this [migration scenario](#supported-scenarios) at this time. If you have a zone pinned App Service Environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+ The side-by-side migration feature doesn't support this [migration scenario](#supported-scenarios) at this time. If you have a zone pinned App Service Environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
- **What if my App Service Environment has IP SSL addresses?**
- IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the side by side migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration.
+ IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the side-by-side migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration.
- **What properties of my App Service Environment will change?**
- You're on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. Both your inbound and outbound IPs change when using the side by side migration feature. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md).
+ You're on App Service Environment v3, so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. Both your inbound and outbound IPs change when using the side-by-side migration feature. Note that for an ELB App Service Environment, there was previously a single IP for both inbound and outbound traffic. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md).
- **What happens if migration fails or there is an unexpected issue during the migration?**
- If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. With the side by side migration feature, you can revert all changes if there's any issues.
+ If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. With the side-by-side migration feature, you can revert all changes if there are any issues.
- **What happens to my old App Service Environment?**
- If you decide to migrate an App Service Environment using the side by side migration feature, your old environment is used up until the final step in the migration process. Once you complete the final step, the old environment and all of the apps hosted on it get shutdown and deleted. Your old environment is no longer accessible. A rollback to the old environment at this point isn't possible.
+ If you decide to migrate an App Service Environment using the side-by-side migration feature, your old environment remains in use until the final step in the migration process. Once you complete the final step, the old environment and all of the apps hosted on it are shut down and deleted. Your old environment is no longer accessible. A rollback to the old environment at this point isn't possible.
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you don't migrate to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain. ## Next steps > [!div class="nextstepaction"]
-> [Migrate your App Service Environment to App Service Environment v3 using the side by side migration feature](how-to-side-by-side-migrate.md)
+> [Migrate your App Service Environment to App Service Environment v3 using the side-by-side migration feature](how-to-side-by-side-migrate.md)
> [!div class="nextstepaction"] > [App Service Environment v3 Networking](networking.md)
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
This page is your one-stop shop for guidance and resources to help you upgrade s
|Step|Action|Resources| |-|||
-|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using one of the automated migration features. Decide whether an in-place or side by side migration is right for your use case.<br><br>- [Migration path decision tree](#migration-path-decision-tree)<br>- [Automated upgrade using the in-place migration feature](migrate.md)<br>- [Automated upgrade using the side by side migration feature](side-by-side-migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)|
-|**2**|**Migrate**|Based on results of your review, either upgrade using one of the automated migration features or follow the manual steps.<br><br>- [Use the in-place automated migration feature](how-to-migrate.md)<br>- [Use the side by side automated migration feature](how-to-side-by-side-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
-|**3**|**Testing and troubleshooting**|Upgrading using one of the automated migration features requires a 3-6 hour service window. If you use the side by side migration feature, you have the opportunity to [test and validate your App Service Environment v3](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration) before completing the upgrade. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
+|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using one of the automated migration features. Decide whether an in-place or side-by-side migration is right for your use case.<br><br>- [Migration path decision tree](#migration-path-decision-tree)<br>- [Automated upgrade using the in-place migration feature](migrate.md)<br>- [Automated upgrade using the side-by-side migration feature](side-by-side-migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)|
+|**2**|**Migrate**|Based on results of your review, either upgrade using one of the automated migration features or follow the manual steps.<br><br>- [Use the in-place automated migration feature](how-to-migrate.md)<br>- [Use the side-by-side automated migration feature](how-to-side-by-side-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
+|**3**|**Testing and troubleshooting**|Upgrading using one of the automated migration features requires a 3-6 hour service window. If you use the side-by-side migration feature, you have the opportunity to [test and validate your App Service Environment v3](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration) before completing the upgrade. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Explore reserved instance pricing, savings plans, and check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [How reservation discounts apply to Isolated v2 instances](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)| |**5**|**Learn more**|On-demand: [Learn Live webinar with Azure FastTrack Architects](https://www.youtube.com/watch?v=lI9TK_v-dkg&ab_channel=MicrosoftDeveloper).<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
App Service Environment v3 is the latest version of App Service Environment. It'
There are two automated migration features available to help you upgrade to App Service Environment v3. - **In-place migration feature** migrates your App Service Environment to App Service Environment v3 in-place. In-place means that your App Service Environment v3 replaces your existing App Service Environment in the same subnet. There's application downtime during the migration because a subnet can only have a single App Service Environment at a given time. For more information about this feature, see [Automated upgrade using the in-place migration feature](migrate.md).-- **Side by side migration feature** creates a new App Service Environment v3 in a different subnet that you choose and recreates all of your App Service plans and apps in that new environment. Your existing environment is up and running during the entire migration. Once the new App Service Environment v3 is ready, you can redirect traffic to the new environment and complete the migration. There's no application downtime during the migration. For more information about this feature, see [Automated upgrade using the side by side migration feature](side-by-side-migrate.md).
+- **Side-by-side migration feature** creates a new App Service Environment v3 in a different subnet that you choose and recreates all of your App Service plans and apps in that new environment. Your existing environment is up and running during the entire migration. Once the new App Service Environment v3 is ready, you can redirect traffic to the new environment and complete the migration. There's no application downtime during the migration. For more information about this feature, see [Automated upgrade using the side-by-side migration feature](side-by-side-migrate.md).
- **Manual migration options** are available if you can't use the automated migration features. For more information about these options, see [Migration alternatives](migration-alternatives.md). ### Migration path decision tree
Got 2 minutes? We'd love to hear about your upgrade experience in this quick, an
> [Migration to App Service Environment v3 using the in-place migration feature](migrate.md) > [!div class="nextstepaction"]
-> [Migration to App Service Environment v3 using the side by side migration feature](side-by-side-migrate.md)
+> [Migration to App Service Environment v3 using the side-by-side migration feature](side-by-side-migrate.md)
> [!div class="nextstepaction"] > [Manually migrate to App Service Environment v3](migration-alternatives.md)
application-gateway Tcp Tls Proxy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tcp-tls-proxy-overview.md
Previously updated : 02/26/2024 Last updated : 03/01/2024
In addition to the existing Layer 7 capabilities (HTTP, HTTPS, WebSockets and HTTP/2), Azure Application Gateway now also supports Layer 4 (TCP protocol) and TLS (Transport Layer Security) proxying. This feature is currently in public preview. To preview this feature, see [Register to the preview](how-to-tcp-tls-proxy.md#register-to-the-preview).
-## Application Gateway Layer 4 capabilities
+## TLS/TCP proxy capabilities on Application Gateway
As a reverse proxy service, the Layer 4 operations of Application Gateway work similarly to its Layer 7 proxy operations. A client establishes a TCP connection with Application Gateway, and Application Gateway itself initiates a new TCP connection to a backend server from the backend pool. The following figure shows the typical operation.
Process flow:
1. A client initiates a TCP or TLS connection with the application gateway using its frontend listener's IP address and port number. This establishes the frontend connection. Once the connection is established, the client sends a request using the required application layer protocol. 2. The application gateway establishes a new connection with one of the backend targets from the associated backend pool (forming the backend connection) and sends the client request to that backend server. 3. The response from the backend server is sent back to the client by the application gateway.
-4. The same frontend TCP connection is used for subsequent requests from the client unless the TCP idle timeout closes that connection.
+4. The same frontend TCP connection is used for subsequent requests from the client unless the TCP idle timeout closes that connection.
+
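The listener configuration itself isn't shown in this excerpt, but once a TCP or TLS listener exists you can exercise this flow from any client; a quick sketch with placeholder hostnames and ports:

```bash
# Plain TCP: confirm the frontend listener accepts the connection.
nc -vz contoso-gateway.example.com 8081

# TLS: complete a handshake with the frontend listener and inspect the certificate it serves.
openssl s_client -connect contoso-gateway.example.com:443 -servername contoso-gateway.example.com
```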
+### Comparing Azure Load Balancer with Azure Application Gateway
+| Product | Type |
+| - | - |
+| [**Azure Load Balancer**](../load-balancer/load-balancer-overview.md) | A pass-through load balancer where a client directly establishes a connection with a backend server selected by the Load Balancer's distribution algorithm. |
+| **Azure Application Gateway** | A terminating load balancer where a client establishes a connection with Application Gateway, and Application Gateway initiates a separate connection with a backend server selected by its distribution algorithm. |
+ ## Features
Process flow:
## Next steps
-[Configure Azure Application Gateway TCP/TLS proxy](how-to-tcp-tls-proxy.md)
+- [Configure Azure Application Gateway TCP/TLS proxy](how-to-tcp-tls-proxy.md)
+- See the [frequently asked questions (FAQ)](application-gateway-faq.yml#configurationtls-tcp-proxy)
azure-app-configuration Reference Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md
# Azure App Configuration Kubernetes Provider reference
-The following reference outlines the properties supported by the Azure App Configuration Kubernetes Provider.
+The following reference outlines the properties supported by the Azure App Configuration Kubernetes Provider `v1.2.0`. See the [release notes](https://github.com/Azure/AppConfiguration/blob/main/releaseNotes/KubernetesProvider.md) for more information about the changes in each release.
## Properties
An `AzureAppConfigurationProvider` resource has the following top-level child pr
|auth|The authentication method to access Azure App Configuration.|false|object| |configuration|The settings for querying and processing key-values in Azure App Configuration.|false|object| |secret|The settings for Key Vault references in Azure App Configuration.|conditional|object|
+|featureFlag|The settings for feature flags in Azure App Configuration.|false|object|
The `spec.target` property has the following child property.
The `spec.target` property has the following child property.
|configMapName|The name of the ConfigMap to be created.|true|string|
|configMapData|The setting that specifies how the retrieved data should be populated in the generated ConfigMap.|false|object|
-If the `spec.target.configMapData` property is not set, the generated ConfigMap will be populated with the list of key-values retrieved from Azure App Configuration, which allows the ConfigMap to be consumed as environment variables. Update this property if you wish to consume the ConfigMap as a mounted file. This property has the following child properties.
+If the `spec.target.configMapData` property is not set, the generated ConfigMap is populated with the list of key-values retrieved from Azure App Configuration, which allows the ConfigMap to be consumed as environment variables. Update this property if you wish to consume the ConfigMap as a mounted file. This property has the following child properties.
|Name|Description|Required|Type|
|||||
If the `spec.target.configMapData` property is not set, the generated ConfigMap
|key|The key name of the retrieved data when the `type` is set to `json`, `yaml` or `properties`. Set it to the file name if the ConfigMap is set up to be consumed as a mounted file.|conditional|string|
|separator|The delimiter that is used to output the ConfigMap data in hierarchical format when the type is set to `json` or `yaml`. The separator is empty by default and the generated ConfigMap contains key-values in their original form. Configure this setting only if the configuration file loader used in your application can't load key-values without converting them to the hierarchical format.|optional|string|
-The `spec.auth` property isn't required if the connection string of your App Configuration store is provided by setting the `spec.connectionStringReference` property. Otherwise, one of the identities, service principal, workload identity, or managed identity, will be used for authentication. The `spec.auth` has the following child properties. Only one of them should be specified. If none of them are set, the system-assigned managed identity of the virtual machine scale set will be used.
+The `spec.auth` property isn't required if the connection string of your App Configuration store is provided by setting the `spec.connectionStringReference` property. Otherwise, one of the identities, service principal, workload identity, or managed identity, is used for authentication. The `spec.auth` has the following child properties. Only one of them should be specified. If none of them are set, the system-assigned managed identity of the virtual machine scale set is used.
|Name|Description|Required|Type|
|||||
The `spec.configuration` has the following child properties.
|||||
|selectors|The list of selectors for key-value filtering.|false|object array|
|trimKeyPrefixes|The list of key prefixes to be trimmed.|false|string array|
-|refresh|The settings for refreshing data from Azure App Configuration. If the property is absent, data from Azure App Configuration will not be refreshed.|false|object|
+|refresh|The settings for refreshing key-values from Azure App Configuration. If the property is absent, key-values from Azure App Configuration are not refreshed.|false|object|
-If the `spec.configuration.selectors` property isn't set, all key-values with no label will be downloaded. It contains an array of *selector* objects, which have the following child properties.
+If the `spec.configuration.selectors` property isn't set, all key-values with no label are downloaded. It contains an array of *selector* objects, which have the following child properties.
|Name|Description|Required|Type|
|||||
The `spec.configuration.refresh` property has the following child properties.
|Name|Description|Required|Type|
|||||
-|enabled|The setting that determines whether data from Azure App Configuration is automatically refreshed. If the property is absent, a default value of `false` will be used.|false|bool|
-|monitoring|The key-values monitored for change detection, aka sentinel keys. The data from Azure App Configuration will be refreshed only if at least one of the monitored key-values is changed.|true|object|
-|interval|The interval at which the data will be refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds will be used.|false|duration string|
+|enabled|The setting that determines whether key-values from Azure App Configuration are automatically refreshed. If the property is absent, a default value of `false` is used.|false|bool|
+|monitoring|The key-values monitored for change detection, also known as sentinel keys. The key-values from Azure App Configuration are refreshed only if at least one of the monitored key-values is changed.|true|object|
+|interval|The interval at which the key-values are refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds is used.|false|duration string|
The `spec.configuration.refresh.monitoring.keyValues` is an array of objects, which have the following child properties.
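For example, a minimal sketch of a refresh section that watches a sentinel key. The key name `sentinel` is hypothetical, and the `key` child property of each monitored item is assumed here; adjust both to your store.

``` yaml
spec:
  endpoint: <your-app-configuration-store-endpoint>
  target:
    configMapName: configmap-created-by-appconfig-provider
  configuration:
    refresh:
      # Key-values are refreshed at most every 30 seconds,
      # and only when the monitored sentinel key below changes.
      enabled: true
      interval: 30s
      monitoring:
        keyValues:
          - key: sentinel
```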
The `spec.secret` property has the following child properties. It is required if
|||||
|target|The destination of the retrieved secrets in Kubernetes.|true|object|
|auth|The authentication method to access Key Vaults.|false|object|
-|refresh|The settings for refreshing data from Key Vaults. If the property is absent, data from Key Vaults will not be refreshed unless the corresponding Key Vault references are reloaded.|false|object|
+|refresh|The settings for refreshing data from Key Vaults. If the property is absent, data from Key Vaults is not refreshed unless the corresponding Key Vault references are reloaded.|false|object|
The `spec.secret.target` property has the following child property.
The authentication method of each *Key Vault* can be specified with the followin
|workloadIdentity|The settings of the workload identity used for authentication with a Key Vault. It has the same child properties as `spec.auth.workloadIdentity`.|false|object|
|managedIdentityClientId|The client ID of a user-assigned managed identity of virtual machine scale set used for authentication with a Key Vault.|false|string|
-The `spec.secret.refresh` property has the following child property.
+The `spec.secret.refresh` property has the following child properties.
|Name|Description|Required|Type|
|||||
-|enabled|The setting that determines whether data from Key Vaults is automatically refreshed. If the property is absent, a default value of `false` will be used.|false|bool|
-|interval|The interval at which the data will be refreshed from Key Vault. It must be greater than or equal to 1 minute. The Key Vault refresh is independent of the App Configuration refresh configured via `spec.configuration.refresh`.|true|duration string|
+|enabled|The setting that determines whether data from Key Vaults is automatically refreshed. If the property is absent, a default value of `false` is used.|false|bool|
+|interval|The interval at which the data is refreshed from Key Vault. It must be greater than or equal to 1 minute. The Key Vault refresh is independent of the App Configuration refresh configured via `spec.configuration.refresh`.|true|duration string|
+
+The `spec.featureFlag` property has the following child properties. It is required if any feature flags are expected to be downloaded.
+
+|Name|Description|Required|Type|
+|||||
+|selectors|The list of selectors for feature flag filtering.|false|object array|
+|refresh|The settings for refreshing feature flags from Azure App Configuration. If the property is absent, feature flags from Azure App Configuration are not refreshed.|false|object|
+
+If the `spec.featureFlag.selectors` property isn't set, feature flags are not downloaded. It contains an array of *selector* objects, which have the following child properties.
+
+|Name|Description|Required|Type|
+|||||
+|keyFilter|The key filter for querying feature flags.|true|string|
+|labelFilter|The label filter for querying feature flags.|false|string|
+
+The `spec.featureFlag.refresh` property has the following child properties.
+
+|Name|Description|Required|Type|
+|||||
+|enabled|The setting that determines whether feature flags from Azure App Configuration are automatically refreshed. If the property is absent, a default value of `false` is used.|false|bool|
+|interval|The interval at which the feature flags are refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds is used.|false|duration string|
+
+## Installation
+
+Use the following `helm install` command to install the Azure App Configuration Kubernetes Provider. See [helm-values.yaml](https://github.com/Azure/AppConfiguration-KubernetesProvider/blob/main/deploy/parameter/helm-values.yaml) for the complete list of parameters and their default values. You can override the default values by passing the `--set` flag to the command.
+
+```bash
+helm install azureappconfiguration.kubernetesprovider \
+ oci://mcr.microsoft.com/azure-app-configuration/helmchart/kubernetes-provider \
+ --namespace azappconfig-system \
+ --create-namespace
+```
+
+### Autoscaling
+
+By default, autoscaling is disabled. However, if you have multiple `AzureAppConfigurationProvider` resources to produce multiple ConfigMaps/Secrets, you can enable horizontal pod autoscaling by setting `autoscaling.enabled` to `true`.
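For example, a minimal sketch of the install command with autoscaling turned on, reusing the install parameters above; the only addition is the `autoscaling.enabled` override:

```bash
# Same install as above, with horizontal pod autoscaling enabled via --set.
helm install azureappconfiguration.kubernetesprovider \
    oci://mcr.microsoft.com/azure-app-configuration/helmchart/kubernetes-provider \
    --namespace azappconfig-system \
    --create-namespace \
    --set autoscaling.enabled=true
```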
## Examples
spec:
interval: 1h
```
+### Feature Flags
+
+In the following sample, feature flags with keys starting with `app1` and labels equivalent to `common` are downloaded and refreshed every 10 minutes.
+
+``` yaml
+apiVersion: azconfig.io/v1
+kind: AzureAppConfigurationProvider
+metadata:
+ name: appconfigurationprovider-sample
+spec:
+ endpoint: <your-app-configuration-store-endpoint>
+ target:
+ configMapName: configmap-created-by-appconfig-provider
+ featureFlag:
+ selectors:
+ - keyFilter: app1*
+ labelFilter: common
+ refresh:
+ enabled: true
+ interval: 10m
+```
+
### ConfigMap Consumption

Applications running in Kubernetes typically consume the ConfigMap either as environment variables or as configuration files. If the `configMapData.type` property is absent or is set to `default`, the ConfigMap is populated with the itemized list of data retrieved from Azure App Configuration, which can be easily consumed as environment variables. If the `configMapData.type` property is set to `json`, `yaml`, or `properties`, data retrieved from Azure App Configuration is grouped into one item with the key name specified by the `configMapData.key` property in the generated ConfigMap, which can be consumed as a mounted file.
Assuming an App Configuration store has these key-values:
#### [default](#tab/default)
-and the `configMapData.type` property is absent or set to `default`,
+And the `configMapData.type` property is absent or set to `default`,
``` yaml apiVersion: azconfig.io/v1
spec:
configMapName: configmap-created-by-appconfig-provider ```
-the generated ConfigMap will be populated with the following data:
+The generated ConfigMap is populated with the following data:
``` yaml data:
data:
#### [json](#tab/json)
-and the `configMapData.type` property is set to `json`,
+And the `configMapData.type` property is set to `json`,
``` yaml apiVersion: azconfig.io/v1
spec:
key: appSettings.json ```
-the generated ConfigMap will be populated with the following data:
+The generated ConfigMap is populated with the following data:
``` yaml data:
data:
#### [yaml](#tab/yaml)
-and the `configMapData.type` property is set to `yaml`,
+And the `configMapData.type` property is set to `yaml`,
``` yaml apiVersion: azconfig.io/v1
spec:
key: appSettings.yaml ```
-the generated ConfigMap will be populated with the following data:
+The generated ConfigMap is populated with the following data:
``` yaml data:
data:
#### [properties](#tab/properties)
-and the `configMapData.type` property is set to `properties`,
+And the `configMapData.type` property is set to `properties`,
``` yaml apiVersion: azconfig.io/v1
spec:
key: app.properties ```
-the generated ConfigMap will be populated with the following data:
+The generated ConfigMap is populated with the following data:
``` yaml data:
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
To Arc-enable a System Center VMM management server, deploy [Azure Arc resource
The following image shows the architecture for the Arc-enabled SCVMM:

## How is Arc-enabled SCVMM different from Arc-enabled Servers
azure-functions Functions Bindings Cache Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-input.md
+
+ Title: Azure Cache for Redis input binding for Azure Functions (preview)
+description: Learn how to use input bindings to connect to Azure Cache for Redis from Azure Functions.
+++++ Last updated : 02/27/2024
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Azure Cache for Redis input binding for Azure Functions (preview)
+
+When a function runs, the Azure Cache for Redis input binding retrieves data from a cache and passes it to your function as an input parameter.
+
+For information on setup and configuration details, see the [overview](functions-bindings-cache.md).
+
+<! Replace with the following when Node.js v4 is supported:
+-->
+<! Replace with the following when Python v2 is supported:
+-->
+
+## Example
++
+> [!IMPORTANT]
+>
+>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions.
+
+The following code uses the key from the pub/sub trigger to obtain and log the value from an input binding using a `GET` command:
+
+### [Isolated process](#tab/isolated-process)
+
+```csharp
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisInputBinding
+{
+ public class SetGetter
+ {
+ private readonly ILogger<SetGetter> logger;
+
+ public SetGetter(ILogger<SetGetter> logger)
+ {
+ this.logger = logger;
+ }
+
+ [Function(nameof(SetGetter))]
+ public void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key,
+ [RedisInput(Common.connectionStringSetting, "GET {Message}")] string value)
+ {
+ logger.LogInformation($"Key '{key}' was set to value '{value}'");
+ }
+ }
+}
+
+```
+
+### [In-process](#tab/in-process)
+
+```csharp
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger
+{
+ internal class SetGetter
+ {
+ [FunctionName(nameof(SetGetter))]
+ public static void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key,
+ [Redis(Common.connectionStringSetting, "GET {Message}")] string value,
+ ILogger logger)
+ {
+ logger.LogInformation($"Key '{key}' was set to value '{value}'");
+ }
+ }
+}
+```
+++
+More samples for the Azure Cache for Redis input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-redis-extension).
+<!-- link to redis samples -->
+The following code uses the key from the pub/sub trigger to obtain and log the value from an input binding using a `GET` command:
+
+```java
+package com.function.RedisInputBinding;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.redis.annotation.*;
+
+public class SetGetter {
+ @FunctionName("SetGetter")
+ public void run(
+ @RedisPubSubTrigger(
+ name = "key",
+ connection = "redisConnectionString",
+ channel = "__keyevent@0__:set")
+ String key,
+ @RedisInput(
+ name = "value",
+ connection = "redisConnectionString",
+ command = "GET {Message}")
+ String value,
+ final ExecutionContext context) {
+ context.getLogger().info("Key '" + key + "' was set to value '" + value + "'");
+ }
+}
+```
+
+### [Model v3](#tab/nodejs-v3)
+
+This function.json defines both a pub/sub trigger and an input binding to the GET message on an Azure Cache for Redis instance:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connection": "redisConnectionString",
+ "channel": "__keyevent@0__:set",
+ "name": "key",
+ "direction": "in"
+ },
+ {
+ "type": "redis",
+ "connection": "redisConnectionString",
+ "command": "GET {Message}",
+ "name": "value",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "index.js"
+}
+```
+
+This JavaScript code (from index.js) retrieves and logs the cached value related to the key provided by the pub/sub trigger.
+
+```javascript
+
+module.exports = async function (context, key, value) {
+ context.log("Key '" + key + "' was set to value '" + value + "'");
+}
+
+```
+
+### [Model v4](#tab/nodejs-v4)
+
+<! Replace with the following when Node.js v4 is supported:
+-->
++++
+This function.json defines both a pub/sub trigger and an input binding to the GET message on an Azure Cache for Redis instance:
+<!Note: it might be confusing that the binding `name` and the parameter name are the same in these examples. >
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connection": "redisConnectionString",
+ "channel": "__keyevent@0__:set",
+ "name": "key",
+ "direction": "in"
+ },
+ {
+ "type": "redis",
+ "connection": "redisConnectionString",
+ "command": "GET {Message}",
+ "name": "value",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "run.ps1"
+}
+```
+
+This PowerShell code (from run.ps1) retrieves and logs the cached value related to the key provided by the pub/sub trigger.
+
+```powershell
+param($key, $value, $TriggerMetadata)
+Write-Host "Key '$key' was set to value '$value'"
+```
++
+The following example uses a pub/sub trigger with an input binding to the GET message on an Azure Cache for Redis instance. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+### [v1](#tab/python-v1)
+
+This function.json defines both a pub/sub trigger and an input binding to the GET message on an Azure Cache for Redis instance:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connection": "redisConnectionString",
+ "channel": "__keyevent@0__:set",
+ "name": "key",
+ "direction": "in"
+ },
+ {
+ "type": "redis",
+ "connection": "redisConnectionString",
+ "command": "GET {Message}",
+ "name": "value",
+ "direction": "in"
+ }
+ ]
+}
+```
+
+This Python code (from \_\_init\_\_.py) retrieves and logs the cached value related to the key provided by the pub/sub trigger:
+
+```python
+
+import logging
+
+def main(key: str, value: str):
+ logging.info("Key '" + key + "' was set to value '" + value + "'")
+
+```
+
+The [configuration](#configuration) section explains these properties.
+
+### [v2](#tab/python-v2)
+
+<! Replace with the following when Python v2 is supported:
+-->
+++
+## Attributes
+
+> [!NOTE]
+> Not all commands are supported for this binding. Currently, only read commands that return a single output are supported. The full list of supported commands can be found [here](https://github.com/Azure/azure-functions-redis-extension/blob/main/src/Microsoft.Azure.WebJobs.Extensions.Redis/Bindings/RedisAsyncConverter.cs#L63).
+
+|Attribute property | Description |
+|-|--|
+| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` |
+| `Command` | The redis-cli command to be executed on the cache with all arguments separated by spaces, such as: `GET key`, `HGET key field`. |
+
+## Annotations
+
+The `RedisInput` annotation supports these properties:
+
+| Property | Description |
+|-||
+| `name` | The name of the specific input binding. |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` |
+| `command` | The redis-cli command to be executed on the cache with all arguments separated by spaces, such as: `GET key` or `HGET key field`. |
+## Configuration
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+| function.json property | Description |
+||-|
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` |
+| `command` | The redis-cli command to be executed on the cache with all arguments separated by spaces, such as: `GET key`, `HGET key field`. |
+
+> [!NOTE]
+> Python v2 and Node.js v4 for Functions don't use function.json to define the function. Neither of these new language versions is currently supported by the Azure Cache for Redis bindings.
++
+See the [Example section](#example) for complete examples.
+
+## Usage
+
+The input binding expects to receive a string from the cache.
+When you use a custom type as the binding parameter, the extension tries to deserialize a JSON-formatted string into the custom type of this parameter.
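For example, a minimal sketch in C# for the isolated worker model. The `OrderData` type and its JSON shape are hypothetical, and `Common.connectionStringSetting` is the same sample constant used in the examples above; the point is only that a custom type can stand in for the `string` parameter when the cached value is JSON.

```csharp
using Microsoft.Extensions.Logging;

namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisInputBinding
{
    // Hypothetical type; the cached value is expected to be a JSON string
    // such as {"Id":"1001","Quantity":3} that deserializes into this shape.
    public class OrderData
    {
        public string Id { get; set; }
        public int Quantity { get; set; }
    }

    public class OrderGetter
    {
        private readonly ILogger<OrderGetter> logger;

        public OrderGetter(ILogger<OrderGetter> logger)
        {
            this.logger = logger;
        }

        [Function(nameof(OrderGetter))]
        public void Run(
            [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key,
            [RedisInput(Common.connectionStringSetting, "GET {Message}")] OrderData order)
        {
            // The binding deserializes the JSON string returned by GET into OrderData.
            logger.LogInformation($"Key '{key}' maps to order '{order?.Id}' with quantity {order?.Quantity}");
        }
    }
}
```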
+
+## Related content
+
+- [Introduction to Azure Functions](functions-overview.md)
+- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
+- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis connection string](functions-bindings-cache.md#redis-connection-string)
azure-functions Functions Bindings Cache Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-output.md
+
+ Title: Using Redis Output bindings with Azure Functions for Azure Cache for Redis (preview)
+description: Learn how to use the Redis output binding in Azure Functions.
+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++++ Last updated : 02/27/2024++
+# Azure Cache for Redis output binding for Azure Functions (preview)
+
+The Azure Cache for Redis output binding lets you change keys in a cache based on a set of available triggers on the cache.
+
+For information on setup and configuration details, see the [overview](functions-bindings-cache.md).
+
+<! Replace with the following when Node.js v4 is supported:
+-->
+<! Replace with the following when Python v2 is supported:
+-->
+
+## Example
++
+The following example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the function, and the output binding returns a delete command for the key that triggered the function.
+
+> [!IMPORTANT]
+>
+>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions.
+
+### [In-process](#tab/in-process)
+
+```c#
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisOutputBinding
+{
+ internal class SetDeleter
+ {
+ [FunctionName(nameof(SetDeleter))]
+ public static void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key,
+ [Redis(Common.connectionStringSetting, "DEL")] out string[] arguments,
+ ILogger logger)
+ {
+ logger.LogInformation($"Deleting recently SET key '{key}'");
+ arguments = new string[] { key };
+ }
+ }
+}
+```
+
+### [Isolated process](#tab/isolated-process)
+
+```csharp
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisOutputBinding
+{
+ internal class SetDeleter
+ {
+ [FunctionName(nameof(SetDeleter))]
+ [return: Redis(Common.connectionStringSetting, "DEL")]
+ public static string Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:set")] string key,
+ ILogger logger)
+ {
+ logger.LogInformation($"Deleting recently SET key '{key}'");
+ return key;
+ }
+ }
+}
+```
+++
+The following example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the function, and the output binding returns a delete command for the key that triggered the function.
+<!Note: it might be confusing that the binding `name` and the parameter name are the same in these examples. >
+```java
+package com.function.RedisOutputBinding;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.redis.annotation.*;
+
+public class SetDeleter {
+ @FunctionName("SetDeleter")
+ @RedisOutput(
+ name = "value",
+ connection = "redisConnectionString",
+ command = "DEL")
+ public String run(
+ @RedisPubSubTrigger(
+ name = "key",
+ connection = "redisConnectionString",
+ channel = "__keyevent@0__:set")
+ String key,
+ final ExecutionContext context) {
+ context.getLogger().info("Deleting recently SET key '" + key + "'");
+ return key;
+ }
+}
+
+```
+
+This example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the function, and the output binding returns a delete command for the key that triggered the function.
+
+### [Model v3](#tab/nodejs-v3)
+
+The bindings are defined in this `function.json` file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connection": "redisConnectionString",
+ "channel": "__keyevent@0__:set",
+ "name": "key",
+ "direction": "in"
+ },
+ {
+ "type": "redis",
+ "connection": "redisConnectionString",
+ "command": "DEL",
+ "name": "$return",
+ "direction": "out"
+ }
+ ],
+ "scriptFile": "index.js"
+}
+```
+
+This code from the `index.js` file takes the key from the trigger and returns it to the output binding to delete the cached item.
+
+```javascript
+module.exports = async function (context, key) {
+ context.log("Deleting recently SET key '" + key + "'");
+ return key;
+}
+
+```
+
+### [Model v4](#tab/nodejs-v4)
+
+<! Replace with the following when Node.js v4 is supported:
+-->
++
+This example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the function, and the output binding returns a delete command for the key that triggered the function.
+
+The bindings are defined in this `function.json` file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connection": "redisLocalhost",
+ "channel": "__keyevent@0__:set",
+ "name": "key",
+ "direction": "in"
+ },
+ {
+ "type": "redis",
+ "connection": "redisLocalhost",
+ "command": "DEL",
+ "name": "retVal",
+ "direction": "out"
+ }
+ ],
+ "scriptFile": "run.ps1"
+}
+
+```
+
+This code from the `run.ps1` file takes the key from the trigger and passes it to the output binding to delete the cached item.
+
+```powershell
+param($key, $TriggerMetadata)
+Write-Host "Deleting recently SET key '$key'"
+Push-OutputBinding -Name retVal -Value $key
+```
+
+This example shows a pub/sub trigger on the set event with an output binding to the same Redis instance. The set event triggers the function, and the output binding returns a delete command for the key that triggered the function.
+
+### [v1](#tab/python-v1)
+
+The bindings are defined in this `function.json` file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "redisPubSubTrigger",
+ "connection": "redisLocalhost",
+ "channel": "__keyevent@0__:set",
+ "name": "key",
+ "direction": "in"
+ },
+ {
+ "type": "redis",
+ "connection": "redisLocalhost",
+ "command": "DEL",
+ "name": "$return",
+ "direction": "out"
+ }
+ ],
+ "scriptFile": "__init__.py"
+}
+```
+
+This code from the `__init__.py` file takes the key from the trigger and passes it to the output binding to delete the cached item.
+
+```python
+import logging
+
+def main(key: str) -> str:
+ logging.info("Deleting recently SET key '" + key + "'")
+ return key
+```
+
+### [v2](#tab/python-v2)
+
+<! Replace with the following when Node.js v4 is supported:
+-->
+++
+## Attributes
+
+> [!NOTE]
+> All commands are supported for this binding.
+
+The way in which you define an output binding parameter depends on whether your C# function runs [in-process](functions-dotnet-class-library.md) or in an [isolated worker process](dotnet-isolated-process-guide.md).
+
+The output binding is defined this way:
+
+| Definition | Example | Description |
+| -- | -- | -- |
+| On an `out` parameter | `[Redis(<Connection>, <Command>)] out string <Return_Variable>` | The string variable returned by the method is a key value that the binding uses to execute the command against the specific cache. |
+
+In this case, the type returned by the method is a key value that the binding uses to execute the command against the specific cache.
+
+When your function has multiple output bindings, you can instead apply the binding attribute to the property of a type that is a key value, which the binding uses to execute the command against the specific cache. For more information, see [Multiple output bindings](dotnet-isolated-process-guide.md#multiple-output-bindings).
+++
+Regardless of the C# process mode, the same properties are supported by the output binding attribute:
+
+| Attribute property | Description |
+|--| -|
+| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` |
+| `Command` | The redis-cli command to be executed on the cache, such as: `DEL`. |
++
+## Annotations
+
+The `RedisOutput` annotation supports these properties:
+
+| Property | Description |
+|-||
+| `name` | The name of the specific output binding. |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` |
+| `command` | The redis-cli command to be executed on the cache, such as: `DEL`. |
++
+## Configuration
+
+The following table explains the binding configuration properties that you set in the _function.json_ file.
+
+| Property | Description |
+|-||
+| `name` | The name of the specific output binding. |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` |
+| `command` | The redis-cli command to be executed on the cache, such as: `DEL`. |
++
+See the [Example section](#example) for complete examples.
+
+## Usage
+
+The output binding returns a string, which is the key of the cache entry on which to apply the specified command.
+
+There are three types of connections that are allowed from an Azure Functions instance to a Redis cache in your deployments. For local development, you can also use service principal secrets. Use the `appsettings` to configure each of the following types of client authentication, assuming `Connection` was set to `Redis` in the function.
+
+## Related content
+
+- [Introduction to Azure Functions](functions-overview.md)
+- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
+- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis connection string](functions-bindings-cache.md#redis-connection-string)
+- [Multiple output bindings](dotnet-isolated-process-guide.md#multiple-output-bindings)
azure-functions Functions Bindings Cache Trigger Redislist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md
Title: Using RedisListTrigger Azure Function (preview)
-description: Learn how to use RedisListTrigger Azure Functions
+ Title: RedisListTrigger for Azure Functions (preview)
+description: Learn how to use the RedisListTrigger in Azure Functions with Azure Cache for Redis.
zone_pivot_groups: programming-languages-set-functions-lang-workers
Previously updated : 08/07/2023 Last updated : 02/27/2024
-# RedisListTrigger Azure Function (preview)
+# RedisListTrigger for Azure Functions (preview)
The `RedisListTrigger` pops new elements from a list and surfaces those entries to the function.
+For more information about Azure Cache for Redis triggers and bindings, see [Redis Extension for Azure Functions](https://github.com/Azure/azure-functions-redis-extension/tree/main).
+
## Scope of availability for functions triggers

|Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash |
The `RedisListTrigger` pops new elements from a list and surfaces those entries
| Lists | Yes | Yes | Yes |

> [!IMPORTANT]
-> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
+> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
>
+<! Replace with the following when Node.js v4 is supported:
+-->
+<! Replace with the following when Python v2 is supported:
+-->
+
## Example

::: zone pivot="programming-language-csharp"
-The following sample polls the key `listTest` at a localhost Redis instance at `127.0.0.1:6379`:
+> [!IMPORTANT]
+>
+>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions.
+
+The following sample polls the key `listTest`:
### [Isolated worker model](#tab/isolated-process)
-The isolated process examples aren't available in preview.
+```csharp
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisListTrigger
+{
+ public class SimpleListTrigger
+ {
+ private readonly ILogger<SimpleListTrigger> logger;
+
+ public SimpleListTrigger(ILogger<SimpleListTrigger> logger)
+ {
+ this.logger = logger;
+ }
+
+ [Function(nameof(SimpleListTrigger))]
+ public void Run(
+ [RedisListTrigger(Common.connectionStringSetting, "listTest")] string entry)
+ {
+ logger.LogInformation(entry);
+ }
+ }
+}
+
+```
### [In-process model](#tab/in-process)

```csharp
-[FunctionName(nameof(ListsTrigger))]
-public static void ListsTrigger(
- [RedisListTrigger("Redis", "listTest")] string entry,
- ILogger logger)
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisListTrigger
{
- logger.LogInformation($"The entry pushed to the list listTest: '{entry}'");
+ internal class SimpleListTrigger
+ {
+ [FunctionName(nameof(SimpleListTrigger))]
+ public static void Run(
+ [RedisListTrigger(Common.connectionStringSetting, "listTest")] string entry,
+ ILogger logger)
+ {
+ logger.LogInformation(entry);
+ }
+ }
}
```
public static void ListsTrigger(
The following sample polls the key `listTest` at a localhost Redis instance at `redisLocalhost`:

```java
- @FunctionName("ListTrigger")
- public void ListTrigger(
+package com.function.RedisListTrigger;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.redis.annotation.*;
+
+public class SimpleListTrigger {
+ @FunctionName("SimpleListTrigger")
+ public void run(
@RedisListTrigger(
- name = "entry",
- connectionStringSetting = "redisLocalhost",
+ name = "req",
+ connection = "redisConnectionString",
key = "listTest",
- pollingIntervalInMs = 100,
- messagesPerWorker = 10,
- count = 1,
- listPopFromBeginning = false)
- String entry,
+ pollingIntervalInMs = 1000,
+ maxBatchSize = 1)
+ String message,
final ExecutionContext context) {
- context.getLogger().info(entry);
+ context.getLogger().info(message);
}
+}
```
::: zone-end
::: zone pivot="programming-language-javascript"
-### [v3](#tab/node-v3)
+### [Model v3](#tab/node-v3)
This sample uses the same `index.js` file, with binding data in the `function.json` file.
module.exports = async function (context, entry) {
From `function.json`, here's the binding data:
-```javascript
+```json
{
- "bindings": [
- {
- "type": "redisListTrigger",
- "listPopFromBeginning": true,
- "connectionStringSetting": "redisLocalhost",
- "key": "listTest",
- "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
- "name": "entry",
- "direction": "in"
- }
- ],
- "scriptFile": "index.js"
+ "bindings": [
+ {
+ "type": "redisListTrigger",
+ "listPopFromBeginning": true,
+ "connection": "redisConnectionString",
+ "key": "listTest",
+ "pollingIntervalInMs": 1000,
+ "maxBatchSize": 16,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "index.js"
}
```
-### [v4](#tab/node-v4)
+### [Model v4](#tab/node-v4)
-The JavaScript v4 programming model example isn't available in preview.
+<! Replace with the following when Node.js v4 is supported:
+-->
Write-Host $entry
From `function.json`, here's the binding data:
-```powershell
+```json
{
- "bindings": [
- {
- "type": "redisListTrigger",
- "listPopFromBeginning": true,
- "connectionStringSetting": "redisLocalhost",
- "key": "listTest",
- "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
- "name": "entry",
- "direction": "in"
- }
- ],
- "scriptFile": "run.ps1"
+ "bindings": [
+ {
+ "type": "redisListTrigger",
+ "listPopFromBeginning": true,
+ "connection": "redisConnectionString",
+ "key": "listTest",
+ "pollingIntervalInMs": 1000,
+ "maxBatchSize": 16,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "run.ps1"
}
```
From `function.json`, here's the binding data:
```json {
- "bindings": [
- {
- "type": "redisListTrigger",
- "listPopFromBeginning": true,
- "connectionStringSetting": "redisLocalhost",
- "key": "listTest",
- "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
- "name": "entry",
- "direction": "in"
- }
- ],
- "scriptFile": "__init__.py"
+ "bindings": [
+ {
+ "type": "redisListTrigger",
+ "listPopFromBeginning": true,
+ "connection": "redisConnectionString",
+ "key": "listTest",
+ "pollingIntervalInMs": 1000,
+ "maxBatchSize": 16,
+ "name": "entry",
+ "direction": "in"
+ }
+ ],
+ "scriptFile": "__init__.py"
}
```

### [v2](#tab/python-v2)
-The Python v2 programming model example isn't available in preview.
+<! Replace with the following when Python v2 is supported:
+-->
The Python v2 programming model example isn't available in preview.
| Parameter | Description | Required | Default |
|||:--:|--:|
-| `ConnectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`). | Yes | |
+| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | |
| `Key` | Key to read from. This field can be resolved using `INameResolver`. | Yes | |
| `PollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` |
| `MessagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Optional | `100` |
-| `Count` | Number of entries to pop from Redis at one time. These are processed in parallel. Only supported on Redis 6.2+ using the `COUNT` argument in [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/). | Optional | `10` |
+| `Count` | Number of entries to pop from Redis at one time. Entries are processed in parallel. Only supported on Redis 6.2+ using the `COUNT` argument in [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/). | Optional | `10` |
| `ListPopFromBeginning` | Determines whether to pop entries from the beginning using [`LPOP`](https://redis.io/commands/lpop/), or to pop entries from the end using [`RPOP`](https://redis.io/commands/rpop/). | Optional | `true` |

::: zone-end
The Python v2 programming model example isn't available in preview.
| Parameter | Description | Required | Default |
||-|:--:|--:|
| `name` | "entry" | | |
-| `connectionStringSetting` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password...` | Yes | |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...`| Yes | |
| `key` | This field can be resolved using `INameResolver`. | Yes | |
| `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` |
| `messagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Optional | `100` |
The following table explains the binding configuration properties that you set i
||-|:--:|--:|
| `type` | Name of the trigger. | No | |
| `listPopFromBeginning` | Determines whether to pop entries from the beginning using `LPOP` or from the end using `RPOP`. | Yes | `true` |
-| `connectionString` | The name of the setting in the `appsettings` that contains the cache connection string. For example: `<cacheName>.redis.cache.windows.net:6380,password...` | No | |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password...` | No | |
| `key` | This field can be resolved using `INameResolver`. | No | |
| `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Yes | `1000` |
| `messagesPerWorker` | How many messages each functions instance should process. Used to determine how many instances the function should scale to. | Yes | `100` |
-| `count` | Number of entries to read from the cache at one time. These are processed in parallel. | Yes | `10` |
+| `count` | Number of entries to read from the cache at one time. Entries are processed in parallel. | Yes | `10` |
| `name` | ? | Yes | |
| `direction` | Set to `in`. | No | |
See the Example section for complete examples.
The `RedisListTrigger` pops new elements from a list and surfaces those entries to the function. The trigger polls Redis at a configurable fixed interval, and uses [`LPOP`](https://redis.io/commands/lpop/) and [`RPOP`](https://redis.io/commands/rpop/) to pop entries from the lists.
-### Output
--
-> [!NOTE]
-> Once the `RedisListTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
-
-StackExchange.Redis.RedisValue
-
-| Output Type | Description |
-|||
-| [`StackExchange.Redis.RedisValue`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/RedisValue.cs) | `string`, `byte[]`, `ReadOnlyMemory<byte>`: The entry from the list. |
-| `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` to a custom type. |
--
-> [!NOTE]
-> Once the `RedisListTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
-
-| Output Type | Description |
+| Type | Description |
|-|--|
| `byte[]` | The message from the channel. |
| `string` | The message from the channel. |
| `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. |

----

::: zone-end

## Related content
StackExchange.Redis.RedisValue
- [Introduction to Azure Functions](functions-overview.md)
- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started)
- [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis connection string](functions-bindings-cache.md#redis-connection-string)
- [Redis lists](https://redis.io/docs/data-types/lists/)
azure-functions Functions Bindings Cache Trigger Redispubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md
Title: Using RedisPubSubTrigger Azure Function (preview)
-description: Learn how to use RedisPubSubTrigger Azure Function
+ Title: RedisPubSubTrigger for Azure Functions (preview)
+description: Learn how to use the RedisPubSubTrigger for Azure Functions with Azure Cache for Redis.
zone_pivot_groups: programming-languages-set-functions-lang-workers
Previously updated : 08/07/2023 Last updated : 02/27/2024
-# RedisPubSubTrigger Azure Function (preview)
+# RedisPubSubTrigger for Azure Functions (preview)
Redis features [publish/subscribe functionality](https://redis.io/docs/interact/pubsub/) that enables messages to be sent to Redis and broadcast to subscribers.
+For more information about Azure Cache for Redis triggers and bindings, see [Redis Extension for Azure Functions](https://github.com/Azure/azure-functions-redis-extension/tree/main).
+
## Scope of availability for functions triggers

|Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash |
Redis features [publish/subscribe functionality](https://redis.io/docs/interact/
> This trigger isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel. >
+<! Replace with the following when Node.js v4 is supported:
+-->
+<! Replace with the following when Node.js v4 is supported:
+-->
## Examples

::: zone pivot="programming-language-csharp"
[!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)]
+> [!IMPORTANT]
+>
+>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions.
+ ### [Isolated worker model](#tab/isolated-process)
-The isolated process examples aren't available in preview.
+This sample listens to the channel `pubsubTest`.
```csharp
-//TBD
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisPubSubTrigger
+{
+ internal class SimplePubSubTrigger
+ {
+ private readonly ILogger<SimplePubSubTrigger> logger;
+
+ public SimplePubSubTrigger(ILogger<SimplePubSubTrigger> logger)
+ {
+ this.logger = logger;
+ }
+
+ [Function(nameof(SimplePubSubTrigger))]
+ public void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "pubsubTest")] string message)
+ {
+ logger.LogInformation(message);
+ }
+ }
+}
+
+```
+
+This sample listens to any keyspace notifications for the key `keyspaceTest`.
+
+```csharp
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisPubSubTrigger
+{
+ internal class KeyspaceTrigger
+ {
+ private readonly ILogger<KeyspaceTrigger> logger;
+
+ public KeyspaceTrigger(ILogger<KeyspaceTrigger> logger)
+ {
+ this.logger = logger;
+ }
+
+ [Function(nameof(KeyspaceTrigger))]
+ public void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyspace@0__:keyspaceTest")] string message)
+ {
+ logger.LogInformation(message);
+ }
+ }
+}
+
+```
+
+This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/).
+
+```csharp
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisPubSubTrigger
+{
+ internal class KeyeventTrigger
+ {
+ private readonly ILogger<KeyeventTrigger> logger;
+
+ public KeyeventTrigger(ILogger<KeyeventTrigger> logger)
+ {
+ this.logger = logger;
+ }
+
+ [Function(nameof(KeyeventTrigger))]
+ public void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:del")] string message)
+ {
+ logger.LogInformation($"Key '{message}' deleted.");
+ }
+ }
+}
+ ``` ### [In-process model](#tab/in-process)
The isolated process examples aren't available in preview.
This sample listens to the channel `pubsubTest`. ```csharp
-[FunctionName(nameof(PubSubTrigger))]
-public static void PubSubTrigger(
- [RedisPubSubTrigger("redisConnectionString", "pubsubTest")] string message,
- ILogger logger)
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger
{
- logger.LogInformation(message);
+ internal class SimplePubSubTrigger
+ {
+ [FunctionName(nameof(SimplePubSubTrigger))]
+ public static void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "pubsubTest")] string message,
+ ILogger logger)
+ {
+ logger.LogInformation(message);
+ }
+ }
} ```
-This sample listens to any keyspace notifications for the key `myKey`.
+This sample listens to any keyspace notifications for the key `keyspaceTest`.
```csharp
-[FunctionName(nameof(KeyspaceTrigger))]
-public static void KeyspaceTrigger(
- [RedisPubSubTrigger("redisConnectionString", "__keyspace@0__:myKey")] string message,
- ILogger logger)
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger
{
- logger.LogInformation(message);
+ internal class KeyspaceTrigger
+ {
+ [FunctionName(nameof(KeyspaceTrigger))]
+ public static void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyspace@0__:keyspaceTest")] string message,
+ ILogger logger)
+ {
+ logger.LogInformation(message);
+ }
+ }
} ``` This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/). ```csharp
-[FunctionName(nameof(KeyeventTrigger))]
-public static void KeyeventTrigger(
- [RedisPubSubTrigger("redisConnectionString", "__keyevent@0__:del")] string message,
- ILogger logger)
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisPubSubTrigger
{
- logger.LogInformation(message);
+ internal class KeyeventTrigger
+ {
+ [FunctionName(nameof(KeyeventTrigger))]
+ public static void Run(
+ [RedisPubSubTrigger(Common.connectionStringSetting, "__keyevent@0__:del")] string message,
+ ILogger logger)
+ {
+ logger.LogInformation($"Key '{message}' deleted.");
+ }
+ }
} ```
public static void KeyeventTrigger(
This sample listens to the channel `pubsubTest`. ```java
-@FunctionName("PubSubTrigger")
- public void PubSubTrigger(
+package com.function.RedisPubSubTrigger;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.redis.annotation.*;
+
+public class SimplePubSubTrigger {
+ @FunctionName("SimplePubSubTrigger")
+ public void run(
@RedisPubSubTrigger(
- name = "message",
- connectionStringSetting = "redisConnectionString",
+ name = "req",
+ connection = "redisConnectionString",
channel = "pubsubTest") String message, final ExecutionContext context) { context.getLogger().info(message); }
+}
``` This sample listens to any keyspace notifications for the key `myKey`. ```java
-@FunctionName("KeyspaceTrigger")
- public void KeyspaceTrigger(
+package com.function.RedisPubSubTrigger;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.redis.annotation.*;
+
+public class KeyspaceTrigger {
+ @FunctionName("KeyspaceTrigger")
+ public void run(
@RedisPubSubTrigger(
- name = "message",
- connectionStringSetting = "redisConnectionString",
- channel = "__keyspace@0__:myKey")
+ name = "req",
+ connection = "redisConnectionString",
+ channel = "__keyspace@0__:keyspaceTest")
String message, final ExecutionContext context) { context.getLogger().info(message); }
+}
``` This sample listens to any `keyevent` notifications for the delete command [`DEL`](https://redis.io/commands/del/). ```java
- @FunctionName("KeyeventTrigger")
- public void KeyeventTrigger(
+package com.function.RedisPubSubTrigger;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.redis.annotation.*;
+
+public class KeyeventTrigger {
+ @FunctionName("KeyeventTrigger")
+ public void run(
@RedisPubSubTrigger(
- name = "message",
- connectionStringSetting = "redisConnectionString",
+ name = "req",
+ connection = "redisConnectionString",
channel = "__keyevent@0__:del") String message, final ExecutionContext context) { context.getLogger().info(message); }
+}
``` ::: zone-end ::: zone pivot="programming-language-javascript"
-### [v3](#tab/node-v3)
+### [Model v3](#tab/node-v3)
This sample uses the same `index.js` file, with binding data in the `function.json` file determining on which channel the trigger occurs.
Here's binding data to listen to the channel `pubsubTest`.
"bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
+ "connection": "redisConnectionString",
"channel": "pubsubTest", "name": "message", "direction": "in"
Here's binding data to listen to the channel `pubsubTest`.
} ```
-Here's binding data to listen to keyspace notifications for the key `myKey`.
+Here's binding data to listen to keyspace notifications for the key `keyspaceTest`.
```json { "bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
- "channel": "__keyspace@0__:myKey",
+ "connection": "redisConnectionString",
+ "channel": "__keyspace@0__:keyspaceTest",
"name": "message", "direction": "in" }
Here's binding data to listen to `keyevent` notifications for the delete command
"bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
+ "connection": "redisConnectionString",
"channel": "__keyevent@0__:del", "name": "message", "direction": "in"
Here's binding data to listen to `keyevent` notifications for the delete command
], "scriptFile": "index.js" }+ ```
-### [v4](#tab/node-v4)
-The JavaScript v4 programming model example isn't available in preview.
+### [Model v4](#tab/node-v4)
+
+<! Replace with the following when Node.js v4 is supported:
+-->
::: zone-end
Here's binding data to listen to the channel `pubsubTest`.
"bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
+ "connection": "redisConnectionString",
"channel": "pubsubTest", "name": "message", "direction": "in"
Here's binding data to listen to the channel `pubsubTest`.
} ```
-Here's binding data to listen to keyspace notifications for the key `myKey`.
+Here's binding data to listen to keyspace notifications for the key `keyspaceTest`.
```json { "bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
- "channel": "__keyspace@0__:myKey",
+ "connection": "redisConnectionString",
+ "channel": "__keyspace@0__:keyspaceTest",
"name": "message", "direction": "in" }
Here's binding data to listen to `keyevent` notifications for the delete command
"bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
+ "connection": "redisConnectionString",
"channel": "__keyevent@0__:del", "name": "message", "direction": "in"
Here's binding data to listen to the channel `pubsubTest`.
"bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
+ "connection": "redisConnectionString",
"channel": "pubsubTest", "name": "message", "direction": "in"
Here's binding data to listen to the channel `pubsubTest`.
} ```
-Here's binding data to listen to keyspace notifications for the key `myKey`.
+Here's binding data to listen to keyspace notifications for the key `keyspaceTest`.
```json { "bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
- "channel": "__keyspace@0__:myKey",
+ "connection": "redisConnectionString",
+ "channel": "__keyspace@0__:keyspaceTest",
"name": "message", "direction": "in" }
Here's binding data to listen to `keyevent` notifications for the delete command
"bindings": [ { "type": "redisPubSubTrigger",
- "connectionStringSetting": "redisConnectionString",
+ "connection": "redisConnectionString",
"channel": "__keyevent@0__:del", "name": "message", "direction": "in"
Here's binding data to listen to `keyevent` notifications for the delete command
### [v2](#tab/python-v2)
-The Python v2 programming model example isn't available in preview.
+<!-- Replace with the following when Python v2 is supported:
+-->
The Python v2 programming model example isn't available in preview.
| Parameter | Description | Required | Default | ||--|:--:| --:|
-| `ConnectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string. For example,`<cacheName>.redis.cache.windows.net:6380,password=...`. | Yes | |
+| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
| `Channel` | The pub sub channel that the trigger should listen to. Supports glob-style channel patterns. This field can be resolved using `INameResolver`. | Yes | | ::: zone-end
The Python v2 programming model example isn't available in preview.
| Parameter | Description | Required | Default | ||--|: --:| --:| | `name` | Name of the variable holding the value returned by the function. | Yes | |
-| `connectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`) | Yes | |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password=...`| Yes | |
| `channel` | The pub sub channel that the trigger should listen to. Supports glob-style channel patterns. | Yes | | ::: zone-end
The Python v2 programming model example isn't available in preview.
| function.json property | Description | Required | Default | ||--| :--:| --:|
-| `type` | Trigger type. For the pub sub trigger, this is `redisPubSubTrigger`. | Yes | |
-| `connectionStringSetting` | Name of the setting in the `appsettings` that holds the cache connection string (for example, `<cacheName>.redis.cache.windows.net:6380,password=...`) | Yes | |
+| `type` | Trigger type. For the pub sub trigger, the type is `redisPubSubTrigger`. | Yes | |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password=...`| Yes | |
| `channel` | Name of the pub sub channel that is being subscribed to | Yes | | | `name` | Name of the variable holding the value returned by the function. | Yes | | | `direction` | Must be set to `in`. | Yes | |
The Python v2 programming model example isn't available in preview.
::: zone-end >[!IMPORTANT]
->The `connectionStringSetting` parameter does not hold the Redis cache connection string itself. Instead, it points to the name of the environment variable that holds the connection string. This makes the application more secure. For more information, see [Redis connection string](functions-bindings-cache.md#redis-connection-string).
+>The `connection` parameter does not hold the Redis cache connection string itself. Instead, it points to the name of the environment variable that holds the connection string. This makes the application more secure. For more information, see [Redis connection string](functions-bindings-cache.md#redis-connection-string).
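For instance, with the binding examples above that pass `redisConnectionString` as the `connection` value, the app settings (or the `Values` section of `local.settings.json` during local development) would hold an entry along these lines; the cache name and access key are placeholders, not values from this article.

```json
"Values": {
  "redisConnectionString": "<cacheName>.redis.cache.windows.net:6380,password=<access-key>"
}
```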
> ## Usage
Redis features [publish/subscribe functionality](https://redis.io/docs/interact/
- The `RedisPubSubTrigger` isn't capable of listening to [keyspace notifications](https://redis.io/docs/manual/keyspace-notifications/) on clustered caches. - Basic tier functions don't support triggering on `keyspace` or `keyevent` notifications through the `RedisPubSubTrigger`. - The `RedisPubSubTrigger` isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel.-- Functions with the `RedisPubSubTrigger` shouldn't be scaled out to multiple instances. Each instance listens and processes each pub sub message, resulting in duplicate processing
+- Functions with the `RedisPubSubTrigger` shouldn't be scaled out to multiple instances. Each instance listens and processes each pub sub message, resulting in duplicate processing.
> [!WARNING] > This trigger isn't supported on a [consumption plan](/azure/azure-functions/consumption-plan) because Redis PubSub requires clients to always be actively listening to receive all messages. For consumption plans, your function might miss certain messages published to the channel.
Because these events are published on pub/sub channels, the `RedisPubSubTrigger`
> [!IMPORTANT] > In Azure Cache for Redis, `keyspace` events must be enabled before notifications are published. For more information, see [Advanced Settings](/azure/azure-cache-for-redis/cache-configure#keyspace-notifications-advanced-settings).
-## Output
- ::: zone pivot="programming-language-csharp"
-> [!NOTE]
-> Once the `RedisPubSubTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
+| Type | Description|
+|||
+| `string` | The channel message serialized as JSON (UTF-8 encoded for byte types) in the format that follows. |
+| `Custom`| The trigger uses Json.NET serialization to map the message from the channel into the given custom type. |
+JSON string format:
-| Output Type | Description|
-|||
-| [`StackExchange.Redis.ChannelMessage`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/ChannelMessageQueue.cs)| The value returned by `StackExchange.Redis`. |
-| [`StackExchange.Redis.RedisValue`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/RedisValue.cs)| `string`, `byte[]`, `ReadOnlyMemory<byte>`: The message from the channel. |
-| `Custom`| The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. |
+```json
+{
+ "SubscriptionChannel":"__keyspace@0__:*",
+ "Channel":"__keyspace@0__:mykey",
+ "Message":"set"
+}
+
+```
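For comparison, when the binding subscribes to a keyevent channel such as `__keyevent@0__:del` (used in the binding examples above), Redis publishes the name of the affected key as the message. A hypothetical payload in the same serialized format might look like the following sketch; the key name `keyspaceTest` is illustrative only.

```json
{
  "SubscriptionChannel":"__keyevent@0__:del",
  "Channel":"__keyevent@0__:del",
  "Message":"keyspaceTest"
}
```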
::: zone-end ::: zone pivot="programming-language-java,programming-language-javascript,programming-language-powershell,programming-language-python"
-> [!NOTE]
-> Once the `RedisPubSubTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
-
-| Output Type | Description |
+| Type | Description |
|-|--|
-| `byte[]` | The message from the channel. |
-| `string` | The message from the channel. |
+| `string` | The channel message serialized as JSON (UTF-8 encoded for byte types) in the format that follows. |
| `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. | --
+```json
+{
+ "SubscriptionChannel":"__keyspace@0__:*",
+ "Channel":"__keyspace@0__:mykey",
+ "Message":"set"
+}
+```
::: zone-end
Because these events are published on pub/sub channels, the `RedisPubSubTrigger`
- [Introduction to Azure Functions](functions-overview.md) - [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started) - [Tutorial: Create a write-behind cache by using Azure Functions and Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis connection string](functions-bindings-cache.md#redis-connection-string)
- [Redis pub sub messages](https://redis.io/docs/manual/pubsub/)
azure-functions Functions Bindings Cache Trigger Redisstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md
Title: Using RedisStreamTrigger Azure Function (preview)
-description: Learn how to use RedisStreamTrigger Azure Function
+ Title: RedisStreamTrigger for Azure Functions (preview)
+description: Learn how to use RedisStreamTrigger Azure Function for Azure Cache for Redis.
zone_pivot_groups: programming-languages-set-functions-lang-workers
Previously updated : 08/07/2023 Last updated : 02/27/2024
-# RedisStreamTrigger Azure Function (preview)
+# RedisStreamTrigger for Azure Functions (preview)
The `RedisStreamTrigger` reads new entries from a stream and surfaces those elements to the function.
+For more information, see [RedisStreamTrigger](https://github.com/Azure/azure-functions-redis-extension/tree/mapalan/UpdateReadMe/samples/dotnet/RedisStreamTrigger).
+ | Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash | ||:--:|:--:|:-:| | Streams | Yes | Yes | Yes | > [!IMPORTANT]
-> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
+> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
>
+<!-- Replace with the following when Node.js v4 is supported:
+-->
+<!-- Replace with the following when Python v2 is supported:
+-->
+ ## Example
+> [!IMPORTANT]
+>
+>For .NET functions, using the _isolated worker_ model is recommended over the _in-process_ model. For a comparison of the _in-process_ and _isolated worker_ models, see differences between the _isolated worker_ model and the _in-process_ model for .NET on Azure Functions.
+ ::: zone pivot="programming-language-csharp" [!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)] ### [Isolated worker model](#tab/isolated-process)
-The isolated process examples aren't available in preview.
```csharp
-//TBD
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisStreamTrigger
+{
+ internal class SimpleStreamTrigger
+ {
+ private readonly ILogger<SimpleStreamTrigger> logger;
+
+ public SimpleStreamTrigger(ILogger<SimpleStreamTrigger> logger)
+ {
+ this.logger = logger;
+ }
+
+ [Function(nameof(SimpleStreamTrigger))]
+ public void Run(
+ [RedisStreamTrigger(Common.connectionStringSetting, "streamKey")] string entry)
+ {
+ logger.LogInformation(entry);
+ }
+ }
+}
``` ### [In-process model](#tab/in-process) ```csharp
-[FunctionName(nameof(StreamsTrigger))]
-public static void StreamsTrigger(
- [RedisStreamTrigger("Redis", "streamTest")] string entry,
- ILogger logger)
+using Microsoft.Extensions.Logging;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisStreamTrigger
{
- logger.LogInformation($"The entry pushed to the list listTest: '{entry}'");
+ internal class SimpleStreamTrigger
+ {
+ [FunctionName(nameof(SimpleStreamTrigger))]
+ public static void Run(
+ [RedisStreamTrigger(Common.connectionStringSetting, "streamKey")] string entry,
+ ILogger logger)
+ {
+ logger.LogInformation(entry);
+ }
+ }
}+ ```
public static void StreamsTrigger(
```java
- @FunctionName("StreamTrigger")
- public void StreamTrigger(
+package com.function.RedisStreamTrigger;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.redis.annotation.*;
+
+public class SimpleStreamTrigger {
+ @FunctionName("SimpleStreamTrigger")
+ public void run(
@RedisStreamTrigger(
- name = "entry",
- connectionStringSetting = "redisLocalhost",
+ name = "req",
+ connection = "redisConnectionString",
key = "streamTest",
- pollingIntervalInMs = 100,
- messagesPerWorker = 10,
- count = 1,
- deleteAfterProcess = true)
- String entry,
+ pollingIntervalInMs = 1000,
+ maxBatchSize = 1)
+ String message,
final ExecutionContext context) {
- context.getLogger().info(entry);
+ context.getLogger().info(message);
}
+}
``` ::: zone-end ::: zone pivot="programming-language-javascript"
-### [v3](#tab/node-v3)
+### [Model v3](#tab/node-v3)
This sample uses the same `index.js` file, with binding data in the `function.json` file.
From `function.json`, here's the binding data:
"bindings": [ { "type": "redisStreamTrigger",
- "deleteAfterProcess": false,
- "connectionStringSetting": "redisLocalhost",
+ "connection": "redisConnectionString",
"key": "streamTest", "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
+ "maxBatchSize": 16,
"name": "entry", "direction": "in" }
From `function.json`, here's the binding data:
} ```
-### [v4](#tab/node-v4)
+### [Model v4](#tab/node-v4)
-The JavaScript v4 programming model example isn't available in preview.
+ <!-- Replace with the following when Node.js v4 is supported:
+ [!INCLUDE [functions-nodejs-model-tabs-description](../../includes/functions-nodejs-model-tabs-description.md)]
+ -->
+ [!INCLUDE [functions-nodejs-model-tabs-redis-preview](../../includes/functions-nodejs-model-tabs-redis-preview.md)]
Write-Host ($entry | ConvertTo-Json)
From `function.json`, here's the binding data:
-```powershell
+```json
{ "bindings": [ { "type": "redisStreamTrigger",
- "deleteAfterProcess": false,
- "connectionStringSetting": "redisLocalhost",
+ "connection": "redisConnectionString",
"key": "streamTest", "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
+ "maxBatchSize": 16,
"name": "entry", "direction": "in" }
From `function.json`, here's the binding data:
"bindings": [ { "type": "redisStreamTrigger",
- "deleteAfterProcess": false,
- "connectionStringSetting": "redisLocalhost",
+ "connection": "redisConnectionString",
"key": "streamTest", "pollingIntervalInMs": 1000,
- "messagesPerWorker": 100,
- "count": 10,
+ "maxBatchSize": 16,
"name": "entry", "direction": "in" }
From `function.json`, here's the binding data:
### [v2](#tab/python-v2)
-The Python v2 programming model example isn't available in preview.
+<!-- Replace with the following when Python v2 is supported:
+-->
The Python v2 programming model example isn't available in preview.
| Parameters | Description | Required | Default | ||-|:--:|--:|
-| `ConnectionStringSetting` | The name of the setting in the `appsettings` that contains cache connection string For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
+| `Connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password=...`| Yes | |
| `Key` | Key to read from. | Yes | | | `PollingIntervalInMs` | How often to poll the Redis server in milliseconds. | Optional | `1000` | | `MessagesPerWorker` | The number of messages each functions worker should process. Used to determine how many workers the function should scale to. | Optional | `100` |
The Python v2 programming model example isn't available in preview.
| Parameter | Description | Required | Default | ||-|:--:|--:| | `name` | `entry` | Yes | |
-| `connectionStringSetting` | The name of the setting in the `appsettings` that contains cache connection string For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
| `key` | Key to read from. | Yes | | | `pollingIntervalInMs` | How frequently to poll Redis, in milliseconds. | Optional | `1000` |
-| `messagesPerWorker` | The number of messages each functions worker should process. It's used to determine how many workers the function should scale to | Optional | `100` |
-| `count` | Number of entries to read from Redis at one time. These are processed in parallel. | Optional | `10` |
+| `messagesPerWorker` | The number of messages each functions worker should process. It's used to determine how many workers the function should scale to. | Optional | `100` |
+| `count` | Number of entries to read from Redis at one time. Entries are processed in parallel. | Optional | `10` |
| `deleteAfterProcess` | Whether to delete the stream entries after the function has run. | Optional | `false` | ::: zone-end
The following table explains the binding configuration properties that you set i
||-|:--:|--:| | `type` | | Yes | | | `deleteAfterProcess` | | Optional | `false` |
-| `connectionStringSetting` | The name of the setting in the `appsettings` that contains cache connection string For example: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
+| `connection` | The name of the [application setting](functions-how-to-use-azure-function-app-settings.md#settings) that contains the cache connection string, such as: `<cacheName>.redis.cache.windows.net:6380,password=...` | Yes | |
| `key` | The key to read from. | Yes | | | `pollingIntervalInMs` | How often to poll Redis in milliseconds. | Optional | `1000` | | `messagesPerWorker` | The number of messages each functions worker should process. Used to determine how many workers the function should scale to. | Optional | `100` |
The `RedisStreamTrigger` Azure Function reads new entries from a stream and surf
The trigger polls Redis at a configurable fixed interval, and uses [`XREADGROUP`](https://redis.io/commands/xreadgroup/) to read elements from the stream.
-The consumer group for all function instances is the `ID` of the function. For example, `Microsoft.Azure.WebJobs.Extensions.Redis.Samples.RedisSamples.StreamTrigger` for the `StreamTrigger` sample. Each function creates a new random GUID to use as its consumer name within the group to ensure that scaled out instances of the function don't read the same messages from the stream.
+The consumer group for all instances of a function is the name of the function, that is, `SimpleStreamTrigger` for the [StreamTrigger sample](https://github.com/Azure/azure-functions-redis-extension/blob/main/samples/dotnet/RedisStreamTrigger/SimpleStreamTrigger.cs).
-### Output
+Each functions instance uses the [`WEBSITE_INSTANCE_ID`](/azure/app-service/reference-app-settings?tabs=kudu%2Cdotnet#scaling) or generates a random GUID to use as its consumer name within the group to ensure that scaled out instances of the function don't read the same messages from the stream.
-
-> [!NOTE]
-> Once the `RedisStreamTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
+<!-- ::: zone pivot="programming-language-csharp"
-| Output Type | Description |
+| Type | Description |
|-|--| | [`StackExchange.Redis.ChannelMessage`](https://github.com/StackExchange/StackExchange.Redis/blob/main/src/StackExchange.Redis/ChannelMessageQueue.cs) | The value returned by `StackExchange.Redis`. | | `StackExchange.Redis.NameValueEntry[]`, `Dictionary<string, string>` | The values contained within the entry. | | `string, byte[], ReadOnlyMemory<byte>` | The stream entry serialized as JSON (UTF-8 encoded for byte types) in the following format: `{"Id":"1658354934941-0","Values":{"field1":"value1","field2":"value2","field3":"value3"}}` | | `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. | -
-> [!NOTE]
-> Once the `RedisStreamTrigger` becomes generally available, the following information will be moved to a dedicated Output page.
-| Output Type | Description |
+| Type | Description |
|-|--| | `byte[]` | The message from the channel. | | `string` | The message from the channel. | | `Custom` | The trigger uses Json.NET serialization to map the message from the channel from a `string` into a custom type. | --- ::: zone-end ## Related content
The consumer group for all function instances is the `ID` of the function. For e
- [Introduction to Azure Functions](functions-overview.md) - [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started) - [Using Azure Functions and Azure Cache for Redis to create a write-behind cache](/azure/azure-cache-for-redis/cache-tutorial-write-behind)
+- [Redis connection string](functions-bindings-cache.md#redis-connection-string)
- [Redis streams](https://redis.io/docs/data-types/streams/)
azure-functions Functions Bindings Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md
Previously updated : 07/26/2023 Last updated : 03/01/2024 # Overview of Azure functions for Azure Cache for Redis (preview)
Azure Cache for Redis can be used as a trigger for Azure Functions, allowing you
You can integrate Azure Cache for Redis and Azure Functions to build functions that react to events from Azure Cache for Redis or external systems.
-| Action | Direction | Type | Preview |
-||--|||
-| Triggers on Redis pub sub messages | N/A | [RedisPubSubTrigger](functions-bindings-cache-trigger-redispubsub.md) | Yes|
-| Triggers on Redis lists | N/A | [RedisListsTrigger](functions-bindings-cache-trigger-redislist.md) | Yes |
-| Triggers on Redis streams | N/A | [RedisStreamsTrigger](functions-bindings-cache-trigger-redisstream.md) | Yes |
+| Action | Direction | Support level |
+||--|--|
+| [Trigger on Redis pub sub messages](functions-bindings-cache-trigger-redispubsub.md) | Trigger | Preview |
+| [Trigger on Redis lists](functions-bindings-cache-trigger-redislist.md) | Trigger | Preview |
+| [Trigger on Redis streams](functions-bindings-cache-trigger-redisstream.md) | Trigger | Preview |
+| [Read a cached value](functions-bindings-cache-input.md) | Input | Preview |
+| [Write values to cache](functions-bindings-cache-output.md) | Output | Preview |
-## Scope of availability for functions triggers
+## Scope of availability for functions triggers and bindings
|Tier | Basic | Standard, Premium | Enterprise, Enterprise Flash | ||::|::|::| |Pub/Sub | Yes | Yes | Yes | |Lists | Yes | Yes | Yes | |Streams | Yes | Yes | Yes |
+|Bindings | Yes | Yes | Yes |
> [!IMPORTANT]
-> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md).
->
+> Redis triggers are currently only supported for functions running in either an [Elastic Premium plan](functions-premium-plan.md) or a dedicated [App Service plan](./dedicated-plan.md).
::: zone pivot="programming-language-csharp"
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell"
-1. Add the extension bundle by adding or replacing the following code in your _host.json_ file:
+Add the extension bundle by adding or replacing the following code in your _host.json_ file:
- <!-- I don't see this in the samples. -->
- ```json
+ ```json
{ "version": "2.0", "extensionBundle": {
dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
"version": "[4.11.*, 5.0.0)" } }+ ```
- >[!WARNING]
- >The Redis extension is currently only available in a preview bundle release.
- >
+>[!WARNING]
+>The Redis extension is currently only available in a preview bundle release.
+>
::: zone-end ## Redis connection string
-Azure Cache for Redis triggers and bindings have a required property for the cache connection string. The connection string can be found on the [**Access keys**](/azure/azure-cache-for-redis/cache-configure#access-keys) menu in the Azure Cache for Redis portal. The Redis trigger or binding looks for an environmental variable holding the connection string with the name passed to the `ConnectionStringSetting` parameter. In local development, the `ConnectionStringSetting` can be defined using the [local.settings.json](/azure/azure-functions/functions-develop-local#local-settings-file) file. When deployed to Azure, [application settings](/azure/azure-functions/functions-how-to-use-azure-function-app-settings) can be used.
+Azure Cache for Redis triggers and bindings have a required property for the cache connection string. The connection string can be found on the [**Access keys**](/azure/azure-cache-for-redis/cache-configure#access-keys) menu in the Azure Cache for Redis portal. The Redis trigger or binding looks for an environmental variable holding the connection string with the name passed to the `Connection` parameter.
+
+In local development, the `Connection` can be defined using the [local.settings.json](/azure/azure-functions/functions-develop-local#local-settings-file) file. When deployed to Azure, [application settings](/azure/azure-functions/functions-how-to-use-azure-function-app-settings) can be used.
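As a rough sketch, a `local.settings.json` file for local development could look like the following, assuming the connection setting is named `Redis` as in the authentication examples later in this section; the storage and worker runtime values are placeholders for your own configuration.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "Redis": "<cacheName>.redis.cache.windows.net:6380,password=<access-key>"
  }
}
```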
+
+When connecting to a cache instance with an Azure function, you can use three types of connections in your deployments: connection string, system-assigned managed identity, and user-assigned managed identity.
+
+For local development, you can also use service principal secrets.
+
+Use the `appsettings` to configure each of the following types of client authentication, assuming the `Connection` was set to `Redis` in the function.
+
+### Connection string
+
+```JSON
+"Redis": "<cacheName>.redis.cache.windows.net:6380,password=..."
+```
+
+### System-assigned managed identity
+
+```JSON
+"Redis:redisHostName": "<cacheName>.redis.cache.windows.net",
+"Redis:principalId": "<principalId>"
+```
+
+### User-assigned managed identity
+
+```JSON
+"Redis:redisHostName": "<cacheName>.redis.cache.windows.net",
+"Redis:principalId": "<principalId>",
+"Redis:clientId": "<clientId>"
+```
+
+### Service principal secret
+
+Connections using service principal secrets are only available during local development.
+
+```JSON
+"Redis:redisHostName": "<cacheName>.redis.cache.windows.net",
+"Redis:principalId": "<principalId>",
+"Redis:clientId": "<clientId>"
+"Redis:tenantId": "<tenantId>"
+"Redis:clientSecret": "<clientSecret>"
+```
## Related content
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/get-started.md
The Service URL to access your service is: https://```YOUR-NAME```.cognitiveserv
To send an API request, you need your Azure AI services account endpoint and key.
-You can find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
+
+<!-- You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/radiology-insights/create-job). -->
++ ![[Screenshot of the Keys and Endpoints for the Radiology Insights.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox)
Ocp-Apim-Subscription-Key: {cognitive-services-account-key}
} ```
-You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
+<!-- You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/radiology-insights/create-job). -->
++++ ### Evaluating a response that contains a case
http://{cognitive-services-account-endpoint}/health-insights/radiology-insights/
"status": "succeeded" } ```
-You can find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/get-job).
+<!-- You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/radiology-insights/get-job). -->
## Data limits
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/inferences.md
# Inference information
-This document describes details of all inferences generated by application of RI to a radiology document.
+This document describes the details of all inferences generated by applying Radiology Insights (RI) to a radiology document.
The Radiology Insights feature of Azure Health Insights uses natural language processing techniques to process unstructured medical radiology documents. It adds several types of inferences that help the user to effectively monitor, understand, and improve financial and clinical outcomes in a radiology workflow context.
The types of inferences currently supported by the system are: AgeMismatch, SexM
-To interact with the Radiology-Insights model, you can provide several model configuration parameters that modify the outcome of the responses. One of the configurations is ΓÇ£inferenceTypesΓÇ¥, which can be used if only part of the Radiology Insights inferences is required. If this list is omitted or empty, the model returns all the inference types.
+To interact with the Radiology-Insights model, you can provide several model configuration parameters that modify the outcome of the responses. One of the configurations is "inferenceTypes", which can be used if only part of the Radiology Insights inferences is required. If this list is omitted or empty, the model returns all the inference types.
```json "configuration" : {
To interact with the Radiology-Insights model, you can provide several model con
An age mismatch occurs when the document gives a certain age for the patient, which differs from the age that is calculated based on the birth date in the patient info and the encounter period in the request. - kind: RadiologyInsightsInferenceType.AgeMismatch;
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-inference-age-mismatch-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-age-mismatch-json-response.md)]
-</details>
+ **Laterality Discrepancy** A laterality mismatch is mostly flagged when the orderedProcedure is for a body part with a laterality and the text refers to the opposite laterality.
-Example: ΓÇ£x-ray right footΓÇ¥, ΓÇ£left foot is normalΓÇ¥
+Example: "x-ray right foot", "left foot is normal"
- kind: RadiologyInsightsInferenceType.LateralityDiscrepancy - LateralityIndication: FHIR.R4.CodeableConcept - DiscrepancyType: LateralityDiscrepancyType There are three possible discrepancy types:-- ΓÇ£orderLateralityMismatchΓÇ¥ means that the laterality in the text conflicts with the one in the order.-- ΓÇ£textLateralityContradictionΓÇ¥ means that there's a body part with left or right in the finding section, and the same body part occurs with the opposite laterality in the impression section.-- ΓÇ£textLateralityMissingΓÇ¥ means that the laterality mentioned in the order never occurs in the text.
+- "orderLateralityMismatch" means that the laterality in the text conflicts with the one in the order.
+- "textLateralityContradiction" means that there's a body part with left or right in the finding section, and the same body part occurs with the opposite laterality in the impression section.
+- "textLateralityMissing" means that the laterality mentioned in the order never occurs in the text.
The lateralityIndication is a FHIR.R4.CodeableConcept. There are two possible values (SNOMED codes):
The lateralityIndication is a FHIR.R4.CodeableConcept. There are two possible va
The meaning of this field is as follows: - For orderLateralityMismatch: concept in the text that the laterality was flagged for. - For textLateralityContradiction: concept in the impression section that the laterality was flagged for.-- For ΓÇ£textLateralityMissingΓÇ¥, this field isn't filled in.
+- For "textLateralityMissing", this field isn't filled in.
+
+A mismatch with discrepancy type "textLateralityMissing" has no token extensions.
-A mismatch with discrepancy type ΓÇ£textLaterityMissingΓÇ¥ has no token extensions.
+Examples request/response json:
-<details><summary>Examples request/response json</summary>
[!INCLUDE [Example input json](../includes/example-inference-laterality-discrepancy-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-laterality-discrepancy-json-response.md)]
-</details>
+
A mismatch with discrepancy type ΓÇ£textLaterityMissingΓÇ¥ has no token extensio
This mismatch occurs when the document gives a different sex for the patient than stated in the patient's info in the request. If the patient info contains no sex, then the mismatch can also be flagged when there's contradictory language about the patient's sex in the text. - kind: RadiologyInsightsInferenceType.SexMismatch - sexIndication: FHIR.R4.CodeableConcept
-Field ΓÇ£sexIndicationΓÇ¥ contains one coding with a SNOMED concept for either MALE (FINDING) if the document refers to a male or FEMALE (FINDING) if the document refers to a female:
+Field "sexIndication" contains one coding with a SNOMED concept for either MALE (FINDING) if the document refers to a male or FEMALE (FINDING) if the document refers to a female:
- 248153007: MALE (FINDING) - 248152002: FEMALE (FINDING)
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-inference-sex-mismatch-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-sex-mismatch-json-response.md)]
-</details>
+
CompleteOrderDiscrepancy is created if there's a complete orderedProcedure - mea
- MissingBodyParts: Array FHIR.R4.CodeableConcept - missingBodyPartMeasurements: Array FHIR.R4.CodeableConcept
-Field ΓÇ£ordertypeΓÇ¥ contains one Coding, with one of the following Loinc codes:
+Field "ordertype" contains one Coding, with one of the following Loinc codes:
- 24558-9: US Abdomen - 24869-0: US Pelvis - 24531-6: US Retroperitoneum - 24601-7: US breast
-Fields ΓÇ£missingBodyPartsΓÇ¥ and/or ΓÇ£missingBodyPartsMeasurementsΓÇ¥ contain body parts (radlex codes) that are missing or whose measurements are missing. The token extensions refer to body parts or measurements that are present (or words that imply them).
+Fields "missingBodyParts" and/or "missingBodyPartsMeasurements" contain body parts (radlex codes) that are missing or whose measurements are missing. The token extensions refer to body parts or measurements that are present (or words that imply them).
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-inference-complete-order-discrepancy-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-complete-order-discrepancy-json-response.md)]
-</details>
+
This inference is created if there's a limited order, meaning that not all body
- PresentBodyParts: Array FHIR.R4.CodeableConcept - PresentBodyPartMeasurements: Array FHIR.R4.CodeableConcept
-Field ΓÇ£ordertypeΓÇ¥ contains one Coding, with one of the following Loinc codes:
+Field "ordertype" contains one Coding, with one of the following Loinc codes:
- 24558-9: US Abdomen - 24869-0: US Pelvis - 24531-6: US Retroperitoneum - 24601-7: US breast
-Fields ΓÇ£presentBodyPartsΓÇ¥ and/or ΓÇ£presentBodyPartsMeasurementsΓÇ¥ contain body parts (radlex codes) that are present or whose measurements are present. The token extensions refer to body parts or measurements that are present (or words that imply them).
+Fields "presentBodyParts" and/or "presentBodyPartsMeasurements" contain body parts (radlex codes) that are present or whose measurements are present. The token extensions refer to body parts or measurements that are present (or words that imply them).
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-inference-limited-order-discrepancy-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-limited-order-discrepancy-json-response.md)]
-</details>
+ **Finding**
-This inference is created for a medical problem (for example ΓÇ£acute infection of the lungsΓÇ¥) or for a characteristic or a nonpathologic finding of a body part (for example ΓÇ£stomach normalΓÇ¥).
+This inference is created for a medical problem (for example "acute infection of the lungs") or for a characteristic or a nonpathologic finding of a body part (for example "stomach normal").
- kind: RadiologyInsightsInferenceType.finding - finding: FHIR.R4.Observation Finding: Section and ci_sentence
-Next to the token extensions, there can be an extension with url ΓÇ£sectionΓÇ¥. This extension has an inner extension with a display name that describes the section. The inner extension can also have a LOINC code.
-There can also be an extension with url ΓÇ£ci_sentenceΓÇ¥. This extension refers to the sentence containing the first token of the clinical indicator (that is, the medical problem), if any. The generation of such a sentence is switchable.
+Next to the token extensions, there can be an extension with url "section". This extension has an inner extension with a display name that describes the section. The inner extension can also have a LOINC code.
+There can also be an extension with url "ci_sentence". This extension refers to the sentence containing the first token of the clinical indicator (that is, the medical problem), if any. The generation of such a sentence is switchable.
-Finding: fields within field ΓÇ£findingΓÇ¥
-list of fields within field ΓÇ£findingΓÇ¥, except ΓÇ£componentΓÇ¥:
-- status: is always set to ΓÇ£unknownΓÇ¥-- resourceType: is always set to "ObservationΓÇ¥
+Finding: fields within field "finding"
+list of fields within field "finding", except "component":
+- status: is always set to "unknown"
+- resourceType: is always set to "Observation"
- interpretation: contains a sublist of the following SNOMED codes: - 7147002: NEW (QUALIFIER VALUE) - 36692007: KNOWN (QUALIFIER VALUE)
list of fields within field ΓÇ£findingΓÇ¥, except ΓÇ£componentΓÇ¥:
- 263730007: CONTINUAL (QUALIFIER VALUE) In this list, the string before the colon is the code, and the string after the colon is the display name.
-If the value is ΓÇ£NONE (QUALIFIER VALUE)ΓÇ¥, the finding is absent. This value is, for example, ΓÇ£no sepsisΓÇ¥.
-category: if filled, this field contains an array with one element. It contains one of the following SNOMED concepts:
-- 439401001: DIAGNOSIS (OBSERVABLE ENTITY)-- 404684003: CLINICAL FINDING (FINDING)-- 162432007: SYMPTOM: GENERALIZED (FINDING)-- 246501002: TECHNIQUE (ATTRIBUTE)-- 91722005: PHYSICAL ANATOMICAL ENTITY (BODY STRUCTURE)
+If the value is "NONE (QUALIFIER VALUE)", the finding is absent. This value is, for example, "no sepsis".
code: - SNOMED code 404684003: CLINICAL FINDING (FINDING) (meaning that the finding has a clinical indicator) or - SNOMED code 123037004: BODY STRUCTURE (BODY STRUCTURE) (no clinical indicator.)
-Finding: field ΓÇ£componentΓÇ¥
-Much relevant information is in the components. The componentΓÇÖs ΓÇ£codeΓÇ¥ field contains one CodeableConcept with one SNOMED code.
+Finding: field "component"
+Much relevant information is in the components. The component's "code" field contains one CodeableConcept with one SNOMED code.
Component description: (some of the components are optional)
-Finding: component ΓÇ£subject of informationΓÇ¥
-This component has SNOMED code 131195008: SUBJECT OF INFORMATION (ATTRIBUTE). It also has the ΓÇ£valueCodeableConceptΓÇ¥ field filled. The value is a SNOMED code describing the medical problem that the finding pertains to.
-At least one ΓÇ£subject of informationΓÇ¥ component is present if and only if the ΓÇ£finding.codeΓÇ¥ field has 404684003: CLINICAL FINDING (FINDING). There can be several "subject of informationΓÇ¥ components, with different concepts in the ΓÇ£valueCodeableConceptΓÇ¥ field.
+Finding: component "subject of information"
+This component has SNOMED code 131195008: SUBJECT OF INFORMATION (ATTRIBUTE). It also has the "valueCodeableConcept" field filled. The value is a SNOMED code describing the medical problem that the finding pertains to.
+At least one "subject of information" component is present if and only if the "finding.code" field has 404684003: CLINICAL FINDING (FINDING). There can be several "subject of information" components, with different concepts in the "valueCodeableConcept" field.
-Finding: component ΓÇ£anatomyΓÇ¥
-Zero or more components with SNOMED code ΓÇ£722871000000108: ANATOMY (QUALIFIER VALUE)ΓÇ¥. This component has field ΓÇ£valueCodeConceptΓÇ¥ filled with a SNOMED or radlex code. For example, for ΓÇ£lung infectionΓÇ¥ this component contains a code for the lungs.
+Finding: component "anatomy"
+Zero or more components with SNOMED code "722871000000108: ANATOMY (QUALIFIER VALUE)". This component has field "valueCodeConcept" filled with a SNOMED or radlex code. For example, for "lung infection" this component contains a code for the lungs.
-Finding: component ΓÇ£regionΓÇ¥
-Zero or more components with SNOMED code 45851105: REGION (ATTRIBUTE). Like anatomy, this component has field ΓÇ£valueCodeableConceptΓÇ¥ filled with a SNOMED or radlex code. Such a concept refers to the body region of the anatomy. For example, if the anatomy is a code for the vagina, the region may be a code for the female reproductive system.
+Finding: component "region"
+Zero or more components with SNOMED code 45851105: REGION (ATTRIBUTE). Like anatomy, this component has field "valueCodeableConcept" filled with a SNOMED or radlex code. Such a concept refers to the body region of the anatomy. For example, if the anatomy is a code for the vagina, the region may be a code for the female reproductive system.
-Finding: component ΓÇ£lateralityΓÇ¥
-Zero or more components with code 45651917: LATERALITY (ATTRIBUTE). Each has field ΓÇ£valueCodeableConceptΓÇ¥ set to a SNOMED concept pertaining to the laterality of the finding. For example, this component is filled for a finding pertaining to the right arm.
+Finding: component "laterality"
+Zero or more components with code 45651917: LATERALITY (ATTRIBUTE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to the laterality of the finding. For example, this component is filled for a finding pertaining to the right arm.
-Finding: component ΓÇ£change valuesΓÇ¥
-Zero or more components with code 288533004: CHANGE VALUES (QUALIFIER VALUE). Each has field ΓÇ£valueCodeableConceptΓÇ¥ set to a SNOMED concept pertaining to a size change in the finding (for example, a nodule that is growing or decreasing).
+Finding: component "change values"
+Zero or more components with code 288533004: CHANGE VALUES (QUALIFIER VALUE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to a size change in the finding (for example, a nodule that is growing or decreasing).
-Finding: component ΓÇ£percentageΓÇ¥
-At most one component with code 45606679: PERCENT (PROPERTY) (QUALIFIER VALUE). It has field ΓÇ£valueStringΓÇ¥ set with either a value or a range consisting of a lower and upper value, separated by ΓÇ£-ΓÇ£.
+Finding: component "percentage"
+At most one component with code 45606679: PERCENT (PROPERTY) (QUALIFIER VALUE). It has field "valueString" set with either a value or a range consisting of a lower and upper value, separated by "-".
-Finding: component ΓÇ£severityΓÇ¥
-At most one component with code 272141005: SEVERITIES (QUALIFIER VALUE), indicating how severe the medical problem is. It has field ΓÇ£valueCodeableConceptΓÇ¥ set with a SNOMED code from the following list:
+Finding: component "severity"
+At most one component with code 272141005: SEVERITIES (QUALIFIER VALUE), indicating how severe the medical problem is. It has field "valueCodeableConcept" set with a SNOMED code from the following list:
- 255604002: MILD (QUALIFIER VALUE) - 6736007: MODERATE (SEVERITY MODIFIER) (QUALIFIER VALUE) - 24484000: SEVERE (SEVERITY MODIFIER) (QUALIFIER VALUE) - 371923003: MILD TO MODERATE (QUALIFIER VALUE) - 371924009: MODERATE TO SEVERE (QUALIFIER VALUE)
-Finding: component ΓÇ£chronicityΓÇ¥
-At most one component with code 246452003: CHRONICITY (ATTRIBUTE), indicating whether the medical problem is chronic or acute. It has field ΓÇ£valueCodeableConceptΓÇ¥ set with a SNOMED code from the following list:
+Finding: component "chronicity"
+At most one component with code 246452003: CHRONICITY (ATTRIBUTE), indicating whether the medical problem is chronic or acute. It has field "valueCodeableConcept" set with a SNOMED code from the following list:
- 255363002: SUDDEN (QUALIFIER VALUE) - 90734009: CHRONIC (QUALIFIER VALUE) - 19939008: SUBACUTE (QUALIFIER VALUE) - 255212004: ACUTE-ON-CHRONIC (QUALIFIER VALUE)
-Finding: component ΓÇ£causeΓÇ¥
-At most one component with code 135650694: CAUSES OF HARM (QUALIFIER VALUE), indicating what the cause is of the medical problem. It has field ΓÇ£valueStringΓÇ¥ set to the strings of one or more tokens from the text, separated by ΓÇ£;;ΓÇ¥.
+Finding: component "cause"
+At most one component with code 135650694: CAUSES OF HARM (QUALIFIER VALUE), indicating what the cause is of the medical problem. It has field "valueString" set to the strings of one or more tokens from the text, separated by ";;".
-Finding: component ΓÇ£qualifier valueΓÇ¥
+Finding: component "qualifier value"
Zero or more components with code 362981000: QUALIFIER VALUE (QUALIFIER VALUE). This component refers to a feature of the medical problem. Every component has either:-- Field ΓÇ£valueStringΓÇ¥ set with token strings from the text, separated by ΓÇ£;;ΓÇ¥-- Or field ΓÇ£valueCodeableConceptΓÇ¥ set to a SNOMED code
+- Field "valueString" set with token strings from the text, separated by ";;"
+- Or field "valueCodeableConcept" set to a SNOMED code
- Or no field set (then the meaning can be retrieved from the token extensions (rare occurrence))
-Finding: component ΓÇ£multipleΓÇ¥
-Exactly one component with code 46150521: MULTIPLE (QUALIFIER VALUE). It has field ΓÇ£valueBooleanΓÇ¥ set to true or false. This component indicates the difference between, for example, one nodule (multiple is false) or several nodules (multiple is true). This component has no token extensions.
+Finding: component "multiple"
+Exactly one component with code 46150521: MULTIPLE (QUALIFIER VALUE). It has field "valueBoolean" set to true or false. This component indicates the difference between, for example, one nodule (multiple is false) or several nodules (multiple is true). This component has no token extensions.
-Finding: component ΓÇ£sizeΓÇ¥
-Zero or more components with code 246115007, "SIZE (ATTRIBUTE)". Even if there's just one size for a finding, there are several components if the size has two or three dimensions, for example, ΓÇ£2.1 x 3.3 cmΓÇ¥ or ΓÇ£1.2 x 2.2 x 1.5 cmΓÇ¥. There's a size component for every dimension.
-Every component has field ΓÇ£interpretationΓÇ¥ set to either SNOMED code 15240007: CURRENT or 9130008: PREVIOUS, depending on whether the size was measured during this visit or in the past.
-Every component has either field ΓÇ£valueQuantityΓÇ¥ or ΓÇ£valueRangeΓÇ¥ set.
-If ΓÇ£valueQuantityΓÇ¥ is set, then ΓÇ£valueQuantity.valueΓÇ¥ is always set. In most cases, ΓÇ£valueQuantity.unitΓÇ¥ is set. It's possible that ΓÇ£valueQuantity.comparatorΓÇ¥ is also set, to either ΓÇ£>ΓÇ¥, ΓÇ£<ΓÇ¥, ΓÇ£>=ΓÇ¥ or ΓÇ£<=ΓÇ¥. For example, the component is set to ΓÇ£<=ΓÇ¥ for ΓÇ£the tumor is up to 2 cmΓÇ¥.
-If ΓÇ£valueRangeΓÇ¥ is set, then ΓÇ£valueRange.lowΓÇ¥ and ΓÇ£valueRange.highΓÇ¥ are set to quantities with the same data as described in the previous paragraph. This field contains, for example, ΓÇ£The tumor is between 2.5 cm and 2.6 cm in size".
+Finding: component "size"
+Zero or more components with code 246115007, "SIZE (ATTRIBUTE)". Even if there's just one size for a finding, there are several components if the size has two or three dimensions, for example, "2.1 x 3.3 cm" or "1.2 x 2.2 x 1.5 cm". There's a size component for every dimension.
+Every component has field "interpretation" set to either SNOMED code 15240007: CURRENT or 9130008: PREVIOUS, depending on whether the size was measured during this visit or in the past.
+Every component has either field "valueQuantity" or "valueRange" set.
+If "valueQuantity" is set, then "valueQuantity.value" is always set. In most cases, "valueQuantity.unit" is set. It's possible that "valueQuantity.comparator" is also set, to either ">", "<", ">=" or "<=". For example, the component is set to "<=" for "the tumor is up to 2 cm".
+If "valueRange" is set, then "valueRange.low" and "valueRange.high" are set to quantities with the same data as described in the previous paragraph. This field contains, for example, "The tumor is between 2.5 cm and 2.6 cm in size".
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-inference-finding-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-finding-json-response.md)]
-</details>
+
This inference is made for a new medical problem that requires attention within
- kind: RadiologyInsightsInferenceType.criticalResult - result: CriticalResult
-Field ΓÇ£result.descriptionΓÇ¥ gives a description of the medical problem, for example ΓÇ£MALIGNANCYΓÇ¥.
-Field ΓÇ£result.findingΓÇ¥, if set, contains the same information as the ΓÇ£findingΓÇ¥ field in a finding inference.
+Field "result.description" gives a description of the medical problem, for example "MALIGNANCY".
+Field "result.finding", if set, contains the same information as the "finding" field in a finding inference.
Next to token extensions, there can be an extension for a section. This field contains the most specific section that the first token of the critical result is in (or to be precise, the first token that is in a section). This section is in the same format as a section for a finding.
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-inference-critical-result-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-critical-result-json-response.md)]
-</details>
+
recommendedProcedure: ProcedureRecommendation
- follow up Recommendation: sentences Next to the token extensions, there can be an extension containing sentences. This behavior is switchable. - follow up Recommendation: boolean fields
-ΓÇ£isHedgingΓÇ¥ mean that the recommendation is uncertain, for example, ΓÇ£a follow-up could be doneΓÇ¥. ΓÇ£isConditionalΓÇ¥ is for input like ΓÇ£If the patient continues having pain, an MRI should be performed.ΓÇ¥
-ΓÇ£isOptionsΓÇ¥: is also for conditional input.
-ΓÇ£isGuidelineΓÇ¥ means that the recommendation is in a general guideline like the following:
+"isHedging" mean that the recommendation is uncertain, for example, "a follow-up could be done". "isConditional" is for input like "If the patient continues having pain, an MRI should be performed."
+"isOptions": is also for conditional input.
+"isGuideline" means that the recommendation is in a general guideline like the following:
BI-RADS CATEGORIES: - (0) Incomplete: Needs more imaging evaluation
BI-RADS CATEGORIES:
- (6) Known biopsy-proven malignancy - follow up Recommendation: effectiveDateTime and effectivePeriod
-Field ΓÇ£effectiveDateTimeΓÇ¥ will be set when the procedure needs to be done (recommended) at a specific point in time. For example, ΓÇ£next WednesdayΓÇ¥. Field ΓÇ£effectivePeriodΓÇ¥ will be set if a specific period is mentioned, with a start and end datetime. For example, for ΓÇ£within six monthsΓÇ¥, the start datetime will be the date of service, and the end datetime will be the day six months after that.
+Field "effectiveDateTime" will be set when the procedure needs to be done (recommended) at a specific point in time. For example, "next Wednesday". Field "effectivePeriod" will be set if a specific period is mentioned, with a start and end datetime. For example, for "within six months", the start datetime will be the date of service, and the end datetime will be the day six months after that.
- follow up Recommendation: findings
-If set, field ΓÇ£findingsΓÇ¥ contains one or more findings that have to do with the recommendation. For example, a leg scan (procedure) can be recommended because of leg pain (finding).
-Every array element of field ΓÇ£findingsΓÇ¥ is a RecommendationFinding. Field RecommendationFinding.finding has the same information as a FindingInference.finding field.
-For field ΓÇ£RecommendationFinding.RecommendationFindingStatusΓÇ¥, see the OpenAPI specification for the possible values.
-Field ΓÇ£RecommendationFinding.criticalFindingΓÇ¥ is set if a critical result is associated with the finding. It then contains the same information as described for a critical result inference.
+If set, field "findings" contains one or more findings that have to do with the recommendation. For example, a leg scan (procedure) can be recommended because of leg pain (finding).
+Every array element of field "findings" is a RecommendationFinding. Field RecommendationFinding.finding has the same information as a FindingInference.finding field.
+For field "RecommendationFinding.RecommendationFindingStatus", see the OpenAPI specification for the possible values.
+Field "RecommendationFinding.criticalFinding" is set if a critical result is associated with the finding. It then contains the same information as described for a critical result inference.
- follow up Recommendation: recommended procedure
-Field ΓÇ£recommendedProcedureΓÇ¥ is either a GenericProcedureRecommendation, or an ImagingProcedureRecommendation. (Type ΓÇ£procedureRecommendationΓÇ¥ is a supertype for these two types.)
+Field "recommendedProcedure" is either a GenericProcedureRecommendation, or an ImagingProcedureRecommendation. (Type "procedureRecommendation" is a supertype for these two types.)
A GenericProcedureRecommendation has the following:-- Field ΓÇ£kindΓÇ¥ has value ΓÇ£genericProcedureRecommendationΓÇ¥-- Field ΓÇ£descriptionΓÇ¥ has either value ΓÇ£MANAGEMENT PROCEDURE (PROCEDURE)ΓÇ¥ or ΓÇ£CONSULTATION (PROCEDURE)ΓÇ¥-- Field ΓÇ£codeΓÇ¥ only contains an extension with tokens
+- Field "kind" has value "genericProcedureRecommendation"
+- Field "description" has either value "MANAGEMENT PROCEDURE (PROCEDURE)" or "CONSULTATION (PROCEDURE)"
+- Field "code" only contains an extension with tokens
An ImagingProcedureRecommendation has the following:-- Field ΓÇ£kindΓÇ¥ has value ΓÇ£imagingProcedureRecommendationΓÇ¥-- Field ΓÇ£imagingProceduresΓÇ¥ contains an array with one element of type ImagingProcedure.
+- Field "kind" has value "imagingProcedureRecommendation"
+- Field "imagingProcedures" contains an array with one element of type ImagingProcedure.
This type has the following fields, the first 2 of which are always filled:-- ΓÇ£modalityΓÇ¥: a CodeableConcept containing at most one coding with a SNOMED code.-- ΓÇ£anatomyΓÇ¥: a CodeableConcept containing at most one coding with a SNOMED code.-- ΓÇ£laterality: a CodeableConcept containing at most one coding with a SNOMED code.-- ΓÇ£contrastΓÇ¥: not set.-- ΓÇ£viewΓÇ¥: not set.
+- "modality": a CodeableConcept containing at most one coding with a SNOMED code.
+- "anatomy": a CodeableConcept containing at most one coding with a SNOMED code.
+- "laterality: a CodeableConcept containing at most one coding with a SNOMED code.
+- "contrast": not set.
+- "view": not set.
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-1-inference-follow-up-recommendation-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-1-inference-follow-up-recommendation-json-response.md)]
-</details>
+
This inference is created when findings or test results were communicated to a m
- recipient: Array MedicalProfessionalType - wasAcknowledged: boolean
-Field ΓÇ£wasAcknowledgedΓÇ¥ is set to true if the communication was verbal (nonverbal communication might not have reached the recipient yet and cannot be considered acknowledged). Field ΓÇ£dateTimeΓÇ¥ is set if the date-time of the communication is known. Field ΓÇ£recipientΓÇ¥ is set if the recipient(s) are known. See the OpenAPI spec for its possible values.
+Field "wasAcknowledged" is set to true if the communication was verbal (nonverbal communication might not have reached the recipient yet and cannot be considered acknowledged). Field "dateTime" is set if the date-time of the communication is known. Field "recipient" is set if the recipient(s) are known. See the OpenAPI spec for its possible values.
+
+Examples request/response json:
-<details><summary>Examples request/response json</summary>
[!INCLUDE [Example input json](../includes/example-inference-follow-up-communication-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-follow-up-communication-json-response.md)]
-</details>
+
This inference is for the ordered radiology procedure(s).
- imagingProcedures: Array ImagingProcedure - orderedProcedure: OrderedProcedure
-Field “imagingProcedures” contains one or more instances of an imaging procedure, as documented for the follow up recommendations.
-Field “procedureCodes”, if set, contains LOINC codes.
-Field “orderedProcedure” contains the description(s) and the code(s) of the ordered procedure(s) as given by the client. The descriptions are in field “orderedProcedure.description”, separated by “;;”. The codes are in “orderedProcedure.code.coding”. In every coding in the array, only field “coding” is set.
+Field "imagingProcedures" contains one or more instances of an imaging procedure, as documented for the follow up recommendations.
+Field "procedureCodes", if set, contains LOINC codes.
+Field "orderedProcedure" contains the description(s) and the code(s) of the ordered procedure(s) as given by the client. The descriptions are in field "orderedProcedure.description", separated by ";;". The codes are in "orderedProcedure.code.coding". In every coding in the array, only field "coding" is set.
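As a small illustration of that ";;" convention, here's a hedged C# sketch; the helper name and the sample description values are made up for this example and aren't part of the service API:

```csharp
using System;

class OrderedProcedureExample
{
    // Illustrative helper (not part of the service API): split the ";;"-separated
    // orderedProcedure.description value into individual procedure descriptions.
    static string[] SplitDescriptions(string description) =>
        description.Split(";;", StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries);

    static void Main()
    {
        // Sample input only; real values come from the client's ordered procedures.
        foreach (var d in SplitDescriptions("CT CHEST WO CONTRAST;;US ABDOMEN LIMITED"))
        {
            Console.WriteLine(d);
        }
    }
}
```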
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-inference-radiology-procedure-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-inference-radiology-procedure-json-response.md)]
-</details>
+
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/model-configuration.md
false | No Evidence is returned
**FollowupRecommendationOptions** - includeRecommendationsWithNoSpecifiedModality - type: boolean
- - description: Include/Exclude follow-up recommendations with no specific radiologic modality, default is false.
+ - description: To include or exclude follow-up recommendations with no specific radiologic modality. Default is false.
- includeRecommendationsInReferences - type: boolean
- - description: Include/Exclude follow-up recommendations in references to a guideline or article, default is false.
+ - description: To include or exclude follow-up recommendations in references to a guideline or article. Default is false.
- provideFocusedSentenceEvidence - type: boolean - description: Provide a single focused sentence as evidence for the recommendation, default is false.
-When includeEvidence is false, no evidence is returned.
-This configuration overrules includeRecommendationsWithNoSpecifiedModality and provideFocusedSentenceEvidence and no evidence is shown.
+**IncludeEvidence**
+
+- includeEvidence
+- type: boolean
+- description: Provide evidence for the inferences. Default is false, meaning no evidence is returned.
+
-When includeEvidence is true, it depends on the value set on the two other configurations whether the evidence of the inference or a single focused sentence is given as evidence.
## Examples
When includeEvidence is true, it depends on the value set on the two other confi
CDARecommendation_GuidelineFalseUnspecTrueLimited
-The includeRecommendationsWithNoSpecifiedModality is true, includeRecommendationsInReferences is false, provideFocusedSentenceEvidence for recommendations is true and includeEvidence is true.
+- includeRecommendationsWithNoSpecifiedModality is true
+- includeRecommendationsInReferences is false
+- provideFocusedSentenceEvidence for recommendations is true
+- includeEvidence is true
As a result, the model includes evidence for all inferences. - The model checks for follow-up recommendations with a specified modality. - The model checks for follow-up recommendations with no specific radiologic modality. - The model provides a single focused sentence as evidence for the recommendation.
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-2-inference-follow-up-recommendation-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-2-inference-follow-up-recommendation-json-response.md)]
-</details>
+
As a result, the model includes evidence for all inferences.
CDARecommendation_GuidelineTrueUnspecFalseLimited
-The includeRecommendationsWithNoSpecifiedModality is false, includeRecommendationsInReferences is true, provideFocusedSentenceEvidence for findings is true and includeEvidence is true.
+- includeRecommendationsWithNoSpecifiedModality is false
+- includeRecommendationsInReferences is true
+- provideFocusedSentenceEvidence for findings is true
+- includeEvidence is true
As a result, the model includes evidence for all inferences. - The model checks for follow-up recommendations with a specified modality.
As a result, the model includes evidence for all inferences.
- The model provides a single focused sentence as evidence for the finding.
-<details><summary>Examples request/response json</summary>
+Examples request/response json:
+ [!INCLUDE [Example input json](../includes/example-1-inference-follow-up-recommendation-json-request.md)]+ [!INCLUDE [Example output json](../includes/example-1-inference-follow-up-recommendation-json-response.md)]
-</details>
+
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
The following list shows the QPS usage limits for each Azure Maps service by Pri
| Copyright service | 10 | 10 | 10 | | Creator - Alias, TilesetDetails | 10 | Not Available | Not Available | | Creator - Conversion, Dataset, Feature State, Features, Map Configuration, Style, Routeset, Wayfinding | 50 | Not Available | Not Available |
-| Data service (Deprecated<sup>1</sup>) | 50 | 50 | Not Available |
-| Data registry service | 50 | 50 | Not Available |
+| Data registry service | 50 | 50 | Not Available |
+| Data service (Deprecated<sup>1</sup>) | 50 | 50 | Not Available |
| Geolocation service | 50 | 50 | 50 |
-| Render service - Traffic tiles and Static maps | 50 | 50 | 50 |
| Render service - Road tiles | 500 | 500 | 50 | | Render service - Satellite tiles | 250 | 250 | Not Available |
+| Render service - Static maps | 50 | 50 | 50 |
+| Render service - Traffic tiles | 50 | 50 | 50 |
| Render service - Weather tiles | 100 | 100 | 50 | | Route service - Batch | 10 | 10 | Not Available | | Route service - Non-Batch | 50 | 50 | 50 | | Search service - Batch | 10 | 10 | Not Available | | Search service - Non-Batch | 500 | 500 | 50 | | Search service - Non-Batch Reverse | 250 | 250 | 50 |
-| Spatial service | 50 | 50 | Not Available |
+| Spatial service | 50 | 50 | Not Available |
| Timezone service | 50 | 50 | 50 | | Traffic service | 50 | 50 | 50 | | Weather service | 50 | 50 | 50 |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
> [!CAUTION] > This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
-This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on premise servers with Azure Arc agent installed).
+This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on-premises servers with Azure Arc agent installed).
We strongly recommend that you always update to the latest version, or opt in to the [Automatic Extension Update](../../virtual-machines/automatic-extension-upgrade.md) feature.
We strongly recommend that you always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
+| February 2024 | **Windows**<ul><li>Fix memory leak in IIS log collection</li><li>Fix json parsing with unicode characters for some ingestion endpoints</li><li>Allow Client installer to run on AVD DevBox partner</li><li>Enable TLS 1.3 on supported Windows versions</li><li>Enable Agent Side Aggregation for Private Preview</li><li>Update MetricsExtension package to 2.2024.202.2043</li><li>Update AzureSecurityPack.Geneva package to 4.31</li></ul>**Linux**<ul><li></li></ul> | 1.24.0 | Coming soon |
| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 won't install on Arc enabled servers. **This issue was fixed in 1.29.6**</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li><li>Reduced noise generated by AMAs' use of semanage when SELinux is enabled"</li></ul> | 1.23.0 | 1.29.5, 1.29.6 | | December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 won't install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4| | October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multitenant mode</li><li>AMA installer won't install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11|
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The following features and services now have an Azure Monitor Agent version (som
| Service or feature | Migration recommendation | Current state | More information | | : | : | : | : |
-| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally Available | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
-| [Container insights](../containers/container-insights-overview.md) | Migrate to Azure Monitor Agent | **Linux**: Generally available<br>**Windows**:Public preview | [Enable Container Insights](../containers/container-insights-onboard.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | See [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). Only CEF and Firewall collection remain for GA status |
-| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally Available | [Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally Available | [Monitor network connectivity using Azure Monitor agent with connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
-| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally Available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
-| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally Available | [Use Azure Virtual Desktop Insights to monitor your deployment](../../virtual-desktop/insights.md#session-host-data-settings) |
+| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally Available | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
+| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). |
+| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally Available | [Migration for Change Tracking and inventory](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally Available | [Monitor network connectivity using connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
+| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally Available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
+| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally Available | [Azure Virtual Desktop Insights](../../virtual-desktop/insights.md#session-host-data-settings) |
| [Container Monitoring Solution](../containers/containers.md) | Migrate to new service called Container Insights with Azure Monitor Agent | Generally Available | [Enable Container Insights](../containers/container-insights-transition-solution.md) | | [DNS Collector](../../sentinel/connect-dns-ama.md) | Use new Sentinel Connector | Generally Available | [Enable DNS Connector](../../sentinel/connect-dns-ama.md)|
-> [!NOTE]
-> Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available.
- When you migrate the following services, which currently use Log Analytics agent, to their respective replacements (v2), you no longer need either of the monitoring agents: | Service | Migration recommendation | Current state | More information | | : | : | : | : |
-| [Microsoft Defender for Cloud, Servers, SQL, and Endpoint](../../security-center/security-center-introduction.md) | Migrate to Microsoft Defender for Cloud (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](../../defender-for-cloud/upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation)|
+| [Microsoft Defender for Cloud, Servers, SQL, and Endpoint](../../security-center/security-center-introduction.md) | Migrate to Microsoft Defender for Cloud (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Defender for Cloud plan for Log Analytics agent deprecation](../../defender-for-cloud/upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation)|
| [Update Management](../../automation/update-management/overview.md) | Migrate to Azure Update Manager (No dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Update Manager documentation](../../update-manager/update-manager-faq.md#la-agent-also-known-as-mma-is-retiring-and-will-be-replaced-with-ama-is-it-necessary-to-move-to-update-manager-or-can-i-continue-to-use-automation-update-management-with-ama) |
-| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
+| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Migrate to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
+
+## Known parity gaps for solutions that may impact your migration
+- ***Sentinel***: CEF and Windows firewall logs are not yet GA
+- ***SQL Assessment Solution***: This is now part of SQL best practice assessment. The deployment policies require one Log Analytics Workspace per subscription, which is not the best practice recommended by the AMA team.
+- ***Microsoft Defender for Cloud***: Some features of the new agentless solution are still in development. Your migration may be impacted if you use FIM, Endpoint protection discovery recommendations, OS misconfigurations (ASB recommendations), or adaptive application controls.
+- ***Container Insights***: The Windows version is in public preview.
## Frequently asked questions
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
public class HomeController : Controller
For more information about custom data reporting in Application Insights, see [Application Insights custom metrics API reference](./api-custom-events-metrics.md). A similar approach can be used for sending custom metrics to Application Insights by using the [GetMetric API](./get-metric.md).
+### How do I capture Request and Response body in my telemetry?
+
+ASP.NET Core has [built-in
+support](https://learn.microsoft.com/aspnet/core/fundamentals/http-logging) for
+logging HTTP request and response information (including the body) via
+[`ILogger`](#ilogger-logs), and we recommend using that support. Be aware that
+request and response bodies can expose personally identifiable information (PII)
+in telemetry and can significantly increase costs (both performance overhead and
+Application Insights billing), so evaluate the risks carefully before enabling it.
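For illustration, a minimal sketch of what opting in to that built-in HTTP logging can look like in `Program.cs`; the selected fields and body size limits are placeholder choices, not recommendations:

```csharp
using Microsoft.AspNetCore.HttpLogging;

var builder = WebApplication.CreateBuilder(args);

// Opt in to ASP.NET Core's built-in HTTP logging. Captured fields are written
// through ILogger and, with the Application Insights provider enabled, end up
// in your telemetry.
builder.Services.AddHttpLogging(logging =>
{
    // Choose fields explicitly; request/response bodies are the expensive
    // and PII-sensitive part.
    logging.LoggingFields = HttpLoggingFields.RequestPropertiesAndHeaders
                            | HttpLoggingFields.RequestBody
                            | HttpLoggingFields.ResponseBody;
    logging.RequestBodyLogLimit = 4096;   // bytes; placeholder value
    logging.ResponseBodyLogLimit = 4096;  // bytes; placeholder value
});

builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();

app.UseHttpLogging();

app.MapGet("/", () => "Hello World!");

app.Run();
```

HTTP logging writes at `Information` level (typically under the `Microsoft.AspNetCore.HttpLogging` category), so if you keep the default of capturing only `Warning` and above, you may also need to adjust the captured log level as described in the next question.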
+ ### How do I customize ILogger logs collection? The default setting for Application Insights is to only capture **Warning** and more severe logs.
If the SDK is installed at build time as shown in this article, you don't need t
Yes. Feature support for the SDK is the same in all platforms, with the following exceptions: * The SDK collects [event counters](./eventcounters.md) on Linux because [performance counters](./performance-counters.md) are only supported in Windows. Most metrics are the same.
-* Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel.
-
-### [ASP.NET Core 6.0](#tab/netcore6)
-
-```csharp
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
-
-var builder = WebApplication.CreateBuilder(args);
-
-// The following will configure the channel to use the given folder to temporarily
-// store telemetry items during network or Application Insights server issues.
-// User should ensure that the given folder already exists
-// and that the application has read/write permissions.
-builder.Services.AddSingleton(typeof(ITelemetryChannel),
- new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
-builder.Services.AddApplicationInsightsTelemetry();
-
-var app = builder.Build();
-```
-
-### [ASP.NET Core 3.1](#tab/netcore3)
-
-```csharp
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
-
-public void ConfigureServices(IServiceCollection services)
-{
- // The following will configure the channel to use the given folder to temporarily
- // store telemetry items during network or Application Insights server issues.
- // User should ensure that the given folder already exists
- // and that the application has read/write permissions.
- services.AddSingleton(typeof(ITelemetryChannel),
- new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
- services.AddApplicationInsightsTelemetry();
-}
-```
-
-> [!NOTE]
-> This .NET version is no longer supported.
---
-This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later.
-### Is this SDK supported for the new .NET Core 3.X Worker Service template applications?
+### Is this SDK supported for Worker Services?
-This SDK requires `HttpContext`. It doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications by using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md).
+No. For Worker Service applications, use [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md) instead.
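As a rough sketch only (assuming the `Microsoft.ApplicationInsights.WorkerService` NuGet package and a connection string supplied through configuration; the `Worker` class is a placeholder for your own background service), enabling that SDK looks like this:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // Registers TelemetryClient and enables autocollection for non-HTTP
        // applications (background services, queue processors, and so on).
        services.AddApplicationInsightsTelemetryWorkerService();
        services.AddHostedService<Worker>();
    })
    .Build();

await host.RunAsync();

// Placeholder background service for the example.
public class Worker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Do work here; telemetry can be sent via an injected TelemetryClient.
            await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
        }
    }
}
```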
### How can I uninstall the SDK?
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Application Insights SDKs for .NET and .NET Core ship with `DependencyTrackingTe
|[Azure Blob Storage, Table Storage, or Queue Storage](https://www.nuget.org/packages/WindowsAzure.Storage/) | Calls made with the Azure Storage client. | |[Azure Event Hubs client SDK](https://nuget.org/packages/Azure.Messaging.EventHubs) | Use the latest package: https://nuget.org/packages/Azure.Messaging.EventHubs. | |[Azure Service Bus client SDK](https://nuget.org/packages/Azure.Messaging.ServiceBus)| Use the latest package: https://nuget.org/packages/Azure.Messaging.ServiceBus. |
-|[Azure Cosmos DB](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | Tracked automatically if HTTP/HTTPS is used. TCP will also be captured automatically using preview package >= [3.33.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview). |
+|[Azure Cosmos DB](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | Tracked automatically if HTTP/HTTPS is used. Tracing for operations in direct mode with TCP is also captured automatically when using preview package >= [3.33.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview). For more information, see the [documentation](../../cosmos-db/nosql/sdk-observability.md). |
If you're missing a dependency or using a different SDK, make sure it's in the list of [autocollected dependencies](#dependency-auto-collection). If the dependency isn't autocollected, you can track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Links are provided to more information for each supported scenario.
|Azure App Service on Linux - Publish as Docker | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | |Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | |Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
-|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
+|Azure Spring Apps | :x: | :x: | [ :white_check_mark: :link: ](../../spring-apps/enterprise/how-to-application-insights.md) | :x: | :x: |
|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | |Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) ² ³ | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) ² ³ | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | |On-premises VMs Windows | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) ³ | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) ² ³ | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: |
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
For more information, see [Use Application Insights Java In-Process Agent in Azu
### Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.19.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.5.0.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.19.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.5.0.jar" -jar <myapp.jar>
```
FROM ...
COPY target/*.jar app.jar
-COPY agent/applicationinsights-agent-3.4.19.jar applicationinsights-agent-3.4.19.jar
+COPY agent/applicationinsights-agent-3.5.0.jar applicationinsights-agent-3.5.0.jar
COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
-ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.19.jar", "-jar", "app.jar"]
+ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.5.0.jar", "-jar", "app.jar"]
```
-In this example we have copied the `applicationinsights-agent-3.4.19.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
+In this example we have copied the `applicationinsights-agent-3.5.0.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
### Third-party container images
For information on setting up the Application Insights Java agent, see [Enabling
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.19.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.5.0.jar"
``` #### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.19.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.19.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.5.0.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to `CATALINA_OPTS`.
### Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.19.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.5.0.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.19.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.5.0.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to `CATALINA_OPTS`.
#### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the `Java Options` under the `Java` tab.
### JBoss EAP 7 #### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.19.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.5.0.jar -Xms1303m -Xmx1303m ..."
... ``` #### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.19.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.5.0.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.19.jar
+-javaagent:path/to/applicationinsights-agent-3.5.0.jar
``` ### Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.5.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.19.jar>
+ -javaagent:path/to/applicationinsights-agent-3.5.0.jar>
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `j
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.19.jar
+ -javaagent:path/to/applicationinsights-agent-3.5.0.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.19.jar` to the existing `j
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.19.jar
+-javaagent:path/to/applicationinsights-agent-3.5.0.jar
``` ### Others
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.19.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.5.0.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.19</version>
+ <version>3.5.0</version>
</dependency> ```
First, add the `applicationinsights-core` dependency:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.19</version>
+ <version>3.5.0</version>
</dependency> ```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
More information and configuration options are provided in the following section
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.19.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.5.0.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.4.19.jar` is located.
+If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.5.0.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.4.19.jar` is located.
+If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.5.0.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.19</version>
+ <version>3.5.0</version>
</dependency> ```
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.19.jar` is located.
+`applicationinsights-agent-3.5.0.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.19.jar
+-javaagent:path/to/applicationinsights-agent-3.5.0.jar
``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
dotnet add package Azure.Monitor.OpenTelemetry.Exporter
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.19.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.19/applicationinsights-agent-3.4.19.jar) file.
+Download the [applicationinsights-agent-3.5.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.5.0/applicationinsights-agent-3.5.0.jar) file.
> [!WARNING] >
var loggerFactory = LoggerFactory.Create(builder =>
Java autoinstrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.5.0.jar"` to your application's JVM args.
> [!TIP] > Sampling is enabled by default at a rate of 5 requests per second, aiding in cost management. Telemetry data may be missing in scenarios exceeding this rate. For more information on modifying sampling configuration, see [sampling overrides](./java-standalone-sampling-overrides.md).
To paste your Connection String, select from the following options:
B. Set via Configuration File - Java Only (Recommended)
- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.19.jar` with the following content:
+ Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.5.0.jar` with the following content:
```json {
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
description: Understand the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 1/16/2024 Last updated : 3/1/2024
A private cloud includes clusters with:
- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor - VMware vCenter Server for managing ESXi and vSAN-- VMware NSX-T Data Center software-defined networking for vSphere workload VMs
+- VMware NSX software-defined networking for vSphere workload VMs
- VMware vSAN datastore for vSphere workload VMs - VMware HCX for workload mobility - Resources in the Azure underlay (required for connectivity and to operate the private cloud)
Each Azure VMware Solution architectural component has the following function:
- Azure Subscription: Provides controlled access, budget, and quota management for the Azure VMware Solution. - Azure Region: Groups data centers into Availability Zones (AZs) and then groups AZs into regions. - Azure Resource Group: Places Azure services and resources into logical groups.-- Azure VMware Solution Private Cloud: Offers compute, networking, and storage resources using VMware software, including vCenter Server, NSX-T Data Center software-defined networking, vSAN software-defined storage, and Azure bare-metal ESXi hosts. Azure NetApp Files, Azure Elastic SAN, and Pure Cloud Block Store are also supported.
+- Azure VMware Solution Private Cloud: Offers compute, networking, and storage resources using VMware software, including vCenter Server, NSX software-defined networking, vSAN software-defined storage, and Azure bare-metal ESXi hosts. Azure NetApp Files, Azure Elastic SAN, and Pure Cloud Block Store are also supported.
- Azure VMware Solution Resource Cluster: Provides compute, networking, and storage resources for customer workloads by scaling out the Azure VMware Solution private cloud using VMware software, including vSAN software-defined storage and Azure bare-metal ESXi hosts. Azure NetApp Files, Azure Elastic SAN, and Pure Cloud Block Store are also supported. - VMware HCX: Delivers mobility, migration, and network extension services. - VMware Site Recovery: Automates disaster recovery and storage replication services with VMware vSphere Replication. Third-party disaster recovery solutions Zerto Disaster Recovery and JetStream Software Disaster Recovery are also supported.
Azure VMware Solution monitors the following conditions on the host:
## Backup and restore
-Azure VMware Solution private cloud vCenter Server, NSX-T Data Center, and HCX Manager (if enabled) configurations are on a daily backup schedule. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
+Azure VMware Solution private cloud vCenter Server, NSX, and HCX Manager (if enabled) configurations are on a daily backup schedule. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
> [!NOTE] > Restorations are intended for catastrophic situations only.
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 08/23/2023 Last updated : 02/29/2024
You can get the value of these service-defined variables to make adjustments tha
| $TaskSlotsPerNode |The number of task slots that can be used to run concurrent tasks on a single compute node in the pool. | | $CurrentDedicatedNodes |The current number of dedicated compute nodes. | | $CurrentLowPriorityNodes |The current number of Spot compute nodes, including any nodes that have been preempted. |
+| $UsableNodeCount | The number of usable compute nodes. |
| $PreemptedNodeCount | The number of nodes in the pool that are in a preempted state. | > [!WARNING]
You can get the value of these service-defined variables to make adjustments tha
> date, these service-defined variables will no longer be populated with sample data. Please discontinue use of these variables > before this date.
-> [!WARNING]
-> `$PreemptedNodeCount` is currently not available and returns `0` valued data.
- > [!NOTE] > Use `$RunningTasks` when scaling based on the number of tasks running at a point in time, and `$ActiveTasks` when scaling based on the number of tasks that are queued up to run.
$runningTasksSample = $RunningTasks.GetSample(60 * TimeInterval_Second, 120 * Ti
Because there might be a delay in sample availability, you should always specify a time range with a look-back start time that's older than one minute. It takes approximately one minute for samples to propagate through the system, so samples in the range `(0 * TimeInterval_Second, 60 * TimeInterval_Second)` might not be available. Again, you can use the percentage parameter of `GetSample()` to force a particular sample percentage requirement. > [!IMPORTANT]
-> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." Since it's only a single sample, and it might be an older sample, it might not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on.
+> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you had, no matter how long ago you retrieved it." Since it's only a single sample, and it might be an older sample, it might not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on.
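To make that concrete, here's a hedged sketch of a formula that treats `GetSample(1)` only as a fallback when too few recent samples are available, applied with the Batch .NET client. The pool ID, account details, 70 percent threshold, and 20-node cap are placeholder assumptions, not recommendations:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

class AutoScaleSample
{
    static async Task Main()
    {
        // Placeholder account details; substitute your own Batch account values.
        var credentials = new BatchSharedKeyCredentials(
            "https://<account>.<region>.batch.azure.com", "<account>", "<key>");

        // If fewer than 70 percent of the last 15 minutes of samples are available,
        // fall back to the single most recent sample; otherwise also consider the
        // 15-minute average. Cap the pool at 20 dedicated nodes.
        const string formula = @"
            $samplePercent = $ActiveTasks.GetSamplePercent(TimeInterval_Minute * 15);
            $tasks = $samplePercent < 70 ?
                max(0, $ActiveTasks.GetSample(1)) :
                max($ActiveTasks.GetSample(1), avg($ActiveTasks.GetSample(TimeInterval_Minute * 15)));
            $TargetDedicatedNodes = max(0, min($tasks, 20));
            $NodeDeallocationOption = taskcompletion;";

        using BatchClient batchClient = BatchClient.Open(credentials);
        await batchClient.PoolOperations.EnableAutoScaleAsync(
            "mypool",
            autoscaleFormula: formula,
            autoscaleEvaluationInterval: TimeSpan.FromMinutes(30));
    }
}
```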
## Write an autoscale formula
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
- Title: Create an Azure Batch pool without public IP addresses (preview)
-description: Learn how to create an Azure Batch pool without public IP addresses.
- Previously updated : 05/30/2023---
-# Create a Batch pool without public IP addresses (preview)
-
-> [!WARNING]
-> This preview version will be retired on **31 March 2023**, and will be replaced by
-> [Simplified node communication pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
-> For more information, see the [Retirement Migration Guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md).
-
-> [!IMPORTANT]
-> - Support for pools without public IP addresses in Azure Batch is currently in public preview for the following regions: France Central, East Asia, West Central US, South Central US, West US 2, East US, North Europe, East US 2, Central US, West Europe, North Central US, West US, Australia East, Japan East, Japan West.
-> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-When you create an Azure Batch pool, you can provision the virtual machine configuration pool without a public IP address. This article explains how to set up a Batch pool without public IP addresses.
-
-## Why use a pool without public IP addresses?
-
-By default, all the compute nodes in an Azure Batch virtual machine configuration pool are assigned a public IP address. This address is used by the Batch service to schedule tasks and for communication with compute nodes, including outbound access to the internet.
-
-To restrict access to these nodes and reduce the discoverability of these nodes from the internet, you can provision the pool without public IP addresses.
-
-## Prerequisites
--- **Authentication**. To use a pool without public IP addresses inside a [virtual network](./batch-virtual-network.md), the Batch client API must use Microsoft Entra authentication. Azure Batch support for Microsoft Entra ID is documented in [Authenticate Azure Batch services with Microsoft Entra ID](batch-aad-auth.md). If you aren't creating your pool within a virtual network, either Microsoft Entra authentication or key-based authentication can be used.--- **An Azure VNet**. If you're creating your pool in a [virtual network](batch-virtual-network.md), follow these requirements and configurations. To prepare a VNet with one or more subnets in advance, you can use the Azure portal, Azure PowerShell, the Azure CLI, or other methods.-
- - The VNet must be in the same subscription and region as the Batch account you use to create your pool.
-
- - The subnet specified for the pool must have enough unassigned IP addresses to accommodate the number of VMs targeted for the pool; that is, the sum of the `targetDedicatedNodes` and `targetLowPriorityNodes` properties of the pool. If the subnet doesn't have enough unassigned IP addresses, the pool partially allocates the compute nodes, and a resize error occurs.
-
- - You must disable private link service and endpoint network policies. This action can be done by using Azure CLI:
-
- `az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resource-group <resourcegroup> --disable-private-endpoint-network-policies --disable-private-link-service-network-policies`
-
-> [!IMPORTANT]
-> For each 100 dedicated or Spot nodes, Batch allocates one private link service and one load balancer. These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). For large pools, you might need to [request a quota increase](batch-quota-limit.md#increase-a-quota) for one or more of these resources. Additionally, no resource locks should be applied to any resource created by Batch, since this prevent cleanup of resources as a result of user-initiated actions such as deleting a pool or resizing to zero.
-
-## Current limitations
-
-1. Pools without public IP addresses must use Virtual Machine Configuration and not Cloud Services Configuration.
-1. [Custom endpoint configuration](pool-endpoint-configuration.md) to Batch compute nodes doesn't work with pools without public IP addresses.
-1. Because there are no public IP addresses, you can't [use your own specified public IP addresses](create-pool-public-ip.md) with this type of pool.
-1. [Basic VM size](../virtual-machines/sizes-previous-gen.md#basic-a) doesn't work with pools without public IP addresses.
-
-## Create a pool without public IP addresses in the Azure portal
-
-1. Navigate to your Batch account in the Azure portal.
-1. In the **Settings** window on the left, select **Pools**.
-1. In the **Pools** window, select **Add**.
-1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown.
-1. Select the correct **Publisher/Offer/Sku** of your image.
-1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, and any desired optional settings.
-1. Optionally select a virtual network and subnet you wish to use. This virtual network must be in the same resource group as the pool you're creating.
-1. In **IP address provisioning type**, select **NoPublicIPAddresses**.
-
-![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/batch-pool-no-public-ip-address/create-pool-without-public-ip-address.png)
-
-## Use the Batch REST API to create a pool without public IP addresses
-
-The example below shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool that uses public IP addresses.
-
-### REST API URI
-
-```http
-POST {batchURL}/pools?api-version=2020-03-01.11.0
-client-request-id: 00000000-0000-0000-0000-000000000000
-```
-
-### Request body
-
-```json
-"pool": {
- "id": "pool2",
- "vmSize": "standard_a1",
- "virtualMachineConfiguration": {
- "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "20.04-lts"
- },
- "nodeAgentSKUId": "batch.node.ubuntu 20.04"
- }
- "networkConfiguration": {
- "subnetId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Network/virtualNetworks/<your_vnet_name>/subnets/<your_subnet_name>",
- "publicIPAddressConfiguration": {
- "provision": "NoPublicIPAddresses"
- }
- },
- "resizeTimeout": "PT15M",
- "targetDedicatedNodes": 5,
- "targetLowPriorityNodes": 0,
- "taskSlotsPerNode": 3,
- "taskSchedulingPolicy": {
- "nodeFillType": "spread"
- },
- "enableAutoScale": false,
- "enableInterNodeCommunication": true,
- "metadata": [
- {
- "name": "myproperty",
- "value": "myvalue"
- }
- ]
-}
-```
-
-> [!Important]
-> This document references a release version of Linux that is nearing or at, End of Life(EOL). Please consider updating to a more current version.
-
-## Outbound access to the internet
-
-In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
-
-Another way to provide outbound connectivity is to use a user-defined route (UDR). This method lets you route traffic to a proxy machine that has public internet access.
-
-## Next steps
--- Learn more about [creating pools in a virtual network](batch-virtual-network.md).-- Learn how to [use private endpoints with Batch accounts](private-connectivity.md).
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-vm-sizes.md
Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 02/13/2023 Last updated : 02/29/2024 # Choose a VM size and image for compute nodes in an Azure Batch pool
az batch location list-skus --location <azure-region>
``` > [!TIP]
-> Batch **does not** support any VM SKU sizes that have only remote storage. A local temporary disk is required for Batch.
-> For example, Batch supports [ddv4 and ddsv4](../virtual-machines/ddv4-ddsv4-series.md), but does not support
-> [dv4 and dsv4](../virtual-machines/dv4-dsv4-series.md).
+> It's recommended to avoid VM SKUs/families with impending Batch support end of life (EOL) dates. These dates can be discovered
+> via the [`ListSupportedVirtualMachineSkus` API](/rest/api/batchmanagement/location/list-supported-virtual-machine-skus),
+> [PowerShell](/powershell/module/az.batch/get-azbatchsupportedvirtualmachinesku),
+> or [Azure CLI](/cli/azure/batch/location#az-batch-location-list-skus).
+> For more information, see the [Batch best practices guide](best-practices.md) regarding Batch pool VM SKU selection.
+
+Batch **doesn't** support any VM SKU sizes that have only remote storage. A local temporary disk is required for Batch.
+For example, Batch supports [ddv4 and ddsv4](../virtual-machines/ddv4-ddsv4-series.md), but does not support
+[dv4 and dsv4](../virtual-machines/dv4-dsv4-series.md).
### Using Generation 2 VM Images
For example, using the Azure CLI, you can obtain the list of supported VM images
az batch pool supported-images list ```
-It's recommended to avoid images with impending Batch support end of life (EOL) dates. These dates can be discovered via
-the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages),
-[PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images).
-For more information, see the [Batch best practices guide](best-practices.md) regarding Batch pool VM image selection.
+> [!TIP]
+> It's recommended to avoid images with impending Batch support end of life (EOL) dates. These dates can be discovered via
+> the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages),
+> [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images).
+> For more information, see the [Batch best practices guide](best-practices.md) regarding Batch pool VM image selection.
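As a similar hedged sketch for images, the list can be filtered on the EOL field; this assumes the CLI output exposes `batchSupportEndOfLife` and the nested `imageReference` properties.

```
az batch pool supported-images list \
    --query "[?batchSupportEndOfLife!=null].{offer:imageReference.offer, sku:imageReference.sku, eolDate:batchSupportEndOfLife}" \
    --output table
```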
## Next steps
batch Batch Rendering Application Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-application-reference.md
- Title: Use rendering applications
-description: How to use rendering applications with Azure Batch. This article provides a brief description of how to run each rendering application.
Previously updated : 08/02/2018---
-# Rendering applications
-
-Rendering applications are used by creating Batch jobs and tasks. The task command line property specifies the appropriate command line and parameters. The easiest way to create the job tasks is to use the Batch Explorer templates as specified in [this article](./batch-rendering-using.md#using-batch-explorer). The templates can be viewed and modified versions created if necessary.
-
-This article provides a brief description of how to run each rendering application.
-
-## Rendering with Autodesk 3ds Max
-
-### Renderer support
-
-In addition to the renderers built into 3ds Max, the following renderers are available on the rendering VM images and can be referenced by the 3ds Max scene file:
-
-* Autodesk Arnold
-* Chaos Group V-Ray
-
-### Task command line
-
-Invoke the `3dsmaxcmdio.exe` application to perform command line rendering on a pool node. This application is on the path when the task is run. The `3dsmaxcmdio.exe` application has the same available parameters as the `3dsmaxcmd.exe` application, which is documented in the [3ds Max help documentation](https://help.autodesk.com/view/3DSMAX/2018/ENU/) (Rendering | Command-Line Rendering section).
-
-For example:
-
-```
-3dsmaxcmdio.exe -v:5 -rfw:0 -start:{0} -end:{0} -bitmapPath:"%AZ_BATCH_JOB_PREP_WORKING_DIR%\sceneassets\images" -outputName:dragon.jpg -w:1280 -h:720 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scenes\dragon.max"
-```
-
-Notes:
-
-* Great care must be taken to ensure the asset files are found. Ensure the paths are correct and relative using the **Asset Tracking** window, or use the `-bitmapPath` parameter on the command line.
-* See if there are issues with the render, such as inability to find assets, by checking the `stdout.txt` file written by 3ds Max when the task is run.
-
-### Batch Explorer templates
-
-Pool and job templates can be accessed from the **Gallery** in Batch Explorer. The template source files are available in the [Batch Explorer data repository on GitHub](https://github.com/Azure/BatchExplorer-data/tree/master/ncj/3dsmax).
-
-## Rendering with Autodesk Maya
-
-### Renderer support
-
-In addition to the renderers built into Maya, the following renderers are available on the rendering VM images and can be referenced by the Maya scene file:
-
-* Autodesk Arnold
-* Chaos Group V-Ray
-
-### Task command line
-
-The `render.exe` command-line renderer is used in the task command line. The command-line renderer is documented in [Maya help](https://help.autodesk.com/view/MAYAUL/2018/ENU/?guid=GUID-EB558BC0-5C2B-439C-9B00-F97BCB9688E4).
-
-In the following example, a job preparation task is used to copy the scene files and assets to the job preparation working directory, an output folder is used to store the rendering image, and frame 10 is rendered.
-
-```
-render -renderer sw -proj "%AZ_BATCH_JOB_PREP_WORKING_DIR%" -verb -rd "%AZ_BATCH_TASK_WORKING_DIR%\output" -s 10 -e 10 -x 1920 -y 1080 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scene-file.ma"
-```
-
-For V-Ray rendering, the Maya scene file would normally specify V-Ray as the renderer. It can also be specified on the command line:
-
-```
-render -renderer vray -proj "%AZ_BATCH_JOB_PREP_WORKING_DIR%" -verb -rd "%AZ_BATCH_TASK_WORKING_DIR%\output" -s 10 -e 10 -x 1920 -y 1080 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scene-file.ma"
-```
-
-For Arnold rendering, the Maya scene file would normally specify Arnold as the renderer. It can also be specified on the command line:
-
-```
-render -renderer arnold -proj "%AZ_BATCH_JOB_PREP_WORKING_DIR%" -verb -rd "%AZ_BATCH_TASK_WORKING_DIR%\output" -s 10 -e 10 -x 1920 -y 1080 "%AZ_BATCH_JOB_PREP_WORKING_DIR%\scene-file.ma"
-```
-
-### Batch Explorer templates
-
-Pool and job templates can be accessed from the **Gallery** in Batch Explorer. The template source files are available in the [Batch Explorer data repository on GitHub](https://github.com/Azure/BatchExplorer-data/tree/master/ncj/maya).
-
-## Next steps
-
-Use the pool and job templates from the [data repository in GitHub](https://github.com/Azure/BatchExplorer-data/tree/master/ncj) using Batch Explorer. When required, create new templates or modify one of the supplied templates.
batch Batch Rendering Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-applications.md
- Title: Rendering applications
-description: It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed.
Previously updated : 03/12/2021---
-# Pre-installed applications on Batch rendering VM images
-
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
-
-It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed.
-
-Where applicable, pay-for-use licensing is available for the pre-installed rendering applications. When a Batch pool is created, the required applications can be specified and both the cost of VM and applications will be billed per minute. Application prices are listed on the [Azure Batch pricing page](https://azure.microsoft.com/pricing/details/batch/#graphic-rendering).
-
-Some applications only support Windows, but most are supported on both Windows and Linux.
-
-> [!WARNING]
-> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
-
-## Applications on latest CentOS 7 rendering image
-
-The following list applies to the CentOS rendering image, version 1.2.0.
-
-* Autodesk Maya I/O 2020 Update 4.6
-* Autodesk Arnold for Maya 2020 (Arnold version 6.2.0.0) MtoA-4.2.0-2020
-* Chaos Group V-Ray for Maya 2020 (version 5.00.21)
-* Blender (2.80)
-* AZ 10
-
-## Applications on latest Windows Server rendering image
-
-The following list applies to the Windows Server rendering image, version 1.5.0.
-
-* Autodesk Maya I/O 2020 Update 4.4
-* Autodesk 3ds Max I/O 2021 Update 3
-* Autodesk Arnold for Maya 2020 (Arnold version 6.1.0.1) MtoA-4.1.1.1-2020
-* Autodesk Arnold for 3ds Max 2021 (Arnold version 6.1.0.1) MAXtoA-4.2.2.20-2021
-* Chaos Group V-Ray for Maya 2020 (version 5.00.21)
-* Chaos Group V-Ray for 3ds Max 2021 (version 5.00.05)
-* Blender (2.79)
-* Blender (2.80)
-* AZ 10
-
-> [!IMPORTANT]
-> To run V-Ray with Maya outside of the [Azure Batch extension templates](https://github.com/Azure/batch-extension-templates), start `vrayses.exe` before running the render. To start `vrayses.exe` outside of the templates, you can use the following command: `"%MAYA_2020%\vray\bin\vrayses.exe"`.
->
-> For an example, see the start task of the [Maya and V-Ray template](https://github.com/Azure/batch-extension-templates/blob/master/templates/maya/render-vray-windows/pool.template.json) on GitHub.
-
-## Applications on previous Windows Server rendering images
-
-The following list applies to Windows Server 2016, version 1.3.8 rendering images.
-
-* Autodesk Maya I/O 2017 Update 5 (version 17.4.5459)
-* Autodesk Maya I/O 2018 Update 6 (version 18.4.0.7622)
-* Autodesk Maya I/O 2019
-* Autodesk 3ds Max I/O 2018 Update 4 (version 20.4.0.4254)
-* Autodesk 3ds Max I/O 2019 Update 1 (version 21.2.0.2219)
-* Autodesk 3ds Max I/O 2020 Update 2
-* Autodesk Arnold for Maya 2017 (Arnold version 5.3.0.2) MtoA-3.2.0.2-2017
-* Autodesk Arnold for Maya 2018 (Arnold version 5.3.0.2) MtoA-3.2.0.2-2018
-* Autodesk Arnold for Maya 2019 (Arnold version 5.3.0.2) MtoA-3.2.0.2-2019
-* Autodesk Arnold for 3ds Max 2018 (Arnold version 5.3.0.2)(version 1.2.926)
-* Autodesk Arnold for 3ds Max 2019 (Arnold version 5.3.0.2)(version 1.2.926)
-* Autodesk Arnold for 3ds Max 2020 (Arnold version 5.3.0.2)(version 1.2.926)
-* Chaos Group V-Ray for Maya 2017 (version 4.12.01)
-* Chaos Group V-Ray for Maya 2018 (version 4.12.01)
-* Chaos Group V-Ray for Maya 2019 (version 4.04.03)
-* Chaos Group V-Ray for 3ds Max 2018 (version 4.20.01)
-* Chaos Group V-Ray for 3ds Max 2019 (version 4.20.01)
-* Chaos Group V-Ray for 3ds Max 2020 (version 4.20.01)
-* Blender (2.79)
-* Blender (2.80)
-* AZ 10
-
-The following list applies to Windows Server 2016, version 1.3.7 rendering images.
-
-* Autodesk Maya I/O 2017 Update 5 (version 17.4.5459)
-* Autodesk Maya I/O 2018 Update 4 (version 18.4.0.7622)
-* Autodesk 3ds Max I/O 2019 Update 1 (version 21.2.0.2219)
-* Autodesk 3ds Max I/O 2018 Update 4 (version 20.4.0.4254)
-* Autodesk Arnold for Maya 2017 (Arnold version 5.2.0.1) MtoA-3.1.0.1-2017
-* Autodesk Arnold for Maya 2018 (Arnold version 5.2.0.1) MtoA-3.1.0.1-2018
-* Autodesk Arnold for 3ds Max 2018 (Arnold version 5.0.2.4)(version 1.2.926)
-* Autodesk Arnold for 3ds Max 2019 (Arnold version 5.0.2.4)(version 1.2.926)
-* Chaos Group V-Ray for Maya 2018 (version 3.52.03)
-* Chaos Group V-Ray for 3ds Max 2018 (version 3.60.02)
-* Chaos Group V-Ray for Maya 2019 (version 3.52.03)
-* Chaos Group V-Ray for 3ds Max 2019 (version 4.10.01)
-* Blender (2.79)
-
-> [!NOTE]
-> Chaos Group V-Ray for 3ds Max 2019 (version 4.10.01) introduces breaking changes to V-ray. To use the previous version (version 3.60.02), use Windows Server 2016, version 1.3.2 rendering nodes.
-
-## Applications on previous CentOS rendering images
-
-The following list applies to CentOS 7.6, version 1.1.6 rendering images.
-
-* Autodesk Maya I/O 2017 Update 5 (cut 201708032230)
-* Autodesk Maya I/O 2018 Update 2 (cut 201711281015)
-* Autodesk Maya I/O 2019 Update 1
-* Autodesk Arnold for Maya 2017 (Arnold version 5.3.1.1) MtoA-3.2.1.1-2017
-* Autodesk Arnold for Maya 2018 (Arnold version 5.3.1.1) MtoA-3.2.1.1-2018
-* Autodesk Arnold for Maya 2019 (Arnold version 5.3.1.1) MtoA-3.2.1.1-2019
-* Chaos Group V-Ray for Maya 2017 (version 3.60.04)
-* Chaos Group V-Ray for Maya 2018 (version 3.60.04)
-* Blender (2.68)
-* Blender (2.8)
-
-## Next steps
-
-To use the rendering VM images, they need to be specified in the pool configuration when a pool is created; see the [Batch pool capabilities for rendering](./batch-rendering-functionality.md).
batch Batch Rendering Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-functionality.md
Title: Rendering capabilities description: Standard Azure Batch capabilities are used to run rendering workloads and apps. Batch includes specific features to support rendering workloads. Previously updated : 12/13/2021 Last updated : 02/28/2024
The task command line strings will need to reference the applications and paths
Most rendering applications will require licenses obtained from a license server. If there's an existing on-premises license server, then both the pool and license server need to be on the same [virtual network](../virtual-network/virtual-networks-overview.md). It is also possible to run a license server on an Azure VM, with the Batch pool and license server VM being on the same virtual network.
-## Batch pools using rendering VM images
-
-> [!WARNING]
-> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
-
-### Rendering application installation
-
-An Azure Marketplace rendering VM image can be specified in the pool configuration if only the pre-installed applications need to be used.
-
-There is a Windows image and a CentOS image. In the [Azure Marketplace](https://azuremarketplace.microsoft.com), the VM images can be found by searching for 'batch rendering'.
-
-The Azure portal and Batch Explorer provide GUI tools to select a rendering VM image when you create a pool. If using a Batch API, then specify the following property values for [ImageReference](/rest/api/batchservice/pool/add#imagereference) when creating a pool:
-
-| Publisher | Offer | Sku | Version |
-||||--|
-| batch | rendering-centos73 | rendering | latest |
-| batch | rendering-windows2016 | rendering | latest |
-
-Other options are available if additional applications are required on the pool VMs:
+## Batch pools using custom VM images
* A custom image from the Azure Compute Gallery: * Using this option, you can configure your VM with the exact applications and specific versions that you require. For more information, see [Create a pool with the Azure Compute Gallery](batch-sig-images.md). Autodesk and Chaos Group have modified Arnold and V-Ray, respectively, to validate against an Azure Batch licensing service. Make sure you have the versions of these applications with this support, otherwise the pay-per-use licensing won't work. Current versions of Maya or 3ds Max don't require a license server when running headless (in batch/command-line mode). Contact Azure support if you're not sure how to proceed with this option.
Other options are available if additional applications are required on the pool
* Resource files: * Application files are uploaded to Azure blob storage, and you specify file references in the [pool start task](/rest/api/batchservice/pool/add#starttask). When pool VMs are created, the resource files are downloaded onto each VM.
-### Pay-for-use licensing for pre-installed applications
-
-The applications that will be used and have a licensing fee need to be specified in the pool configuration.
-
-* Specify the `applicationLicenses` property when [creating a pool](/rest/api/batchservice/pool/add#request-body). The following values can be specified in the array of strings - "vray", "arnold", "3dsmax", "maya".
-* When you specify one or more applications, then the cost of those applications is added to the cost of the VMs. Application prices are listed on the [Azure Batch pricing page](https://azure.microsoft.com/pricing/details/batch/#graphic-rendering).
-
-> [!NOTE]
-> If instead you connect to a license server to use the rendering applications, do not specify the `applicationLicenses` property.
-
-You can use the Azure portal or Batch Explorer to select applications and show the application prices.
-
-If an attempt is made to use an application, but the application hasn't been specified in the `applicationLicenses` property of the pool configuration or does not reach a license server, then the application execution fails with a licensing error and non-zero exit code.
-
-### Environment variables for pre-installed applications
-
-To be able to create the command line for rendering tasks, the installation location of the rendering application executables must be specified. System environment variables have been created on the Azure Marketplace VM images, which can be used instead of having to specify actual paths. These environment variables are in addition to the [standard Batch environment variables](./batch-compute-node-environment-variables.md) created for each task.
-
-|Application|Application Executable|Environment Variable|
-||||
-|Autodesk 3ds Max 2021|3dsmaxcmdio.exe|3DSMAX_2021_EXEC|
-|Autodesk Maya 2020|render.exe|MAYA_2020_EXEC|
-|Chaos Group V-Ray Standalone|vray.exe|VRAY_4.10.03_EXEC|
-|Arnold 2020 command line|kick.exe|ARNOLD_2020_EXEC|
-|Blender|blender.exe|BLENDER_2018_EXEC|
- ## Azure VM families As with other workloads, rendering application system requirements vary, and performance requirements vary for jobs and projects. A large variety of VM families are available in Azure depending on your requirements – lowest cost, best price/performance, best performance, and so on.
When the Azure Marketplace VM images are used, then the best practice is to use
## Next steps
-* Learn about [using rendering applications with Batch](batch-rendering-applications.md).
+* Learn about [Batch rendering services](batch-rendering-service.md).
* Learn about [Storage and data movement options for rendering asset and output files](batch-rendering-storage-data-movement.md).
batch Batch Rendering Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-service.md
Title: Rendering overview description: Introduction of using Azure for rendering and an overview of Azure Batch rendering capabilities Previously updated : 12/13/2021 Last updated : 02/29/2024 # Rendering using Azure
-Rendering is the process of taking 3D models and converting them into 2D images. 3D scene files are authored in applications such as Autodesk 3ds Max, Autodesk Maya, and Blender. Rendering applications such as Autodesk Maya, Autodesk Arnold, Chaos Group V-Ray, and Blender Cycles produce 2D images. Sometimes single images are created from the scene files. However, it's common to model and render multiple images, and then combine them in an animation.
+Rendering is the process of taking 3D models and converting them into 2D images. 3D scene files are authored in applications such as Autodesk 3ds Max, Autodesk Maya, and Blender. Rendering applications such as Autodesk Maya, Autodesk Arnold, Chaos Group V-Ray, and Blender Cycles produce 2D images. Sometimes single images are created from the scene files. However, it's common to model and render multiple images, and then combine them in an animation.
The rendering workload is heavily used for special effects (VFX) in the Media and Entertainment industry. Rendering is also used in many other industries such as advertising, retail, oil and gas, and manufacturing.
-The process of rendering is computationally intensive; there can be many frames/images to produce and each image can take many hours to render. Rendering is therefore a perfect batch processing workload that can leverage Azure to run many renders in parallel and utilize a wide range of hardware, including GPUs.
+The process of rendering is computationally intensive; there can be many frames/images to produce and each image can take many hours to render. Rendering is therefore a perfect batch processing workload that can use Azure to run many renders in parallel and utilize a wide range of hardware, including GPUs.
## Why use Azure for rendering? For many reasons, rendering is a workload perfectly suited for Azure: * Rendering jobs can be split into many pieces that can be run in parallel using multiple VMs:
- * Animations consist of many frames and each frame can be rendered in parallel. The more VMs available to process each frame, the faster all the frames and the animation can be produced.
- * Some rendering software allows single frames to be broken up into multiple pieces, such as tiles or slices. Each piece can be rendered separately, then combined into the final image when all pieces have finished. The more VMs that are available, the faster a frame can be rendered.
+ * Animations consist of many frames and each frame can be rendered in parallel. The more VMs available to process each frame, the faster all the frames and the animation can be produced.
+ * Some rendering software allows single frames to be broken up into multiple pieces, such as tiles or slices. Each piece can be rendered separately, then combined into the final image when all pieces are finished. The more VMs that are available, the faster a frame can be rendered.
* Rendering projects can require huge scale:
- * Individual frames can be complex and require many hours to render, even on high-end hardware; animations can consist of hundreds of thousands of frames. A huge amount of compute is required to render high-quality animations in a reasonable amount of time. In some cases, over 100,000 cores have been used to render thousands of frames in parallel.
+ * Individual frames can be complex and require many hours to render, even on high-end hardware; animations can consist of hundreds of thousands of frames. A huge amount of compute is required to render high-quality animations in a reasonable amount of time. In some cases, over 100,000 cores are being used to render thousands of frames in parallel.
* Rendering projects are project-based and require varying amounts of compute: * Allocate compute and storage capacity when required, scale it up or down according to load during a project, and remove it when a project is finished.
- * Pay for capacity when allocated, but don't pay for it when there is no load, such as between projects.
+ * Pay for capacity when allocated, but don't pay for it when there's no load, such as between projects.
* Cater for bursts due to unexpected changes; scale higher if there are unexpected changes late in a project and those changes need to be processed on a tight schedule. * Choose from a wide selection of hardware according to application, workload, and timeframe: * There's a wide selection of hardware available in Azure that can be allocated and managed with Batch.
- * Depending on the project, the requirement may be for the best price/performance or the best overall performance. Different scenes and/or rendering applications will have different memory requirements. Some rendering application can leverage GPUs for the best performance or certain features.
+ * Depending on the project, the requirement may be for the best price/performance or the best overall performance. Different scenes and/or rendering applications can have different memory requirements. Some rendering applications can use GPUs for the best performance or certain features.
* Low-priority or [Azure Spot VMs](https://azure.microsoft.com/pricing/spot/) reduce cost: * Low-priority and Spot VMs are available for a large discount compared to standard VMs and are suitable for some job types. ## Existing on-premises rendering environment
-The most common case is for there to be an existing on-premises render farm being managed by a render management application such as PipelineFX Qube, Royal Render, Thinkbox Deadline, or a custom application. The requirement is to extend the on-premises render farm capacity using Azure VMs.
+The most common case is for there to be an existing on-premises render farm that's managed by a render management application such as PipelineFX Qube, Royal Render, Thinkbox Deadline, or a custom application. The requirement is to extend the on-premises render farm capacity using Azure VMs.
Azure infrastructure and services are used to create a hybrid environment where Azure is used to supplement the on-premises capacity. For example: * Use a [Virtual Network](../virtual-network/virtual-networks-overview.md) to place the Azure resources on the same network as the on-premises render farm. * Use [Avere vFXT for Azure](../avere-vfxt/avere-vfxt-overview.md) or [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) to cache source files in Azure to reduce bandwidth use and latency, maximizing performance.
-* Ensure the existing license server is on the virtual network and purchase the additional licenses required to cater for the extra Azure-based capacity.
+* Ensure the existing license server is on the virtual network and purchase more licenses as required to cater for the extra Azure-based capacity.
## No existing render farm
-Client workstations may be performing rendering, but the rendering load is increasing and it is taking too long to solely use workstation capacity.
+Client workstations may be performing rendering, but the rendering load is increasing and it's taking too long to solely use workstation capacity.
There are two main options available:
-* Deploy an on-premises render manager, such as Royal Render, and configure a hybrid environment to use Azure when further capacity or performance is required. A render manager is specifically tailored for rendering workloads and will include plug-ins for the popular client applications, enabling easy submission of rendering jobs.
+* Deploy an on-premises render manager, such as Royal Render, and configure a hybrid environment to use Azure when further capacity or performance is required. A render manager is specially tailored for rendering workloads and will include plug-ins for the popular client applications, enabling easy submission of rendering jobs.
-* A custom solution using Azure Batch to allocate and manage the compute capacity as well as providing the job scheduling to run the render jobs.
+* A custom solution using Azure Batch to allocate and manage the compute capacity and providing the job scheduling to run the render jobs.
## Next steps
- Learn how to [use Azure infrastructure and services to extend an existing on-premises render farm](https://azure.microsoft.com/solutions/high-performance-computing/rendering/).
- Learn more about [Azure Batch rendering capabilities](batch-rendering-functionality.md).
batch Batch Rendering Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-using.md
- Title: Using rendering capabilities
-description: How to use Azure Batch rendering capabilities. Try using the Batch Explorer application, either directly or invoked from a client application plug-in.
Previously updated : 03/12/2020---
-# Using Azure Batch rendering
-
-> [!WARNING]
-> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on February 29, 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, [a custom VM image and standard application licensing should be used.](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing)
-
-There are several ways to use Azure Batch rendering:
-
-* APIs:
- * Write code using any of the Batch APIs. Developers can integrate Azure Batch capabilities into their existing applications or workflow, whether cloud or based on-premises.
-* Command line tools:
- * The [Azure command line](/cli/azure/) or [PowerShell](/powershell/azure/) can be used to script Batch use.
- * In particular, the [Batch CLI template support](./batch-cli-templates.md) makes it much easier to create pools and submit jobs.
-* Batch Explorer UI:
- * [Batch Explorer](https://github.com/Azure/BatchLabs) is a cross-platform client tool that also allows Batch accounts to be managed and monitored.
- * For each of the rendering applications, a number of pool and job templates are provided that can be used to easily create pools and to submit jobs. A set of templates is listed in the application UI, with the template files being accessed from GitHub.
- * Custom templates can be authored from scratch or the supplied templates from GitHub can be copied and modified.
-* Client application plug-ins:
- * Plug-ins are available that allow Batch rendering to be used from directly within the client design and modeling applications. The plug-ins mainly invoke the Batch Explorer application with contextual information about the current 3D model and include features to help manage assets.
-
-The best way to try Azure Batch rendering and simplest way for end-users, who are not developers and not Azure experts, is to use the Batch Explorer application, either directly or invoked from a client application plug-in.
-
-## Using Batch Explorer
-
-Batch Explorer [downloads are available](https://azure.github.io/BatchExplorer/) for Windows, OSX, and Linux.
-
-### Using templates to create pools and run jobs
-
-A comprehensive set of templates is available for use with Batch Explorer that makes it easy to create pools and submit jobs for the various rendering applications without having to specify all the properties required to create pools, jobs, and tasks directly with Batch. The templates available in Batch Explorer are stored and visible in [a GitHub repository](https://github.com/Azure/BatchExplorer-data/tree/master/ncj).
-
-![Batch Explorer Gallery](./media/batch-rendering-using/batch-explorer-gallery.png)
-
-Templates are provided that cater for all the applications present on the Marketplace rendering VM images. For each application multiple templates exist, including pool templates to cater for CPU and GPU pools, Windows and Linux pools; job templates include full frame or tiled Blender rendering and V-Ray distributed rendering. The set of supplied templates will be expanded over time to cater for other Batch capabilities, such as pool auto-scaling.
-
-It's also possible for custom templates to be produced, from scratch or by modifying the supplied templates. Custom templates can be used by selecting the 'Local templates' item in the 'Gallery' section of Batch Explorer.
-
-### File system and data movement
-
-The 'Data' section in Batch Explorer allows files to be copied between a local file system and Azure Storage accounts.
-
-## Next steps
-
-* Learn about [using rendering applications with Batch](batch-rendering-applications.md).
-* Learn about [Storage and data movement options for rendering asset and output files](batch-rendering-storage-data-movement.md).
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
When you create an Azure Batch pool using the Virtual Machine Configuration, you
## Benefits of the Azure Compute Gallery
-When you use the Azure Compute Gallery for your custom image, you have control over the operating system type and configuration, as well as the type of data disks. Your Shared Image can include applications and reference data that become available on all the Batch pool nodes as soon as they are provisioned.
+When you use the Azure Compute Gallery for your custom image, you have control over the operating system type and configuration, as well as the type of data disks. Your Shared Image can include applications and reference data that become available on all the Batch pool nodes as soon as they're provisioned.
You can also have multiple versions of an image as needed for your environment. When you use an image version to create a VM, the image version is used to create new disks for the VM.
Using a Shared Image configured for your scenario can provide several advantages
## Prerequisites
-> [!NOTE]
-> Currently, Azure Batch does not support the 'TrustedLaunch' feature. You must use the standard security type to create a custom image instead.
->
-> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
- - **An Azure Batch account.** To create a Batch account, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
+> [!NOTE]
+> Authentication using Microsoft Entra ID is required. If you use Shared Key Auth, you will get an authentication error.
+ - **An Azure Compute Gallery image.** To create a Shared Image, you need to have or create a managed image resource. The image should be created from snapshots of the VM's OS disk and optionally its attached data disks. > [!NOTE]
The following steps show how to prepare a VM, take a snapshot, and create an ima
### Prepare a VM
-If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image.
+If you're creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image.
To get a full list of current Azure Marketplace image references supported by Azure Batch, use one of the following APIs to return a list of Windows and Linux VM images including the node agent SKU IDs for each image:
To get a full list of current Azure Marketplace image references supported by Az
Follow these guidelines when creating VMs: - Ensure the VM is created with a managed disk. This is the default storage setting when you create a VM.-- Do not install Azure extensions, such as the Custom Script extension, on the VM. If the image contains a pre-installed extension, Azure may encounter problems when deploying the Batch pool.
+- Don't install Azure extensions, such as the Custom Script extension, on the VM. If the image contains a pre-installed extension, Azure may encounter problems when deploying the Batch pool.
- When using attached data disks, you need to mount and format the disks from within a VM to use them. - Ensure that the base OS image you provide uses the default temp drive. The Batch node agent currently expects the default temp drive.-- Ensure that the OS disk is not encrypted.
+- Ensure that the OS disk isn't encrypted.
- Once the VM is running, connect to it via RDP (for Windows) or SSH (for Linux). Install any necessary software or copy desired data. - For faster pool provisioning, use the [ReadWrite disk cache setting](../virtual-machines/premium-storage-performance.md#disk-caching) for the VM's OS disk.
Once you have successfully created your managed image, you need to create an Azu
To create a pool from your Shared Image using the Azure CLI, use the `az batch pool create` command. Specify the Shared Image ID in the `--image` field. Make sure the OS type and SKU matches the versions specified by `--node-agent-sku-id`
-> [!NOTE]
-> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
- > [!IMPORTANT] > The node agent SKU id must align with the publisher/offer/SKU in order for the node to start.
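For example, a minimal sketch of the command; the pool ID, VM size, node counts, Compute Gallery image ID, and node agent SKU are placeholders to replace with your own values.

```
az batch pool create \
    --id mySharedImagePool \
    --vm-size Standard_D2s_v3 \
    --target-dedicated-nodes 2 \
    --image "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/galleries/<gallery>/images/<image-definition>/versions/<version>" \
    --node-agent-sku-id "batch.node.ubuntu 20.04"
```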
Use the following steps to create a pool from a Shared Image in the Azure portal
1. In the **Image Type** section, select **Azure Compute Gallery**. 1. Complete the remaining sections with information about your managed image. 1. Select **OK**.
-1. Once the node is allocated, use **Connect** to generate user and the RDP file for Windows OR use SSH to for Linux to login to the allocated node and verify.
+1. Once the node is allocated, use **Connect** to generate a user and the RDP file for Windows, or use SSH for Linux, to log in to the allocated node and verify.
![Create a pool with from a Shared image with the portal.](media/batch-sig-images/create-custom-pool.png)
Use the following steps to create a pool from a Shared Image in the Azure portal
If you plan to create a pool with hundreds or thousands of VMs or more using a Shared Image, use the following guidance. -- **Azure Compute Gallery replica numbers.** For every pool with up to 300 instances, we recommend you keep at least one replica. For example, if you are creating a pool with 3000 VMs, you should keep at least 10 replicas of your image. We always suggest keeping more replicas than minimum requirements for better performance.
+- **Azure Compute Gallery replica numbers.** For every pool with up to 300 instances, we recommend you keep at least one replica. For example, if you're creating a pool with 3,000 VMs, you should keep at least 10 replicas of your image. We always suggest keeping more replicas than minimum requirements for better performance.
-- **Resize timeout.** If your pool contains a fixed number of nodes (if it doesn't autoscale), increase the `resizeTimeout` property of the pool depending on the pool size. For every 1000 VMs, the recommended resize timeout is at least 15 minutes. For example, the recommended resize timeout for a pool with 2000 VMs is at least 30 minutes.
+- **Resize timeout.** If your pool contains a fixed number of nodes (if it doesn't autoscale), increase the `resizeTimeout` property of the pool depending on the pool size. For every 1,000 VMs, the recommended resize timeout is at least 15 minutes. For example, the recommended resize timeout for a pool with 2,000 VMs is at least 30 minutes.
## Next steps
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 11/02/2023 Last updated : 02/29/2024
derived or aligned with. An image without a specified `batchSupportEndOfLife` da
determined yet by the Batch service. Absence of a date doesn't indicate that the respective image will be supported indefinitely. An EOL date may be added or updated in the future at any time.
+- **VM SKUs with impending end-of-life (EOL) dates:** As with VM images, VM SKUs or families may also reach Batch support
+end of life (EOL). These dates can be discovered via the
+[`ListSupportedVirtualMachineSkus` API](/rest/api/batchmanagement/location/list-supported-virtual-machine-skus),
+[PowerShell](/powershell/module/az.batch/get-azbatchsupportedvirtualmachinesku), or
+[Azure CLI](/cli/azure/batch/location#az-batch-location-list-skus).
+Plan for the migration of your workload to a non-EOL VM SKU by creating a new pool with an appropriate supported VM SKU.
+Absence of an associated `batchSupportEndOfLife` date for a VM SKU doesn't indicate that particular VM SKU will be
+supported indefinitely. An EOL date may be added or updated in the future at any time.
+ - **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource. - **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that jobs won't run if something goes wrong with the pool. This principle is especially important for time-sensitive workloads. For example, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.
For the purposes of isolation, if your scenario requires isolating jobs or tasks
#### Batch Node Agent updates
-Batch node agents aren't automatically upgraded for pools that have non-zero compute nodes. To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions. Checking regularly for updates when they were released enables you to plan upgrades to the latest agent version.
+Batch node agents aren't automatically upgraded for pools that have nonzero compute nodes. To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions. Checking regularly for updates when they were released enables you to plan upgrades to the latest agent version.
Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This process is further discussed in the [Nodes](#nodes) section.
Pool lifetime can vary depending upon the method of allocation and options appli
- **Pool recreation:** Avoid deleting and recreating pools on a daily basis. Instead, create a new pool and then update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool. -- **Pool efficiency and billing:** Batch itself incurs no extra charges. However, you do incur charges for Azure resources utilized, such as compute, storage, networking and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it's in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).
+- **Pool efficiency and billing:** Batch itself incurs no extra charges. However, you do incur charges for Azure resources utilized, such as compute, storage, networking, and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it's in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).
- **Ephemeral OS disks:** Virtual Machine Configuration pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks.
A [job](jobs-and-tasks.md#jobs) is a container designed to contain hundreds, tho
### Fewer jobs, more tasks
-Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1000 tasks rather than creating 100 jobs that contain 10 tasks each. If you used 1000 jobs, each with a single task that would be the least efficient, slowest, and most expensive approach to take.
+Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1,000 tasks rather than creating 100 jobs that contain 10 tasks each. If you used 1,000 jobs, each with a single task that would be the least efficient, slowest, and most expensive approach to take.
Avoid designing a Batch solution that requires thousands of simultaneously active jobs. There's no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas).
Deleting tasks accomplishes two things:
- Cleans up the corresponding task data on the node (provided `retentionTime` hasn't already been hit). This action helps ensure that your nodes don't fill up with task data and run out of disk space. > [!NOTE]
-> For tasks just submitted to Batch, the DeleteTask API call takes up to 10 minutes to take effect. Before it takes effect, other tasks might be prevented from being scheduled. It's because Batch Scheduler still tries to schedule the tasks just deleted. If you want to delete one task shortly after it's submitted, please terminate the task instead (since the terminate task will take effect immediately). And then delete the task 10 minutes later.
+> For tasks just submitted to Batch, the DeleteTask API call takes up to 10 minutes to take effect. Before it takes effect,
+> other tasks might be prevented from being scheduled. This is because the Batch scheduler still tries to schedule the tasks just
+> deleted. If you want to delete a task shortly after it's submitted, terminate the task instead (task termination takes effect
+> immediately), and then delete the task 10 minutes later.
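As a rough sketch of that terminate-then-delete pattern with the Azure CLI (the job and task IDs are placeholders):

```
# Termination takes effect immediately
az batch task stop --job-id myJob --task-id myTask
# Roughly 10 minutes later, remove the task
az batch task delete --job-id myJob --task-id myTask --yes
```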
### Submit large numbers of tasks in collection
Batch supports oversubscribing tasks on nodes (running more tasks than a node ha
### Design for retries and re-execution
-Tasks can be automatically retried by Batch. There are two types of retries: user-controlled and internal. User-controlled retries are specified by the task's [maxTaskRetryCount](/dotnet/api/microsoft.azure.batch.taskconstraints.maxtaskretrycount). When a program specified in the task exits with a non-zero exit code, the task is retried up to the value of the `maxTaskRetryCount`.
+Tasks can be automatically retried by Batch. There are two types of retries: user-controlled and internal. User-controlled retries are specified by the task's [maxTaskRetryCount](/dotnet/api/microsoft.azure.batch.taskconstraints.maxtaskretrycount). When a program specified in the task exits with a nonzero exit code, the task is retried up to the value of the `maxTaskRetryCount`.
Although rare, a task can be retried internally due to failures on the compute node, such as not being able to update internal state or a failure on the node while the task is running. The task will be retried on the same compute node, if possible, up to an internal limit before giving up on the task and deferring the task to be rescheduled by Batch, potentially on a different compute node.
section about attaching and preparing data disks for compute nodes.
### Attaching and preparing data disks Each individual compute node has the exact same data disk specification attached if specified as part of the Batch pool instance. Only
-new data disks may be attached to Batch pools. These data disks attached to compute nodes aren't automatically partitioned, formatted or
+new data disks may be attached to Batch pools. These data disks attached to compute nodes aren't automatically partitioned, formatted, or
mounted. It's your responsibility to perform these operations as part of your [start task](jobs-and-tasks.md#start-task). These start tasks must be crafted to be idempotent. Re-execution of the start tasks on compute nodes is possible. If the start task isn't idempotent, potential data loss can occur on the data disks.
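As an illustrative sketch only, the body of an idempotent Linux start task script might look like the following; the device name (`/dev/sdc`) and mount point are assumptions that depend on your VM size and disk configuration, and the script needs to run with elevated privileges.

```
# Partition and format the data disk only if it hasn't been prepared yet, then mount it.
# Safe to re-run: existing partitions and mounts are left untouched.
if ! blkid /dev/sdc1 >/dev/null 2>&1; then
    parted -s /dev/sdc mklabel gpt mkpart primary ext4 0% 100%
    mkfs.ext4 /dev/sdc1
fi
mkdir -p /mnt/data
mountpoint -q /mnt/data || mount /dev/sdc1 /mnt/data
```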
Review the following guidance related to connectivity in your Batch solutions.
### Network Security Groups (NSGs) and User Defined Routes (UDRs)
-When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you're closely following the guidelines regarding the use of the BatchNodeManagement.*region* service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended; don't use underlying Batch service IP addresses as they can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
+When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you're closely following the guidelines regarding the use of the BatchNodeManagement.*region* service tag, ports, protocols, and direction of the rule. Use of the service tag is highly recommended; don't use underlying Batch service IP addresses as they can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
For User Defined Routes (UDRs), it's recommended to use BatchNodeManagement.*region* [service tags](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) instead of Batch service IP addresses as they can change over time.
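As a hedged example of referencing the service tag rather than IP addresses, an outbound NSG rule might be created as follows; the NSG name, priority, and region are placeholders, and the ports and directions you actually need depend on your pool communication mode, so follow the linked virtual network guidance.

```
az network nsg rule create --resource-group myResourceGroup --nsg-name myBatchNsg \
    --name AllowBatchNodeManagementOutbound --priority 200 --direction Outbound --access Allow \
    --protocol Tcp --destination-port-ranges 443 \
    --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes BatchNodeManagement.eastus
```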
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 04/03/2023 Last updated : 02/29/2024 ms.devlang: csharp
This topic explains how to enable user-assigned managed identities on Batch pool
First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) in the same tenant as your Batch account. You can create the identity using the Azure portal, the Azure Command-Line Interface (Azure CLI), PowerShell, Azure Resource Manager, or the Azure REST API. This managed identity doesn't need to be in the same resource group or even in the same subscription.
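For example, with the Azure CLI (the resource group and identity names are placeholders):

```
az identity create --resource-group myResourceGroup --name myBatchPoolIdentity
```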
-> [!IMPORTANT]
+> [!TIP]
> A system-assigned managed identity created for a Batch account for [customer data encryption](batch-customer-managed-key.md) > cannot be used as a user-assigned managed identity on a Batch pool as described in this document. If you wish to use the same > managed identity on both the Batch account and Batch pool, then use a common user-assigned managed identity instead.
After you've created one or more user-assigned managed identities, you can creat
- [Use the Azure portal to create the Batch pool](#create-batch-pool-in-azure-portal) - [Use the Batch .NET management library to create the Batch pool](#create-batch-pool-with-net)
+> [!WARNING]
+> In-place updates of pool managed identities are not supported while the pool has active nodes. Existing compute nodes
+> will not be updated with changes. It is recommended to scale the pool down to zero compute nodes before modifying the
+> identity collection to ensure all VMs have the same set of identities assigned.
+ ### Create Batch pool in Azure portal To create a Batch pool with a user-assigned managed identity through the Azure portal:
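For instance, a sketch of scaling an existing pool down to zero nodes with the Azure CLI before changing its identity configuration (the pool ID is a placeholder):

```
az batch pool resize --pool-id myPool --target-dedicated-nodes 0 --target-low-priority-nodes 0
```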
var pool = await managementClient.Pool.CreateWithHttpMessagesAsync(
cancellationToken: default(CancellationToken)).ConfigureAwait(false); ```
-> [!IMPORTANT]
-> Managed identities are not updated on existing VMs once a pool has been started. It is recommended to scale the pool down to zero before modifying the identity collection to ensure all VMs
-> have the same set of identities assigned.
- ## Use user-assigned managed identities in Batch nodes Many Azure Batch functions that access other Azure resources directly on the compute nodes, such as Azure Storage or
batch Quick Run Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-run-python.md
Title: 'Quickstart: Use Python to create a pool and run a job' description: Follow this quickstart to run an app that uses the Azure Batch client library for Python to create and run Batch pools, nodes, jobs, and tasks. Previously updated : 04/13/2023 Last updated : 03/01/2024 ms.devlang: python
After you complete this quickstart, you understand the [key concepts of the Batc
- A Batch account with a linked Azure Storage account. You can create the accounts by using any of the following methods: [Azure CLI](quick-create-cli.md) | [Azure portal](quick-create-portal.md) | [Bicep](quick-create-bicep.md) | [ARM template](quick-create-template.md) | [Terraform](quick-create-terraform.md). -- [Python](https://python.org/downloads) version 3.6 or later, which includes the [pip](https://pip.pypa.io/en/stable/installing) package manager.
+- [Python](https://python.org/downloads) version 3.8 or later, which includes the [pip](https://pip.pypa.io/en/stable/installing) package manager.
## Run the app
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
Title: "Tutorial: Run a parallel workload using the Python API"
description: Learn how to process media files in parallel using ffmpeg in Azure Batch with the Batch Python client library. ms.devlang: python Previously updated : 05/25/2023 Last updated : 03/01/2024
In this tutorial, you convert MP4 media files to MP3 format, in parallel, by usi
## Prerequisites
-* [Python version 3.7 or later](https://www.python.org/downloads/)
+* [Python version 3.8 or later](https://www.python.org/downloads/)
* [pip package manager](https://pip.pypa.io/en/stable/installation/)
batch Tutorial Run Python Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-run-python-batch-azure-data-factory.md
Title: 'Tutorial: Run a Batch job through Azure Data Factory'
description: Learn how to use Batch Explorer, Azure Storage Explorer, and a Python script to run a Batch workload through an Azure Data Factory pipeline. ms.devlang: python Previously updated : 04/20/2023 Last updated : 03/01/2024
In this tutorial, you learn how to:
- A Data Factory instance. To create the data factory, follow the instructions in [Create a data factory](/azure/data-factory/quickstart-create-data-factory-portal#create-a-data-factory). - [Batch Explorer](https://azure.github.io/BatchExplorer) downloaded and installed. - [Storage Explorer](https://azure.microsoft.com/products/storage/storage-explorer) downloaded and installed.-- [Python 3.7 or above](https://www.python.org/downloads), with the [azure-storage-blob](https://pypi.org/project/azure-storage-blob) package installed by using `pip`.
+- [Python 3.8 or above](https://www.python.org/downloads), with the [azure-storage-blob](https://pypi.org/project/azure-storage-blob) package installed by using `pip`.
- The [iris.csv input dataset](https://github.com/Azure-Samples/batch-adf-pipeline-tutorial/blob/master/iris.csv) downloaded from GitHub. ## Use Batch Explorer to create a Batch pool and nodes
communications-gateway Configure Test Customer Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/configure-test-customer-teams-direct-routing.md
Previously updated : 10/09/2023 Last updated : 01/08/2024 #CustomerIntent: As someone deploying Azure Communications Gateway, I want to test my deployment so that I can be sure that calls work.
You must be able to sign in to the Microsoft 365 admin center for your test cust
## Choose a DNS subdomain label to use to identify the customer
-Azure Communications Gateway's per-region domain names might be as follows, where the `<deployment_id>` subdomain is autogenerated and unique to the deployment:
+Azure Communications Gateway has per-region domain names. You need to set up subdomains of these domain names for your test customer. Microsoft Phone System and Azure Communications Gateway use this subdomain to match calls to tenants.
-* `r1.<deployment_id>.commsgw.azure.com`
-* `r2.<deployment_id>.commsgw.azure.com`
+1. Choose a DNS label to identify the test customer.
+ * The label can be up to 10 characters in length and can only contain letters, numbers, underscores, and dashes.
+ * You must not use wildcard subdomains or subdomains with multiple labels.
+ * For example, you could allocate the label `test`.
+1. Use this label to create a subdomain of each per-region domain name for your Azure Communications Gateway.
+1. Make a note of the label you choose and the corresponding subdomains.
-Choose a DNS label to identify the test customer. The label can be up to 10 characters in length and can only contain letters, numbers, underscores, and dashes. You must not use wildcard subdomains or subdomains with multiple labels. For example, you could allocate the label `test`.
+> [!TIP]
+> To find your deployment's per-region domain names:
+> 1. Sign in to the [Azure portal](https://azure.microsoft.com/).
+> 1. Search for your Communications Gateway resource and select it.
+> 1. Check that you're on the **Overview** of your Azure Communications Gateway resource.
+> 1. Select **Properties**.
+> 1. In each **Service Location** section, find the **Hostname** field.
-You use this label to create a subdomain of each per-region domain name for your Azure Communications Gateway. Microsoft Phone System and Azure Communications Gateway use this subdomain to match calls to tenants.
+For example, your per-region domain names might be as follows, where the `<deployment_id>` subdomain is autogenerated and unique to the deployment:
-For example, the `test` label combined with the per-region domain names creates the following deployment-specific domain names:
+* `r1.<deployment_id>.commsgw.azure.com`
+* `r2.<deployment_id>.commsgw.azure.com`
+
+If you allocate the label `test`, this label combined with the per-region domain names creates the following domain names for your test customer:
* `test.r1.<deployment_id>.commsgw.azure.com` * `test.r2.<deployment_id>.commsgw.azure.com`
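As an illustration of the rules above, here's a small sketch (illustration only; the regex reflects the stated constraints, and the domain names are placeholders rather than real deployment values) that checks a label and builds the customer-specific domain names:

```python
import re

# Constraints stated above: at most 10 characters; letters, numbers,
# underscores, and dashes only; a single label (no dots, no wildcards).
LABEL_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,10}$")

def customer_domains(label, per_region_domains):
    """Combine a customer label with each per-region domain name."""
    if not LABEL_PATTERN.match(label):
        raise ValueError(f"invalid customer label: {label!r}")
    return [f"{label}.{domain}" for domain in per_region_domains]

# Illustrative values only; the real names include your deployment's autogenerated ID.
regions = ["r1.deployment-id.commsgw.azure.com", "r2.deployment-id.commsgw.azure.com"]
print(customer_domains("test", regions))
```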
-Make a note of the label you choose and the corresponding subdomains.
++
+> [!TIP]
+> Lab deployments have one per-region domain name. Your test customer therefore has only one customer-specific per-region domain name.
## Start registering the subdomains in the customer tenant and get DNS TXT values
To route calls to a customer tenant, the customer tenant must be configured with
1. Register the first customer-specific per-region domain name (for example `test.r1.<deployment_id>.commsgw.azure.com`). 1. Start the verification process using TXT records. 1. Note the TXT value that Microsoft 365 provides.
-1. Repeat the previous step for the second customer-specific per-region domain name.
+1. (Production deployments only) Repeat the previous step for the second customer-specific per-region domain name.
> [!IMPORTANT] > Don't complete the verification process yet. You must carry out [Use Azure Communications Gateway's Provisioning API to configure the customer and generate DNS records](#use-azure-communications-gateways-provisioning-api-to-configure-the-customer-and-generate-dns-records) first.
When you have used Azure Communications Gateway to generate the DNS records for
1. Sign into the Microsoft 365 admin center for the customer tenant as a Global Administrator. 1. Select **Settings** > **Domains**.
-1. Finish verifying the two customer-specific per-region domain names by following [Add a subdomain to the customer tenant and verify it](/microsoftteams/direct-routing-sbc-multiple-tenants#add-a-subdomain-to-the-customer-tenant-and-verify-it).
+1. Finish verifying the customer-specific per-region domain names by following [Add a subdomain to the customer tenant and verify it](/microsoftteams/direct-routing-sbc-multiple-tenants#add-a-subdomain-to-the-customer-tenant-and-verify-it).
## Configure the customer tenant's call routing to use Azure Communications Gateway
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
You must [deploy Azure Communications Gateway](deploy.md).
You must have access to a user account with the Microsoft Entra Global Administrator role.
-You must allocate six "service verification" test numbers for each of Operator Connect and Teams Phone Mobile. These numbers are used by the Operator Connect and Teams Phone Mobile programs for continuous call testing.
-
+You must allocate "service verification" test numbers. These numbers are used by the Operator Connect and Teams Phone Mobile programs for continuous call testing. Production deployments need six numbers for each service. Lab deployments need three numbers for each service.
- If you selected the service you're setting up as part of deploying Azure Communications Gateway, you've allocated numbers for the service already. - Otherwise, choose the phone numbers now (in E.164 format and including the country code) and names to identify them. We recommend names of the form OC1 and OC2 (for Operator Connect) and TPM1 and TPM2 (for Teams Phone Mobile).
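As a rough illustration of the E.164 format mentioned above (the regex is a common approximation, not an official rule from the Operator Connect or Teams Phone Mobile programs, and the numbers are fictional), you could sanity-check your chosen numbers like this:

```python
import re

# A common approximation of E.164: a leading '+', a non-zero first digit, at most 15 digits.
E164_PATTERN = re.compile(r"^\+[1-9]\d{1,14}$")

# Illustrative service verification numbers only, named as recommended above.
service_verification_numbers = {
    "OC1": "+14255550100",
    "OC2": "+14255550101",
}

for name, number in service_verification_numbers.items():
    status = "valid" if E164_PATTERN.match(number) else "invalid"
    print(f"{name}: {number} is {status} E.164")
```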
communications-gateway Connect Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-teams-direct-routing.md
Previously updated : 11/22/2023 Last updated : 01/08/2024 - template-how-to-pattern
Microsoft Teams only sends traffic to domains that you confirm that you own. You
1. Select your Communications Gateway resource. Check that you're on the **Overview** of your Azure Communications Gateway resource. 1. Select **Properties**. 1. Find the field named **Domain**. This name is your deployment's _base domain name_.
-1. In each **Service Location** section, find the **Hostname** field. This field provides the _per-region domain name_. Your deployment has two service regions and therefore two per-region domain names.
-1. Note down the base domain name and the per-region domain names. You'll need these values in the next steps.
+1. In each **Service Location** section, find the **Hostname** field. This field provides the _per-region domain name_.
+ - A production deployment has two service regions and therefore two per-region domain names.
+ - A lab deployment has one service region and therefore one per-region domain name.
+1. Note down the base domain name and the per-region domain name(s). You'll need these values in the next steps.
## Register the base domain name for Azure Communications Gateway in your tenant
To activate the base domain in Microsoft 365, you must have at least one user or
## Connect your tenant to Azure Communications Gateway
-You most configure your Microsoft 365 tenant with two SIP trunks to Azure Communications Gateway. Each trunk connects to one of the per-region domain names that you found in [Find your Azure Communication Gateway's domain names](#find-your-azure-communication-gateways-domain-names).
+You must configure your Microsoft 365 tenant with SIP trunks to Azure Communications Gateway. Each trunk connects to one of the per-region domain names that you found in [Find your Azure Communication Gateway's domain names](#find-your-azure-communication-gateways-domain-names).
-Follow [Connect your Session Border Controller (SBC) to Direct Routing](/microsoftteams/direct-routing-connect-the-sbc), using the following configuration settings.
+Use [Connect your Session Border Controller (SBC) to Direct Routing](/microsoftteams/direct-routing-connect-the-sbc) and the following configuration settings to set up the trunks.
+
+- For a production deployment, set up two trunks.
+- For a lab deployment, set up one trunk.
| Teams Admin Center setting | PowerShell parameter | Value to use (Admin Center / PowerShell) | | -- | -- | |
communications-gateway Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connectivity.md
Previously updated : 11/20/2023 Last updated : 01/08/2024 #CustomerIntent: As someone planning a deployment, I want to learn about my options for connectivity, so that I can start deploying
The following table lists all the available connection types and whether they're
|||||| | MAPS Voice |✅ |✅|✅|- Best media quality because of prioritization with Microsoft network<br>- No extra costs<br>- See [Internet peering for Peering Service Voice walkthrough](../internet-peering/walkthrough-communications-services-partner.md)| |ExpressRoute Microsoft Peering |✅|✅|✅|- Easy to deploy<br>- Extra cost<br>- Consult with your onboarding team and ensure that it's available in your region<br>- See [Using ExpressRoute for Microsoft PSTN services](/azure/expressroute/using-expressroute-for-microsoft-pstn)|
-|Public internet |❌|✅|✅|- No extra setup<br>- Not recommended for production|
+|Public internet |⚠️ Lab deployments only|✅|✅|- No extra setup<br>- Where available, not recommended for production |
-Set up your network as in the following diagram and configure it in accordance with any network connectivity specifications for your chosen communications services. Your network must have two sites with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
+> [!NOTE]
+> The Operator Connect and Teams Phone Mobile programs do not allow production deployments to use the public internet.
+
+Set up your network as in the following diagram and configure it in accordance with any network connectivity specifications for your chosen communications services. For production deployments, your network must have two sites with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
:::image type="content" source="media/azure-communications-gateway-network.svg" alt-text="Network diagram showing Azure Communications Gateway deployed into two Azure regions within one Azure Geography. The Azure Communications Gateway resource in each region connects to a communications service and both operator sites. Azure Communications Gateway uses MAPS or Express Route as its peering service between Azure and an operators network." lightbox="media/azure-communications-gateway-network.svg":::
+Lab deployments have one Azure service region and must connect to one site in your network.
+ ## IP addresses and domain names Azure Communications Gateway (ACG) deployments require multiple IP addresses and fully qualified domain names (FQDNs). The following diagram and table describe the IP addresses and FQDNs that you might need to know about.
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
Previously updated : 10/09/2023 Last updated : 01/08/2024 # Deploy Azure Communications Gateway
You must have completed [Prepare to deploy Azure Communications Gateway](prepare
|The name of the Azure subscription to use to create an Azure Communications Gateway resource. You must use the same subscription for all resources in your Azure Communications Gateway deployment. |**Project details: Subscription**| |The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**| |The name for the deployment. This name can contain alphanumeric characters and `-`. It must be 3-24 characters long. |**Instance details: Name**|
- |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region**
+ |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region** |
+ |The type of deployment. Choose from **Standard** (for production) or **Lab**. |**Instance details: SKU** |
|The voice codecs to use between Azure Communications Gateway and your network. We recommend that you only specify any codecs if you have a strong reason to restrict codecs (for example, licensing of specific codecs) and you can't configure your network or endpoints not to offer specific codecs. Restricting codecs can reduce the overall voice quality due to lower-fidelity codecs being selected. |**Call Handling: Supported codecs**| |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Routing Service Provider (US only; only for Operator Connect or Teams Phone Mobile). |**Call Handling: Emergency call handling**| |A comma-separated list of dial strings used for emergency calls. For Microsoft Teams, specify dial strings as the standard emergency number (for example `999`). For Zoom, specify dial strings in the format `+<country-code><emergency-number>` (for example `+44999`).|**Call Handling: Emergency dial strings**|
You must have completed [Prepare to deploy Azure Communications Gateway](prepare
Collect all of the values in the following table for both service regions in which you want to deploy Azure Communications Gateway.
+> [!NOTE]
+> Lab deployments have one Azure region and connect to one site in your network.
+ |**Value**|**Field name(s) in Azure portal**| |||
- |The Azure regions to use for call traffic. |**Service Region One/Two: Region**|
- |The IPv4 address used by Azure Communications Gateway to contact your network from this region. |**Service Region One/Two: Operator IP address**|
+ |The Azure region to use for call traffic. |**Service Region One/Two: Region**|
+ |The IPv4 address belonging to your network that Azure Communications Gateway should use to contact your network from this region. |**Service Region One/Two: Operator IP address**|
|The set of IP addresses/ranges that are permitted as sources for signaling traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Signaling Source IP Addresses/CIDR Ranges**| |The set of IP addresses/ranges that are permitted as sources for media traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Media Source IP Address/CIDR Ranges**|
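These fields accept a single IPv4 address, a CIDR range, or a comma-separated list of both. Here's a small sketch (using Python's standard `ipaddress` module; the sample values are placeholders) that checks such a list before you enter it in the portal:

```python
import ipaddress

def parse_allowed_sources(value):
    """Parse a comma-separated list of IPv4 addresses and/or CIDR ranges."""
    networks = []
    for item in value.split(","):
        # A bare address such as 198.51.100.7 is treated as a single-host /32 network.
        networks.append(ipaddress.IPv4Network(item.strip()))
    return networks

# Placeholder values in the same format the portal fields accept.
print(parse_allowed_sources("192.0.2.0/24, 198.51.100.7"))
```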
communications-gateway Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/get-started.md
This article summarizes the steps and documentation that you need.
Read the following articles to learn about Azure Communications Gateway. - [Your network and Azure Communications Gateway](role-in-network.md), to learn how Azure Communications Gateway fits into your network.-- [Onboarding with Included Benefits for Azure Communications Gateway](onboarding.md), to learn about onboarding to Operator Connect or Teams Phone Mobile and the support we can provide.
+- [Onboarding with Included Benefits for Azure Communications Gateway](onboarding.md), to learn about onboarding to your chosen communications services and the support we can provide.
+- [Lab Azure Communications Gateway overview](lab.md), to learn about when and how you could use a lab deployment.
- [Connectivity for Azure Communications Gateway](connectivity.md) and [Reliability in Azure Communications Gateway](reliability-communications-gateway.md), to create a network design that includes Azure Communications Gateway. - [Overview of security for Azure Communications Gateway](security.md), to learn about how Azure Communications Gateway keeps customer data and your network secure. - [Provisioning API (preview) for Azure Communications Gateway](provisioning-platform.md), to learn about when you might need or want to integrate with the Provisioning API.
Use the following procedures to deploy Azure Communications Gateway and connect
1. [Deploy Azure Communications Gateway](deploy.md) describes how to create your Azure Communications Gateway resource in the Azure portal and connect it to your networks. 1. [Integrate with Azure Communications Gateway's Provisioning API (preview)](integrate-with-provisioning-api.md) describes how to integrate with the Provisioning API. Integrating with the API is: - Required for Microsoft Teams Direct Routing and Zoom Phone Cloud Peering.
- - Recommended for Operator Connect and Teams Phone Mobile because it enables flow-through API-based provisioning of your customers both on Azure Communications Gateway and in the Operator Connect environment. This enables additional functionality to be provided by Azure Communications Gateway, such as injecting custom SIP headers, while also fulfilling the requirement from the the Operator Connect and Teams Phone Mobile programs for you to use APIs for provisioning customers in the Operator Connect environment. For more information, see [Provisioning and Operator Connect APIs](interoperability-operator-connect.md#provisioning-and-operator-connect-apis).
+ - Recommended for Operator Connect and Teams Phone Mobile because it enables flow-through API-based provisioning of your customers both on Azure Communications Gateway and in the Operator Connect environment. This enables additional functionality to be provided by Azure Communications Gateway, such as injecting custom SIP headers, while also fulfilling the requirement from the Operator Connect and Teams Phone Mobile programs for you to use APIs for provisioning customers in the Operator Connect environment. For more information, see [Provisioning and Operator Connect APIs](interoperability-operator-connect.md#provisioning-and-operator-connect-apis).
## Integrate with your chosen communications services
communications-gateway Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/lab.md
+
+ Title: Lab deployments for Azure Communications Gateway
+description: Learn about the benefits of lab deployments for Azure Communications Gateway
++++ Last updated : 01/08/2024+
+#CustomerIntent: As someone planning a deployment, I want to know about lab deployments so that I can decide if I want one
++
+# Lab Azure Communications Gateway overview
++
+You can experiment with and test Azure Communications Gateway by connecting your preproduction networks to a dedicated Azure Communications Gateway _lab deployment_. A lab deployment is separate from the deployment for your production traffic. We call the deployment type that you use for production traffic a _production deployment_ or _standard deployment_.
+
+You must have deployed a standard deployment or be about to deploy one. You can't use a lab deployment as a standalone Azure Communications Gateway deployment.
+
+## Uses of lab deployments
+
+Lab deployments allow you to make changes and test them without affecting your production deployment. For example, you can:
+
+- Test configuration changes to Azure Communications Gateway.
+- Test new Azure Communications Gateway features and services (for example, configuring Microsoft Teams Direct Routing or Zoom Phone Cloud Peering).
+- Test changes in your preproduction network, before rolling them out to your production networks.
+
+Lab deployments support all the communications services supported by production deployments.
+
+## Considerations for lab deployments
+
+Lab deployments:
+
+- Use a single Azure region, which means there's no geographic redundancy.
+- Don't have an availability service-level agreement (SLA).
+- Are limited to 200 users.
+
+For Operator Connect and Teams Phone Mobile, lab deployments connect to the same Microsoft Entra tenant as production deployments. Microsoft Teams configuration for your tenant shows configuration for your lab deployments and production deployments together.
+
+You can't automatically apply the same configuration to lab deployments and production deployments. You need to configure each deployment separately.
++
+## Setting up and using a lab deployment
+
+You plan for, order, and deploy lab deployments in the same way as production deployments.
+
+We recommend the following approach.
+
+1. Integrate your preproduction network with the lab deployment and your chosen communications services.
+1. Carry out the acceptance test plan (ATP) and any automated testing for your communications services in your preproduction environment.
+1. Integrate your production network with a production deployment and your communications services, by applying the working configuration from your preproduction environment to your production environment.
+1. Optionally, carry out the acceptance test plan in your production environment.
+1. Carry out any automated tests and network failover tests in your production environment.
+
+You can separate access to lab deployments and production deployments by using Microsoft Entra ID to assign different permissions to the resources.
+
+## Related content
+
+- [Learn more about planning a deployment](get-started.md#learn-about-and-plan-for-azure-communications-gateway)
+- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md)
communications-gateway Plan And Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md
Previously updated : 10/27/2023 Last updated : 01/08/2024 # Plan and manage costs for Azure Communications Gateway This article describes how you're charged for Azure Communications Gateway and how you can plan for and manage these costs.
-After you've started using Azure Communications Gateway, you can use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act.
+After you start using Azure Communications Gateway, you can use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act.
-Costs for Azure Communications Gateway are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Communications Gateway, your Azure bill includes all services and resources used in your Azure subscription, including third-party Azure services.
+Costs for Azure Communications Gateway are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Communications Gateway, your Azure bill includes all services and resources used in your Azure subscription, including non-Microsoft Azure services.
## Prerequisites
Azure Communications Gateway runs on Azure infrastructure that accrues costs whe
### How you're charged for Azure Communications Gateway
-When you deploy Azure Communications Gateway, you're charged for how you use the voice features of the product. The charges are based on the number of users assigned to the platform by a series of Azure Communications Gateway meters. The meters include:
+When you deploy Azure Communications Gateway, you're charged for how you use the voice features of the product. The charges are based on a series of Azure Communications Gateway meters and the number of users assigned to the platform.
+
+The meters for production deployments include:
- A "Fixed Network Service Fee" or a "Mobile Network Service Fee" meter. - This meter is charged hourly and includes the use of 999 users for testing and early adoption.
When you deploy Azure Communications Gateway, you're charged for how you use the
- If your deployment includes fixed networks and mobile networks, you're charged the Mobile Network Service Fee. - A series of tiered per-user meters that charge based on the number of users that are assigned to the deployment. These per-user fees are based on the maximum number of users during your billing cycle, excluding the 999 users included in the service availability fee.
-For example, if you have 28,000 users assigned to the deployment each month, you're charged for:
-* The service availability fee for each hour in the month
-* 24,001 users in the 1000-25000 tier
-* 3000 users in the 25000+ tier
+For example, if you have 28,000 users assigned to a production deployment each month, you're charged for:
+- The service availability fee for each hour in the month
+- 24,001 users in the 1000-25000 tier
+- 3000 users in the 25000+ tier
+
+Lab deployments are charged on a "Lab - Fixed or Mobile Fee" service availability meter. The meter includes 200 users.
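To make the tiering arithmetic concrete, here's a small sketch that reproduces the 28,000-user example above; the tier boundaries are taken from that example, and no pricing is implied:

```python
def production_user_breakdown(users):
    """Split assigned users across the included allowance and the per-user tiers."""
    included = min(users, 999)                          # covered by the service availability fee
    tier_1000_25000 = max(0, min(users, 25000) - 999)   # users 1,000 through 25,000
    tier_25000_plus = max(0, users - 25000)             # users 25,001 and above
    return {
        "included": included,
        "1000-25000 tier": tier_1000_25000,
        "25000+ tier": tier_25000_plus,
    }

# Reproduces the example above: 999 included, 24,001 in the 1000-25000 tier, 3,000 in the 25000+ tier.
print(production_user_breakdown(28000))
```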
> [!NOTE] > A Microsoft Teams Direct Routing or Zoom Phone Cloud Peering user is any telephone number configured with Direct Routing service or Zoom service on Azure Communications Gateway. Billing for the user starts as soon as you have configured the number.
At the end of your billing cycle, the charges for each meter are summed. Your bi
> [!TIP] > If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed.
-If you've arranged any custom work with Microsoft, you might be charged an extra fee for that work. That fee isn't included in these meters.
+If you arrange any custom work with Microsoft, you might be charged an extra fee for that work. That fee isn't included in these meters.
If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
If you have multiple Azure Communications Gateway deployments and you move users
### Using Azure Prepayment with Azure Communications Gateway
-You can pay for Azure Communications Gateway charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
+You can pay for Azure Communications Gateway charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for non-Microsoft products and services including those from the Azure Marketplace.
## Monitor costs
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
Previously updated : 11/06/2023 Last updated : 01/08/2024 # Prepare to deploy Azure Communications Gateway
The following sections describe the information you need to collect and the deci
[!INCLUDE [communications-gateway-deployment-prerequisites](includes/communications-gateway-deployment-prerequisites.md)]
+If you want to set up a lab deployment, you must have deployed a standard deployment or be about to deploy one. You can't use a lab deployment as a standalone Azure Communications Gateway deployment.
+ ## Arrange onboarding You need a Microsoft onboarding team to deploy Azure Communications Gateway. Azure Communications Gateway includes an onboarding program called [Included Benefits](onboarding.md). If you're not eligible for Included Benefits or you require more support, discuss your requirements with your Microsoft sales representative.
We recommend that you use an existing Microsoft Entra tenant for Azure Communica
The Operator Connect and Teams Phone Mobile environments inherit identities and configuration permissions from your Microsoft Entra tenant through a Microsoft application called Project Synergy. You must add this application to your Microsoft Entra tenant as part of [Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile](connect-operator-connect.md) (if your tenant does not already contain this application).
+> [!IMPORTANT]
+> For Operator Connect and Teams Phone Mobile, production deployments and lab deployments must connect to the same Microsoft Entra tenant. Microsoft Teams configuration for your tenant shows configuration for your lab deployments and production deployments together.
++ ## Get access to Azure Communications Gateway for your Azure subscription
-Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details.
+Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article:
-Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step.
+1. Contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details.
+2. Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step.
## Create a network design
If you plan to route emergency calls through Azure Communications Gateway, read
- [Operator Connect and Teams Phone Mobile](emergency-calls-operator-connect.md) - [Zoom Phone Cloud Peering](emergency-calls-zoom.md)
-## Configure Microsoft Azure Peering Service Voice or ExpressRoute
+## Connect your network to Azure
-Connect your network to Azure:
+Configure connections between your network and Azure:
- To configure Microsoft Azure Peering Service Voice (sometimes called MAPS Voice), follow the instructions in [Internet peering for Peering Service Voice walkthrough](../internet-peering/walkthrough-communications-services-partner.md). - To configure ExpressRoute Microsoft Peering, follow the instructions in [Tutorial: Configure peering for ExpressRoute circuit](../../articles/expressroute/expressroute-howto-routing-portal-resource-manager.md).
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
- subject-reliability - references_regions Previously updated : 11/06/2023 Last updated : 01/08/2024 # Reliability in Azure Communications Gateway
Azure Communications Gateway ensures your service is reliable by using Azure red
## Azure Communications Gateway's redundancy model
-Each Azure Communications Gateway deployment consists of three separate regions: a Management Region and two Service Regions. This article describes the two different region types and their distinct redundancy models. It covers both regional reliability with availability zones and cross-region reliability with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+Production Azure Communications Gateway deployments (also called standard deployments) consist of three separate regions: a _management region_ and two _service regions_. Lab deployments consist of one management region and one service region.
+
+This article describes the two different region types and their distinct redundancy models. It covers both regional reliability with availability zones and cross-region reliability with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
:::image type="complex" source="media/reliability/azure-communications-gateway-management-and-service-regions.png" alt-text="Diagram of two service regions, a management region and two operator sites."::: Diagram showing two operator sites and the Azure regions for Azure Communications Gateway. Azure Communications Gateway has two service regions and one management region. The service regions connect to the management region and to the operator sites. The management region can be colocated with a service region.
Each Azure Communications Gateway deployment consists of three separate regions:
## Service regions
-Service regions contain the voice and API infrastructure used for handling traffic between your network and your chosen communications services. Each instance of Azure Communications Gateway consists of two service regions that are deployed in an active-active mode (as required by the Operator Connect and Teams Phone Mobile programs). Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level.
+Service regions contain the voice and API infrastructure used for handling traffic between your network and your chosen communications services.
+
+Production Azure Communications Gateway deployments have two service regions that are deployed in an active-active mode (as required by the Operator Connect and Teams Phone Mobile programs). Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level.
The service regions also contain the infrastructure for Azure Communications Gateway's [Provisioning API](provisioning-platform.md). > [!TIP]
-> You must always have two service regions, even if one of the service regions chosen is in a single-region Azure Geography (for example, Qatar). If you choose a single-region Azure Geography, choose a second Azure region in a different Azure Geography.
+> Production deployments must always have two service regions, even if one of the service regions chosen is in a single-region Azure Geography (for example, Qatar). If you choose a single-region Azure Geography, choose a second Azure region in a different Azure Geography.
-These service regions are identical in operation and provide resiliency to both Zone and Regional failures. Each service region can carry 100% of the traffic using the Azure Communications Gateway instance. As such, end users should still be able to make and receive calls successfully during any Zone or Regional downtime.
+The service regions are identical in operation and provide resiliency to both Zone and Regional failures. Each service region can carry 100% of the traffic using the Azure Communications Gateway instance. As such, end-users should still be able to make and receive calls successfully during any Zone or Regional downtime.
+
+Lab deployments have one service region.
### Call routing requirements Azure Communications Gateway offers a 'successful redial' redundancy model: calls handled by failing peers are terminated, but new calls are routed to healthy peers. This model mirrors the redundancy model provided by Microsoft Teams.
-We expect your network to have two geographically redundant sites. Each site should be paired with an Azure Communications Gateway region. The redundancy model relies on cross-connectivity between your network and Azure Communications Gateway service regions.
+For production deployments, we expect your network to have two geographically redundant sites. Each site should be paired with an Azure Communications Gateway region. The redundancy model relies on cross-connectivity between your network and Azure Communications Gateway service regions.
:::image type="complex" source="media/reliability/azure-communications-gateway-service-region-redundancy.png" alt-text="Diagram of two operator sites and two service regions. Both service regions connect to both sites, with primary and secondary routes."::: Diagram of two operator sites (operator site A and operator site B) and two service regions (service region A and service region B). Operator site A has a primary route to service region A and a secondary route to service region B. Operator site B has a primary route to service region B and a secondary route to service region A. :::image-end:::
+Lab deployments must connect to one site in your network.
+ Each Azure Communications Gateway service region provides an SRV record. This record contains all the SIP peers providing SBC functionality (for routing calls to communications services) within the region. If your Azure Communications Gateway includes Mobile Control Point (MCP), each service region provides an extra SRV record for MCP. Each per-region MCP record contains MCP within the region at top priority and MCP in the other region at a lower priority.
When your network routes calls to Azure Communications Gateway's SIP peers for S
If your Azure Communications Gateway deployment includes integrated Mobile Control Point (MCP), your network must do as follows for MCP: > [!div class="checklist"]
-> - Detect when MCP in a region is unavailable, mark the targets for that region's SRV record as unavailable, and retry periodically to determine when the region is available again. MCP does not respond to SIP OPTIONS.
+> - Detect when MCP in a region is unavailable, mark the targets for that region's SRV record as unavailable, and retry periodically to determine when the region is available again. MCP doesn't respond to SIP OPTIONS.
> - Handle 5xx responses from MCP according to your organization's policy. For example, you could retry the request, or you could allow the call to continue without passing through Azure Communications Gateway and into Microsoft Phone System. The details of this routing behavior are specific to your network. You must agree them with your onboarding team during your integration project.
During a zone-wide outage, calls handled by the affected zone are terminated, wi
## Disaster recovery: fallback to other regions - [!INCLUDE [introduction to disaster recovery](../reliability/includes/reliability-disaster-recovery-description-include.md)] - This section describes the behavior of Azure Communications Gateway during a region-wide outage. ### Disaster recovery: cross-region failover for service regions
The SBC function in Azure Communications Gateway provides OPTIONS polling to all
Provisioning API clients contact Azure Communications Gateway using the base domain name for your deployment. The DNS record for this domain has a time-to-live (TTL) of 60 seconds. When a region fails, Azure updates the DNS record to refer to another region, so clients making a new DNS lookup receive the details of the new region. We recommend ensuring that clients can make a new DNS lookup and retry a request 60 seconds after a timeout or a 5xx response.
+> [!TIP]
+> Lab deployments don't offer cross-region failover (because they have only one service region).
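As an illustration of the retry guidance above, one way a Provisioning API client could honor the 60-second DNS TTL is sketched below (an assumption-laden sketch using the `requests` library and a placeholder URL, not a prescribed client implementation):

```python
import time
import requests

def call_provisioning_api(url, retry_delay_s=60):
    """Call the API once; on a timeout or 5xx response, wait out the DNS TTL and retry once."""
    try:
        response = requests.get(url, timeout=10)
        if response.status_code < 500:
            return response
    except requests.exceptions.RequestException:
        pass
    # Waiting 60 seconds allows a fresh DNS lookup to pick up the record for the healthy region.
    time.sleep(retry_delay_s)
    return requests.get(url, timeout=10)

# Placeholder URL; use your deployment's base domain name.
# call_provisioning_api("https://<base-domain-name>/...")
```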
+ ### Disaster recovery: cross-region failover for management regions Voice traffic and provisioning through the Number Management Portal are unaffected by failures in the management region, because the corresponding Azure resources are hosted in service regions. Users of the Number Management Portal might need to sign in again.
Monitoring services might be temporarily unavailable until service has been rest
## Choosing management and service regions
-A single deployment of Azure Communications Gateway is designed to handle your traffic within a geographic area. Deploy both service regions within the same geographic area (for example North America). This model ensures that latency on voice calls remains within the limits required by the Operator Connect and Teams Phone Mobile programs.
+A single deployment of Azure Communications Gateway is designed to handle your traffic within a geographic area. Deploy both service regions in a production deployment within the same geographic area (for example North America). This model ensures that latency on voice calls remains within the limits required by the Operator Connect and Teams Phone Mobile programs.
Consider the following points when you choose your service region locations:
communications-gateway Request Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md
When you raise a request, we'll investigate. If we think the problem is caused b
This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). + ## Prerequisites We strongly recommend a Microsoft support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview).
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
Previously updated : 02/16/2024 Last updated : 03/01/2024 # What's new in Azure Communications Gateway? This article covers new features and improvements for Azure Communications Gateway.
+## March 2024
+
+### Lab deployments
+
+From March 2024, you can set up a dedicated lab deployment of Azure Communications Gateway. Lab deployments allow you to make changes and test them without affecting your production deployment. For example, you can:
+
+- Test configuration changes to Azure Communications Gateway.
+- Test new Azure Communications Gateway features and services (for example, configuring Microsoft Teams Direct Routing or Zoom Phone Cloud Peering).
+- Test changes in your preproduction network, before rolling them out to your production networks.
+
+You plan for, order, and deploy lab deployments in the same way as production deployments. You must have deployed a standard deployment or be about to deploy one. You can't use a lab deployment as a standalone Azure Communications Gateway deployment.
+
+For more information, see [Lab Azure Communications Gateway overview](lab.md).
+ ## February 2024 ### Flow-through provisioning for Operator Connect and Teams Phone Mobile
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp-+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md
ms.devlang: golang-+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
ms.devlang: java-+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
ms.devlang: javascript-+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python-+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Sdk Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-observability.md
Previously updated : 05/09/2023 Last updated : 02/27/2024
Distributed tracing is available in the following SDKs:
|SDK |Supported version |Notes | |-|||
-|.NET v3 SDK |[>= `3.33.0-preview`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview) |This feature is on by default if you're using a supported preview SDK version. You can disable tracing by setting `IsDistributedTracingEnabled = false` in `CosmosClientOptions`. |
+|.NET v3 SDK |[>= `3.36.0`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.36.0) |This feature is available in both preview and non-preview versions. For non-preview versions it's off by default. You can enable tracing by setting `DisableDistributedTracing = false` in `CosmosClientOptions.CosmosClientTelemetryOptions`. |
+|.NET v3 SDK preview |[>= `3.33.0-preview`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview) |This feature is available in both preview and non-preview versions. For preview versions it's on by default. You can disable tracing by setting `DisableDistributedTracing = true` in `CosmosClientOptions.CosmosClientTelemetryOptions`. |
|Java v4 SDK |[>= `4.43.0`](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.43.0) | | ## Trace attributes
If you've configured logs in your trace provider, you can automatically get [dia
### [.NET](#tab/dotnet)
-In addition to getting diagnostic logs for failed requests, point operations that take over 100 ms and query operations that take over 500 ms also generate diagnostics. You can configure the log level to control which diagnostics logs you receive.
+In addition to getting diagnostic logs for failed requests, you can configure latency thresholds that determine when diagnostics are collected for successful requests. The default values are 100 ms for point operations and 500 ms for non-point operations, and they can be adjusted through the client options.
+
+```csharp
+CosmosClientOptions options = new CosmosClientOptions()
+{
+ CosmosClientTelemetryOptions = new CosmosClientTelemetryOptions()
+ {
+ DisableDistributedTracing = false,
+ CosmosThresholdOptions = new CosmosThresholdOptions()
+ {
+ PointOperationLatencyThreshold = TimeSpan.FromMilliseconds(100),
+ NonPointOperationLatencyThreshold = TimeSpan.FromMilliseconds(500)
+ }
+ },
+};
+```
+
+You can configure the log level to control which diagnostics logs you receive.
|Log Level |Description | |-|| |Error | Logs for errors only. |
-|Warning | Logs for errors and high latency requests. |
+|Warning | Logs for errors and high latency requests based on configured thresholds. |
|Information | There are no specific information level logs. Logs in this level are the same as using Warning. | Depending on your application environment, there are different ways to configure the log level. Here's a sample configuration in `appSettings.json`:
cost-management-billing Limited Time Central Sweden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-central-sweden.md
Previously updated : 11/17/2023 Last updated : 03/01/2024
Save up to 50 percent compared to pay-as-you-go pricing when you purchase one-year [Azure Reserved Virtual Machine (VM) Instances](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3) for select Linux VMs in Sweden Central for a limited time. This offer is available between September 1, 2023 – February 29, 2024.
+> [!NOTE]
+> This limited-time offer expired on March 1, 2024. You can still purchase Azure Reserved VM Instances at regular discounted prices. For more information about reservation discount, see [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md).
+ ## Purchase the limited time offer To take advantage of this limited-time offer, [purchase](https://aka.ms/azure/pricing/SwedenCentral/Purchase1) a one-year term for Azure Reserved Virtual Machine Instances for qualified VM instances in the Sweden Central region.
Enterprise Agreement and Microsoft Customer Agreement billing readers can view a
These terms and conditions (hereinafter referred to as "terms") govern the limited time offer ("offer") provided by Microsoft to customers purchasing a one-year Azure Reserved VM Instance in Sweden Central between September 1, 2023 (12 AM Pacific Standard Time) – February 29, 2024 (11:59 PM Pacific Standard Time), for any of the following VM series: -- Dadsv5-- Dasv5-- Ddsv5-- Ddv5-- Dldsv5-- Dlsv5-- Dsv5-- Dv5-- Eadsv5-- Easv5-- Ebdsv5-- Ebsv5-- Edsv5-- Edv5-- Esv5-- Ev5-
-The offer provides them with a discount up to 50% compared to pay-as-you-go pricing. The savings doesn't include operating system costs. Actual savings may vary based on instance type or usage.
+- `Dadsv5`
+- `Dasv5`
+- `Ddsv5`
+- `Ddv5`
+- `Dldsv5`
+- `Dlsv5`
+- `Dsv5`
+- `Dv5`
+- `Eadsv5`
+- `Easv5`
+- `Ebdsv5`
+- `Ebsv5`
+- `Edsv5`
+- `Edv5`
+- `Esv5`
+- `Ev5`
+
+The offer provides them with a discount of up to 50% compared to pay-as-you-go pricing. The savings doesn't include operating system costs. Actual savings might vary based on instance type or usage.
**Eligibility** - The Offer is open to individuals who meet the following criteria:
The offer provides them with a discount up to 50% compared to pay-as-you-go pric
**Offer details** - Upon successful purchase and payment for the one-year Azure Reserved VM Instance in Sweden Central for one or more of the qualified VMs during the specified period, the discount applies automatically to the number of running virtual machines in Sweden Central that match the reservation scope and attributes. You don't need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the compute part of your VM usage. For more information about how to pay and save with an Azure Reserved VM Instance, see [Prepay for Azure virtual machines to save money](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3). -- Additional taxes may apply.-- Payment will be processed using the payment method on file for the selected subscriptions.
+- Other taxes might apply.
+- Payment is processed using the payment method on file for the selected subscriptions.
- Estimated savings are calculated based on your current on-demand rate. **Qualifying purchase** - To be eligible for the 50% discount, customers must make a purchase of the one-year Azure Reserved Virtual Machine Instances for one of the following qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024. -- Dadsv5-- Dasv5-- Ddsv5-- Ddv5-- Dldsv5-- Dlsv5-- Dsv5-- Dv5-- Eadsv5-- Easv5-- Ebdsv5-- Ebsv5-- Edsv5-- Edv5-- Esv5-- Ev5
+- `Dadsv5`
+- `Dasv5`
+- `Ddsv5`
+- `Ddv5`
+- `Dldsv5`
+- `Dlsv5`
+- `Dsv5`
+- `Dv5`
+- `Eadsv5`
+- `Easv5`
+- `Ebdsv5`
+- `Ebsv5`
+- `Edsv5`
+- `Edv5`
+- `Esv5`
+- `Ev5`
Instance size flexibility is available for these VMs. For more information about Instance Size Flexibility, see [Virtual machine size flexibility](../../virtual-machines/reserved-vm-instance-size-flexibility.md?source=azlto7).
Instance size flexibility is available for these VMs. For more information about
- The discount only applies to resources associated with subscriptions purchased through Enterprise, Cloud Solution Provider (CSP), Microsoft Customer Agreement and individual plans with pay-as-you-go rates. - A reservation discount is "use-it-or-lose-it." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours. - When you deallocate, delete, or scale the number of VMs, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.-- Stopped VMs are billed and continue to use reservation hours. Deallocate or delete VM resources or scale-in other VMs to use your available reservation hours with other workloads.
+- Stopped VMs are billed and continue to use reservation hours. To use your available reservation hours with other workloads, deallocate or delete VM resources or scale-in other VMs.
- For more information about how Azure Reserved VM Instance discounts are applied, see [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md?source=azlto4). **Exchanges and refunds** - The offer follows standard exchange and refund policies for reservations. For more information about exchanges and refunds, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md?source=azlto6).
Instance size flexibility is available for these VMs. For more information about
**Termination or modification** - Microsoft reserves the right to modify, suspend, or terminate the offer at any time without prior notice.
-If you have purchased the one-year Azure Reserved Virtual Machine Instances for the qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024 you'll continue to get the discount throughout the one-year term, even if the offer is canceled.
+If you purchased the one-year Azure Reserved Virtual Machine Instances for the qualified VMs in Sweden Central between September 1, 2023, and February 29, 2024, you'll continue to get the discount throughout the one-year term, even if the offer is canceled.
By participating in the offer, customers agree to be bound by these terms and the decisions of Microsoft. Microsoft reserves the right to disqualify any customer who violates these terms or engages in any fraudulent or harmful activities related to the offer.
cost-management-billing Limited Time Us West https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-us-west.md
Previously updated : 12/08/2023 Last updated : 03/01/2024
Save up to 50 percent compared to pay-as-you-go pricing when you purchase one-year [Azure Reserved Virtual Machine (VM) Instances](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json&source=azlto3) for `Dv3s` VMs in US West for a limited time. This offer is available between September 1, 2023 – February 29, 2024.
+> [!NOTE]
+> This limited-time offer expired on March 1, 2024. You can still purchase Azure Reserved VM Instances at regular discounted prices. For more information about reservation discount, see [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md).
+ ## Purchase the limited time offer To take advantage of this limited-time offer, [purchase](https://aka.ms/azure/pricing/USWest/Purchase) a one-year term for Azure Reserved Virtual Machine Instances for qualified `Dv3s` instances in the US West region.
data-lake-store Data Lake Store Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-access-control.md
- Title: Overview of access control in Data Lake Storage Gen1 | Microsoft Docs
-description: Learn about the basics of the access control model of Azure Data Lake Storage Gen1, which derives from HDFS.
---- Previously updated : 03/26/2018---
-# Access control in Azure Data Lake Storage Gen1
-
-Azure Data Lake Storage Gen1 implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. This article summarizes the basics of the access control model for Data Lake Storage Gen1.
-
-## Access control lists on files and folders
-
-There are two kinds of access control lists (ACLs), **Access ACLs** and **Default ACLs**.
-
-* **Access ACLs**: These control access to an object. Files and folders both have Access ACLs.
-
-* **Default ACLs**: A "template" of ACLs associated with a folder that determine the Access ACLs for any child items that are created under that folder. Files do not have Default ACLs.
--
-Both Access ACLs and Default ACLs have the same structure.
-
-> [!NOTE]
-> Changing the Default ACL on a parent does not affect the Access ACL or Default ACL of child items that already exist.
->
->
-
-## Permissions
-
-The permissions on a filesystem object are **Read**, **Write**, and **Execute**, and they can be used on files and folders as shown in the following table:
-
-| | File | Folder |
-||-|-|
-| **Read (R)** | Can read the contents of a file | Requires **Read** and **Execute** to list the contents of the folder|
-| **Write (W)** | Can write or append to a file | Requires **Write** and **Execute** to create child items in a folder |
-| **Execute (X)** | Does not mean anything in the context of Data Lake Storage Gen1 | Required to traverse the child items of a folder |
-
-### Short forms for permissions
-
-**RWX** is used to indicate **Read + Write + Execute**. A more condensed numeric form exists in which **Read=4**, **Write=2**, and **Execute=1**, the sum of which represents the permissions. Following are some examples.
-
-| Numeric form | Short form | What it means |
-|--|--|--|
-| 7 | `RWX` | Read + Write + Execute |
-| 5 | `R-X` | Read + Execute |
-| 4 | `R--` | Read |
-| 0 | `---` | No permissions |
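For illustration only, a tiny sketch showing how the numeric form maps to the short form described above:

```python
def short_form(numeric):
    """Convert a 0-7 permission value to the RWX short form (Read=4, Write=2, Execute=1)."""
    return (
        ("R" if numeric & 4 else "-")
        + ("W" if numeric & 2 else "-")
        + ("X" if numeric & 1 else "-")
    )

for value in (7, 5, 4, 0):
    print(value, short_form(value))   # 7 RWX, 5 R-X, 4 R--, 0 ---
```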
--
-### Permissions do not inherit
-
-In the POSIX-style model that's used by Data Lake Storage Gen1, permissions for an item are stored on the item itself. In other words, permissions for an item cannot be inherited from the parent items.
-
-## Common scenarios related to permissions
-
-Following are some common scenarios to help you understand which permissions are needed to perform certain operations on a Data Lake Storage Gen1 account.
-
-| Operation | Object | / | Seattle/ | Portland/ | Data.txt |
-|--|--|--|--|--|--|
-| Read | Data.txt | `--X` | `--X` | `--X` | `R--` |
-| Append to | Data.txt | `--X` | `--X` | `--X` | `-W-` |
-| Delete | Data.txt | `--X` | `--X` | `-WX` | `---` |
-| Create | Data.txt | `--X` | `--X` | `-WX` | `---` |
-| List | / | `R-X` | `---` | `---` | `---` |
-| List | /Seattle/ | `--X` | `R-X` | `---` | `---` |
-| List | /Seattle/Portland/ | `--X` | `--X` | `R-X` | `---` |
--
-> [!NOTE]
-> Write permissions on the file are not required to delete it as long as the previous two conditions are true.
->
->
--
-## Users and identities
-
-Every file and folder has distinct permissions for these identities:
-
-* The owning user
-* The owning group
-* Named users
-* Named groups
-* All other users
-
-The identities of users and groups are Microsoft Entra identities. So unless otherwise noted, a "user," in the context of Data Lake Storage Gen1, can either mean a Microsoft Entra user or a Microsoft Entra security group.
-
-### The super-user
-
-A super-user has the most rights of all the users in the Data Lake Storage Gen1 account. A super-user:
-
-* Has RWX Permissions to **all** files and folders.
-* Can change the permissions on any file or folder.
-* Can change the owning user or owning group of any file or folder.
-
-All users that are part of the **Owners** role for a Data Lake Storage Gen1 account are automatically super-users.
-
-### The owning user
-
-The user who created the item is automatically the owning user of the item. An owning user can:
-
-* Change the permissions of a file that they own.
-* Change the owning group of a file that they own, as long as they are also a member of the target group.
-
-> [!NOTE]
-> The owning user *cannot* change the owning user of a file or folder. Only super-users can change the owning user of a file or folder.
->
->
-
-### The owning group
-
-**Background**
-
-In the POSIX ACLs, every user is associated with a "primary group." For example, user "alice" might belong to the "finance" group. Alice might also belong to multiple groups, but one group is always designated as her primary group. In POSIX, when Alice creates a file, the owning group of that file is set to her primary group, which in this case is "finance." The owning group otherwise behaves similarly to assigned permissions for other users/groups.
-
-Because there is no "primary group" associated with users in Data Lake Storage Gen1, the owning group is assigned as follows.
-
-**Assigning the owning group for a new file or folder**
-
-* **Case 1**: The root folder "/". This folder is created when a Data Lake Storage Gen1 account is created. In this case, the owning group is set to an all-zero GUID. This value does not permit any access. It is a placeholder until a group is assigned.
-* **Case 2** (Every other case): When a new item is created, the owning group is copied from the parent folder.
-
-**Changing the owning group**
-
-The owning group can be changed by:
-* Any super-user.
-* The owning user, if the owning user is also a member of the target group.
-
-> [!NOTE]
-> The owning group *cannot* change the ACLs of a file or folder.
->
-> For accounts created on or before September 2018, the owning group of the root folder (**Case 1** above) was set to the user who created the account. Because a single user account can't grant permissions through the owning group, this default setting grants no permissions. You can assign the owning group to a valid user group.
--
-## Access check algorithm
-
-The following pseudocode represents the access check algorithm for Data Lake Storage Gen1 accounts.
-
-```
-def access_check( user, desired_perms, path ) :
-    # access_check returns True if user has the desired permissions on the path, False otherwise
-    # user is the identity that wants to perform an operation on path
-    # desired_perms is a simple integer with values from 0 to 7 ( R=4, W=2, X=1). User desires these permissions
-    # path is the file or folder
-    # Note: the "sticky bit" is not illustrated in this algorithm
-
-    # Handle super users.
-    if (is_superuser(user)) :
-        return True
-
-    # Handle the owning user. Note that the mask IS NOT applied.
-    entry = get_acl_entry( path, OWNER )
-    if (user == entry.identity) :
-        return ( (desired_perms & entry.permissions) == desired_perms )
-
-    # Handle the named users. Note that the mask IS applied.
-    entries = get_acl_entries( path, NAMED_USER )
-    for entry in entries:
-        if (user == entry.identity) :
-            mask = get_mask( path )
-            return ( (desired_perms & entry.permissions & mask) == desired_perms )
-
-    # Handle named groups and the owning group. Permissions from all matching groups are combined.
-    member_count = 0
-    perms = 0
-    entries = get_acl_entries( path, NAMED_GROUP | OWNING_GROUP )
-    for entry in entries:
-        if (user_is_member_of_group(user, entry.identity)) :
-            member_count += 1
-            perms |= entry.permissions
-    if (member_count > 0) :
-        mask = get_mask( path )
-        return ( (desired_perms & perms & mask) == desired_perms )
-
-    # Handle other. The mask IS applied.
-    perms = get_perms_for_other(path)
-    mask = get_mask( path )
-    return ( (desired_perms & perms & mask) == desired_perms )
-```
-
-### The mask
-
-As illustrated in the Access Check Algorithm, the mask limits access for **named users**, the **owning group**, and **named groups**.
-
-> [!NOTE]
-> For a new Data Lake Storage Gen1 account, the mask for the Access ACL of the root folder ("/") defaults to RWX.
->
->
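-
-As a worked example of how the mask combines with an ACL entry, the following minimal sketch (illustrative only; the values are assumptions) computes the effective permission for a named user whose ACL entry grants RWX while the mask is R-X:
-
-```python
-# ACL entry for a named user and the folder's mask, in numeric form (R=4, W=2, X=1).
-entry_perms = 7   # RWX granted to the named user
-mask = 5          # R-X mask on the file or folder
-
-# The effective permission is the bitwise AND of the entry and the mask.
-effective = entry_perms & mask
-print(effective)  # 5, that is R-X: Write is filtered out by the mask
-```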
-
-### The sticky bit
-
-The sticky bit is a more advanced feature of a POSIX filesystem. In the context of Data Lake Storage Gen1, it is unlikely that the sticky bit will be needed. In summary, if the sticky bit is enabled on a folder, a child item can only be deleted or renamed by the child item's owning user.
-
-The sticky bit is not shown in the Azure portal.
-
-## Default permissions on new files and folders
-
-When a new file or folder is created under an existing folder, the Default ACL on the parent folder determines:
-
-- A child folder's Default ACL and Access ACL.
-- A child file's Access ACL (files do not have a Default ACL).
-
-### umask
-
-When creating a file or folder, umask is used to modify how the default ACLs are set on the child item. umask is a 9-bit value on parent folders that contains an RWX value for **owning user**, **owning group**, and **other**.
-
-The umask for Azure Data Lake Storage Gen1 is a constant value set to 007. This value translates to
-
-| umask component | Numeric form | Short form | Meaning |
-|--------------------|--------------|------------|---------|
-| umask.owning_user | 0 | `---` | For owning user, copy the parent's Default ACL to the child's Access ACL |
-| umask.owning_group | 0 | `---` | For owning group, copy the parent's Default ACL to the child's Access ACL |
-| umask.other | 7 | `RWX` | For other, remove all permissions on the child's Access ACL |
-
-The umask value used by Azure Data Lake Storage Gen1 effectively means that the value for other is never transmitted by default on new children - regardless of what the Default ACL indicates.
-
-The following pseudocode shows how the umask is applied when creating the ACLs for a child item.
-
-```
-def set_default_acls_for_new_child(parent, child):
-    # Build the child's ACLs by copying the parent's Default ACL entries,
-    # applying the umask to the owning user, owning group, and other entries.
-    child.acls = []
-    for entry in parent.acls :
-        new_entry = None
-        if (entry.type == OWNING_USER) :
-            new_entry = entry.clone(perms = entry.perms & (~umask.owning_user))
-        elif (entry.type == OWNING_GROUP) :
-            new_entry = entry.clone(perms = entry.perms & (~umask.owning_group))
-        elif (entry.type == OTHER) :
-            new_entry = entry.clone(perms = entry.perms & (~umask.other))
-        else :
-            new_entry = entry.clone(perms = entry.perms)
-        child.acls.append( new_entry )
-```
-
-## Common questions about ACLs in Data Lake Storage Gen1
-
-### Do I have to enable support for ACLs?
-
-No. Access control via ACLs is always on for a Data Lake Storage Gen1 account.
-
-### Which permissions are required to recursively delete a folder and its contents?
-
-* The parent folder must have **Write + Execute** permissions.
-* The folder to be deleted, and every folder within it, requires **Read + Write + Execute** permissions.
-
-> [!NOTE]
-> You do not need Write permissions to delete files in folders. Also, the root folder "/" can **never** be deleted.
->
->
-
-### Who is the owner of a file or folder?
-
-The creator of a file or folder becomes the owner.
-
-### Which group is set as the owning group of a file or folder at creation?
-
-The owning group is copied from the owning group of the parent folder under which the new file or folder is created.
-
-### I am the owning user of a file but I donΓÇÖt have the RWX permissions I need. What do I do?
-
-The owning user can change the permissions of the file to give themselves any RWX permissions they need.
-
-### When I look at ACLs in the Azure portal I see user names, but through APIs I see GUIDs. Why is that?
-
-Entries in the ACLs are stored as GUIDs that correspond to users in Microsoft Entra ID. The APIs return the GUIDs as is. The Azure portal tries to make ACLs easier to use by translating the GUIDs into friendly names when possible.
-
-### Why do I sometimes see GUIDs in the ACLs when I'm using the Azure portal?
-
-A GUID is shown when the user no longer exists in Microsoft Entra ID. Usually this happens when the user has left the company or their account has been deleted. Also, ensure that you're using the right ID for setting ACLs (details in the question below).
-
-### When using a service principal, what ID should I use to set ACLs?
-
-In the Azure portal, go to **Microsoft Entra ID** > **Enterprise applications** and select your application. The **Overview** tab displays an Object ID; use this Object ID (not the Application ID) when adding ACLs for data access.
-
-### Does Data Lake Storage Gen1 support inheritance of ACLs?
-
-No, but Default ACLs can be used to set ACLs for child files and folders newly created under the parent folder.
-
-### What are the limits for ACL entries on files and folders?
-
-32 ACLs can be set per file and per directory. Access and default ACLs each have their own 32 ACL entry limit. Use security groups for ACL assignments if possible. By using groups, you're less likely to exceed the maximum number of ACL entries per file or directory.
-
-### Where can I learn more about POSIX access control model?
-
-* [POSIX Access Control Lists on Linux](https://www.linux.com/news/posix-acls-linux)
-* [HDFS permission guide](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html)
-* [POSIX FAQ](https://www.opengroup.org/austin/papers/posix_faq.html)
-* [POSIX 1003.1 2008](https://standards.ieee.org/wp-content/uploads/import/documents/interpretations/1003.1-2008_interp.pdf)
-* [POSIX 1003.1 2013](https://pubs.opengroup.org/onlinepubs/9699919799.2013edition/)
-* [POSIX 1003.1 2016](https://pubs.opengroup.org/onlinepubs/9699919799.2016edition/)
-* [POSIX ACL on Ubuntu](https://help.ubuntu.com/community/FilePermissionsACLs)
-
-## See also
-
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
data-lake-store Data Lake Store Archive Eventhub Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-archive-eventhub-capture.md
- Title: Capture data from Event Hubs to Azure Data Lake Storage Gen1
-description: Learn how to use Azure Data Lake Storage Gen1 to capture data received by Azure Event Hubs. Begin by verifying the prerequisites.
---- Previously updated : 05/29/2018---
-# Use Azure Data Lake Storage Gen1 to capture data from Event Hubs
-
-Learn how to use Azure Data Lake Storage Gen1 to capture data received by Azure Event Hubs.
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md).
-
-* **An Event Hubs namespace**. For instructions, see [Create an Event Hubs namespace](../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace). Make sure the Data Lake Storage Gen1 account and the Event Hubs namespace are in the same Azure subscription.
--
-## Assign permissions to Event Hubs
-
-In this section, you create a folder within the account where you want to capture the data from Event Hubs. You also assign permissions to Event Hubs so that it can write data into a Data Lake Storage Gen1 account.
-
-1. Open the Data Lake Storage Gen1 account where you want to capture data from Event Hubs and then click on **Data Explorer**.
-
- ![Data Lake Storage Gen1 data explorer](./media/data-lake-store-archive-eventhub-capture/data-lake-store-open-data-explorer.png "Data Lake Storage Gen1 data explorer")
-
-1. Click **New Folder** and then enter a name for the folder where you want to capture the data.
-
- ![Create a new folder in Data Lake Storage Gen1](./media/data-lake-store-archive-eventhub-capture/data-lake-store-create-new-folder.png "Create a new folder in Data Lake Storage Gen1")
-
-1. Assign permissions at the root of Data Lake Storage Gen1.
-
- a. Click **Data Explorer**, select the root of the Data Lake Storage Gen1 account, and then click **Access**.
-
- ![Screenshot of the Data explorer with the root of the account and the Access option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-permissions-to-root.png "Assign permissions for the Data Lake Storage Gen1 root")
-
- b. Under **Access**, click **Add**, click **Select User or Group**, and then search for `Microsoft.EventHubs`.
-
- ![Screenshot of the Access page with the Add option, Select User or Group option, and Microsoft Eventhubs option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp.png "Assign permissions for the Data Lake Storage Gen1 root")
-
- Click **Select**.
-
- c. Under **Assign Permissions**, click **Select Permissions**. Set **Permissions** to **Execute**. Set **Add to** to **This folder and all children**. Set **Add as** to **An access permission entry and a default permission entry**.
-
- > [!IMPORTANT]
- > When creating a new folder hierarchy for capturing data received by Azure Event Hubs, this is an easy way to ensure access to the destination folder. However, adding permissions to all children of a top level folder with many child files and folders may take a long time. If your root folder contains a large number of files and folders, it may be faster to add **Execute** permissions for `Microsoft.EventHubs` individually to each folder in the path to your final destination folder.
-
- ![Screenshot of the Assign Permissions section with the Select Permissions option called out. The Select Permissions section is next to it with the Execute option, Add to option, and Add as option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp1.png "Assign permissions for the Data Lake Storage Gen1 root")
-
- Click **OK**.
-
-1. Assign permissions for the folder under the Data Lake Storage Gen1 account where you want to capture data.
-
- a. Click **Data Explorer**, select the folder in the Data Lake Storage Gen1 account, and then click **Access**.
-
- ![Screenshot of the Data explorer with a folder in the account and the Access option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-permissions-to-folder.png "Assign permissions for the Data Lake Storage Gen1 folder")
-
- b. Under **Access**, click **Add**, click **Select User or Group**, and then search for `Microsoft.EventHubs`.
-
- ![Screenshot of the Data explorer Access page with the Add option, Select User or Group option, and Microsoft Eventhubs option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp.png "Assign permissions for the Data Lake Storage Gen1 folder")
-
- Click **Select**.
-
- c. Under **Assign Permissions**, click **Select Permissions**. Set **Permissions** to **Read, Write,** and **Execute**. Set **Add to** to **This folder and all children**. Finally, set **Add as** to **An access permission entry and a default permission entry**.
-
- ![Screenshot of the Assign Permissions section with the Select Permissions option called out. The Select Permissions section is next to it with the Read, Write, and Execute options, the Add to option, and the Add as option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-assign-eventhub-sp-folder.png "Assign permissions for the Data Lake Storage Gen1 folder")
-
- Click **OK**.
-
-## Configure Event Hubs to capture data to Data Lake Storage Gen1
-
-In this section, you create an Event Hub within an Event Hubs namespace. You also configure the Event Hub to capture data to an Azure Data Lake Storage Gen1 account. This section assumes that you have already created an Event Hubs namespace.
-
-1. From the **Overview** pane of the Event Hubs namespace, click **+ Event Hub**.
-
- ![Screenshot of the Overview pane with the Event Hub option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-create-event-hub.png "Create Event Hub")
-
-1. Provide the following values to configure Event Hubs to capture data to Data Lake Storage Gen1.
-
- ![Screenshot of the Create Event Hub dialog box with the Name text box, the Capture option, the Capture Provider option, the Select Data Lake Store option, and the Data Lake Path option called out.](./media/data-lake-store-archive-eventhub-capture/data-lake-store-configure-eventhub.png "Create Event Hub")
-
- a. Provide a name for the Event Hub.
-
- b. For this tutorial, set **Partition Count** and **Message Retention** to the default values.
-
- c. Set **Capture** to **On**. Set the **Time Window** (how frequently to capture) and **Size Window** (data size to capture).
-
- d. For **Capture Provider**, select **Azure Data Lake Store** and then select the Data Lake Storage Gen1 account you created earlier. For **Data Lake Path**, enter the name of the folder you created in the Data Lake Storage Gen1 account. You only need to provide the relative path to the folder.
-
- e. Leave the **Sample capture file name formats** to the default value. This option governs the folder structure that is created under the capture folder.
-
- f. Click **Create**.
-
-## Test the setup
-
-You can now test the solution by sending data to the Azure Event Hub. Follow the instructions at [Send events to Azure Event Hubs](../event-hubs/event-hubs-dotnet-framework-getstarted-send.md). Once you start sending the data, you see the data reflected in Data Lake Storage Gen1 using the folder structure you specified. For example, you see a folder structure, as shown in the following screenshot, in your Data Lake Storage Gen1 account.
-
-![Sample EventHub data in Data Lake Storage Gen1](./media/data-lake-store-archive-eventhub-capture/data-lake-store-eventhub-data-sample.png "Sample EventHub data in Data Lake Storage Gen1")
-
-> [!NOTE]
-> Even if you do not have messages coming into Event Hubs, Event Hubs writes empty files with just the headers into the Data Lake Storage Gen1 account. The files are written at the same time interval that you provided while creating the event hub.
->
->
-
-## Analyze data in Data Lake Storage Gen1
-
-Once the data is in Data Lake Storage Gen1, you can run analytical jobs to process and crunch the data. See the [U-SQL Avro example](https://github.com/Azure/usql/tree/master/Examples/AvroExamples) for how to do this by using Azure Data Lake Analytics.
-
-
-## See also
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md)
data-lake-store Data Lake Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-best-practices.md
- Title: Best practices for using Azure Data Lake Storage Gen1 | Microsoft Docs
-description: Learn the best practices about data ingestion, data security, and performance related to using Azure Data Lake Storage Gen1 (previously known as Azure Data Lake Store)
------ Previously updated : 06/27/2018---
-# Best practices for using Azure Data Lake Storage Gen1
--
-In this article, you learn about best practices and considerations for working with Azure Data Lake Storage Gen1. This article provides information around security, performance, resiliency, and monitoring for Data Lake Storage Gen1. Before Data Lake Storage Gen1, working with truly big data in services like Azure HDInsight was complex. You had to shard data across multiple Blob storage accounts so that petabyte storage and optimal performance at that scale could be achieved. With Data Lake Storage Gen1, most of the hard limits for size and performance are removed. However, there are still some considerations that this article covers so that you can get the best performance with Data Lake Storage Gen1.
-
-## Security considerations
-
-Azure Data Lake Storage Gen1 offers POSIX access controls and detailed auditing for Microsoft Entra users, groups, and service principals. These access controls can be set on existing files and folders. The access controls can also be used to create defaults that are applied to new files or folders. When permissions are set on existing folders and child objects, the permissions need to be propagated recursively on each object. If there are a large number of files, propagating the permissions can take a long time; propagation proceeds at roughly 30 to 50 objects per second. Hence, plan the folder structure and user groups appropriately. Otherwise, it can cause unanticipated delays and issues when you work with your data.
-
-Assume you have a folder with 100,000 child objects. At the lower bound of 30 objects processed per second, updating the permissions for the whole folder could take about an hour. More details on Data Lake Storage Gen1 ACLs are available at [Access control in Azure Data Lake Storage Gen1](data-lake-store-access-control.md). For improved performance on assigning ACLs recursively, you can use the Azure Data Lake Command-Line Tool. The tool uses multiple threads and recursive navigation logic to quickly apply ACLs to millions of files. The tool is available for Linux and Windows, and the [documentation](https://github.com/Azure/data-lake-adlstool) and [downloads](https://aka.ms/adlstool-download) for this tool can be found on GitHub. These same performance improvements can be enabled by your own tools written with the Data Lake Storage Gen1 [.NET](data-lake-store-data-operations-net-sdk.md) and [Java](data-lake-store-get-started-java-sdk.md) SDKs.
-
-### Use security groups versus individual users
-
-When working with big data in Data Lake Storage Gen1, most likely a service principal is used to allow services such as Azure HDInsight to work with the data. However, there might be cases where individual users need access to the data as well. In such cases, you must use Microsoft Entra ID [security groups](data-lake-store-secure-data.md#create-security-groups-in-azure-active-directory) instead of assigning individual users to folders and files.
-
-Once a security group is assigned permissions, adding or removing users from the group doesn't require any updates to Data Lake Storage Gen1. This also helps ensure you don't exceed the limit of [32 Access and Default ACLs](../azure-resource-manager/management/azure-subscription-service-limits.md#data-lake-storage-limits) (this includes the four POSIX-style ACLs that are always associated with every file and folder: [the owning user](data-lake-store-access-control.md#the-owning-user), [the owning group](data-lake-store-access-control.md#the-owning-group), [the mask](data-lake-store-access-control.md#the-mask), and other).
-
-### Security for groups
-
-As discussed, when users need access to Data Lake Storage Gen1, it's best to use Microsoft Entra security groups. Some recommended groups to start with might be **ReadOnlyUsers**, **WriteAccessUsers**, and **FullAccessUsers** for the root of the account, and even separate ones for key subfolders. If there are any other anticipated groups of users that might be added later, but have not been identified yet, you might consider creating dummy security groups that have access to certain folders. Using security groups ensures that you don't later need a long processing time to assign new permissions to thousands of files.
-
-### Security for service principals
-
-Microsoft Entra service principals are typically used by services like Azure HDInsight to access data in Data Lake Storage Gen1. Depending on the access requirements across multiple workloads, there might be some considerations to ensure security inside and outside of the organization. For many customers, a single Microsoft Entra service principal might be adequate, and it can have full permissions at the root of the Data Lake Storage Gen1 account. Other customers might require multiple clusters with different service principals where one cluster has full access to the data, and another cluster with only read access. As with the security groups, you might consider making a service principal for each anticipated scenario (read, write, full) once a Data Lake Storage Gen1 account is created.
-
-### Enable the Data Lake Storage Gen1 firewall with Azure service access
-
-Data Lake Storage Gen1 supports the option of turning on a firewall and limiting access only to Azure services, which is recommended to reduce the attack surface from outside intrusions. The firewall can be enabled on the Data Lake Storage Gen1 account in the Azure portal via the **Firewall** > **Enable Firewall (ON)** > **Allow access to Azure services** options.
-
-![Firewall settings in Data Lake Storage Gen1](./media/data-lake-store-best-practices/data-lake-store-firewall-setting.png "Firewall settings in Data Lake Storage Gen1")
-
-Once the firewall is enabled, only Azure services such as HDInsight, Data Factory, and Azure Synapse Analytics have access to Data Lake Storage Gen1. Due to the internal network address translation used by Azure, the Data Lake Storage Gen1 firewall does not support restricting specific services by IP and is only intended for restricting endpoints outside of Azure, such as on-premises networks.
-
-## Performance and scale considerations
-
-One of the most powerful features of Data Lake Storage Gen1 is that it removes the hard limits on data throughput. Removing the limits enables customers to grow their data size and accompanied performance requirements without needing to shard the data. One of the most important considerations for optimizing Data Lake Storage Gen1 performance is that it performs the best when given parallelism.
-
-### Improve throughput with parallelism
-
-Consider using 8-12 threads per core for the most optimal read/write throughput. Because reads and writes block on a single thread, more threads allow higher concurrency on the VM. To ensure that levels are healthy and that parallelism can be increased, be sure to monitor the VM's CPU utilization.
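-
-As an example of driving parallelism from client code, the following sketch uses the azure-datalake-store Python SDK's bulk uploader with an explicit thread count. The tenant, client, account name, and paths are placeholders, and the thread count of 64 is only a starting point to tune while watching CPU utilization:
-
-```python
-from azure.datalake.store import core, lib, multithread
-
-# Placeholder service principal credentials and account name.
-TENANT_ID = "<tenant-id>"
-CLIENT_ID = "<client-id>"
-CLIENT_SECRET = "<client-secret>"
-ACCOUNT_NAME = "<adlsg1-account-name>"
-
-token = lib.auth(tenant_id=TENANT_ID, client_id=CLIENT_ID,
-                 client_secret=CLIENT_SECRET, resource="https://datalake.azure.net/")
-adl = core.AzureDLFileSystem(token, store_name=ACCOUNT_NAME)
-
-# Upload a local folder with many threads; tune nthreads while monitoring VM CPU utilization.
-multithread.ADLUploader(adl,
-                        lpath="C:\\data\\input",   # local source folder
-                        rpath="/raw/input",        # Data Lake Storage Gen1 destination
-                        nthreads=64,
-                        overwrite=True)
-```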
-
-### Avoid small file sizes
-
-POSIX permissions and auditing in Data Lake Storage Gen1 come with an overhead that becomes apparent when working with numerous small files. As a best practice, you must batch your data into larger files versus writing thousands or millions of small files to Data Lake Storage Gen1. Avoiding small file sizes can have multiple benefits, such as:
-
-* Lowering the authentication checks across multiple files
-* Reduced open file connections
-* Faster copying/replication
-* Fewer files to process when updating Data Lake Storage Gen1 POSIX permissions
-
-Depending on what services and workloads are using the data, a good size to consider for files is 256 MB or greater. If the file sizes cannot be batched when landing in Data Lake Storage Gen1, you can have a separate compaction job that combines these files into larger ones. For more information and recommendations on file sizes and organizing the data in Data Lake Storage Gen1, see [Structure your data set](data-lake-store-performance-tuning-guidance.md#structure-your-data-set).
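-
-If you need such a compaction job, the following sketch outlines one approach using the azure-datalake-store Python SDK. The `adl` client is assumed to be an already authenticated `AzureDLFileSystem` (as in the earlier upload sketch), and the folder and file paths are placeholders:
-
-```python
-def compact_folder(adl, source_folder, dest_file):
-    """Concatenate all files directly under source_folder into a single larger file."""
-    small_files = [p for p in adl.ls(source_folder) if adl.info(p)["type"] != "DIRECTORY"]
-    with adl.open(dest_file, "wb") as out:
-        for path in sorted(small_files):
-            with adl.open(path, "rb") as src:
-                # Stream in 4-MB chunks to keep memory use bounded.
-                for chunk in iter(lambda: src.read(4 * 1024 * 1024), b""):
-                    out.write(chunk)
-    # Optionally remove the originals once the combined file has been verified:
-    # for path in small_files:
-    #     adl.rm(path)
-
-# Example: compact_folder(adl, "/raw/clickstream/2017/08/11", "/curated/clickstream/2017-08-11.csv")
-```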
-
-### Large file sizes and potential performance impact
-
-Although Data Lake Storage Gen1 supports large files up to petabytes in size, for optimal performance and depending on the process reading the data, it might not be ideal to go above 2 GB on average. For example, when using **Distcp** to copy data between locations or different storage accounts, files are the finest level of granularity used to determine map tasks. So, if you are copying 10 files that are 1 TB each, at most 10 mappers are allocated. Also, if you have lots of files with mappers assigned, initially the mappers work in parallel to move large files. However, as the job starts to wind down only a few mappers remain allocated and you can be stuck with a single mapper assigned to a large file. Microsoft has submitted improvements to Distcp to address this issue in future Hadoop versions.
-
-Another example to consider is when using Azure Data Lake Analytics with Data Lake Storage Gen1. Depending on the processing done by the extractor, some files that cannot be split (for example, XML, JSON) could suffer in performance when greater than 2 GB. In cases where files can be split by an extractor (for example, CSV), large files are preferred.
-
-### Capacity plan for your workload
-
-Azure Data Lake Storage Gen1 removes the hard IO throttling limits that are placed on Blob storage accounts. However, there are still soft limits that need to be considered. The default ingress/egress throttling limits meet the needs of most scenarios. If your workload needs the limits increased, work with Microsoft support. Also, look at the limits during the proof-of-concept stage so that IO throttling limits are not hit during production. If that happens, it might require waiting for a manual increase from the Microsoft engineering team. If IO throttling occurs, Azure Data Lake Storage Gen1 returns an error code of 429, and the request should ideally be retried with an appropriate exponential backoff policy.
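-
-A retry wrapper with exponential backoff might look like the following sketch, which calls the WebHDFS-compatible REST endpoint directly with the `requests` library. The account name, path, and bearer token are placeholders, and the retry limits are arbitrary values to tune for your workload:
-
-```python
-import random
-import time
-
-import requests
-
-ACCOUNT = "<adlsg1-account-name>"   # placeholder
-TOKEN = "<bearer-token>"            # placeholder Microsoft Entra access token
-
-def list_status_with_backoff(path, max_retries=5):
-    """List a folder, retrying with exponential backoff (plus jitter) on HTTP 429."""
-    url = f"https://{ACCOUNT}.azuredatalakestore.net/webhdfs/v1{path}?op=LISTSTATUS"
-    headers = {"Authorization": f"Bearer {TOKEN}"}
-    delay = 1.0
-    for _ in range(max_retries):
-        response = requests.get(url, headers=headers)
-        if response.status_code != 429:
-            response.raise_for_status()
-            return response.json()
-        # Throttled: wait, then retry with a doubled delay and a little jitter.
-        time.sleep(delay + random.uniform(0, 0.5))
-        delay *= 2
-    raise RuntimeError(f"Still throttled after {max_retries} attempts")
-```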
-
-### Optimize "writes" with the Data Lake Storage Gen1 driver buffer
-
-To optimize performance and reduce IOPS when writing to Data Lake Storage Gen1 from Hadoop, perform write operations as close to the Data Lake Storage Gen1 driver buffer size as possible. Try not to exceed the buffer size before flushing, such as when streaming using Apache Storm or Spark streaming workloads. When writing to Data Lake Storage Gen1 from HDInsight/Hadoop, it is important to know that Data Lake Storage Gen1 has a driver with a 4-MB buffer. Like many file system drivers, this buffer can be manually flushed before reaching the 4-MB size. If not, it is immediately flushed to storage when the next write exceeds the buffer's maximum size. Where possible, avoid overrunning or significantly underrunning the buffer when setting your sync/flush policy by count or time window.
-
-## Resiliency considerations
-
-When architecting a system with Data Lake Storage Gen1 or any cloud service, you must consider your availability requirements and how to respond to potential interruptions in the service. An issue could be localized to the specific instance or even region-wide, so having a plan for both is important. Depending on the **recovery time objective** and the **recovery point objective** SLAs for your workload, you might choose a more or less aggressive strategy for high availability and disaster recovery.
-
-### High availability and disaster recovery
-
-High availability (HA) and disaster recovery (DR) can sometimes be combined together, although each has a slightly different strategy, especially when it comes to data. Data Lake Storage Gen1 already handles 3x replication under the hood to guard against localized hardware failures. However, since replication across regions is not built in, you must manage this yourself. When building a plan for HA, in the event of a service interruption the workload needs access to the latest data as quickly as possible by switching over to a separately replicated instance locally or in a new region.
-
-In a DR strategy, to prepare for the unlikely event of a catastrophic failure of a region, it is also important to have data replicated to a different region. This data might initially be the same as the replicated HA data. However, you must also consider your requirements for edge cases such as data corruption where you may want to create periodic snapshots to fall back to. Depending on the importance and size of the data, consider rolling delta snapshots of 1-, 6-, and 24-hour periods on the local and/or secondary store, according to risk tolerances.
-
-For data resiliency with Data Lake Storage Gen1, it is recommended to geo-replicate your data to a separate region with a frequency that satisfies your HA/DR requirements, ideally every hour. This frequency of replication minimizes massive data movements that can compete for throughput with the main system, and provides a better recovery point objective (RPO). Additionally, you should consider ways for the application using Data Lake Storage Gen1 to automatically fail over to the secondary account through monitoring triggers or length of failed attempts, or at least send a notification to admins for manual intervention. Keep in mind that there is a tradeoff between failing over and waiting for a service to come back online. If the data hasn't finished replicating, a failover could cause potential data loss, inconsistency, or complex merging of the data.
-
-Below are the top three recommended options for orchestrating replication between Data Lake Storage Gen1 accounts, and key differences between each of them.
-
-| |Distcp |Azure Data Factory |AdlCopy |
-|---|---|---|---|
-|**Scale limits** | Bounded by worker nodes | Limited by Max Cloud Data Movement units | Bounded by Analytics units |
-|**Supports copying deltas** | Yes | No | No |
-|**Built-in orchestration** | No (use Oozie, Airflow, or cron jobs) | Yes | No (use Azure Automation or Windows Task Scheduler) |
-|**Supported file systems** | ADL, HDFS, WASB, S3, GS, CFS |Numerous, see [Connectors](../data-factory/connector-azure-blob-storage.md). | ADL to ADL, WASB to ADL (same region only) |
-|**OS support** |Any OS running Hadoop | N/A | Windows 10 |
-
-### Use Distcp for data movement between two locations
-
-Short for distributed copy, Distcp is a Linux command-line tool that comes with Hadoop and provides distributed data movement between two locations. The two locations can be Data Lake Storage Gen1, HDFS, WASB, or S3. This tool uses MapReduce jobs on a Hadoop cluster (for example, HDInsight) to scale out on all the nodes. Distcp is considered the fastest way to move big data without special network compression appliances. Distcp also provides an option to copy only the deltas between two locations, handles automatic retries, and can dynamically scale compute. This approach is incredibly efficient when it comes to replicating things like Hive/Spark tables that can have many large files in a single directory and you only want to copy over the modified data. For these reasons, Distcp is the most recommended tool for copying data between big data stores.
-
-Copy jobs can be triggered by Apache Oozie workflows using frequency or data triggers, as well as Linux cron jobs. For intensive replication jobs, it is recommended to spin up a separate HDInsight Hadoop cluster that can be tuned and scaled specifically for the copy jobs. This ensures that copy jobs do not interfere with critical jobs. If replication runs infrequently enough, the cluster can even be taken down between each job. If you fail over to the secondary region, make sure that another cluster is also spun up in the secondary region to replicate new data back to the primary Data Lake Storage Gen1 account once it comes back up. For examples of using Distcp, see [Use Distcp to copy data between Azure Storage Blobs and Data Lake Storage Gen1](data-lake-store-copy-data-wasb-distcp.md).
-
-### Use Azure Data Factory to schedule copy jobs
-
-Azure Data Factory can also be used to schedule copy jobs using a **Copy Activity**, and can even be set up on a frequency via the **Copy Wizard**. Keep in mind that Azure Data Factory has a limit of cloud data movement units (DMUs), and eventually caps the throughput/compute for large data workloads. Additionally, Azure Data Factory currently does not offer delta updates between Data Lake Storage Gen1 accounts, so folders like Hive tables would require a complete copy to replicate. Refer to the [Copy Activity tuning guide](../data-factory/copy-activity-performance.md) for more information on copying with Data Factory.
-
-### AdlCopy
-
-AdlCopy is a Windows command-line tool that allows you to copy data between two Data Lake Storage Gen1 accounts only within the same region. The AdlCopy tool provides a standalone option or the option to use an Azure Data Lake Analytics account to run your copy job. Though it was originally built for on-demand copies as opposed to robust replication, it provides another option to do distributed copying across Data Lake Storage Gen1 accounts within the same region. For reliability, it's recommended to use the premium Data Lake Analytics option for any production workload. The standalone version can return busy responses and has limited scale and monitoring.
-
-Like Distcp, AdlCopy needs to be orchestrated by something like Azure Automation or Windows Task Scheduler. As with Data Factory, AdlCopy does not support copying only updated files; it recopies and overwrites existing files. For more information and examples of using AdlCopy, see [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md).
-
-## Monitoring considerations
-
-Data Lake Storage Gen1 provides detailed diagnostic logs and auditing. It also provides some basic metrics in the Azure portal under the Data Lake Storage Gen1 account and in Azure Monitor. Availability of Data Lake Storage Gen1 is displayed in the Azure portal. However, this metric is refreshed every seven minutes and cannot be queried through a publicly exposed API. To get the most up-to-date availability of a Data Lake Storage Gen1 account, you must run your own synthetic tests to validate availability. Other metrics such as total storage utilization, read/write requests, and ingress/egress can take up to 24 hours to refresh. So, more up-to-date metrics must be calculated manually through Hadoop command-line tools or by aggregating log information. The quickest way to get the most recent storage utilization is running this HDFS command from a Hadoop cluster node (for example, head node):
-
-```console
-hdfs dfs -du -s -h adl://<adlsg1_account_name>.azuredatalakestore.net:443/
-```
-
-### Export Data Lake Storage Gen1 diagnostics
-
-One of the quickest ways to get access to searchable logs from Data Lake Storage Gen1 is to enable log shipping to **Log Analytics** under the **Diagnostics** blade for the Data Lake Storage Gen1 account. This provides immediate access to incoming logs with time and content filters, along with alerting options (email/webhook) triggered within 15-minute intervals. For instructions, see [Accessing diagnostic logs for Azure Data Lake Storage Gen1](data-lake-store-diagnostic-logs.md).
-
-For more real-time alerting and more control on where to land the logs, consider exporting logs to Azure EventHub where content can be analyzed individually or over a time window in order to submit real-time notifications to a queue. A separate application such as a [Logic App](../connectors/connectors-create-api-azure-event-hubs.md) can then consume and communicate the alerts to the appropriate channel, as well as submit metrics to monitoring tools like NewRelic, Datadog, or AppDynamics. Alternatively, if you are using a third-party tool such as ElasticSearch, you can export the logs to Blob Storage and use the [Azure Logstash plugin](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-input-azureblob) to consume the data into your Elasticsearch, Kibana, and Logstash (ELK) stack.
-
-### Turn on debug-level logging in HDInsight
-
-If Data Lake Storage Gen1 log shipping is not turned on, Azure HDInsight also provides a way to turn on [client-side logging for Data Lake Storage Gen1](data-lake-store-performance-tuning-mapreduce.md) via log4j. You must set the following property in **Ambari** > **YARN** > **Config** > **Advanced yarn-log4j configurations**:
-
-`log4j.logger.com.microsoft.azure.datalake.store=DEBUG`
-
-Once the property is set and the nodes are restarted, Data Lake Storage Gen1 diagnostics are written to the YARN logs on the nodes (/tmp/\<user\>/yarn.log), and important details like errors or throttling (HTTP 429 error code) can be monitored. This same information can also be monitored in Azure Monitor logs or wherever logs are shipped to in the [Diagnostics](data-lake-store-diagnostic-logs.md) blade of the Data Lake Storage Gen1 account. It is recommended to at least have client-side logging turned on or utilize the log shipping option with Data Lake Storage Gen1 for operational visibility and easier debugging.
-
-### Run synthetic transactions
-
-Currently, the service availability metric for Data Lake Storage Gen1 in the Azure portal has a 7-minute refresh window. Also, it cannot be queried using a publicly exposed API. Hence, it is recommended to build a basic application that performs synthetic transactions against Data Lake Storage Gen1 and can provide up-to-the-minute availability. An example might be creating a WebJob, Logic App, or Azure Function App to perform a read, create, and update against Data Lake Storage Gen1 and send the results to your monitoring solution. The operations can be done in a temporary folder and then deleted after the test, which might be run every 30-60 seconds, depending on requirements.
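-
-A minimal synthetic transaction might look like the following sketch, which uses the azure-datalake-store Python SDK to create, read, and delete a small file in a temporary folder. The `adl` client is assumed to be an already authenticated `AzureDLFileSystem`, and the probe folder and the way you report the result are placeholders for your own monitoring integration:
-
-```python
-import time
-import uuid
-
-def run_synthetic_transaction(adl, probe_folder="/monitoring/probe"):
-    """Create, read back, and delete a small file; return elapsed seconds, or None on failure."""
-    probe_path = f"{probe_folder}/probe-{uuid.uuid4()}.txt"
-    payload = b"availability probe"
-    start = time.time()
-    try:
-        with adl.open(probe_path, "wb") as f:   # create
-            f.write(payload)
-        with adl.open(probe_path, "rb") as f:   # read back
-            assert f.read() == payload
-        adl.rm(probe_path)                      # clean up
-        return time.time() - start
-    except Exception:
-        return None
-
-# Run this every 30-60 seconds from a WebJob, Logic App, or Azure Function App,
-# and send the result to your monitoring solution.
-```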
-
-## Directory layout considerations
-
-When landing data into a data lake, it's important to pre-plan the structure of the data so that security, partitioning, and processing can be utilized effectively. Many of the following recommendations can be used whether it's with Azure Data Lake Storage Gen1, Blob Storage, or HDFS. Every workload has different requirements on how the data is consumed, but below are some common layouts to consider when working with IoT and batch scenarios.
-
-### IoT structure
-
-In IoT workloads, there can be a great deal of data being landed in the data store that spans across numerous products, devices, organizations, and customers. It's important to pre-plan the directory layout for organization, security, and efficient processing of the data for downstream consumers. A general template to consider might be the following layout:
-
-```console
-{Region}/{SubjectMatter(s)}/{yyyy}/{mm}/{dd}/{hh}/
-```
-
-For example, landing telemetry for an airplane engine within the UK might look like the following structure:
-
-```console
-UK/Planes/BA1293/Engine1/2017/08/11/12/
-```
-
-There's an important reason to put the date at the end of the folder structure. If you want to lock down certain regions or subject matters to users/groups, then you can easily do so with the POSIX permissions. Otherwise, if the date structure were in front, restricting a security group to viewing just the UK data or certain planes would require a separate permission for numerous folders under every hour folder. Additionally, having the date structure in front would rapidly increase the number of folders as time went on.
-
-### Batch jobs structure
-
-At a high level, a commonly used approach in batch processing is to land data in an "in" folder. Then, once the data is processed, put the new data into an "out" folder for downstream processes to consume. This directory structure is seen sometimes for jobs that require processing on individual files and might not require massively parallel processing over large datasets. Like the IoT structure recommended above, a good directory structure has the parent-level folders for things such as region and subject matters (for example, organization, product/producer). This structure helps with securing the data across your organization and better management of the data in your workloads. Furthermore, consider date and time in the structure to allow better organization, filtered searches, security, and automation in the processing. The level of granularity for the date structure is determined by the interval on which the data is uploaded or processed, such as hourly, daily, or even monthly.
-
-Sometimes file processing is unsuccessful due to data corruption or unexpected formats. In such cases, directory structure might benefit from a **/bad** folder to move the files to for further inspection. The batch job might also handle the reporting or notification of these *bad* files for manual intervention. Consider the following template structure:
-
-```console
-{Region}/{SubjectMatter(s)}/In/{yyyy}/{mm}/{dd}/{hh}/
-{Region}/{SubjectMatter(s)}/Out/{yyyy}/{mm}/{dd}/{hh}/
-{Region}/{SubjectMatter(s)}/Bad/{yyyy}/{mm}/{dd}/{hh}/
-```
-
-For example, a marketing firm receives daily data extracts of customer updates from their clients in North America. It might look like the following snippet before and after being processed:
-
-```console
-NA/Extracts/ACMEPaperCo/In/2017/08/14/updates_08142017.csv
-NA/Extracts/ACMEPaperCo/Out/2017/08/14/processed_updates_08142017.csv
-```
-
-In the common case of batch data being processed directly into databases such as Hive or traditional SQL databases, there isn't a need for an **/in** or **/out** folder since the output already goes into a separate folder for the Hive table or external database. For example, daily extracts from customers would land into their respective folders, and orchestration by something like Azure Data Factory, Apache Oozie, or Apache Airflow would trigger a daily Hive or Spark job to process and write the data into a Hive table.
-
-## Next steps
-
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-* [Access Control in Azure Data Lake Storage Gen1](data-lake-store-access-control.md)
-* [Security in Azure Data Lake Storage Gen1](data-lake-store-security-overview.md)
-* [Tuning Azure Data Lake Storage Gen1 for performance](data-lake-store-performance-tuning-guidance.md)
-* [Performance tuning guidance for using HDInsight Spark with Azure Data Lake Storage Gen1](data-lake-store-performance-tuning-spark.md)
-* [Performance tuning guidance for using HDInsight Hive with Azure Data Lake Storage Gen1](data-lake-store-performance-tuning-hive.md)
-* [Create HDInsight clusters with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Comparison With Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-comparison-with-blob-storage.md
- Title: Comparison of Azure Data Lake Storage Gen1 with Blob storage
-description: Learn about the differences between Azure Data Lake Storage Gen1 and Azure Blob Storage regarding some key aspects of big data processing.
---- Previously updated : 03/26/2018---
-# Comparing Azure Data Lake Storage Gen1 and Azure Blob Storage
--
-The table in this article summarizes the differences between Azure Data Lake Storage Gen1 and Azure Blob Storage along some key aspects of big data processing. Azure Blob Storage is a general purpose, scalable object store that is designed for a wide variety of storage scenarios. Azure Data Lake Storage Gen1 is a hyper-scale repository that is optimized for big data analytics workloads.
-
-| Category | Azure Data Lake Storage Gen1 | Azure Blob Storage |
-| -- | - | |
-| Purpose |Optimized storage for big data analytics workloads |General purpose object store for a wide variety of storage scenarios, including big data analytics |
-| Use Cases |Batch, interactive, streaming analytics and machine learning data such as log files, IoT data, click streams, large datasets |Any type of text or binary data, such as application back end, backup data, media storage for streaming and general purpose data. Additionally, full support for analytics workloads; batch, interactive, streaming analytics and machine learning data such as log files, IoT data, click streams, large datasets |
-| Key Concepts |A Data Lake Storage Gen1 account contains folders, which in turn contain data stored as files |A storage account has containers, which in turn hold data in the form of blobs |
-| Structure |Hierarchical file system |Object store with flat namespace |
-| API |REST API over HTTPS |REST API over HTTP/HTTPS |
-| Server-side API |[WebHDFS-compatible REST API](/rest/api/datalakestore/) |[Azure Blob Storage REST API](/rest/api/storageservices/Blob-Service-REST-API) |
-| Hadoop File System Client |Yes |Yes |
-| Data Operations - Authentication |Based on [Microsoft Entra identities](../active-directory/develop/authentication-vs-authorization.md) |Based on shared secrets - [Account Access Keys](../storage/common/storage-account-keys-manage.md) and [Shared Access Signature Keys](../storage/common/storage-sas-overview.md). |
-| Data Operations - Authentication Protocol |[OpenID Connect](https://openid.net/connect/). Calls must contain a valid JWT (JSON web token) issued by Microsoft Entra ID.|Hash-based Message Authentication Code (HMAC). Calls must contain a Base64-encoded SHA-256 hash over a part of the HTTP request. |
-| Data Operations - Authorization |POSIX Access Control Lists (ACLs). ACLs based on Microsoft Entra identities can be set at the file and folder level. |For account-level authorization - Use [Account Access Keys](../storage/common/storage-account-keys-manage.md)<br>For account, container, or blob authorization - Use [Shared Access Signature Keys](../storage/common/storage-sas-overview.md) |
-| Data Operations - Auditing |Available. See [here](data-lake-store-diagnostic-logs.md) for information. |Available |
-| Encryption data at rest |<ul><li>Transparent, Server side</li> <ul><li>With service-managed keys</li><li>With customer-managed keys in Azure KeyVault</li></ul></ul> |<ul><li>Transparent, Server side</li> <ul><li>With service-managed keys</li><li>With customer-managed keys in Azure KeyVault (preview)</li></ul><li>Client-side encryption</li></ul> |
-| Management operations (for example, Account Create) |[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) for account management |[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) for account management |
-| Developer SDKs |.NET, Java, Python, Node.js |.NET, Java, Python, Node.js, C++, Ruby, PHP, Go, Android, iOS |
-| Analytics Workload Performance |Optimized performance for parallel analytics workloads. High Throughput and IOPS. |Optimized performance for parallel analytics workloads. |
-| Size limits |No limits on account sizes, file sizes, or number of files |For specific limits, see [Scalability targets for standard storage accounts](../storage/common/scalability-targets-standard-account.md) and [Scalability and performance targets for Blob storage](../storage/blobs/scalability-targets.md). Larger account limits available by contacting [Azure Support](https://azure.microsoft.com/support/faq/) |
-| Geo-redundancy |Locally redundant (multiple copies of data in one Azure region) |Locally redundant (LRS), zone redundant (ZRS), globally redundant (GRS), read-access globally redundant (RA-GRS). See [here](../storage/common/storage-redundancy.md) for more information |
-| Service state |Generally available |Generally available |
-| Regional availability |See [here](https://azure.microsoft.com/regions/#services) |Available in all Azure regions |
-| Price |See [Pricing](https://azure.microsoft.com/pricing/details/data-lake-store/) |See [Pricing](https://azure.microsoft.com/pricing/details/storage/) |
data-lake-store Data Lake Store Compatible Oss Other Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-compatible-oss-other-applications.md
- Title: Big data applications compatible with Data Lake Storage Gen1 | Microsoft Docs
-description: List of open source applications that work with Azure Data Lake Storage Gen1 (previously known as Azure Data Lake Store)
---- Previously updated : 06/27/2018---
-# Open Source Big Data applications that work with Azure Data Lake Storage Gen1
--
-This article lists the open source big data applications that work with Azure Data Lake Storage Gen1. For the applications in the table below, only the versions available with the listed distribution are supported. For information on what versions of these applications are available with HDInsight, see [HDInsight component versioning](../hdinsight/hdinsight-component-versioning.md).
-
-| Open Source Software | Distribution |
-| | |
-| [Apache Sqoop](https://sqoop.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [MapReduce](https://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Storm](https://storm.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Hive](https://hive.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [HCatalog](https://cwiki.apache.org/confluence/display/Hive/HCatalog) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Mahout](https://mahout.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Pig/Pig Latin](https://pig.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Oozie](https://oozie.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Zookeeper](https://zookeeper.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Tez](https://tez.apache.org/) |HDInsight 3.2, 3.4, 3.5, and 3.6 |
-| [Apache Spark](https://spark.apache.org/) |HDInsight 3.4, 3.5, and 3.6 |
--
-## See also
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-
data-lake-store Data Lake Store Connectivity From Vnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-connectivity-from-vnets.md
- Title: Connect to Azure Data Lake Storage Gen1 from VNETs | Microsoft Docs
-description: Learn how to enable access to Azure Data Lake Storage Gen1 from Azure virtual machines that have restricted access to resources.
--- Previously updated : 01/31/2018----
-# Access Azure Data Lake Storage Gen1 from VMs within an Azure VNET
-Azure Data Lake Storage Gen1 is a PaaS service that runs on public Internet IP addresses. Any server that can connect to the public Internet can typically connect to Azure Data Lake Storage Gen1 endpoints as well. By default, all VMs that are in Azure VNETs can access the Internet and hence can access Azure Data Lake Storage Gen1. However, it is possible to configure VMs in a VNET to not have access to the Internet. For such VMs, access to Azure Data Lake Storage Gen1 is restricted as well. Blocking public Internet access for VMs in Azure VNETs can be done using any of the following approaches:
-
-* By configuring Network Security Groups (NSG)
-* By configuring User Defined Routes (UDR)
-* By exchanging routes via BGP (industry standard dynamic routing protocol), when ExpressRoute is used, that block access to the Internet
-
-In this article, you will learn how to enable access to Azure Data Lake Storage Gen1 from Azure VMs whose access to resources has been restricted using one of the three methods listed previously.
-
-## Enabling connectivity to Azure Data Lake Storage Gen1 from VMs with restricted connectivity
-To access Azure Data Lake Storage Gen1 from such VMs, you must configure them to access the IP address for the region where the Azure Data Lake Storage Gen1 account is available. You can identify the IP addresses for your Data Lake Storage Gen1 account regions by resolving the DNS names of your accounts (`<account>.azuredatalakestore.net`). To resolve DNS names of your accounts, you can use tools such as **nslookup**. Open a command prompt on your computer and run the following command:
-
-```console
-nslookup mydatastore.azuredatalakestore.net
-```
-
-The output resembles the following. The value of the **Address** property is the IP address associated with your Data Lake Storage Gen1 account.
-
-```output
-Non-authoritative answer:
-Name: 1434ceb1-3a4b-4bc0-9c69-a0823fd69bba-mydatastore.projectcabostore.net
-Address: 104.44.88.112
-Aliases: mydatastore.azuredatalakestore.net
-```
--
-### Enabling connectivity from VMs restricted by using NSG
-When an NSG rule is used to block access to the Internet, you can create another NSG rule that allows access to the Data Lake Storage Gen1 IP address. For more information about NSG rules, see [Network security groups overview](../virtual-network/network-security-groups-overview.md). For instructions on how to create NSGs, see [How to create a network security group](../virtual-network/tutorial-filter-network-traffic.md).
-
-### Enabling connectivity from VMs restricted by using UDR or ExpressRoute
-When routes, either UDRs or BGP-exchanged routes, are used to block access to the Internet, a special route needs to be configured so that VMs in such subnets can access Data Lake Storage Gen1 endpoints. For more information, see [User-defined routes overview](../virtual-network/virtual-networks-udr-overview.md). For instructions on creating UDRs, see [Create UDRs in Resource Manager](../virtual-network/tutorial-create-route-table-powershell.md).
-
-### Enabling connectivity from VMs restricted by using ExpressRoute
-When an ExpressRoute circuit is configured, the on-premises servers can access Data Lake Storage Gen1 through public peering. More details on configuring ExpressRoute for public peering are available at [ExpressRoute FAQs](../expressroute/expressroute-faqs.md).
-
-## See also
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-* [Securing data stored in Azure Data Lake Storage Gen1](data-lake-store-security-overview.md)
data-lake-store Data Lake Store Copy Data Azure Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-copy-data-azure-storage-blob.md
- Title: Copy data from Azure Storage blobs to Data Lake Storage Gen1
-description: Use AdlCopy tool to copy data from Azure Storage Blobs to Azure Data Lake Storage Gen1
-Previously updated : 05/29/2018
-# Copy data from Azure Storage Blobs to Azure Data Lake Storage Gen1
-
-> [!div class="op_single_selector"]
-> * [Using DistCp](data-lake-store-copy-data-wasb-distcp.md)
-> * [Using AdlCopy](data-lake-store-copy-data-azure-storage-blob.md)
->
->
-
-Data Lake Storage Gen1 provides a command-line tool, AdlCopy, to copy data from the following sources:
-
-* From Azure Storage blobs into Data Lake Storage Gen1. You can't use AdlCopy to copy data from Data Lake Storage Gen1 to Azure Storage blobs.
-* Between two Data Lake Storage Gen1 accounts.
-
-Also, you can use the AdlCopy tool in two different modes:
-
-* **Standalone**, where the tool uses Data Lake Storage Gen1 resources to perform the task.
-* **Using a Data Lake Analytics account**, where the units assigned to your Data Lake Analytics account are used to perform the copy operation. You might want to use this option when you want to perform the copy tasks in a predictable manner.
-
-## Prerequisites
-
-Before you begin this article, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **Azure Storage blobs** container with some data.
-* **A Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-* **Data Lake Analytics account (optional)** - See [Get started with Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-get-started-portal.md) for instructions on how to create a Data Lake Analytics account.
-* **AdlCopy tool**. Install the AdlCopy tool.
-
-## Syntax of the AdlCopy tool
-
-Use the following syntax to work with the AdlCopy tool:
-
-```console
-AdlCopy /Source <Blob or Data Lake Storage Gen1 source> /Dest <Data Lake Storage Gen1 destination> /SourceKey <Key for Blob account> /Account <Data Lake Analytics account> /Units <Number of Analytics units> /Pattern
-```
-
-The parameters in the syntax are described below:
-
-| Option | Description |
-| | |
-| Source |Specifies the location of the source data in the Azure storage blob. The source can be a blob container, a blob, or another Data Lake Storage Gen1 account. |
-| Dest |Specifies the Data Lake Storage Gen1 destination to copy to. |
-| SourceKey |Specifies the storage access key for the Azure storage blob source. This is required only if the source is a blob container or a blob. |
-| Account |**Optional**. Use this if you want to use an Azure Data Lake Analytics account to run the copy job. If you use the /Account option in the syntax but don't specify a Data Lake Analytics account, AdlCopy uses a default account to run the job. Also, if you use this option, you must add the source (Azure Storage Blob) and destination (Azure Data Lake Storage Gen1) as data sources for your Data Lake Analytics account. |
-| Units |Specifies the number of Data Lake Analytics units that will be used for the copy job. This option is mandatory if you use the **/Account** option to specify the Data Lake Analytics account. |
-| Pattern |Specifies a regex pattern that indicates which blobs or files to copy. AdlCopy uses case-sensitive matching. The default pattern when no pattern is specified is to copy all items. Specifying multiple file patterns is not supported. |
-
-## Use AdlCopy (as standalone) to copy data from an Azure Storage blob
-
-1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`.
-1. Run the following command to copy a specific blob from the source container to a Data Lake Storage Gen1 folder:
-
- ```console
- AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/<blob name> /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container>
- ```
-
- For example:
-
- ```console
- AdlCopy /source https://mystorage.blob.core.windows.net/mycluster/HdiSamples/HdiSamples/WebsiteLogSampleData/SampleLog/909f2b.log /dest swebhdfs://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ==
- ```
-
- >[!NOTE]
- >The syntax above specifies the file to be copied to a folder in the Data Lake Storage Gen1 account. AdlCopy tool creates a folder if the specified folder name does not exist.
-
- You will be prompted to enter the credentials for the Azure subscription under which you have your Data Lake Storage Gen1 account. You will see an output similar to the following:
-
- ```output
- Initializing Copy.
- Copy Started.
- 100% data copied.
- Finishing Copy.
- Copy Completed. 1 file copied.
- ```
-
-1. You can also copy all the blobs from one container to the Data Lake Storage Gen1 account using the following command:
-
- ```console
- AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/ /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container>
- ```
-
- For example:
-
- ```console
- AdlCopy /Source https://mystorage.blob.core.windows.net/mycluster/example/data/gutenberg/ /dest adl://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ==
- ```
-
-### Performance considerations
-
-If you are copying from an Azure Blob Storage account, you may be throttled during copy on the blob storage side. This will degrade the performance of your copy job. To learn more about the limits of Azure Blob Storage, see Azure Storage limits at [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
-
-## Use AdlCopy (as standalone) to copy data from another Data Lake Storage Gen1 account
-
-You can also use AdlCopy to copy data between two Data Lake Storage Gen1 accounts.
-
-1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`.
-1. Run the following command to copy a specific file from one Data Lake Storage Gen1 account to another.
-
- ```console
- AdlCopy /Source adl://<source_adlsg1_account>.azuredatalakestore.net/<path_to_file> /dest adl://<dest_adlsg1_account>.azuredatalakestore.net/<path>/
- ```
-
- For example:
-
- ```console
- AdlCopy /Source adl://mydatastorage.azuredatalakestore.net/mynewfolder/909f2b.log /dest adl://mynewdatalakestorage.azuredatalakestore.net/mynewfolder/
- ```
-
- > [!NOTE]
- > The syntax above specifies the file to be copied to a folder in the destination Data Lake Storage Gen1 account. AdlCopy tool creates a folder if the specified folder name does not exist.
- >
- >
-
- You will be prompted to enter the credentials for the Azure subscription under which you have your Data Lake Storage Gen1 account. You will see an output similar to the following:
-
- ```output
- Initializing Copy.
-    Copy Started.
- 100% data copied.
- Finishing Copy.
- Copy Completed. 1 file copied.
- ```
-1. The following command copies all files from a specific folder in the source Data Lake Storage Gen1 account to a folder in the destination Data Lake Storage Gen1 account.
-
- ```console
- AdlCopy /Source adl://mydatastorage.azuredatalakestore.net/mynewfolder/ /dest adl://mynewdatalakestorage.azuredatalakestore.net/mynewfolder/
- ```
-
-### Performance considerations
-
-When using AdlCopy as a standalone tool, the copy is run on shared, Azure-managed resources. The performance you may get in this environment depends on system load and available resources. This mode is best used for small transfers on an ad hoc basis. No parameters need to be tuned when using AdlCopy as a standalone tool.
-
-## Use AdlCopy (with Data Lake Analytics account) to copy data
-
-You can also use your Data Lake Analytics account to run the AdlCopy job to copy data from Azure Storage blobs to Data Lake Storage Gen1. You would typically use this option when the data to be moved is in the range of gigabytes to terabytes, and you want better and more predictable performance throughput.
-
-To use your Data Lake Analytics account with AdlCopy to copy from an Azure Storage Blob, the source (Azure Storage Blob) must be added as a data source for your Data Lake Analytics account. For instructions on adding additional data sources to your Data Lake Analytics account, see [Manage Data Lake Analytics account data sources](../data-lake-analytics/data-lake-analytics-manage-use-portal.md#manage-data-sources).
-
-> [!NOTE]
-> If you're copying from an Azure Data Lake Storage Gen1 account as the source using a Data Lake Analytics account, you don't need to associate the Data Lake Storage Gen1 account with the Data Lake Analytics account. You need to associate the source store with the Data Lake Analytics account only when the source is an Azure Storage account.
->
->
-
-Run the following command to copy from an Azure Storage blob to a Data Lake Storage Gen1 account using Data Lake Analytics account:
-
-```console
-AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/<blob name> /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container> /Account <data_lake_analytics_account> /Units <number_of_data_lake_analytics_units_to_be_used>
-```
-
-For example:
-
-```console
-AdlCopy /Source https://mystorage.blob.core.windows.net/mycluster/example/data/gutenberg/ /dest swebhdfs://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ== /Account mydatalakeanalyticaccount /Units 2
-```
-
-Similarly, run the following command to copy all files from a specific folder in the source Data Lake Storage Gen1 account to a folder in the destination Data Lake Storage Gen1 account using Data Lake Analytics account:
-
-```console
-AdlCopy /Source adl://mysourcedatalakestorage.azuredatalakestore.net/mynewfolder/ /dest adl://mydestdatastorage.azuredatalakestore.net/mynewfolder/ /Account mydatalakeanalyticaccount /Units 2
-```
-
-### Performance considerations
-
-When copying data in the range of terabytes, using AdlCopy with your own Azure Data Lake Analytics account provides better and more predictable performance. The parameter to tune is the number of Azure Data Lake Analytics units to use for the copy job. Increasing the number of units increases the performance of your copy job. Each file to be copied can use at most one unit, so specifying more units than the number of files being copied doesn't increase performance.
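-
-To illustrate the rule, the following is a minimal sketch (pure arithmetic, not part of AdlCopy itself) that caps the requested number of units at the number of files to be copied:
-
-```python
-# Each file being copied can use at most one Data Lake Analytics unit,
-# so requesting more units than files yields no additional benefit.
-def effective_units(requested_units: int, file_count: int) -> int:
-    return min(requested_units, file_count)
-
-print(effective_units(100, 32))  # 32 - extra units beyond 32 are not used
-```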
-
-## Use AdlCopy to copy data using pattern matching
-
-In this section, you learn how to use AdlCopy to copy data from a source (the example below uses an Azure Storage blob) to a destination Data Lake Storage Gen1 account using pattern matching. For example, you can use the steps below to copy all files with the .csv extension from the source blob to the destination.
-
-1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`.
-1. Run the following command to copy all files with the *.csv extension from the source container to a Data Lake Storage Gen1 folder:
-
- ```console
- AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/<blob name> /dest swebhdfs://<dest_adlsg1_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container> /Pattern *.csv
- ```
-
- For example:
-
- ```console
- AdlCopy /source https://mystorage.blob.core.windows.net/mycluster/HdiSamples/HdiSamples/FoodInspectionData/ /dest adl://mydatalakestorage.azuredatalakestore.net/mynewfolder/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ== /Pattern *.csv
- ```
-
-## Billing
-
-* If you use the AdlCopy tool as a standalone tool, you're billed for egress costs for moving data if the source Azure Storage account isn't in the same region as the Data Lake Storage Gen1 account.
-* If you use the AdlCopy tool with your Data Lake Analytics account, standard [Data Lake Analytics billing rates](https://azure.microsoft.com/pricing/details/data-lake-analytics/) will apply.
-
-## Considerations for using AdlCopy
-
-* AdlCopy (version 1.0.5) supports copying data from sources that collectively contain thousands of files and folders. However, if you encounter issues copying a large dataset, you can distribute the files/folders into different subfolders and use the paths to those subfolders as the source instead.
-
-## Performance considerations for using AdlCopy
-
-AdlCopy supports copying data containing thousands of files and folders. However, if you encounter issues copying a large dataset, you can distribute the files/folders into smaller subfolders. AdlCopy was built for ad hoc copies. If you're trying to copy data on a recurring basis, consider using [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md), which provides full management of the copy operations.
-
-## Release notes
-
-* 1.0.13 - If you're copying data to the same Azure Data Lake Storage Gen1 account across multiple AdlCopy commands, you no longer need to reenter your credentials for each run. AdlCopy now caches that information across multiple runs.
-
-## Next steps
-
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Copy Data Wasb Distcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-copy-data-wasb-distcp.md
- Title: Copy data to and from WASB into Azure Data Lake Storage Gen1 using DistCp
-description: Use the DistCp tool to copy data to and from Azure Storage blobs to Azure Data Lake Storage Gen1
-Previously updated : 01/03/2020
-# Use DistCp to copy data between Azure Storage blobs and Azure Data Lake Storage Gen1
-
-> [!div class="op_single_selector"]
-> * [Using DistCp](data-lake-store-copy-data-wasb-distcp.md)
-> * [Using AdlCopy](data-lake-store-copy-data-azure-storage-blob.md)
->
->
-
-If you have an HDInsight cluster with access to Azure Data Lake Storage Gen1, you can use Hadoop ecosystem tools like DistCp to copy data between HDInsight cluster storage (WASB) and a Data Lake Storage Gen1 account. This article shows how to use the DistCp tool.
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md).
-* **Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). Make sure you enable Remote Desktop for the cluster.
-
-## Use DistCp from an HDInsight Linux cluster
-
-An HDInsight cluster comes with the DistCp tool, which can be used to copy data from different sources into an HDInsight cluster. If you've configured the HDInsight cluster to use Data Lake Storage Gen1 as additional storage, you can use DistCp out-of-the-box to copy data to and from a Data Lake Storage Gen1 account. In this section, we look at how to use the DistCp tool.
-
-1. From your desktop, use SSH to connect to the cluster. See [Connect to a Linux-based HDInsight cluster](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md). Run the commands from the SSH prompt.
-
-1. Verify whether you can access the Azure Storage blobs (WASB). Run the following command:
-
- ```
-    hdfs dfs -ls wasb://<container_name>@<storage_account_name>.blob.core.windows.net/
- ```
-
- The output provides a list of contents in the storage blob.
-
-1. Similarly, verify whether you can access the Data Lake Storage Gen1 account from the cluster. Run the following command:
-
- ```
- hdfs dfs -ls adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/
- ```
-
- The output provides a list of files and folders in the Data Lake Storage Gen1 account.
-
-1. Use DistCp to copy data from WASB to a Data Lake Storage Gen1 account.
-
- ```
- hadoop distcp wasb://<container_name>@<storage_account_name>.blob.core.windows.net/example/data/gutenberg adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/myfolder
- ```
-
- The command copies the contents of the **/example/data/gutenberg/** folder in WASB to **/myfolder** in the Data Lake Storage Gen1 account.
-
-1. Similarly, use DistCp to copy data from a Data Lake Storage Gen1 account to WASB.
-
- ```
- hadoop distcp adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/myfolder wasb://<container_name>@<storage_account_name>.blob.core.windows.net/example/data/gutenberg
- ```
-
- The command copies the contents of **/myfolder** in the Data Lake Storage Gen1 account to **/example/data/gutenberg/** folder in WASB.
-
-## Performance considerations while using DistCp
-
-Because the DistCp tool's lowest granularity is a single file, setting the maximum number of simultaneous copies is the most important parameter to optimize it against Data Lake Storage Gen1. You can control the number of simultaneous copies by setting the number of mappers ('m') parameter on the command line. This parameter specifies the maximum number of mappers that are used to copy data. The default value is 20.
-
-Example:
-
-```
- hadoop distcp wasb://<container_name>@<storage_account_name>.blob.core.windows.net/example/data/gutenberg adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443/myfolder -m 100
-```
-
-### How to determine the number of mappers to use
-
-Here's some guidance that you can use.
-
-* **Step 1: Determine total YARN memory** - The first step is to determine the YARN memory available to the cluster where you run the DistCp job. This information is available in the Ambari portal associated with the cluster. Navigate to YARN and view the **Configs** tab to see the YARN memory. To get the total YARN memory, multiply the YARN memory per node with the number of nodes you have in your cluster.
-
-* **Step 2: Calculate the number of mappers** - The value of **m** is equal to the quotient of total YARN memory divided by the YARN container size. The YARN container size information is also available in the Ambari portal. Navigate to YARN and view the **Configs** tab. The YARN container size is displayed in this window. The equation to arrive at the number of mappers (**m**) is:
-
- `m = (number of nodes * YARN memory for each node) / YARN container size`
-
-Example:
-
-Let's assume that you have four D14v2s nodes in the cluster and you want to transfer 10 TB of data from 10 different folders. Each of the folders contains varying amounts of data and the file sizes within each folder are different.
-
-* Total YARN memory - From the Ambari portal you determine that the YARN memory is 96 GB for a D14 node. So, the total YARN memory for a four-node cluster is:
-
- `YARN memory = 4 * 96GB = 384GB`
-
-* Number of mappers - From the Ambari portal you determine that the YARN container size is 3072 MB for a D14 cluster node. So, the number of mappers is:
-
- `m = (4 nodes * 96GB) / 3072MB = 128 mappers`
-
-If other applications are using memory, you can choose to use only a portion of your cluster's YARN memory for DistCp.
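-
-As a sanity check, you can script the same calculation. The following is a minimal sketch using the D14v2 figures from the example above; substitute the YARN memory and container size that your own Ambari portal reports.
-
-```python
-# Estimate the DistCp mapper count (the -m value) from cluster capacity.
-# Values mirror the worked example above: 4 nodes, 96 GB of YARN memory per
-# node, and a 3,072 MB YARN container size.
-nodes = 4
-yarn_memory_per_node_mb = 96 * 1024   # 96 GB expressed in MB
-yarn_container_size_mb = 3072
-
-mappers = (nodes * yarn_memory_per_node_mb) // yarn_container_size_mb
-print(mappers)  # 128
-```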
-
-### Copying large datasets
-
-When the size of the dataset to be moved is large (for example, > 1 TB) or if you have many different folders, consider using multiple DistCp jobs. There's likely no performance gain, but it spreads out the jobs so that if any job fails, you need to only restart that specific job instead of the entire job.
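-
-One way to split the work is to run one DistCp job per source folder, so a failure only requires rerunning that folder's job. The following is a minimal Python sketch that shells out to the `hadoop distcp` command shown earlier; the folder, container, and account names are placeholders.
-
-```python
-import subprocess
-
-# Launch one DistCp job per source folder. If a job fails, rerun only that job.
-# All names below are placeholders - replace them with your own values.
-source_folders = ["/example/data/folder1", "/example/data/folder2"]
-wasb_root = "wasb://<container_name>@<storage_account_name>.blob.core.windows.net"
-adl_root = "adl://<data_lake_storage_gen1_account>.azuredatalakestore.net:443"
-
-for folder in source_folders:
-    subprocess.run(
-        ["hadoop", "distcp", f"{wasb_root}{folder}", f"{adl_root}{folder}"],
-        check=True,  # raise if the copy fails so the failure is visible
-    )
-```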
-
-### Limitations
-
-* DistCp tries to create mappers that are similar in size to optimize performance. Increasing the number of mappers may not always increase performance.
-
-* DistCp is limited to only one mapper per file. Therefore, you shouldn't have more mappers than you have files. Because DistCp can assign only one mapper to a file, this limits the amount of concurrency that can be used to copy large files.
-
-* If you have a small number of large files, split them into 256-MB file chunks to give you more potential concurrency.
-
-* If you're copying from an Azure Blob storage account, your copy job may be throttled on the Blob storage side. This degrades the performance of your copy job. To learn more about the limits of Azure Blob storage, see Azure Storage limits at [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
-
-## See also
-
-* [Copy data from Azure Storage blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Data Operations Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-operations-net-sdk.md
- Title: .NET SDK - Filesystem operations on Data Lake Storage Gen1 - Azure
-description: Use the Azure Data Lake Storage Gen1 .NET SDK for filesystem operations on Data Lake Storage Gen1 such as create folders, etc.
-Previously updated : 01/03/2020
-# Filesystem operations on Data Lake Storage Gen1 using the .NET SDK
-
-> [!div class="op_single_selector"]
-> * [.NET SDK](data-lake-store-data-operations-net-sdk.md)
-> * [Java SDK](data-lake-store-get-started-java-sdk.md)
-> * [REST API](data-lake-store-data-operations-rest-api.md)
-> * [Python](data-lake-store-data-operations-python.md)
->
->
-
-In this article, you learn how to perform filesystem operations on Data Lake Storage Gen1 using the .NET SDK. Filesystem operations include creating folders in a Data Lake Storage Gen1 account, uploading files, downloading files, etc.
-
-For instructions on how to do account management operations on Data Lake Storage Gen1 using the .NET SDK, see [Account management operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md).
-
-## Prerequisites
-
-* **Visual Studio 2013 or above**. The instructions in this article use Visual Studio 2019.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Azure Data Lake Storage Gen1 account**. For instructions on how to create an account, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md).
-
-## Create a .NET application
-
-The code sample available [on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted) walks you through the process of creating files in the store, concatenating files, downloading a file, and deleting some files in the store. This section of the article walks you through the main parts of the code.
-
-1. In Visual Studio, select the **File** menu, **New**, and then **Project**.
-1. Choose **Console App (.NET Framework)**, and then select **Next**.
-1. In **Project name**, enter `CreateADLApplication`, and then select **Create**.
-1. Add the NuGet packages to your project.
-
- 1. Right-click the project name in the Solution Explorer and click **Manage NuGet Packages**.
- 1. In the **NuGet Package Manager** tab, make sure that **Package source** is set to **nuget.org**. Also, make sure the **Include prerelease** check box is selected.
- 1. Search for and install the following NuGet packages:
-
- * `Microsoft.Azure.DataLake.Store` - This article uses v1.0.0.
- * `Microsoft.Rest.ClientRuntime.Azure.Authentication` - This article uses v2.3.1.
-
- Close the **NuGet Package Manager**.
-
-1. Open **Program.cs**, delete the existing code, and then include the following statements to add references to namespaces.
-
- ```
- using System;
-    using System.IO;
-    using System.Threading;
- using System.Linq;
- using System.Text;
- using System.Collections.Generic;
- using System.Security.Cryptography.X509Certificates; // Required only if you're using an Azure AD application created with certificates
-
- using Microsoft.Rest;
- using Microsoft.Rest.Azure.Authentication;
- using Microsoft.Azure.DataLake.Store;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- ```
-
-1. Declare the variables as shown below, and provide the values for the placeholders. Also, make sure the local path and file name you provide here exist on the computer.
-
- ```
- namespace SdkSample
- {
- class Program
- {
- private static string _adlsg1AccountName = "<DATA-LAKE-STORAGE-GEN1-NAME>.azuredatalakestore.net";
- }
- }
- ```
-
-In the remaining sections of the article, you can see how to use the available .NET methods to do operations such as authentication, file upload, etc.
-
-## Authentication
-
-* For end-user authentication for your application, see [End-user authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md).
-* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md).
-
-## Create client object
-
-The following snippet creates the Data Lake Storage Gen1 filesystem client object, which is used to issue requests to the service.
-
-```
-// Create client objects
-AdlsClient client = AdlsClient.CreateClient(_adlsg1AccountName, adlCreds);
-```
-
-## Create a file and directory
-
-Add the following snippet to your application. This snippet creates a file and any parent directories that don't exist.
-
-```
-// Create a file - automatically creates any parent directories that don't exist
-// The AdlsOutputStream preserves record boundaries - it does not break records while writing to the store
-
-using (var stream = client.CreateFile(fileName, IfExists.Overwrite))
-{
- byte[] textByteArray = Encoding.UTF8.GetBytes("This is test data to write.\r\n");
- stream.Write(textByteArray, 0, textByteArray.Length);
-
- textByteArray = Encoding.UTF8.GetBytes("This is the second line.\r\n");
- stream.Write(textByteArray, 0, textByteArray.Length);
-}
-```
-
-## Append to a file
-
-The following snippet appends data to an existing file in a Data Lake Storage Gen1 account.
-
-```
-// Append to existing file
-
-using (var stream = client.GetAppendStream(fileName))
-{
- byte[] textByteArray = Encoding.UTF8.GetBytes("This is the added line.\r\n");
- stream.Write(textByteArray, 0, textByteArray.Length);
-}
-```
-
-## Read a file
-
-The following snippet reads the contents of a file in Data Lake Storage Gen1.
-
-```
-//Read file contents
-
-using (var readStream = new StreamReader(client.GetReadStream(fileName)))
-{
- string line;
- while ((line = readStream.ReadLine()) != null)
- {
- Console.WriteLine(line);
- }
-}
-```
-
-## Get file properties
-
-The following snippet returns the properties associated with a file or a directory.
-
-```
-// Get file properties
-var directoryEntry = client.GetDirectoryEntry(fileName);
-PrintDirectoryEntry(directoryEntry);
-```
-
-The definition of the `PrintDirectoryEntry` method is available as part of the sample [on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted).
-
-## Rename a file
-
-The following snippet renames an existing file in a Data Lake Storage Gen1 account.
-
-```
-// Rename a file
-string destFilePath = "/Test/testRenameDest3.txt";
-client.Rename(fileName, destFilePath, true);
-```
-
-## Enumerate a directory
-
-The following snippet enumerates directories in a Data Lake Storage Gen1 account.
-
-```
-// Enumerate directory
-foreach (var entry in client.EnumerateDirectory("/Test"))
-{
- PrintDirectoryEntry(entry);
-}
-```
-
-The definition of the `PrintDirectoryEntry` method is available as part of the sample [on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted).
-
-## Delete directories recursively
-
-The following snippet deletes a directory, and all its subdirectories, recursively.
-
-```
-// Delete a directory and all its subdirectories and files
-client.DeleteRecursive("/Test");
-```
-
-## Samples
-
-Here are a few samples that show how to use the Data Lake Storage Gen1 Filesystem SDK.
-
-* [Basic sample on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-get-started/tree/master/AdlsSDKGettingStarted)
-* [Advanced sample on GitHub](https://github.com/Azure-Samples/data-lake-store-adls-dot-net-samples)
-
-## See also
-
-* [Account management operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md)
-* [Data Lake Storage Gen1 .NET SDK Reference](/dotnet/api/overview/azure/data-lake-store)
-
-## Next steps
-
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
data-lake-store Data Lake Store Data Operations Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-operations-python.md
- Title: 'Python: Filesystem operations on Azure Data Lake Storage Gen1 | Microsoft Docs'
-description: Learn how to use Python SDK to work with the Data Lake Storage Gen1 file system.
-Previously updated : 05/29/2018
-# Filesystem operations on Azure Data Lake Storage Gen1 using Python
-> [!div class="op_single_selector"]
-> * [.NET SDK](data-lake-store-data-operations-net-sdk.md)
-> * [Java SDK](data-lake-store-get-started-java-sdk.md)
-> * [REST API](data-lake-store-data-operations-rest-api.md)
-> * [Python](data-lake-store-data-operations-python.md)
->
->
-
-In this article, you learn how to use Python SDK to perform filesystem operations on Azure Data Lake Storage Gen1. For instructions on how to perform account management operations on Data Lake Storage Gen1 using Python, see [Account management operations on Data Lake Storage Gen1 using Python](data-lake-store-get-started-python.md).
-
-## Prerequisites
-
-* **Python**. You can download Python from [here](https://www.python.org/downloads/). This article uses Python 3.6.2.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Azure Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md).
-
-## Install the modules
-
-To work with Data Lake Storage Gen1 using Python, you need to install three modules.
-
-* The `azure-mgmt-resource` module, which includes Azure modules for Active Directory, etc.
-* The `azure-mgmt-datalake-store` module, which includes the Azure Data Lake Storage Gen1 account management operations. For more information on this module, see the [azure-mgmt-datalake-store module reference](/python/api/azure-mgmt-datalake-store/).
-* The `azure-datalake-store` module, which includes the Azure Data Lake Storage Gen1 filesystem operations. For more information on this module, see the [azure-datalake-store file-system module reference](/python/api/azure-datalake-store/azure.datalake.store.core/).
-
-Use the following commands to install the modules.
-
-```console
-pip install azure-mgmt-resource
-pip install azure-mgmt-datalake-store
-pip install azure-datalake-store
-```
-
-## Create a new Python application
-
-1. In the IDE of your choice create a new Python application, for example, **mysample.py**.
-
-2. Add the following lines to import the required modules
-
- ```python
- ## Use this only for Azure AD service-to-service authentication
- from azure.common.credentials import ServicePrincipalCredentials
-
- ## Use this only for Azure AD end-user authentication
- from azure.common.credentials import UserPassCredentials
-
- ## Use this only for Azure AD multi-factor authentication
- from msrestazure.azure_active_directory import AADTokenCredentials
-
- ## Required for Azure Data Lake Storage Gen1 account management
- from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import DataLakeStoreAccount
-
- ## Required for Azure Data Lake Storage Gen1 filesystem management
- from azure.datalake.store import core, lib, multithread
-
- ## Common Azure imports
- from azure.mgmt.resource.resources import ResourceManagementClient
- from azure.mgmt.resource.resources.models import ResourceGroup
-
- ## Use these as needed for your application
- import logging, getpass, pprint, uuid, time
- ```
-
-3. Save changes to mysample.py.
-
-## Authentication
-
-In this section, we talk about the different ways to authenticate with Microsoft Entra ID. The options available are:
-
-* For end-user authentication for your application, see [End-user authentication with Data Lake Storage Gen1 using Python](data-lake-store-end-user-authenticate-python.md).
-* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using Python](data-lake-store-service-to-service-authenticate-python.md).
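-
-Whichever option you choose, the result is a credentials object (named `adlCreds` in the snippets that follow). As a reference only, the following is a minimal sketch of the service-to-service flow with a client secret; see the linked article for the full walkthrough, and treat the tenant ID, client ID, and client secret values as placeholders.
-
-```python
-from azure.datalake.store import lib
-
-## Service-to-service authentication with a client secret (placeholder values)
-adlCreds = lib.auth(
-    tenant_id='FILL-IN-HERE',
-    client_id='FILL-IN-HERE',
-    client_secret='FILL-IN-HERE',
-    resource='https://datalake.azure.net/')
-```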
-
-## Create filesystem client
-
-The following snippet first declares variables for your subscription and Data Lake Storage Gen1 account details. It then creates a filesystem client object, which is used for the filesystem operations in the rest of this article.
-
-```python
-## Declare variables
-subscriptionId = 'FILL-IN-HERE'
-adlsAccountName = 'FILL-IN-HERE'
-
-## Create a filesystem client object
-adlsFileSystemClient = core.AzureDLFileSystem(adlCreds, store_name=adlsAccountName)
-```
-
-## Create a directory
-
-```python
-## Create a directory
-adlsFileSystemClient.mkdir('/mysampledirectory')
-```
-
-## Upload a file
-
-```python
-## Upload a file
-multithread.ADLUploader(adlsFileSystemClient, lpath='C:\\data\\mysamplefile.txt', rpath='/mysampledirectory/mysamplefile.txt', nthreads=64, overwrite=True, buffersize=4194304, blocksize=4194304)
-```
--
-## Download a file
-
-```python
-## Download a file
-multithread.ADLDownloader(adlsFileSystemClient, lpath='C:\\data\\mysamplefile.txt.out', rpath='/mysampledirectory/mysamplefile.txt', nthreads=64, overwrite=True, buffersize=4194304, blocksize=4194304)
-```
-
-## Delete a directory
-
-```python
-## Delete a directory
-adlsFileSystemClient.rm('/mysampledirectory', recursive=True)
-```
-
-## Next steps
-* [Account management operations on Data Lake Storage Gen1 using Python](data-lake-store-get-started-python.md).
-
-## See also
-
-* [Azure Data Lake Storage Gen1 Python (Filesystem) Reference](/python/api/azure-datalake-store/azure.datalake.store.core)
-* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
data-lake-store Data Lake Store Data Operations Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-operations-rest-api.md
- Title: 'REST API: Filesystem operations on Azure Data Lake Storage Gen1 | Microsoft Docs'
-description: Use WebHDFS REST APIs to perform filesystem operations on Azure Data Lake Storage Gen1
-Previously updated : 05/29/2018
-# Filesystem operations on Azure Data Lake Storage Gen1 using REST API
-> [!div class="op_single_selector"]
-> * [.NET SDK](data-lake-store-data-operations-net-sdk.md)
-> * [Java SDK](data-lake-store-get-started-java-sdk.md)
-> * [REST API](data-lake-store-data-operations-rest-api.md)
-> * [Python](data-lake-store-data-operations-python.md)
->
->
-
-In this article, you learn how to use WebHDFS REST APIs and Data Lake Storage Gen1 REST APIs to perform filesystem operations on Azure Data Lake Storage Gen1. For instructions on how to perform account management operations on Data Lake Storage Gen1 using REST API, see [Account management operations on Data Lake Storage Gen1 using REST API](data-lake-store-get-started-rest-api.md).
-
-## Prerequisites
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Azure Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md).
-
-* **[cURL](https://curl.haxx.se/)**. This article uses cURL to demonstrate how to make REST API calls against a Data Lake Storage Gen1 account.
-
-<a name='how-do-i-authenticate-using-azure-active-directory'></a>
-
-## How do I authenticate using Microsoft Entra ID?
-You can use two approaches to authenticate using Microsoft Entra ID.
-
-* For end-user authentication for your application (interactive), see [End-user authentication with Data Lake Storage Gen1 using REST API](data-lake-store-end-user-authenticate-rest-api.md).
-* For service-to-service authentication for your application (non-interactive), see [Service-to-service authentication with Data Lake Storage Gen1 using REST API](data-lake-store-service-to-service-authenticate-rest-api.md).
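-
-For example, for the non-interactive (service-to-service) flow, the bearer token used in the cURL commands below is requested from the Microsoft Entra token endpoint. The following is a minimal Python sketch using the `requests` library; the tenant ID, client ID, and client secret values are placeholders.
-
-```python
-import requests
-
-## Request a bearer token for Data Lake Storage Gen1 (service-to-service flow).
-## Replace the placeholder values with your own tenant and application details.
-token_url = "https://login.microsoftonline.com/<TENANT-ID>/oauth2/token"
-payload = {
-    "grant_type": "client_credentials",
-    "client_id": "<CLIENT-ID>",
-    "client_secret": "<CLIENT-SECRET>",
-    "resource": "https://datalake.azure.net/",
-}
-
-response = requests.post(token_url, data=payload)
-response.raise_for_status()
-access_token = response.json()["access_token"]
-```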
--
-## Create folders
-This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Make_a_Directory).
-
-Use the following cURL command. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -X PUT -H "Authorization: Bearer <REDACTED>" -d "" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/?op=MKDIRS'
-```
-
-In the preceding command, replace \<`REDACTED`\> with the authorization token you retrieved earlier. This command creates a directory called **mytempdir** under the root folder of your Data Lake Storage Gen1 account.
-
-If the operation completes successfully, you should see a response like the following snippet:
-
-```output
-{"boolean":true}
-```
-
-## List folders
-This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#List_a_Directory).
-
-Use the following cURL command. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -X GET -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS'
-```
-
-In the preceding command, replace \<`REDACTED`\> with the authorization token you retrieved earlier.
-
-If the operation completes successfully, you should see a response like the following snippet:
-
-```output
-{
-"FileStatuses": {
- "FileStatus": [{
- "length": 0,
- "pathSuffix": "mytempdir",
- "type": "DIRECTORY",
- "blockSize": 268435456,
- "accessTime": 1458324719512,
- "modificationTime": 1458324719512,
- "replication": 0,
- "permission": "777",
- "owner": "<GUID>",
- "group": "<GUID>"
- }]
-}
-}
-```
-
-## Upload data
-This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File).
-
-Use the following cURL command. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -X PUT -L -T 'C:\temp\list.txt' -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/list.txt?op=CREATE'
-```
-
-In the preceding syntax, the **-T** parameter specifies the location of the file you're uploading.
-
-The output is similar to the following snippet:
-
-```output
-HTTP/1.1 307 Temporary Redirect
-...
-Location: https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/list.txt?op=CREATE&write=true
-...
-Content-Length: 0
-
-HTTP/1.1 100 Continue
-
-HTTP/1.1 201 Created
-...
-```
-
-## Read data
-This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File).
-
-Reading data from a Data Lake Storage Gen1 account is a two-step process.
-
-* You first submit a GET request against the endpoint `https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=OPEN`. This call returns a location to submit the next GET request to.
-* You then submit the GET request against the endpoint `https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=OPEN&read=true`. This call displays the contents of the file.
-
-However, because there's no difference in the input parameters between the first and the second step, you can use the `-L` parameter to submit the first request. The `-L` option essentially combines the two requests into one and makes cURL redo the request at the new location. Finally, the output from all the request calls is displayed, as shown in the following snippet. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -L -X GET -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=OPEN'
-```
-
-You should see an output similar to the following snippet:
-
-```output
-HTTP/1.1 307 Temporary Redirect
-...
-Location: https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/somerandomfile.txt?op=OPEN&read=true
-...
-
-HTTP/1.1 200 OK
-...
-
-Hello, Data Lake Store user!
-```
-
-## Rename a file
-This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Rename_a_FileDirectory).
-
-Use the following cURL command to rename a file. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -X PUT -H "Authorization: Bearer <REDACTED>" -d "" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile.txt?op=RENAME&destination=/mytempdir/myinputfile1.txt'
-```
-
-You should see an output similar to the following snippet:
-
-```output
-HTTP/1.1 200 OK
-...
-
-{"boolean":true}
-```
-
-## Delete a file
-This operation is based on the WebHDFS REST API call defined [here](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Delete_a_FileDirectory).
-
-Use the following cURL command to delete a file. Replace **\<yourstorename>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -X DELETE -H "Authorization: Bearer <REDACTED>" 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/mytempdir/myinputfile1.txt?op=DELETE'
-```
-
-You should see an output like the following:
-
-```output
-HTTP/1.1 200 OK
-...
-
-{"boolean":true}
-```
-
-## Next steps
-* [Account management operations on Data Lake Storage Gen1 using REST API](data-lake-store-get-started-rest-api.md).
-
-## See also
-* [Azure Data Lake Storage Gen1 REST API Reference](/rest/api/datalakestore/)
-* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
data-lake-store Data Lake Store Data Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-scenarios.md
- Title: Data scenarios involving Data Lake Storage Gen1 | Microsoft Docs
-description: Understand the different scenarios and tools with which data can be ingested, processed, downloaded, and visualized in Data Lake Storage Gen1 (previously known as Azure Data Lake Store)
-Previously updated : 08/05/2022
-# Using Azure Data Lake Storage Gen1 for big data requirements
--
-There are four key stages in big data processing:
-
-* Ingesting large amounts of data into a data store, in real time or in batches
-* Processing the data
-* Downloading the data
-* Visualizing the data
-
-In this article, we look at these stages with respect to Azure Data Lake Storage Gen1 to understand the options and tools available to meet your big data needs.
-
-## Ingest data into Data Lake Storage Gen1
-This section highlights the different sources of data and the different ways in which that data can be ingested into a Data Lake Storage Gen1 account.
-
-![Ingest data into Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/ingest-data.png "Ingest data into Data Lake Storage Gen1")
-
-### Ad hoc data
-This represents smaller data sets that are used for prototyping a big data application. There are different ways of ingesting ad hoc data depending on the source of the data.
-
-| Data Source | Ingest it using |
-| | |
-| Local computer |<ul> <li>[Azure portal](data-lake-store-get-started-portal.md)</li> <li>[Azure PowerShell](data-lake-store-get-started-powershell.md)</li> <li>[Azure CLI](data-lake-store-get-started-cli-2.0.md)</li> <li>[Using Data Lake Tools for Visual Studio](../data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md) </li></ul> |
-| Azure Storage Blob |<ul> <li>[Azure Data Factory](../data-factory/connector-azure-data-lake-store.md)</li> <li>[AdlCopy tool](data-lake-store-copy-data-azure-storage-blob.md)</li><li>[DistCp running on HDInsight cluster](data-lake-store-copy-data-wasb-distcp.md)</li> </ul> |
-
-### Streamed data
-This represents data that can be generated by various sources such as applications, devices, and sensors. This data can be ingested into Data Lake Storage Gen1 by a variety of tools. These tools usually capture and process the data on an event-by-event basis in real time, and then write the events in batches into Data Lake Storage Gen1 so that they can be further processed.
-
-Following are tools that you can use:
-
-* [Azure Stream Analytics](../stream-analytics/stream-analytics-define-outputs.md) - Events ingested into Event Hubs can be written to Azure Data Lake Storage Gen1 using an Azure Data Lake Storage Gen1 output.
-* [EventProcessorHost](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) – You can receive events from Event Hubs and then write them to Data Lake Storage Gen1 using the [Data Lake Storage Gen1 .NET SDK](data-lake-store-get-started-net-sdk.md).
-
-### Relational data
-You can also source data from relational databases. Over time, relational databases collect huge amounts of data, which can provide key insights if processed through a big data pipeline. You can use the following tools to move such data into Data Lake Storage Gen1.
-
-* [Apache Sqoop](data-lake-store-data-transfer-sql-sqoop.md)
-* [Azure Data Factory](../data-factory/copy-activity-overview.md)
-
-### Web server log data (upload using custom applications)
-This type of dataset is specifically called out because analysis of web server log data is a common use case for big data applications and requires large volumes of log files to be uploaded to Data Lake Storage Gen1. You can use any of the following tools to write your own scripts or applications to upload such data.
-
-* [Azure CLI](data-lake-store-get-started-cli-2.0.md)
-* [Azure PowerShell](data-lake-store-get-started-powershell.md)
-* [Azure Data Lake Storage Gen1 .NET SDK](data-lake-store-get-started-net-sdk.md)
-* [Azure Data Factory](../data-factory/copy-activity-overview.md)
-
-For uploading web server log data, and also for uploading other kinds of data (for example, social sentiment data), writing your own custom scripts or applications is a good approach because it gives you the flexibility to include your data upload component as part of your larger big data application. In some cases, this code may take the form of a script or simple command-line utility. In other cases, the code may be used to integrate big data processing into a business application or solution.
-
-### Data associated with Azure HDInsight clusters
-Most HDInsight cluster types (Hadoop, HBase, Storm) support Data Lake Storage Gen1 as a data storage repository. HDInsight clusters access data from Azure Storage Blobs (WASB). For better performance, you can copy the data from WASB into a Data Lake Storage Gen1 account associated with the cluster. You can use the following tools to copy the data.
-
-* [Apache DistCp](data-lake-store-copy-data-wasb-distcp.md)
-* [AdlCopy Service](data-lake-store-copy-data-azure-storage-blob.md)
-* [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md)
-
-### Data stored in on-premises or IaaS Hadoop clusters
-Large amounts of data may be stored in existing Hadoop clusters, locally on machines using HDFS. The Hadoop clusters may be in an on-premises deployment or may be within an IaaS cluster on Azure. You might need to copy such data to Azure Data Lake Storage Gen1, either as a one-off operation or on a recurring basis. There are various options that you can use to achieve this. Below is a list of alternatives and the associated trade-offs.
-
-| Approach | Details | Advantages | Considerations |
-| | | | |
-| Use Azure Data Factory (ADF) to copy data directly from Hadoop clusters to Azure Data Lake Storage Gen1 |[ADF supports HDFS as a data source](../data-factory/connector-hdfs.md) |ADF provides out-of-the-box support for HDFS and first class end-to-end management and monitoring |Requires Data Management Gateway to be deployed on-premises or in the IaaS cluster |
-| Export data from Hadoop as files. Then copy the files to Azure Data Lake Storage Gen1 using appropriate mechanism. |You can copy files to Azure Data Lake Storage Gen1 using: <ul><li>[Azure PowerShell for Windows OS](data-lake-store-get-started-powershell.md)</li><li>[Azure CLI](data-lake-store-get-started-cli-2.0.md)</li><li>Custom app using any Data Lake Storage Gen1 SDK</li></ul> |Quick to get started. Can do customized uploads |Multi-step process that involves multiple technologies. Management and monitoring will grow to be a challenge over time given the customized nature of the tools |
-| Use Distcp to copy data from Hadoop to Azure Storage. Then copy data from Azure Storage to Data Lake Storage Gen1 using appropriate mechanism. |You can copy data from Azure Storage to Data Lake Storage Gen1 using: <ul><li>[Azure Data Factory](../data-factory/copy-activity-overview.md)</li><li>[AdlCopy tool](data-lake-store-copy-data-azure-storage-blob.md)</li><li>[Apache DistCp running on HDInsight clusters](data-lake-store-copy-data-wasb-distcp.md)</li></ul> |You can use open-source tools. |Multi-step process that involves multiple technologies |
-
-### Really large datasets
-For uploading datasets that range into several terabytes, the methods described above can sometimes be slow and costly. In such cases, you can use the options below.
-
-* **Using Azure ExpressRoute**. Azure ExpressRoute lets you create private connections between Azure datacenters and infrastructure on your premises. This provides a reliable option for transferring large amounts of data. For more information, see [Azure ExpressRoute documentation](../expressroute/expressroute-introduction.md).
-* **"Offline" upload of data**. If using Azure ExpressRoute is not feasible for any reason, you can use [Azure Import/Export service](../import-export/storage-import-export-service.md) to ship hard disk drives with your data to an Azure data center. Your data is first uploaded to Azure Storage Blobs. You can then use [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md) or [AdlCopy tool](data-lake-store-copy-data-azure-storage-blob.md) to copy data from Azure Storage Blobs to Data Lake Storage Gen1.
-
- > [!NOTE]
- > While using the Import/Export service, the file sizes on the disks that you ship to Azure data center should not be greater than 195 GB.
- >
- >
-
-## Process data stored in Data Lake Storage Gen1
-Once the data is available in Data Lake Storage Gen1 you can run analysis on that data using the supported big data applications. Currently, you can use Azure HDInsight and Azure Data Lake Analytics to run data analysis jobs on the data stored in Data Lake Storage Gen1.
-
-![Analyze data in Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/analyze-data.png "Analyze data in Data Lake Storage Gen1")
-
-You can look at the following examples.
-
-* [Create an HDInsight cluster with Data Lake Storage Gen1 as storage](data-lake-store-hdinsight-hadoop-use-portal.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-
-## Download data from Data Lake Storage Gen1
-You might also want to download or move data from Azure Data Lake Storage Gen1 for scenarios such as:
-
-* Move data to other repositories to interface with your existing data processing pipelines. For example, you might want to move data from Data Lake Storage Gen1 to Azure SQL Database or SQL Server.
-* Download data to your local computer for processing in IDE environments while building application prototypes.
-
-![Egress data from Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/egress-data.png "Egress data from Data Lake Storage Gen1")
-
-In such cases, you can use any of the following options:
-
-* [Apache Sqoop](data-lake-store-data-transfer-sql-sqoop.md)
-* [Azure Data Factory](../data-factory/copy-activity-overview.md)
-* [Apache DistCp](data-lake-store-copy-data-wasb-distcp.md)
-
-You can also use the following methods to write your own script/application to download data from Data Lake Storage Gen1.
-
-* [Azure CLI](data-lake-store-get-started-cli-2.0.md)
-* [Azure PowerShell](data-lake-store-get-started-powershell.md)
-* [Azure Data Lake Storage Gen1 .NET SDK](data-lake-store-get-started-net-sdk.md)
-
-## Visualize data in Data Lake Storage Gen1
-You can use a mix of services to create visual representations of data stored in Data Lake Storage Gen1.
-
-![Visualize data in Data Lake Storage Gen1](./media/data-lake-store-data-scenarios/visualize-data.png "Visualize data in Data Lake Storage Gen1")
-
-* You can start by using [Azure Data Factory to move data from Data Lake Storage Gen1 to Azure Synapse Analytics](../data-factory/copy-activity-overview.md)
-* After that, you can [integrate Power BI with Azure Synapse Analytics](/power-bi/connect-data/service-azure-sql-data-warehouse-with-direct-connect) to create visual representation of the data.
data-lake-store Data Lake Store Data Transfer Sql Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-data-transfer-sql-sqoop.md
- Title: Copy data between Data Lake Storage Gen1 and Azure SQL - Sqoop | Microsoft Docs
-description: Use Sqoop to copy data between Azure SQL Database and Azure Data Lake Storage Gen1
-Previously updated : 07/30/2019
-# Copy data between Data Lake Storage Gen1 and Azure SQL Database using Sqoop
-
-Learn how to use Apache Sqoop to import and export data between Azure SQL Database and Azure Data Lake Storage Gen1.
-
-## What is Sqoop?
-
-Big data applications are a natural choice for processing unstructured and semi-structured data, such as logs and files. However, you may also have a need to process structured data that's stored in relational databases.
-
-[Apache Sqoop](https://sqoop.apache.org/docs/1.4.4/SqoopUserGuide.html) is a tool designed to transfer data between relational databases and a big data repository, such as Data Lake Storage Gen1. You can use it to import data from a relational database management system (RDBMS) such as Azure SQL Database into Data Lake Storage Gen1. You can then transform and analyze the data using big data workloads, and then export the data back into an RDBMS. In this article, you use a database in Azure SQL Database as your relational database to import/export from.
-
-## Prerequisites
-
-Before you begin, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create the account, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-* **Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). This article assumes you have an HDInsight Linux cluster with Data Lake Storage Gen1 access.
-* **Azure SQL Database**. For instructions on how to create a database in Azure SQL Database, see [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart)
-
-## Create sample tables in the database
-
-1. To start, create two sample tables in the database. Use [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms) or Visual Studio to connect to the database and then run the following queries.
-
- **Create Table1**
-
- ```sql
- CREATE TABLE [dbo].[Table1](
- [ID] [int] NOT NULL,
- [FName] [nvarchar](50) NOT NULL,
- [LName] [nvarchar](50) NOT NULL,
- CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
- (
- [ID] ASC
- )
- ) ON [PRIMARY]
- GO
- ```
-
- **Create Table2**
-
- ```sql
- CREATE TABLE [dbo].[Table2](
- [ID] [int] NOT NULL,
- [FName] [nvarchar](50) NOT NULL,
- [LName] [nvarchar](50) NOT NULL,
- CONSTRAINT [PK_Table_2] PRIMARY KEY CLUSTERED
- (
- [ID] ASC
- )
- ) ON [PRIMARY]
- GO
- ```
-
-1. Run the following command to add some sample data to **Table1**. Leave **Table2** empty. Later, you'll import data from **Table1** into Data Lake Storage Gen1. Then, you'll export data from Data Lake Storage Gen1 into **Table2**.
-
- ```sql
- INSERT INTO [dbo].[Table1] VALUES (1,'Neal','Kell'), (2,'Lila','Fulton'), (3, 'Erna','Myers'), (4,'Annette','Simpson');
- ```
-
-## Use Sqoop from an HDInsight cluster with access to Data Lake Storage Gen1
-
-An HDInsight cluster already has the Sqoop packages available. If you've configured the HDInsight cluster to use Data Lake Storage Gen1 as additional storage, you can use Sqoop (without any configuration changes) to import/export data between a relational database such as Azure SQL Database, and a Data Lake Storage Gen1 account.
-
-1. For this article, we assume you created a Linux cluster so you should use SSH to connect to the cluster. See [Connect to a Linux-based HDInsight cluster](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-
-1. Verify whether you can access the Data Lake Storage Gen1 account from the cluster. Run the following command from the SSH prompt:
-
- ```console
- hdfs dfs -ls adl://<data_lake_storage_gen1_account>.azuredatalakestore.net/
- ```
-
- This command provides a list of files/folders in the Data Lake Storage Gen1 account.
-
-### Import data from Azure SQL Database into Data Lake Storage Gen1
-
-1. Navigate to the directory where Sqoop packages are available. Typically, this location is `/usr/hdp/<version>/sqoop/bin`.
-
-1. Import the data from **Table1** into the Data Lake Storage Gen1 account. Use the following syntax:
-
- ```console
- sqoop-import --connect "jdbc:sqlserver://<sql-database-server-name>.database.windows.net:1433;username=<username>@<sql-database-server-name>;password=<password>;database=<sql-database-name>" --table Table1 --target-dir adl://<data-lake-storage-gen1-name>.azuredatalakestore.net/Sqoop/SqoopImportTable1
- ```
-
-    The **sql-database-server-name** placeholder represents the name of the server where the database is running. The **sql-database-name** placeholder represents the actual database name.
-
- For example,
-
- ```console
- sqoop-import --connect "jdbc:sqlserver://mysqoopserver.database.windows.net:1433;username=user1@mysqoopserver;password=<password>;database=mysqoopdatabase" --table Table1 --target-dir adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1
- ```
-
-1. Verify that the data has been transferred to the Data Lake Storage Gen1 account. Run the following command:
-
- ```console
- hdfs dfs -ls adl://hdiadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/
- ```
-
- You should see the following output.
-
- ```console
- -rwxrwxrwx 0 sshuser hdfs 0 2016-02-26 21:09 adl://hdiadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/_SUCCESS
- -rwxrwxrwx 0 sshuser hdfs 12 2016-02-26 21:09 adl://hdiadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00000
- -rwxrwxrwx 0 sshuser hdfs 14 2016-02-26 21:09 adl://hdiadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00001
- -rwxrwxrwx 0 sshuser hdfs 13 2016-02-26 21:09 adl://hdiadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00002
- -rwxrwxrwx 0 sshuser hdfs 18 2016-02-26 21:09 adl://hdiadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1/part-m-00003
- ```
-
- Each **part-m-*** file corresponds to a row in the source table, **Table1**. You can view the contents of the part-m-* files to verify.
-
-### Export data from Data Lake Storage Gen1 into Azure SQL Database
-
-1. Export the data from the Data Lake Storage Gen1 account to the empty table, **Table2**, in the Azure SQL Database. Use the following syntax.
-
- ```console
- sqoop-export --connect "jdbc:sqlserver://<sql-database-server-name>.database.windows.net:1433;username=<username>@<sql-database-server-name>;password=<password>;database=<sql-database-name>" --table Table2 --export-dir adl://<data-lake-storage-gen1-name>.azuredatalakestore.net/Sqoop/SqoopImportTable1 --input-fields-terminated-by ","
- ```
-
- For example,
-
- ```console
- sqoop-export --connect "jdbc:sqlserver://mysqoopserver.database.windows.net:1433;username=user1@mysqoopserver;password=<password>;database=mysqoopdatabase" --table Table2 --export-dir adl://myadlsg1store.azuredatalakestore.net/Sqoop/SqoopImportTable1 --input-fields-terminated-by ","
- ```
-
-1. Verify that the data was uploaded to the SQL Database table. Use [SQL Server Management Studio](/azure/azure-sql/database/connect-query-ssms) or Visual Studio to connect to the Azure SQL Database and then run the following query.
-
- ```sql
- SELECT * FROM TABLE2
- ```
-
- This command should have the following output.
-
- ```output
- ID FName LName
-    -- ------- -------
- 1 Neal Kell
- 2 Lila Fulton
- 3 Erna Myers
- 4 Annette Simpson
- ```
-
-## Performance considerations while using Sqoop
-
-For information about performance tuning your Sqoop job to copy data to Data Lake Storage Gen1, see the [Sqoop performance blog post](/archive/blogs/shanyu/performance-tuning-for-hdinsight-storm-and-microsoft-azure-eventhubs).
-
-## Next steps
-
-* [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-diagnostic-logs.md
- Title: Viewing diagnostic logs for Azure Data Lake Storage Gen1 | Microsoft Docs
-description: 'Understand how to setup and access diagnostic logs for Azure Data Lake Storage Gen1 '
---- Previously updated : 03/26/2018---
-# Accessing diagnostic logs for Azure Data Lake Storage Gen1
-Learn how to enable diagnostic logging for your Azure Data Lake Storage Gen1 account and how to view the logs collected for your account.
-
-Organizations can enable diagnostic logging for their Azure Data Lake Storage Gen1 account to collect data access audit trails that provide information such as the list of users accessing the data, how frequently the data is accessed, and how much data is stored in the account. When enabled, the diagnostics and/or requests are logged on a best-effort basis. Both requests and diagnostics log entries are created only if there are requests made against the service endpoint.
-
-## Prerequisites
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **Azure Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md).
-
-## Enable diagnostic logging for your Data Lake Storage Gen1 account
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Open your Data Lake Storage Gen1 account, and from your Data Lake Storage Gen1 account blade, click **Diagnostic settings**.
-3. In the **Diagnostics settings** blade, click **Turn on diagnostics**.
-
- ![Screenshot of the Data Lake Storage Gen 1 account with the Diagnostic setting option and the Turn on diagnostics option called out.](./media/data-lake-store-diagnostic-logs/turn-on-diagnostics.png "Enable diagnostic logs")
-
-4. In the **Diagnostics settings** blade, make the following changes to configure diagnostic logging.
-
- ![Screenshot of the Diagnostic setting section with the Name text box and the Save option called out.](./media/data-lake-store-diagnostic-logs/enable-diagnostic-logs.png "Enable diagnostic logs")
-
- * For **Name**, enter a value for the diagnostic log configuration.
- * You can choose to store/process the data in different ways.
-
- * Select the option to **Archive to a storage account** to store logs to an Azure Storage account. You use this option if you want to archive the data that will be batch-processed at a later date. If you select this option you must provide an Azure Storage account to save the logs to.
-
-        * Select the option to **Stream to an event hub** to stream log data to an Azure Event Hub. Most likely you will use this option if you have a downstream processing pipeline to analyze incoming logs in real time. If you select this option, you must provide the details for the Azure Event Hub you want to use.
-
-        * Select the option to **Send to Log Analytics** to use the Azure Monitor service to analyze the generated log data. If you select this option, you must provide the details for the Log Analytics workspace that you would use to perform log analysis. See [View or analyze data collected with Azure Monitor logs search](../azure-monitor/logs/log-analytics-tutorial.md) for details on using Azure Monitor logs.
-
- * Specify whether you want to get audit logs or request logs or both.
-    * Specify the number of days for which the data must be retained. Retention is only applicable if you are using an Azure Storage account to archive log data.
- * Click **Save**.
-
-Once you have enabled diagnostic settings, you can watch the logs in the **Diagnostic Logs** tab.
-
-## View diagnostic logs for your Data Lake Storage Gen1 account
-There are two ways to view the log data for your Data Lake Storage Gen1 account.
-
-* From the Data Lake Storage Gen1 account settings view
-* From the Azure Storage account where the data is stored
-
-### Using the Data Lake Storage Gen1 Settings view
-1. From your Data Lake Storage Gen1 account **Settings** blade, click **Diagnostic Logs**.
-
- ![View diagnostic logs](./media/data-lake-store-diagnostic-logs/view-diagnostic-logs.png "View diagnostic logs")
-2. In the **Diagnostics Logs** blade, you should see the logs categorized by **Audit Logs** and **Request Logs**.
-
- * Request logs capture every API request made on the Data Lake Storage Gen1 account.
-    * Audit logs are similar to request logs but provide a much more detailed breakdown of the operations being performed on the Data Lake Storage Gen1 account. For example, a single upload API call in request logs might result in multiple "Append" operations in the audit logs.
-3. To download the logs, click the **Download** link against each log entry.
-
-### From the Azure Storage account that contains log data
-1. Open the Azure Storage account blade associated with Data Lake Storage Gen1 for logging, and then click **Blobs**. The **Blob service** blade lists two containers.
-
-    ![Screenshot of the Data Lake Storage Gen1 blade with the Blobs option selected and the Blob service blade with the names of the two containers called out.](./media/data-lake-store-diagnostic-logs/view-diagnostic-logs-storage-account.png "View diagnostic logs")
-
- * The container **insights-logs-audit** contains the audit logs.
- * The container **insights-logs-requests** contains the request logs.
-2. Within these containers, the logs are stored under the following structure.
-
- ![Screenshot of the log structure as it is stored in the container.](./media/data-lake-store-diagnostic-logs/view-diagnostic-logs-storage-account-structure.png "View diagnostic logs")
-
- As an example, the complete path to an audit log could be `https://adllogs.blob.core.windows.net/insights-logs-audit/resourceId=/SUBSCRIPTIONS/<sub-id>/RESOURCEGROUPS/myresourcegroup/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/mydatalakestorage/y=2016/m=07/d=18/h=04/m=00/PT1H.json`
-
- Similarly, the complete path to a request log could be `https://adllogs.blob.core.windows.net/insights-logs-requests/resourceId=/SUBSCRIPTIONS/<sub-id>/RESOURCEGROUPS/myresourcegroup/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/mydatalakestorage/y=2016/m=07/d=18/h=14/m=00/PT1H.json`
-
-## Understand the structure of the log data
-The audit and request logs are in a JSON format. In this section, we look at the structure of JSON for request and audit logs.
-
-### Request logs
-Here's a sample entry in the JSON-formatted request log. Each blob has one root object called **records** that contains an array of log objects.
-
-```json
-{
-"records":
- [
- . . . .
- ,
- {
- "time": "2016-07-07T21:02:53.456Z",
- "resourceId": "/SUBSCRIPTIONS/<subscription_id>/RESOURCEGROUPS/<resource_group_name>/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/<data_lake_storage_gen1_account_name>",
- "category": "Requests",
- "operationName": "GETCustomerIngressEgress",
- "resultType": "200",
- "callerIpAddress": "::ffff:1.1.1.1",
- "correlationId": "4a11c709-05f5-417c-a98d-6e81b3e29c58",
- "identity": "1808bd5f-62af-45f4-89d8-03c5e81bac30",
- "properties": {"HttpMethod":"GET","Path":"/webhdfs/v1/Samples/Outputs/Drivers.csv","RequestContentLength":0,"StoreIngressSize":0 ,"StoreEgressSize":4096,"ClientRequestId":"3b7adbd9-3519-4f28-a61c-bd89506163b8","StartTime":"2016-07-07T21:02:52.472Z","EndTime":"2016-07-07T21:02:53.456Z","QueryParameters":"api-version=<version>&op=<operationName>"}
- }
- ,
- . . . .
- ]
-}
-```
-
-#### Request log schema
-| Name | Type | Description |
-| | | |
-| time |String |The timestamp (in UTC) of the log |
-| resourceId |String |The ID of the resource that the operation took place on |
-| category |String |The log category. For example, **Requests**. |
-| operationName |String |Name of the operation that is logged. For example, getfilestatus. |
-| resultType |String |The status of the operation. For example, 200. |
-| callerIpAddress |String |The IP address of the client making the request |
-| correlationId |String |The ID of the log that can be used to group together a set of related log entries |
-| identity |Object |The identity that generated the log |
-| properties |JSON |See below for details |
-
-#### Request log properties schema
-| Name | Type | Description |
-| | | |
-| HttpMethod |String |The HTTP Method used for the operation. For example, GET. |
-| Path |String |The path the operation was performed on |
-| RequestContentLength |int |The content length of the HTTP request |
-| ClientRequestId |String |The ID that uniquely identifies this request |
-| StartTime |String |The time at which the server received the request |
-| EndTime |String |The time at which the server sent a response |
-| StoreIngressSize |Long |Size in bytes ingressed to Data Lake Store |
-| StoreEgressSize |Long |Size in bytes egressed from Data Lake Store |
-| QueryParameters |String |The HTTP query parameters. Example 1: api-version=2014-01-01&op=getfilestatus Example 2: op=APPEND&append=true&syncFlag=DATA&filesessionid=bee3355a-4925-4435-bb4d-ceea52811aeb&leaseid=bee3355a-4925-4435-bb4d-ceea52811aeb&offset=28313319&api-version=2017-08-01 |
-
-### Audit logs
-Here's a sample entry in the JSON-formatted audit log. Each blob has one root object called **records** that contains an array of log objects.
-
-```json
-{
-"records":
- [
- . . . .
- ,
- {
- "time": "2016-07-08T19:08:59.359Z",
- "resourceId": "/SUBSCRIPTIONS/<subscription_id>/RESOURCEGROUPS/<resource_group_name>/PROVIDERS/MICROSOFT.DATALAKESTORE/ACCOUNTS/<data_lake_storage_gen1_account_name>",
- "category": "Audit",
- "operationName": "SeOpenStream",
- "resultType": "0",
- "resultSignature": "0",
- "correlationId": "381110fc03534e1cb99ec52376ceebdf;Append_BrEKAmg;25.66.9.145",
- "identity": "A9DAFFAF-FFEE-4BB5-A4A0-1B6CBBF24355",
- "properties": {"StreamName":"adl://<data_lake_storage_gen1_account_name>.azuredatalakestore.net/logs.csv"}
- }
- ,
- . . . .
- ]
-}
-```
-
-#### Audit log schema
-| Name | Type | Description |
-| | | |
-| time |String |The timestamp (in UTC) of the log |
-| resourceId |String |The ID of the resource that the operation took place on |
-| category |String |The log category. For example, **Audit**. |
-| operationName |String |Name of the operation that is logged. For example, getfilestatus. |
-| resultType |String |The status of the operation. For example, 200. |
-| resultSignature |String |Additional details on the operation. |
-| correlationId |String |The ID of the log that can be used to group together a set of related log entries |
-| identity |Object |The identity that generated the log |
-| properties |JSON |See below for details |
-
-#### Audit log properties schema
-| Name | Type | Description |
-| | | |
-| StreamName |String |The path the operation was performed on |
-
-## Samples to process the log data
-When sending logs from Azure Data Lake Storage Gen1 to Azure Monitor logs (see [View or analyze data collected with Azure Monitor logs search](../azure-monitor/logs/log-analytics-tutorial.md) for details on using Azure Monitor logs), the following query returns a table that contains a list of user display names, the times of the events, and the count of events per event time, along with a visual chart. You can easily modify it to show the user GUID or other attributes:
-
-```kusto
-search *
-| where ( Type == "AzureDiagnostics" )
-| summarize count(TimeGenerated) by identity_s, TimeGenerated
-```
--
-Azure Data Lake Storage Gen1 provides a sample on how to process and analyze the log data. You can find the sample at [https://github.com/Azure/AzureDataLake/tree/master/Samples/AzureDiagnosticsSample](https://github.com/Azure/AzureDataLake/tree/master/Samples/AzureDiagnosticsSample).
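-
-If you archive the logs to an Azure Storage account instead, a minimal Python sketch like the following could summarize a downloaded PT1H.json blob by using the request log schema described earlier. This is only an illustration (the file path is a placeholder), not part of the official sample.
-
-```python
-## A minimal sketch, assuming a PT1H.json request-log blob has already been downloaded locally.
-## The file path is a placeholder.
-import json
-from collections import Counter
-
-with open('PT1H.json') as f:
-    records = json.load(f)['records']   # each blob has a root "records" array
-
-# Count requests per identity and per operation, using fields from the request log schema
-by_identity = Counter(str(r['identity']) for r in records)
-by_operation = Counter(r['operationName'] for r in records)
-
-print('Requests per identity:', dict(by_identity))
-print('Requests per operation:', dict(by_operation))
-```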
-
-## See also
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
data-lake-store Data Lake Store Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-disaster-recovery-guidance.md
- Title: Disaster recovery guidance for Azure Data Lake Storage Gen1 | Microsoft Docs
-description: Learn how to further protect your data from region-wide outages or accidental deletions beyond the locally redundant storage of Azure Data Lake Storage Gen1.
---- Previously updated : 02/21/2018---
-# High availability and disaster recovery guidance for Data Lake Storage Gen1
-
-Data Lake Storage Gen1 provides locally redundant storage (LRS). Therefore, the data in your Data Lake Storage Gen1 account is resilient to transient hardware failures within a datacenter through automated replicas. This ensures durability and high availability, meeting the Data Lake Storage Gen1 SLA. This article provides guidance on how to further protect your data from rare region-wide outages or accidental deletions.
-
-## Disaster recovery guidance
-
-It's critical for you to prepare a disaster recovery plan. Review the information in this article and these additional resources to help you create your own plan.
-
-* [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery)
-* [Azure resiliency technical guidance](/azure/architecture/framework/resiliency/app-design)
-
-### Best practice recommendations
-
-We recommend that you copy your critical data to another Data Lake Storage Gen1 account in another region with a frequency aligned to the needs of your disaster recovery plan. There are a variety of methods to copy data including [ADLCopy](data-lake-store-copy-data-azure-storage-blob.md), [Azure PowerShell](data-lake-store-get-started-powershell.md), or [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md). Azure Data Factory is a useful service for creating and deploying data movement pipelines on a recurring basis.
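-
-As one illustration of scripting such a recurring copy yourself, the following minimal Python sketch stages a folder through local disk by using the `azure-datalake-store` module. The tenant ID, account names, and paths are placeholders, and this isn't a replacement for ADLCopy or Azure Data Factory.
-
-```python
-## A minimal sketch that copies a folder from a primary Data Lake Storage Gen1 account
-## to a secondary account in another region by staging it on local disk.
-## Tenant ID, account names, and paths are placeholders.
-from azure.datalake.store import core, lib, multithread
-
-creds = lib.auth(tenant_id='FILL-IN-HERE', resource='https://datalake.azure.net/')
-
-primary = core.AzureDLFileSystem(creds, store_name='myprimaryadls')
-secondary = core.AzureDLFileSystem(creds, store_name='mysecondaryadls')
-
-# Download the critical folder from the primary account ...
-multithread.ADLDownloader(primary, rpath='/critical-data', lpath='./staging', nthreads=64, overwrite=True)
-
-# ... then upload it to the secondary account.
-multithread.ADLUploader(secondary, lpath='./staging', rpath='/critical-data', nthreads=64, overwrite=True)
-```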
-
-If a regional outage occurs, you can then access your data in the region where the data was copied. You can monitor the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to determine the Azure service status across the globe.
-
-## Data corruption or accidental deletion recovery guidance
-
-While Data Lake Storage Gen1 provides data resiliency through automated replicas, this does not prevent your application (or developers/users) from corrupting data or accidentally deleting it.
-
-To prevent accidental deletion, we recommend that you first set the correct access policies for your Data Lake Storage Gen1 account. This includes applying [Azure resource locks](../azure-resource-manager/management/lock-resources.md) to lock down important resources and applying account and file level access control using the available [Data Lake Storage Gen1 security features](data-lake-store-security-overview.md). We also recommend that you routinely create copies of your critical data using [ADLCopy](data-lake-store-copy-data-azure-storage-blob.md), [Azure PowerShell](data-lake-store-get-started-powershell.md) or [Azure Data Factory](../data-factory/connector-azure-data-lake-store.md) in another Data Lake Storage Gen1 account, folder, or Azure subscription. This can be used to recover from a data corruption or deletion incident. Azure Data Factory is a useful service for creating and deploying data movement pipelines on a recurring basis.
-
-You can also enable [diagnostic logging](data-lake-store-diagnostic-logs.md) for a Data Lake Storage Gen1 account to collect data access audit trails. The audit trails provide information about who might have deleted or updated a file.
-
-## Next steps
-
-* [Get started with Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
data-lake-store Data Lake Store Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-encryption.md
- Title: Encryption in Azure Data Lake Storage Gen1 | Microsoft Docs
-description: Encryption in Azure Data Lake Storage Gen1 helps you protect your data, implement enterprise security policies, and meet regulatory compliance requirements. This article provides an overview of the design, and discusses some of the technical aspects of implementation.
---- Previously updated : 03/26/2018---
-# Encryption of data in Azure Data Lake Storage Gen1
-
-Encryption in Azure Data Lake Storage Gen1 helps you protect your data, implement enterprise security policies, and meet regulatory compliance requirements. This article provides an overview of the design, and discusses some of the technical aspects of implementation.
-
-Data Lake Storage Gen1 supports encryption of data both at rest and in transit. For data at rest, Data Lake Storage Gen1 supports "on by default," transparent encryption. Here is what these terms mean in a bit more detail:
-
-* **On by default**: When you create a new Data Lake Storage Gen1 account, the default setting enables encryption. Thereafter, data that is stored in Data Lake Storage Gen1 is always encrypted prior to storing on persistent media. This is the behavior for all data, and it cannot be changed after an account is created.
-* **Transparent**: Data Lake Storage Gen1 automatically encrypts data prior to persisting, and decrypts data prior to retrieval. The encryption is configured and managed at the Data Lake Storage Gen1 account level by an administrator. No changes are made to the data access APIs. Thus, no changes are required in applications and services that interact with Data Lake Storage Gen1 because of encryption.
-
-Data in transit (also known as data in motion) is also always encrypted in Data Lake Storage Gen1. In addition to encrypting data prior to storing to persistent media, the data is also always secured in transit by using HTTPS. HTTPS is the only protocol that is supported for the Data Lake Storage Gen1 REST interfaces. The following diagram shows how data becomes encrypted in Data Lake Storage Gen1:
-
-![Diagram of data encryption in Data Lake Storage Gen1](./media/data-lake-store-encryption/fig1.png)
--
-## Set up encryption with Data Lake Storage Gen1
-
-Encryption for Data Lake Storage Gen1 is set up during account creation, and it is always enabled by default. You can either manage the keys yourself, or allow Data Lake Storage Gen1 to manage them for you (this is the default).
-
-For more information, see [Getting started](./data-lake-store-get-started-portal.md).
-
-## How encryption works in Data Lake Storage Gen1
-
-The following information covers how to manage master encryption keys, and it explains the three different types of keys you can use in data encryption for Data Lake Storage Gen1.
-
-### Master encryption keys
-
-Data Lake Storage Gen1 provides two modes for management of master encryption keys (MEKs). For now, assume that the master encryption key is the top-level key. Access to the master encryption key is required to decrypt any data stored in Data Lake Storage Gen1.
-
-The two modes for managing the master encryption key are as follows:
-
-* Service managed keys
-* Customer managed keys
-
-In both modes, the master encryption key is secured by storing it in Azure Key Vault. Key Vault is a fully managed, highly secure service on Azure that can be used to safeguard cryptographic keys. For more information, see [Key Vault](https://azure.microsoft.com/services/key-vault).
-
-Here is a brief comparison of capabilities provided by the two modes of managing the MEKs.
-
-| Question | Service managed keys | Customer managed keys |
-| -- | -- | |
-|How is data stored?|Always encrypted prior to being stored.|Always encrypted prior to being stored.|
-|Where is the Master Encryption Key stored?|Key Vault|Key Vault|
-|Are any encryption keys stored in the clear outside of Key Vault? |No|No|
-|Can the MEK be retrieved from Key Vault?|No. After the MEK is stored in Key Vault, it can only be used for encryption and decryption.|No. After the MEK is stored in Key Vault, it can only be used for encryption and decryption.|
-|Who owns the Key Vault instance and the MEK?|The Data Lake Storage Gen1 service|You own the Key Vault instance, which belongs in your own Azure subscription. The MEK in Key Vault can be managed by software or hardware.|
-|Can you revoke access to the MEK for the Data Lake Storage Gen1 service?|No|Yes. You can manage access control lists in Key Vault, and remove access control entries to the service identity for the Data Lake Storage Gen1 service.|
-|Can you permanently delete the MEK?|No|Yes. If you delete the MEK from Key Vault, the data in the Data Lake Storage Gen1 account cannot be decrypted by anyone, including the Data Lake Storage Gen1 service. <br><br> If you have explicitly backed up the MEK prior to deleting it from Key Vault, the MEK can be restored, and the data can then be recovered. However, if you have not backed up the MEK prior to deleting it from Key Vault, the data in the Data Lake Storage Gen1 account can never be decrypted thereafter.|
--
-Aside from this difference of who manages the MEK and the Key Vault instance in which it resides, the rest of the design is the same for both modes.
-
-It's important to remember the following when you choose the mode for the master encryption keys:
-
-* You can choose whether to use customer managed keys or service managed keys when you provision a Data Lake Storage Gen1 account.
-* After a Data Lake Storage Gen1 account is provisioned, the mode cannot be changed.
-
-### Encryption and decryption of data
-
-There are three types of keys that are used in the design of data encryption. The following table provides a summary:
-
-| Key | Abbreviation | Associated with | Storage location | Type | Notes |
-|--|--|--|-|||
-| Master Encryption Key | MEK | A Data Lake Storage Gen1 account | Key Vault | Asymmetric | It can be managed by Data Lake Storage Gen1 or you. |
-| Data Encryption Key | DEK | A Data Lake Storage Gen1 account | Persistent storage, managed by the Data Lake Storage Gen1 service | Symmetric | The DEK is encrypted by the MEK. The encrypted DEK is what is stored on persistent media. |
-| Block Encryption Key | BEK | A block of data | None | Symmetric | The BEK is derived from the DEK and the data block. |
-
-The following diagram illustrates these concepts:
-
-![Keys in data encryption](./media/data-lake-store-encryption/fig2.png)
-
-#### Pseudo algorithm when a file is to be decrypted:
-1. Check if the DEK for the Data Lake Storage Gen1 account is cached and ready for use.
- - If not, then read the encrypted DEK from persistent storage, and send it to Key Vault to be decrypted. Cache the decrypted DEK in memory. It is now ready to use.
-2. For every block of data in the file:
- - Read the encrypted block of data from persistent storage.
- - Generate the BEK from the DEK and the encrypted block of data.
- - Use the BEK to decrypt data.
--
-#### Pseudo algorithm when a block of data is to be encrypted:
-1. Check if the DEK for the Data Lake Storage Gen1 account is cached and ready for use.
- - If not, then read the encrypted DEK from persistent storage, and send it to Key Vault to be decrypted. Cache the decrypted DEK in memory. It is now ready to use.
-2. Generate a unique BEK for the block of data from the DEK.
-3. Encrypt the data block with the BEK, by using AES-256 encryption.
-4. Store the encrypted data block of data on persistent storage.
-
-> [!NOTE]
-> The DEK is always stored encrypted by the MEK, whether on persistent media or cached in memory.
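-
-Purely as an illustration of the envelope-encryption pattern in the preceding pseudo algorithms (and not the actual Data Lake Storage Gen1 implementation), the following Python sketch derives a per-block key from a data encryption key and encrypts the block with AES-256. The key-derivation and cipher-mode choices here are assumptions made only for demonstration.
-
-```python
-## Illustrative sketch of the envelope-encryption pattern described above.
-## This is NOT the Data Lake Storage Gen1 implementation; the key derivation and
-## AES-GCM mode are assumptions chosen only to demonstrate the concept.
-## Requires the 'cryptography' package (pip install cryptography).
-import os
-import hashlib
-from cryptography.hazmat.primitives.ciphers.aead import AESGCM
-
-dek = os.urandom(32)                    # data encryption key (kept encrypted by the MEK in practice)
-block = b'some block of file data'
-block_id = b'block-0000'
-
-# Derive a block encryption key (BEK) from the DEK and the block identity
-bek = hashlib.sha256(dek + block_id).digest()   # 32 bytes, used as an AES-256 key
-
-# Encrypt the block with the BEK
-nonce = os.urandom(12)
-ciphertext = AESGCM(bek).encrypt(nonce, block, None)
-
-# Decryption reverses the steps: re-derive the BEK, then decrypt
-assert AESGCM(bek).decrypt(nonce, ciphertext, None) == block
-```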
-
-## Key rotation
-
-When you are using customer-managed keys, you can rotate the MEK. To learn how to set up a Data Lake Storage Gen1 account with customer-managed keys, see [Getting started](./data-lake-store-get-started-portal.md).
-
-### Prerequisites
-
-When you set up the Data Lake Storage Gen1 account, you chose to use your own keys. This option can't be changed after the account has been created. The following steps assume that you're using customer-managed keys (that is, you chose your own keys from Key Vault).
-
-Note that if you use the default options for encryption, your data is always encrypted by using keys managed by Data Lake Storage Gen1. In this option, you don't have the ability to rotate keys, as they are managed by Data Lake Storage Gen1.
-
-### How to rotate the MEK in Data Lake Storage Gen1
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Browse to the Key Vault instance that stores your keys associated with your Data Lake Storage Gen1 account. Select **Keys**.
-
- ![Screenshot of Key Vault](./media/data-lake-store-encryption/keyvault.png)
-
-3. Select the key associated with your Data Lake Storage Gen1 account, and create a new version of this key. Note that Data Lake Storage Gen1 currently only supports key rotation to a new version of a key. It doesn't support rotating to a different key.
-
- ![Screenshot of Keys window, with New Version highlighted](./media/data-lake-store-encryption/keynewversion.png)
-
-4. Browse to the Data Lake Storage Gen1 account, and select **Encryption**.
-
- ![Screenshot of Data Lake Storage Gen1 account window, with Encryption highlighted](./media/data-lake-store-encryption/select-encryption.png)
-
-5. A message notifies you that a new version of the key is available. Click **Rotate Key** to update the key to the new version.
-
- ![Screenshot of Data Lake Storage Gen1 window with message and Rotate Key highlighted](./media/data-lake-store-encryption/rotatekey.png)
-
-This operation should take less than two minutes, and there is no expected downtime due to key rotation. After the operation is complete, the new version of the key is in use.
-
-> [!IMPORTANT]
-> After the key rotation operation is complete, the old version of the key is no longer actively used for encrypting new data. However, there may be cases where accessing older data requires the old key. To allow such older data to be read, do not delete the old key.
data-lake-store Data Lake Store End User Authenticate Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-java-sdk.md
- Title: End-user authentication - Java with Data Lake Storage Gen1 - Azure
-description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID with Java
---- Previously updated : 05/29/2018---
-# End-user authentication with Azure Data Lake Storage Gen1 using Java
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md)
-> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-end-user-authenticate-python.md)
-> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md)
->
->
--
-In this article, you learn about how to use the Java SDK to do end-user authentication with Azure Data Lake Storage Gen1. For service-to-service authentication with Data Lake Storage Gen1 using Java SDK, see [Service-to-service authentication with Data Lake Storage Gen1 using Java](data-lake-store-service-to-service-authenticate-java.md).
-
-## Prerequisites
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md).
-
-* [Maven](https://maven.apache.org/install.html). This tutorial uses Maven for build and project dependencies. Although it is possible to build without using a build system like Maven or Gradle, these systems make it much easier to manage dependencies.
-
-* (Optional) An IDE like [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) or [Eclipse](https://www.eclipse.org/downloads/) or similar.
-
-## End-user authentication
-1. Create a Maven project using [mvn archetype](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html) from the command line or using an IDE. For instructions on how to create a Java project using IntelliJ, see [here](https://www.jetbrains.com/help/idea/2016.1/creating-and-running-your-first-java-application.html). For instructions on how to create a project using Eclipse, see [here](https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2FgettingStarted%2Fqs-3.htm).
-
-2. Add the following dependencies to your Maven **pom.xml** file. Add the following snippet before the **\</project>** tag:
-
- ```xml
- <dependencies>
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-data-lake-store-sdk</artifactId>
- <version>2.2.3</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-nop</artifactId>
- <version>1.7.21</version>
- </dependency>
- </dependencies>
- ```
-
-    The first dependency is to use the Data Lake Storage Gen1 SDK (`azure-data-lake-store-sdk`) from the maven repository. The second dependency is to specify the logging framework (`slf4j-nop`) to use for this application. The Data Lake Storage Gen1 SDK uses the [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, such as Log4j, Java logging, or Logback, or no logging at all. For this example, we disable logging, so we use the **slf4j-nop** binding. To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep).
-
-3. Add the following import statements to your application.
-
- ```java
- import com.microsoft.azure.datalake.store.ADLException;
- import com.microsoft.azure.datalake.store.ADLStoreClient;
- import com.microsoft.azure.datalake.store.DirectoryEntry;
- import com.microsoft.azure.datalake.store.IfExists;
- import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider;
- import com.microsoft.azure.datalake.store.oauth2.DeviceCodeTokenProvider;
- ```
-
-4. Use the following snippet in your Java application to obtain a token for the Microsoft Entra native application you created earlier, using the `DeviceCodeTokenProvider`. Replace **FILL-IN-HERE** with the actual values for the Microsoft Entra native application.
-
- ```java
- private static String nativeAppId = "FILL-IN-HERE";
-
- AccessTokenProvider provider = new DeviceCodeTokenProvider(nativeAppId);
- ```
-
-The Data Lake Storage Gen1 SDK provides convenient methods that let you manage the security tokens needed to talk to the Data Lake Storage Gen1 account. However, the SDK does not mandate that only these methods be used. You can use any other means of obtaining a token as well, such as the [Azure AD SDK](https://github.com/AzureAD/azure-activedirectory-library-for-java) or your own custom code.
-
-## Next steps
-In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using Java SDK. You can now look at the following articles that talk about how to use the Java SDK to work with Azure Data Lake Storage Gen1.
-
-* [Data operations on Data Lake Storage Gen1 using Java SDK](data-lake-store-get-started-java-sdk.md)
data-lake-store Data Lake Store End User Authenticate Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-net-sdk.md
- Title: End-user authentication - .NET with Data Lake Storage Gen1 - Azure
-description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID with .NET SDK
---- Previously updated : 09/22/2022---
-# End-user authentication with Azure Data Lake Storage Gen1 using .NET SDK
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md)
-> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-end-user-authenticate-python.md)
-> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md)
->
->
-
-In this article, you learn about how to use the .NET SDK to do end-user authentication with Azure Data Lake Storage Gen1. For service-to-service authentication with Data Lake Storage Gen1 using .NET SDK, see [Service-to-service authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md).
-
-## Prerequisites
-* **Visual Studio 2013 or above**. The instructions below use Visual Studio 2019.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md).
-
-## Create a .NET application
-1. In Visual Studio, select the **File** menu, **New**, and then **Project**.
-2. Choose **Console App (.NET Framework)**, and then select **Next**.
-3. In **Project name**, enter `CreateADLApplication`, and then select **Create**.
-
-4. Add the NuGet packages to your project.
-
- 1. Right-click the project name in the Solution Explorer and click **Manage NuGet Packages**.
- 2. In the **NuGet Package Manager** tab, make sure that **Package source** is set to **nuget.org** and that **Include prerelease** check box is selected.
- 3. Search for and install the following NuGet packages:
-
- * `Microsoft.Azure.Management.DataLake.Store` - This tutorial uses v2.1.3-preview.
- * `Microsoft.Rest.ClientRuntime.Azure.Authentication` - This tutorial uses v2.2.12.
-
- ![Add a NuGet source](./media/data-lake-store-get-started-net-sdk/data-lake-store-install-nuget-package.png "Create a new Azure Data Lake account")
- 4. Close the **NuGet Package Manager**.
-
-5. Open **Program.cs**
-6. Replace the using statements with the following lines:
-
- ```csharp
- using System;
- using System.IO;
- using System.Linq;
- using System.Text;
- using System.Threading;
- using System.Collections.Generic;
-
- using Microsoft.Rest;
- using Microsoft.Rest.Azure.Authentication;
- using Microsoft.Azure.Management.DataLake.Store;
- using Microsoft.Azure.Management.DataLake.Store.Models;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- ```
-
-## End-user authentication
-Add this snippet in your .NET client application. Replace the placeholder values with the values retrieved from a Microsoft Entra native application (listed as prerequisite). This snippet lets you authenticate your application **interactively** with Data Lake Storage Gen1, which means you are prompted to enter your Azure credentials.
-
-For ease of use, the following snippet uses default values for client ID and redirect URI that are valid for any Azure subscription. In the following snippet, you only need to provide the value for your tenant ID. You can retrieve the Tenant ID using the instructions provided at [Get the tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application).
-
-- Replace the Main() function with the following code:
- ```csharp
- private static void Main(string[] args)
- {
- //User login via interactive popup
- string TENANT = "<AAD-directory-domain>";
- string CLIENTID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
- System.Uri ARM_TOKEN_AUDIENCE = new System.Uri(@"https://management.core.windows.net/");
- System.Uri ADL_TOKEN_AUDIENCE = new System.Uri(@"https://datalake.azure.net/");
- string MY_DOCUMENTS = System.Environment.GetFolderPath(System.Environment.SpecialFolder.MyDocuments);
- string TOKEN_CACHE_PATH = System.IO.Path.Combine(MY_DOCUMENTS, "my.tokencache");
- var tokenCache = GetTokenCache(TOKEN_CACHE_PATH);
- var armCreds = GetCreds_User_Popup(TENANT, ARM_TOKEN_AUDIENCE, CLIENTID, tokenCache);
- var adlCreds = GetCreds_User_Popup(TENANT, ADL_TOKEN_AUDIENCE, CLIENTID, tokenCache);
- }
- ```
-
-A couple of things to know about the preceding snippet:
-
-* The preceding snippet uses the helper functions `GetTokenCache` and `GetCreds_User_Popup`. The code for these helper functions is available [here on GitHub](https://github.com/Azure-Samples/data-lake-analytics-dotnet-auth-options#gettokencache).
-* To help you complete the tutorial faster, the snippet uses a native application client ID that is available by default for all Azure subscriptions. So, you can **use this snippet as-is in your application**.
-* However, if you do want to use your own Microsoft Entra domain and application client ID, you must create a Microsoft Entra native application and then use the Microsoft Entra tenant ID, client ID, and redirect URI for the application you created. See [Create an Active Directory Application for end-user authentication with Data Lake Storage Gen1](data-lake-store-end-user-authenticate-using-active-directory.md) for instructions.
-
-
-## Next steps
-In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using .NET SDK. You can now look at the following articles that talk about how to use the .NET SDK to work with Azure Data Lake Storage Gen1.
-
-* [Account management operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md)
-* [Data operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-data-operations-net-sdk.md)
data-lake-store Data Lake Store End User Authenticate Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-python.md
- Title: End-user authentication - Python with Data Lake Storage Gen1 - Azure
-description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID with Python
---- Previously updated : 05/29/2018---
-# End-user authentication with Azure Data Lake Storage Gen1 using Python
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md)
-> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-end-user-authenticate-python.md)
-> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md)
->
->
-
-In this article, you learn about how to use the Python SDK to do end-user authentication with Azure Data Lake Storage Gen1. End-user authentication can further be split into two categories:
-
-* End-user authentication without multi-factor authentication
-* End-user authentication with multi-factor authentication
-
-Both these options are discussed in this article. For service-to-service authentication with Data Lake Storage Gen1 using Python, see [Service-to-service authentication with Data Lake Storage Gen1 using Python](data-lake-store-service-to-service-authenticate-python.md).
-
-## Prerequisites
-
-* **Python**. You can download Python from [here](https://www.python.org/downloads/). This article uses Python 3.6.2.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md).
-
-## Install the modules
-
-To work with Data Lake Storage Gen1 using Python, you need to install three modules.
-
-* The `azure-mgmt-resource` module, which includes Azure modules for Active Directory, etc.
-* The `azure-mgmt-datalake-store` module, which includes the Azure Data Lake Storage Gen1 account management operations. For more information on this module, see [Azure Data Lake Storage Gen1 Management module reference](/python/api/azure-mgmt-datalake-store/).
-* The `azure-datalake-store` module, which includes the Azure Data Lake Storage Gen1 filesystem operations. For more information on this module, see [azure-datalake-store Filesystem module reference](/python/api/azure-datalake-store/azure.datalake.store.core/).
-
-Use the following commands to install the modules.
-
-```console
-pip install azure-mgmt-resource
-pip install azure-mgmt-datalake-store
-pip install azure-datalake-store
-```
-
-## Create a new Python application
-
-1. In the IDE of your choice, create a new Python application, for example, `mysample.py`.
-
-2. Add the following snippet to import the required modules.
-
- ```python
- ## Use this for Azure AD authentication
- from msrestazure.azure_active_directory import AADTokenCredentials
-
- ## Required for Azure Data Lake Storage Gen1 account management
- from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import DataLakeStoreAccount
-
- ## Required for Azure Data Lake Storage Gen1 filesystem management
- from azure.datalake.store import core, lib, multithread
-
- # Common Azure imports
- import adal
- from azure.mgmt.resource.resources import ResourceManagementClient
- from azure.mgmt.resource.resources.models import ResourceGroup
-
- ## Use these as needed for your application
- import logging, pprint, uuid, time
- ```
-
-3. Save changes to `mysample.py`.
-
-## End-user authentication with multi-factor authentication
-
-### For account management
-
-Use the following snippet to authenticate with Microsoft Entra ID for account management operations on a Data Lake Storage Gen1 account. The following snippet can be used to authenticate your application using multi-factor authentication. Provide the values below for an existing Microsoft Entra ID **native** application.
-
-```python
-authority_host_url = "https://login.microsoftonline.com"
-tenant = "FILL-IN-HERE"
-authority_url = authority_host_url + '/' + tenant
-client_id = 'FILL-IN-HERE'
-redirect = 'urn:ietf:wg:oauth:2.0:oob'
-RESOURCE = 'https://management.core.windows.net/'
-
-context = adal.AuthenticationContext(authority_url)
-code = context.acquire_user_code(RESOURCE, client_id)
-print(code['message'])
-mgmt_token = context.acquire_token_with_device_code(RESOURCE, code, client_id)
-armCreds = AADTokenCredentials(mgmt_token, client_id, resource = RESOURCE)
-```
-
-### For filesystem operations
-
-Use this to authenticate with Microsoft Entra ID for filesystem operations on a Data Lake Storage Gen1 account. The following snippet can be used to authenticate your application using multi-factor authentication. Provide the values below for an existing Microsoft Entra ID **native** application.
-
-```python
-adlCreds = lib.auth(tenant_id='FILL-IN-HERE', resource = 'https://datalake.azure.net/')
-```
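-
-Once you have both credential objects, a minimal sketch such as the following (the subscription ID and account name are placeholders) passes them to the management and filesystem clients imported earlier:
-
-```python
-## A minimal sketch using the credentials obtained above.
-## The subscription ID and account name are placeholders.
-subscription_id = 'FILL-IN-HERE'
-adls_account_name = 'FILL-IN-HERE'
-
-## Account management operations use the ARM credentials
-adlsAcctClient = DataLakeStoreAccountManagementClient(armCreds, subscription_id)
-
-## Filesystem operations use the Data Lake credentials
-adlsFileSystemClient = core.AzureDLFileSystem(adlCreds, store_name=adls_account_name)
-
-## For example, list the contents of the root folder
-print(adlsFileSystemClient.ls('/'))
-```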
-
-## End-user authentication without multi-factor authentication
-
-This is deprecated. For more information, see [Azure Authentication using Python SDK](/azure/developer/python/sdk/authentication-overview).
-
-## Next steps
-In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using Python. You can now look at the following articles that talk about how to use Python to work with Azure Data Lake Storage Gen1.
-
-* [Account management operations on Data Lake Storage Gen1 using Python](data-lake-store-get-started-python.md)
-* [Data operations on Data Lake Storage Gen1 using Python](data-lake-store-data-operations-python.md)
data-lake-store Data Lake Store End User Authenticate Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-rest-api.md
- Title: End-user authentication - REST with Data Lake Storage Gen1 - Azure
-description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID using REST API
---- Previously updated : 05/29/2018---
-# End-user authentication with Azure Data Lake Storage Gen1 using REST API
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-end-user-authenticate-java-sdk.md)
-> * [Using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-end-user-authenticate-python.md)
-> * [Using REST API](data-lake-store-end-user-authenticate-rest-api.md)
->
->
-
-In this article, you learn about how to use the REST API to do end-user authentication with Azure Data Lake Storage Gen1. For service-to-service authentication with Data Lake Storage Gen1 using REST API, see [Service-to-service authentication with Data Lake Storage Gen1 using REST API](data-lake-store-service-to-service-authenticate-rest-api.md).
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Native" Application**. You must have completed the steps in [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md).
-
-* **[cURL](https://curl.haxx.se/)**. This article uses cURL to demonstrate how to make REST API calls against a Data Lake Storage Gen1 account.
-
-## End-user authentication
-End-user authentication is the recommended approach if you want a user to log in to your application using Microsoft Entra ID. Your application is able to access Azure resources with the same level of access as the logged-in user. The user needs to provide their credentials periodically in order for your application to maintain access.
-
-The result of having the end-user login is that your application is given an access token and a refresh token. The access token gets attached to each request made to Data Lake Storage Gen1 or Data Lake Analytics, and it is valid for one hour by default. The refresh token can be used to obtain a new access token, and it is valid for up to two weeks by default, if used regularly. You can use two different approaches for end-user login.
-
-In this scenario, the application prompts the user to log in and all the operations are performed in the context of the user. Perform the following steps:
-
-1. Through your application, redirect the user to the following URL:
-
- `https://login.microsoftonline.com/<TENANT-ID>/oauth2/authorize?client_id=<APPLICATION-ID>&response_type=code&redirect_uri=<REDIRECT-URI>`
-
- > [!NOTE]
-    > \<REDIRECT-URI> needs to be encoded for use in a URL. So, for https://localhost, use `https%3A%2F%2Flocalhost`.
-
- For the purpose of this tutorial, you can replace the placeholder values in the URL above and paste it in a web browser's address bar. You will be redirected to authenticate using your Azure login. Once you successfully log in, the response is displayed in the browser's address bar. The response will be in the following format:
-
- `http://localhost/?code=<AUTHORIZATION-CODE>&session_state=<GUID>`
-
-2. Capture the authorization code from the response. For this tutorial, you can copy the authorization code from the address bar of the web browser and pass it in the POST request to the token endpoint, as shown in the following snippet:
-
- ```console
- curl -X POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token \
- -F redirect_uri=<REDIRECT-URI> \
- -F grant_type=authorization_code \
- -F resource=https://management.core.windows.net/ \
- -F client_id=<APPLICATION-ID> \
- -F code=<AUTHORIZATION-CODE>
- ```
-
- > [!NOTE]
- > In this case, the \<REDIRECT-URI> need not be encoded.
- >
- >
-
-3. The response is a JSON object that contains an access token (for example, `"access_token": "<ACCESS_TOKEN>"`) and a refresh token (for example, `"refresh_token": "<REFRESH_TOKEN>"`). Your application uses the access token when accessing Azure Data Lake Storage Gen1 and the refresh token to get another access token when an access token expires.
-
- ```json
- {"token_type":"Bearer","scope":"user_impersonation","expires_in":"3599","expires_on":"1461865782","not_before": "1461861882","resource":"https://management.core.windows.net/","access_token":"<REDACTED>","refresh_token":"<REDACTED>","id_token":"<REDACTED>"}
- ```
-
-4. When the access token expires, you can request a new access token using the refresh token, as shown in the following snippet:
-
- ```console
- curl -X POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token \
- -F grant_type=refresh_token \
- -F resource=https://management.core.windows.net/ \
- -F client_id=<APPLICATION-ID> \
- -F refresh_token=<REFRESH-TOKEN>
- ```
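-
-Once you have an access token, you attach it as a **Bearer** token on your REST calls. As an illustration only, the following Python sketch calls the Azure Resource Manager endpoint to list the Data Lake Storage Gen1 accounts in a subscription, which matches the `https://management.core.windows.net/` resource used above. The subscription ID and token value are placeholders, and the API version shown is an assumption; check the REST reference for the version you want to use.
-
-```python
-## A minimal sketch that uses the access token from the token response to call an
-## Azure Resource Manager endpoint for Data Lake Storage Gen1 account management.
-## Requires the 'requests' package. Subscription ID, token, and api-version are
-## placeholders/assumptions; see the REST reference for current values.
-import requests
-
-subscription_id = 'FILL-IN-HERE'
-access_token = 'FILL-IN-HERE'   # the "access_token" value from the token response
-
-url = ('https://management.azure.com/subscriptions/' + subscription_id +
-       '/providers/Microsoft.DataLakeStore/accounts?api-version=2016-11-01')
-
-response = requests.get(url, headers={'Authorization': 'Bearer ' + access_token})
-response.raise_for_status()
-
-# Print the Data Lake Storage Gen1 accounts in the subscription
-for account in response.json().get('value', []):
-    print(account['name'])
-```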
-
-For more information on interactive user authentication, see [Authorization code grant flow](/previous-versions/azure/dn645542(v=azure.100)).
-
-## Next steps
-In this article, you learned how to use end-user authentication to authenticate with Azure Data Lake Storage Gen1 using the REST API. You can now look at the following articles that talk about how to use the REST API to work with Azure Data Lake Storage Gen1.
-
-* [Account management operations on Data Lake Storage Gen1 using REST API](data-lake-store-get-started-rest-api.md)
-* [Data operations on Data Lake Storage Gen1 using REST API](data-lake-store-data-operations-rest-api.md)
data-lake-store Data Lake Store End User Authenticate Using Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-end-user-authenticate-using-active-directory.md
- Title: End-user authentication - Data Lake Storage Gen1 with Microsoft Entra ID
-description: Learn how to achieve end-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID
---- Previously updated : 05/29/2018---
-# End-user authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID
-> [!div class="op_single_selector"]
-> * [End-user authentication](data-lake-store-end-user-authenticate-using-active-directory.md)
-> * [Service-to-service authentication](data-lake-store-service-to-service-authenticate-using-active-directory.md)
->
->
-
-Azure Data Lake Storage Gen1 uses Microsoft Entra ID for authentication. Before authoring an application that works with Data Lake Storage Gen1 or Azure Data Lake Analytics, you must decide how to authenticate your application with Microsoft Entra ID. The two main options available are:
-
-* End-user authentication (this article)
-* Service-to-service authentication (pick this option from the drop-down above)
-
-Both these options result in your application being provided with an OAuth 2.0 token, which gets attached to each request made to Data Lake Storage Gen1 or Azure Data Lake Analytics.
-
-This article talks about how to create a **Microsoft Entra native application for end-user authentication**. For instructions on Microsoft Entra application configuration for service-to-service authentication, see [Service-to-service authentication with Data Lake Storage Gen1 using Microsoft Entra ID](./data-lake-store-service-to-service-authenticate-using-active-directory.md).
-
-## Prerequisites
-* An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* Your subscription ID. You can retrieve it from the Azure portal. For example, it's available from the Data Lake Storage Gen1 account blade.
-
- ![Get subscription ID](./media/data-lake-store-end-user-authenticate-using-active-directory/get-subscription-id.png)
-
-* Your Microsoft Entra domain name. You can retrieve it by hovering the mouse over the top-right corner of the Azure portal. In the screenshot below, the domain name is **contoso.onmicrosoft.com**, and the GUID within brackets is the tenant ID.
-
- ![Get Microsoft Entra domain](./media/data-lake-store-end-user-authenticate-using-active-directory/get-aad-domain.png)
-
-* Your Azure tenant ID. For instructions on how to retrieve the tenant ID, see [Get the tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application).
-
-## End-user authentication
-This authentication mechanism is the recommended approach if you want an end user to sign in to your application via Microsoft Entra ID. Your application is then able to access Azure resources with the same level of access as the end user that logged in. Your end user needs to provide their credentials periodically in order for your application to maintain access.
-
-The result of having the end-user sign-in is that your application is given an access token and a refresh token. The access token gets attached to each request made to Data Lake Storage Gen1 or Data Lake Analytics, and it's valid for one hour by default. The refresh token can be used to obtain a new access token, and it's valid for up to two weeks by default. You can use two different approaches for end-user sign-in.
-
-### Using the OAuth 2.0 pop-up
-Your application can trigger an OAuth 2.0 authorization pop-up, in which the end user can enter their credentials. This pop-up also works with the Microsoft Entra Two-factor Authentication (2FA) process, if necessary.
-
-> [!NOTE]
-> This method is not yet supported in the Azure AD Authentication Library (ADAL) for Python or Java.
->
->
-
-### Directly passing in user credentials
-Your application can directly provide user credentials to Microsoft Entra ID. This method only works with organizational ID user accounts; it isn't compatible with personal / "live ID" user accounts, including the accounts ending in @outlook.com or @live.com. Furthermore, this method isn't compatible with user accounts that require Microsoft Entra Two-factor Authentication (2FA).
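-
-For illustration only, the following minimal Python sketch shows what directly passing organizational-ID user credentials to the Microsoft Entra v1.0 token endpoint can look like (the resource owner password credentials grant). The tenant ID, application ID, username, and password are placeholders, and the use of the `requests` library and the `https://datalake.azure.net/` resource URI are assumptions rather than official guidance.
-
-```python
-# Hypothetical sketch: exchange organizational-ID user credentials for an OAuth 2.0 token.
-import requests
-
-TENANT_ID = "<TENANT-ID>"                 # placeholder
-CLIENT_ID = "<APPLICATION-ID>"            # placeholder: Microsoft Entra native application ID
-RESOURCE = "https://datalake.azure.net/"  # assumed Data Lake Storage Gen1 resource URI
-
-response = requests.post(
-    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
-    data={
-        "grant_type": "password",         # direct user-credential grant
-        "client_id": CLIENT_ID,
-        "username": "<USER>@<DOMAIN>.onmicrosoft.com",  # placeholder
-        "password": "<PASSWORD>",                       # placeholder
-        "resource": RESOURCE,
-    },
-)
-response.raise_for_status()
-tokens = response.json()
-access_token = tokens["access_token"]  # attach as an Authorization: Bearer header
-```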
-
-### What do I need for this approach?
-* Microsoft Entra domain name. This requirement is already listed in the prerequisites of this article.
-* Microsoft Entra tenant ID. This requirement is already listed in the prerequisites of this article.
-* Microsoft Entra ID **native application**
-* Application ID for the Microsoft Entra native application
-* Redirect URI for the Microsoft Entra native application
-* Set delegated permissions
--
-## Step 1: Create an Active Directory native application
-
-Create and configure a Microsoft Entra native application for end-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID. For instructions, see [Create a Microsoft Entra application](../active-directory/develop/howto-create-service-principal-portal.md).
-
-While following the instructions in the link, make sure you select **Native** for application type, as shown in the following screenshot:
-
-![Create native app](./media/data-lake-store-end-user-authenticate-using-active-directory/azure-active-directory-create-native-app.png "Create native app")
-
-## Step 2: Get application ID and redirect URI
-
-See [Get the application ID](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application) to retrieve the application ID.
-
-To retrieve the redirect URI, follow these steps.
-
-1. From the Azure portal, select **Microsoft Entra ID**, select **App registrations**, and then find and select the Microsoft Entra native application that you created.
-
-2. From the **Settings** blade for the application, select **Redirect URIs**.
-
- ![Get Redirect URI](./media/data-lake-store-end-user-authenticate-using-active-directory/azure-active-directory-redirect-uri.png)
-
-3. Copy the value displayed.
--
-## Step 3: Set permissions
-
-1. From the Azure portal, select **Microsoft Entra ID**, select **App registrations**, and then find and select the Microsoft Entra native application that you created.
-
-2. From the **Settings** blade for the application, select **Required permissions**, and then select **Add**.
-
- ![Screenshot of the Settings blade with the Redirect U R I option called out and the Redirect U R I blade with the actual U R I called out.](./media/data-lake-store-end-user-authenticate-using-active-directory/aad-end-user-auth-set-permission-1.png)
-
-3. In the **Add API Access** blade, select **Select an API**, select **Azure Data Lake**, and then select **Select**.
-
- ![Screenshot of the Add API access blade with the Select an API option called out and the Select an API blade with the Azure Data Lake option and the Select option called out.](./media/data-lake-store-end-user-authenticate-using-active-directory/aad-end-user-auth-set-permission-2.png)
-
-4. In the **Add API Access** blade, select **Select permissions**, select the check box to give **Full access to Data Lake Store**, and then select **Select**.
-
- ![Screenshot of the Add API access blade with the Select permissions option called out and the Enable Access blade with the Have full access to the Azure Data Lake service option and the Select option called out.](./media/data-lake-store-end-user-authenticate-using-active-directory/aad-end-user-auth-set-permission-3.png)
-
- Select **Done**.
-
-5. Repeat the last two steps to grant permissions for **Windows Azure Service Management API** as well.
-
-## Next steps
-In this article, you created a Microsoft Entra native application and gathered the information you need in your client applications that you author using .NET SDK, Java SDK, REST API, etc. You can now proceed to the following articles that talk about how to use the Microsoft Entra native application to first authenticate with Data Lake Storage Gen1 and then perform other operations on the store.
-
-* [End-user-authentication with Data Lake Storage Gen1 using Java SDK](data-lake-store-end-user-authenticate-java-sdk.md)
-* [End-user authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md)
-* [End-user authentication with Data Lake Storage Gen1 using Python](data-lake-store-end-user-authenticate-python.md)
-* [End-user authentication with Data Lake Storage Gen1 using REST API](data-lake-store-end-user-authenticate-rest-api.md)
data-lake-store Data Lake Store Get Started Cli 2.0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-cli-2.0.md
- Title: Manage Azure Data Lake Storage Gen1 account - Azure CLI
-description: Use the Azure CLI to create a Data Lake Storage Gen1 account and perform basic operations.
- Previously updated : 06/27/2018
-# Get started with Azure Data Lake Storage Gen1 using the Azure CLI
--
-> [!div class="op_single_selector"]
-> * [Portal](data-lake-store-get-started-portal.md)
-> * [PowerShell](data-lake-store-get-started-powershell.md)
-> * [Azure CLI](data-lake-store-get-started-cli-2.0.md)
->
->
-
-Learn how to use the Azure CLI to create an Azure Data Lake Storage Gen1 account and perform basic operations such as creating folders, uploading and downloading data files, and deleting your account. For more information about Data Lake Storage Gen1, see [Overview of Data Lake Storage Gen1](data-lake-store-overview.md).
-
-The Azure CLI is Azure's command-line experience for managing Azure resources. It can be used on macOS, Linux, and Windows. For more information, see [Overview of Azure CLI](/cli/azure). You can also look at the [Azure Data Lake Storage Gen1 CLI reference](/cli/azure/dls) for a complete list of commands and syntax.
--
-## Prerequisites
-Before you begin this article, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Azure CLI** - See [Install Azure CLI](/cli/azure/install-azure-cli) for instructions.
-
-## Authentication
-
-This article uses a simpler authentication approach with Data Lake Storage Gen1 where you log in as an end user. The access level to the Data Lake Storage Gen1 account and file system is then governed by the access level of the logged-in user. However, there are other approaches to authenticate with Data Lake Storage Gen1: **end-user authentication** and **service-to-service authentication**. For instructions and more information on how to authenticate, see [End-user authentication](data-lake-store-end-user-authenticate-using-active-directory.md) or [Service-to-service authentication](./data-lake-store-service-to-service-authenticate-using-active-directory.md).
--
-## Log in to your Azure subscription
-
-1. Log into your Azure subscription.
-
- ```azurecli
- az login
- ```
-
- You get a code to use in the next step. Use a web browser to open the page https://aka.ms/devicelogin and enter the code to authenticate. You are prompted to log in using your credentials.
-
-2. Once you log in, the window lists all the Azure subscriptions that are associated with your account. Use the following command to use a specific subscription.
-
- ```azurecli
- az account set --subscription <subscription id>
- ```
-
-## Create an Azure Data Lake Storage Gen1 account
-
-1. Create a new resource group. In the following command, provide the parameter values you want to use. If the location name contains spaces, put it in quotes. For example "East US 2".
-
- ```azurecli
- az group create --location "East US 2" --name myresourcegroup
- ```
-
-2. Create the Data Lake Storage Gen1 account.
-
- ```azurecli
- az dls account create --account mydatalakestoragegen1 --resource-group myresourcegroup
- ```
-
-## Create folders in a Data Lake Storage Gen1 account
-
-You can create folders under your Azure Data Lake Storage Gen1 account to manage and store data. Use the following command to create a folder called **mynewfolder** at the root of the Data Lake Storage Gen1 account.
-
-```azurecli
-az dls fs create --account mydatalakestoragegen1 --path /mynewfolder --folder
-```
-
-> [!NOTE]
-> The `--folder` parameter ensures that the command creates a folder. If this parameter is not present, the command creates an empty file called mynewfolder at the root of the Data Lake Storage Gen1 account.
->
->
-
-## Upload data to a Data Lake Storage Gen1 account
-
-You can upload data to Data Lake Storage Gen1 directly at the root level or to a folder that you created within the account. The snippets below demonstrate how to upload some sample data to the folder (**mynewfolder**) you created in the previous section.
-
-If you are looking for some sample data to upload, you can get the **Ambulance Data** folder from the [Azure Data Lake Git Repository](https://github.com/MicrosoftBigData/usql/tree/master/Examples/Samples/Data/AmbulanceData). Download the file and store it in a local directory on your computer, such as C:\sampledata\.
-
-```azurecli
-az dls fs upload --account mydatalakestoragegen1 --source-path "C:\SampleData\AmbulanceData\vehicle1_09142014.csv" --destination-path "/mynewfolder/vehicle1_09142014.csv"
-```
-
-> [!NOTE]
-> For the destination, you must specify the complete path including the file name.
->
->
--
-## List files in a Data Lake Storage Gen1 account
-
-Use the following command to list the files in a Data Lake Storage Gen1 account.
-
-```azurecli
-az dls fs list --account mydatalakestoragegen1 --path /mynewfolder
-```
-
-The output of this should be similar to the following:
-
-```json
-[
- {
- "accessTime": 1491323529542,
- "aclBit": false,
- "blockSize": 268435456,
- "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "length": 1589881,
- "modificationTime": 1491323531638,
- "msExpirationTime": 0,
- "name": "mynewfolder/vehicle1_09142014.csv",
- "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "pathSuffix": "vehicle1_09142014.csv",
- "permission": "770",
- "replication": 1,
- "type": "FILE"
- }
-]
-```
-
-## Rename, download, and delete data from a Data Lake Storage Gen1 account
-
-* **To rename a file**, use the following command:
-
- ```azurecli
- az dls fs move --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014.csv --destination-path /mynewfolder/vehicle1_09142014_copy.csv
- ```
-
-* **To download a file**, use the following command. Make sure the destination path you specify already exists.
-
- ```azurecli
- az dls fs download --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014_copy.csv --destination-path "C:\mysampledata\vehicle1_09142014_copy.csv"
- ```
-
- > [!NOTE]
- > The command creates the destination folder if it does not exist.
- >
- >
-
-* **To delete a file**, use the following command:
-
- ```azurecli
- az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014_copy.csv
- ```
-
-    If you want to delete the folder **mynewfolder** and the file **vehicle1_09142014_copy.csv** together in one command, use the `--recurse` parameter:
-
- ```azurecli
- az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder --recurse
- ```
-
-## Work with permissions and ACLs for a Data Lake Storage Gen1 account
-
-In this section you learn about how to manage ACLs and permissions using the Azure CLI. For a detailed discussion on how ACLs are implemented in Azure Data Lake Storage Gen1, see [Access control in Azure Data Lake Storage Gen1](data-lake-store-access-control.md).
-
-* **To update the owner of a file/folder**, use the following command:
-
- ```azurecli
- az dls fs access set-owner --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --group 80a3ed5f-959e-4696-ba3c-d3c8b2db6766 --owner 6361e05d-c381-4275-a932-5535806bb323
- ```
-
-* **To update the permissions for a file/folder**, use the following command:
-
- ```azurecli
- az dls fs access set-permission --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --permission 777
- ```
-
-* **To get the ACLs for a given path**, use the following command:
-
- ```azurecli
- az dls fs access show --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv
- ```
-
- The output should be similar to the following:
-
- ```output
- {
- "entries": [
- "user::rwx",
- "group::rwx",
- "other::"
- ],
- "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "permission": "770",
- "stickyBit": false
- }
- ```
-
-* **To set an entry for an ACL**, use the following command:
-
- ```azurecli
- az dls fs access set-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323:-w-
- ```
-
-* **To remove an entry for an ACL**, use the following command:
-
- ```azurecli
- az dls fs access remove-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323
- ```
-
-* **To remove an entire default ACL**, use the following command:
-
- ```azurecli
- az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder --default-acl
- ```
-
-* **To remove an entire non-default ACL**, use the following command:
-
- ```azurecli
- az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder
- ```
-
-## Delete a Data Lake Storage Gen1 account
-Use the following command to delete a Data Lake Storage Gen1 account.
-
-```azurecli
-az dls account delete --account mydatalakestoragegen1
-```
-
-When prompted, enter **Y** to delete the account.
-
-## Next steps
-* [Use Azure Data Lake Storage Gen1 for big data requirements](data-lake-store-data-scenarios.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Get Started Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-java-sdk.md
- Title: Java SDK - Filesystem operations on Data Lake Storage Gen1 - Azure
-description: Use the Java SDK for Azure Data Lake Storage Gen1 to perform filesystem operations on Data Lake Storage Gen1 such as creating folders, and uploading and downloading data files.
- Previously updated : 02/23/2022
-# Filesystem operations on Azure Data Lake Storage Gen1 using Java SDK
-> [!div class="op_single_selector"]
-> * [.NET SDK](data-lake-store-data-operations-net-sdk.md)
-> * [Java SDK](data-lake-store-get-started-java-sdk.md)
-> * [REST API](data-lake-store-data-operations-rest-api.md)
-> * [Python](data-lake-store-data-operations-python.md)
->
->
-
-Learn how to use the Azure Data Lake Storage Gen1 Java SDK to perform basic operations such as creating folders and uploading and downloading data files. For more information about Data Lake Storage Gen1, see [Azure Data Lake Storage Gen1](data-lake-store-overview.md).
-
-You can access the Java SDK API docs for Data Lake Storage Gen1 at [Azure Data Lake Storage Gen1 Java API docs](https://azure.github.io/azure-data-lake-store-java/javadoc/).
-
-## Prerequisites
-* Java Development Kit (JDK 7 or higher, using Java version 1.7 or higher)
-* Data Lake Storage Gen1 account. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md).
-* [Maven](https://maven.apache.org/install.html). This tutorial uses Maven for build and project dependencies. Although it is possible to build without using a build system like Maven or Gradle, these systems make it much easier to manage dependencies.
-* (Optional) An IDE such as [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) or [Eclipse](https://www.eclipse.org/downloads/).
-
-## Create a Java application
-The code sample available [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/) walks you through the process of creating files in the store, concatenating files, downloading a file, and deleting some files in the store. This section of the article walks you through the main parts of the code.
-
-1. Create a Maven project using [mvn archetype](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html) from the command line or using an IDE. For instructions on how to create a Java project using IntelliJ, see [here](https://www.jetbrains.com/help/idea/2016.1/creating-and-running-your-first-java-application.html). For instructions on how to create a project using Eclipse, see [here](https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2FgettingStarted%2Fqs-3.htm).
-
-2. Add the following dependencies to your Maven **pom.xml** file. Add the following snippet before the **\</project>** tag:
-
- ```xml
- <dependencies>
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-data-lake-store-sdk</artifactId>
- <version>2.1.5</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-nop</artifactId>
- <version>1.7.21</version>
- </dependency>
- </dependencies>
- ```
-
- The first dependency is to use the Data Lake Storage Gen1 SDK (`azure-data-lake-store-sdk`) from the maven repository. The second dependency is to specify the logging framework (`slf4j-nop`) to use for this application. The Data Lake Storage Gen1 SDK uses [SLF4J](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like Log4j, Java logging, Logback, etc., or no logging. For this example, we disable logging, hence we use the **slf4j-nop** binding. To use other logging options in your app, see [here](https://www.slf4j.org/manual.html#projectDep).
-
-3. Add the following import statements to your application.
-
- ```java
- import com.microsoft.azure.datalake.store.ADLException;
- import com.microsoft.azure.datalake.store.ADLStoreClient;
- import com.microsoft.azure.datalake.store.DirectoryEntry;
- import com.microsoft.azure.datalake.store.IfExists;
- import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider;
- import com.microsoft.azure.datalake.store.oauth2.ClientCredsTokenProvider;
-
- import java.io.*;
- import java.util.Arrays;
- import java.util.List;
- ```
-
-## Authentication
-
-* For end-user authentication for your application, see [End-user-authentication with Data Lake Storage Gen1 using Java](data-lake-store-end-user-authenticate-java-sdk.md).
-* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using Java](data-lake-store-service-to-service-authenticate-java.md).
-
-## Create a Data Lake Storage Gen1 client
-Creating an [ADLStoreClient](https://azure.github.io/azure-data-lake-store-java/javadoc/) object requires you to specify the Data Lake Storage Gen1 account name and the token provider you generated when you authenticated with Data Lake Storage Gen1 (see [Authentication](#authentication) section). The Data Lake Storage Gen1 account name needs to be a fully qualified domain name. For example, replace **FILL-IN-HERE** with something like **mydatalakestoragegen1.azuredatalakestore.net**.
-
-```java
-private static String accountFQDN = "FILL-IN-HERE"; // full account FQDN, not just the account name
-ADLStoreClient client = ADLStoreClient.createClient(accountFQDN, provider);
-```
-
-The code snippets in the following sections contain examples of some common filesystem operations. You can look at the full [Data Lake Storage Gen1 Java SDK API docs](https://azure.github.io/azure-data-lake-store-java/javadoc/) of the **ADLStoreClient** object to see other operations.
-
-## Create a directory
-
-The following snippet creates a directory structure in the root of the Data Lake Storage Gen1 account you specified.
-
-```java
-// create directory
-client.createDirectory("/a/b/w");
-System.out.println("Directory created.");
-```
-
-## Create a file
-
-The following snippet creates a file (c.txt) in the directory structure and writes some data to the file.
-
-```java
-// create file and write some content
-String filename = "/a/b/c.txt";
-OutputStream stream = client.createFile(filename, IfExists.OVERWRITE );
-PrintStream out = new PrintStream(stream);
-for (int i = 1; i <= 10; i++) {
- out.println("This is line #" + i);
- out.format("This is the same line (%d), but using formatted output. %n", i);
-}
-out.close();
-System.out.println("File created.");
-```
-
-You can also create a file (d.txt) using byte arrays.
-
-```java
-// create file using byte arrays
-stream = client.createFile("/a/b/d.txt", IfExists.OVERWRITE);
-byte[] buf = getSampleContent();
-stream.write(buf);
-stream.close();
-System.out.println("File created using byte array.");
-```
-
-The definition for `getSampleContent` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/).
-
-## Append to a file
-
-The following snippet appends content to an existing file.
-
-```java
-// append to file
-stream = client.getAppendStream(filename);
-stream.write(getSampleContent());
-stream.close();
-System.out.println("File appended.");
-```
-
-The definition for `getSampleContent` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/).
-
-## Read a file
-
-The following snippet reads content from a file in a Data Lake Storage Gen1 account.
-
-```java
-// Read File
-InputStream in = client.getReadStream(filename);
-BufferedReader reader = new BufferedReader(new InputStreamReader(in));
-String line;
-while ( (line = reader.readLine()) != null) {
- System.out.println(line);
-}
-reader.close();
-System.out.println();
-System.out.println("File contents read.");
-```
-
-## Concatenate files
-
-The following snippet concatenates two files in a Data Lake Storage Gen1 account. If successful, the concatenated file replaces the two existing files.
-
-```java
-// concatenate the two files into one
-List<String> fileList = Arrays.asList("/a/b/c.txt", "/a/b/d.txt");
-client.concatenateFiles("/a/b/f.txt", fileList);
-System.out.println("Two files concatenated into a new file.");
-```
-
-## Rename a file
-
-The following snippet renames a file in a Data Lake Storage Gen1 account.
-
-```java
-//rename the file
-client.rename("/a/b/f.txt", "/a/b/g.txt");
-System.out.println("New file renamed.");
-```
-
-## Get metadata for a file
-
-The following snippet retrieves the metadata for a file in a Data Lake Storage Gen1 account.
-
-```java
-// get file metadata
-DirectoryEntry ent = client.getDirectoryEntry(filename);
-printDirectoryInfo(ent);
-System.out.println("File metadata retrieved.");
-```
-
-## Set permissions on a file
-
-The following snippet sets permissions on the file that you created in the previous section.
-
-```java
-// set file permission
-client.setPermission(filename, "744");
-System.out.println("File permission set.");
-```
-
-## List directory contents
-
-The following snippet lists the contents of a directory, recursively.
-
-```java
-// list directory contents
-List<DirectoryEntry> list = client.enumerateDirectory("/a/b", 2000);
-System.out.println("Directory listing for directory /a/b:");
-for (DirectoryEntry entry : list) {
- printDirectoryInfo(entry);
-}
-System.out.println("Directory contents listed.");
-```
-
-The definition for `printDirectoryInfo` function used in the preceding snippet is available as part of the sample [on GitHub](https://azure.microsoft.com/documentation/samples/data-lake-store-java-upload-download-get-started/).
-
-## Delete files and folders
-
-The following snippet deletes the specified files and folders in a Data Lake Storage Gen1 account, recursively.
-
-```java
-// delete directory along with all the subdirectories and files in it
-client.deleteRecursive("/a");
-System.out.println("All files and folders deleted recursively");
-promptEnterKey();
-```
-
-## Build and run the application
-1. To run from within an IDE, locate and press the **Run** button. To run from Maven, use [exec:exec](https://www.mojohaus.org/exec-maven-plugin/exec-mojo.html).
-2. To produce a standalone JAR that you can run from the command line, build the JAR with all dependencies included by using the [Maven assembly plugin](https://maven.apache.org/plugins/maven-assembly-plugin/usage.html). The pom.xml in the [example source code on GitHub](https://github.com/Azure-Samples/data-lake-store-java-upload-download-get-started/blob/master/pom.xml) has an example.
-
-## Next steps
-* [Explore JavaDoc for the Java SDK](https://azure.github.io/azure-data-lake-store-java/javadoc/)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
data-lake-store Data Lake Store Get Started Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-net-sdk.md
- Title: Manage an Azure Data Lake Storage Gen1 account with .NET
-description: Learn how to use the .NET SDK for Azure Data Lake Storage Gen1 account management operations.
- Previously updated : 05/29/2018
-# Account management operations on Azure Data Lake Storage Gen1 using .NET SDK
-> [!div class="op_single_selector"]
-> * [.NET SDK](data-lake-store-get-started-net-sdk.md)
-> * [REST API](data-lake-store-get-started-rest-api.md)
-> * [Python](data-lake-store-get-started-python.md)
->
->
-
-In this article, you learn how to perform account management operations on Azure Data Lake Storage Gen1 using .NET SDK. Account management operations include creating a Data Lake Storage Gen1 account, listing the accounts in an Azure subscription, deleting the accounts, etc.
-
-For instructions on how to perform data management operations on Data Lake Storage Gen1 using .NET SDK, see [Filesystem operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-data-operations-net-sdk.md).
-
-## Prerequisites
-* **Visual Studio 2013 or above**. The instructions below use Visual Studio 2019.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-## Create a .NET application
-1. In Visual Studio, select the **File** menu, **New**, and then **Project**.
-2. Choose **Console App (.NET Framework)**, and then select **Next**.
-3. In **Project name**, enter `CreateADLApplication`, and then select **Create**.
-
-4. Add the NuGet packages to your project.
-
- 1. Right-click the project name in the Solution Explorer and click **Manage NuGet Packages**.
- 2. In the **NuGet Package Manager** tab, make sure that **Package source** is set to **nuget.org** and that **Include prerelease** check box is selected.
- 3. Search for and install the following NuGet packages:
-
- * `Microsoft.Azure.Management.DataLake.Store` - This tutorial uses v2.1.3-preview.
- * `Microsoft.Rest.ClientRuntime.Azure.Authentication` - This tutorial uses v2.2.12.
-
- ![Add a NuGet source](./media/data-lake-store-get-started-net-sdk/data-lake-store-install-nuget-package.png "Create a new Azure Data Lake account")
- 4. Close the **NuGet Package Manager**.
-5. Open **Program.cs**, delete the existing code, and then include the following statements to add references to namespaces.
-
- ```csharp
- using System;
- using System.IO;
- using System.Linq;
- using System.Text;
- using System.Threading;
- using System.Collections.Generic;
- using System.Security.Cryptography.X509Certificates; // Required only if you are using an Azure AD application created with certificates
-
- using Microsoft.Rest;
- using Microsoft.Rest.Azure.Authentication;
- using Microsoft.Azure.Management.DataLake.Store;
- using Microsoft.Azure.Management.DataLake.Store.Models;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- ```
-
-6. Declare the variables and provide the values for placeholders. Also, make sure the local path and file name you provide exist on the computer.
-
- ```csharp
- namespace SdkSample
- {
- class Program
- {
- private static DataLakeStoreAccountManagementClient _adlsClient;
-
- private static string _adlsAccountName;
- private static string _resourceGroupName;
- private static string _location;
- private static string _subId;
-
- private static void Main(string[] args)
- {
- _adlsAccountName = "<DATA-LAKE-STORAGE-GEN1-NAME>.azuredatalakestore.net";
- _resourceGroupName = "<RESOURCE-GROUP-NAME>";
- _location = "East US 2";
- _subId = "<SUBSCRIPTION-ID>";
- }
- }
- }
- ```
-
-In the remaining sections of the article, you can see how to use the available .NET methods to perform operations such as authentication, file upload, etc.
-
-## Authentication
-
-* For end-user authentication for your application, see [End-user authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md).
-* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md).
-
-## Create client object
-The following snippet creates the Data Lake Storage Gen1 account client object, which is used to issue account management requests to the service, such as create account, delete account, etc.
-
-```csharp
-// Create client objects and set the subscription ID
-_adlsClient = new DataLakeStoreAccountManagementClient(armCreds) { SubscriptionId = _subId };
-```
-
-## Create a Data Lake Storage Gen1 account
-The following snippet creates a Data Lake Storage Gen1 account in the Azure subscription you provided while creating the Data Lake Storage Gen1 account client object.
-
-```csharp
-// Create Data Lake Storage Gen1 account
-var adlsParameters = new DataLakeStoreAccount(location: _location);
-_adlsClient.Account.Create(_resourceGroupName, _adlsAccountName, adlsParameters);
-```
-
-## List all Data Lake Storage Gen1 accounts within a subscription
-Add the following method to your class definition. The following snippet lists all Data Lake Storage Gen1 accounts within a given Azure subscription.
-
-```csharp
-// List all Data Lake Storage Gen1 accounts within the subscription
-public static List<DataLakeStoreAccountBasic> ListAdlStoreAccounts()
-{
-    var response = _adlsClient.Account.List();
- var accounts = new List<DataLakeStoreAccountBasic>(response);
-
- while (response.NextPageLink != null)
- {
- response = _adlsClient.Account.ListNext(response.NextPageLink);
- accounts.AddRange(response);
- }
-
- return accounts;
-}
-```
-
-## Delete a Data Lake Storage Gen1 account
-The following snippet deletes the Data Lake Storage Gen1 account you created earlier.
-
-```csharp
-// Delete Data Lake Storage Gen1 account
-_adlsClient.Account.Delete(_resourceGroupName, _adlsAccountName);
-```
-
-## See also
-* [Filesystem operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-data-operations-net-sdk.md)
-* [Data Lake Storage Gen1 .NET SDK Reference](/dotnet/api/overview/azure/data-lake-store)
-
-## Next steps
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
data-lake-store Data Lake Store Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-portal.md
- Title: Get started with Azure Data Lake Storage Gen1 - portal
-description: Use the Azure portal to create a Data Lake Storage Gen1 account and perform basic operations in the account.
- Previously updated : 06/27/2018
-# Get started with Azure Data Lake Storage Gen1 using the Azure portal
-
-> [!div class="op_single_selector"]
-> * [Portal](data-lake-store-get-started-portal.md)
-> * [PowerShell](data-lake-store-get-started-powershell.md)
-> * [Azure CLI](data-lake-store-get-started-cli-2.0.md)
->
->
--
-Learn how to use the Azure portal to create a Data Lake Storage Gen1 account and perform basic operations such as creating folders, uploading and downloading data files, and deleting your account. For more information, see [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md).
-
-## Prerequisites
-
-Before you begin this tutorial, you must have the following items:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-## Create a Data Lake Storage Gen1 account
-
-1. Sign on to the new [Azure portal](https://portal.azure.com).
-2. Click **Create a resource > Storage > Data Lake Storage Gen1**.
-3. In the **New Data Lake Storage Gen1** blade, provide the values as shown in the following screenshot:
-
- ![Create a new Data Lake Storage Gen1 account](./media/data-lake-store-get-started-portal/ADL.Create.New.Account.png "Create a new Data Lake Storage Gen1 account")
-
- * **Name**. Enter a unique name for the Data Lake Storage Gen1 account.
- * **Subscription**. Select the subscription under which you want to create a new Data Lake Storage Gen1 account.
- * **Resource Group**. Select an existing resource group, or select the **Create new** option to create one. A resource group is a container that holds related resources for an application. For more information, see [Resource Groups in Azure](../azure-resource-manager/management/overview.md#resource-groups).
- * **Location**: Select a location where you want to create the Data Lake Storage Gen1 account.
- * **Encryption Settings**. There are three options:
-
- * **Do not enable encryption**.
- * **Use keys managed by Data Lake Storage Gen1**, if you want Data Lake Storage Gen1 to manage your encryption keys.
- * **Use keys from your own Key Vault**. You can select an existing Azure Key Vault or create a new Key Vault. To use the keys from a Key Vault, you must assign permissions for the Data Lake Storage Gen1 account to access the Azure Key Vault. For the instructions, see [Assign permissions to Azure Key Vault](#assign-permissions-to-azure-key-vault).
-
- ![Screenshot of the New Data Lake Storage Gen 1 blade and the Encryption settings blade.](./media/data-lake-store-get-started-portal/adls-encryption-2.png "Data Lake Storage Gen1 encryption")
-
- Click **OK** in the **Encryption Settings** blade.
-
- For more information, see [Encryption of data in Azure Data Lake Storage Gen1](./data-lake-store-encryption.md).
-
-4. Click **Create**. If you chose to pin the account to the dashboard, you are taken back to the dashboard and you can see the progress of your Data Lake Storage Gen1 account provisioning. Once the Data Lake Storage Gen1 account is provisioned, the account blade shows up.
-
-## <a name="assign-permissions-to-azure-key-vault"></a>Assign permissions to Azure Key Vault
-
-If you used keys from an Azure Key Vault to configure encryption on the Data Lake Storage Gen1 account, you must configure access between the Data Lake Storage Gen1 account and the Azure Key Vault account. Perform the following steps to do so.
-
-1. If you used keys from the Azure Key Vault, the blade for the Data Lake Storage Gen1 account displays a warning at the top. Click the warning to open **Encryption**.
-
- ![Screenshot of the Data Lake Storage Gen1 account blade showing the warning that says, "Key vault permission configuration needed. Click here to setup.](./media/data-lake-store-get-started-portal/adls-encryption-3.png "Data Lake Storage Gen1 encryption")
-2. The blade shows two options to configure access.
-
- ![Screenshot of the Encryption blade.](./media/data-lake-store-get-started-portal/adls-encryption-4.png "Data Lake Storage Gen1 encryption")
-
- * In the first option, click **Grant Permissions** to configure access. The first option is enabled only when the user that created the Data Lake Storage Gen1 account is also an admin for the Azure Key Vault.
- * The other option is to run the PowerShell cmdlet displayed on the blade. You need to be the owner of the Azure Key Vault or have the ability to grant permissions on the Azure Key Vault. After you have run the cmdlet, come back to the blade and click **Enable** to configure access.
-
-> [!NOTE]
-> You can also create a Data Lake Storage Gen1 account using Azure Resource Manager templates. These templates are accessible from [Azure QuickStart Templates](https://azure.microsoft.com/resources/templates/?term=data+lake+store):
-> * Without data encryption: [Deploy Azure Data Lake Storage Gen1 account with no data encryption](https://azure.microsoft.com/resources/templates/data-lake-store-no-encryption/).
-> * With data encryption using Data Lake Storage Gen1: [Deploy Data Lake Storage Gen1 account with encryption(Data Lake)](https://azure.microsoft.com/resources/templates/data-lake-store-encryption-adls/).
-> * With data encryption using Azure Key Vault: [Deploy Data Lake Storage Gen1 account with encryption(Key Vault)](https://azure.microsoft.com/resources/templates/data-lake-store-encryption-key-vault/).
->
->
-
-## <a name="createfolder"></a>Create folders
-
-You can create folders under your Data Lake Storage Gen1 account to manage and store data.
-
-1. Open the Data Lake Storage Gen1 account that you created. From the left pane, click **All resources**, and then from the **All resources** blade, click the account name under which you want to create folders. If you pinned the account to the dashboard, click that account tile.
-2. In your Data Lake Storage Gen1 account blade, click **Data Explorer**.
-
- ![Screenshot of the Data Lake Storage Gen 1 account blade with the Data explorer option called out.](./media/data-lake-store-get-started-portal/ADL.Create.Folder.png "Create folders in a Data Lake Storage Gen1 account")
-3. From Data Explorer blade, click **New Folder**, enter a name for the new folder, and then click **OK**.
-
- ![Screenshot of the Data Explorer blade with the New folder option and the Create new folder text box called out.](./media/data-lake-store-get-started-portal/ADL.Folder.Name.png "Create folders in a Data Lake Storage Gen1 account")
-
- The newly created folder is listed in the **Data Explorer** blade. You can create nested folders up to any level.
-
- ![Create folders in a Data Lake account](./media/data-lake-store-get-started-portal/ADL.New.Directory.png "Create folders in a Data Lake account")
-
-## <a name="uploaddata"></a>Upload data
-
-You can upload your data to a Data Lake Storage Gen1 account directly at the root level or to a folder that you created within the account.
-
-1. From the **Data Explorer** blade, click **Upload**.
-2. In the **Upload files** blade, navigate to the files you want to upload, and then click **Add selected files**. You can also select more than one file to upload.
-
- ![Upload data](./media/data-lake-store-get-started-portal/ADL.New.Upload.File.png "Upload data")
-
-If you are looking for some sample data to upload, you can get the **Ambulance Data** folder from the [Azure Data Lake Git Repository](https://github.com/MicrosoftBigData/usql/tree/master/Examples/Samples/Data/AmbulanceData).
-
-## <a name="properties"></a>Actions available on the stored data
-
-Click the ellipsis icon against a file, and from the pop-up menu, click the action you want to perform on the data.
-
-![Properties on the data](./media/data-lake-store-get-started-portal/ADL.File.Properties.png "Properties on the data")
-
-## Secure your data
-
-You can secure the data stored in your Data Lake Storage Gen1 account using Microsoft Entra ID and access control (ACLs). For instructions on how to do that, see [Securing data in Azure Data Lake Storage Gen1](data-lake-store-secure-data.md).
-
-## Delete your account
-
-To delete a Data Lake Storage Gen1 account, from your Data Lake Storage Gen1 blade, click **Delete**. To confirm the action, you'll be prompted to enter the name of the account you wish to delete. Enter the name of the account, and then click **Delete**.
-
-![Delete Data Lake Storage Gen1 account](./media/data-lake-store-get-started-portal/ADL.Delete.Account.png "Delete Data Lake account")
-
-## Next steps
-
-* [Use Azure Data Lake Storage Gen1 for big data requirements](data-lake-store-data-scenarios.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-powershell.md
- Title: Get started with Azure Data Lake Storage Gen1 - PowerShell | Microsoft Docs
-description: Use Azure PowerShell to create an Azure Data Lake Storage Gen1 account and perform basic operations.
- Previously updated : 06/27/2018
-# Get started with Azure Data Lake Storage Gen1 using Azure PowerShell
-
-> [!div class="op_single_selector"]
-> * [Portal](data-lake-store-get-started-portal.md)
-> * [PowerShell](data-lake-store-get-started-powershell.md)
-> * [Azure CLI](data-lake-store-get-started-cli-2.0.md)
->
->
--
-Learn how to use Azure PowerShell to create an Azure Data Lake Storage Gen1 account and perform basic operations such as creating folders, uploading and downloading data files, and deleting your account. For more information about Data Lake Storage Gen1, see [Overview of Data Lake Storage Gen1](data-lake-store-overview.md).
-
-## Prerequisites
--
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **Azure PowerShell 1.0 or greater**. See [How to install and configure Azure PowerShell](/powershell/azure/).
-
-## Authentication
-
-This article uses a simpler authentication approach with Data Lake Storage Gen1 where you're prompted to enter your Azure account credentials. The access level to the Data Lake Storage Gen1 account and file system is then governed by the access level of the logged-in user. However, there are other approaches to authenticate with Data Lake Storage Gen1: end-user authentication and service-to-service authentication. For instructions and more information on how to authenticate, see [End-user authentication](data-lake-store-end-user-authenticate-using-active-directory.md) or [Service-to-service authentication](./data-lake-store-service-to-service-authenticate-using-active-directory.md).
-
-## Create a Data Lake Storage Gen1 account
-
-1. From your desktop, open a new Windows PowerShell window. Enter the following snippet to log in to your Azure account, set the subscription, and register the Data Lake Storage Gen1 provider. When prompted to log in, make sure you log in as one of the subscription administrators/owner:
-
- ```PowerShell
- # Log in to your Azure account
- Connect-AzAccount
-
- # List all the subscriptions associated to your account
- Get-AzSubscription
-
- # Select a subscription
- Set-AzContext -SubscriptionId <subscription ID>
-
- # Register for Azure Data Lake Storage Gen1
- Register-AzResourceProvider -ProviderNamespace "Microsoft.DataLakeStore"
- ```
-
-1. A Data Lake Storage Gen1 account is associated with an Azure resource group. Start by creating a resource group.
-
- ```PowerShell
- $resourceGroupName = "<your new resource group name>"
- New-AzResourceGroup -Name $resourceGroupName -Location "East US 2"
- ```
-
- ![Create an Azure Resource Group](./media/data-lake-store-get-started-powershell/ADL.PS.CreateResourceGroup.png "Create an Azure Resource Group")
-
-1. Create a Data Lake Storage Gen1 account. The name you specify must only contain lowercase letters and numbers.
-
- ```PowerShell
- $dataLakeStorageGen1Name = "<your new Data Lake Storage Gen1 account name>"
- New-AzDataLakeStoreAccount -ResourceGroupName $resourceGroupName -Name $dataLakeStorageGen1Name -Location "East US 2"
- ```
-
- ![Create a Data Lake Storage Gen1 account](./media/data-lake-store-get-started-powershell/ADL.PS.CreateADLAcc.png "Create a Data Lake Storage Gen1 account")
-
-1. Verify that the account is successfully created.
-
- ```PowerShell
- Test-AzDataLakeStoreAccount -Name $dataLakeStorageGen1Name
- ```
-
- The output for the cmdlet should be **True**.
-
-## Create directory structures
-
-You can create directories under your Data Lake Storage Gen1 account to manage and store data.
-
-1. Specify a root directory.
-
- ```PowerShell
- $myrootdir = "/"
- ```
-
-1. Create a new directory called **mynewdirectory** under the specified root.
-
- ```PowerShell
- New-AzDataLakeStoreItem -Folder -AccountName $dataLakeStorageGen1Name -Path $myrootdir/mynewdirectory
- ```
-
-1. Verify that the new directory is successfully created.
-
- ```PowerShell
- Get-AzDataLakeStoreChildItem -AccountName $dataLakeStorageGen1Name -Path $myrootdir
- ```
-
- It should show an output as shown in the following screenshot:
-
- ![Verify Directory](./media/data-lake-store-get-started-powershell/ADL.PS.Verify.Dir.Creation.png "Verify Directory")
-
-## Upload data
-
-You can upload your data to Data Lake Storage Gen1 directly at the root level, or to a directory that you created within the account. The snippets in this section demonstrate how to upload some sample data to the directory (**mynewdirectory**) you created in the previous section.
-
-If you are looking for some sample data to upload, you can get the **Ambulance Data** folder from the [Azure Data Lake Git Repository](https://github.com/MicrosoftBigData/usql/tree/master/Examples/Samples/Data/AmbulanceData). Download the file and store it in a local directory on your computer, such as C:\sampledata\.
-
-```PowerShell
-Import-AzDataLakeStoreItem -AccountName $dataLakeStorageGen1Name `
- -Path "C:\sampledata\vehicle1_09142014.csv" `
- -Destination $myrootdir\mynewdirectory\vehicle1_09142014.csv
-```
-
-## Rename, download, and delete data
-
-To rename a file, use the following command:
-
-```PowerShell
-Move-AzDataLakeStoreItem -AccountName $dataLakeStorageGen1Name `
- -Path $myrootdir\mynewdirectory\vehicle1_09142014.csv `
- -Destination $myrootdir\mynewdirectory\vehicle1_09142014_Copy.csv
-```
-
-To download a file, use the following command:
-
-```PowerShell
-Export-AzDataLakeStoreItem -AccountName $dataLakeStorageGen1Name `
- -Path $myrootdir\mynewdirectory\vehicle1_09142014_Copy.csv `
- -Destination "C:\sampledata\vehicle1_09142014_Copy.csv"
-```
-
-To delete a file, use the following command:
-
-```PowerShell
-Remove-AzDataLakeStoreItem -AccountName $dataLakeStorageGen1Name `
- -Paths $myrootdir\mynewdirectory\vehicle1_09142014_Copy.csv
-```
-
-When prompted, enter **Y** to delete the item. If you have more than one file to delete, you can provide all the paths separated by commas.
-
-```PowerShell
-Remove-AzDataLakeStoreItem -AccountName $dataLakeStorageGen1Name `
-    -Paths $myrootdir\mynewdirectory\vehicle1_09142014.csv, $myrootdir\mynewdirectory\vehicle1_09142014_Copy.csv
-```
-
-## Delete your account
-
-Use the following command to delete your Data Lake Storage Gen1 account.
-
-```PowerShell
-Remove-AzDataLakeStoreAccount -Name $dataLakeStorageGen1Name
-```
-
-When prompted, enter **Y** to delete the account.
-
-## Next steps
-
-* [Performance tuning guidance for using PowerShell with Azure Data Lake Storage Gen1](data-lake-store-performance-tuning-powershell.md)
-* [Use Azure Data Lake Storage Gen1 for big data requirements](data-lake-store-data-scenarios.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-python.md
- Title: Manage an Azure Data Lake Storage Gen1 account with Python
-description: Learn how to use the Python SDK for Azure Data Lake Storage Gen1 account management operations.
- Previously updated : 05/29/2018
-# Account management operations on Azure Data Lake Storage Gen1 using Python
-> [!div class="op_single_selector"]
-> * [.NET SDK](data-lake-store-get-started-net-sdk.md)
-> * [REST API](data-lake-store-get-started-rest-api.md)
-> * [Python](data-lake-store-get-started-python.md)
->
->
-
-Learn how to use the Python SDK for Azure Data Lake Storage Gen1 to perform basic account management operations such as creating a Data Lake Storage Gen1 account and listing the Data Lake Storage Gen1 accounts. For instructions on how to perform filesystem operations on Data Lake Storage Gen1 using Python, see [Filesystem operations on Data Lake Storage Gen1 using Python](data-lake-store-data-operations-python.md).
-
-## Prerequisites
-
-* **Python**. You can download Python from [here](https://www.python.org/downloads/). This article uses Python 3.6.2.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **An Azure resource group**. For instructions, see [Create an Azure resource group](../azure-resource-manager/management/manage-resource-groups-portal.md).
-
-## Install the modules
-
-To work with Data Lake Storage Gen1 using Python, you need to install the following modules.
-
-* The `azure-mgmt-resource` module, which includes the Azure Resource Manager modules for working with resource groups and other Azure resources.
-* The `azure-mgmt-datalake-store` module, which includes the Azure Data Lake Storage Gen1 account management operations. For more information on this module, see [Azure Data Lake Storage Gen1 Management module reference](/python/api/azure-mgmt-datalake-store/).
-* The `azure-datalake-store` module, which includes the Azure Data Lake Storage Gen1 filesystem operations. For more information on this module, see [azure-datalake-store filesystem module reference](/python/api/azure-datalake-store/azure.datalake.store.core/).
-
-Use the following commands to install the modules.
-
-```console
-pip install azure-identity
-pip install azure-mgmt-resource
-pip install azure-mgmt-datalake-store
-pip install azure-datalake-store
-```
-
-## Create a new Python application
-
-1. In the IDE of your choice create a new Python application, for example, **mysample.py**.
-
-2. Add the following snippet to import the required modules:
-
- ```python
- # Acquire a credential object for the app identity. When running in the cloud,
- # DefaultAzureCredential uses the app's managed identity (MSI) or user-assigned service principal.
- # When run locally, DefaultAzureCredential relies on environment variables named
- # AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID.
- from azure.identity import DefaultAzureCredential
-
- ## Required for Data Lake Storage Gen1 account management
- from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import CreateDataLakeStoreAccountParameters
-
- ## Required for Data Lake Storage Gen1 filesystem management
- from azure.datalake.store import core, lib, multithread
-
- # Common Azure imports
- import adal
- from azure.mgmt.resource.resources import ResourceManagementClient
- from azure.mgmt.resource.resources.models import ResourceGroup
-
- # Use these as needed for your application
- import logging, getpass, pprint, uuid, time
- ```
-
-3. Save changes to mysample.py.
-
-## Authentication
-
-In this section, we talk about the different ways to authenticate with Microsoft Entra ID. The options available are listed below, followed by an illustrative credential sketch:
-
-* For end-user authentication for your application, see [End-user authentication with Data Lake Storage Gen1 using Python](data-lake-store-end-user-authenticate-python.md).
-* For service-to-service authentication for your application, see [Service-to-service authentication with Data Lake Storage Gen1 using Python](data-lake-store-service-to-service-authenticate-python.md).
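-
-The following is a minimal sketch only, assuming a service principal whose tenant ID, client ID, and client secret you supply as placeholders; it uses the `lib.auth` helper from the `azure-datalake-store` module imported earlier to obtain a credential for filesystem operations. Refer to the linked articles above for the authoritative authentication guidance.
-
-```python
-## Sketch (assumption): obtain a credential for Data Lake Storage Gen1 filesystem
-## operations with a service principal. All IDs and the secret are placeholders.
-from azure.datalake.store import lib
-
-adlCreds = lib.auth(
-    tenant_id='FILL-IN-HERE',
-    client_id='FILL-IN-HERE',
-    client_secret='FILL-IN-HERE',
-    resource='https://datalake.azure.net/'
-)
-```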
-
-## Create client and Data Lake Storage Gen1 account
-
-The following snippet first creates the Data Lake Storage Gen1 account management client and then uses it to create a Data Lake Storage Gen1 account. A sketch of creating a filesystem client object follows the snippet.
-
-```python
-## Declare variables
-subscriptionId = 'FILL-IN-HERE'
-adlsAccountName = 'FILL-IN-HERE'
-resourceGroup = 'FILL-IN-HERE'
-location = 'eastus2'
-credential = DefaultAzureCredential()
-
-## Create Data Lake Storage Gen1 account management client object
-adlsAcctClient = DataLakeStoreAccountManagementClient(credential, subscription_id=subscriptionId)
-
-## Create a Data Lake Storage Gen1 account
-adlsAcctResult = adlsAcctClient.accounts.begin_create(
- resourceGroup,
- adlsAccountName,
- CreateDataLakeStoreAccountParameters(
- location=location
- )
-)
-```
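-
-As mentioned above, here's a brief sketch of creating a filesystem client object for the new account. It assumes the `adlCreds` credential from the Authentication sketch earlier in this article; the `core.AzureDLFileSystem` usage shown is illustrative rather than official guidance.
-
-```python
-## Sketch: create a filesystem client object for the account, reusing the
-## adlCreds credential from the Authentication sketch (an assumption).
-adlsFileSystemClient = core.AzureDLFileSystem(adlCreds, store_name=adlsAccountName)
-```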
-
-
-## List the Data Lake Storage Gen1 accounts
-
-```python
-## List the existing Data Lake Storage Gen1 accounts
-result_list_response = adlsAcctClient.accounts.list()
-result_list = list(result_list_response)
-for items in result_list:
- print(items)
-```
-
-## Delete the Data Lake Storage Gen1 account
-
-```python
-## Delete an existing Data Lake Storage Gen1 account
-adlsAcctClient.accounts.begin_delete(resourceGroup, adlsAccountName)
-```
-
-
-## Next steps
-* [Filesystem operations on Data Lake Storage Gen1 using Python](data-lake-store-data-operations-python.md).
-
-## See also
-
-* [azure-datalake-store Python (Filesystem) reference](/python/api/azure-datalake-store/azure.datalake.store.core)
-* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
data-lake-store Data Lake Store Get Started Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-rest-api.md
- Title: Manage an Azure Data Lake Storage Gen1 account with REST
-description: Use the WebHDFS REST API to perform account management operations on an Azure Data Lake Storage Gen1 account.
-Previously updated : 05/29/2018
-# Account management operations on Azure Data Lake Storage Gen1 using REST API
-> [!div class="op_single_selector"]
-> * [.NET SDK](data-lake-store-get-started-net-sdk.md)
-> * [REST API](data-lake-store-get-started-rest-api.md)
-> * [Python](data-lake-store-get-started-python.md)
->
->
-
-In this article, you learn how to perform account management operations on Azure Data Lake Storage Gen1 using the REST API. Account management operations include creating a Data Lake Storage Gen1 account, deleting a Data Lake Storage Gen1 account, etc. For instructions on how to perform filesystem operations on Data Lake Storage Gen1 using REST API, see [Filesystem operations on Data Lake Storage Gen1 using REST API](data-lake-store-data-operations-rest-api.md).
-
-## Prerequisites
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **[cURL](https://curl.haxx.se/)**. This article uses cURL to demonstrate how to make REST API calls against a Data Lake Storage Gen1 account.
-
-<a name='how-do-i-authenticate-using-azure-active-directory'></a>
-
-## How do I authenticate using Microsoft Entra ID?
-You can use two approaches to authenticate using Microsoft Entra ID.
-
-* For end-user authentication for your application (interactive), see [End-user authentication with Data Lake Storage Gen1 using REST API](data-lake-store-end-user-authenticate-rest-api.md).
-* For service-to-service authentication for your application (non-interactive), see [Service-to-service authentication with Data Lake Storage Gen1 using REST API](data-lake-store-service-to-service-authenticate-rest-api.md).
--
-## Create a Data Lake Storage Gen1 account
-This operation is based on the REST API call defined [here](/rest/api/datalakestore/accounts/create).
-
-Use the following cURL command. Replace **\<yourstoragegen1name>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -X PUT -H "Authorization: Bearer <REDACTED>" -H "Content-Type: application/json" https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DataLakeStore/accounts/<yourstoragegen1name>?api-version=2015-10-01-preview -d@"C:\temp\input.json"
-```
-
-In the above command, replace \<`REDACTED`\> with the authorization token you retrieved earlier. The request payload for this command is contained in the **input.json** file that is provided for the `-d` parameter above. The contents of the input.json file resemble the following snippet:
-
-```json
-{
-"location": "eastus2",
-"tags": {
- "department": "finance"
- },
-"properties": {}
-}
-```
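-
-The cURL calls in this article need a bearer token for the Azure Resource Manager endpoint in the `Authorization` header. As one possible way to obtain such a token (a sketch, not a step from the original walkthrough), the azure-identity Python package can issue one:
-
-```python
-# Minimal sketch: acquire an Azure Resource Manager token with azure-identity.
-# Assumes you're signed in locally (for example, via the Azure CLI) or have
-# AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID set.
-from azure.identity import DefaultAzureCredential
-
-credential = DefaultAzureCredential()
-token = credential.get_token("https://management.azure.com/.default")
-print(token.token)  # use this value in place of <REDACTED> in the cURL commands
-```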
-
-## Delete a Data Lake Storage Gen1 account
-This operation is based on the REST API call defined [here](/rest/api/datalakestore/accounts/delete).
-
-Use the following cURL command to delete a Data Lake Storage Gen1 account. Replace **\<yourstoragegen1name>** with your Data Lake Storage Gen1 account name.
-
-```console
-curl -i -X DELETE -H "Authorization: Bearer <REDACTED>" https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.DataLakeStore/accounts/<yourstoragegen1name>?api-version=2015-10-01-preview
-```
-
-You should see an output like the following snippet:
-
-```output
-HTTP/1.1 200 OK
-...
-...
-```
-
-## Next steps
-* [Filesystem operations on Data Lake Storage Gen1 using REST API](data-lake-store-data-operations-rest-api.md).
-
-## See also
-* [Azure Data Lake Storage Gen1 REST API Reference](/rest/api/datalakestore/)
-* [Open Source Big Data applications compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md)
data-lake-store Data Lake Store Hdinsight Hadoop Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md
- Title: Create Azure HDInsight clusters with Data Lake Storage Gen1 - portal
-description: Use the Azure portal to create and use HDInsight clusters with Azure Data Lake Storage Gen1
-Previously updated : 05/29/2018
-# Create HDInsight clusters with Azure Data Lake Storage Gen1 by using the Azure portal
-
-> [!div class="op_single_selector"]
-> * [Use the Azure portal](data-lake-store-hdinsight-hadoop-use-portal.md)
-> * [Use PowerShell (for default storage)](data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md)
-> * [Use PowerShell (for additional storage)](data-lake-store-hdinsight-hadoop-use-powershell.md)
-> * [Use Resource Manager](data-lake-store-hdinsight-hadoop-use-resource-manager-template.md)
->
->
-
-Learn how to use the Azure portal to create an HDInsight cluster with Azure Data Lake Storage Gen1 as the default storage or as additional storage. Even though additional storage is optional for an HDInsight cluster, we recommend that you store your business data in the additional storage accounts.
-
-## Prerequisites
-
-Before you begin, ensure that you've met the following requirements:
-
-* **An Azure subscription**. Go to [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **An Azure Data Lake Storage Gen1 account**. Follow the instructions from [Get started with Azure Data Lake Storage Gen1 by using the Azure portal](data-lake-store-get-started-portal.md). You must also create a root folder on the account. In this article, a root folder called __/clusters__ is used.
-* **A Microsoft Entra service principal**. This how-to guide provides instructions on how to create a service principal in Microsoft Entra ID. However, to create a service principal, you must be a Microsoft Entra administrator. If you're an administrator, you can skip this prerequisite and continue.
-
->[!NOTE]
->You can create a service principal only if you're a Microsoft Entra administrator. Your Microsoft Entra administrator must create a service principal before you can create an HDInsight cluster with Data Lake Storage Gen1. Also, the service principal must be created with a certificate, as described at [Create a service principal with certificate](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-self-signed-certificate).
->
-
-## Create an HDInsight cluster
-
-In this section, you create an HDInsight cluster with Data Lake Storage Gen1 as the default or as additional storage. This article focuses only on the part about configuring Data Lake Storage Gen1. For the general cluster creation information and procedures, see [Create Hadoop clusters in HDInsight](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
-
-### Create a cluster with Data Lake Storage Gen1 as default storage
-
-To create an HDInsight cluster with a Data Lake Storage Gen1 as the default storage account:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Follow [Create clusters](../hdinsight/hdinsight-hadoop-create-linux-clusters-portal.md#create-clusters) for the general information on creating HDInsight clusters.
-3. On the **Storage** blade, under **Primary storage type**, select **Azure Data Lake Storage Gen1**, and then enter the following information:
-
- ![HDInsight storage account settings](./media/data-lake-store-hdinsight-hadoop-use-portal/hdi.adl.1.adls.storage.png)
-
- * **Select Data Lake Store account**: Select an existing Data Lake Storage Gen1 account. An existing Data Lake Storage Gen1 account is required. See [Prerequisites](#prerequisites).
- * **Root path**: Enter a path where the cluster-specific files are to be stored. In the screenshot, it's __/clusters/myhdiadlcluster/__, where the __/clusters__ folder must already exist and the portal creates the *myhdiadlcluster* folder. *myhdiadlcluster* is the cluster name.
- * **Data Lake Store access**: Configure access between the Data Lake Storage Gen1 account and HDInsight cluster. For instructions, see [Configure Data Lake Storage Gen1 access](#configure-data-lake-storage-gen1-access).
- * **Additional storage accounts**: Add Azure storage accounts as additional storage accounts for the cluster. To add more Data Lake Storage Gen1 accounts, give the cluster permissions on data in those accounts while you configure a Data Lake Storage Gen1 account as the primary storage type. See [Configure Data Lake Storage Gen1 access](#configure-data-lake-storage-gen1-access).
-
-4. On the **Data Lake Store access** blade, click **Select**, and then continue with cluster creation as described in [Create Hadoop clusters in HDInsight](../hdinsight/hdinsight-hadoop-create-linux-clusters-portal.md).
-
-### Create a cluster with Data Lake Storage Gen1 as additional storage
-
-The following instructions create an HDInsight cluster with an Azure Blob storage account as the default storage, and a storage account with Data Lake Storage Gen1 as additional storage.
-
-To create an HDInsight cluster with Data Lake Storage Gen1 as an additional storage account:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Follow [Create clusters](../hdinsight/hdinsight-hadoop-create-linux-clusters-portal.md#create-clusters) for the general information on creating HDInsight clusters.
-3. On the **Storage** blade, under **Primary storage type**, select **Azure Storage**, and then enter the following information:
-
- ![HDInsight storage account settings additional storage](./media/data-lake-store-hdinsight-hadoop-use-portal/hdi.adl.1.png)
-
- * **Selection method** - To specify a storage account that is part of your Azure subscription, select **My subscriptions**, and then select the storage account. To specify a storage account that is outside your Azure subscription, select **Access key**, and then provide the information for the outside storage account.
-
- * **Default container** - Use either the default value or specify your own name.
- * **Additional storage accounts** - Add more Azure storage accounts as the additional storage.
- * **Data Lake Store access** - Configure access between the Data Lake Storage Gen1 account and HDInsight cluster. For instructions see [Configure Data Lake Storage Gen1 access](#configure-data-lake-storage-gen1-access).
-
-## Configure Data Lake Storage Gen1 access
-
-In this section, you configure Data Lake Storage Gen1 access from HDInsight clusters using a Microsoft Entra service principal.
-
-### Specify a service principal
-
-From the Azure portal, you can either use an existing service principal or create a new one.
-
-To create a service principal from the Azure portal, see [Create Service Principal and Certificates](../active-directory/develop/howto-create-service-principal-portal.md) in Microsoft Entra ID.
-
-To use an existing service principal from the Azure portal:
-
-1. Make sure that the service principal has Owner permissions on the storage account. See [Set up permissions for the Service Principal to be owner on the storage account](#configure-serviceprincipal-permissions).
-1. Select **Data Lake Store access**.
-1. On the **Data Lake Storage Gen1 access** blade, select **Use existing**.
-1. Select **Service principal**, and then select a service principal.
-1. Upload the certificate (.pfx file) that's associated with your selected service principal, and then enter the certificate password.
-
- ![Add service principal to HDInsight cluster](./media/data-lake-store-hdinsight-hadoop-use-portal/hdi.adl.5.png)
-
-1. Select **Access** to configure the folder access. See [Configure file permissions](#configure-file-permissions).
-
-### <a name="configure-serviceprincipal-permissions"></a>Set up permissions for the Service Principal to be owner on the storage account
-1. On the **Access control (IAM)** blade of the storage account, select **Add a role assignment**.
-2. On the **Add a role assignment** blade, select the **Owner** role, select the service principal (SPN), and then select **Save**.
-
-### <a name="configure-file-permissions"></a>Configure file permissions
-
-The configuration is different depending on whether the account is used as the default storage or as an additional storage account:
-
-* Used as default storage
-
- * Permission at the root level of the Data Lake Storage Gen1 account
- * Permission at the root level of the HDInsight cluster storage. For example, the __/clusters__ folder used earlier in the tutorial.
-
-* Used as additional storage
-
- * Permission at the folders where you need file access.
-
-To assign permission at the storage account with Data Lake Storage Gen1 at the root level:
-
-1. On the **Data Lake Storage Gen1 access** blade, select **Access**. The **Select file permissions** blade is opened. It lists all the storage accounts in your subscription.
-1. Hover (do not click) the mouse over the name of the account with Data Lake Storage Gen1 to make the check box visible, then select the check box.
-
- ![Select file permissions](./media/data-lake-store-hdinsight-hadoop-use-portal/hdi.adl.3.png)
-
- By default, __READ__, __WRITE__, AND __EXECUTE__ are all selected.
-
-1. Click **Select** on the bottom of the page.
-1. Select **Run** to assign permission.
-1. Select **Done**.
-
-To assign permission at the HDInsight cluster root level:
-
-1. On the **Data Lake Storage Gen1 access** blade, select **Access**. The **Select file permissions** blade is opened. It lists all the storage accounts with Data Lake Storage Gen1 in your subscription.
-1. From the **Select file permissions** blade, select the storage account with Data Lake Storage Gen1 name to show its content.
-1. Select the HDInsight cluster storage root by selecting the checkbox on the left of the folder. According to the screenshot earlier, the cluster storage root is the __/clusters__ folder that you specified when you selected Data Lake Storage Gen1 as the default storage.
-1. Set the permissions on the folder. By default, read, write, and execute are all selected.
-1. Click **Select** on the bottom of the page.
-1. Select **Run**.
-1. Select **Done**.
-
-If you are using Data Lake Storage Gen1 as additional storage, you must assign permission only for the folders that you want to access from the HDInsight cluster. For example, in the screenshot below, you provide access only to the **mynewfolder** folder in a storage account with Data Lake Storage Gen1.
-
-![Assign service principal permissions to the HDInsight cluster](./media/data-lake-store-hdinsight-hadoop-use-portal/hdi.adl.3-1.png)
-
-## <a name="verify-cluster-set-up"></a>Verify cluster setup
-
-After the cluster setup is complete, on the cluster blade, verify your results by doing either or both of the following steps:
-
-* To verify that the associated storage for the cluster is the account with Data Lake Storage Gen1 that you specified, select **Storage accounts** in the left pane.
-
- ![Verify associated storage](./media/data-lake-store-hdinsight-hadoop-use-portal/hdi.adl.6-1.png)
-
-* To verify that the service principal is correctly associated with the HDInsight cluster, select **Data Lake Storage Gen1 access** in the left pane.
-
- ![Verify service principal](./media/data-lake-store-hdinsight-hadoop-use-portal/hdi.adl.6.png)
-
-## Examples
-
-After you set up the cluster with Data Lake Storage Gen1 as your storage, see these examples of how to use the HDInsight cluster to analyze the data that's stored in Data Lake Storage Gen1.
-
-### Run a Hive query against data in a Data Lake Storage Gen1 (as primary storage)
-
-To run a Hive query, use the Hive views interface in the Ambari portal. For instructions on how to use Ambari Hive views, see [Use the Hive View with Hadoop in HDInsight](../hdinsight/hadoop/apache-hadoop-use-hive-ambari-view.md).
-
-When you work with data in a Data Lake Storage Gen1, there are a few strings to change.
-
-If you use, for example, the cluster that you created with Data Lake Storage Gen1 as primary storage, the path to the data is: *adl://<data_lake_storage_gen1_account_name>.azuredatalakestore.net/path/to/file*. A Hive query to create a table from sample data that's stored in the Data Lake Storage Gen1 looks like the following statement:
-
-```console
-CREATE EXTERNAL TABLE websitelog (str string) LOCATION 'adl://hdiadlsg1storage.azuredatalakestore.net/clusters/myhdiadlcluster/HdiSamples/HdiSamples/WebsiteLogSampleData/SampleLog/'
-```
-
-Descriptions:
-
-* `adl://hdiadlsg1storage.azuredatalakestore.net/` is the root of the account with Data Lake Storage Gen1.
-* `/clusters/myhdiadlcluster` is the root of the cluster data that you specified while creating the cluster.
-* `/HdiSamples/HdiSamples/WebsiteLogSampleData/SampleLog/` is the location of the sample file that you used in the query.
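-
-If you want to double-check the same folder layout outside of Hive, one option is the azure-datalake-store Python client. The following is a sketch only; the credentials are placeholders, and the account name is taken from the example above.
-
-```python
-# Sketch: list the sample-data folder with the azure-datalake-store client.
-# Tenant ID, client ID, and client secret are placeholder assumptions.
-from azure.datalake.store import core, lib
-
-token = lib.auth(tenant_id='FILL-IN-HERE', client_id='FILL-IN-HERE', client_secret='FILL-IN-HERE')
-adl = core.AzureDLFileSystem(token, store_name='hdiadlsg1storage')
-print(adl.ls('/clusters/myhdiadlcluster/HdiSamples/HdiSamples/WebsiteLogSampleData/SampleLog/'))
-```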
-
-### Run a Hive query against data in a Data Lake Storage Gen1 (as additional storage)
-
-If the cluster that you created uses Blob storage as default storage, the sample data is not contained in the storage account with Data Lake Storage Gen1 that's used as additional storage. In such a case, first transfer the data from Blob storage to the storage account with Data Lake Storage Gen1, and then run the queries as shown in the preceding example.
-
-For information on how to copy data from Blob storage to a storage account with Data Lake Storage Gen1, see the following articles:
-
-* [Use Distcp to copy data between Azure Blob storage and Data Lake Storage Gen1](data-lake-store-copy-data-wasb-distcp.md)
-* [Use AdlCopy to copy data from Azure Blob storage to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md)
-
-### Use Data Lake Storage Gen1 with a Spark cluster
-
-You can use a Spark cluster to run Spark jobs on data that is stored in a Data Lake Storage Gen1. For more information, see [Use HDInsight Spark cluster to analyze data in Data Lake Storage Gen1](../hdinsight/spark/apache-spark-use-with-data-lake-store.md).
-
-### Use Data Lake Storage Gen1 in a Storm topology
-
-## See also
-
-* [Use Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md)
-* [PowerShell: Create an HDInsight cluster to use Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-powershell.md)
-
-[makecert]: /windows-hardware/drivers/devtest/makecert
-[pvk2pfx]: /windows-hardware/drivers/devtest/pvk2pfx
data-lake-store Data Lake Store Hdinsight Hadoop Use Powershell For Default Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md
- Title: PowerShell - HDInsight cluster with Data Lake Storage Gen1 - Azure
-description: Use Azure PowerShell to create and use Azure HDInsight clusters with Azure Data Lake Storage Gen1.
-Previously updated : 12/06/2021
-# Create HDInsight clusters with Azure Data Lake Storage Gen1 as default storage by using PowerShell
-
-> [!div class="op_single_selector"]
-> * [Use the Azure portal](data-lake-store-hdinsight-hadoop-use-portal.md)
-> * [Use PowerShell (for default storage)](data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md)
-> * [Use PowerShell (for additional storage)](data-lake-store-hdinsight-hadoop-use-powershell.md)
-> * [Use Resource Manager](data-lake-store-hdinsight-hadoop-use-resource-manager-template.md)
-
-Learn how to use Azure PowerShell to configure Azure HDInsight clusters with Azure Data Lake Storage Gen1, as default storage. For instructions on creating an HDInsight cluster with Data Lake Storage Gen1 as additional storage, see [Create an HDInsight cluster with Data Lake Storage Gen1 as additional storage](data-lake-store-hdinsight-hadoop-use-powershell.md).
-
-Here are some important considerations for using HDInsight with Data Lake Storage Gen1:
-
-* The option to create HDInsight clusters with access to Data Lake Storage Gen1 as default storage is available for HDInsight version 3.5 and 3.6.
-
-* The option to create HDInsight clusters with access to Data Lake Storage Gen1 as default storage is *not available* for HDInsight Premium clusters.
-
-To configure HDInsight to work with Data Lake Storage Gen1 by using PowerShell, follow the instructions in the next five sections.
-
-## Prerequisites
--
-Before you begin this tutorial, make sure that you meet the following requirements:
-
-* **An Azure subscription**: Go to [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **Azure PowerShell 1.0 or greater**: See [How to install and configure PowerShell](/powershell/azure/).
-* **Windows Software Development Kit (SDK)**: To install Windows SDK, go to [Downloads and tools for Windows 10](https://dev.windows.com/downloads). The SDK is used to create a security certificate.
-* **Microsoft Entra service principal**: This tutorial describes how to create a service principal in Microsoft Entra ID. However, to create a service principal, you must be a Microsoft Entra administrator. If you are an administrator, you can skip this prerequisite and proceed with the tutorial.
-
- >[!NOTE]
- >You can create a service principal only if you are a Microsoft Entra administrator. Your Microsoft Entra administrator must create a service principal before you can create an HDInsight cluster with Data Lake Storage Gen1. The service principal must be created with a certificate, as described in [Create a service principal with certificate](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-certificate-from-certificate-authority).
- >
-
-## Create an Azure Data Lake Storage Gen1 account
-
-To create a Data Lake Storage Gen1 account, do the following:
-
-1. From your desktop, open a PowerShell window, and then enter the snippets below. When you are prompted to sign in, sign in as one of the subscription administrators or owners.
-
- ```azurepowershell
- # Sign in to your Azure account
- Connect-AzAccount
-
- # List all the subscriptions associated to your account
- Get-AzSubscription
-
- # Select a subscription
- Set-AzContext -SubscriptionId <subscription ID>
-
- # Register for Data Lake Storage Gen1
- Register-AzResourceProvider -ProviderNamespace "Microsoft.DataLakeStore"
- ```
-
- > [!NOTE]
- > If you register the Data Lake Storage Gen1 resource provider and receive an error similar to `Register-AzResourceProvider : InvalidResourceNamespace: The resource namespace 'Microsoft.DataLakeStore' is invalid`, your subscription might not be approved for Data Lake Storage Gen1. To enable your Azure subscription for Data Lake Storage Gen1, follow the instructions in [Get started with Azure Data Lake Storage Gen1 by using the Azure portal](data-lake-store-get-started-portal.md).
- >
-
-2. A Data Lake Storage Gen1 account is associated with an Azure resource group. Start by creating a resource group.
-
- ```azurepowershell
- $resourceGroupName = "<your new resource group name>"
- New-AzResourceGroup -Name $resourceGroupName -Location "East US 2"
- ```
-
- You should see an output like this:
-
- ```output
- ResourceGroupName : hdiadlgrp
- Location : eastus2
- ProvisioningState : Succeeded
- Tags :
- ResourceId : /subscriptions/<subscription-id>/resourceGroups/hdiadlgrp
- ```
-
-3. Create a Data Lake Storage Gen1 account. The account name you specify must contain only lowercase letters and numbers.
-
- ```azurepowershell
- $dataLakeStorageGen1Name = "<your new Data Lake Storage Gen1 name>"
- New-AzDataLakeStoreAccount -ResourceGroupName $resourceGroupName -Name $dataLakeStorageGen1Name -Location "East US 2"
- ```
-
- You should see an output like the following:
-
- ```output
- ...
- ProvisioningState : Succeeded
- State : Active
- CreationTime : 5/5/2017 10:53:56 PM
- EncryptionState : Enabled
- ...
- LastModifiedTime : 5/5/2017 10:53:56 PM
- Endpoint : hdiadlstore.azuredatalakestore.net
- DefaultGroup :
- Id : /subscriptions/<subscription-id>/resourceGroups/hdiadlgrp/providers/Microsoft.DataLakeStore/accounts/hdiadlstore
- Name : hdiadlstore
- Type : Microsoft.DataLakeStore/accounts
- Location : East US 2
- Tags : {}
- ```
-
-4. Using Data Lake Storage Gen1 as default storage requires you to specify a root path to which the cluster-specific files are copied during cluster creation. To create a root path, which is **/clusters/hdiadlcluster** in the snippet, use the following cmdlets:
-
- ```azurepowershell
- $myrootdir = "/"
- New-AzDataLakeStoreItem -Folder -AccountName $dataLakeStorageGen1Name -Path $myrootdir/clusters/hdiadlcluster
- ````
-
-## Set up authentication for role-based access to Data Lake Storage Gen1
-Every Azure subscription is associated with a Microsoft Entra entity. Users and services that access subscription resources by using the Azure portal or the Azure Resource Manager API must first authenticate with Microsoft Entra ID. Access is granted to Azure subscriptions and services by assigning them the appropriate role on an Azure resource. For services, a service principal identifies the service in Microsoft Entra ID.
-
-This section illustrates how to grant an application service, such as HDInsight, access to an Azure resource (the Data Lake Storage Gen1 account that you created earlier). You do so by creating a service principal for the application and assigning roles to it via PowerShell.
-
-To set up Active Directory authentication for Data Lake Storage Gen1, perform the tasks in the following two sections.
-
-### Create a self-signed certificate
-Make sure you have [Windows SDK](https://dev.windows.com/en-us/downloads) installed before proceeding with the steps in this section. You must have also created a directory, such as *C:\mycertdir*, where you create the certificate.
-
-1. From the PowerShell window, go to the location where you installed Windows SDK (typically, *C:\Program Files (x86)\Windows Kits\10\bin\x86*) and use the [MakeCert][makecert] utility to create a self-signed certificate and a private key. Use the following commands:
-
- ```azurepowershell
- $certificateFileDir = "<my certificate directory>"
- cd $certificateFileDir
-
- makecert -sv mykey.pvk -n "cn=HDI-ADL-SP" CertFile.cer -r -len 2048
- ```
-
- You will be prompted to enter the private key password. After the command is successfully executed, you should see **CertFile.cer** and **mykey.pvk** in the certificate directory that you specified.
-2. Use the [Pvk2Pfx][pvk2pfx] utility to convert the .pvk and .cer files that MakeCert created to a .pfx file. Run the following command:
-
- ```azurepowershell
- pvk2pfx -pvk mykey.pvk -spc CertFile.cer -pfx CertFile.pfx -po <password>
- ```
-
- When you are prompted, enter the private key password that you specified earlier. The value you specify for the **-po** parameter is the password that's associated with the .pfx file. After the command has been completed successfully, you should also see a **CertFile.pfx** in the certificate directory that you specified.
-
-<a name='create-an-azure-ad-and-a-service-principal'></a>
-
-### Create a Microsoft Entra ID and a service principal
-In this section, you create a service principal for a Microsoft Entra application, assign a role to the service principal, and authenticate as the service principal by providing a certificate. To create an application in Microsoft Entra ID, run the following commands:
-
-1. Paste the following cmdlets in the PowerShell console window. Make sure that the value you specify for the **-DisplayName** property is unique. The values for **-HomePage** and **-IdentifierUris** are placeholder values and are not verified.
-
- ```azurepowershell
- $certificateFilePath = "$certificateFileDir\CertFile.pfx"
-
- $password = Read-Host -Prompt "Enter the password" # This is the password you specified for the .pfx file
-
- $certificatePFX = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($certificateFilePath, $password)
-
- $rawCertificateData = $certificatePFX.GetRawCertData()
-
- $credential = [System.Convert]::ToBase64String($rawCertificateData)
-
- $application = New-AzADApplication `
- -DisplayName "HDIADL" `
- -HomePage "https://contoso.com" `
- -IdentifierUris "https://contoso.com" `
- -CertValue $credential `
- -StartDate $certificatePFX.NotBefore `
- -EndDate $certificatePFX.NotAfter
-
- $applicationId = $application.ApplicationId
- ```
-
-2. Create a service principal by using the application ID.
-
- ```azurepowershell
- $servicePrincipal = New-AzADServicePrincipal -ApplicationId $applicationId -Role Contributor
-
- $objectId = $servicePrincipal.Id
- ```
-
-3. Grant the service principal access to the Data Lake Storage Gen1 root and all the folders in the root path that you specified earlier. Use the following cmdlets:
-
- ```azurepowershell
- Set-AzDataLakeStoreItemAclEntry -AccountName $dataLakeStorageGen1Name -Path / -AceType User -Id $objectId -Permissions All
- Set-AzDataLakeStoreItemAclEntry -AccountName $dataLakeStorageGen1Name -Path /clusters -AceType User -Id $objectId -Permissions All
- Set-AzDataLakeStoreItemAclEntry -AccountName $dataLakeStorageGen1Name -Path /clusters/hdiadlcluster -AceType User -Id $objectId -Permissions All
- ```
-
-## Create an HDInsight Linux cluster with Data Lake Storage Gen1 as the default storage
-
-In this section, you create an HDInsight Hadoop Linux cluster with Data Lake Storage Gen1 as the default storage. For this release, the HDInsight cluster and Data Lake Storage Gen1 must be in the same location.
-
-1. Retrieve the subscription tenant ID, and store it to use later.
-
- ```azurepowershell
- $tenantID = (Get-AzContext).Tenant.TenantId
- ```
-
-2. Create the HDInsight cluster by using the following cmdlets:
-
- ```azurepowershell
- # Set these variables
-
- $location = "East US 2"
- $storageAccountName = $dataLakeStorageGen1Name # Data Lake Storage Gen1 account name
- $storageRootPath = "<Storage root path you specified earlier>" # e.g. /clusters/hdiadlcluster
- $clusterName = "<unique cluster name>"
- $clusterNodes = <ClusterSizeInNodes> # The number of nodes in the HDInsight cluster
- $httpCredentials = Get-Credential
- $sshCredentials = Get-Credential
-
- New-AzHDInsightCluster `
- -ClusterType Hadoop `
- -OSType Linux `
- -ClusterSizeInNodes $clusterNodes `
- -ResourceGroupName $resourceGroupName `
- -ClusterName $clusterName `
- -HttpCredential $httpCredentials `
- -Location $location `
- -DefaultStorageAccountType AzureDataLakeStore `
- -DefaultStorageAccountName "$storageAccountName.azuredatalakestore.net" `
- -DefaultStorageRootPath $storageRootPath `
- -Version "3.6" `
- -SshCredential $sshCredentials `
- -AadTenantId $tenantId `
- -ObjectId $objectId `
- -CertificateFilePath $certificateFilePath `
- -CertificatePassword $password
- ```
-
- After the cmdlet has been successfully completed, you should see an output that lists the cluster details.
-
-## Run test jobs on the HDInsight cluster to use Data Lake Storage Gen1
-After you have configured an HDInsight cluster, you can run test jobs on it to ensure that it can access Data Lake Storage Gen1. To do so, run a sample Hive job to create a table that uses the sample data that's already available in Data Lake Storage Gen1 at *\<cluster root>/example/data/sample.log*.
-
-In this section, you make a Secure Shell (SSH) connection into the HDInsight Linux cluster that you created, and then you run a sample Hive query.
-
-* If you are using a Windows client to make an SSH connection into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Windows](../hdinsight/hdinsight-hadoop-linux-use-ssh-windows.md).
-* If you are using a Linux client to make an SSH connection into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Linux](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-
-1. After you have made the connection, start the Hive command-line interface (CLI) by using the following command:
-
- ```powershell
- hive
- ```
-
-2. Use the CLI to enter the following statements to create a new table named **log4jLogs** by using the sample data in Data Lake Storage Gen1:
-
- ```azurepowershell
- DROP TABLE log4jLogs;
- CREATE EXTERNAL TABLE log4jLogs (t1 string, t2 string, t3 string, t4 string, t5 string, t6 string, t7 string)
- ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
- STORED AS TEXTFILE LOCATION 'adl:///example/data/';
- SELECT t4 AS sev, COUNT(*) AS count FROM log4jLogs WHERE t4 = '[ERROR]' AND INPUT__FILE__NAME LIKE '%.log' GROUP BY t4;
- ```
-
-You should see the query output on the SSH console.
-
->[!NOTE]
->The path to the sample data in the preceding CREATE TABLE command is `adl:///example/data/`, where `adl:///` is the cluster root. Following the example of the cluster root that's specified in this tutorial, `adl:///` corresponds to `adl://hdiadlstore.azuredatalakestore.net/clusters/hdiadlcluster`. You can either use the shorter form or provide the complete path to the cluster root.
->
-
-## Access Data Lake Storage Gen1 by using HDFS commands
-After you have configured the HDInsight cluster to use Data Lake Storage Gen1, you can use Hadoop Distributed File System (HDFS) shell commands to access the store.
-
-In this section, you make an SSH connection into the HDInsight Linux cluster that you created, and then you run the HDFS commands.
-
-* If you are using a Windows client to make an SSH connection into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Windows](../hdinsight/hdinsight-hadoop-linux-use-ssh-windows.md).
-* If you are using a Linux client to make an SSH connection into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Linux](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-
-After you've made the connection, list the files in Data Lake Storage Gen1 by using the following HDFS file system command.
-
-```console
-hdfs dfs -ls adl:///
-```
-
-You can also use the `hdfs dfs -put` command to upload some files to Data Lake Storage Gen1, and then use `hdfs dfs -ls` to verify whether the files were successfully uploaded.
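-
-If you prefer to work from a client machine instead of an SSH session, the azure-datalake-store Python client offers an equivalent upload-and-list check. The snippet below is a sketch with placeholder values, not a step from the original article.
-
-```python
-# Sketch: upload a local file to Data Lake Storage Gen1 and confirm it arrived,
-# mirroring 'hdfs dfs -put' and 'hdfs dfs -ls'. Placeholder credentials assumed.
-from azure.datalake.store import core, lib, multithread
-
-token = lib.auth(tenant_id='FILL-IN-HERE', client_id='FILL-IN-HERE', client_secret='FILL-IN-HERE')
-adl = core.AzureDLFileSystem(token, store_name='FILL-IN-HERE')
-
-multithread.ADLUploader(adl, lpath='sample.log', rpath='/clusters/hdiadlcluster/sample.log', overwrite=True)
-print(adl.ls('/clusters/hdiadlcluster'))
-```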
-
-## See also
-* [Use Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md)
-* [Azure portal: Create an HDInsight cluster to use Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
-
-[makecert]: /windows-hardware/drivers/devtest/makecert
-[pvk2pfx]: /windows-hardware/drivers/devtest/pvk2pfx
data-lake-store Data Lake Store Hdinsight Hadoop Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-hdinsight-hadoop-use-powershell.md
- Title: PowerShell - HDInsight with Data Lake Storage Gen1 - add-on storage - Azure
-description: Learn how to use Azure PowerShell to configure an HDInsight cluster with Azure Data Lake Storage Gen1 as additional storage.
-Previously updated : 12/06/2021
-# Use Azure PowerShell to create an HDInsight cluster with Azure Data Lake Storage Gen1 (as additional storage)
-
-> [!div class="op_single_selector"]
-> * [Using Portal](data-lake-store-hdinsight-hadoop-use-portal.md)
-> * [Using PowerShell (for default storage)](data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md)
-> * [Using PowerShell (for additional storage)](data-lake-store-hdinsight-hadoop-use-powershell.md)
-> * [Using Resource Manager](data-lake-store-hdinsight-hadoop-use-resource-manager-template.md)
->
->
-
-Learn how to use Azure PowerShell to configure an HDInsight cluster with Azure Data Lake Storage Gen1, **as additional storage**. For instructions on how to create an HDInsight cluster with Data Lake Storage Gen1 as default storage, see [Create an HDInsight cluster with Data Lake Storage Gen1 as default storage](data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md).
-
-> [!NOTE]
-> If you are going to use Data Lake Storage Gen1 as additional storage for HDInsight cluster, we strongly recommend that you do this while you create the cluster as described in this article. Adding Data Lake Storage Gen1 as additional storage to an existing HDInsight cluster is a complicated process and prone to errors.
->
-
-For supported cluster types, Data Lake Storage Gen1 can be used as a default storage or additional storage account. When Data Lake Storage Gen1 is used as additional storage, the default storage account for the clusters will still be Azure Blob storage (WASB) and the cluster-related files (such as logs, etc.) are still written to the default storage, while the data that you want to process can be stored in a Data Lake Storage Gen1. Using Data Lake Storage Gen1 as an additional storage account does not impact performance or the ability to read/write to the storage from the cluster.
-
-## Using Data Lake Storage Gen1 for HDInsight cluster storage
-
-Here are some important considerations for using HDInsight with Data Lake Storage Gen1:
-
-* Option to create HDInsight clusters with access to Data Lake Storage Gen1 as additional storage is available for HDInsight versions 3.2, 3.4, 3.5, and 3.6.
-
-Configuring HDInsight to work with Data Lake Storage Gen1 using PowerShell involves the following steps:
-
-* Create a Data Lake Storage Gen1 account
-* Set up authentication for role-based access to Data Lake Storage Gen1
-* Create HDInsight cluster with authentication to Data Lake Storage Gen1
-* Run a test job on the cluster
-
-## Prerequisites
--
-Before you begin this tutorial, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **Azure PowerShell 1.0 or greater**. See [How to install and configure Azure PowerShell](/powershell/azure/).
-* **Windows SDK**. You can install it from [here](https://dev.windows.com/en-us/downloads). You use this to create a security certificate.
-* **Microsoft Entra service principal**. Steps in this tutorial provide instructions on how to create a service principal in Microsoft Entra ID. However, you must be a Microsoft Entra administrator to be able to create a service principal. If you are a Microsoft Entra administrator, you can skip this prerequisite and proceed with the tutorial.
-
- **If you are not a Microsoft Entra administrator**, you will not be able to perform the steps required to create a service principal. In such a case, your Microsoft Entra administrator must first create a service principal before you can create an HDInsight cluster with Data Lake Storage Gen1. Also, the service principal must be created using a certificate, as described at [Create a service principal with certificate](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-certificate-from-certificate-authority).
-
-## Create a Data Lake Storage Gen1 account
-Follow these steps to create a Data Lake Storage Gen1 account.
-
-1. From your desktop, open a new Azure PowerShell window, and enter the following snippet. When prompted to log in, make sure you log in as one of the subscription administrators or owners:
-
- ```azurepowershell
- # Log in to your Azure account
- Connect-AzAccount
-
- # List all the subscriptions associated to your account
- Get-AzSubscription
-
- # Select a subscription
- Set-AzContext -SubscriptionId <subscription ID>
-
- # Register for Data Lake Storage Gen1
- Register-AzResourceProvider -ProviderNamespace "Microsoft.DataLakeStore"
- ```
-
- > [!NOTE]
- > If you receive an error similar to `Register-AzResourceProvider : InvalidResourceNamespace: The resource namespace 'Microsoft.DataLakeStore' is invalid` when registering the Data Lake Storage Gen1 resource provider, it is possible that your subscription is not approved for Data Lake Storage Gen1. Make sure you enable your Azure subscription for Data Lake Storage Gen1 by following these [instructions](data-lake-store-get-started-portal.md).
- >
- >
-2. A storage account with Data Lake Storage Gen1 is associated with an Azure Resource Group. Start by creating an Azure Resource Group.
-
- ```azurepowershell
- $resourceGroupName = "<your new resource group name>"
- New-AzResourceGroup -Name $resourceGroupName -Location "East US 2"
- ```
-
- You should see an output like this:
-
- ```output
- ResourceGroupName : hdiadlgrp
- Location : eastus2
- ProvisioningState : Succeeded
- Tags :
- ResourceId : /subscriptions/<subscription-id>/resourceGroups/hdiadlgrp
- ```
-
-3. Create a storage account with Data Lake Storage Gen1. The account name you specify must only contain lowercase letters and numbers.
-
- ```azurepowershell
- $dataLakeStorageGen1Name = "<your new storage account with Data Lake Storage Gen1 name>"
- New-AzDataLakeStoreAccount -ResourceGroupName $resourceGroupName -Name $dataLakeStorageGen1Name -Location "East US 2"
- ```
-
- You should see an output like the following:
-
- ```output
- ...
- ProvisioningState : Succeeded
- State : Active
- CreationTime : 5/5/2017 10:53:56 PM
- EncryptionState : Enabled
- ...
- LastModifiedTime : 5/5/2017 10:53:56 PM
- Endpoint : hdiadlstore.azuredatalakestore.net
- DefaultGroup :
- Id : /subscriptions/<subscription-id>/resourceGroups/hdiadlgrp/providers/Microsoft.DataLakeStore/accounts/hdiadlstore
- Name : hdiadlstore
- Type : Microsoft.DataLakeStore/accounts
- Location : East US 2
- Tags : {}
- ```
-
-5. Upload some sample data to Data Lake Storage Gen1. We'll use this later in this article to verify that the data is accessible from an HDInsight cluster. If you are looking for some sample data to upload, you can get the **Ambulance Data** folder from the [Azure Data Lake Git Repository](https://github.com/MicrosoftBigData/usql/tree/master/Examples/Samples/Data/AmbulanceData).
-
- ```azurepowershell
- $myrootdir = "/"
- Import-AzDataLakeStoreItem -AccountName $dataLakeStorageGen1Name -Path "C:\<path to data>\vehicle1_09142014.csv" -Destination $myrootdir\vehicle1_09142014.csv
- ```
-
-## Set up authentication for role-based access to Data Lake Storage Gen1
-
-Every Azure subscription is associated with a Microsoft Entra tenant. Users and services that access resources of the subscription by using the Azure portal or the Azure Resource Manager API must first authenticate with that tenant. Access is granted to Azure subscriptions and services by assigning them the appropriate role on an Azure resource. For services, a service principal identifies the service in Microsoft Entra ID. This section illustrates how to grant an application service, like HDInsight, access to an Azure resource (the storage account with Data Lake Storage Gen1 that you created earlier) by creating a service principal for the application and assigning roles to it via Azure PowerShell.
-
-To set up Active Directory authentication for Data Lake Storage Gen1, you must perform the following tasks.
-
-* Create a self-signed certificate
-* Create an application in Microsoft Entra ID and a Service Principal
-
-### Create a self-signed certificate
-
-Make sure you have [Windows SDK](https://dev.windows.com/en-us/downloads) installed before proceeding with the steps in this section. You must have also created a directory, such as **C:\mycertdir**, where the certificate will be created.
-
-1. From the PowerShell window, navigate to the location where you installed Windows SDK (typically, `C:\Program Files (x86)\Windows Kits\10\bin\x86`) and use the [MakeCert][makecert] utility to create a self-signed certificate and a private key. Use the following commands.
-
- ```azurepowershell
- $certificateFileDir = "<my certificate directory>"
- cd $certificateFileDir
-
- makecert -sv mykey.pvk -n "cn=HDI-ADL-SP" CertFile.cer -r -len 2048
- ```
-
- You will be prompted to enter the private key password. After the command successfully executes, you should see a **CertFile.cer** and **mykey.pvk** in the certificate directory you specified.
-2. Use the [Pvk2Pfx][pvk2pfx] utility to convert the .pvk and .cer files that MakeCert created to a .pfx file. Run the following command.
-
- ```azurepowershell
- pvk2pfx -pvk mykey.pvk -spc CertFile.cer -pfx CertFile.pfx -po <password>
- ```
-
- When prompted enter the private key password you specified earlier. The value you specify for the **-po** parameter is the password that is associated with the .pfx file. After the command successfully completes, you should also see a CertFile.pfx in the certificate directory you specified.
-
-<a name='create-an-azure-active-directory-and-a-service-principal'></a>
-
-### Create a Microsoft Entra ID and a service principal
-
-In this section, you perform the steps to create a service principal for a Microsoft Entra application, assign a role to the service principal, and authenticate as the service principal by providing a certificate. Run the following commands to create an application in Microsoft Entra ID.
-
-1. Paste the following cmdlets in the PowerShell console window. Make sure the value you specify for the **-DisplayName** property is unique. Also, the values for **-HomePage** and **-IdentifierUris** are placeholder values and are not verified.
-
- ```azurepowershell
- $certificateFilePath = "$certificateFileDir\CertFile.pfx"
-
- $password = Read-Host -Prompt "Enter the password" # This is the password you specified for the .pfx file
-
- $certificatePFX = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($certificateFilePath, $password)
-
- $rawCertificateData = $certificatePFX.GetRawCertData()
-
- $credential = [System.Convert]::ToBase64String($rawCertificateData)
-
- $application = New-AzADApplication `
- -DisplayName "HDIADL" `
- -HomePage "https://contoso.com" `
- -IdentifierUris "https://contoso.com" `
- -CertValue $credential `
- -StartDate $certificatePFX.NotBefore `
- -EndDate $certificatePFX.NotAfter
-
- $applicationId = $application.ApplicationId
- ```
-
-2. Create a service principal using the application ID.
-
- ```azurepowershell
- $servicePrincipal = New-AzADServicePrincipal -ApplicationId $applicationId -Role Contributor
-
- $objectId = $servicePrincipal.Id
- ```
-
-3. Grant the service principal access to the Data Lake Storage Gen1 folder and the file that you will access from the HDInsight cluster. The snippet below provides access to the root of the storage account with Data Lake Storage Gen1 (where you copied the sample data file), and the file itself.
-
- ```azurepowershell
- Set-AzDataLakeStoreItemAclEntry -AccountName $dataLakeStorageGen1Name -Path / -AceType User -Id $objectId -Permissions All
- Set-AzDataLakeStoreItemAclEntry -AccountName $dataLakeStorageGen1Name -Path /vehicle1_09142014.csv -AceType User -Id $objectId -Permissions All
- ```
-
-## Create an HDInsight Linux cluster with Data Lake Storage Gen1 as additional storage
-
-In this section, we create an HDInsight Hadoop Linux cluster with Data Lake Storage Gen1 as additional storage. For this release, the HDInsight cluster and storage account with Data Lake Storage Gen1 must be in the same location.
-
-1. Start with retrieving the subscription tenant ID. You will need that later.
-
- ```azurepowershell
- $tenantID = (Get-AzContext).Tenant.TenantId
- ```
-
-2. For this release, for a Hadoop cluster, Data Lake Storage Gen1 can only be used as an additional storage for the cluster. The default storage will still be the Azure Blob storage (WASB). So, we'll first create the storage account and storage containers required for the cluster.
-
- ```azurepowershell
- # Create an Azure storage account
- $location = "East US 2"
- $storageAccountName = "<StorageAccountName>" # Provide a Storage account name
-
- New-AzStorageAccount -ResourceGroupName $resourceGroupName -StorageAccountName $storageAccountName -Location $location -Type Standard_GRS
-
- # Create an Azure Blob Storage container
- $containerName = "<ContainerName>" # Provide a container name
- $storageAccountKey = (Get-AzStorageAccountKey -Name $storageAccountName -ResourceGroupName $resourceGroupName)[0].Value
- $destContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
- New-AzStorageContainer -Name $containerName -Context $destContext
- ```
-
-3. Create the HDInsight cluster. Use the following cmdlets.
-
- ```azurepowershell
- # Set these variables
- $clusterName = $containerName # As a best practice, have the same name for the cluster and container
- $clusterNodes = <ClusterSizeInNodes> # The number of nodes in the HDInsight cluster
- $httpCredentials = Get-Credential
- $sshCredentials = Get-Credential
-
- New-AzHDInsightCluster -ClusterName $clusterName -ResourceGroupName $resourceGroupName -HttpCredential $httpCredentials -Location $location -DefaultStorageAccountName "$storageAccountName.blob.core.windows.net" -DefaultStorageAccountKey $storageAccountKey -DefaultStorageContainer $containerName -ClusterSizeInNodes $clusterNodes -ClusterType Hadoop -Version "3.4" -OSType Linux -SshCredential $sshCredentials -ObjectID $objectId -AadTenantId $tenantID -CertificateFilePath $certificateFilePath -CertificatePassword $password
- ```
-
- After the cmdlet successfully completes, you should see an output listing the cluster details.
--
-## Run test jobs on the HDInsight cluster to use the Data Lake Storage Gen1
-After you have configured an HDInsight cluster, you can run test jobs on the cluster to test that the HDInsight cluster can access Data Lake Storage Gen1. To do so, we will run a sample Hive job that creates a table using the sample data that you uploaded earlier to your storage account with Data Lake Storage Gen1.
-
-In this section, you SSH into the HDInsight Linux cluster that you created and run a sample Hive query.
-
-* If you are using a Windows client to SSH into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Windows](../hdinsight/hdinsight-hadoop-linux-use-ssh-windows.md).
-* If you are using a Linux client to SSH into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Linux](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-
-1. Once connected, start the Hive CLI by using the following command:
-
- ```azurepowershell
- hive
- ```
-
-2. Using the CLI, enter the following statements to create a new table named **vehicles** by using the sample data in Data Lake Storage Gen1:
-
- ```azurepowershell
- DROP TABLE vehicles;
- CREATE EXTERNAL TABLE vehicles (str string) LOCATION 'adl://<mydatalakestoragegen1>.azuredatalakestore.net:443/';
- SELECT * FROM vehicles LIMIT 10;
- ```
-
- You should see an output similar to the following:
-
- ```output
- 1,1,2014-09-14 00:00:03,46.81006,-92.08174,51,S,1
- 1,2,2014-09-14 00:00:06,46.81006,-92.08174,13,NE,1
- 1,3,2014-09-14 00:00:09,46.81006,-92.08174,48,NE,1
- 1,4,2014-09-14 00:00:12,46.81006,-92.08174,30,W,1
- 1,5,2014-09-14 00:00:15,46.81006,-92.08174,47,S,1
- 1,6,2014-09-14 00:00:18,46.81006,-92.08174,9,S,1
- 1,7,2014-09-14 00:00:21,46.81006,-92.08174,53,N,1
- 1,8,2014-09-14 00:00:24,46.81006,-92.08174,63,SW,1
- 1,9,2014-09-14 00:00:27,46.81006,-92.08174,4,NE,1
- 1,10,2014-09-14 00:00:30,46.81006,-92.08174,31,N,1
- ```
-
-## Access Data Lake Storage Gen1 using HDFS commands
-Once you have configured the HDInsight cluster to use Data Lake Storage Gen1, you can use the HDFS shell commands to access the store.
-
-In this section you will SSH into the HDInsight Linux cluster you created and run the HDFS commands.
-
-* If you are using a Windows client to SSH into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Windows](../hdinsight/hdinsight-hadoop-linux-use-ssh-windows.md).
-* If you are using a Linux client to SSH into the cluster, see [Use SSH with Linux-based Hadoop on HDInsight from Linux](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-
-Once connected, use the following HDFS filesystem command to list the files in the storage account with Data Lake Storage Gen1.
-
-```console
-hdfs dfs -ls adl://<storage account with Data Lake Storage Gen1 name>.azuredatalakestore.net:443/
-```
-
-This should list the file that you uploaded earlier to Data Lake Storage Gen1.
-
-```output
-15/09/17 21:41:15 INFO web.CaboWebHdfsFileSystem: Replacing original urlConnectionFactory with org.apache.hadoop.hdfs.web.URLConnectionFactory@21a728d6
-Found 1 items
--rwxrwxrwx 0 NotSupportYet NotSupportYet 671388 2015-09-16 22:16 adl://mydatalakestoragegen1.azuredatalakestore.net:443/mynewfolder
-```
-
-You can also use the `hdfs dfs -put` command to upload some files to Data Lake Storage Gen1, and then use `hdfs dfs -ls` to verify whether the files were successfully uploaded.
-
-## See also
-* [Use Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md)
-* [Portal: Create an HDInsight cluster to use Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
-
-[makecert]: /windows-hardware/drivers/devtest/makecert
-[pvk2pfx]: /windows-hardware/drivers/devtest/pvk2pfx
data-lake-store Data Lake Store Hdinsight Hadoop Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-hdinsight-hadoop-use-resource-manager-template.md
- Title: Template - HDInsight cluster with Data Lake Storage Gen1
-description: Use Azure Resource Manager templates to create and use Azure HDInsight clusters with Azure Data Lake Storage Gen1.
-Previously updated : 05/29/2018
-# Create an HDInsight cluster with Azure Data Lake Storage Gen1 using Azure Resource Manager template
-> [!div class="op_single_selector"]
-> * [Using Portal](data-lake-store-hdinsight-hadoop-use-portal.md)
-> * [Using PowerShell (for default storage)](data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md)
-> * [Using PowerShell (for additional storage)](data-lake-store-hdinsight-hadoop-use-powershell.md)
-> * [Using Resource Manager](data-lake-store-hdinsight-hadoop-use-resource-manager-template.md)
->
->
-
-Learn how to use an Azure Resource Manager template to configure an HDInsight cluster with Azure Data Lake Storage Gen1, **as additional storage**.
-
-For supported cluster types, Data Lake Storage Gen1 can be used as a default storage or as an additional storage account. When Data Lake Storage Gen1 is used as additional storage, the default storage account for the clusters will still be Azure Blob storage (WASB) and the cluster-related files (such as logs, etc.) are still written to the default storage, while the data that you want to process can be stored in a Data Lake Storage Gen1 account. Using Data Lake Storage Gen1 as an additional storage account does not impact performance or the ability to read/write to the storage from the cluster.
-
-## Using Data Lake Storage Gen1 for HDInsight cluster storage
-
-Here are some important considerations for using HDInsight with Data Lake Storage Gen1:
-
-* Option to create HDInsight clusters with access to Data Lake Storage Gen1 as default storage is available for HDInsight version 3.5 and 3.6.
-
-* Option to create HDInsight clusters with access to Data Lake Storage Gen1 as additional storage is available for HDInsight versions 3.2, 3.4, 3.5, and 3.6.
-
-In this article, we provision a Hadoop cluster with Data Lake Storage Gen1 as additional storage. For instructions on how to create a Hadoop cluster with Data Lake Storage Gen1 as default storage, see [Create an HDInsight cluster with Data Lake Storage Gen1 using Azure portal](data-lake-store-hdinsight-hadoop-use-portal.md).
-
-## Prerequisites
--
-Before you begin this tutorial, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **Azure PowerShell 1.0 or greater**. See [How to install and configure Azure PowerShell](/powershell/azure/).
-* **Microsoft Entra service principal**. Steps in this tutorial provide instructions on how to create a service principal in Microsoft Entra ID. However, you must be a Microsoft Entra administrator to be able to create a service principal. If you are a Microsoft Entra administrator, you can skip this prerequisite and proceed with the tutorial.
-
- **If you are not a Microsoft Entra administrator**, you will not be able to perform the steps required to create a service principal. In such a case, your Microsoft Entra administrator must first create a service principal before you can create an HDInsight cluster with Data Lake Storage Gen1. Also, the service principal must be created using a certificate, as described at [Create a service principal with certificate](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-certificate-from-certificate-authority).
-
-## Create an HDInsight cluster with Data Lake Storage Gen1
-The Resource Manager template, and the prerequisites for using the template, are available on GitHub at [Deploy a HDInsight Linux cluster with new Data Lake Storage Gen1](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-datalake-store-azure-storage). Follow the instructions provided at this link to create an HDInsight cluster with Data Lake Storage Gen1 as the additional storage.
-
-The instructions at the link mentioned above require PowerShell. Before you start with those instructions, make sure you sign in to your Azure account. From your desktop, open a new Azure PowerShell window, and enter the following snippets. When prompted, sign in as one of the subscription administrators or owners:
-
-```PowerShell
-# Log in to your Azure account
-Connect-AzAccount
-
-# List all the subscriptions associated to your account
-Get-AzSubscription
-
-# Select a subscription
-Set-AzContext -SubscriptionId <subscription ID>
-```
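After you sign in and select a subscription, you can deploy the template from the same PowerShell window. The following is a minimal sketch only; the resource group name, location, and template URI are assumptions based on the quickstart repository layout, so follow the parameter guidance in the GitHub instructions linked above.

```PowerShell
# Create a resource group for the deployment (name and location are examples only).
New-AzResourceGroup -Name "adlsg1-hdi-rg" -Location "East US 2"

# Deploy the quickstart template. PowerShell prompts for the template's required parameters.
# The URI below assumes the standard layout of the azure-quickstart-templates repository;
# verify it against the instructions on GitHub before running.
New-AzResourceGroupDeployment `
    -ResourceGroupName "adlsg1-hdi-rg" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.hdinsight/hdinsight-datalake-store-azure-storage/azuredeploy.json"
```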
-
-The template deploys these resource types:
-
-* [Microsoft.DataLakeStore/accounts](/azure/templates/microsoft.datalakestore/accounts)
-* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts)
-* [Microsoft.HDInsight/clusters](/azure/templates/microsoft.hdinsight/clusters)
-
-## Upload sample data to Data Lake Storage Gen1
-The Resource Manager template creates a new storage account with Data Lake Storage Gen1 and associates it with the HDInsight cluster. You must now upload some sample data to Data Lake Storage Gen1. You'll need this data later in the tutorial to run jobs from an HDInsight cluster that access data in the storage account with Data Lake Storage Gen1. For instructions on how to upload data, see [Upload a file to Data Lake Storage Gen1](data-lake-store-get-started-portal.md#uploaddata). If you are looking for some sample data to upload, you can get the **Ambulance Data** folder from the [Azure Data Lake Git Repository](https://github.com/Azure/usql/tree/master/Examples/Samples/Data/AmbulanceData).
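If you prefer to script the upload instead of using the portal, the following Azure PowerShell sketch shows one way to do it. The account name, local path, and file name are hypothetical placeholders; replace them with your own values.

```PowerShell
# Upload a local sample data file to a folder in the Data Lake Storage Gen1 account.
Import-AzDataLakeStoreItem -AccountName "<mydatalakestoragegen1>" `
    -Path "C:\sampledata\vehicle1_09142014.csv" `
    -Destination "/mynewfolder/vehicle1_09142014.csv"
```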
-
-## Set relevant ACLs on the sample data
-To make sure the sample data you upload is accessible from the HDInsight cluster, you must ensure that the Microsoft Entra application that is used to establish identity between the HDInsight cluster and Data Lake Storage Gen1 has access to the file/folder you are trying to access. To do this, perform the following steps.
-
-1. Find the name of the Microsoft Entra application that is associated with HDInsight cluster and the storage account with Data Lake Storage Gen1. One way to look for the name is to open the HDInsight cluster blade that you created using the Resource Manager template, click the **Cluster Microsoft Entra identity** tab, and look for the value of **Service Principal Display Name**.
-2. Now, provide access to this Microsoft Entra application on the file/folder that you want to access from the HDInsight cluster. To set the right ACLs on the file/folder in Data Lake Storage Gen1, see [Securing data in Data Lake Storage Gen1](data-lake-store-secure-data.md#filepermissions).
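If you'd rather script this step, the following Azure PowerShell sketch grants the application read and execute access to a folder. The account name, folder path, and object ID are placeholders; use the object ID of the Microsoft Entra application you identified in step 1. Depending on where the data sits, you might also need execute permissions on the parent folders, as described in the article linked above.

```PowerShell
# Grant the cluster's Microsoft Entra application (identified by its object ID)
# read and execute permissions on the folder that holds the sample data.
Set-AzDataLakeStoreItemAclEntry -AccountName "<mydatalakestoragegen1>" `
    -Path "/mynewfolder" `
    -AceType User `
    -Id "00000000-0000-0000-0000-000000000000" `
    -Permissions ReadExecute
```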
-
-## Run test jobs on the HDInsight cluster to use Data Lake Storage Gen1
-After you have configured an HDInsight cluster, you can run test jobs on the cluster to test that the HDInsight cluster can access Data Lake Storage Gen1. To do so, we will run a sample Hive job that creates a table using the sample data that you uploaded earlier to your storage account with Data Lake Storage Gen1.
-
-In this section, you SSH into an HDInsight Linux cluster and run the sample Hive query. If you are using a Windows client, we recommend using **PuTTY**, which can be downloaded from [https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
-
-For more information on using PuTTY, see [Use SSH with Linux-based Hadoop on HDInsight from Windows](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-
-1. Once connected, start the Hive CLI by using the following command:
-
- ```
- hive
- ```
-2. Using the CLI, enter the following statements to create a new table named **vehicles** by using the sample data in Data Lake Storage Gen1:
-
- ```
- DROP TABLE vehicles;
- CREATE EXTERNAL TABLE vehicles (str string) LOCATION 'adl://<mydatalakestoragegen1>.azuredatalakestore.net:443/';
- SELECT * FROM vehicles LIMIT 10;
- ```
-
- You should see output similar to the following:
-
- ```
- 1,1,2014-09-14 00:00:03,46.81006,-92.08174,51,S,1
- 1,2,2014-09-14 00:00:06,46.81006,-92.08174,13,NE,1
- 1,3,2014-09-14 00:00:09,46.81006,-92.08174,48,NE,1
- 1,4,2014-09-14 00:00:12,46.81006,-92.08174,30,W,1
- 1,5,2014-09-14 00:00:15,46.81006,-92.08174,47,S,1
- 1,6,2014-09-14 00:00:18,46.81006,-92.08174,9,S,1
- 1,7,2014-09-14 00:00:21,46.81006,-92.08174,53,N,1
- 1,8,2014-09-14 00:00:24,46.81006,-92.08174,63,SW,1
- 1,9,2014-09-14 00:00:27,46.81006,-92.08174,4,NE,1
- 1,10,2014-09-14 00:00:30,46.81006,-92.08174,31,N,1
- ```
--
-## Access Data Lake Storage Gen1 using HDFS commands
-Once you have configured the HDInsight cluster to use Data Lake Storage Gen1, you can use the HDFS shell commands to access the store.
-
-In this section, you SSH into an HDInsight Linux cluster and run the HDFS commands. If you are using a Windows client, we recommend using **PuTTY**, which can be downloaded from [https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
-
-For more information on using PuTTY, see [Use SSH with Linux-based Hadoop on HDInsight from Windows](../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-
-Once connected, use the following HDFS filesystem command to list the files in the storage account with Data Lake Storage Gen1.
-
-```
-hdfs dfs -ls adl://<storage account with Data Lake Storage Gen1 name>.azuredatalakestore.net:443/
-```
-
-This should list the file that you uploaded earlier to Data Lake Storage Gen1.
-
-```
-15/09/17 21:41:15 INFO web.CaboWebHdfsFileSystem: Replacing original urlConnectionFactory with org.apache.hadoop.hdfs.web.URLConnectionFactory@21a728d6
-Found 1 items
--rwxrwxrwx 0 NotSupportYet NotSupportYet 671388 2015-09-16 22:16 adl://mydatalakestoragegen1.azuredatalakestore.net:443/mynewfolder
-```
-
-You can also use the `hdfs dfs -put` command to upload some files to Data Lake Storage Gen1, and then use `hdfs dfs -ls` to verify whether the files were successfully uploaded.
--
-## Next steps
-* [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-wasb-distcp.md)
-* [Use Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md)
data-lake-store Data Lake Store In Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-in-storage-explorer.md
-
Title: 'Manage Data Lake Storage Gen1 resources - Azure Storage Explorer'
-description: Learn how to access and manage your Azure Data Lake Storage Gen1 data and resources in Azure Storage Explorer
---- Previously updated : 06/04/2021---
-# Manage Data Lake Storage Gen1 resources by using Storage Explorer
-
-[Azure Data Lake Storage Gen1](./data-lake-store-overview.md) is a service for storing large amounts of unstructured data, such as text or binary data. You can get access to the data from anywhere via HTTP or HTTPS. Data Lake Storage Gen1 in Azure Storage Explorer enables you to access and manage Data Lake Storage Gen1 data and resources, along with other Azure entities like blobs and queues. Now you can use the same tool to manage your different Azure entities in one place.
-
-Another advantage is that you don't need to have subscription permission to manage Data Lake Storage Gen1 data. In Storage Explorer, you can attach the Data Lake Storage Gen1 path to the **Local & Attached** node as long as someone grants you permission.
-
-## Prerequisites
-
-To complete the steps in this article, you need the following prerequisites:
-
-* An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial).
-* A Data Lake Storage Gen1 account. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](./data-lake-store-get-started-portal.md).
-
-## Install Storage Explorer
-
-Install the latest Azure Storage Explorer bits from the [product webpage](https://azure.microsoft.com/features/storage-explorer/). The installation supports Windows, Linux, and Mac versions.
-
-## Connect to an Azure subscription
-
-1. In Storage Explorer, select the plug-in icon.
-
- ![Screenshot that shows where the plug-in icon is located in the user interface](./media/data-lake-store-in-storage-explorer/plug-in-icon.png)
-
- This opens the **Connect to Azure Storage** dialog box.
-1. On the **Select Resource** page, select **Subscription**.
-1. On the **Select Azure Environment** page, select the Azure environment to sign in to, and then select **Next**.
-1. In the **Sign in** dialog box, enter your Azure credentials, and then select **Next**.
-
-1. In Storage Explorer, in the **ACCOUNT MANAGEMENT** pane, select the subscription that contains the Data Lake Storage Gen1 account that you want to manage, and then select **Open Explorer**.
-1. In the **EXPLORER** pane, expand your subscription. The pane updates and displays the accounts in the selected subscription. This includes any Data Lake Storage Gen1 accounts, for example:
-
- ![Screenshot that shows an example account in the Data Lake Storage Gen1 node](./media/data-lake-store-in-storage-explorer/account-list.png)
-
-## Connect to Data Lake Storage Gen1
-
-You can access resources that don't exist in your subscription if someone gives you the URI for the resources. You can then connect to Data Lake Storage Gen1 by using the URI after you sign in.
-
-1. Open Storage Explorer.
-1. Expand **Local & Attached**.
-1. Right-click **Data Lake Storage Gen1 (Preview)**, and then select **Connect to Data Lake Storage Gen1**.
-1. Enter the URI, for example:
-
- ![Screenshot that shows the "Connect to Data Lake Store" dialog box, with the text box for entering the URI](./media/data-lake-store-in-storage-explorer/storageexplorer-adls-uri-attach-dialog.png)
-
- The tool browses to the location of the URL that you just entered.
-
- ![Shows the Data Lake Storage Gen1 account listed under the Data Lake Storage Gen1 (Preview) node in the UI](./media/data-lake-store-in-storage-explorer/storageexplorer-adls-attach-finish.png)
-
-## View the contents of a Data Lake Storage Gen1 account
-
-A Data Lake Storage Gen1 account's resources contain folders and files. The following steps show how to view the contents of a Data Lake Storage Gen1 account within Storage Explorer.
-
-1. Open Storage Explorer.
-1. Expand the subscription that contains the Data Lake Storage Gen1 account that you want to view.
-1. Expand **Data Lake Storage Gen1 (Preview)**.
-1. Select the Data Lake Storage Gen1 account that you want to view.
-
- The main pane displays the contents of the Data Lake Storage Gen1 account.
-
- ![Shows the main pane with the Data Lake Storage Gen1 account selected and a list of folders in the account](./media/data-lake-store-in-storage-explorer/storageexplorer-adls-toolbar-mainpane.png)
-
-## Manage resources in Data Lake Storage Gen1
-
-You can manage Data Lake Storage Gen1 resources by performing the following operations:
-
-* Browse through Data Lake Storage Gen1 resources across multiple Data Lake Storage Gen1 accounts.
-* Use a connection string to connect to and manage Data Lake Storage Gen1 directly.
-* View Data Lake Storage Gen1 resources shared by others through an ACL under **Local & Attached**.
-* Perform file and folder CRUD operations, including recursive folder operations and multi-file selection.
-* Drag, drop, and add a folder to quickly access recent locations. This operation mirrors the desktop File Explorer experience.
-* Copy and open a Data Lake Storage Gen1 hyperlink in Storage Explorer with one click.
-* Display the **Activities** log in the lower pane to view activity status.
-* Display folder statistics and file properties.
-
-## Manage resources in Azure Storage Explorer
-
-After you create a Data Lake Storage Gen1 account, you can:
-
-* Upload folders and files, download folders and files, and open resources on your local computer.
-* Pin to **Quick Access**, create a new folder, copy a URL, and select all.
-* Copy and paste, rename, delete, get folder statistics, and refresh.
-
-The following items show how to manage resources in a Data Lake Storage Gen1 account. Follow the steps for the task that you want to do.
-
-### Upload files
-
-1. On the main pane's toolbar, select **Upload**, and then select **Upload Files**.
-1. In the **Select files to upload** dialog box, select the files that you want to upload.
-1. Select **Open** to begin the upload.
-
-> [!NOTE]
-> You can also directly drag the files on a local computer to start uploading.
-
-### Upload a folder
-
-1. On the main pane's toolbar, select **Upload**, and then select **Upload Folder**.
-1. In the **Select folder to upload** dialog box, select the folder that you want to upload.
-1. Select **Select Folder** to begin the upload.
-
-> [!NOTE]
-> You can also directly drag a folder on a local computer to start uploading.
-
-### Download folders or files to your local computer
-
-1. Select the folders or files that you want to download.
-1. On the main pane's toolbar, select **Download**.
-1. In the **Select a folder to save the downloaded files into** dialog box, specify the location and the name.
-1. Select **Save**.
-
-### Open a folder or file from your local computer
-
-1. Select the folder or file that you want to open.
-1. On the main pane's toolbar, select **Open**. Or, right-click the selected folder or file, and then select **Open** on the shortcut menu.
-
-The file is downloaded and opened through the application that's associated with the underlying file type. Or, the folder is opened in the main pane.
-
-### Copy folders or files to the clipboard
-
-You can copy Data Lake Storage Gen1 folders or files and paste them in another Data Lake Storage Gen1 account. Copy and paste operations across storage types aren't supported. For example, you can't copy Data Lake Storage Gen1 folders or files and paste them to Azure Blob storage or the other way around.
-
-1. Select the folders or files that you want to copy.
-1. On the main pane's toolbar, select **Copy**. Or, right-click the selected folders or files, and then select **Copy** on the shortcut menu.
-1. In the navigation pane, browse to another Data Lake Storage Gen1 account, and select it to view it in the main pane.
-1. On the main pane's toolbar, select **Paste** to create a copy. Or, select **Paste** on the destination's shortcut menu.
-
-> [!NOTE]
-> The copy/paste operation works by downloading the folders or files to the local computer and then uploading them to the destination. The tool doesn't perform the action in the back end. The copy/paste operation on large files is slow.
-
-### Delete folders or files
-
-1. Select the folders or files that you want to delete.
-1. On the main pane's toolbar, select **Delete**. Or, right-click the selected folders or files, and then select **Delete** on the shortcut menu.
-1. Select **Yes** in the confirmation dialog box.
-
-### Pin to Quick Access
-
-1. Select the folder that you want to pin so that you can easily access the resources.
-1. On the main pane's toolbar, select **Pin to Quick access**.
-
- In the navigation pane, the selected folder is added to the **Quick Access** node.
-
- ![Shows the folder listed under the Quick Access node in the UI](./media/data-lake-store-in-storage-explorer/storageexplorer-adls-quick-access.png)
-
-### Use deep links
-
-If you have a URL, you can enter the URL into the address path in File Explorer or a browser. Then Storage Explorer opens automatically and goes to the location of the URL.
-
-![Shows the URL of a folder in a Data Lake Storage Gen1 account that's copied into the File Explorer window](./media/data-lake-store-in-storage-explorer/storageexplorer-adls-deep-link.png)
-
-## Next steps
-
-* View the [latest Storage Explorer release notes and videos](https://www.storageexplorer.com).
-* [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md)
-* [Get started with Azure Data Lake Storage Gen1](./data-lake-store-overview.md)
data-lake-store Data Lake Store Integrate With Other Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-integrate-with-other-services.md
- Title: Integrate Data Lake Storage Gen1 with other Azure services
-description: Understand how you can integrate Azure Data Lake Storage Gen1 with other Azure services.
---- Previously updated : 06/03/2022---
-# Integrating Azure Data Lake Storage Gen1 with other Azure services
-Azure Data Lake Storage Gen1 can be used in conjunction with other Azure services to enable a wider range of scenarios. The following article lists the services that Data Lake Storage Gen1 can be integrated with.
-
-## Use Data Lake Storage Gen1 with Azure HDInsight
-You can provision an [Azure HDInsight](https://azure.microsoft.com/documentation/learning-paths/hdinsight-self-guided-hadoop-training/) cluster that uses Data Lake Storage Gen1 as the HDFS-compliant storage. For this release, for Hadoop and Storm clusters on Windows and Linux, you can use Data Lake Storage Gen1 only as additional storage. Such clusters still use Azure Storage (WASB) as the default storage. However, for HBase clusters on Windows and Linux, you can use Data Lake Storage Gen1 as the default storage, as additional storage, or both.
-
-For instructions on how to provision an HDInsight cluster with Data Lake Storage Gen1, see:
-
-* [Provision an HDInsight cluster with Data Lake Storage Gen1 using Azure portal](data-lake-store-hdinsight-hadoop-use-portal.md)
-* [Provision an HDInsight cluster with Data Lake Storage Gen1 as default storage using Azure PowerShell](data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md)
-* [Provision an HDInsight cluster with Data Lake Storage Gen1 as additional storage using Azure PowerShell](data-lake-store-hdinsight-hadoop-use-powershell.md)
-
-## Use Data Lake Storage Gen1 with Azure Data Lake Analytics
-[Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-overview.md) enables you to work with big data at cloud scale. It dynamically provisions resources and lets you do analytics on terabytes or even exabytes of data that can be stored in a number of supported data sources, one of them being Data Lake Storage Gen1. Data Lake Analytics is specially optimized to work with Data Lake Storage Gen1, providing the highest level of performance, throughput, and parallelization for your big data workloads.
-
-For instructions on how to use Data Lake Analytics with Data Lake Storage Gen1, see [Get Started with Data Lake Analytics using Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md).
-
-## Use Data Lake Storage Gen1 with Azure Data Factory
-You can use [Azure Data Factory](https://azure.microsoft.com/services/data-factory/) to ingest data from Azure tables, Azure SQL Database, Azure SQL Data Warehouse, Azure Storage Blobs, and on-premises databases. As a first-class citizen in the Azure ecosystem, Azure Data Factory can be used to orchestrate the ingestion of data from these sources to Data Lake Storage Gen1.
-
-For instructions on how to use Azure Data Factory with Data Lake Storage Gen1, see [Move data to and from Data Lake Storage Gen1 using Data Factory](../data-factory/connector-azure-data-lake-store.md).
-
-## Copy data from Azure Storage Blobs into Data Lake Storage Gen1
-Azure Data Lake Storage Gen1 provides a command-line tool, AdlCopy, that enables you to copy data from Azure Blob Storage into a Data Lake Storage Gen1 account. For more information, see [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md).
-
-## Copy data between Azure SQL Database and Data Lake Storage Gen1
-You can use Apache Sqoop to import and export data between Azure SQL Database and Data Lake Storage Gen1. For more information, see [Copy data between Data Lake Storage Gen1 and Azure SQL Database using Sqoop](data-lake-store-data-transfer-sql-sqoop.md).
-
-## Use Data Lake Storage Gen1 with Stream Analytics
-You can use Data Lake Storage Gen1 as one of the outputs to store data streamed using Azure Stream Analytics. For more information, see [Stream data from Azure Storage Blob into Data Lake Storage Gen1 using Azure Stream Analytics](data-lake-store-stream-analytics.md).
-
-## Use Data Lake Storage Gen1 with Power BI
-You can use Power BI to import data from a Data Lake Storage Gen1 account to analyze and visualize the data. For more information, see [Analyze data in Data Lake Storage Gen1 using Power BI](data-lake-store-power-bi.md).
-
-## Use Data Lake Storage Gen1 with Data Catalog
-You can register data from Data Lake Storage Gen1 into the Azure Data Catalog to make the data discoverable throughout the organization. For more information, see [Register data from Data Lake Storage Gen1 in Azure Data Catalog](data-lake-store-with-data-catalog.md).
-
-## Use Data Lake Storage Gen1 with SQL Server Integration Services (SSIS)
-You can use the Data Lake Storage Gen1 connection manager in SSIS to connect an SSIS package with Data Lake Storage Gen1. For more information, see [Use Data Lake Storage Gen1 with SSIS](/sql/integration-services/connection-manager/azure-data-lake-store-connection-manager).
-
-## Use Data Lake Storage Gen1 with Azure Event Hubs
-You can use Azure Data Lake Storage Gen1 to archive and capture data received by Azure Event Hubs. For more information, see [Use Data Lake Storage Gen1 with Azure Event Hubs](data-lake-store-archive-eventhub-capture.md).
-
-## See also
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-* [Get Started with Data Lake Storage Gen1 using Portal](data-lake-store-get-started-portal.md)
-* [Get started with Data Lake Storage Gen1 using PowerShell](data-lake-store-get-started-powershell.md)
data-lake-store Data Lake Store Migration Cross Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-migration-cross-region.md
- Title: Azure Data Lake Storage Gen1 cross-region migration | Microsoft Docs
-description: Learn what to consider as you plan and complete a migration to Azure Data Lake Storage Gen1 as it becomes available in new regions.
---- Previously updated : 01/27/2017---
-# Migrate Azure Data Lake Storage Gen1 across regions
-
-As Azure Data Lake Storage Gen1 becomes available in new regions, you might choose to do a one-time migration, to take advantage of the new region. Learn what to consider as you plan and complete the migration.
-
-## Prerequisites
-
-* **An Azure subscription**. For more information, see [Create your free Azure account today](https://azure.microsoft.com/pricing/free-trial/).
-* **A Data Lake Storage Gen1 account in two different regions**. For more information, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md).
-* **Azure Data Factory**. For more information, see [Introduction to Azure Data Factory](../data-factory/introduction.md).
--
-## Migration considerations
-
-First, identify the migration strategy that works best for your application that writes, reads, or processes data in Data Lake Storage Gen1. When you choose a strategy, consider your application's availability requirements, and the downtime that occurs during a migration. For example, your simplest approach might be to use the "lift-and-shift" cloud migration model. In this approach, you pause the application in your existing region while all your data is copied to the new region. When the copy process is finished, you resume your application in the new region, and then delete the old Data Lake Storage Gen1 account. Downtime during the migration is required.
-
-To reduce downtime, you might immediately start ingesting new data in the new region. When you have the minimum data needed, run your application in the new region. In the background, continue to copy older data from the existing Data Lake Storage Gen1 account to the new Data Lake Storage Gen1 account in the new region. By using this approach, you can make the switch to the new region with little downtime. When all the older data has been copied, delete the old Data Lake Storage Gen1 account.
-
-Other important details to consider when planning your migration are:
-
-* **Data volume**. The volume of data (in gigabytes, the number of files and folders, and so on) affects the time and resources you need for the migration.
-
-* **Data Lake Storage Gen1 account name**. The new account name in the new region must be globally unique. For example, the name of your old Data Lake Storage Gen1 account in East US 2 might be contosoeastus2.azuredatalakestore.net. You might name your new Data Lake Storage Gen1 account in North Europe contosonortheu.azuredatalakestore.net.
-
-* **Tools**. We recommend that you use the [Azure Data Factory Copy Activity](../data-factory/connector-azure-data-lake-store.md) to copy Data Lake Storage Gen1 files. Data Factory supports data movement with high performance and reliability. Keep in mind that Data Factory copies only the folder hierarchy and content of the files. You need to manually apply any access control lists (ACLs) that you use in the old account to the new account (a PowerShell sketch of this step follows the list). For more information, including performance targets for best-case scenarios, see the [Copy Activity performance and tuning guide](../data-factory/copy-activity-performance.md). If you want data copied more quickly, you might need to use additional Cloud Data Movement Units. Some other tools, like AdlCopy, don't support copying data between regions.
-
-* **Bandwidth charges**. [Bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) apply because data is transferred out of an Azure region.
-
-* **ACLs on your data**. Secure your data in the new region by applying ACLs to files and folders. For more information, see [Securing data stored in Azure Data Lake Storage Gen1](data-lake-store-secure-data.md). We recommend that you use the migration to update and adjust your ACLs. You might want to use settings similar to your current settings. You can view the ACLs that are applied to any file by using the Azure portal, [PowerShell cmdlets](/powershell/module/az.datalakestore/get-azdatalakestoreitempermission), or SDKs.
-
-* **Location of analytics services**. For best performance, your analytics services, like Azure Data Lake Analytics or Azure HDInsight, should be in the same region as your data.
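As called out in the **Tools** consideration, Data Factory copies only the folder hierarchy and file content, so any ACLs must be reapplied on the new account. The following Azure PowerShell sketch reads the ACL on a path in the old account and reapplies an entry on the same path in the new account; the path and object ID are placeholders, and the account names reuse the examples above.

```PowerShell
# Inspect the ACL on a folder in the old (East US 2) account.
Get-AzDataLakeStoreItemAclEntry -AccountName "contosoeastus2" -Path "/data"

# Reapply an entry on the corresponding folder in the new (North Europe) account.
# The object ID below is a placeholder for the user, group, or service principal.
Set-AzDataLakeStoreItemAclEntry -AccountName "contosonortheu" -Path "/data" `
    -AceType User `
    -Id "00000000-0000-0000-0000-000000000000" `
    -Permissions ReadWrite
```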
-
-## Next steps
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
data-lake-store Data Lake Store Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-network-security.md
- Title: Network security in Azure Data Lake Storage Gen1 | Microsoft Docs
-description: Understand how virtual network integration works in Azure Data Lake Storage Gen1
---- Previously updated : 10/09/2018---
-# Virtual network integration for Azure Data Lake Storage Gen1
-
-This article introduces virtual network integration for Azure Data Lake Storage Gen1. With virtual network integration, you can configure your accounts to accept traffic only from specific virtual networks and subnets.
-
-This feature helps to secure your Data Lake Storage account from external threats.
-
-Virtual network integration for Data Lake Storage Gen1 makes use of the virtual network service endpoint security between your virtual network and Microsoft Entra ID to generate additional security claims in the access token. These claims are then used to authenticate your virtual network to your Data Lake Storage Gen1 account and allow access.
-
-> [!NOTE]
-> There's no additional charge associated with using these capabilities. Your account is billed at the standard rate for Data Lake Storage Gen1. For more information, see [pricing](https://azure.microsoft.com/pricing/details/data-lake-store/?cdn=disable). For all other Azure services that you use, see [pricing](https://azure.microsoft.com/pricing/#product-picker).
-
-## Scenarios for virtual network integration for Data Lake Storage Gen1
-
-With Data Lake Storage Gen1 virtual network integration, you can restrict access to your Data Lake Storage Gen1 account from specific virtual networks and subnets. After your account is locked to the specified virtual network subnet, other virtual networks/VMs in Azure aren't allowed access. Functionally, Data Lake Storage Gen1 virtual network integration enables the same scenario as [virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). A few key differences are detailed in the following sections.
-
-![Scenario diagram for Data Lake Storage Gen1 virtual network integration](media/data-lake-store-network-security/scenario-diagram.png)
-
-> [!NOTE]
-> The existing IP firewall rules can be used in addition to virtual network rules to allow access from on-premises networks too.
-
-## Optimal routing with Data Lake Storage Gen1 virtual network integration
-
-A key benefit of virtual network service endpoints is [optimal routing](../virtual-network/virtual-network-service-endpoints-overview.md#key-benefits) from your virtual network. You can perform the same route optimization to Data Lake Storage Gen1 accounts. Use the following [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) from your virtual network to your Data Lake Storage Gen1 account.
-
-**Data Lake Storage public IP address**: Use the public IP address for your target Data Lake Storage Gen1 accounts. To identify the IP addresses for your Data Lake Storage Gen1 account, [resolve the DNS names](./data-lake-store-connectivity-from-vnets.md#enabling-connectivity-to-azure-data-lake-storage-gen1-from-vms-with-restricted-connectivity) of your accounts. Create a separate entry for each address.
-
-```azurecli
-# Create a route table for your resource group.
-az network route-table create --resource-group $RgName --name $RouteTableName
-
-# Create route table rules for Data Lake Storage public IP addresses.
-# There's one rule per Data Lake Storage public IP address.
-az network route-table route create --name toADLSregion1 --resource-group $RgName --route-table-name $RouteTableName --address-prefix <ADLS Public IP Address> --next-hop-type Internet
-
-# Update the virtual network, and apply the newly created route table to it.
-az network vnet subnet update --vnet-name $VnetName --name $SubnetName --resource-group $RgName --route-table $RouteTableName
-```
-
-## Data exfiltration from the customer virtual network
-
-In addition to securing the Data Lake Storage accounts for access from the virtual network, you also might be interested in making sure there's no exfiltration to an unauthorized account.
-
-Use a firewall solution in your virtual network to filter the outbound traffic based on the destination account URL. Allow access to only approved Data Lake Storage Gen1 accounts.
-
-Some available options are:
-- [Azure Firewall](../firewall/overview.md): [Deploy and configure an Azure firewall](../firewall/tutorial-firewall-deploy-portal.md) for your virtual network. Secure the outbound Data Lake Storage traffic, and lock it down to the known and approved account URL.
-- [Network virtual appliance](https://azure.microsoft.com/solutions/network-appliances/) firewall: Your administrator might allow the use of only certain commercial firewall vendors. Use a network virtual appliance firewall solution that's available in the Azure Marketplace to perform the same function.
-
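With Azure Firewall, for example, the allow-list can be expressed as an application rule that permits outbound HTTPS only to the approved account URL. The following Azure PowerShell sketch assumes an existing firewall; the firewall name, resource group, source address range, and account FQDN are placeholders.

```PowerShell
# Allow outbound HTTPS from the subnet only to the approved Data Lake Storage Gen1 account.
$rule = New-AzFirewallApplicationRule -Name "Allow-ADLS-Gen1" `
    -SourceAddress "10.1.0.0/24" `
    -TargetFqdn "contosoadlsgen1.azuredatalakestore.net" `
    -Protocol "Https:443"

$collection = New-AzFirewallApplicationRuleCollection -Name "adls-egress" `
    -Priority 200 -ActionType "Allow" -Rule $rule

# Add the collection to the existing firewall and push the change.
$firewall = Get-AzFirewall -Name "myAzureFirewall" -ResourceGroupName "myResourceGroup"
$firewall.ApplicationRuleCollections.Add($collection)
Set-AzFirewall -AzureFirewall $firewall
```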
-> [!NOTE]
-> Using firewalls in the data path introduces an additional hop in the data path. It might affect the network performance for end-to-end data exchange. Throughput availability and connection latency might be affected.
-
-## Limitations
-
-- HDInsight clusters that were created before Data Lake Storage Gen1 virtual network integration support was available must be re-created to support this new feature.
-
-- When you create a new HDInsight cluster and select a Data Lake Storage Gen1 account with virtual network integration enabled, the process fails. First, disable the virtual network rule. Or, on the **Firewall and virtual networks** blade of the Data Lake Storage account, select **Allow access from all networks and services**. Then create the HDInsight cluster before finally re-enabling the virtual network rule or de-selecting **Allow access from all networks and services**. For more information, see the [Exceptions](#exceptions) section.
-
-- Data Lake Storage Gen1 virtual network integration doesn't work with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
-
-- File and folder data in your virtual network-enabled Data Lake Storage Gen1 account isn't accessible from the portal. This restriction includes access from a VM that's within the virtual network and activities such as using Data Explorer. Account management activities continue to work. File and folder data in your virtual network-enabled Data Lake Storage account is accessible via all non-portal resources. These resources include SDK access, PowerShell scripts, and other Azure services when they don't originate from the portal.
-## Configuration
-
-<a name='step-1-configure-your-virtual-network-to-use-an-azure-ad-service-endpoint'></a>
-
-### Step 1: Configure your virtual network to use a Microsoft Entra service endpoint
-
-1. Go to the Azure portal, and sign in to your account.
-
-2. [Create a new virtual network](../virtual-network/quick-create-portal.md) in your subscription. Or you can go to an existing virtual network. The virtual network must be in the same region as the Data Lake Storage Gen1 account.
-
-3. On the **Virtual network** blade, select **Service endpoints**.
-
-4. Select **Add** to add a new service endpoint.
-
- ![Add a virtual network service endpoint](media/data-lake-store-network-security/config-vnet-1.png)
-
-5. Select **Microsoft.AzureActiveDirectory** as the service for the endpoint.
-
- ![Select the Microsoft.AzureActiveDirectory service endpoint](media/data-lake-store-network-security/config-vnet-2.png)
-
-6. Select the subnets for which you intend to allow connectivity. Select **Add**.
-
- ![Select the subnet](media/data-lake-store-network-security/config-vnet-3.png)
-
-7. It can take up to 15 minutes for the service endpoint to be added. After it's added, it shows up in the list. Verify that it shows up and that all details are as configured.
-
- ![Successful addition of the service endpoint](media/data-lake-store-network-security/config-vnet-4.png)
-
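The preceding portal steps can also be scripted. The following Azure PowerShell sketch adds the **Microsoft.AzureActiveDirectory** service endpoint to an existing subnet; the resource group, virtual network name, subnet name, and address prefix are placeholders.

```PowerShell
# Add the Microsoft.AzureActiveDirectory service endpoint to an existing subnet.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVnet"

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default" `
    -AddressPrefix "10.1.0.0/24" `
    -ServiceEndpoint "Microsoft.AzureActiveDirectory"

# Persist the change to the virtual network.
$vnet | Set-AzVirtualNetwork
```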
-### Step 2: Set up the allowed virtual network or subnet for your Data Lake Storage Gen1 account
-
-1. After you configure your virtual network, [create a new Azure Data Lake Storage Gen1 account](data-lake-store-get-started-portal.md#create-a-data-lake-storage-gen1-account) in your subscription. Or you can go to an existing Data Lake Storage Gen1 account. The Data Lake Storage Gen1 account must be in the same region as the virtual network.
-
-2. Select **Firewall and virtual networks**.
-
- > [!NOTE]
- > If you don't see **Firewall and virtual networks** in the settings, log off the portal. Close the browser, and clear the browser cache. Restart the machine and retry.
-
- ![Add a virtual network rule to your Data Lake Storage account](media/data-lake-store-network-security/config-adls-1.png)
-
-3. Select **Selected networks**.
-
-4. Select **Add existing virtual network**.
-
- ![Add existing virtual network](media/data-lake-store-network-security/config-adls-2.png)
-
-5. Select the virtual networks and subnets to allow for connectivity. Select **Add**.
-
- ![Choose the virtual network and subnets](media/data-lake-store-network-security/config-adls-3.png)
-
-6. Make sure that the virtual networks and subnets show up correctly in the list. Select **Save**.
-
- ![Save the new rule](media/data-lake-store-network-security/config-adls-4.png)
-
- > [!NOTE]
- > It might take up to 5 minutes for the settings to take effect after you save.
-
-7. [Optional] On the **Firewall and virtual networks** page, in the **Firewall** section, you can allow connectivity from specific IP addresses.
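If you manage the account with scripts, the IP firewall part of this optional step can also be configured with Azure PowerShell. This is a sketch with placeholder values; the account name and IP range are examples only, and cmdlet parameters can vary slightly between Az.DataLakeStore versions.

```PowerShell
# Turn on the account firewall and allow a specific client IP range.
Set-AzDataLakeStoreAccount -Name "<mydatalakestoragegen1>" -FirewallState "Enabled"

Add-AzDataLakeStoreFirewallRule -AccountName "<mydatalakestoragegen1>" `
    -Name "AllowClientRange" `
    -StartIpAddress "203.0.113.10" `
    -EndIpAddress "203.0.113.20"
```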
-
-## Exceptions
-You can enable connectivity from Azure services and VMs outside of your selected virtual networks. On the **Firewall and virtual networks** blade, in the **Exceptions** area, select from two options:
-
-- **Allow all Azure services to access this Data Lake Storage Gen1 account**. This option allows Azure services such as Azure Data Factory, Azure Event Hubs, and all Azure VMs to communicate with your Data Lake Storage account.
-
-- **Allow Azure Data Lake Analytics to access this Data Lake Storage Gen1 account**. This option allows Data Lake Analytics connectivity to this Data Lake Storage account.
-
- ![Firewall and virtual network exceptions](media/data-lake-store-network-security/firewall-exceptions.png)
-
-We recommend that you keep these exceptions turned off. Turn them on only if you need connectivity from these other services from outside your virtual network.
data-lake-store Data Lake Store Offline Bulk Data Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-offline-bulk-data-upload.md
- Title: Upload large data set to Azure Data Lake Storage Gen1 - offline methods
-description: Use the Import/Export service to copy data from Azure Blob storage to Azure Data Lake Storage Gen1
---- Previously updated : 05/29/2018--
-# Use the Azure Import/Export service for offline copy of data to Data Lake Storage Gen1
-
-In this article, you'll learn how to copy huge data sets (>200 GB) into Data Lake Storage Gen1 by using offline copy methods, like the [Azure Import/Export service](../import-export/storage-import-export-service.md). Specifically, the file used as an example in this article is 339,420,860,416 bytes, or about 319 GB on disk. Let's call this file 319GB.tsv.
-
-The Azure Import/Export service helps you to transfer large amounts of data more securely to Azure Blob storage by shipping hard disk drives to an Azure datacenter.
-
-## Prerequisites
-
-Before you begin, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **An Azure storage account**.
-* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md).
-
-## Prepare the data
-
-Before using the Import/Export service, break the data file to be transferred **into pieces that are less than 200 GB** in size. The import tool doesn't work with files greater than 200 GB. In this article, we split the file into chunks of 100 GB each. You can do this by using the `split` command in [Cygwin](https://cygwin.com/install.html), which supports Linux commands:
-
-```console
-split -b 100G 319GB.tsv 319GB.tsv-part-
-```
-
-The split operation creates files with the following names.
-
-* *319GB.tsv-part-aa*
-* *319GB.tsv-part-ab*
-* *319GB.tsv-part-ac*
-* *319GB.tsv-part-ad*
-
-## Get disks ready with data
-
-Follow the instructions in [Using the Azure Import/Export service](../import-export/storage-import-export-service.md) (under the **Prepare your drives** section) to prepare your hard drives. Here's the overall sequence:
-
-1. Procure a hard disk that meets the requirement to be used for the Azure Import/Export service.
-2. Identify an Azure storage account where the data will be copied after it is shipped to the Azure datacenter.
-3. Use the [Azure Import/Export Tool](https://go.microsoft.com/fwlink/?LinkID=301900&clcid=0x409), a command-line utility. Here's a sample snippet that shows how to use the tool.
-
- ```
- WAImportExport PrepImport /sk:<StorageAccountKey> /t: <TargetDriveLetter> /format /encrypt /logdir:e:\myexportimportjob\logdir /j:e:\myexportimportjob\journal1.jrn /id:myexportimportjob /srcdir:F:\demo\ExImContainer /dstdir:importcontainer/vf1/
- ```
- See [Using the Azure Import/Export service](../import-export/storage-import-export-service.md) for more sample snippets.
-4. The preceding command creates a journal file at the specified location. Use this journal file to create an import job from the [Azure portal](https://portal.azure.com).
-
-## Create an import job
-
-You can now create an import job by using the instructions in [Using the Azure Import/Export service](../import-export/storage-import-export-service.md) (under the **Create the Import job** section). For this import job, with other details, also provide the journal file created while preparing the disk drives.
-
-## Physically ship the disks
-
-You can now physically ship the disks to an Azure datacenter. There, the data is copied over to the Azure Storage blobs you provided while creating the import job. Also, while creating the job, if you opted to provide the tracking information later, you can now go back to your import job and update the tracking number.
-
-## Copy data from blobs to Data Lake Storage Gen1
-
-After the status of the import job shows that it's completed, you can verify whether the data is available in the Azure Storage blobs you had specified. You can then use a variety of methods to move that data from the blobs to Azure Data Lake Storage Gen1. For all the available options for uploading data, see [Ingesting data into Data Lake Storage Gen1](data-lake-store-data-scenarios.md#ingest-data-into-data-lake-storage-gen1).
-
-In this section, we provide you with the JSON definitions that you can use to create an Azure Data Factory pipeline for copying data. You can use these JSON definitions from the [Azure portal](../data-factory/tutorial-copy-data-portal.md) or [Visual Studio](../data-factory/tutorial-copy-data-dot-net.md).
-
-### Source linked service (Azure Storage blob)
-
-```JSON
-{
- "name": "AzureStorageLinkedService",
- "properties": {
- "type": "AzureStorage",
- "description": "",
- "typeProperties": {
- "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
- }
- }
-}
-```
-
-### Target linked service (Data Lake Storage Gen1)
-
-```JSON
-{
- "name": "AzureDataLakeStorageGen1LinkedService",
- "properties": {
- "type": "AzureDataLakeStore",
- "description": "",
- "typeProperties": {
- "authorization": "<Click 'Authorize' to allow this data factory and the activities it runs to access this Data Lake Storage Gen1 account with your access rights>",
- "dataLakeStoreUri": "https://<adlsg1_account_name>.azuredatalakestore.net/webhdfs/v1",
- "sessionId": "<OAuth session id from the OAuth authorization session. Each session id is unique and may only be used once>"
- }
- }
-}
-```
-
-### Input data set
-
-```JSON
-{
- "name": "InputDataSet",
- "properties": {
- "published": false,
- "type": "AzureBlob",
- "linkedServiceName": "AzureStorageLinkedService",
- "typeProperties": {
- "folderPath": "importcontainer/vf1/"
- },
- "availability": {
- "frequency": "Hour",
- "interval": 1
- },
- "external": true,
- "policy": {}
- }
-}
-```
-
-### Output data set
-
-```JSON
-{
-"name": "OutputDataSet",
-"properties": {
- "published": false,
- "type": "AzureDataLakeStore",
- "linkedServiceName": "AzureDataLakeStorageGen1LinkedService",
- "typeProperties": {
- "folderPath": "/importeddatafeb8job/"
- },
- "availability": {
- "frequency": "Hour",
- "interval": 1
- }
- }
-}
-```
-
-### Pipeline (copy activity)
-
-```JSON
-{
- "name": "CopyImportedData",
- "properties": {
- "description": "Pipeline with copy activity",
- "activities": [
- {
- "type": "Copy",
- "typeProperties": {
- "source": {
- "type": "BlobSource"
- },
- "sink": {
- "type": "AzureDataLakeStoreSink",
- "copyBehavior": "PreserveHierarchy",
- "writeBatchSize": 0,
- "writeBatchTimeout": "00:00:00"
- }
- },
- "inputs": [
- {
- "name": "InputDataSet"
- }
- ],
- "outputs": [
- {
- "name": "OutputDataSet"
- }
- ],
- "policy": {
- "timeout": "01:00:00",
- "concurrency": 1
- },
- "scheduler": {
- "frequency": "Hour",
- "interval": 1
- },
- "name": "AzureBlobtoDataLake",
- "description": "Copy Activity"
- }
- ],
- "start": "2016-02-08T22:00:00Z",
- "end": "2016-02-08T23:00:00Z",
- "isPaused": false,
- "pipelineMode": "Scheduled"
- }
-}
-```
-
-For more information, see [Move data from Azure Storage blob to Azure Data Lake Storage Gen1 using Azure Data Factory](../data-factory/connector-azure-data-lake-store.md).
-
-## Reconstruct the data files in Data Lake Storage Gen1
--
-We started with a file that was 319 GB, and broke it down into files of smaller size so that it could be transferred by using the Azure Import/Export service. Now that the data is in Azure Data Lake Storage Gen1, we can reconstruct the file to its original size. You can use the following Azure PowerShell cmdlets to do so.
-
-```PowerShell
-# Login to our account
-Connect-AzAccount
-
-# List your subscriptions
-Get-AzSubscription
-
-# Switch to the subscription you want to work with
-Set-AzContext -SubscriptionId <subscription ID>
-Register-AzResourceProvider -ProviderNamespace "Microsoft.DataLakeStore"
-
-# Join the files
-Join-AzDataLakeStoreItem -AccountName "<adlsg1_account_name>" -Paths "/importeddatafeb8job/319GB.tsv-part-aa","/importeddatafeb8job/319GB.tsv-part-ab", "/importeddatafeb8job/319GB.tsv-part-ac", "/importeddatafeb8job/319GB.tsv-part-ad" -Destination "/importeddatafeb8job/MergedFile.csv"
-```
-
-## Next steps
-
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-overview.md
- Title: What is Azure Data Lake Storage Gen1? | Microsoft Docs
-description: Overview of Data Lake Storage Gen1 (previously known as Azure Data Lake Store), and the value it provides over other data stores
---- Previously updated : 04/17/2019----
-# What is Azure Data Lake Storage Gen1?
--
-Azure Data Lake Storage Gen1 is an enterprise-wide hyper-scale repository for big data analytic workloads. Azure Data Lake enables you to capture data of any size, type, and ingestion speed in one single place for operational and exploratory analytics.
-
-Data Lake Storage Gen1 can be accessed from Hadoop (available with HDInsight cluster) using the WebHDFS-compatible REST APIs. It's designed to enable analytics on the stored data and is tuned for performance for data analytics scenarios. Data Lake Storage Gen1 includes all enterprise-grade capabilities: security, manageability, scalability, reliability, and availability.
-
-![Azure Data Lake](./media/data-lake-store-overview/data-lake-store-concept.png)
-
-## Key capabilities
-
-Some of the key capabilities of Data Lake Storage Gen1 include the following.
-
-### Built for Hadoop
-
-Data Lake Storage Gen1 is an Apache Hadoop file system that's compatible with Hadoop Distributed File System (HDFS), and works with the Hadoop ecosystem. Your existing HDInsight applications or services that use the WebHDFS API can easily integrate with Data Lake Storage Gen1. Data Lake Storage Gen1 also exposes a WebHDFS-compatible REST interface for applications.
-
-You can easily analyze data stored in Data Lake Storage Gen1 using Hadoop analytic frameworks such as MapReduce or Hive. You can provision Azure HDInsight clusters and configure them to directly access data stored in Data Lake Storage Gen1.
-
-### Unlimited storage, petabyte files
-
-Data Lake Storage Gen1 provides unlimited storage and can store a variety of data for analytics. It doesn't impose any limits on account sizes, file sizes, or the amount of data that can be stored in a data lake. Individual files can range from a few kilobytes to petabytes in size. Data is stored durably by making multiple copies. There's no limit on how long data can be stored in the data lake.
-
-### Performance-tuned for big data analytics
-
-Data Lake Storage Gen1 is built for running large-scale analytic systems that require massive throughput to query and analyze large amounts of data. The data lake spreads parts of a file over a number of individual storage servers. This improves the read throughput when reading the file in parallel for performing data analytics.
-
-### Enterprise ready: Highly available and secure
-
-Data Lake Storage Gen1 provides industry-standard availability and reliability. Your data assets are stored durably by making redundant copies to guard against any unexpected failures.
-
-Data Lake Storage Gen1 also provides enterprise-grade security for the stored data. For more information, see [Securing data in Azure Data Lake Storage Gen1](#DataLakeStoreSecurity).
-
-### All data
-
-Data Lake Storage Gen1 can store any data in its native format, without requiring any prior transformations. Data Lake Storage Gen1 does not require a schema to be defined before the data is loaded, leaving it up to the individual analytic framework to interpret the data and define a schema at the time of the analysis. The ability to store files of arbitrary sizes and formats makes it possible for Data Lake Storage Gen1 to handle structured, semi-structured, and unstructured data.
-
-Data Lake Storage Gen1 containers for data are essentially folders and files. You operate on the stored data using SDKs, the Azure portal, and Azure PowerShell. If you put your data into the store using these interfaces and using the appropriate containers, you can store any type of data. Data Lake Storage Gen1 does not perform any special handling of data based on the type of data it stores.
-
-## <a name="DataLakeStoreSecurity"></a>Securing data
-
-Data Lake Storage Gen1 uses Microsoft Entra ID for authentication, and access control lists (ACLs) to manage access to your data.
-
-| Feature | Description |
-| | |
-| Authentication |Data Lake Storage Gen1 integrates with Microsoft Entra ID for identity and access management for all the data stored in Data Lake Storage Gen1. Because of the integration, Data Lake Storage Gen1 benefits from all Microsoft Entra features, such as multi-factor authentication, Conditional Access, Azure role-based access control, application usage monitoring, security monitoring and alerting, and so on. Data Lake Storage Gen1 supports the OAuth 2.0 protocol for authentication within the REST interface. See [Data Lake Storage Gen1 authentication](data-lakes-store-authentication-using-azure-active-directory.md).|
-| Access control |Data Lake Storage Gen1 provides access control by supporting POSIX-style permissions exposed by the WebHDFS protocol. You can enable ACLs on the root folder, on subfolders, and on individual files. For more information about how ACLs work in the context of Data Lake Storage Gen1, see [Access control in Data Lake Storage Gen1](data-lake-store-access-control.md). |
-| Encryption |Data Lake Storage Gen1 also provides encryption for data that's stored in the account. You specify the encryption settings while creating a Data Lake Storage Gen1 account. You can choose to have your data encrypted or opt for no encryption. For more information, see [Encryption in Data Lake Storage Gen1](data-lake-store-encryption.md). For instructions on how to provide encryption-related configuration, see [Get started with Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). |
-
-For instructions on how to secure data in Data Lake Storage Gen1, see [Securing data in Azure Data Lake Storage Gen1](data-lake-store-secure-data.md).
-
-## Application compatibility
-
-Data Lake Storage Gen1 is compatible with most open-source components in the Hadoop ecosystem. It also integrates well with other Azure services. To learn more about how you can use Data Lake Storage Gen1 with open-source components and other Azure services, use the following links:
-- See [Applications and services compatible with Azure Data Lake Storage Gen1](data-lake-store-compatible-oss-other-applications.md) for a list of open-source applications interoperable with Data Lake Storage Gen1.
-- See [Integrating with other Azure services](data-lake-store-integrate-with-other-services.md) to understand how to use Data Lake Storage Gen1 with other Azure services to enable a wider range of scenarios.
-- See [Scenarios for using Data Lake Storage Gen1](data-lake-store-data-scenarios.md) to learn how to use Data Lake Storage Gen1 in scenarios such as ingesting data, processing data, downloading data, and visualizing data.
-
-## Data Lake Storage Gen1 file system
-
-Data Lake Storage Gen1 can be accessed via the filesystem AzureDataLakeFilesystem (adl://) in Hadoop environments (available with HDInsight cluster). Applications and services that use adl:// can take advantage of further performance optimizations that aren't currently available in WebHDFS. As a result, Data Lake Storage Gen1 gives you the flexibility to either make use of the best performance with the recommended option of using adl:// or maintain existing code by continuing to use the WebHDFS API directly. Azure HDInsight fully leverages the AzureDataLakeFilesystem to provide the best performance on Data Lake Storage Gen1.
-
-You can access your data in Data Lake Storage Gen1 using `adl://<data_lake_storage_gen1_name>.azuredatalakestore.net`. For more information about how to access the data in Data Lake Storage Gen1, see [View properties of the stored data](data-lake-store-get-started-portal.md#properties).
-
-## Next steps
-- [Get started with Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md)
-- [Get started with Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md)
-- [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Performance Tuning Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-guidance.md
- Title: Azure Data Lake Storage Gen1 - performance tuning
-description: Learn how using all available throughput in Azure Data Lake Storage Gen1 is important to get the best performance by performing as many reads and writes in parallel as possible.
---- Previously updated : 06/30/2017---
-# Tune Azure Data Lake Storage Gen1 for performance
-
-Data Lake Storage Gen1 supports high throughput for I/O-intensive analytics and data movement. In Data Lake Storage Gen1, using all available throughput (the amount of data that can be read or written per second) is important to get the best performance. You achieve this by performing as many reads and writes in parallel as possible.
-
-![Data Lake Storage Gen1 performance](./media/data-lake-store-performance-tuning-guidance/throughput.png)
-
-Data Lake Storage Gen1 can scale to provide the necessary throughput for all analytics scenarios. By default, a Data Lake Storage Gen1 account automatically provides enough throughput to meet the needs of a broad category of use cases. For cases where customers run into the default limit, the Data Lake Storage Gen1 account can be configured to provide more throughput by contacting Microsoft support.
-
-## Data ingestion
-
-When ingesting data from a source system to Data Lake Storage Gen1, it's important to consider that the source hardware, source network hardware, and network connectivity to Data Lake Storage Gen1 can be the bottleneck.
-
-![Diagram that shows that the source hardware, source network hardware, and network connectivity to Data Lake Storage Gen1 can be the bottleneck.](./media/data-lake-store-performance-tuning-guidance/bottleneck.png)
-
-It's important to ensure that the data movement is not affected by these factors.
-
-### Source hardware
-
-Whether you're using on-premises machines or VMs in Azure, you should carefully select the appropriate hardware. For Source Disk Hardware, prefer SSDs to HDDs and pick disk hardware with faster spindles. For Source Network Hardware, use the fastest NICs possible. On Azure, we recommend Azure D14 VMs that have the appropriately powerful disk and networking hardware.
-
-### Network connectivity to Data Lake Storage Gen1
-
-The network connectivity between your source data and Data Lake Storage Gen1 can sometimes be the bottleneck. When your source data is on-premises, consider using a dedicated link with [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/). If your source data is in Azure, performance is best when the data is in the same Azure region as the Data Lake Storage Gen1 account.
-
-### Configure data ingestion tools for maximum parallelization
-
-After you've addressed the source hardware and network connectivity bottlenecks, you're ready to configure your ingestion tools. The following table summarizes the key settings for several popular ingestion tools and provides in-depth performance tuning articles for them. To learn more about which tool to use for your scenario, visit this [article](./data-lake-store-data-scenarios.md).
-
-| Tool | Settings | More Details |
-|--|--|--|
-| PowerShell | PerFileThreadCount, ConcurrentFileCount | [Link](./data-lake-store-get-started-powershell.md) |
-| AdlCopy | Azure Data Lake Analytics units | [Link](./data-lake-store-copy-data-azure-storage-blob.md#performance-considerations-for-using-adlcopy) |
-| DistCp | -m (mapper) | [Link](./data-lake-store-copy-data-wasb-distcp.md#performance-considerations-while-using-distcp) |
-| Azure Data Factory| parallelCopies | [Link](../data-factory/copy-activity-performance.md) |
-| Sqoop | fs.azure.block.size, -m (mapper) | [Link](/archive/blogs/shanyu/performance-tuning-for-hdinsight-storm-and-microsoft-azure-eventhubs) |
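-
-For example, DistCp's mapper count (`-m`) controls how many parallel copy tasks run. The following is a minimal sketch of copying data from Azure Storage blobs into Data Lake Storage Gen1; the storage account, container, mapper count, and paths are illustrative placeholders that you should tune for your cluster:
-
-```console
-hadoop distcp -m 128 \
-  wasb://mycontainer@mystorageaccount.blob.core.windows.net/sourcefolder \
-  adl://<data_lake_storage_gen1_name>.azuredatalakestore.net/destfolder
-```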
-
-## Structure your data set
-
-When data is stored in Data Lake Storage Gen1, the file size, number of files, and folder structure affect performance. The following section describes best practices in these areas.
-
-### File size
-
-Typically, analytics engines such as HDInsight and Azure Data Lake Analytics have a per-file overhead. If you store your data as many small files, this can negatively affect performance.
-
-In general, organize your data into larger sized files for better performance. As a rule of thumb, organize data sets in files of 256 MB or larger. In some cases such as images and binary data, it is not possible to process them in parallel. In these cases, it is recommended to keep individual files under 2 GB.
-
-Sometimes, data pipelines have limited control over raw data that arrives as lots of small files. In that case, it's recommended to have a "cooking" process that generates larger files to use for downstream applications.
-
-### Organize time-series data in folders
-
-For Hive and ADLA workloads, partition pruning of time-series data can help some queries read only a subset of the data, which improves performance.
-
-Pipelines that ingest time-series data often use a structured naming convention for files and folders. The following is a common example we see for data that is structured by date: *\DataSet\YYYY\MM\DD\datafile_YYYY_MM_DD.tsv*.
-
-Notice that the datetime information appears both as folders and in the filename.
-
-For date and time, the following is a common pattern: *\DataSet\YYYY\MM\DD\HH\mm\datafile_YYYY_MM_DD_HH_mm.tsv*.
-
-Again, the choice you make with the folder and file organization should optimize for the larger file sizes and a reasonable number of files in each folder.
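-
-As a hedged illustration, a daily partition for the layout above could be created and populated with standard Hadoop commands; the account name, date, and file name are placeholders:
-
-```console
-hdfs dfs -mkdir -p adl://<data_lake_storage_gen1_name>.azuredatalakestore.net/DataSet/2017/06/30
-hdfs dfs -put datafile_2017_06_30.tsv adl://<data_lake_storage_gen1_name>.azuredatalakestore.net/DataSet/2017/06/30/
-```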
-
-## Optimize I/O intensive jobs on Hadoop and Spark workloads on HDInsight
-
-Jobs fall into one of the following three categories:
-
-* **CPU intensive.** These jobs have long computation times with minimal I/O times. Examples include machine learning and natural language processing jobs.
-* **Memory intensive.** These jobs use lots of memory. Examples include PageRank and real-time analytics jobs.
-* **I/O intensive.** These jobs spend most of their time doing I/O. A common example is a copy job that does only read and write operations. Other examples include data preparation jobs that read large amounts of data, perform some data transformation, and then write the data back to the store.
-
-The following guidance is only applicable to I/O intensive jobs.
-
-### General considerations for an HDInsight cluster
-
-* **HDInsight versions.** For best performance, use the latest release of HDInsight.
-* **Regions.** Place the Data Lake Storage Gen1 account in the same region as the HDInsight cluster.
-
-An HDInsight cluster is composed of two head nodes and some worker nodes. Each worker node provides a specific number of cores and memory, which is determined by the VM type. When running a job, YARN is the resource negotiator that allocates the available memory and cores to create containers. Each container runs the tasks needed to complete the job. Containers run in parallel to process tasks quickly. Therefore, performance is improved by running as many parallel containers as possible.
-
-There are three layers within an HDInsight cluster that can be tuned to increase the number of containers and use all available throughput.
-
-* **Physical layer**
-* **YARN layer**
-* **Workload layer**
-
-### Physical layer
-
-**Run cluster with more nodes and/or larger sized VMs.** A larger cluster will enable you to run more YARN containers as shown in the picture below.
-
-![Diagram that shows the use of more YARN containers.](./media/data-lake-store-performance-tuning-guidance/VM.png)
-
-**Use VMs with more network bandwidth.** The amount of network bandwidth can be a bottleneck if there is less network bandwidth than Data Lake Storage Gen1 throughput. Different VMs have varying network bandwidth sizes. Choose a VM type that has the largest possible network bandwidth.
-
-### YARN layer
-
-**Use smaller YARN containers.** Reduce the size of each YARN container to create more containers with the same amount of resources.
-
-![Diagram that shows the use of smaller YARN containers.](./media/data-lake-store-performance-tuning-guidance/small-containers.png)
-
-Depending on your workload, there will always be a minimum YARN container size that is needed. If you pick too small a container, your jobs will run into out-of-memory issues. Typically YARN containers should be no smaller than 1 GB. It's common to see 3 GB YARN containers. For some workloads, you may need larger YARN containers.
-
-**Increase cores per YARN container.** Increase the number of cores allocated to each container to increase the number of parallel tasks that run in each container. This works for applications like Spark, which run multiple tasks per container. For applications like Hive that run a single thread in each container, it's better to have more containers rather than more cores per container.
-
-### Workload layer
-
-**Use all available containers.** Set the number of tasks to be equal or larger than the number of available containers so that all resources are used.
-
-![Diagram that shows the use of all available containers.](./media/data-lake-store-performance-tuning-guidance/use-containers.png)
-
-**Failed tasks are costly.** If each task has a large amount of data to process, then failure of a task results in an expensive retry. Therefore, it's better to create more tasks, each of which processes a small amount of data.
-
-In addition to the general guidelines above, each application has different parameters available to tune for that specific application. The table below lists some of the parameters and links to get started with performance tuning for each application.
-
-| Workload | Parameter to set tasks |
-|--|-|
-| [Spark on HDInsight](data-lake-store-performance-tuning-spark.md) | <ul><li>Num-executors</li><li>Executor-memory</li><li>Executor-cores</li></ul> |
-| [Hive on HDInsight](data-lake-store-performance-tuning-hive.md) | <ul><li>hive.tez.container.size</li></ul> |
-| [MapReduce on HDInsight](data-lake-store-performance-tuning-mapreduce.md) | <ul><li>Mapreduce.map.memory</li><li>Mapreduce.job.maps</li><li>Mapreduce.reduce.memory</li><li>Mapreduce.job.reduces</li></ul> |
-| [Storm on HDInsight](data-lake-store-performance-tuning-storm.md)| <ul><li>Number of worker processes</li><li>Number of spout executor instances</li><li>Number of bolt executor instances </li><li>Number of spout tasks</li><li>Number of bolt tasks</li></ul>|
-
-## See also
-
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-* [Get Started with Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
data-lake-store Data Lake Store Performance Tuning Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-hive.md
- Title: Performance tuning - Hive on Azure Data Lake Storage Gen1
-description: Learn about performance tuning for Hive on HDInsight and Azure Data Lake Storage Gen1. For I/O intensive queries, tune Hive to get better performance.
---- Previously updated : 12/19/2016---
-# Performance tuning guidance for Hive on HDInsight and Azure Data Lake Storage Gen1
-
-The default settings have been set to provide good performance across many different use cases. For I/O intensive queries, Hive can be tuned to get better performance with Azure Data Lake Storage Gen1.
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **A Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-* **Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). Make sure you enable Remote Desktop for the cluster.
-* **Running Hive on HDInsight**. To learn about running Hive jobs on HDInsight, see [Use Hive on HDInsight](../hdinsight/hadoop/hdinsight-use-hive.md)
-* **Performance tuning guidelines on Data Lake Storage Gen1**. For general performance concepts, see [Data Lake Storage Gen1 Performance Tuning Guidance](./data-lake-store-performance-tuning-guidance.md)
-
-## Parameters
-
-Here are the most important settings to tune for improved Data Lake Storage Gen1 performance:
-
-* **hive.tez.container.size** – the amount of memory used by each task
-
-* **tez.grouping.min-size** – minimum size of each mapper
-
-* **tez.grouping.max-size** – maximum size of each mapper
-
-* **hive.exec.reducers.bytes.per.reducer** – size of each reducer
-
-**hive.tez.container.size** – The container size determines how much memory is available for each task. This is the main input for controlling the concurrency in Hive.
-
-**tez.grouping.min-size** – This parameter allows you to set the minimum size of each mapper. If the mapper size that Tez chooses is smaller than the value of this parameter, Tez uses the value set here.
-
-**tez.grouping.max-size** – This parameter allows you to set the maximum size of each mapper. If the mapper size that Tez chooses is larger than the value of this parameter, Tez uses the value set here.
-
-**hive.exec.reducers.bytes.per.reducer** – This parameter sets the size of each reducer. By default, each reducer is 256 MB.
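-
-As an illustrative sketch only (the values below are examples, not recommendations), these parameters can be set per session at the top of a Hive query. The container size is in megabytes; the split and reducer settings are in bytes:
-
-```console
-SET hive.tez.container.size=3072;                    -- 3 GB per Tez container
-SET tez.grouping.min-size=134217728;                 -- 128 MB minimum split per mapper
-SET tez.grouping.max-size=1073741824;                -- 1 GB maximum split per mapper
-SET hive.exec.reducers.bytes.per.reducer=268435456;  -- 256 MB per reducer
-```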
-
-## Guidance
-
-**Set hive.exec.reducers.bytes.per.reducer** – The default value works well when the data is uncompressed. For data that is compressed, you should reduce the size of the reducer.
-
-**Set hive.tez.container.size** – In each node, memory is specified by yarn.nodemanager.resource.memory-mb and should be correctly set on an HDInsight cluster by default. For additional information on setting the appropriate memory in YARN, see this [post](../hdinsight/hdinsight-hadoop-hive-out-of-memory-error-oom.md).
-
-I/O intensive workloads can benefit from more parallelism by decreasing the Tez container size. This gives the user more containers, which increases concurrency. However, some Hive queries require a significant amount of memory (for example, MapJoin). If the task does not have enough memory, you get an out-of-memory exception at runtime. If you receive out-of-memory exceptions, you should increase the memory.
-
-The number of tasks that run concurrently (the parallelism) is bounded by the total YARN memory. The number of YARN containers dictates how many concurrent tasks can run. To find the YARN memory per node, you can go to Ambari. Navigate to YARN and view the Configs tab. The YARN memory is displayed in this window.
-
-> Total YARN memory = nodes * YARN memory per node
-> Number of YARN containers = Total YARN memory / Tez container size
-
-The key to improving performance using Data Lake Storage Gen1 is to increase the concurrency as much as possible. Tez automatically calculates the number of tasks that should be created so you do not need to set it.
-
-## Example Calculation
-
-Let's say you have an 8 node D14 cluster.
-
-> Total YARN memory = nodes * YARN memory per node
-> Total YARN memory = 8 nodes * 96GB = 768GB
-> Number of YARN containers = 768GB / 3072MB = 256
-
-## Limitations
-
-**Data Lake Storage Gen1 throttling**
-
-If you hit the limits of bandwidth provided by Data Lake Storage Gen1, you would start to see task failures. This could be identified by observing throttling errors in task logs. You can decrease the parallelism by increasing Tez container size. If you need more concurrency for your job, please contact us.
-
-To check if you are getting throttled, you need to enable the debug logging on the client side. Here's how you can do that:
-
-1. Put the following property in the log4j properties in the Hive config. This can be done from the Ambari view: log4j.logger.com.microsoft.azure.datalake.store=DEBUG
-
-2. Restart all the nodes/services for the config to take effect.
-
-3. If you are getting throttled, you'll see the HTTP 429 error code in the Hive log file. The Hive log file is in /tmp/&lt;user&gt;/hive.log
-
-## Further information on Hive tuning
-
-Here are a few blogs that will help tune your Hive queries:
-* [Optimize Hive queries for Hadoop in HDInsight](../hdinsight/hdinsight-hadoop-optimize-hive-query.md)
-* [Encoding the Hive query file in Azure HDInsight](/archive/blogs/bigdatasupport/encoding-the-hive-query-file-in-azure-hdinsight)
-* Ignite talk on optimizing Hive on HDInsight
data-lake-store Data Lake Store Performance Tuning Mapreduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-mapreduce.md
- Title: Azure Data Lake Storage Gen1 performance tuning - MapReduce
-description: Learn about performance tuning for MapReduce in Azure Data Lake Storage Gen1, including parameters, guidance, an example calculation, and limitations.
---- Previously updated : 12/19/2016---
-# Performance tuning guidance for MapReduce on HDInsight and Azure Data Lake Storage Gen1
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-* **Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). Make sure you enable Remote Desktop for the cluster.
-* **Using MapReduce on HDInsight**. For more information, see [Use MapReduce in Hadoop on HDInsight](../hdinsight/hadoop/hdinsight-use-mapreduce.md)
-* **Review performance tuning guidelines for Data Lake Storage Gen1**. For general performance concepts, see [Data Lake Storage Gen1 Performance Tuning Guidance](./data-lake-store-performance-tuning-guidance.md)
-
-## Parameters
-
-When running MapReduce jobs, here are the most important parameters that you can configure to increase performance on Data Lake Storage Gen1:
-
-|Parameter | Description |
-|--|--|
-|`Mapreduce.map.memory.mb` | The amount of memory to allocate to each mapper. |
-|`Mapreduce.job.maps` | The number of map tasks per job. |
-|`Mapreduce.reduce.memory.mb` | The amount of memory to allocate to each reducer. |
-|`Mapreduce.job.reduces` | The number of reduce tasks per job. |
-
-### Mapreduce.map.memory / Mapreduce.reduce.memory
-
-Adjust this number based on how much memory is needed for the map and/or reduce task. You can view the default values of `mapreduce.map.memory` and `mapreduce.reduce.memory` in Ambari via the Yarn configuration. In Ambari, navigate to YARN and view the **Configs** tab. The YARN memory will be displayed.
-
-### Mapreduce.job.maps / Mapreduce.job.reduces
-
-This determines the maximum number of mappers or reducers to create. The number of splits determines how many mappers are created for the MapReduce job. Therefore, you may get fewer mappers than you requested if there are fewer splits than the number of mappers requested.
-
-## Guidance
-
-### Step 1: Determine number of jobs running
-
-By default, MapReduce will use the entire cluster for your job. You can use less of the cluster by using fewer mappers than there are available containers. The guidance in this document assumes that your application is the only application running on your cluster.
-
-### Step 2: Set mapreduce.map.memory/mapreduce.reduce.memory
-
-The size of the memory for map and reduce tasks will be dependent on your specific job. You can reduce the memory size if you want to increase concurrency. The number of concurrently running tasks depends on the number of containers. By decreasing the amount of memory per mapper or reducer, more containers can be created, which enables more mappers or reducers to run concurrently. Decreasing the amount of memory too much may cause some processes to run out of memory. If you get a heap error when running your job, increase the memory per mapper or reducer. Consider that adding more containers adds extra overhead for each additional container, which can potentially degrade performance. Another alternative is to get more memory by using a cluster that has higher amounts of memory or increasing the number of nodes in your cluster. More memory will enable more containers to be used, which means more concurrency.
-
-### Step 3: Determine total YARN memory
-
-To tune mapreduce.job.maps/mapreduce.job.reduces, consider the amount of total YARN memory available for use. This information is available in Ambari. Navigate to YARN and view the **Configs** tab. The YARN memory is displayed in this window. Multiply the YARN memory per node by the number of nodes in your cluster to get the total YARN memory.
-
-`Total YARN memory = nodes * YARN memory per node`
-
-If you're using an empty cluster, then memory can be the total YARN memory for your cluster. If other applications are using memory, then you can choose to only use a portion of your cluster's memory by reducing the number of mappers or reducers to the number of containers you want to use.
-
-### Step 4: Calculate number of YARN containers
-
-YARN containers dictate the amount of concurrency available for the job. Take total YARN memory and divide that by mapreduce.map.memory.
-
-`# of YARN containers = total YARN memory / mapreduce.map.memory`
-
-### Step 5: Set mapreduce.job.maps/mapreduce.job.reduces
-
-Set mapreduce.job.maps/mapreduce.job.reduces to at least the number of available containers. You can experiment further by increasing the number of mappers and reducers to see if you get better performance. Keep in mind that more mappers will have additional overhead so having too many mappers may degrade performance.
-
-CPU scheduling and CPU isolation are turned off by default so the number of YARN containers is constrained by memory.
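-
-As an illustrative sketch, these settings can be passed per job with `-D` options; the example job, reducer count, and adl:// paths below are placeholders that you should adapt to your own workload:
-
-```console
-yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar wordcount \
-  -Dmapreduce.map.memory.mb=3072 \
-  -Dmapreduce.reduce.memory.mb=3072 \
-  -Dmapreduce.job.maps=256 \
-  -Dmapreduce.job.reduces=64 \
-  adl://example/data/input adl://example/data/output
-```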
-
-## Example calculation
-
-Let's say you currently have a cluster composed of 8 D14 nodes and you want to run an I/O intensive job. Here are the calculations you should do:
-
-### Step 1: Determine number of jobs running
-
-For our example, we assume that our job is the only one running.
-
-### Step 2: Set mapreduce.map.memory/mapreduce.reduce.memory
-
-For our example, you're running an I/O intensive job and decide that 3 GB of memory for map tasks is sufficient.
-
-`mapreduce.map.memory = 3GB`
-
-### Step 3: Determine total YARN memory
-
-`total memory from the cluster is 8 nodes * 96GB of YARN memory for a D14 = 768GB`
-
-### Step 4: Calculate # of YARN containers
-
-`# of YARN containers = 768 GB of available memory / 3 GB of memory = 256`
-
-### Step 5: Set mapreduce.job.maps/mapreduce.job.reduces
-
-`mapreduce.job.maps = 256`
-
-## Limitations
-
-**Data Lake Storage Gen1 throttling**
-
-As a multi-tenant service, Data Lake Storage Gen1 sets account level bandwidth limits. If you hit these limits, you will start to see task failures. This can be identified by observing throttling errors in task logs. If you need more bandwidth for your job, please contact us.
-
-To check if you are getting throttled, you need to enable the debug logging on the client side. Here's how you can do that:
-
-1. Put the following property in the log4j properties in Ambari > YARN > Config > Advanced yarn-log4j: log4j.logger.com.microsoft.azure.datalake.store=DEBUG
-
-2. Restart all the nodes/service for the config to take effect.
-
-3. If you're getting throttled, you'll see the HTTP 429 error code in the YARN log file. The YARN log file is in /tmp/&lt;user&gt;/yarn.log
-
-## Examples to run
-
-To demonstrate how MapReduce runs on Data Lake Storage Gen1, the following is some sample code that was run on a cluster with the following settings:
-
-* 16 node D14v2
-* Hadoop cluster running HDI 3.6
-
-For a starting point, here are some example commands to run MapReduce Teragen, Terasort, and Teravalidate. You can adjust these commands based on your resources.
-
-### Teragen
-
-```
-yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen -Dmapreduce.job.maps=2048 -Dmapreduce.map.memory.mb=3072 10000000000 adl://example/data/1TB-sort-input
-```
-
-### Terasort
-
-```
-yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar terasort -Dmapreduce.job.maps=2048 -Dmapreduce.map.memory.mb=3072 -Dmapreduce.job.reduces=512 -Dmapreduce.reduce.memory.mb=3072 adl://example/data/1TB-sort-input adl://example/data/1TB-sort-output
-```
-
-### Teravalidate
-
-```
-yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teravalidate -Dmapreduce.job.maps=512 -Dmapreduce.map.memory.mb=3072 adl://example/data/1TB-sort-output adl://example/data/1TB-sort-validate
-```
data-lake-store Data Lake Store Performance Tuning Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-powershell.md
- Title: Azure Data Lake Storage Gen1 performance tuning - PowerShell
-description: Tips on how to improve performance when using Azure PowerShell with Azure Data Lake Storage Gen1.
----- Previously updated : 01/09/2018--
-# Performance tuning guidance for using PowerShell with Azure Data Lake Storage Gen1
-
-This article describes the properties that you can tune to get better performance while using PowerShell to work with Data Lake Storage Gen1.
--
-## Performance-related properties
-
-| Property | Default | Description |
-|--|--|--|
-| PerFileThreadCount | 10 | This parameter enables you to choose the number of parallel threads for uploading or downloading each file. This number represents the max threads that can be allocated per file, but you may get fewer threads depending on your scenario (for example, if you are uploading a 1-KB file, you get one thread even if you ask for 20 threads). |
-| ConcurrentFileCount | 10 | This parameter is specifically for uploading or downloading folders. This parameter determines the number of concurrent files that can be uploaded or downloaded. This number represents the maximum number of concurrent files that can be uploaded or downloaded at one time, but you may get less concurrency depending on your scenario (for example, if you are uploading two files, you get two concurrent file uploads even if you ask for 15). |
-
-**Example:**
-
-This command downloads files from Data Lake Storage Gen1 to the user's local drive using 20 threads per file and 100 concurrent files.
-
-```PowerShell
-Export-AzDataLakeStoreItem -AccountName "Data Lake Storage Gen1 account name" `
- -PerFileThreadCount 20 `
- -ConcurrentFileCount 100 `
- -Path /Powershell/100GB `
- -Destination C:\Performance\ `
- -Force `
- -Recurse
-```
-
-## How to determine property values
-
-The next question you might have is how to determine what value to provide for the performance-related properties. Here's some guidance that you can use.
-
-* **Step 1: Determine the total thread count** - Start by calculating the total thread count to use. As a general guideline, you should use six threads for each physical core.
-
- `Total thread count = total physical cores * 6`
-
- **Example:**
-
- Assuming you are running the PowerShell commands from a D14 VM that has 16 cores
-
- `Total thread count = 16 cores * 6 = 96 threads`
-
-* **Step 2: Calculate PerFileThreadCount** - We calculate our PerFileThreadCount based on the size of the files. For files smaller than 2.5 GB, there is no need to change this parameter because the default of 10 is sufficient. For files larger than 2.5 GB, you should use 10 threads as the base for the first 2.5 GB and add 1 thread for each additional 256-MB increase in file size. If you are copying a folder with a large range of file sizes, consider grouping them into similar file sizes. Having dissimilar file sizes may cause non-optimal performance. If it's not possible to group similar file sizes, you should set PerFileThreadCount based on the largest file size.
-
- `PerFileThreadCount = 10 threads for the first 2.5 GB + 1 thread for each additional 256 MB increase in file size`
-
- **Example:**
-
- Assuming you have 100 files ranging from 1 GB to 10 GB, we use 10 GB as the largest file size for the equation, which reads as follows.
-
- `PerFileThreadCount = 10 + ((10 GB - 2.5 GB) / 256 MB) = 40 threads`
-
-* **Step 3: Calculate ConcurrentFilecount** - Use the total thread count and PerFileThreadCount to calculate ConcurrentFileCount based on the following equation:
-
- `Total thread count = PerFileThreadCount * ConcurrentFileCount`
-
- **Example:**
-
- Based on the example values we have been using
-
- `96 = 40 * ConcurrentFileCount`
-
- So, **ConcurrentFileCount** is **2.4**, which we can round off to **2**.
-
-## Further tuning
-
-You might require further tuning because there is a range of file sizes to work with. The preceding calculation works well if all or most of the files are larger and closer to the 10-GB range. If instead there are many different file sizes, with many files being smaller, then you could reduce PerFileThreadCount. By reducing the PerFileThreadCount, we can increase ConcurrentFileCount. So, if we assume that most of our files are smaller in the 5-GB range, we can redo our calculation:
-
-`PerFileThreadCount = 10 + ((5 GB - 2.5 GB) / 256 MB) = 20`
-
-So, **ConcurrentFileCount** becomes 96/20, which is 4.8, rounded off to **4**.
-
-You can continue to tune these settings by changing the **PerFileThreadCount** up and down depending on the distribution of your file sizes.
-
-### Limitation
-
-* **Number of files is less than ConcurrentFileCount**: If the number of files you are uploading is smaller than the **ConcurrentFileCount** that you calculated, then you should reduce **ConcurrentFileCount** to be equal to the number of files. You can use any remaining threads to increase **PerFileThreadCount**.
-
-* **Too many threads**: If you increase thread count too much without increasing your cluster size, you run the risk of degraded performance. There can be contention issues when context-switching on the CPU.
-
-* **Insufficient concurrency**: If the concurrency is not sufficient, then your cluster may be too small. You can increase the number of nodes in your cluster, which gives you more concurrency.
-
-* **Throttling errors**: You may see throttling errors if your concurrency is too high. If you are seeing throttling errors, you should either reduce the concurrency or contact us.
-
-## Next steps
-
-* [Use Azure Data Lake Storage Gen1 for big data requirements](data-lake-store-data-scenarios.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Performance Tuning Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-spark.md
- Title: Performance tuning - Spark with Azure Data Lake Storage Gen1
-description: Learn about performance tuning guidelines for Spark on Azure HDInsight and Azure Data Lake Storage Gen1.
---- Previously updated : 12/19/2016---
-# Performance tuning guidance for Spark on HDInsight and Azure Data Lake Storage Gen1
-
-When tuning performance on Spark, you need to consider the number of apps that will be running on your cluster. By default, you can run four apps concurrently on your HDInsight cluster (note: the default setting is subject to change). If you decide to run fewer apps, you can override the default settings and use more of the cluster for those apps.
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-* **Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). Make sure you enable Remote Desktop for the cluster.
-* **Running Spark cluster on Data Lake Storage Gen1**. For more information, see [Use HDInsight Spark cluster to analyze data in Data Lake Storage Gen1](../hdinsight/spark/apache-spark-use-with-data-lake-store.md)
-* **Performance tuning guidelines on Data Lake Storage Gen1**. For general performance concepts, see [Data Lake Storage Gen1 Performance Tuning Guidance](./data-lake-store-performance-tuning-guidance.md)
-
-## Parameters
-
-When running Spark jobs, here are the most important settings that can be tuned to increase performance on Data Lake Storage Gen1:
-
-* **Num-executors** - The number of concurrent tasks that can be executed.
-
-* **Executor-memory** - The amount of memory allocated to each executor.
-
-* **Executor-cores** - The number of cores allocated to each executor.
-
-**Num-executors**
-Num-executors will set the maximum number of tasks that can run in parallel. The actual number of tasks that can run in parallel is bounded by the memory and CPU resources available in your cluster.
-
-**Executor-memory**
-This is the amount of memory that is being allocated to each executor. The memory needed for each executor is dependent on the job. For complex operations, the memory needs to be higher. For simple operations like read and write, memory requirements will be lower. The amount of memory for each executor can be viewed in Ambari. In Ambari, navigate to Spark and view the **Configs** tab.
-
-**Executor-cores**
-This sets the number of cores used per executor, which determines the number of parallel threads that can be run per executor. For example, if executor-cores = 2, then each executor can run 2 parallel tasks in the executor. The executor-cores needed will be dependent on the job. I/O heavy jobs do not require a large amount of memory per task so each executor can handle more parallel tasks.
-
-By default, two virtual YARN cores are defined for each physical core when running Spark on HDInsight. This number provides a good balance of concurrency and amount of context switching from multiple threads.
-
-## Guidance
-
-While running Spark analytic workloads to work with data in Data Lake Storage Gen1, we recommend that you use the most recent HDInsight version to get the best performance with Data Lake Storage Gen1. If your job is more I/O intensive, certain parameters can be configured to improve performance. Data Lake Storage Gen1 is a highly scalable storage platform that can handle high throughput. If the job mainly consists of reads or writes, then increasing concurrency for I/O to and from Data Lake Storage Gen1 could increase performance.
-
-There are a few general ways to increase concurrency for I/O intensive jobs.
-
-**Step 1: Determine how many apps are running on your cluster** – You should know how many apps are running on the cluster, including the current one. The default values for each Spark setting assume that there are 4 apps running concurrently. Therefore, you will only have 25% of the cluster available for each app. To get better performance, you can override the defaults by changing the number of executors.
-
-**Step 2: Set executor-memory** – The first thing to set is the executor-memory. The memory will be dependent on the job that you are going to run. You can increase concurrency by allocating less memory per executor. If you see out-of-memory exceptions when you run your job, then you should increase the value for this parameter. One alternative is to get more memory by using a cluster that has higher amounts of memory or increasing the size of your cluster. More memory will enable more executors to be used, which means more concurrency.
-
-**Step 3: Set executor-cores** – For I/O intensive workloads that do not have complex operations, it's good to start with a high number of executor-cores to increase the number of parallel tasks per executor. Setting executor-cores to 4 is a good start.
-
-```console
-executor-cores = 4
-```
-
-Increasing the number of executor-cores will give you more parallelism so you can experiment with different executor-cores. For jobs that have more complex operations, you should reduce the number of cores per executor. If executor-cores is set higher than 4, then garbage collection may become inefficient and degrade performance.
-
-**Step 4: Determine amount of YARN memory in cluster** – This information is available in Ambari. Navigate to YARN and view the Configs tab. The YARN memory is displayed in this window.
-Note that while you are in the window, you can also see the default YARN container size. The YARN container size is the same as the memory-per-executor parameter.
-
-Total YARN memory = nodes * YARN memory per node
-
-**Step 5: Calculate num-executors**
-
-**Calculate memory constraint** - The num-executors parameter is constrained either by memory or by CPU. The memory constraint is determined by the amount of available YARN memory for your application. Take the total YARN memory and divide that by executor-memory. The constraint needs to be de-scaled for the number of apps so we divide by the number of apps.
-
-Memory constraint = (total YARN memory / executor memory) / # of apps
-
-**Calculate CPU constraint** - The CPU constraint is calculated as the total virtual cores divided by the number of cores per executor. There are 2 virtual cores for each physical core. Similar to the memory constraint, we divide by the number of apps.
-
-virtual cores = (nodes in cluster * # of physical cores in node * 2)
-CPU constraint = (total virtual cores / # of cores per executor) / # of apps
-
-**Set num-executors** – The num-executors parameter is determined by taking the minimum of the memory constraint and the CPU constraint.
-
-num-executors = Min (total virtual Cores / # of cores per executor, available YARN memory / executor-memory)
-Setting a higher number of num-executors does not necessarily increase performance. You should consider that adding more executors will add extra overhead for each additional executor, which can potentially degrade performance. Num-executors is bounded by the cluster resources.
-
-## Example Calculation
-
-Let's say you currently have a cluster composed of 8 D4v2 nodes that is running two apps, including the one you are going to run.
-
-**Step 1: Determine how many apps are running on your cluster** – You know that you have two apps on your cluster, including the one you are going to run.
-
-**Step 2: Set executor-memory** – For this example, we determine that 6 GB of executor-memory will be sufficient for an I/O intensive job.
-
-```console
-executor-memory = 6GB
-```
-
-**Step 3: Set executor-cores** – Since this is an I/O intensive job, we can set the number of cores for each executor to four. Setting cores per executor to larger than four may cause garbage collection problems.
-
-```console
-executor-cores = 4
-```
-
-**Step 4: Determine amount of YARN memory in cluster** – We navigate to Ambari to find out that each D4v2 has 25 GB of YARN memory. Since there are 8 nodes, the available YARN memory is multiplied by 8.
-
-Total YARN memory = nodes * YARN memory per node
-Total YARN memory = 8 nodes * 25 GB = 200 GB
-
-**Step 5: Calculate num-executors** – The num-executors parameter is determined by taking the minimum of the memory constraint and the CPU constraint divided by the # of apps running on Spark.
-
-**Calculate memory constraint** – The memory constraint is calculated as the total YARN memory divided by the memory per executor.
-
-Memory constraint = (total YARN memory / executor memory) / # of apps
-Memory constraint = (200 GB / 6 GB) / 2
-Memory constraint = 16 (rounded)
-**Calculate CPU constraint** - The CPU constraint is calculated as the total YARN cores divided by the number of cores per executor.
-
-YARN cores = nodes in cluster * # of cores per node * 2
-YARN cores = 8 nodes * 8 cores per D4v2 * 2 = 128
-CPU constraint = (total YARN cores / # of cores per executor) / # of apps
-CPU constraint = (128 / 4) / 2
-CPU constraint = 16
-
-**Set num-executors**
-
-num-executors = Min (memory constraint, CPU constraint)
-num-executors = Min (16, 16)
-num-executors = 16
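-
-As an illustrative sketch, the values computed above could then be passed to spark-submit; the application class, jar name, and input path are placeholders:
-
-```console
-spark-submit \
-  --num-executors 16 \
-  --executor-memory 6G \
-  --executor-cores 4 \
-  --class com.example.SparkJob sparkjob.jar adl://example/data/input
-```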
data-lake-store Data Lake Store Performance Tuning Storm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-storm.md
- Title: Performance tuning - Storm with Azure Data Lake Storage Gen1
-description: Understand the factors that should be considered when you tune the performance of an Azure Storm topology, including troubleshooting common issues.
---- Previously updated : 12/19/2016---
-# Performance tuning guidance for Storm on HDInsight and Azure Data Lake Storage Gen1
-
-Understand the factors that should be considered when you tune the performance of an Azure Storm topology. For example, it's important to understand the characteristics of the work done by the spouts and the bolts (whether the work is I/O or memory intensive). This article covers a range of performance tuning guidelines, including troubleshooting common issues.
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **An Azure Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md).
-* **An Azure HDInsight cluster** with access to a Data Lake Storage Gen1 account. See [Create an HDInsight cluster with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). Make sure you enable Remote Desktop for the cluster.
-* **Performance tuning guidelines on Data Lake Storage Gen1**. For general performance concepts, see [Data Lake Storage Gen1 Performance Tuning Guidance](./data-lake-store-performance-tuning-guidance.md).
-
-## Tune the parallelism of the topology
-
-You might be able to improve performance by increasing the concurrency of the I/O to and from Data Lake Storage Gen1. A Storm topology has a set of configurations that determine the parallelism:
-* Number of worker processes (the workers are evenly distributed across the VMs).
-* Number of spout executor instances.
-* Number of bolt executor instances.
-* Number of spout tasks.
-* Number of bolt tasks.
-
-For example, on a cluster with 4 VMs and 4 worker processes, 32 spout executors and 32 spout tasks, and 256 bolt executors and 512 bolt tasks, consider the following:
-
-Each supervisor, which is a worker node, has a single worker Java virtual machine (JVM) process. This JVM process manages 8 spout threads and 64 bolt threads. Within each thread, tasks are run sequentially. With the preceding configuration, each spout thread has one task, and each bolt thread has two tasks.
-
-In Storm, here are the various components involved, and how they affect the level of parallelism you have:
-* The head node (called Nimbus in Storm) is used to submit and manage jobs. These nodes have no impact on the degree of parallelism.
-* The supervisor nodes. In HDInsight, this corresponds to a worker node Azure VM.
-* The worker tasks are Storm processes running in the VMs. Each worker task corresponds to a JVM instance. Storm distributes the number of worker processes you specify to the worker nodes as evenly as possible.
-* Spout and bolt executor instances. Each executor instance corresponds to a thread running within the workers (JVMs).
-* Storm tasks. These are logical tasks that each of these threads run. This does not change the level of parallelism, so you should evaluate if you need multiple tasks per executor or not.
-
-### Get the best performance from Data Lake Storage Gen1
-
-When working with Data Lake Storage Gen1, you get the best performance if you do the following:
-* Coalesce your small appends into larger sizes (ideally 4 MB).
-* Do as many concurrent requests as you can. Because each bolt thread is doing blocking reads, you want to have somewhere in the range of 8-12 threads per core. This keeps the NIC and the CPU well utilized. A larger VM enables more concurrent requests.
-
-### Example topology
-
-Let's assume you have an eight worker node cluster of D13v2 Azure VMs. This VM size has eight cores, so among the eight worker nodes, you have 64 total cores.
-
-Let's say we do eight bolt threads per core. Given 64 cores, that means we want 512 total bolt executor instances (that is, threads). In this case, let's say we start with one JVM per VM, and mainly use the thread concurrency within the JVM to achieve concurrency. That means we need eight worker tasks (one per Azure VM), and 512 bolt executors. Given this configuration, Storm tries to distribute the workers evenly across worker nodes (also known as supervisor nodes), giving each worker node one JVM. Now within the supervisors, Storm tries to distribute the executors evenly between supervisors, giving each supervisor (that is, JVM) 64 threads each.
-
-## Tune additional parameters
-After you have the basic topology, you can consider whether you want to tweak any of the parameters:
-* **Number of JVMs per worker node.** If you have a large data structure (for example, a lookup table) that you host in memory, each JVM requires a separate copy. Alternatively, you can use the data structure across many threads if you have fewer JVMs. For the bolt's I/O, the number of JVMs does not make as much of a difference as the number of threads added across those JVMs. For simplicity, it's a good idea to have one JVM per worker. Depending on what your bolt is doing or what application processing you require, though, you may need to change this number.
-* **Number of spout executors.** Because the preceding example uses bolts for writing to Data Lake Storage Gen1, the number of spouts is not directly relevant to the bolt performance. However, depending on the amount of processing or I/O happening in the spout, it's a good idea to tune the spouts for best performance. Ensure that you have enough spouts to be able to keep the bolts busy. The output rates of the spouts should match the throughput of the bolts. The actual configuration depends on the spout.
-* **Number of tasks.** Each bolt runs as a single thread. Additional tasks per bolt don't provide any additional concurrency. The only time they are of benefit is if your process of acknowledging the tuple takes a large proportion of your bolt execution time. It's a good idea to group many tuples into a larger append before you send an acknowledgment from the bolt. So, in most cases, multiple tasks provide no additional benefit.
-* **Local or shuffle grouping.** When this setting is enabled, tuples are sent to bolts within the same worker process. This reduces inter-process communication and network calls. This is recommended for most topologies.
-
-This basic scenario is a good starting point. Test with your own data to tweak the preceding parameters to achieve optimal performance.
-
-## Tune the spout
-
-You can modify the following settings to tune the spout.
-
-- **Tuple timeout: topology.message.timeout.secs**. This setting determines the amount of time a message takes to complete, and receive acknowledgment, before it is considered failed.
-
-- **Max memory per worker process: worker.childopts**. This setting lets you specify additional command-line parameters to the Java workers. The most commonly used setting here is Xmx, which determines the maximum memory allocated to a JVM's heap.
-
-- **Max spout pending: topology.max.spout.pending**. This setting determines the number of tuples that can be in flight (not yet acknowledged at all nodes in the topology) per spout thread at any time.
-
- A good calculation to do is to estimate the size of each of your tuples. Then figure out how much memory one spout thread has. The total memory allocated to a thread, divided by this value, should give you the upper bound for the max spout pending parameter.
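-
-As a minimal sketch of what these settings look like in the YAML key/value form used by storm.yaml or a topology's configuration, with purely illustrative values that you should validate against your tuple size and memory calculation:
-
-```console
-topology.message.timeout.secs: 60
-topology.max.spout.pending: 1024
-worker.childopts: "-Xmx2048m"
-```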
-
-## Tune the bolt
-When you're writing to Data Lake Storage Gen1, set a size sync policy (buffer on the client side) to 4 MB. A flushing or hsync() is then performed only when the buffer size is at this value. The Data Lake Storage Gen1 driver on the worker VM automatically does this buffering, unless you explicitly perform an hsync().
-
-The default Data Lake Storage Gen1 Storm bolt has a size sync policy parameter (fileBufferSize) that can be used to tune this parameter.
-
-In I/O-intensive topologies, it's a good idea to have each bolt thread write to its own file, and to set a file rotation policy (fileRotationSize). When the file reaches a certain size, the stream is automatically flushed and a new file is written to. The recommended file size for rotation is 1 GB.
-
-### Handle tuple data
-
-In Storm, a spout holds on to a tuple until it is explicitly acknowledged by the bolt. If a tuple has been read by the bolt but has not been acknowledged yet, it might not have been persisted to the Data Lake Storage Gen1 back end. After a tuple is acknowledged, the bolt guarantees persistence to the spout, which can then delete the source data from whatever source it is reading from.
-
-For best performance on Data Lake Storage Gen1, have the bolt buffer 4 MB of tuple data. Then write to the Data Lake Storage Gen1 back end as one 4 MB write. After the data has been successfully written to the store (by calling hflush()), the bolt can acknowledge the data back to the spout. This is what the example bolt supplied here does. It is also acceptable to hold a larger number of tuples before the hflush() call is made and the tuples acknowledged. However, this increases the number of tuples in flight that the spout needs to hold, and therefore increases the amount of memory required per JVM.
-
-> [!NOTE]
-> Applications might have a requirement to acknowledge tuples more frequently (at data sizes less than 4 MB) for other non-performance reasons. However, that might affect the I/O throughput to the storage back end. Carefully weigh this tradeoff against the bolt's I/O performance.
-
-If the incoming rate of tuples is not high, so the 4-MB buffer takes a long time to fill, consider mitigating this by:
-* Reducing the number of bolts, so there are fewer buffers to fill.
-* Having a time-based or count-based policy, where an hflush() is triggered every x flushes or every y milliseconds, and the tuples accumulated so far are acknowledged back.
-
-The throughput in this case is lower, but with a slow rate of events, maximum throughput is not the biggest objective anyway. These mitigations help you reduce the total time that it takes for a tuple to flow through to the store. This might matter if you want a real-time pipeline even with a low event rate. Also note that if your incoming tuple rate is low, you should adjust the topology.message.timeout.secs parameter, so the tuples don't time out while they are getting buffered or processed.
-
-## Monitor your topology in Storm
-While your topology is running, you can monitor it in the Storm user interface. Here are the main parameters to look at:
-
-* **Total process execution latency.** This is the average time one tuple takes to be emitted by the spout, processed by the bolt, and acknowledged.
-
-* **Total bolt process latency.** This is the average time spent by the tuple at the bolt until it receives an acknowledgment.
-
-* **Total bolt execute latency.** This is the average time spent by the bolt in the execute method.
-
-* **Number of failures.** This refers to the number of tuples that failed to be fully processed before they timed out.
-
-* **Capacity.** This is a measure of how busy your system is. If this number is 1, your bolts are working as fast as they can. If it is less than 1, increase the parallelism. If it is greater than 1, reduce the parallelism.
-
-## Troubleshoot common problems
-Here are a few common troubleshooting scenarios.
-* **Many tuples are timing out.** Look at each node in the topology to determine where the bottleneck is. The most common reason for this is that the bolts are not able to keep up with the spouts. This leads to tuples clogging the internal buffers while waiting to be processed. Consider increasing the timeout value or decreasing the max spout pending.
-
-* **There is a high total process execution latency, but a low bolt process latency.** In this case, it is possible that the tuples are not being acknowledged fast enough. Check that there are a sufficient number of acknowledgers. Another possibility is that they are waiting in the queue for too long before the bolts start processing them. Decrease the max spout pending.
-
-* **There is a high bolt execute latency.** This means that the execute() method of your bolt is taking too long. Optimize the code, or look at write sizes and flush behavior.
-
-### Data Lake Storage Gen1 throttling
-If you hit the limits of bandwidth provided by Data Lake Storage Gen1, you might see task failures. Check task logs for throttling errors. You can decrease the parallelism by increasing container size.
-
-To check if you are getting throttled, enable the debug logging on the client side:
-
-1. In **Ambari** > **Storm** > **Config** > **Advanced storm-worker-log4j**, change **&lt;root level="info"&gt;** to **&lt;root level="debug"&gt;**. Restart all the nodes/services for the configuration to take effect.
-2. Monitor the Storm topology logs on worker nodes (under /var/log/storm/worker-artifacts/&lt;TopologyName&gt;/&lt;port&gt;/worker.log) for Data Lake Storage Gen1 throttling exceptions.
-
-## Next steps
-Additional performance tuning for Storm can be referenced in [this blog](/archive/blogs/shanyu/performance-tuning-for-hdinsight-storm-and-microsoft-azure-eventhubs).
-
-For an additional example to run, see [this one on GitHub](https://github.com/hdinsight/storm-performance-automation).
data-lake-store Data Lake Store Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-power-bi.md
- Title: Analyze data in Azure Data Lake Storage Gen1 - Power BI
-description: Learn how to use Power BI Desktop to analyze and visualize data stored in Azure Data Lake Storage Gen1.
---- Previously updated : 05/29/2018---
-# Analyze data in Azure Data Lake Storage Gen1 by using Power BI
-In this article, you learn how to use Power BI Desktop to analyze and visualize data stored in Azure Data Lake Storage Gen1.
-
-## Prerequisites
-Before you begin this tutorial, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **A Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). This article assumes that you have already created a Data Lake Storage Gen1 account, called **myadlsg1**, and uploaded a sample data file (**Drivers.txt**) to it. This sample file is available for download from [Azure Data Lake Git Repository](https://github.com/Azure/usql/tree/master/Examples/Samples/Data/AmbulanceData/Drivers.txt).
-* **Power BI Desktop**. You can download this from [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=45331).
-
-## Create a report in Power BI Desktop
-1. Launch Power BI Desktop on your computer.
-2. From the **Home** ribbon, click **Get Data**, and then click **More**. In the **Get Data** dialog box, click **Azure**, click **Azure Data Lake Store**, and then click **Connect**.
-
- ![Screenshot of the Get Data dialog box with the Azure Data Lake Store option highlighted and the Connect option called out.](./media/data-lake-store-power-bi/get-data-lake-store-account.png "Connect to Data Lake Storage Gen1")
-3. If you see a dialog box about the connector being in a development phase, opt to continue.
-4. In the **Azure Data Lake Store** dialog box, provide the URL to your Data Lake Storage Gen1 account, and then click **OK**.
-
- ![URL for Data Lake Storage Gen1](./media/data-lake-store-power-bi/get-data-lake-store-account-url.png "URL for Data Lake Storage Gen1")
-5. In the next dialog box, click **Sign in** to sign into the Data Lake Storage Gen1 account. You will be redirected to your organization's sign-in page. Follow the prompts to sign into the account.
-
- ![Sign into Data Lake Storage Gen1](./media/data-lake-store-power-bi/get-data-lake-store-account-signin.png "Sign into Data Lake Storage Gen1")
-6. After you have successfully signed in, click **Connect**.
-
- ![Screenshot of the Azure Data Lake Store dialog box with the Connect option called out.](./media/data-lake-store-power-bi/get-data-lake-store-account-connect.png "Connect to Data Lake Storage Gen1")
-7. The next dialog box shows the file that you uploaded to your Data Lake Storage Gen1 account. Verify the info and then click **Load**.
-
- ![Load data from Data Lake Storage Gen1](./media/data-lake-store-power-bi/get-data-lake-store-account-load.png "Load data from Data Lake Storage Gen1")
-8. After the data has been successfully loaded into Power BI, you will see the following fields in the **Fields** tab.
-
- ![Imported fields](./media/data-lake-store-power-bi/imported-fields.png "Imported fields")
-
- However, to visualize and analyze the data, we prefer the data to be available in the following fields
-
- ![Desired fields](./media/data-lake-store-power-bi/desired-fields.png "Desired fields")
-
- In the next steps, we will update the query to convert the imported data into the desired format.
-9. From the **Home** ribbon, click **Edit Queries**.
-
- ![Screenshot of the Home ribbon with the Edit Queries option called out.](./media/data-lake-store-power-bi/edit-queries.png "Edit queries")
-10. In the Query Editor, under the **Content** column, click **Binary**.
-
- ![Screenshot of the Query Editor with the Content column called out.](./media/data-lake-store-power-bi/convert-query1.png "Edit queries")
-11. You will see a file icon that represents the **Drivers.txt** file that you uploaded. Right-click the file, and click **CSV**.
-
- ![Screenshot of the Query Editor with the CSV option called out.](./media/data-lake-store-power-bi/convert-query2.png "Edit queries")
-12. You should see an output as shown below. Your data is now available in a format that you can use to create visualizations.
-
- ![Screenshot of the Query Editor with the output displayed as expected.](./media/data-lake-store-power-bi/convert-query3.png "Edit queries")
-13. From the **Home** ribbon, click **Close and Apply**, and then click **Close and Apply**.
-
- ![Screenshot of the Home ribbon with the close and Apply option called out.](./media/data-lake-store-power-bi/load-edited-query.png "Edit queries")
-14. Once the query is updated, the **Fields** tab will show the new fields available for visualization.
-
- ![Updated fields](./media/data-lake-store-power-bi/updated-query-fields.png "Updated fields")
-15. Let us create a pie chart to represent the drivers in each city for a given country/region. To do so, make the following selections.
-
- 1. From the Visualizations tab, click the symbol for a pie chart.
-
- ![Create pie chart](./media/data-lake-store-power-bi/create-pie-chart.png "Create pie chart")
- 2. The columns that we are going to use are **Column 4** (name of the city) and **Column 7** (name of the country/region). Drag these columns from the **Fields** tab to the **Visualizations** tab as shown below.
-
- ![Create visualizations](./media/data-lake-store-power-bi/create-visualizations.png "Create visualizations")
- 3. The pie chart should now resemble the one shown below.
-
- ![Pie chart](./media/data-lake-store-power-bi/pie-chart.png "Create visualizations")
-16. By selecting a specific country/region from the page level filters, you can now see the number of drivers in each city of the selected country/region. For example, under the **Visualizations** tab, under **Page level filters**, select **Brazil**.
-
- ![Select a country/region](./media/data-lake-store-power-bi/select-country.png "Select a country/region")
-17. The pie chart is automatically updated to display the drivers in the cities of Brazil.
-
- ![Drivers in a country/region](./media/data-lake-store-power-bi/driver-per-country.png "Drivers per country/region")
-18. From the **File** menu, click **Save** to save the visualization as a Power BI Desktop file.
-
-## Publish report to Power BI service
-Once you have created the visualizations in Power BI Desktop, you can share them with others by publishing the report to the Power BI service. For instructions on how to do that, see [Publish from Power BI Desktop](https://powerbi.microsoft.com/documentation/powerbi-desktop-upload-desktop-files/).
-
-## See also
-* [Analyze data in Data Lake Storage Gen1 using Data Lake Analytics](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-
data-lake-store Data Lake Store Secure Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-secure-data.md
- Title: Securing data stored in Azure Data Lake Storage Gen1 | Microsoft Docs
-description: Learn how to secure data in Azure Data Lake Storage Gen1 using groups and access control lists
----- Previously updated : 03/26/2018--
-# Securing data stored in Azure Data Lake Storage Gen1
-Securing data in Azure Data Lake Storage Gen1 is a three-step approach. Both Azure role-based access control (Azure RBAC) and access control lists (ACLs) must be set to fully enable access to data for users and security groups.
-
-1. Start by creating security groups in Microsoft Entra ID. These security groups are used to implement Azure role-based access control (Azure RBAC) in the Azure portal. For more information, see [Azure RBAC](../role-based-access-control/role-assignments-portal.md).
-2. Assign the Microsoft Entra security groups to the Data Lake Storage Gen1 account. This controls access to the Data Lake Storage Gen1 account from the portal and management operations from the portal or APIs.
-3. Assign the Microsoft Entra security groups as access control lists (ACLs) on the Data Lake Storage Gen1 file system.
-4. Optionally, you can set an IP address range for clients that can access the data in Data Lake Storage Gen1.
-
-This article provides instructions on how to use the Azure portal to perform the above tasks. For in-depth information on how Data Lake Storage Gen1 implements security at the account and data level, see [Security in Azure Data Lake Storage Gen1](data-lake-store-security-overview.md). For deep-dive information on how ACLs are implemented in Data Lake Storage Gen1, see [Overview of Access Control in Data Lake Storage Gen1](data-lake-store-access-control.md).
-
-## Prerequisites
-Before you begin this tutorial, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **A Data Lake Storage Gen1 account**. For instructions on how to create one, see [Get started with Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-
-<a name='create-security-groups-in-azure-active-directory'></a>
-
-## Create security groups in Microsoft Entra ID
-For instructions on how to create Microsoft Entra security groups and how to add users to the group, see [Managing security groups in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-
-> [!NOTE]
-> You can add both users and other groups to a group in Microsoft Entra ID using the Azure portal. However, in order to add a service principal to a group, use [Microsoft Entra IDΓÇÖs PowerShell module](../active-directory/enterprise-users/groups-settings-v2-cmdlets.md).
->
-> ```powershell
-> # Get the desired group and service principal and identify the correct object IDs
-> Get-AzureADGroup -SearchString "<group name>"
-> Get-AzureADServicePrincipal -SearchString "<SPI name>"
->
-> # Add the service principal to the group
-> Add-AzureADGroupMember -ObjectId <Group object ID> -RefObjectId <SPI object ID>
-> ```
-
-## Assign users or security groups to Data Lake Storage Gen1 accounts
-When you assign users or security groups to Data Lake Storage Gen1 accounts, you control access to the management operations on the account using the Azure portal and Azure Resource Manager APIs.
-
-1. Open a Data Lake Storage Gen1 account. From the left pane, click **All resources**, and then from the All resources blade, click the account name to which you want to assign a user or security group.
-
-2. In your Data Lake Storage Gen1 account blade, click **Access Control (IAM)**. The blade by default lists the subscription owners as the owner.
-
- ![Assign security group to Azure Data Lake Storage Gen1 account](./media/data-lake-store-secure-data/adl.select.user.icon1.png "Assign security group to Azure Data Lake Storage Gen1 account")
-
-3. In the **Access Control (IAM)** blade, click **Add** to open the **Add permissions** blade. In the **Add permissions** blade, select a **Role** for the user/group. Look for the security group you created earlier in Microsoft Entra ID and select it. If you have a lot of users and groups to search from, use the **Select** text box to filter on the group name.
-
- ![Add a role for the user](./media/data-lake-store-secure-data/adl.add.user.1.png "Add a role for the user")
-
- The **Owner** and **Contributor** roles provide access to a variety of administration functions on the data lake account. For users who will interact with data in the data lake but still need to view account management information, you can add them to the **Reader** role. The scope of these roles is limited to the management operations related to the Data Lake Storage Gen1 account.
-
- For data operations, individual file system permissions define what the users can do. Therefore, a user with the Reader role can only view administrative settings associated with the account, but can potentially read and write data based on the file system permissions assigned to them. Data Lake Storage Gen1 file system permissions are described at [Assign security group as ACLs to the Azure Data Lake Storage Gen1 file system](#filepermissions).
-
- > [!IMPORTANT]
- > Only the **Owner** role automatically enables file system access. The **Contributor**, **Reader**, and all other roles require ACLs to enable any level of access to folders and files. The **Owner** role provides super-user file and folder permissions that cannot be overridden via ACLs. For more information on how Azure RBAC policies map to data access, see [Azure RBAC for account management](data-lake-store-security-overview.md#azure-rbac-for-account-management).
-
-4. If you want to add a group/user that is not listed in the **Add permissions** blade, you can invite them by typing their email address in the **Select** text box and then selecting them from the list.
-
- ![Add a security group](./media/data-lake-store-secure-data/adl.add.user.2.png "Add a security group")
-
-5. Click **Save**. You should see the security group added as shown below.
-
- ![Security group added](./media/data-lake-store-secure-data/adl.add.user.3.png "Security group added")
-
-6. Your user/security group now has access to the Data Lake Storage Gen1 account. If you want to provide access to specific users, you can add them to the security group. Similarly, if you want to revoke access for a user, you can remove them from the security group. You can also assign multiple security groups to an account.
-
-## <a name="filepermissions"></a>Assign users or security groups as ACLs to the Data Lake Storage Gen1 file system
-By assigning users or security groups as ACLs on the Data Lake Storage Gen1 file system, you set access control on the data stored in Data Lake Storage Gen1. If you prefer to script these assignments, see the Python sketch after the following steps.
-
-1. In your Data Lake Storage Gen1 account blade, click **Data Explorer**.
-
- ![View data via Data Explorer](./media/data-lake-store-secure-data/adl.start.data.explorer.png "View data via Data Explorer")
-2. In the **Data Explorer** blade, click the folder for which you want to configure the ACL, and then click **Access**. To assign ACLs to a file, you must first click the file to preview it and then click **Access** from the **File Preview** blade.
-
- ![Set ACLs on Data Lake Storage Gen1 file system](./media/data-lake-store-secure-data/adl.acl.1.png "Set ACLs on Data Lake Storage Gen1 file system")
-3. The **Access** blade lists the owners and the permissions already assigned to the root. Click the **Add** icon to add more access ACLs.
- > [!IMPORTANT]
- > Setting access permissions for a single file does not necessarily grant a user/group access to that file. The path to the file must be accessible to the assigned user/group. For more information and examples, see [Common scenarios related to permissions](data-lake-store-access-control.md#common-scenarios-related-to-permissions).
-
- ![List standard and custom access](./media/data-lake-store-secure-data/adl.acl.2.png "List standard and custom access")
-
- * The **Owners** and **Everyone else** settings provide UNIX-style access, where you specify read, write, and execute (rwx) permissions for three distinct user classes: owner, group, and others.
- * **Assigned permissions** corresponds to the POSIX ACLs that enable you to set permissions for specific named users or groups beyond the file's owner or group.
-
- For more information, see [HDFS ACLs](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_Access_Control_Lists). For more information on how ACLs are implemented in Data Lake Storage Gen1, see [Access Control in Data Lake Storage Gen1](data-lake-store-access-control.md).
-4. Click the **Add** icon to open the **Assign permissions** blade. In this blade, click **Select user or group**, and then in **Select user or group** blade, look for the security group you created earlier in Microsoft Entra ID. If you have a lot of groups to search from, use the text box at the top to filter on the group name. Click the group you want to add and then click **Select**.
-
- ![Add a group](./media/data-lake-store-secure-data/adl.acl.3.png "Add a group")
-5. Click **Select permissions**, select the permissions, choose whether the permissions should be applied recursively, and choose whether you want to assign the permissions as an access ACL, a default ACL, or both. Click **OK**.
-
- ![Screenshot of the Assign permissions blade with the Select permissions option called out and the Select permissions blade with the Ok option called out.](./media/data-lake-store-secure-data/adl.acl.4.png "Assign permissions to group")
-
- For more information about permissions in Data Lake Storage Gen1, and Default/Access ACLs, see [Access Control in Data Lake Storage Gen1](data-lake-store-access-control.md).
-6. After you click **OK** in the **Select permissions** blade, the newly added group and its associated permissions are listed in the **Access** blade.
-
- ![Screenshot of the Access blade with the Data Engineering option called out.](./media/data-lake-store-secure-data/adl.acl.5.png "Assign permissions to group")
-
- > [!IMPORTANT]
- > In the current release, you can have up to 28 entries under **Assigned permissions**. If you want to add more than 28 users, create security groups, add users to the security groups, and provide access to those security groups for the Data Lake Storage Gen1 account.
- >
- >
-7. If required, you can also modify the access permissions after you have added the group. Clear or select the check box for each permission type (Read, Write, Execute) based on whether you want to remove or assign that permission to the security group. Click **Save** to save the changes, or **Discard** to undo the changes.
-
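-If you prefer to script these ACL assignments instead of using the portal, the `azure-datalake-store` Python package exposes the same filesystem ACL operations. The following is a minimal sketch, assuming a service principal that already has access to the account; the placeholder values, the folder path, and the `modify_acl_entries` call shown here are assumptions to verify against the version of the package you install.
-
-```python
-from azure.datalake.store import core, lib
-
-# Hypothetical placeholder values; replace with your tenant, app, group, and account details
-tenant_id = '<TENANT-ID>'
-client_id = '<CLIENT-ID>'
-client_secret = '<CLIENT-SECRET>'
-store_name = '<ADLS-GEN1-ACCOUNT-NAME>'
-group_object_id = '<ENTRA-GROUP-OBJECT-ID>'
-
-# Authenticate non-interactively with the service principal
-adl_creds = lib.auth(tenant_id=tenant_id,
-                     client_id=client_id,
-                     client_secret=client_secret,
-                     resource='https://datalake.azure.net/')
-
-# Create a filesystem client for the Data Lake Storage Gen1 account
-adl = core.AzureDLFileSystem(adl_creds, store_name=store_name)
-
-# Grant the security group read and execute access on a folder (access ACL),
-# and add a matching default ACL so that new children inherit the entry
-adl.modify_acl_entries('/mydata', acl_spec='group:' + group_object_id + ':r-x')
-adl.modify_acl_entries('/mydata', acl_spec='default:group:' + group_object_id + ':r-x')
-```
-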
-## Set IP address range for data access
-Data Lake Storage Gen1 enables you to further lock down access to your data store at the network level. You can enable the firewall, specify an IP address, or define an IP address range for your trusted clients. Once enabled, only clients that have IP addresses within the defined range can connect to the store.
-
-![Firewall settings and IP access](./media/data-lake-store-secure-data/firewall-ip-access.png "Firewall settings and IP address")
-
-## Remove security groups for a Data Lake Storage Gen1 account
-When you remove security groups from Data Lake Storage Gen1 accounts, you are only changing access to the management operations on the account using the Azure portal and Azure Resource Manager APIs.
-
-Access to data is unchanged and is still managed by the access ACLs. The exception is users/groups in the Owner role. Users/groups removed from the Owner role are no longer super users, and their access falls back to the access ACL settings.
-
-1. In your Data Lake Storage Gen1 account blade, click **Access Control (IAM)**.
-
- ![Assign security group to Data Lake Storage Gen1 account](./media/data-lake-store-secure-data/adl.select.user.icon.png "Assign security group to Data Lake Storage Gen1 account")
-2. In the **Access Control (IAM)** blade, click the security group(s) you want to remove. Click **Remove**.
-
- ![Security group removed](./media/data-lake-store-secure-data/adl.remove.group.png "Security group removed")
-
-## Remove security group ACLs from a Data Lake Storage Gen1 file system
-When you remove security group ACLs from a Data Lake Storage Gen1 file system, you change access to the data in the Data Lake Storage Gen1 account. A Python sketch for removing ACL entries programmatically follows these steps.
-
-1. In your Data Lake Storage Gen1 account blade, click **Data Explorer**.
-
- ![Create directories in Data Lake Storage Gen1 account](./media/data-lake-store-secure-data/adl.start.data.explorer.png "Create directories in Data Lake Storage Gen1 account")
-2. In the **Data Explorer** blade, click the folder for which you want to remove the ACL, and then click **Access**. To remove ACLs for a file, you must first click the file to preview it and then click **Access** from the **File Preview** blade.
-
- ![Set ACLs on Data Lake Storage Gen1 file system](./media/data-lake-store-secure-data/adl.acl.1.png "Set ACLs on Data Lake Storage Gen1 file system")
-3. In the **Access** blade, click the security group you want to remove. In the **Access details** blade, click **Remove**.
-
- ![Screenshot of the Access blade with the Data Engineering option called out and the Access details blade with the Remove option called out.](./media/data-lake-store-secure-data/adl.remove.acl.png "Assign permissions to group")
-
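-Removing ACL entries can be scripted in the same way. The sketch below is a hypothetical example that uses the same `azure-datalake-store` filesystem client as the assignment sketch earlier in this article; the `remove_acl_entries` call and the placeholder values are assumptions to verify against your installed package version.
-
-```python
-from azure.datalake.store import core, lib
-
-# Hypothetical placeholders; reuse the credential pattern shown earlier in this article
-adl_creds = lib.auth(tenant_id='<TENANT-ID>',
-                     client_id='<CLIENT-ID>',
-                     client_secret='<CLIENT-SECRET>',
-                     resource='https://datalake.azure.net/')
-adl = core.AzureDLFileSystem(adl_creds, store_name='<ADLS-GEN1-ACCOUNT-NAME>')
-
-# Remove the access ACL and default ACL entries for the security group.
-# The ACL spec for removal names the entry but omits the permission bits.
-group_object_id = '<ENTRA-GROUP-OBJECT-ID>'
-adl.remove_acl_entries('/mydata', acl_spec='group:' + group_object_id)
-adl.remove_acl_entries('/mydata', acl_spec='default:group:' + group_object_id)
-```
-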
-## See also
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-* [Copy data from Azure Storage Blobs to Data Lake Storage Gen1](data-lake-store-copy-data-azure-storage-blob.md)
-* [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
-* [Get Started with Data Lake Storage Gen1 using PowerShell](data-lake-store-get-started-powershell.md)
-* [Get Started with Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md)
-* [Access diagnostic logs for Data Lake Storage Gen1](data-lake-store-diagnostic-logs.md)
data-lake-store Data Lake Store Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-security-overview.md
- Title: Overview of security in Azure Data Lake Storage Gen1 | Microsoft Docs
-description: Learn about security capabilities of Azure Data Lake Storage Gen1, including authentication, authorization, network isolation, data protection, and auditing.
---- Previously updated : 03/11/2020---
-# Security in Azure Data Lake Storage Gen1
-
-Many enterprises are taking advantage of big data analytics for business insights to help them make smart decisions. An organization might have a complex and regulated environment, with an increasing number of diverse users. It is vital for an enterprise to make sure that critical business data is stored securely, with the correct level of access granted to individual users. Azure Data Lake Storage Gen1 is designed to help meet these security requirements. In this article, learn about the security capabilities of Data Lake Storage Gen1, including:
-
-* Authentication
-* Authorization
-* Network isolation
-* Data protection
-* Auditing
-
-## Authentication and identity management
-
-Authentication is the process by which a user's identity is verified when the user interacts with Data Lake Storage Gen1 or with any service that connects to Data Lake Storage Gen1. For identity management and authentication, Data Lake Storage Gen1 uses [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md), a comprehensive identity and access management cloud solution that simplifies the management of users and groups.
-
-Each Azure subscription can be associated with an instance of Microsoft Entra ID. Only users and service identities that are defined in your Microsoft Entra service can access your Data Lake Storage Gen1 account by using the Azure portal, command-line tools, or client applications that your organization builds with the Data Lake Storage Gen1 SDK. Key advantages of using Microsoft Entra ID as a centralized access control mechanism are:
-
-* Simplified identity lifecycle management. The identity of a user or a service (a service principal identity) can be quickly created and quickly revoked by simply deleting or disabling the account in the directory.
-* Multi-factor authentication. [Multi-factor authentication](../active-directory/authentication/concept-mfa-howitworks.md) provides an additional layer of security for user sign-ins and transactions.
-* Authentication from any client through a standard open protocol, such as OAuth or OpenID.
-* Federation with enterprise directory services and cloud identity providers.
-
-## Authorization and access control
-
-After Microsoft Entra authenticates a user so that the user can access Data Lake Storage Gen1, authorization controls access permissions for Data Lake Storage Gen1. Data Lake Storage Gen1 separates authorization for account-related and data-related activities in the following manner:
-
-* [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) for account management
-* POSIX ACL for accessing data in the store
-
-### Azure RBAC for account management
-
-Four basic roles are defined for Data Lake Storage Gen1 by default. The roles permit different operations on a Data Lake Storage Gen1 account via the Azure portal, PowerShell cmdlets, and REST APIs. The Owner and Contributor roles can perform a variety of administration functions on the account. You can assign the Reader role to users who only view account management data.
-
-![Azure roles](./media/data-lake-store-security-overview/rbac-roles.png "Azure roles")
-
-Note that although roles are assigned for account management, some roles affect access to data. You need to use ACLs to control access to operations that a user can perform on the file system. The following table shows a summary of management rights and data access rights for the default roles.
-
-| Roles | Management rights | Data access rights | Explanation |
-| | | | |
-| No role assigned |None |Governed by ACL |The user cannot use the Azure portal or Azure PowerShell cmdlets to browse Data Lake Storage Gen1. The user can use command-line tools only. |
-| Owner |All |All |The Owner role is a superuser. This role can manage everything and has full access to data. |
-| Reader |Read-only |Governed by ACL |The Reader role can view everything regarding account management, such as which user is assigned to which role. The Reader role can't make any changes. |
-| Contributor |All except add and remove roles |Governed by ACL |The Contributor role can manage some aspects of an account, such as deployments and creating and managing alerts. The Contributor role cannot add or remove roles. |
-| User Access Administrator |Add and remove roles |Governed by ACL |The User Access Administrator role can manage user access to accounts. |
-
-For instructions, see [Assign users or security groups to Data Lake Storage Gen1 accounts](data-lake-store-secure-data.md#assign-users-or-security-groups-to-data-lake-storage-gen1-accounts).
-
-### Using ACLs for operations on file systems
-
-Data Lake Storage Gen1 is a hierarchical file system like Hadoop Distributed File System (HDFS), and it supports [POSIX ACLs](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_Access_Control_Lists). It controls read (r), write (w), and execute (x) permissions to resources for the Owner role, for the Owners group, and for other users and groups. In Data Lake Storage Gen1, ACLs can be enabled on the root folder, on subfolders, and on individual files. For more information on how ACLs work in context of Data Lake Storage Gen1, see [Access control in Data Lake Storage Gen1](data-lake-store-access-control.md).
-
-We recommend that you define ACLs for multiple users by using [security groups](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). Add users to a security group, and then assign the ACLs for a file or folder to that security group. This is especially useful because assigned permissions are limited to a maximum of 28 entries per file or folder. For more information about how to better secure data stored in Data Lake Storage Gen1 by using Microsoft Entra security groups, see [Assign users or security group as ACLs to the Data Lake Storage Gen1 file system](data-lake-store-secure-data.md#filepermissions). A short sketch of reading a folder's ACL programmatically follows the screenshot below.
-
-![List access permissions](./media/data-lake-store-security-overview/adl.acl.2.png "List access permissions")
-
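-To see how these permissions surface programmatically, the following sketch uses the `azure-datalake-store` Python filesystem client to read the ACL of a folder. The placeholder values are hypothetical, and the `get_acl_status` call is an assumption to check against the package version you use.
-
-```python
-from azure.datalake.store import core, lib
-
-# Hypothetical placeholders for a service principal that has access to the account
-adl_creds = lib.auth(tenant_id='<TENANT-ID>',
-                     client_id='<CLIENT-ID>',
-                     client_secret='<CLIENT-SECRET>',
-                     resource='https://datalake.azure.net/')
-adl = core.AzureDLFileSystem(adl_creds, store_name='<ADLS-GEN1-ACCOUNT-NAME>')
-
-# Inspect the owner, owning group, permission bits, and any named ACL entries on a folder
-print(adl.get_acl_status('/mydata'))
-```
-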
-## Network isolation
-
-Use Data Lake Storage Gen1 to help control access to your data store at the network level. You can establish firewalls and define an IP address range for your trusted clients. With an IP address range, only clients that have an IP address within the defined range can connect to Data Lake Storage Gen1.
-
-![Firewall settings and IP access](./media/data-lake-store-security-overview/firewall-ip-access.png "Firewall settings and IP address")
-
-Azure virtual networks (VNets) support service tags for Data Lake Storage Gen1. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change. For more information, see [Azure service tags overview](../virtual-network/service-tags-overview.md).
-
-## Data protection
-
-Data Lake Storage Gen1 protects your data throughout its life cycle. For data in transit, Data Lake Storage Gen1 uses the industry-standard Transport Layer Security (TLS 1.2) protocol to secure data over the network.
-
-![Encryption in Data Lake Storage Gen1](./media/data-lake-store-security-overview/adls-encryption.png "Encryption in Data Lake Storage Gen1")
-
-Data Lake Storage Gen1 also provides encryption for data that is stored in the account. You can choose to have your data encrypted or opt for no encryption. If you opt in to encryption, data stored in Data Lake Storage Gen1 is encrypted before it's stored on persistent media. In such a case, Data Lake Storage Gen1 automatically encrypts data prior to persisting and decrypts data prior to retrieval, so it is completely transparent to the client accessing the data. There is no code change required on the client side to encrypt/decrypt data.
-
-For key management, Data Lake Storage Gen1 provides two modes for managing your master encryption keys (MEKs), which are required for decrypting any data that is stored in Data Lake Storage Gen1. You can either let Data Lake Storage Gen1 manage the MEKs for you, or choose to retain ownership of the MEKs using your Azure Key Vault account. You specify the mode of key management while creating a Data Lake Storage Gen1 account. For more information on how to provide encryption-related configuration, see [Get started with Azure Data Lake Storage Gen1 using the Azure Portal](data-lake-store-get-started-portal.md).
-
-## Activity and diagnostic logs
-
-You can use activity or diagnostic logs, depending on whether you are looking for logs for account management-related activities or data-related activities.
-
-* Account management-related activities use Azure Resource Manager APIs and are surfaced in the Azure portal via activity logs.
-* Data-related activities use WebHDFS REST APIs and are surfaced in the Azure portal via diagnostic logs.
-
-### Activity log
-
-To comply with regulations, an organization might require adequate audit trails of account management activities if it needs to dig into specific incidents. Data Lake Storage Gen1 has built-in monitoring and it logs all account management activities.
-
-For account management audit trails, view and choose the columns that you want to log. You also can export activity logs to Azure Storage.
-
-![Activity log](./media/data-lake-store-security-overview/activity-logs.png "Activity log")
-
-For more information on working with activity logs, see [View activity logs to audit actions on resources](../azure-monitor/essentials/activity-log.md).
-
-### Diagnostics logs
-
-You can enable data access audit and diagnostic logging in the Azure portal and send the logs to an Azure Blob storage account, an event hub, or Azure Monitor logs.
-
-![Diagnostics logs](./media/data-lake-store-security-overview/diagnostic-logs.png "Diagnostics logs")
-
-For more information on working with diagnostic logs with Data Lake Storage Gen1, see [Accessing diagnostic logs for Data Lake Storage Gen1](data-lake-store-diagnostic-logs.md).
-
-## Summary
-
-Enterprise customers demand a data analytics cloud platform that is secure and easy to use. Data Lake Storage Gen1 is designed to help address these requirements through identity management and authentication via Microsoft Entra integration, ACL-based authorization, network isolation, data encryption in transit and at rest, and auditing.
-
-If you want to see new features in Data Lake Storage Gen1, send us your feedback in the [Data Lake Storage Gen1 UserVoice forum](https://feedback.azure.com/d365community/forum/7fd97106-7326-ec11-b6e6-000d3a4f032c).
-
-## See also
-
-* [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)
-* [Get started with Data Lake Storage Gen1](data-lake-store-get-started-portal.md)
-* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md)
data-lake-store Data Lake Store Service To Service Authenticate Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-service-to-service-authenticate-java.md
- Title: Service-to-service authentication - Data Lake Storage Gen2 ΓÇô Java SDK
-description: Learn how to achieve service-to-service authentication with Azure Data Lake Storage Gen2 using Microsoft Entra ID with Java
---- Previously updated : 05/29/2018---
-# Service-to-service authentication with Azure Data Lake Storage Gen2 using Java
-
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-service-to-service-authenticate-java.md)
-> * [Using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-service-to-service-authenticate-python.md)
-> * [Using REST API](data-lake-store-service-to-service-authenticate-rest-api.md)
->
->
-
-In this article, you learn how to use the Java SDK to do service-to-service authentication with Azure Data Lake Storage Gen2. End-user authentication with Data Lake Storage Gen2 using the Java SDK isn't supported.
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Web" Application**. You must have completed the steps in [Service-to-service authentication with Data Lake Storage Gen2 using Microsoft Entra ID](data-lake-store-service-to-service-authenticate-using-active-directory.md).
-
-* [Maven](https://maven.apache.org/install.html). This tutorial uses Maven for build and project dependencies. Although it is possible to build without using a build system like Maven or Gradle, these systems make it much easier to manage dependencies.
-
-* (Optional) An IDE like [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) or [Eclipse](https://www.eclipse.org/downloads/) or similar.
-
-## Service-to-service authentication
-
-1. Create a Maven project using [mvn archetype](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html) from the command line or using an IDE. For instructions on how to create a Java project using IntelliJ, see [here](https://www.jetbrains.com/help/idea/2016.1/creating-and-running-your-first-java-application.html). For instructions on how to create a project using Eclipse, see [here](https://help.eclipse.org/mars/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2FgettingStarted%2Fqs-3.htm).
-
-2. Add the following dependencies to your Maven **pom.xml** file. Add the following snippet before the **\</project>** tag:
-
- ```xml
- <dependencies>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-storage-file-datalake</artifactId>
- <version>12.6.0</version>
- </dependency>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.3.3</version>
- </dependency>
- <dependency>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-nop</artifactId>
- <version>1.7.21</version>
- </dependency>
- </dependencies>
- ```
-
- The first dependency is the Data Lake Storage Gen2 SDK (`azure-storage-file-datalake`) from the Maven repository. The second dependency is the Azure Identity client library (`azure-identity`), which is used to authenticate with Microsoft Entra ID. The third dependency specifies the logging framework (`slf4j-nop`) to use for this app. The Data Lake Storage Gen2 SDK uses the [slf4j](https://www.slf4j.org/) logging façade, which lets you choose from a number of popular logging frameworks, like log4j, Java logging, logback, or no logging. For this example, we disable logging, hence we use the **slf4j-nop** binding. To use other logging options in your app, see [Declaring project dependencies for logging](https://www.slf4j.org/manual.html#projectDep).
-
-3. Add the following import statements to your application.
-
- ```java
- import com.azure.identity.ClientSecretCredential;
- import com.azure.identity.ClientSecretCredentialBuilder;
- import com.azure.storage.file.datalake.DataLakeDirectoryClient;
- import com.azure.storage.file.datalake.DataLakeFileClient;
- import com.azure.storage.file.datalake.DataLakeServiceClient;
- import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
- import com.azure.storage.file.datalake.DataLakeFileSystemClient;
- import com.azure.storage.file.datalake.models.ListPathsOptions;
- import com.azure.storage.file.datalake.models.PathAccessControl;
- import com.azure.storage.file.datalake.models.PathPermissions;
- ```
-
-4. Use the following snippet in your Java app to obtain a token for the Microsoft Entra web application you created earlier by using the `ClientSecretCredential` class (the following example builds a credential named `credential`). The credential caches the values used to obtain the token in memory and automatically renews the token if it's about to expire. You can also supply another credential type from the Azure Identity library, or your own implementation, if you want tokens to be obtained by your custom code. For now, let's just use the one provided in the SDK.
-
- Replace **FILL-IN-HERE** with the actual values for the Microsoft Entra Web application.
-
- ```java
- private static String clientId = "FILL-IN-HERE";
- private static String tenantId = "FILL-IN-HERE";
- private static String clientSecret = "FILL-IN-HERE";
-
- ClientSecretCredential credential = new ClientSecretCredentialBuilder().clientId(clientId).tenantId(tenantId).clientSecret(clientSecret).build();
- ```
-
-The Data Lake Storage Gen2 SDK provides convenient methods that let you manage the security tokens needed to talk to the Data Lake Storage Gen2 account. However, the SDK doesn't mandate that only these methods be used. You can use any other means of obtaining a token as well, like using the [Azure Identity client library](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/identity/azure-identity) or your own custom code.
-
-## Next steps
-
-In this article, you learned how to use service-to-service authentication to authenticate with Data Lake Storage Gen2 using the Java SDK. You can now look at the following articles that talk about how to use the Java SDK to work with Data Lake Storage Gen2.
-
-* [Data operations on Data Lake Storage Gen2 using Java SDK](data-lake-store-get-started-java-sdk.md)
data-lake-store Data Lake Store Service To Service Authenticate Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-service-to-service-authenticate-net-sdk.md
- Title: .NET - Service-to-service authentication - Data Lake Storage Gen1
-description: Learn how to achieve service-to-service authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID using .NET SDK
----- Previously updated : 05/29/2018--
-# Service-to-service authentication with Azure Data Lake Storage Gen1 using .NET SDK
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-service-to-service-authenticate-java.md)
-> * [Using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-service-to-service-authenticate-python.md)
-> * [Using REST API](data-lake-store-service-to-service-authenticate-rest-api.md)
->
->
-
-In this article, you learn how to use the .NET SDK to do service-to-service authentication with Azure Data Lake Storage Gen1. For end-user authentication with Data Lake Storage Gen1 using the .NET SDK, see [End-user authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-end-user-authenticate-net-sdk.md).
-
-## Prerequisites
-* **Visual Studio 2013 or above**. The instructions below use Visual Studio 2019.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Web" Application**. You must have completed the steps in [Service-to-service authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-service-to-service-authenticate-using-active-directory.md).
-
-## Create a .NET application
-1. In Visual Studio, select the **File** menu, **New**, and then **Project**.
-2. Choose **Console App (.NET Framework)**, and then select **Next**.
-3. In **Project name**, enter `CreateADLApplication`, and then select **Create**.
-
-4. Add the NuGet packages to your project.
-
- 1. Right-click the project name in the Solution Explorer and click **Manage NuGet Packages**.
- 2. In the **NuGet Package Manager** tab, make sure that **Package source** is set to **nuget.org** and that **Include prerelease** check box is selected.
- 3. Search for and install the following NuGet packages:
-
- * `Microsoft.Azure.Management.DataLake.Store` - This tutorial uses v2.1.3-preview.
- * `Microsoft.Rest.ClientRuntime.Azure.Authentication` - This tutorial uses v2.2.12.
-
- ![Add a NuGet source](./media/data-lake-store-get-started-net-sdk/data-lake-store-install-nuget-package.png "Create a new Azure Data Lake account")
- 4. Close the **NuGet Package Manager**.
-
-5. Open **Program.cs**, delete the existing code, and then include the following statements to add references to namespaces.
-
-```csharp
-using System;
-using System.IO;
-using System.Linq;
-using System.Text;
-using System.Threading;
-using System.Collections.Generic;
-using System.Security.Cryptography.X509Certificates; // Required only if you are using an Azure AD application created with certificates
-
-using Microsoft.Rest;
-using Microsoft.Rest.Azure.Authentication;
-using Microsoft.Azure.Management.DataLake.Store;
-using Microsoft.Azure.Management.DataLake.Store.Models;
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
-```
-
-## Service-to-service authentication with client secret
-Add this snippet in your .NET client application. Replace the placeholder values with the values retrieved from a Microsoft Entra web application (listed as a prerequisite). This snippet lets you authenticate your application **non-interactively** with Data Lake Storage Gen1 using the client secret/key for the Microsoft Entra web application.
-
-```csharp
-private static void Main(string[] args)
-{
- // Service principal / application authentication with client secret / key
- // Use the client ID of an existing AAD "Web App" application.
- string TENANT = "<AAD-directory-domain>";
- string CLIENTID = "<AAD_WEB_APP_CLIENT_ID>";
- System.Uri ARM_TOKEN_AUDIENCE = new System.Uri(@"https://management.core.windows.net/");
- System.Uri ADL_TOKEN_AUDIENCE = new System.Uri(@"https://datalake.azure.net/");
- string secret_key = "<AAD_WEB_APP_SECRET_KEY>";
- var armCreds = GetCreds_SPI_SecretKey(TENANT, ARM_TOKEN_AUDIENCE, CLIENTID, secret_key);
- var adlCreds = GetCreds_SPI_SecretKey(TENANT, ADL_TOKEN_AUDIENCE, CLIENTID, secret_key);
-}
-```
-
-The preceding snippet uses a helper function `GetCreds_SPI_SecretKey`. The code for this helper function is available [here on GitHub](https://github.com/Azure-Samples/data-lake-analytics-dotnet-auth-options#getcreds_spi_secretkey).
-
-## Service-to-service authentication with certificate
-
-Add this snippet in your .NET client application. Replace the placeholder values with the values retrieved from a Microsoft Entra web application (listed as a prerequisite). This snippet lets you authenticate your application **non-interactively** with Data Lake Storage Gen1 using the certificate for a Microsoft Entra web application. For instructions on how to create a Microsoft Entra application, see [Create service principal with certificates](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-self-signed-certificate).
-
-```csharp
-private static void Main(string[] args)
-{
- // Service principal / application authentication with certificate
- // Use the client ID and certificate of an existing AAD "Web App" application.
- string TENANT = "<AAD-directory-domain>";
- string CLIENTID = "<AAD_WEB_APP_CLIENT_ID>";
- System.Uri ARM_TOKEN_AUDIENCE = new System.Uri(@"https://management.core.windows.net/");
- System.Uri ADL_TOKEN_AUDIENCE = new System.Uri(@"https://datalake.azure.net/");
- var cert = new X509Certificate2(@"d:\cert.pfx", "<certpassword>");
- var armCreds = GetCreds_SPI_Cert(TENANT, ARM_TOKEN_AUDIENCE, CLIENTID, cert);
- var adlCreds = GetCreds_SPI_Cert(TENANT, ADL_TOKEN_AUDIENCE, CLIENTID, cert);
-}
-```
-
-The preceding snippet uses a helper function `GetCreds_SPI_Cert`. The code for this helper function is available [here on GitHub](https://github.com/Azure-Samples/data-lake-analytics-dotnet-auth-options#getcreds_spi_cert).
-
-## Next steps
-In this article, you learned how to use service-to-service authentication to authenticate with Data Lake Storage Gen1 using .NET SDK. You can now look at the following articles that talk about how to use the .NET SDK to work with Data Lake Storage Gen1.
-
-* [Account management operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-get-started-net-sdk.md)
-* [Data operations on Data Lake Storage Gen1 using .NET SDK](data-lake-store-data-operations-net-sdk.md)
data-lake-store Data Lake Store Service To Service Authenticate Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-service-to-service-authenticate-python.md
- Title: Python - Service-to-service authentication - Data Lake Storage Gen1
-description: Learn how to achieve service-to-service authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID using Python
---- Previously updated : 05/29/2018---
-# Service-to-service authentication with Azure Data Lake Storage Gen1 using Python
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-service-to-service-authenticate-java.md)
-> * [Using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-service-to-service-authenticate-python.md)
-> * [Using REST API](data-lake-store-service-to-service-authenticate-rest-api.md)
->
->
-
-In this article, you learn how to use the Python SDK to do service-to-service authentication with Azure Data Lake Storage Gen1. For end-user authentication with Data Lake Storage Gen1 using Python, see [End-user authentication with Data Lake Storage Gen1 using Python](data-lake-store-end-user-authenticate-python.md).
--
-## Prerequisites
-
-* **Python**. You can download Python from [here](https://www.python.org/downloads/). This article uses Python 3.6.2.
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Web" Application**. You must have completed the steps in [Service-to-service authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-service-to-service-authenticate-using-active-directory.md).
-
-## Install the modules
-
-To work with Data Lake Storage Gen1 using Python, you need to install three modules.
-
-* The `azure-mgmt-resource` module, which includes Azure modules for Active Directory, etc.
-* The `azure-mgmt-datalake-store` module, which includes the Data Lake Storage Gen1 account management operations. For more information on this module, see [Azure Data Lake Storage Gen1 Management module reference](/python/api/azure-mgmt-datalake-store/).
-* The `azure-datalake-store` module, which includes the Data Lake Storage Gen1 filesystem operations. For more information on this module, see [azure-datalake-store Filesystem module reference](/python/api/azure-datalake-store/azure.datalake.store.core/).
-
-Use the following commands to install the modules.
-
-```
-pip install azure-mgmt-resource
-pip install azure-mgmt-datalake-store
-pip install azure-datalake-store
-```
-
-## Create a new Python application
-
-1. In the IDE of your choice, create a new Python application, for example, **mysample.py**.
-
-2. Add the following snippet to import the required modules:
-
- ```python
- ## Use this for Azure AD authentication
- from msrestazure.azure_active_directory import AADTokenCredentials
-
- ## Required for Data Lake Storage Gen1 account management
- from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import DataLakeStoreAccount
-
- ## Required for Data Lake Storage Gen1 filesystem management
- from azure.datalake.store import core, lib, multithread
-
- # Common Azure imports
- import adal
- from azure.mgmt.resource.resources import ResourceManagementClient
- from azure.mgmt.resource.resources.models import ResourceGroup
-
- ## Use these as needed for your application
- import logging, getpass, pprint, uuid, time
- ```
-
-3. Save changes to mysample.py.
-
-## Service-to-service authentication with client secret for account management
-
-Use this snippet to authenticate with Microsoft Entra ID for account management operations on Data Lake Storage Gen1, such as creating or deleting a Data Lake Storage Gen1 account. The snippet authenticates your application non-interactively by using the client secret for an application/service principal of an existing Microsoft Entra ID "Web App" application.
-
-```python
-authority_host_uri = 'https://login.microsoftonline.com'
-tenant = '<TENANT>'
-authority_uri = authority_host_uri + '/' + tenant
-RESOURCE = 'https://management.core.windows.net/'
-client_id = '<CLIENT_ID>'
-client_secret = '<CLIENT_SECRET>'
-
-context = adal.AuthenticationContext(authority_uri, api_version=None)
-mgmt_token = context.acquire_token_with_client_credentials(RESOURCE, client_id, client_secret)
-armCreds = AADTokenCredentials(mgmt_token, client_id, resource=RESOURCE)
-```
-
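-Once `armCreds` is available, you typically pass it to the account management client imported earlier. The following is a minimal sketch; the subscription ID is a hypothetical placeholder, and the exact operations exposed by the client depend on the `azure-mgmt-datalake-store` version you install.
-
-```python
-## Hypothetical placeholder; replace with your Azure subscription ID
-subscription_id = '<SUBSCRIPTION-ID>'
-
-## Create the account management client with the ARM credentials obtained above
-adlsAcctClient = DataLakeStoreAccountManagementClient(armCreds, subscription_id)
-
-## The client exposes account management operations (create, list, delete, and so on);
-## check the azure-mgmt-datalake-store reference for the operation group names in your version.
-```
-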
-## Service-to-service authentication with client secret for filesystem operations
-
-Use the following snippet to authenticate with Microsoft Entra ID for filesystem operations on Data Lake Storage Gen1, such as creating a folder or uploading a file. The snippet authenticates your application non-interactively by using the client secret for an application/service principal of an existing Microsoft Entra ID "Web App" application.
-
-```python
-tenant = '<TENANT>'
-RESOURCE = 'https://datalake.azure.net/'
-client_id = '<CLIENT_ID>'
-client_secret = '<CLIENT_SECRET>'
-
-adlCreds = lib.auth(tenant_id = tenant,
- client_secret = client_secret,
- client_id = client_id,
- resource = RESOURCE)
-```
-
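-With `adlCreds` in hand, you can create the filesystem client and start working with files and folders. The following is a minimal sketch; the account name and the folder path are hypothetical placeholders.
-
-```python
-## Hypothetical placeholder; replace with your Data Lake Storage Gen1 account name
-adlsAccountName = '<ADLS-GEN1-ACCOUNT-NAME>'
-
-## Create a filesystem client from the credentials obtained above
-adlsFileSystemClient = core.AzureDLFileSystem(adlCreds, store_name=adlsAccountName)
-
-## For example, create a folder and list the contents of the root
-adlsFileSystemClient.mkdir('/mysampledirectory')
-print(adlsFileSystemClient.ls('/'))
-```
-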
-<!-- ## Service-to-service authentication with certificate for account management
-
-Use this snippet to authenticate with Azure AD for account management operations on Data Lake Storage Gen1 such as create a Data Lake Storage Gen1 account, delete a Data Lake Storage Gen1 account, etc. The following snippet can be used to authenticate your application non-interactively, using the certificate of an existing Azure AD "Web App" application. For instructions on how to create an Azure AD application, see [Create service principal with certificates](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-self-signed-certificate).
-
- authority_host_uri = 'https://login.microsoftonline.com'
- tenant = '<TENANT>'
- authority_uri = authority_host_uri + '/' + tenant
- resource_uri = 'https://management.core.windows.net/'
- client_id = '<CLIENT_ID>'
- client_cert = '<CLIENT_CERT>'
- client_cert_thumbprint = '<CLIENT_CERT_THUMBPRINT>'
-
- context = adal.AuthenticationContext(authority_uri, api_version=None)
- mgmt_token = context.acquire_token_with_client_certificate(resource_uri, client_id, client_cert, client_cert_thumbprint)
- credentials = AADTokenCredentials(mgmt_token, client_id) -->
-
-## Next steps
-In this article, you learned how to use service-to-service authentication to authenticate with Data Lake Storage Gen1 using Python. You can now look at the following articles that talk about how to use Python to work with Data Lake Storage Gen1.
-
-* [Account management operations on Data Lake Storage Gen1 using Python](data-lake-store-get-started-python.md)
-* [Data operations on Data Lake Storage Gen1 using Python](data-lake-store-data-operations-python.md)
data-lake-store Data Lake Store Service To Service Authenticate Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-service-to-service-authenticate-rest-api.md
- Title: REST - Service-to-service authentication - Data Lake Storage Gen1 - Azure
-description: Learn how to achieve service-to-service authentication with Azure Data Lake Storage Gen1 and Microsoft Entra ID using the REST API.
---- Previously updated : 05/29/2018---
-# Service-to-service authentication with Azure Data Lake Storage Gen1 using REST API
-> [!div class="op_single_selector"]
-> * [Using Java](data-lake-store-service-to-service-authenticate-java.md)
-> * [Using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md)
-> * [Using Python](data-lake-store-service-to-service-authenticate-python.md)
-> * [Using REST API](data-lake-store-service-to-service-authenticate-rest-api.md)
->
->
-
-In this article, you learn how to use the REST API to do service-to-service authentication with Azure Data Lake Storage Gen1. For end-user authentication with Data Lake Storage Gen1 using REST API, see [End-user authentication with Data Lake Storage Gen1 using REST API](data-lake-store-end-user-authenticate-rest-api.md).
-
-## Prerequisites
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Create a Microsoft Entra ID "Web" Application**. You must have completed the steps in [Service-to-service authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-service-to-service-authenticate-using-active-directory.md).
-
-## Service-to-service authentication
-
-In this scenario, the application provides its own credentials to perform the operations. For this, you must issue a POST request like the one shown in the following snippet:
-
-```console
-curl -X POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token \
- -F grant_type=client_credentials \
- -F resource=https://management.core.windows.net/ \
- -F client_id=<CLIENT-ID> \
- -F client_secret=<AUTH-KEY>
-```
-
-The output of the request includes an authorization token (denoted by `access_token` in the output below) that you subsequently pass with your REST API calls. Save the authorization token in a text file; you will need it when making REST calls to Data Lake Storage Gen1.
-
-```output
-{"token_type":"Bearer","expires_in":"3599","expires_on":"1458245447","not_before":"1458241547","resource":"https://management.core.windows.net/","access_token":"<REDACTED>"}
-```
-
-This article uses the **non-interactive** approach. For more information on non-interactive (service-to-service calls), see [Service to service calls using credentials](/previous-versions/azure/dn645543(v=azure.100)).
-
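-If you prefer to script the token request rather than use curl, the following Python sketch sends the same POST request with the `requests` package and extracts the token for later use. The placeholder values mirror the curl example above.
-
-```python
-import requests
-
-# Placeholder values; replace with your tenant ID, client ID, and client secret
-tenant_id = '<TENANT-ID>'
-token_url = 'https://login.microsoftonline.com/' + tenant_id + '/oauth2/token'
-
-response = requests.post(token_url, data={
-    'grant_type': 'client_credentials',
-    'resource': 'https://management.core.windows.net/',
-    'client_id': '<CLIENT-ID>',
-    'client_secret': '<AUTH-KEY>',
-})
-response.raise_for_status()
-access_token = response.json()['access_token']
-
-# Pass the token as a bearer token on subsequent REST calls
-headers = {'Authorization': 'Bearer ' + access_token}
-```
-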
-## Next steps
-
-In this article, you learned how to use service-to-service authentication to authenticate with Data Lake Storage Gen1 using REST API. You can now look at the following articles that talk about how to use the REST API to work with Data Lake Storage Gen1.
-
-* [Account management operations on Data Lake Storage Gen1 using REST API](data-lake-store-get-started-rest-api.md)
-* [Data operations on Data Lake Storage Gen1 using REST API](data-lake-store-data-operations-rest-api.md)
data-lake-store Data Lake Store Service To Service Authenticate Using Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory.md
- Title: Service-to-service authentication - Data Lake Storage Gen1 - Azure
-description: Learn how to achieve service-to-service authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID.
----- Previously updated : 05/29/2018--
-# Service-to-service authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID
-> [!div class="op_single_selector"]
-> * [End-user authentication](data-lake-store-end-user-authenticate-using-active-directory.md)
-> * [Service-to-service authentication](data-lake-store-service-to-service-authenticate-using-active-directory.md)
->
->
-
-Azure Data Lake Storage Gen1 uses Microsoft Entra ID for authentication. Before authoring an application that works with Data Lake Storage Gen1, you must decide how to authenticate your application with Microsoft Entra ID. The two main options available are:
-
-* End-user authentication
-* Service-to-service authentication (this article)
-
-Both these options result in your application being provided with an OAuth 2.0 token, which gets attached to each request made to Data Lake Storage Gen1.
-
-This article talks about how to create a **Microsoft Entra web application for service-to-service authentication**. For instructions on Microsoft Entra application configuration for end-user authentication, see [End-user authentication with Data Lake Storage Gen1 using Microsoft Entra ID](data-lake-store-end-user-authenticate-using-active-directory.md).
-
-## Prerequisites
-* An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-## Step 1: Create an Active Directory web application
-
-Create and configure a Microsoft Entra web application for service-to-service authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID. For instructions, see [Create a Microsoft Entra application](../active-directory/develop/howto-create-service-principal-portal.md).
-
-While following the instructions at the preceding link, make sure you select **Web App / API** for application type, as shown in the following screenshot:
-
-![Create web app](./media/data-lake-store-authenticate-using-active-directory/azure-active-directory-create-web-app.png "Create web app")
-
-## Step 2: Get application ID, authentication key, and tenant ID
-When programmatically logging in, you need the ID for your application. If the application runs under its own credentials, you also need an authentication key.
-
-* For instructions on how to retrieve the application ID and authentication key (also called the client secret) for your application, see [Get application ID and authentication key](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application).
-
-* For instructions on how to retrieve the tenant ID, see [Get tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application).
-
-<a name='step-3-assign-the-azure-ad-application-to-the-azure-data-lake-storage-gen1-account-file-or-folder'></a>
-
-## Step 3: Assign the Microsoft Entra application to the Azure Data Lake Storage Gen1 account file or folder
--
-1. Sign on to the [Azure portal](https://portal.azure.com). Open the Data Lake Storage Gen1 account that you want to associate with the Microsoft Entra application you created earlier.
-2. In your Data Lake Storage Gen1 account blade, click **Data Explorer**.
-
- ![Create directories in Data Lake Storage Gen1 account](./media/data-lake-store-authenticate-using-active-directory/adl.start.data.explorer.png "Create directories in Data Lake account")
-3. In the **Data Explorer** blade, click the file or folder for which you want to provide access to the Microsoft Entra application, and then click **Access**. To configure access to a file, you must click **Access** from the **File Preview** blade.
-
- ![Set ACLs on Data Lake file system](./media/data-lake-store-authenticate-using-active-directory/adl.acl.1.png "Set ACLs on Data Lake file system")
-4. The **Access** blade lists the standard access and custom access already assigned to the root. Click the **Add** icon to add custom-level ACLs.
-
- ![List standard and custom access](./media/data-lake-store-authenticate-using-active-directory/adl.acl.2.png "List standard and custom access")
-5. Click the **Add** icon to open the **Add Custom Access** blade. In this blade, click **Select User or Group**, and then in **Select User or Group** blade, look for the Microsoft Entra application you created earlier. If you have many groups to search from, use the text box at the top to filter on the group name. Click the group you want to add and then click **Select**.
-
- ![Add a group](./media/data-lake-store-authenticate-using-active-directory/adl.acl.3.png "Add a group")
-6. Click **Select Permissions**, select the permissions and whether you want to assign the permissions as a default ACL, access ACL, or both. Click **OK**.
-
- ![Screenshot of the Add Custom Access blade with the Select Permissions option called out and the Select Permissions blade with the OK option called out.](./media/data-lake-store-authenticate-using-active-directory/adl.acl.4.png "Assign permissions to group")
-
- For more information about permissions in Data Lake Storage Gen1, and Default/Access ACLs, see [Access Control in Data Lake Storage Gen1](data-lake-store-access-control.md).
-7. In the **Add Custom Access** blade, click **OK**. The newly added groups, with the associated permissions, are listed in the **Access** blade.
-
- ![Screenshot of the Access blade with the newly added group called out in the Custom Access section.](./media/data-lake-store-authenticate-using-active-directory/adl.acl.5.png "Assign permissions to group")
-
-> [!NOTE]
-> If you plan on restricting your Microsoft Entra application to a specific folder, you will also need to give that same Microsoft Entra application **Execute** permission to the root to enable file creation access via the .NET SDK.
-
-> [!NOTE]
-> If you want to use the SDKs to create a Data Lake Storage Gen1 account, you must assign the Microsoft Entra web application as a role to the Resource Group in which you create the Data Lake Storage Gen1 account.
->
->
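
If you prefer to script these assignments, the following Azure CLI sketch is illustrative only: the account name, folder path, object ID, and resource group are placeholders, and it assumes the `az dls` commands for Data Lake Storage Gen1 are available in your CLI version.

```bash
# Grant the service principal read/execute on the target folder (access ACL),
# plus execute on the root so clients can traverse to that folder.
az dls fs access set-entry --account myadlsg1 \
  --path /mydata \
  --acl-spec "user:<service-principal-object-id>:r-x"

az dls fs access set-entry --account myadlsg1 \
  --path / \
  --acl-spec "user:<service-principal-object-id>:--x"

# If you create Data Lake Storage Gen1 accounts through the SDKs, also give
# the application a role on the resource group.
az role assignment create --assignee <application-id> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```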
-
-## Step 4: Get the OAuth 2.0 token endpoint (only for Java-based applications)
-
-1. Sign on to the [Azure portal](https://portal.azure.com) and click Active Directory from the left pane.
-
-2. From the left pane, click **App registrations**.
-
-3. From the top of the App registrations blade, click **Endpoints**.
-
- ![Screenshot of Active Directory with the App registrations option and the Endpoints option called out.](./media/data-lake-store-authenticate-using-active-directory/oauth-token-endpoint.png "OAuth token endpoint")
-
-4. From the list of endpoints, copy the OAuth 2.0 token endpoint.
-
- ![Screenshot of the Endpoints blade with the O AUTH 2 point O TOKEN ENDPOINT copy icon called out.](./media/data-lake-store-authenticate-using-active-directory/oauth-token-endpoint-1.png "OAuth token endpoint")
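
To confirm that the endpoint and the values from step 2 fit together, you can request a token with a client-credentials call. This is a hedged sketch: substitute your own tenant ID, application ID, and key. `https://datalake.azure.net/` is the resource URI used for Data Lake Storage Gen1.

```bash
# Request an access token for Data Lake Storage Gen1 from the OAuth 2.0 token endpoint.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=<application-id>" \
  --data-urlencode "client_secret=<authentication-key>" \
  --data-urlencode "resource=https://datalake.azure.net/"
```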
-
-## Next steps
-In this article, you created a Microsoft Entra web application and gathered the information you need in the client applications that you author using the .NET SDK, Java, Python, REST API, and so on. You can now proceed to the following articles, which describe how to use the Microsoft Entra web application to first authenticate with Data Lake Storage Gen1 and then perform other operations on the store.
-
-* [Service-to-service authentication with Data Lake Storage Gen1 using Java](data-lake-store-service-to-service-authenticate-java.md)
-* [Service-to-service authentication with Data Lake Storage Gen1 using .NET SDK](data-lake-store-service-to-service-authenticate-net-sdk.md)
-* [Service-to-service authentication with Data Lake Storage Gen1 using Python](data-lake-store-service-to-service-authenticate-python.md)
-* [Service-to-service authentication with Data Lake Storage Gen1 using REST API](data-lake-store-service-to-service-authenticate-rest-api.md)
data-lake-store Data Lake Store Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-stream-analytics.md
- Title: Stream data from Stream Analytics to Data Lake Storage Gen1 - Azure
-description: Learn how to use Azure Data Lake Storage Gen1 as an output for an Azure Stream Analytics job, with a simple scenario that reads data from an Azure Storage blob.
---- Previously updated : 05/30/2018---
-# Stream data from Azure Storage Blob into Azure Data Lake Storage Gen1 using Azure Stream Analytics
-In this article, you learn how to use Azure Data Lake Storage Gen1 as an output for an Azure Stream Analytics job. This article demonstrates a simple scenario that reads data from an Azure Storage blob (input) and writes the data to Data Lake Storage Gen1 (output).
-
-## Prerequisites
-Before you begin this tutorial, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-
-* **Azure Storage account**. You use a blob container from this account to provide input data for a Stream Analytics job. For this tutorial, assume you have a storage account called **storageforasa** and a container within the account called **storageforasacontainer**. Once you have created the container, upload a sample data file to it (a CLI sketch for the upload follows this list).
-
-* **A Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). Let's assume you have a Data Lake Storage Gen1 account called **myadlsg1**.
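
If you prefer the command line to a graphical client for the sample upload mentioned above, a sketch like the following works; the file name and the `--auth-mode login` choice are assumptions you can adjust for your environment.

```bash
# Upload a sample data file to the blob container used as job input.
az storage blob upload \
  --account-name storageforasa \
  --container-name storageforasacontainer \
  --name vehicle1_09142014.csv \
  --file ./vehicle1_09142014.csv \
  --auth-mode login
```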
-
-## Create a Stream Analytics Job
-You start by creating a Stream Analytics job that includes an input source and an output destination. For this tutorial, the source is an Azure blob container and the destination is Data Lake Storage Gen1.
-
-1. Sign on to the [Azure portal](https://portal.azure.com).
-
-2. From the left pane, click **Stream Analytics jobs**, and then click **Add**.
-
- ![Create a Stream Analytics Job](./media/data-lake-store-stream-analytics/create.job.png "Create a Stream Analytics job")
-
- > [!NOTE]
    > Make sure you create the job in the same region as the storage account, or you incur the additional cost of moving data between regions.
- >
-
-## Create a Blob input for the job
-
-1. Open the page for the Stream Analytics job, from the left pane click the **Inputs** tab, and then click **Add**.
-
- ![Screenshot of the Stream Analytics Job blade with the Inputs option and the Add stream input option called out.](./media/data-lake-store-stream-analytics/create.input.1.png "Add an input to your job")
-
-2. On the **New input** blade, provide the following values.
-
- ![Screenshot of the Blob storage - new input blade.](./media/data-lake-store-stream-analytics/create.input.2.png "Add an input to your job")
-
- * For **Input alias**, enter a unique name for the job input.
- * For **Source type**, select **Data stream**.
- * For **Source**, select **Blob storage**.
- * For **Subscription**, select **Use blob storage from current subscription**.
- * For **Storage account**, select the storage account that you created as part of the prerequisites.
- * For **Container**, select the container that you created in the selected storage account.
- * For **Event serialization format**, select **CSV**.
- * For **Delimiter**, select **tab**.
- * For **Encoding**, select **UTF-8**.
-
- Click **Create**. The portal now adds the input and tests the connection to it.
--
-## Create a Data Lake Storage Gen1 output for the job
-
-1. Open the page for the Stream Analytics job, click the **Outputs** tab, click **Add**, and select **Data Lake Storage Gen1**.
-
- ![Screenshot of the Stream Analytics Job blade with the Outputs option, Add option, and Data Lake Storage Gen 1 option called out.](./media/data-lake-store-stream-analytics/create.output.1.png "Add an output to your job")
-
-2. On the **New output** blade, provide the following values.
-
- ![Screenshot of the Data Lake Storage Gen 1 - new output blade with the Authorize option called out.](./media/data-lake-store-stream-analytics/create.output.2.png "Add an output to your job")
-
- * For **Output alias**, enter a unique name for the job output. This is a friendly name used in queries to direct the query output to this Data Lake Storage Gen1 account.
- * You will be prompted to authorize access to the Data Lake Storage Gen1 account. Click **Authorize**.
-
-3. On the **New output** blade, continue to provide the following values.
-
- ![Screenshot of the Data Lake Storage Gen 1 - new output blade.](./media/data-lake-store-stream-analytics/create.output.3.png "Add an output to your job")
-
   * For **Account name**, select the Data Lake Storage Gen1 account that you already created and where you want the job output to be sent.
- * For **Path prefix pattern**, enter a file path used to write your files within the specified Data Lake Storage Gen1 account.
- * For **Date format**, if you used a date token in the prefix path, you can select the date format in which your files are organized.
- * For **Time format**, if you used a time token in the prefix path, specify the time format in which your files are organized.
- * For **Event serialization format**, select **CSV**.
- * For **Delimiter**, select **tab**.
- * For **Encoding**, select **UTF-8**.
-
- Click **Create**. The portal now adds the output and tests the connection to it.
-
-## Run the Stream Analytics job
-
-1. To run a Stream Analytics job, you must run a query from the **Query** tab. For this tutorial, you can run the sample query by replacing the placeholders with the job input and output aliases, as shown in the screen capture below.
-
- ![Run query](./media/data-lake-store-stream-analytics/run.query.png "Run query")
-
-2. Click **Save** from the top of the screen, and then from the **Overview** tab, click **Start**. From the dialog box, select **Custom Time**, and then set the current date and time.
-
- ![Set job time](./media/data-lake-store-stream-analytics/run.query.2.png "Set job time")
-
- Click **Start** to start the job. It can take up to a couple minutes to start the job.
-
-3. To trigger the job to pick up the data from the blob, copy a sample data file to the blob container. You can get a sample data file from the [Azure Data Lake Git Repository](https://github.com/Azure/usql/tree/master/Examples/Samples/Data/AmbulanceData/Drivers.txt). For this tutorial, let's copy the file **vehicle1_09142014.csv**. You can use various clients, such as [Azure Storage Explorer](https://storageexplorer.com/), to upload data to a blob container.
-
-4. From the **Overview** tab, under **Monitoring**, see how the data was processed.
-
- ![Monitor job](./media/data-lake-store-stream-analytics/run.query.3.png "Monitor job")
-
-5. Finally, you can verify that the job output data is available in the Data Lake Storage Gen1 account.
-
- ![Verify output](./media/data-lake-store-stream-analytics/run.query.4.png "Verify output")
-
- In the Data Explorer pane, notice that the output is written to a folder path as specified in the Data Lake Storage Gen1 output settings (`streamanalytics/job/output/{date}/{time}`).
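
As an alternative to browsing with Data Explorer, you can list the output folder from the command line. This sketch assumes the `az dls` commands are available in your CLI version.

```bash
# List the files the job wrote to the Data Lake Storage Gen1 account.
az dls fs list --account myadlsg1 --path /streamanalytics/job/output
```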
-
-## See also
-* [Create an HDInsight cluster to use Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store With Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-with-data-catalog.md
- Title: Integrate Data Lake Storage Gen1 with Azure Data Catalog
-description: Learn how to register data from Azure Data Lake Storage Gen1 in Azure Data Catalog to make data discoverable in your organization.
---- Previously updated : 05/29/2018---
-# Register data from Azure Data Lake Storage Gen1 in Azure Data Catalog
-In this article, you learn how to integrate Azure Data Lake Storage Gen1 with Azure Data Catalog to make your data discoverable within an organization. For more information on cataloging data, see [Azure Data Catalog](../data-catalog/overview.md). To understand scenarios in which you can use Data Catalog, see [Azure Data Catalog common scenarios](../data-catalog/data-catalog-common-scenarios.md).
-
-## Prerequisites
-Before you begin this tutorial, you must have the following:
-
-* **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* **Enable your Azure subscription** for Data Lake Storage Gen1. See [instructions](data-lake-store-get-started-portal.md).
-* **A Data Lake Storage Gen1 account**. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](data-lake-store-get-started-portal.md). For this tutorial, create a Data Lake Storage Gen1 account called **datacatalogstore**.
-
    Once you have created the account, upload a sample data set to it. For this tutorial, upload all the .csv files under the **AmbulanceData** folder in the [Azure Data Lake Git Repository](https://github.com/Azure/usql/tree/master/Examples/Samples/Data/AmbulanceData/). You can use various clients, such as [Azure Storage Explorer](https://storageexplorer.com/), to upload the data to the Data Lake Storage Gen1 account (a CLI sketch follows this list).
-* **Azure Data Catalog**. Your organization must already have an Azure Data Catalog. Only one catalog is allowed for each organization.
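
If you script the prerequisite upload instead of using a graphical client, a sketch like the following copies the sample folder into the account. It assumes the `az dls` commands are available in your CLI version and that you downloaded the **AmbulanceData** folder locally.

```bash
# Upload the AmbulanceData sample folder to the Data Lake Storage Gen1 account.
az dls fs upload --account datacatalogstore \
  --source-path ./AmbulanceData \
  --destination-path /AmbulanceData
```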
-
-## Register Data Lake Storage Gen1 as a source for Data Catalog
-
-1. Go to `https://azure.microsoft.com/services/data-catalog`, and click **Get started**.
-1. Log into the Azure Data Catalog portal, and click **Publish data**.
-
- ![Register a data source](./media/data-lake-store-with-data-catalog/register-data-source.png "Register a data source")
-1. On the next page, click **Launch Application**. This will download the application manifest file on your computer. Double-click the manifest file to start the application.
-1. On the Welcome page, click **Sign in**, and enter your credentials.
-
- ![Welcome screen](./media/data-lake-store-with-data-catalog/welcome.screen.png "Welcome screen")
-1. On the Select a Data Source page, select **Azure Data Lake Store**, and then click **Next**.
-
- ![Select data source](./media/data-lake-store-with-data-catalog/select-source.png "Select data source")
-1. On the next page, provide the Data Lake Storage Gen1 account name that you want to register in Data Catalog. Leave the other options as default and then click **Connect**.
-
- ![Connect to data source](./media/data-lake-store-with-data-catalog/connect-to-source.png "Connect to data source")
-1. The next page can be divided into the following segments.
-
- a. The **Server Hierarchy** box represents the Data Lake Storage Gen1 account folder structure. **$Root** represents the Data Lake Storage Gen1 account root, and **AmbulanceData** represents the folder created in the root of the Data Lake Storage Gen1 account.
-
- b. The **Available objects** box lists the files and folders under the **AmbulanceData** folder.
-
- c. The **Objects to be registered** box lists the files and folders that you want to register in Azure Data Catalog.
-
- ![Screenshot of the Microsoft Azure Data Catalog - Store Account dialog box.](./media/data-lake-store-with-data-catalog/view-data-structure.png "View data structure")
-1. For this tutorial, you should register all the files in the directory. To do that, click the (![move objects](./media/data-lake-store-with-data-catalog/move-objects.png "Move objects")) button to move all the files to the **Objects to be registered** box.
-
   Because the data will be registered in an organization-wide data catalog, we recommend adding some metadata that you can later use to quickly locate the data. For example, you can add an email address for the data owner (for example, the person uploading the data) or add a tag to identify the data. The screen capture below shows a tag that you add to the data.
-
- ![Screenshot of the Microsoft Azure Data Catalog - Store Account dialog box with the tag that was added to the data called out.](./media/data-lake-store-with-data-catalog/view-selected-data-structure.png "View data structure")
-
- Click **Register**.
-1. The following screen capture shows that the data is successfully registered in the Data Catalog.
-
- ![Registration complete](./media/data-lake-store-with-data-catalog/registration-complete.png "View data structure")
-1. Click **View Portal** to go back to the Data Catalog portal and verify that you can now access the registered data from the portal. To search the data, you can use the tag you used while registering the data.
-
- ![Search data in catalog](./media/data-lake-store-with-data-catalog/search-data-in-catalog.png "Search data in catalog")
-1. You can now perform operations like adding annotations and documentation to the data. For more information, see the following links.
-
- * [Annotate data sources in Data Catalog](../data-catalog/data-catalog-how-to-annotate.md)
- * [Document data sources in Data Catalog](../data-catalog/data-catalog-how-to-documentation.md)
-
-## See also
-* [Annotate data sources in Data Catalog](../data-catalog/data-catalog-how-to-annotate.md)
-* [Document data sources in Data Catalog](../data-catalog/data-catalog-how-to-documentation.md)
-* [Integrate Data Lake Storage Gen1 with other Azure services](data-lake-store-integrate-with-other-services.md)
data-lake-store Data Lakes Store Authentication Using Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lakes-store-authentication-using-azure-active-directory.md
- Title: Authentication - Data Lake Storage Gen1 with Microsoft Entra ID
-description: Learn how to authenticate with Azure Data Lake Storage Gen1 using Microsoft Entra ID.
---- Previously updated : 05/29/2018---
-# Authentication with Azure Data Lake Storage Gen1 using Microsoft Entra ID
-
-Azure Data Lake Storage Gen1 uses Microsoft Entra ID for authentication. Before authoring an application that works with Data Lake Storage Gen1, you must decide how to authenticate your application with Microsoft Entra ID.
-
-## Authentication options
-
-* **End-user authentication** - An end user's Azure credentials are used to authenticate with Data Lake Storage Gen1. The application you create to work with Data Lake Storage Gen1 prompts for these user credentials. As a result, this authentication mechanism is *interactive* and the application runs in the logged in user's context. For more information and instructions, see [End-user authentication for Data Lake Storage Gen1](data-lake-store-end-user-authenticate-using-active-directory.md).
-
-* **Service-to-service authentication** - Use this option if you want an application to authenticate itself with Data Lake Storage Gen1. In such cases, you create a Microsoft Entra application and use the key from the Microsoft Entra application to authenticate with Data Lake Storage Gen1. As a result, this authentication mechanism is *non-interactive*. For more information and instructions, see [Service-to-service authentication for Data Lake Storage Gen1](data-lake-store-service-to-service-authenticate-using-active-directory.md).
-
-The following table illustrates how end-user and service-to-service authentication mechanisms are supported for Data Lake Storage Gen1. Here's how you read the table.
-
-* The ✔* symbol denotes that the authentication option is supported and links to an article that demonstrates how to use the authentication option.
-* The ✔ symbol denotes that the authentication option is supported.
-* The empty cells denote that the authentication option is not supported.
--
-|Use this authentication option with... |.NET |Java |PowerShell |Azure CLI | Python |REST |
-|:-|:-|:--|:-|:-|:-|:--|
-|End-user (without MFA**) | ✔ | ✔ | ✔ | ✔ | **[✔*](data-lake-store-end-user-authenticate-python.md#end-user-authentication-without-multi-factor-authentication)**(deprecated) | **[✔*](data-lake-store-end-user-authenticate-rest-api.md)** |
-|End-user (with MFA) | **[✔*](data-lake-store-end-user-authenticate-net-sdk.md)** | **[✔*](data-lake-store-end-user-authenticate-java-sdk.md)** | ✔ | **[✔*](data-lake-store-get-started-cli-2.0.md)** | **[✔*](data-lake-store-end-user-authenticate-python.md#end-user-authentication-with-multi-factor-authentication)** | ✔ |
-|Service-to-service (using client key) | **[✔*](data-lake-store-service-to-service-authenticate-net-sdk.md#service-to-service-authentication-with-client-secret)** | **[✔*](data-lake-store-service-to-service-authenticate-java.md)** | ✔ | ✔ | **[✔*](data-lake-store-service-to-service-authenticate-python.md#service-to-service-authentication-with-client-secret-for-account-management)** | **[✔*](data-lake-store-service-to-service-authenticate-rest-api.md)** |
-|Service-to-service (using client certificate) | **[✔*](data-lake-store-service-to-service-authenticate-net-sdk.md#service-to-service-authentication-with-certificate)** | ✔ | ✔ | ✔ | ✔ | ✔ |
-
-<i>* Click the <b>✔\*</b> symbol. It's a link.</i><br>
-<i>** MFA stands for multi-factor authentication</i>
-
-See [Authentication Scenarios for Microsoft Entra ID](../active-directory/develop/authentication-vs-authorization.md) for more information on how to use Microsoft Entra ID for authentication.
-
-## Next steps
-
-* [End-user authentication](data-lake-store-end-user-authenticate-using-active-directory.md)
-* [Service-to-service authentication](data-lake-store-service-to-service-authenticate-using-active-directory.md)
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
- Title: Built-in policy definitions for Azure Data Lake Storage Gen1
-description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 02/06/2024------
-# Azure Policy built-in definitions for Azure Data Lake Storage Gen1
-
-This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
-definitions for Azure Data Lake Storage Gen1. For additional Azure Policy built-ins for other
-services, see
-[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
-
-The name of each built-in policy definition links to the policy definition in the Azure portal. Use
-the link in the **Version** column to view the source on the
-[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-
-## Azure Data Lake Storage Gen1
--
-## Next steps
--- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).-- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
- Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1
-description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 02/06/2024------
-# Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1
-
-[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
-provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
-**compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for Azure Data Lake Storage Gen1.
-You can assign the built-ins for a **security control** individually to help make your Azure
-resources compliant with the specific standard.
---
-## Next steps
--- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
deployment-environments Concept Azure Developer Cli With Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-azure-developer-cli-with-deployment-environments.md
+
+ Title: Use Azure Developer CLI with Azure Deployment Environments
+description: Understand how ADE and `azd` work together to provision application infrastructure and deploy application code to the new infrastructure.
++++ Last updated : 02/24/2024+
+# Customer intent: As a platform engineer, I want to understand how ADE and `azd` work together to provision application infrastructure and deploy application code to the new infrastructure.
+++
+# Use Azure Developer CLI with Azure Deployment Environments
+
+In this article, you learn about Azure Developer CLI (`azd`) and how it works with Azure Deployment Environments (ADE) to provision application infrastructure and deploy application code to the new infrastructure.
+
+The Azure Developer CLI (`azd`) is an open-source command-line tool that provides developer-friendly commands that map to key stages in your workflow. You can install `azd` locally on your machine or use it in other environments.
+
+With ADE, you can create environments from an environment definition in a catalog attached to your dev center. By adding `azd`, you can deploy your application code to the new infrastructure.
+
+## How does `azd` work with ADE?
+
+`azd` works with ADE to enable you to create environments from where you're working.
+
+With ADE and `azd`, individual developers working with unique infrastructure and code that they want to upload to the cloud can create an environment from a local folder. They can use `azd` to provision an environment and deploy their code seamlessly.
+
+At scale, using ADE and `azd` together enables you to give developers a way to create app infrastructure and deploy code. Your team can create multiple ADE environments from the same `azd`-compatible environment definition, and deploy code to the cloud in a consistent way.
+
+## Understand `azd` templates
+
+The Azure Developer CLI commands are designed to work with standardized templates. Each template is a code repository that adheres to specific file and folder conventions. The templates contain the assets `azd` needs to provision an Azure Deployment Environments environment. When you run a command like `azd up`, the tool uses the template assets to execute various workflow steps, such as provisioning or deploying resources to Azure.
+
+The following is a typical `azd` template structure:
+
+```txt
+├── infra [ Contains infrastructure as code files ]
+├── .azdo [ Configures an Azure Pipeline ]
+├── .devcontainer [ For DevContainer ]
+├── .github [ Configures a GitHub workflow ]
+├── .vscode [ VS Code workspace configurations ]
+├── .azure [ Stores Azure configurations and environment variables ]
+├── src [ Contains all of the deployable app source code ]
+└── azure.yaml [ Describes the app and type of Azure resources ]
+```
+
+All `azd` templates include the following assets:
+
+- *infra folder* - Contains all of the Bicep or Terraform infrastructure as code files for the `azd` template. The infra folder isn't used when you use `azd` with ADE, because ADE provides the infrastructure as code files for the `azd` template. You don't need to include these files in your `azd` template.
+
+- *azure.yaml file* - A configuration file that defines one or more services in your project and maps them to Azure resources for deployment. For example, you might define an API service and a web front-end service, each with attributes that map them to different Azure resources for deployment.
+
+- *.azure folder* - Contains essential Azure configurations and environment variables, such as the location to deploy resources or other subscription information.
+
+- *src folder* - Contains all of the deployable app source code. Some `azd` templates only provide infrastructure assets and leave the src directory empty for you to add your own application code.
+
+Most `azd` templates also optionally include one or more of the following folders:
+
+- *.devcontainer folder* - Allows you to set up a Dev Container environment for your application. This is a common development environment approach that isn't specific to azd.
+
+- *.github folder* - Holds the CI/CD workflow files for GitHub Actions, which is the default CI/CD provider for azd.
+
+- *.azdo folder* - If you decide to use Azure Pipelines for CI/CD, define the workflow configuration files in this folder.
+
+## `azd` compatible catalogs
+
+Azure Deployment Environments catalogs consist of environment definitions: IaC templates that define the infrastructure resources that are provisioned for a deployment environment. Azure Developer CLI uses environment definitions in the catalog attached to the dev center to provision new environments.
+
+> [!NOTE]
+> Currently, Azure Developer CLI works with ARM templates stored in the Azure Deployment Environments dev center catalog.
+
+To properly support certain Azure Compute services, Azure Developer CLI requires more configuration settings in the IaC template. For example, you must tag app service hosts with specific information so that `azd` knows how to find the hosts and deploy the app to them.
+
+You can see a list of supported Azure services here: [Supported Azure compute services (host)](/azure/developer/azure-developer-cli/supported-languages-environments#supported-azure-compute-services-host).
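
For example, `azd` locates the host for each service through an `azd-service-name` tag whose value matches the service name declared in *azure.yaml*. In an ADE scenario you set this tag in the ARM template itself; the command below is only an illustration of the tag's shape, and the resource names are placeholders.

```bash
# Illustration only: the azd-service-name tag value must match the service
# name defined in azure.yaml (here, "web").
az resource tag \
  --resource-group <resource-group-name> \
  --name <app-service-name> \
  --resource-type "Microsoft.Web/sites" \
  --tags azd-service-name=web
```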
+
+## Make your ADE catalog compatible with `azd`
+
+To enable your development teams to use `azd` with ADE, you need to create an environment definition in your catalog that is compatible with `azd`. You can create a new `azd`-compatible environment definition, or you can use an existing environment definition from the Azure Deployment Environments dev center catalog. If you choose to use an existing environment definition, you need to make a few changes to make it compatible with `azd`.
+
+Changes include:
+- If you're modifying an existing `azd` template, remove the `infra` folder. ADE uses the following files to create the infrastructure:
+  - ARM template (azuredeploy.json)
+ - Configuration file that defines parameters (environment.yaml or manifest.yaml)
+- Tag resources in *azure.yaml* with specific information so that `azd` knows how to find the hosts and deploy the app to them.
+ - Learn about [Tagging resources for Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration?branch=main#tagging-resources-for-azure-deployment-environments).
+ - Learn about [Azure Developer CLI's azure.yaml schema](/azure/developer/azure-developer-cli/azd-schema).
+- Configure dev center settings like environment variables, `azd` environment configuration, `azd` project configuration, and user configuration.
+ - Learn about [Configuring dev center settings](/azure/developer/azure-developer-cli/ade-integration?branch=main#configure-dev-center-settings).
+
+To learn more about how to make your ADE environment definition compatible with `azd`, see [Make your project compatible with Azure Developer CLI](/azure/developer/azure-developer-cli/ade-integration).
+
+## Enable `azd` support in ADE
+
+To enable `azd` support with ADE, you need to set the `platform.type` to devcenter. This configuration allows `azd` to leverage new dev center components for remote environment state and provisioning, and means that the infra folder in your templates will effectively be ignored. Instead, `azd` will use one of the infrastructure templates defined in your dev center catalog for resource provisioning.
+
+To enable `azd` support, run the following command:
+
+ ```bash
+ azd config set platform.type devcenter
+ ```
+### Explore `azd` commands
+
+When the dev center feature is enabled, the default behavior of some common azd commands changes to work with these remote environments. For more information, see [Work with Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration?branch=main#work-with-azure-deployment-evironments).
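
As a rough sketch of the developer flow once dev center mode is enabled, the commands below initialize a project from a template, provision an ADE environment, deploy the application code, and later tear the environment down. The prompts you see (project, environment definition, environment type) depend on your dev center and catalog.

```bash
# Initialize a project, then provision the ADE environment and deploy code to it.
azd init
azd up

# Delete the environment when you're finished with it.
azd down
```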
++
+## Related content
+
+- [Add and configure an environment definition](./configure-environment-definition.md)
+- [Create an environment by using the Azure Developer CLI](./how-to-create-environment-with-azure-developer.md)
+- [Make your project compatible with Azure Developer CLI](/azure/developer/azure-developer-cli/make-azd-compatible?pivots=azd-create)
deployment-environments How To Create Environment With Azure Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-environment-with-azure-developer.md
Last updated 01/26/2023
-# Customer intent: As a developer, I want to be able to create an enviroment by using AZD so that I can create my coding environment.
+# Customer intent: As a developer, I want to be able to create an environment by using AZD so that I can create my coding environment.
In this article, you install the Azure Developer CLI (AZD), create a new deploym
Azure Developer CLI (AZD) is an open-source tool that accelerates the time it takes for you to get your application from local development environment to Azure. AZD provides best practice, developer-friendly commands that map to key stages in your workflow, whether you're working in the terminal, your editor or integrated development environment (IDE), or CI/CD (continuous integration/continuous deployment).
+<!-- To learn how to set up AZD to work with Azure Deployment Environments, see [Use Azure Developer CLI with Azure Deployment Environments](/azure/deployment-environments/concept-azure-developer-cli-with-deployment-environments). -->
+ ## Prerequisites You should:
You should:
- [Quickstart: Create and configure an Azure Deployment Environments project](quickstart-create-and-configure-projects.md) - A catalog attached to your dev center.
-## AZD compatible catalogs
-
-Azure Deployment Environments catalogs consist of environment definitions: IaC templates that define the resources that are provisioned for a deployment environment. Azure Developer CLI uses environment definitions in the attached catalog to provision new environments.
-
-> [!NOTE]
-> Currently, Azure Developer CLI works with ARM templates stored in the Azure Deployment Environments dev center catalog.
-
-To properly support certain Azure Compute services, Azure Developer CLI requires more configuration settings in the IaC template. For example, you must tag app service hosts with specific information so that AZD knows how to find the hosts and deploy the app to them.
-
-You can see a list of supported Azure services here: [Supported Azure compute services (host)](/azure/developer/azure-developer-cli/supported-languages-environments).
-
-To get help with AZD compatibility, see [Make your project compatible with Azure Developer CLI](/azure/developer/azure-developer-cli/make-azd-compatible?pivots=azd-create).
- ## Prepare to work with AZD When you work with AZD for the first time, there are some one-time setup tasks you need to complete. These tasks include installing the Azure Developer CLI, signing in to your Azure account, and enabling AZD support for Azure Deployment Environments.
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
Sign in to the Azure portal with this [Preview link](https://aka.ms/expressroute
**Standard Resiliency** - This option provides a single ExpressRoute circuit with local redundancy at a single ExpressRoute location. > [!NOTE]
- > Doesn't provide protection against location wide outages. This option is recommended for non-critical and non-production workloads.
+ > Doesn't provide protection against location-wide outages. This option is recommended for development/test environments and non-production workloads.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of standard resiliency for an ExpressRoute connection.":::
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
This article helps you create a connection to link a virtual network (virtual ne
**Standard resiliency** - This option provides a single redundant connection from the virtual network gateway to a single ExpressRoute circuit. > [!NOTE]
- > Doesn't provide protection against location wide outages. This option is recommended for non-critical and non-production workloads.
+ > Doesn't provide protection against location-wide outages. This option is recommended for development/testing environments and non-production workloads.
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of a virtual network gateway connected to a single ExpressRoute circuit.":::
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-introduction.md
Microsoft uses BGP, an industry standard dynamic routing protocol, to exchange r
Each ExpressRoute circuit consists of two connections to two Microsoft Enterprise edge routers (MSEEs) at an [ExpressRoute Location](./expressroute-locations.md#expressroute-locations) from the connectivity provider or your network edge. Microsoft requires dual BGP connections from the connectivity provider or your network edge - one to each MSEE. You might choose not to deploy redundant devices/Ethernet circuits at your end. However, connectivity providers use redundant devices to ensure that your connections are handed off to Microsoft in a redundant manner.
+### Resiliency
+
+Microsoft offers multiple ExpressRoute peering locations in many geopolitical regions. To ensure maximum resiliency, Microsoft recommends that you connect to two ExpressRoute circuits in two peering locations. For non-production development and test workloads, you can achieve standard resiliency by connecting to a single ExpressRoute circuit that offers redundant connections within a single peering location. The Azure portal provides a guided experience to help you create a resilient ExpressRoute configuration. For Azure PowerShell, CLI, ARM template, Terraform, and Bicep, maximum resiliency can be achieved by creating a second ExpressRoute circuit in a different ExpressRoute location and establishing a connection to it. For more information, see [Create maximum resiliency with ExpressRoute](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview).
++ ### Connectivity to Microsoft cloud services ExpressRoute connections enable access to the following
expressroute Provider Rate Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/provider-rate-limit.md
Previously updated : 01/12/2024 Last updated : 03/01/2024
This article discusses how rate limiting works for ExpressRoute circuits created
## How does rate limiting work over an ExpressRoute circuit?
-An ExpressRoute circuit consists of two links that connects the Customer/Provider edge to the Microsoft Enterprise Edge (MSEE) routers. If your circuit bandwidth is 1 Gbps and you distribute your traffic evenly across both links, you can achieve a maximum throughput of 2 Gbps (two times 1 Gbps). Rate limiting restricts your throughput to the configured bandwidth if you exceed it on either link. The ExpressRoute circuit SLA is only guaranteed for the bandwidth that you configured. For example, if you purchased a 1-Gbps circuit, your SLA is for a maximum throughput of 1 Gbps.
+An ExpressRoute circuit consists of two links that connect the Customer or Provider edge to the Microsoft Enterprise Edge (MSEE) routers. With a circuit bandwidth of 1 Gbps and traffic distributed evenly across both links, a maximum throughput of 2 Gbps (twice 1 Gbps) can be achieved. However, rate limiting restricts your throughput to the configured bandwidth if you exceed it on either link. The excess 1 Gbps in this example serves as redundancy to prevent service disruptions during link or device maintenance periods.
:::image type="content" source="./media/provider-rate-limit/circuit.png" alt-text="Diagram of rate limiting on an ExpressRoute circuit over provider ports.":::
governance Assign Policy Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-azurecli.md
az logout
## Next steps
-In this quickstart, you assigned a policy definition to identify non-compliant resources in your
-Azure environment.
+In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more how to assign policies that validate if new resources are compliant, continue to the
-tutorial.
+To learn more about how to assign policies that validate resource compliance, continue to the tutorial.
> [!div class="nextstepaction"] > [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
az logout
## Next steps
-In this quickstart, you assigned a built-in policy definition to a resource group scope and reviewed its compliance state. The policy definition audits if the virtual machines in the resource group are compliant and identifies resources that aren't compliant.
+In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more about assigning policies to validate that new resources are compliant, continue to the
-tutorial.
+To learn more about how to assign policies that validate resource compliance, continue to the tutorial.
> [!div class="nextstepaction"] > [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
governance Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-portal.md
Title: "Quickstart: New policy assignment with portal"
-description: In this quickstart, you use Azure portal to create an Azure Policy assignment to identify non-compliant resources.
Previously updated : 08/17/2021
+ Title: "Quickstart: Create policy assignment using Azure portal"
+description: In this quickstart, you create an Azure Policy assignment to identify non-compliant resources using Azure portal.
Last updated : 02/29/2024
-# Quickstart: Create a policy assignment to identify non-compliant resources
-The first step in understanding compliance in Azure is to identify the status of your resources.
-This quickstart steps you through the process of creating a policy assignment to identify virtual
-machines that aren't using managed disks.
+# Quickstart: Create a policy assignment to identify non-compliant resources using Azure portal
-At the end of this process, you'll successfully identify virtual machines that aren't using managed
-disks. They're _non-compliant_ with the policy assignment.
+The first step in understanding compliance in Azure is to identify the status of your resources. In this quickstart, you create a policy assignment to identify non-compliant resources using Azure portal. The policy is assigned to a resource group and audits virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines.
## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- A resource group with at least one virtual machine that doesn't use managed disks.
## Create a policy assignment
-In this quickstart, you create a policy assignment and assign the _Audit VMs that do not use managed
-disks_ policy definition.
+In this quickstart, you create a policy assignment with a built-in policy definition, [Audit VMs that do not use managed disks](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json).
-1. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching
- for and selecting **Policy**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for _policy_ and select it from the list.
- :::image type="content" source="./media/assign-policy-portal/search-policy.png" alt-text="Screenshot of searching for Policy in All Services." border="false":::
+ :::image type="content" source="./media/assign-policy-portal/search-policy.png" alt-text="Screenshot of the Azure portal to search for policy.":::
-1. Select **Assignments** on the left side of the Azure Policy page. An assignment is a policy that
- has been assigned to take place within a specific scope.
+1. Select **Assignments** on the **Policy** pane.
- :::image type="content" source="./media/assign-policy-portal/select-assignments.png" alt-text="Screenshot of selecting the Assignments page from Policy Overview page." border="false":::
+ :::image type="content" source="./media/assign-policy-portal/select-assignments.png" alt-text="Screenshot of the Assignments pane that highlights the option to Assign policy.":::
-1. Select **Assign Policy** from the top of the **Policy - Assignments** page.
+1. Select **Assign Policy** from the **Policy Assignments** pane.
- :::image type="content" source="./media/assign-policy-portal/select-assign-policy.png" alt-text="Screenshot of selecting 'Assign policy' from Assignments page." border="false":::
+1. On the **Basics** tab of the **Assign Policy** pane, configure the following options:
-1. On the **Assign Policy** page, set the **Scope** by selecting the ellipsis and then selecting
- either a management group or subscription. Optionally, select a resource group. A scope
- determines what resources or grouping of resources the policy assignment gets enforced on. Then
- use the **Select** button at the bottom of the **Scope** page.
+ | Field | Action |
+ | - | - |
+ | **Scope** | Use the ellipsis (`...`) and then select a subscription and a resource group. Then choose **Select** to apply the scope. |
+ | **Exclusions** | Optional and isn't used in this example. |
+ | **Policy definition** | Select the ellipsis to open the list of available definitions. |
+ | **Available Definitions** | Search the policy definitions list for _Audit VMs that do not use managed disks_ definition, select the policy, and select **Add**. |
+ | **Assignment name** | By default uses the name of the selected policy. You can change it but for this example, use the default name. |
+ | **Description** | Optional to provide details about this policy assignment. |
+ | **Policy enforcement** | Defaults to _Enabled_. For more information, go to [enforcement mode](./concepts/assignment-structure.md#enforcement-mode). |
+ | **Assigned by** | Defaults to who is signed in to Azure. This field is optional and custom values can be entered. |
- This example uses the **Contoso** subscription. Your subscription will differ.
+ :::image type="content" source="./media/assign-policy-portal/select-available-definition.png" alt-text="Screenshot of filtering the available definitions.":::
-1. Resources can be excluded based on the **Scope**. **Exclusions** start at one level lower than
- the level of the **Scope**. **Exclusions** are optional, so leave it blank for now.
+1. Select **Next** to view each tab for **Advanced**, **Parameters**, and **Remediation**. No changes are needed for this example.
-1. Select the **Policy definition** ellipsis to open the list of available definitions. Azure Policy
- comes with built-in policy definitions you can use. Many are available, such as:
+ | Tab name | Options |
+ | - | - |
+ | **Advanced** | Includes options for [resource selectors](./concepts/assignment-structure.md#resource-selectors-preview) and [overrides](./concepts/assignment-structure.md#overrides-preview). |
+ | **Parameters** | If the policy definition you selected on the **Basics** tab included parameters, they're configured on **Parameters** tab. This example doesn't use parameters. |
+ | **Remediation** | You can create a managed identity. For this example, **Create a Managed Identity** is unchecked. <br><br> This box _must_ be checked when a policy or initiative includes a policy with either the [deployIfNotExists](./concepts/effects.md#deployifnotexists) or [modify](./concepts/effects.md#modify) effect. For more information, go to [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [how remediation access control works](./how-to/remediate-resources.md#how-remediation-access-control-works). |
- - Enforce tag and its value
- - Apply tag and its value
- - Inherit a tag from the resource group if missing
+1. Select **Next** and on the **Non-compliance messages** tab create a **Non-compliance message** like _Virtual machines should use managed disks_.
- For a partial list of available built-in policies, see [Azure Policy samples](./samples/index.md).
+ This custom message is displayed when a resource is denied or for non-compliant resources during regular evaluation.
-1. Search through the policy definitions list to find the _Audit VMs that do not use managed disks_
- definition. Select that policy and then use the **Select** button.
+1. Select **Next** and on the **Review + create** tab, review the policy assignment details.
- :::image type="content" source="./media/assign-policy-portal/select-available-definition.png" alt-text="Screenshot of filtering the available definitions." border="false":::
-
-1. The **Assignment name** is automatically populated with the policy name you selected, but you can
- change it. For this example, leave _Audit VMs that do not use managed disks_. You can also add an
- optional **Description**. The description provides details about this policy assignment.
- **Assigned by** will automatically fill based on who is logged in. This field is optional, so
- custom values can be entered.
-
-1. Leave policy enforcement _Enabled_. For more information, see
- [Policy assignment - enforcement mode](./concepts/assignment-structure.md#enforcement-mode).
-
-1. Select **Next** at the bottom of the page or the **Parameters** tab at the top of the page to
- move to the next segment of the assignment wizard.
-
-1. If the policy definition selected on the **Basics** tab included parameters, they are configured
- on this tab. Since the _Audit VMs that do not use managed disks_ has no parameters, select
- **Next** at the bottom of the page or the **Remediation** tab at the top of the page to move to
- the next segment of the assignment wizard.
-
-1. Leave **Create a Managed Identity** unchecked. This box _must_ be checked when the policy or
- initiative includes a policy with either the
- [deployIfNotExists](./concepts/effects.md#deployifnotexists) or
- [modify](./concepts/effects.md#modify) effect. As the policy used for this quickstart doesn't,
- leave it blank. For more information, see
- [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and
- [how remediation access control works](./how-to/remediate-resources.md#how-remediation-access-control-works).
-
-1. Select **Next** at the bottom of the page or the **Non-compliance messages** tab at the top of
- the page to move to the next segment of the assignment wizard.
-
-1. Set the **Non-compliance message** to _Virtual machines should use a managed disk_. This custom
- message is displayed when a resource is denied or for non-compliant resources during regular
- evaluation.
-
-1. Select **Next** at the bottom of the page or the **Review + Create** tab at the top of the page
- to move to the next segment of the assignment wizard.
-
-1. Review the selected options, then select **Create** at the bottom of the page.
-
-You're now ready to identify non-compliant resources to understand the compliance state of your
-environment.
+1. Select **Create** to create the policy assignment.
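
If you later want to script the same assignment instead of using the portal, a hedged Azure CLI sketch might look like the following. It looks up the built-in definition by its display name rather than hard-coding an ID, and the resource group name is a placeholder.

```bash
# Find the built-in definition by display name, then assign it to a resource group.
definition=$(az policy definition list \
  --query "[?displayName=='Audit VMs that do not use managed disks'].name | [0]" \
  --output tsv)

az policy assignment create \
  --name "audit-vm-managed-disks" \
  --display-name "Audit VMs that do not use managed disks" \
  --policy "$definition" \
  --resource-group <resource-group-name>
```
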
## Identify non-compliant resources
-Select **Compliance** in the left side of the page. Then locate the _Audit VMs that do not use
-managed disks_ policy assignment you created.
+On the **Policy** pane, select **Compliance** and locate the _Audit VMs that do not use managed disks_ policy assignment. The compliance state for a new policy assignment takes a few minutes to become active and provide results about the policy's state.
-If there are any existing resources that aren't compliant with this new assignment, they appear
-under **Non-compliant resources**.
+The policy assignment shows resources that aren't compliant with a **Compliance state** of **Non-compliant**. To get more details, select the policy assignment name to view the **Resource Compliance**.
-When a condition is evaluated against your existing resources and found true, then those resources
-are marked as non-compliant with the policy. The following table shows how different policy effects
-work with the condition evaluation for the resulting compliance state. Although you don't see the
-evaluation logic in the Azure portal, the compliance state results are shown. The compliance state
-result is either compliant or non-compliant.
+When a condition is evaluated against your existing resources and found true, then those resources are marked as non-compliant with the policy. The following table shows how different policy effects work with the condition evaluation for the resulting compliance state. Although you don't see the evaluation logic in the Azure portal, the compliance state results are shown. The compliance state result is either compliant or non-compliant.
| Resource State | Effect | Policy Evaluation | Compliance State | | | | | |
result is either compliant or non-compliant.
| Exists | Deny, Audit, Append, Modify, DeployIfNotExist, AuditIfNotExist | True | Non-Compliant | | Exists | Deny, Audit, Append, Modify, DeployIfNotExist, AuditIfNotExist | False | Compliant |
-> [!NOTE]
-> The DeployIfNotExist and AuditIfNotExist effects require the IF statement to be TRUE and the
-> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers
-> evaluation of the existence condition for the related resources.
+The `DeployIfNotExist` and `AuditIfNotExist` effects require the `IF` statement to be `TRUE` and the existence condition to be `FALSE` to be non-compliant. When `TRUE`, the `IF` condition triggers evaluation of the existence condition for the related resources.
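
You can also query compliance records from the command line. The following sketch lists only the non-compliant policy states for a resource group; the resource group name is a placeholder.

```bash
# List non-compliant policy states in the resource group.
az policy state list \
  --resource-group <resource-group-name> \
  --filter "complianceState eq 'NonCompliant'" \
  --output table
```
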
## Clean up resources
-To remove the assignment created, follow these steps:
+You can delete a policy assignment from **Compliance** or from **Assignments**.
+
+To remove the policy assignment created in this article, follow these steps:
-1. Select **Compliance** (or **Assignments**) in the left side of the Azure Policy page and locate
- the _Audit VMs that do not use managed disks_ policy assignment you created.
+1. On the **Policy** pane, select **Compliance** and locate the _Audit VMs that do not use managed disks_ policy assignment.
-1. Right-click the _Audit VMs that do not use managed disks_ policy assignment and select **Delete
- assignment**.
+1. Select the policy assignment's ellipsis and select **Delete assignment**.
- :::image type="content" source="./media/assign-policy-portal/delete-assignment.png" alt-text="Screenshot of using the context menu to delete an assignment from the Compliance page." border="false":::
+ :::image type="content" source="./media/assign-policy-portal/delete-assignment.png" alt-text="Screenshot of the Compliance pane that highlights the menu to delete a policy assignment." lightbox="./media/assign-policy-portal/delete-assignment.png":::
## Next steps
-In this quickstart, you assigned a policy definition to a scope and evaluated its compliance report.
-The policy definition validates that all the resources in the scope are compliant and identifies
-which ones aren't.
+In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more about assigning policies to validate that new resources are compliant, continue to the
-tutorial for:
+To learn more about how to assign policies that validate resource compliance, continue to the tutorial.
> [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
governance Assign Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-powershell.md
Disconnect-AzAccount
## Next steps
-In this quickstart, you assigned a policy definition to identify non-compliant resources in your
-Azure environment.
+In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more how to assign policies that validate if new resources are compliant, continue to the
-tutorial.
+To learn more about how to assign policies that validate resource compliance, continue to the tutorial.
> [!div class="nextstepaction"] > [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
governance Assign Policy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-template.md
az logout
## Next steps
-In this quickstart, you assigned a built-in policy definition to a resource group scope and reviewed its compliance state. The policy definition audits if the virtual machines in the resource group are compliant and identifies resources that aren't compliant.
+In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more about assigning policies to validate that new resources are compliant, continue to the
-tutorial.
+To learn more about how to assign policies that validate resource compliance, continue to the tutorial.
> [!div class="nextstepaction"] > [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
healthcare-apis Fhir Versioning Policy And History Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-versioning-policy-and-history-management.md
When configuring resource level configuration, you'll be able to select the FHIR
**Make sure** to select **Save** after you've completed your versioning policy configuration. ## History management
History in FHIR is important for end users to see how a resource has changed ove
Changing the versioning policy either at a system level or resource level won't remove the existing history for any resources in your FHIR service. If you're looking to reduce the history data size in your FHIR service, you must use the [$purge-history](purge-history.md) operation.
+> [!NOTE]
+> The query parameters `_summary=count` and `_count=0` can be added to the `_history` endpoint to get a count of all versioned resources. This count includes soft deleted resources.
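For example, a minimal sketch of such a count query, assuming `$fhirUrl` holds your FHIR service URL, `$token` is a valid access token, and the count is taken at the `Patient` type level:

```powershell
# Count all versions (including soft deleted) of Patient resources.
$headers = @{ Authorization = "Bearer $token" }
$bundle  = Invoke-RestMethod -Uri "$fhirUrl/Patient/_history?_summary=count&_count=0" -Headers $headers
$bundle.total  # total number of versioned Patient resources
```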
+ ## Next steps In this article, you learned how to purge the history for resources in the FHIR service. For more information about how to disable history and some concepts about history management, see
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
The `import` operation supports two modes: initial mode and incremental mode. Ea
- Optimized for loading data into the FHIR server periodically and doesn't block writes through the API. -- Allows you to load `lastUpdated` and `versionId` from resource metadata if present in the resource JSON.
+- Allows you to load `lastUpdated` and `versionId` from resource metadata if present in the resource JSON.
+
+- Allows you to load resources in non-sequential order of versions.
* If import files don't have the `version` and `lastUpdated` field values specified, there's no guarantee of importing resources in FHIR service.
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
# Overview of FHIR search
-The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines an API for querying resources in a FHIR server database. This article will guide you through some key aspects of querying data in FHIR. For complete details about the FHIR search API, refer to the HL7 [FHIR Search](https://www.hl7.org/fhir/search.html) documentation.
+The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines an API for querying resources in a FHIR server database. This article guides you through some key aspects of querying data in FHIR. For complete details about the FHIR search API, refer to the HL7 [FHIR Search](https://www.hl7.org/fhir/search.html) documentation.
Throughout this article, we'll demonstrate FHIR search syntax in example API calls with the `{{FHIR_URL}}` placeholder to represent the FHIR server URL. In the case of the FHIR service in Azure Health Data Services, this URL would be `https://<WORKSPACE-NAME>-<FHIR-SERVICE-NAME>.fhir.azurehealthcareapis.com`.
In the following sections, we'll cover the various aspects of querying resources
## Search parameters
-When you do a search in FHIR, you are searching the database for resources that match certain search criteria. The FHIR API specifies a rich set of search parameters for fine-tuning search criteria. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. In a FHIR search API call, if a positive match is found between the request's search parameters and the corresponding element values stored in a resource instance, then the FHIR server returns a bundle containing the resource instance(s) whose elements satisfied the search criteria.
+When you do a search in FHIR, you're searching the database for resources that match certain search criteria. The FHIR API specifies a rich set of search parameters for fine-tuning search criteria. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. In a FHIR search API call, if a positive match is found between the request's search parameters and the corresponding element values stored in a resource instance, then the FHIR server returns a bundle containing the resource instance(s) whose elements satisfied the search criteria.
For each search parameter, the FHIR specification defines the [data type(s)](https://www.hl7.org/fhir/search.html#ptypes) that can be used. Support in the FHIR service for the various data types is outlined below.
There are [common search parameters](https://www.hl7.org/fhir/search.html#all) t
### Resource-specific parameters
-The FHIR service in Azure Health Data Services supports almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined in the FHIR specification. Search parameters that are not supported are listed in the links below:
+The FHIR service in Azure Health Data Services supports almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined in the FHIR specification. Search parameters that aren't supported are listed in the links below:
* [STU3 Unsupported Search Parameters](https://github.com/microsoft/fhir-server/blob/main/src/Microsoft.Health.Fhir.Core/Data/Stu3/unsupported-search-parameters.json)
GET {{FHIR_URL}}/metadata
To view the supported search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` for the resource-specific search parameters and `CapabilityStatement.rest.searchParam` for search parameters that apply to all resources. > [!NOTE]
-> The FHIR service in Azure Health Data Services does not automatically index search parameters that are not defined in the base FHIR specification. However, the FHIR service does support [custom search parameters](how-to-do-custom-search.md).
+> The FHIR service in Azure Health Data Services does not automatically index search parameters that aren't defined in the base FHIR specification. However, the FHIR service does support [custom search parameters](how-to-do-custom-search.md).
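As a quick way to inspect the capability statement described above, here's a minimal sketch that assumes `$fhirUrl` holds your FHIR service URL:

```powershell
# Fetch the capability statement and list the search parameters supported for Patient.
$capability = Invoke-RestMethod -Uri "$fhirUrl/metadata"
$patient    = $capability.rest.resource | Where-Object { $_.type -eq 'Patient' }
$patient.searchParam | Select-Object name, type
```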
### Composite search parameters Composite searches in FHIR allow you to search against element pairs as logically connected units. For example, if you were searching for observations where the height of the patient was over 60 inches, you would want to make sure that a single property of the observation contained the height code *and* a value greater than 60 inches (the value should only pertain to height). You wouldn't want to return a positive match on an observation with the height code *and* an arm to arm length over 60 inches, for example. Composite search parameters prevent this problem by searching against pre-specified pairs of elements whose values must both meet the search criteria for a positive match to occur.
For more information, see the HL7 [Composite Search Parameters](https://www.hl7.
| `:above` (token) | No | No | | `:not-in` (token) | No | No |
-For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) before the parameter value to refine the search criteria (e.g. `Patient?_lastUpdated=gt2022-08-01` where the prefix `gt` means "greater than"). The FHIR service in Azure Health Data Services supports all prefixes defined in the FHIR standard.
+For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) before the parameter value to refine the search criteria (for example, `Patient?_lastUpdated=gt2022-08-01` where the prefix `gt` means "greater than"). The FHIR service in Azure Health Data Services supports all prefixes defined in the FHIR standard.
### Search result parameters FHIR specifies a set of search result parameters to help manage the information returned from a search. For detailed information on how to use search result parameters in FHIR, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website. Below is a list of FHIR search result parameters and their support in the FHIR service.
FHIR specifies a set of search result parameters to help manage the information
| **Search result parameters** | **FHIR service in Azure Health Data Services** | **Azure API for FHIR** | **Comment**| | - | -- | - | | | `_elements` | Yes | Yes |
-| `_count` | Yes | Yes | `_count` is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be included in the bundle. |
+| `_count` | Yes | Yes | `_count` is limited to 1000 resources. If it's set higher than 1000, only 1000 are returned and a warning is included in the bundle. |
| `_include` | Yes | Yes | Items retrieved with `_include` are limited to 100. `_include` on PaaS and OSS on Azure Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). | | `_revinclude` | Yes | Yes |Items retrieved with `_revinclude` are limited to 100. `_revinclude` on PaaS and OSS on Azure Cosmos DB doesn't support `:iterate` [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319). | | `_summary` | Yes | Yes |
FHIR specifies a set of search result parameters to help manage the information
| `_containedType` | No | No | | `_score` | No | No |
-> [!NOTE]
-> By default, `_sort` arranges records in ascending order. You can also use the prefix `-` to sort in descending order. The FHIR service only allows you to sort on a single field at a time.
+Note:
+1. By default, `_sort` arranges records in ascending order. You can also use the prefix `-` to sort in descending order. The FHIR service only allows you to sort on a single field at a time.
+1. The FHIR service supports wildcard searches with `_revinclude`. Adding the "*.*" query parameter to a `_revinclude` query directs the FHIR service to reference all the resources mapped to the source resource.
-By default, the FHIR service in Azure Health Data Services is set to lenient handling. This means that the server will ignore any unknown or unsupported parameters. If you want to use strict handling, you can include the `Prefer` header and set `handling=strict`.
+By default, the FHIR service in Azure Health Data Services is set to lenient handling. This means that the server ignores any unknown or unsupported parameters. If you want to use strict handling, you can include the `Prefer` header and set `handling=strict`.
## Chained & reverse chained searching
Similarly, you can do a reverse chained search with the `_has` parameter. This a
## Pagination
-As mentioned above, the results from a FHIR search will be available in paginated form at a link provided in the `searchset` bundle. By default, the FHIR service will display 10 search results per page, but this can be increased (or decreased) by setting the `_count` parameter. If there are more matches than fit on one page, the bundle will include a `next` link. Repeatedly fetching from the `next` link will yield the subsequent pages of results. Note that the `_count` parameter value cannot exceed 1000.
+As mentioned above, the results from a FHIR search are available in paginated form at a link provided in the `searchset` bundle. By default, the FHIR service displays 10 search results per page, but this can be increased (or decreased) by setting the `_count` parameter. If there are more matches than fit on one page, the bundle includes a `next` link. Repeatedly fetching from the `next` link yields the subsequent pages of results. Note that the `_count` parameter value can't exceed 1000.
Currently, the FHIR service in Azure Health Data Services only supports the `next` link and doesn't support `first`, `last`, or `previous` links in bundles returned from a search.
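A minimal sketch of paging through results, assuming `$fhirUrl` holds your FHIR service URL and `$headers` contains a bearer token:

```powershell
# Request up to 100 Patients per page, then follow the 'next' link if one is returned.
$bundle   = Invoke-RestMethod -Uri "$fhirUrl/Patient?_count=100" -Headers $headers
$nextLink = ($bundle.link | Where-Object { $_.relation -eq 'next' }).url
if ($nextLink) {
    $nextPage = Invoke-RestMethod -Uri $nextLink -Headers $headers
}
```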
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
This article provides details about the features and enhancements made to Azure
> [!IMPORTANT] > Azure Health Data Services is generally available. For more information, see the [Service Level Agreement (SLA) for Azure Health Data Services](https://azure.microsoft.com/support/legal/sla/health-data-services/v1_1/).
+## February 2024
+
+### FHIR service
+
+**Import operation honors ingestion of non-sequential resource versions**
+
+Prior to this change, incremental mode in the import operation assumed resource versions to be sequential integers. After this fix, versions can be ingested in non-sequential order. For more information, see [Import operation supports non-sequential version ordering for resources](https://github.com/microsoft/fhir-server/pull/3685).
+
+**Revinclude search can reference all resources with wild character**
+
+The FHIR service supports wildcard searches with `_revinclude`. Adding the "*.*" query parameter to a `_revinclude` query directs the FHIR service to reference all the resources mapped to the source resource.
+
+**Improve FHIR queries response time with performance enhancements**
+
+To improve performance, the missing modifier can be specified for a search parameter that is used in sort. For more information, see [Improve performance using the missing modifier](https://github.com/microsoft/fhir-server/pull/3655).
+
+**Enables counting all versions (historical and soft deleted) of resources**
+
+The query parameters `_summary=count` and `_count=0` can be added to the `_history` endpoint to get a count of all versioned resources. This count includes soft deleted resources.
+ ## January 2024 ### DICOM service
lab-services Concept Nested Virtualization Template Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-nested-virtualization-template-vm.md
Before setting up a lab with nested virtualization, here are a few things to tak
- Choose a size that provides good performance for both the host (lab VM) and guest VMs (VMs inside the lab VM). Make sure the size you choose can run the host VM and any Hyper-V machines at the same time. -- The host VM requires extra configuration to let the guest machines have internet connectivity.
+- If using Windows Server, the host VM requires extra configuration to let the guest machines have internet connectivity.
- Guest VMs don't have access to Azure resources, such as DNS servers, on the Azure virtual network. -- Hyper-V guest VMs are licensed as independent machines. For information about licensing for Microsoft operation systems and products, see [Microsoft Licensing](https://www.microsoft.com/licensing/default). Check licensing agreements for any other software you use, before installing it on the template VM or guest VMs.
+- Hyper-V guest VMs are licensed as independent machines. For information about licensing for Microsoft operation systems and products, see [Microsoft Licensing](https://www.microsoft.com/licensing/default). Check licensing agreements for any other software you use, before installing it on a lab VM or guest VMs.
- Virtualization applications other than Hyper-V [*aren't* supported for nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#3rd-party-virtualization-apps). This includes any software that requires hardware virtualization extensions.
You can enable nested virtualization and create nested Hyper-V VMs on the templa
To enable nested virtualization for a lab: 1. Connect to the template VM by using a remote desktop client
+1. Enable Hyper-V feature and tools on the template VM.
+1. If using Windows Server, create a Network Address Translation (NAT) network to allow the VMs inside the template VM to communicate with each other (see the sketch after these steps).
-1. Enable nested virtualization on the template VM operating system.
+ > [!NOTE]
+ > The NAT network created on the Lab Services VM will allow a Hyper-V VM to access the internet and other Hyper-V VMs on the same Lab Services VM. The Hyper-V VM won't be able to access Azure resources, such as DNS servers, on an Azure virtual network.
- - Enable the Hyper-V role: the Hyper-V role must be enabled for the creation and running of VMs inside the template VM.
- - Enable DHCP (optional): when the template VM has the DHCP role enabled, the VMs inside the template VM get an IP address automatically assigned to them.
- - Create a NAT network for the nested VMs: set up a Network Address Translation (NAT) network to allow the VMs inside the template VM to have internet access and communicate with each other.
-
- >[!NOTE]
- >The NAT network created on the Lab Services VM will allow a Hyper-V VM to access the internet and other Hyper-V VMs on the same Lab Services VM. The Hyper-V VM won't be able to access Azure resources, such as DNS servers, on an Azure virtual network.
-
-1. Use Hyper-V manager to create the nested virtual machines inside the template VM.
-
-> [!NOTE]
-> Make sure to select the option to create a template virtual machine when you create a lab that requires nested virtualization.
+1. Use Hyper-V manager to create the nested virtual machines inside the template VM.
+1. Verify nested virtual machines have internet access.
Follow these steps to [enable nested virtualization on a template VM](./how-to-enable-nested-virtualization-template-vm-using-script.md).
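For the Windows Server NAT step called out above, the following is a minimal PowerShell sketch. The switch name, NAT name, and 192.168.0.0/24 range are examples only; adjust them to your environment.

```powershell
# Create an internal switch, assign the host an IP on it, and create the NAT network.
New-VMSwitch -Name 'LabServicesSwitch' -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceAlias 'vEthernet (LabServicesSwitch)'
New-NetNat -Name 'LabServicesNat' -InternalIPInterfaceAddressPrefix '192.168.0.0/24'
```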
-## Connect to a nested VM in another lab VM
-
-You can connect to a lab VM from another lab VM or a nested VM without any extra configuration. However, to connect to a nested VM that is hosted in another lab VM, requires adding a static mapping to the NAT instance with the [**Add-NetNatStaticMapping**](/powershell/module/netnat/add-netnatstaticmapping) PowerShell cmdlet.
-
-> [!NOTE]
-> The ping command to test connectivity from or to a nested VM doesn't work.
-
-> [!NOTE]
-> The static mapping only works when you use private IP addresses. The VM that the lab user is connecting from must be a lab VM, or the VM has to be on the same network if using advanced networking.
-
-### Example scenarios
-
-Consider the following sample lab setup:
--- Lab VM 1 (Windows Server 2022, IP 10.0.0.8)
- - Nested VM 1-1 (Ubuntu 20.04, IP 192.168.0.102)
- - Nested VM 1-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
--- Lab VM 2 (Windows Server 2022, IP 10.0.0.9)
- - Nested VM 2-1 (Ubuntu 20.04, IP 192.168.0.102)
- - Nested VM 2-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
-
-To connect with SSH from lab VM 2 to nested lab VM 1-1:
-
-1. On lab VM 1, add a static mapping:
-
- ```powershell
- Add-NetNatStaticMapping -NatName "LabServicesNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.0.102 -InternalPort 22 -ExternalPort 23
- ```
-
-1. On lab VM 2, connect using SSH:
-
- ```bash
- ssh user1@10.0.0.8 -p 23
- ```
-
-To connect with RDP from lab VM 2, or its nested VMs, to nested lab VM 1-2:
-
-1. On lab VM 1, add a static mapping:
-
- ```powershell
- Add-NetNatStaticMapping -NatName "LabServicesNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.0.103 -InternalPort 3389 -ExternalPort 3390
- ```
-
-1. On lab VM 2, or its nested VMs, connect using RDP to `10.0.0.8:3390`
-
- > [!IMPORTANT]
- > Include `~\` in front of the user name. For example, `~\Administrator` or `~\user1`.
- ## Recommendations ### Non-admin user
lab-services How To Enable Nested Virtualization Template Vm Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md
Previously updated : 06/27/2023 Last updated : 02/12/2024 # Enable nested virtualization in Azure Lab Services
-Nested virtualization enables you to create a lab in Azure Lab Services that contains a multi-VM environment. To avoid that lab users need to enable nested virtualization on their lab VM and install the nested VMs inside it, you can prepare a lab template. When you publish the lab, each lab user has a lab VM that already contains the nested virtual machines.
+Nested virtualization enables you to create a lab in Azure Lab Services that contains a multi-VM environment. You can enable nested virtualization on the template VM and preconfigure the nested VMs on it. When you publish the lab, each lab user receives a lab VM that already contains the nested VMs.
For concepts, considerations, and recommendations about nested virtualization, see [nested virtualization in Azure Lab Services](./concept-nested-virtualization-template-vm.md).
For concepts, considerations, and recommendations about nested virtualization, s
## Enable nested virtualization
-To enable nested virtualization on the template VM, you first connect to the VM by using a remote desktop (RDP) client. You can then apply the configuration changes by either running a PowerShell script or using Windows tools.
+> [!IMPORTANT]
+> It's recommended to use nested virtualization with Windows 11 to take advantage of the 'Default Switch' created when you install Hyper-V on a Windows client OS. Use nested virtualization on a Windows Server OS when you need more control over the network settings.
+
+To enable nested virtualization on the template VM, you first [connect to the template virtual machine by using an RDP (Remote Desktop Protocol) client](./how-to-create-manage-template.md#update-a-template-vm). You can then apply the configuration changes by either running a PowerShell script or using Windows tools.
# [PowerShell](#tab/powershell)
-You can use a PowerShell script to set up nested virtualization on a template VM in Azure Lab Services. The following steps guide you through how to use the [Lab Services Hyper-V scripts](https://github.com/Azure/LabServices/tree/main/ClassTypes/PowerShell/HyperV). The steps are intended for Windows Server 2016, Windows Server 2019, or Windows 10.
+You can use a PowerShell script to set up nested virtualization on a template VM in Azure Lab Services. The following steps guide you through how to use the [Lab Services Hyper-V scripts](https://github.com/Azure/LabServices/tree/main/ClassTypes/PowerShell/HyperV). The script is intended for Windows 11.
1. Follow these steps to [connect to and update the template machine](./how-to-create-manage-template.md#update-a-template-vm). 1. Launch **PowerShell** in **Administrator** mode.
-1. You may have to change the execution policy to successfully run the script. Run the following command:
+1. You might have to change the execution policy to successfully run the script.
```powershell Set-ExecutionPolicy bypass -force ```
-1. Download and run the script:
+1. Download and run the script to enable the Hyper-V feature and tools.
```powershell Invoke-WebRequest 'https://aka.ms/azlabs/scripts/hyperV-powershell' -Outfile SetupForNestedVirtualization.ps1
You can use a PowerShell script to set up nested virtualization on a template VM
``` > [!NOTE]
- > The script may require the machine to be restarted. Follow instructions from the script and re-run the script until **Script completed** is seen in the output.
+ > The script might require the machine to be restarted. If so, stop and start the template VM from the [Azure Lab Services website](https://labs.azure.com) and re-run the script until **Script completed** is seen in the output.
-1. Don't forget to reset the execution policy. Run the following command:
+1. Don't forget to reset the execution policy.
```powershell Set-ExecutionPolicy default -force ```
-# [Windows tools](#tab/windows)
-
-You can set up nested virtualization on a template VM in Azure Lab Services using Windows roles and tools directly. There are a few things needed on the template VM enable nested virtualization. The following steps describe how to manually set up a Lab Services machine template with Hyper-V. Steps are intended for Windows Server 2016 or Windows Server 2019.
-
-First, follow these steps to [connect to the template virtual machine by using a remote desktop client](./how-to-create-manage-template.md#update-a-template-vm).
-
-### 1. Enable the Hyper-V role
-
-The following steps describe the actions to enable Hyper-V on Windows Server using Server Manager. After enabling Hyper-V, Hyper-V manager is available to add, modify, and delete client VMs.
-
-1. In **Server Manager**, on the Dashboard page, select **Add Roles and Features**.
-
-2. On the **Before you begin** page, select **Next**.
-3. On the **Select installation type** page, keep the default selection of Role-based or feature-based installation and then select **Next**.
-4. On the **Select destination server** page, select a server from the server pool. The current server is already selected. Select **Next**.
-5. On the **Select server roles** page, select **Hyper-V**.
-6. The **Add Roles and Features Wizard** pop-up appears. Select **Include management tools (if applicable)**. Select the **Add Features** button.
-7. On the **Select server roles** page, select **Next**.
-8. On the **Select features page**, select **Next**.
-9. On the **Hyper-V** page, select **Next**.
-10. On the **Create Virtual Switches** page, accept the defaults, and select **Next**.
-11. On the **Virtual Machine Migration** page, accept the defaults, and select **Next**.
-12. On the **Default Stores** page, accept the defaults, and select **Next**.
-13. On the **Confirm installation selections** page, select **Restart the destination server automatically if required**.
-14. When the **Add Roles and Features Wizard** pop-up appears, select **Yes**.
-15. Select **Install**.
-16. Wait for the **Installation progress** page to indicate that the Hyper-V role is complete. The machine may restart in the middle of the installation.
-17. Select **Close**.
-
-### 2. Enable the DHCP role
-
-When you create a client VM, it needs an IP address in the Network Address Translation (NAT) network. Create the NAT network in a later step.
-
-To assign the IP addresses automatically, configure the lab VM template as a DHCP server:
-
-1. In **Server Manager**, on the **Dashboard** page, select **Add Roles and Features**.
-2. On the **Before you begin** page, select **Next**.
-3. On the **Select installation type** page, select **Role-based or feature-based installation** and then select **Next**.
-4. On the **Select destination server** page, select the current server from the server pool and then select **Next**.
-5. On the **Select server roles** page, select **DHCP Server**.
-6. The **Add Roles and Features Wizard** pop-up appears. Select **Include management tools (if applicable)**. Select **Add Features**.
-
- >[!NOTE]
- >You may see a validation error stating that no static IP addresses were found. This warning can be ignored for our scenario.
-
-7. On the **Select server roles** page, select **Next**.
-8. On the **Select features** page, select **Next**.
-9. On the **DHCP Server** page, select **Next**.
-10. On the **Confirm installation selections** page, select **Install**.
-11. Wait for the **Installation progress page** to indicate that the DHCP role is complete.
-12. Select Close.
-
-### 3. Enable the Routing and Remote Access role
-
-Next, enable the [Routing service](/windows-server/remote/remote-access/remote-access#routing-service) to enable routing network traffic between the VMs on the template VM.
-
-1. In **Server Manager**, on the **Dashboard** page, select **Add Roles and Features**.
-
-2. On the **Before you begin** page, select **Next**.
-3. On the **Select installation type** page, select **Role-based or feature-based installation** and then select **Next**.
-4. On the **Select destination server** page, select the current server from the server pool and then select **Next**.
-5. On the **Select server roles** page, select **Remote Access**. Select **OK**.
-6. On the **Select features** page, select **Next**.
-7. On the **Remote Access** page, select **Next**.
-8. On the **Role Services** page, select **Routing**.
-9. The **Add Roles and Features Wizard** pop-up appears. Select **Include management tools (if applicable)**. Select **Add Features**.
-10. Select **Next**.
-11. On the **Web Server Role (IIS)** page, select **Next**.
-12. On the **Select role services** page, select **Next**.
-13. On the **Confirm installation selections** page, select **Install**.
-14. Wait for the **Installation progress** page to indicate that the Remote Access role is complete.
-15. Select **Close**.
-
-### 4. Create virtual NAT network
+The template VM is now configured for use with nested virtualization and you can [create VMs](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v?tabs=hyper-v-manager) inside it. Use the switch specified by the script when creating new Hyper-V VMs.
-Now that you've installed all the necessary roles, you can create the NAT network. The creation process involves creating a switch and the NAT network, itself.
-
-A NAT network assigns a public IP address to a group of VMs on a private network to allow connectivity to the internet. In this case, the group of private VMs consists of the nested VMs. The NAT network allows the nested VMs to communicate with one another.
-
-A switch is a network device that handles receiving and routing of traffic in a network.
-
-#### Create a new virtual switch
-
-To create a virtual switch in Hyper-V:
+# [Windows tools](#tab/windows)
-1. Open **Hyper-V Manager** from Windows Administrative Tools.
+You can set up nested virtualization on a template VM in Azure Lab Services using Windows features and tools directly. The following steps describe how to manually set up a Lab Services machine template with Hyper-V. Steps are intended for Windows 11.
-2. Select the current server in the left-hand navigation menu.
-3. Select **Virtual Switch Manager…** from the **Actions** menu on the right-hand side of the **Hyper-V Manager**.
-4. On the **Virtual Switch Manager** pop-up, select **Internal** for the type of switch to create. Select **Create Virtual Switch**.
-5. For the newly created virtual switch, set the name to something memorable. For this example, we use 'LabServicesSwitch'. Select **OK**.
-6. A new network adapter is created. The name is similar to 'vEthernet (LabServicesSwitch)'. To verify open the **Control Panel**, select **Network and Internet**, select **View network status and tasks**. On the left, select **Change adapter settings**.
+1. Open the **Settings** page.
+1. Select **Apps**.
+1. Select **Optional features**.
+1. Select **More Windows features** under the **Related features** section.
+1. The **Windows features** pop-up appears. Check the **Hyper-V** feature and select **OK**.
+1. Wait for the Hyper-V feature to be installed. When prompted to restart the VM, select **Don't restart**.
+1. Go to the [Azure Lab Services website](https://labs.azure.com) to stop and restart the template VM.
-#### Create a NAT network
+The template VM is now configured to use nested virtualization and you can [create VMs](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v?tabs=hyper-v-manager) inside it. Use 'Default Switch' when creating new nested VMs with Hyper-V.
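For example, a minimal sketch of creating a nested VM attached to the 'Default Switch'; the VM name and memory size are arbitrary examples:

```powershell
# Create a Generation 2 nested VM connected to the Hyper-V 'Default Switch'.
New-VM -Name 'NestedVM1' -MemoryStartupBytes 2GB -Generation 2 -SwitchName 'Default Switch'
# Attach installation media or a virtual hard disk before starting the VM.
```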
-To create a NAT network on the lab template VM:
+
-1. Open the **Routing and Remote Access** tool from Windows Administrative Tools.
+## Connect to a nested VM in another lab VM
-2. Select the local server in the left navigation page.
-3. Choose **Action** -> **Configure and Enable Routing and Remote Access**.
-4. When **Routing and Remote Access Server Setup Wizard** appears, select **Next**.
-5. On the **Configuration** page, select **Network address translation (NAT)** configuration. Select **Next**.
+Extra configuration is required to connect from a lab VM, or its nested VMs, to a nested VM that is hosted in another lab VM. Add a static mapping to the NAT instance with the [**Add-NetNatStaticMapping**](/powershell/module/netnat/add-netnatstaticmapping) PowerShell cmdlet.
- >[!WARNING]
- >Do not choose the 'Virtual private network (VPN) access and NAT' option.
+> [!NOTE]
+> The ping command to test connectivity from or to a nested VM doesn't work.
-6. On **NAT Internet Connection** page, choose 'Ethernet'. Don't choose the 'vEthernet (LabServicesSwitch)' connection we created in Hyper-V Manager. Select **Next**.
-7. Select **Finish** on the last page of the wizard.
-8. When the **Start the service** dialog appears, select **Start Service**.
-9. Wait until service is started.
+> [!NOTE]
+> The static mapping only works when you use private IP addresses. The VM that the lab user is connecting from must be a lab VM, or the VM has to be on the same network if using advanced networking.
-### 5. Update network adapter settings
+### Example scenarios
-Next, associate the IP address of the network adapter with the default gateway IP of the NAT network you created earlier. In this example, assign an IP address of 192.168.0.1, with a subnet mask of 255.255.255.0. Use the virtual switch that you created earlier.
+Consider the following sample lab setup:
-1. Open the **Control Panel**, select **Network and Internet**, select **View network status and tasks**.
+- Lab VM 1 (Windows Server 2022, IP 10.0.0.8)
+ - Nested VM 1-1 (Ubuntu 20.04, IP 192.168.0.102, SSH allowed)
+ - Nested VM 1-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
-2. On the left, select **Change adapter settings**.
-3. In the **Network Connections** window, double-click on 'vEthernet (LabServicesSwitch)' to show the **vEthernet (LabServicesSwitch) Status** details dialog.
-4. Select the **Properties** button.
-5. Select **Internet Protocol Version 4 (TCP/IPv4)** item and select the **Properties** button.
-6. In the **Internet Protocol Version 4 (TCP/IPv4) Properties** dialog:
+- Lab VM 2 (Windows Server 2022, IP 10.0.0.9)
+ - Nested VM 2-1 (Ubuntu 20.04, IP 192.168.0.102, SSH allowed)
+ - Nested VM 2-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
- - Select **Use the following IP address**.
- - For the IP address, enter 192.168.0.1.
- - For the subnet mask, enter 255.255.255.0.
- - Leave the default gateway and DNs servers blank.
+To enable an SSH connection from lab VM 2 to nested VM 1-1:
- >[!NOTE]
- > The range for the NAT network will be, in CIDR notation, 192.168.0.0/24. This range provides usable IP addresses from 192.168.0.1 to 192.168.0.254. By convention, gateways have the first IP address in a subnet range.
+1. On lab VM 1, add a static mapping:
-7. Select OK.
+ ```powershell
+ Add-NetNatStaticMapping -NatName "LabServicesNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.0.102 -InternalPort 22 -ExternalPort 23
+ ```
-### 6. Create DHCP Scope
+1. On lab VM 2, connect using SSH:
-Next, you can add a DHCP scope. In this case, our NAT network is 192.168.0.0/24 in CIDR notation. This range provides usable IP addresses from 192.168.0.1 to 192.168.0.254. The scope you create must be in that range of usable addresses, excluding the IP address you assigned in the previous step.
+ ```bash
+ ssh user1@10.0.0.8 -p 23
+ ```
-1. Open **Administrative Tools** and open the **DHCP** administrative tool.
-2. In the **DHCP** tool, expand the node for the current server and select **IPv4**.
-3. From the Action menu, choose **New Scope…**.
-4. When the **New Scope Wizard** appears, select **Next** on the **Welcome** page.
-5. On the **Scope Name** page, enter 'LabServicesDhcpScope' or something else memorable for the name. Select **Next**.
-6. On the **IP Address Range** page, enter the following values.
+To enable an RDP connection from lab VM 2, or its nested VMs, to nested VM 1-2:
- - 192.168.0.100 for the Start IP address
- - 192.168.0.200 for the End IP address
- - 24 for the Length
- - 255.255.255.0 for the Subnet mask
+1. On lab VM 1, add a static mapping.
-7. Select **Next**.
-8. On the **Add Exclusions and Delay** page, select **Next**.
-9. On the **Lease Duration** page, select **Next**.
-10. On the **Configure DHCP Options** page, select **Yes, I want to configure these options now**. Select **Next**.
-11. On the **Router (Default Gateway)**
-12. Add 192.168.0.1, if not done already. Select **Next**.
-13. On the **Domain Name and DNS Servers** page, add 168.63.129.16 as a DNS server IP address, if not done already. 168.63.129.16 is the IP address for an Azure static DNS server. Select **Next**.
-14. On the **WINS Servers** page, select **Next**.
-15. One the **Activate Scope** page, select **Yes, I want to activate this scope now**. Select **Next**.
-16. On the **Completing the New Scope Wizard** page, select **Finish**.
+ ```powershell
+ Add-NetNatStaticMapping -NatName "LabServicesNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.0.103 -InternalPort 3389 -ExternalPort 3390
+ ```
-
+1. On lab VM 2, or its nested VMs, connect using RDP to `10.0.0.8:3390`.
-You've now configured your template VM to use nested virtualization and create VMs inside it.
+ > [!IMPORTANT]
+ > Include `~\` in front of the user name. For example, `~\Administrator` or `~\user1`.
## Troubleshooting
Perform the following steps to verify your nested VM configuration:
- This error can happen when a lab user leaves the Hyper-V VM in the saved state. You can right-select the VM in Hyper-V Manager and select **Delete saved state**. > [!CAUTION]
- > Deleting the saved state means that any unsaved work is lost, but anything saved to disk remains intact.
+ > Deleting the saved state means that any unsaved work is lost. Anything saved to disk remains intact.
-- This error can happen when the Hyper-V VM is turned off and the VHDX file is corrupted. If the lab user has created a backup of the VDHX file, or saved a snapshot, they can restore the VM from that point.
+- This error can happen when the Hyper-V VM is turned off and the VHDX file is corrupted. If the lab user created a backup of the VDHX file, they can restore the VM from that point.
-It's recommended that Hyper-V VMs have their [automatic shutdown action set to shutdown](./concept-nested-virtualization-template-vm.md#automatically-shut-down-nested-vms).
+Hyper-V VMs should have their [automatic shutdown action set to shutdown](./concept-nested-virtualization-template-vm.md#automatically-shut-down-nested-vms).
### Hyper-V is too slow
-Increase the number vCPUs and memory that is assigned to the Hyper-V VM in Hyper-V Manager. The total number of vCPUs can't exceed the number of cores of the host VM (lab VM). If you're using variable memory, the default option, increase the minimum amount of memory assigned to the VM. The maximum amount of assigned memory (if using variable memory) can exceed the amount of memory of the host VM. This allows greater flexibility when having to complete intensive operations on just one of the Hyper-V VMs.
+Increase the number of vCPUs and the amount of memory assigned to the Hyper-V VM in Hyper-V Manager. The total number of vCPUs can't exceed the number of cores of the host VM (lab VM). If you're using variable memory, the default option, increase the minimum amount of memory assigned to the VM. The maximum amount of assigned memory (if using variable memory) can exceed the amount of memory of the host VM. Variable memory allows greater flexibility when you have to complete intensive operations on just one of the Hyper-V VMs.
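As a sketch, the same adjustments can also be made with PowerShell on the lab VM; the VM name and sizes are examples, and the nested VM must be stopped before changing its processor count:

```powershell
# Give a nested VM more compute resources. Values are examples only.
Stop-VM -Name 'NestedVM1'
Set-VMProcessor -VMName 'NestedVM1' -Count 4
Set-VMMemory -VMName 'NestedVM1' -DynamicMemoryEnabled $true -MinimumBytes 2GB -StartupBytes 2GB -MaximumBytes 8GB
Start-VM -Name 'NestedVM1'
```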
If you're using the Medium (Nested Virtualization) VM size for the lab, consider using the Large (Nested Virtualization) VM size instead to have more compute resources for each lab VM. ### Internet connectivity isn't working for nested VMs -- Confirm that you followed the previous steps for enabling nested virtualization. Consider using the PowerShell script option.
+- Verify that you followed the previous steps for enabling nested virtualization. Consider using the PowerShell script option.
-- If you're running a system administration class, consider not using the host VM (lab VM) as the DHCP server.
+- Check whether the host VM (lab VM) has the DHCP role installed if you're using Windows Server (a quick check is sketched after this list).
- Changing the settings of the lab VM can cause issues with other lab VMs. Create an internal or private NAT network and have one of the VMs act as the DHCP, DNS, or domain controller. Using private over internal does mean that Hyper-V VMs don't have internet access.
+ Running a lab VM as a DHCP server is an *unsupported* scenario. See [Can I deploy a DHCP server in a virtual network?](/azure/virtual-network/virtual-networks-faq) for details. Changing the settings of the lab VM can cause issues with other lab VMs.
-- Check the network adapter settings for the Hyper-V VM:
+- Check the network adapter settings for the Hyper-V VM.
- Set the IP address of the DNS server and DHCP server to [*168.63.129.16*](/azure/virtual-network/what-is-ip-address-168-63-129-16).
- - Set the guest VM IPv4 address in the range of the [NAT network you created previously](#create-a-nat-network).
+ - If the guest VM IPv4 address is set manually, verify it is in the range of the NAT network connected to the Hyper-V switch.
+ - Try enabling Hyper-V [DHCP guard](/archive/blogs/virtual_pc_guy/hyper-v-networkingdhcp-guard) and [Router guard](/archive/blogs/virtual_pc_guy/hyper-v-networkingrouter-guard).
+
+ ```powershell
+ Get-VMNetworkAdapter * | Set-VMNetworkAdapter -RouterGuard On -DhcpGuard On
+ ```
> [!NOTE]
-> The ping command from a Hyper-V VM to the host VM doesn't work. To test internet connectivity, launch a web browser and verify that the web page loads correctly.
+> The `ping` command from a Hyper-V VM to the host VM doesn't work. To test internet connectivity, launch a web browser and verify that the web page loads correctly.
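For the DHCP role check mentioned in the list above, a minimal sketch on a Windows Server lab VM:

```powershell
# Check whether the DHCP server role is installed on the host (lab) VM.
Get-WindowsFeature -Name DHCP | Select-Object Name, InstallState
```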
+
+### Can't start Hyper-V VMs
+
+You might choose to create a non-admin user when creating your lab. To be able to start or stop Hyper-V VMs, the non-admin user must be added to the **Hyper-V Administrators** group. For more information about Hyper-V and non-admin users, see [Non-admin user](concept-nested-virtualization-template-vm.md#non-admin-user).
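For example, a minimal sketch of adding a non-admin account (here the assumed user name `user1`) to that group on the lab VM:

```powershell
# Allow the non-admin account to manage Hyper-V VMs on this lab VM.
Add-LocalGroupMember -Group 'Hyper-V Administrators' -Member 'user1'
```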
## Next steps
-Now that you've configured nested virtualization on the template VM, you can [create nested virtual machines with Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v). See [Microsoft Evaluation Center](https://www.microsoft.com/evalcenter/) to check out available operating systems and software.
+Now that nested virtualization is configured on the template VM, you can [create nested virtual machines with Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v). See [Microsoft Evaluation Center](https://www.microsoft.com/evalcenter/) to check out available operating systems and software.
- [Add lab users](how-to-manage-lab-users.md) - [Set quota hours](how-to-manage-lab-users.md#set-quotas-for-users)
load-testing Concept Azure Load Testing Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-azure-load-testing-vnet-injection.md
description: Learn about the scenarios for deploying Azure Load Testing in a virtual network. This deployment enables you to load test private application endpoints and hybrid deployments. --++ Last updated 08/22/2023
load-testing Concept Load Test App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-test-app-service.md
description: 'Learn how to use Azure Load Testing with apps hosted on Azure App Service. Run load tests, use environment variables, and gain insights with server metrics and diagnostics.' --++ Last updated 06/30/2023
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
Title: Key concepts for Azure Load Testing
description: Learn how Azure Load Testing works, and the key concepts behind it. --++ Last updated 11/24/2023
load-testing How To Add Requests To Url Based Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-add-requests-to-url-based-test.md
description: Learn how to add requests to a URL-based test in Azure Load Testing by using UI fields or cURL commands. Use variables to pass parameters to requests. --++ Last updated 10/30/2023
load-testing How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-assign-roles.md
Title: Manage roles in Azure Load Testing description: Learn how to manage access to an Azure load testing resource using Azure role-based access control (Azure RBAC).--++
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
description: 'Learn how you can visually compare multiple test runs with Azure Load Testing to identify and analyze performance regressions.' --++ Last updated 01/11/2024
load-testing How To Configure Load Test Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-load-test-cicd.md
Title: 'Manually configure CI/CD for load tests' description: 'This article shows how to run your load tests with Azure Load Testing in CI/CD. Learn how to add a load test to GitHub Actions, Azure Pipelines or other CI tools.'--++
load-testing How To Configure User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-user-properties.md
description: Learn how to use JMeter user properties with Azure Load Testing. --++ Last updated 04/05/2023
load-testing How To Create And Run Load Test With Jmeter Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-and-run-load-test-with-jmeter-script.md
--++ Last updated 10/23/2023 adobe-target: true
load-testing How To Create Manage Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test-runs.md
description: Learn how to create and manage tests runs in Azure Load Testing with the Azure portal. --++ Last updated 05/10/2023
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
description: 'Learn how to create and manage tests in your Azure Load Testing resource.' --++ Last updated 05/10/2023
load-testing How To Diagnose Failing Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-diagnose-failing-load-test.md
description: Learn how you can diagnose and troubleshoot failing tests in Azure Load Testing. Download and analyze the Apache JMeter worker logs in the Azure portal. --++ Last updated 11/23/2023
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
description: Learn how to export load test results in Azure Load Testing and use them for reporting in third-party tools. --++ Last updated 02/08/2024 # CustomerIntent: As a tester, I want to understand how I can export the load test results, so that I can use other reporting tools to analyze the load test results.
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
description: Learn how to configure test engine instances in Azure Load Testing to run high-scale load tests. Monitor engine health metrics to find an optimal configuration for your load test. --++ Last updated 10/23/2023
load-testing How To Monitor Server Side Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-monitor-server-side-metrics.md
description: Learn how to capture and monitor server-side application metrics when running a load test with Azure Load Testing. Add Azure app components and resource metrics to your load test configuration. --++ Last updated 01/16/2024
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
description: Learn how to read external data from a CSV file in Apache JMeter with Azure Load Testing. --++ Last updated 10/23/2023
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
description: Learn how to deploy Azure Load Testing in a virtual network (virtua
--++ Last updated 05/12/2023
load-testing How To Test Secured Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-secured-endpoints.md
Title: Load test authenticated endpoints description: Learn how to load test authenticated endpoints with Azure Load Testing. Use shared secrets, credentials, or client certificates for load testing applications that require authentication.--++
load-testing How To Use Jmeter Plugins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-jmeter-plugins.md
description: Learn how to customize your load test with JMeter plugins and Azure Load Testing. Upload a custom plugin JAR file or reference a publicly available plugin. --++ Last updated 10/19/2023
load-testing Monitor Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/monitor-load-testing.md
description: Learn about the data
--++ Last updated 04/05/2023
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
description: 'Azure Load Testing is a fully managed load-testing service for gen
--++ Last updated 05/09/2023 adobe-target: true
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
description: 'This quickstart shows how to create an Azure Load Testing resource
--++ Last updated 10/23/2023 adobe-target: true
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
description: 'Learn how to configure a load test by using a YAML file. The YAML
--++ Last updated 12/06/2023 adobe-target: true
load-testing Resource Jmeter Property Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-jmeter-property-overrides.md
description: 'The list of Apache JMeter properties that are overridden by Azure
--++ Last updated 01/12/2023
load-testing Resource Jmeter Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-jmeter-support.md
description: Learn which Apache JMeter features are supported in Azure Load Test
--++ Last updated 06/14/2023
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
description: 'Service limits used for capacity planning and configuring high-sca
--++ Last updated 09/21/2022
load-testing Resource Supported Azure Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-supported-azure-resource-types.md
description: 'Learn which Azure resource types are supported for server-side mon
--++ Last updated 06/02/2023
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
description: In this tutorial, you learn how to identify performance bottlenecks in a web app by running a high-scale load test with Azure Load Testing. Use the dashboard to analyze client-side and server-side metrics. --++ Last updated 11/29/2023 #Customer intent: As an Azure user, I want to learn how to identify and fix bottlenecks in a web app so that I can improve the performance of the web apps that I'm running in Azure.
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
To successfully invoke a batch endpoint and create jobs, ensure you have the fol
> [!TIP] > If you are using a credential-less data store or external Azure Storage Account as data input, ensure you [configure compute clusters for data access](how-to-authenticate-batch-endpoint.md#configure-compute-clusters-for-data-access). **The managed identity of the compute cluster** is used **for mounting** the storage account. The identity of the job (invoker) is still used to read the underlying data allowing you to achieve granular access control.
+## Create jobs basics
+To create a job from a batch endpoint, you have to invoke it. You can invoke the endpoint by using the Azure CLI, the Azure Machine Learning SDK for Python, or a REST API call. The following examples show the basics of invocation for a batch endpoint that receives a single input data folder for processing. See [Understanding inputs and outputs](how-to-access-data-batch-endpoints-jobs.md#understanding-inputs-and-outputs) for examples with different inputs and outputs.
+
+# [Azure CLI](#tab/cli)
+
+Use the `invoke` operation under batch endpoints:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+```
+
+# [Python](#tab/sdk)
+
+Use the method `MLClient.batch_endpoints.invoke()` to specify the name of the experiment:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ inputs={
+ "heart_dataset": Input("https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ }
+)
+```
+
+# [REST](#tab/rest)
+
+Make a `POST` request to the invocation URL of the endpoint. You can get the invocation URL from the Azure Machine Learning studio, on the endpoint's details page.
+
+__Body__
+
+```json
+{
+ "properties": {
+ "InputData": {
+ "heart_dataset": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ }
+ }
+ }
+}
+```
+
+__Request__
+
+```http
+POST jobs HTTP/1.1
+Host: <ENDPOINT_URI>
+Authorization: Bearer <TOKEN>
+Content-Type: application/json
+```
++
+### Invoke a specific deployment
+
+Batch endpoints can host multiple deployments under the same endpoint. The default deployment is used unless the user specifies otherwise. You can change the deployment that is used as follows:
+
+# [Azure CLI](#tab/cli)
+
+Use the argument `--deployment-name` or `-d` to specify the name of the deployment:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --deployment-name $DEPLOYMENT_NAME \
+ --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+```
+
+# [Python](#tab/sdk)
+
+Use the parameter `deployment_name` to specify the name of the deployment:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ deployment_name=deployment.name,
+ inputs={
+ "heart_dataset": Input("https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ }
+)
+```
+
+# [REST](#tab/rest)
+
+Add the header `azureml-model-deployment` to your request, including the name of the deployment you want to invoke.
+
+__Body__
+
+```json
+{
+ "properties": {
+ "InputData": {
+ "heart_dataset": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ }
+ }
+ }
+}
+```
+
+__Request__
+
+```http
+POST jobs HTTP/1.1
+Host: <ENDPOINT_URI>
+Authorization: Bearer <TOKEN>
+Content-Type: application/json
+azureml-model-deployment: DEPLOYMENT_NAME
+```
++
+### Configure job properties
+
+You can configure some of the properties in the created job at invocation time.
+
+> [!NOTE]
+> Configuring job properties is currently only available for batch endpoints with pipeline component deployments.
+
+#### Configure experiment name
+
+# [Azure CLI](#tab/cli)
+
+Use the argument `--experiment-name` to specify the name of the experiment:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --experiment-name "my-batch-job-experiment" \
+ --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+```
+
+# [Python](#tab/sdk)
+
+Use the parameter `experiment_name` to specify the name of the experiment:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ experiment_name="my-batch-job-experiment",
+ inputs={
+ "heart_dataset": Input("https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"),
+ }
+)
+```
+
+# [REST](#tab/rest)
+
+Use the key `experimentName` in the `properties` section to indicate the experiment name:
+
+__Body__
+
+```json
+{
+ "properties": {
+ "InputData": {
+ "heart_dataset": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ }
+ },
+ "properties":
+ {
+ "experimentName": "my-batch-job-experiment"
+ }
+ }
+}
+```
+
+__Request__
+
+```http
+POST jobs HTTP/1.1
+Host: <ENDPOINT_URI>
+Authorization: Bearer <TOKEN>
+Content-Type: application/json
+```
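+
+Whichever interface you use, you can confirm afterwards that the job landed under the requested experiment. A minimal sketch with the Python SDK (assuming the `ml_client` and `job` objects from the SDK tab):
+
+```python
+# Retrieve the created job and check which experiment it was filed under.
+created_job = ml_client.jobs.get(job.name)
+print(created_job.experiment_name)  # expected: my-batch-job-experiment
+```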
+ ## Understanding inputs and outputs
The following example shows how to change the location where an output named `sc
Content-Type: application/json ```
-## Invoke a specific deployment
-
-Batch endpoints can host multiple deployments under the same endpoint. The default endpoint is used unless the user specifies otherwise. You can change the deployment that is used as follows:
-
-# [Azure CLI](#tab/cli)
-
-Use the argument `--deployment-name` or `-d` to specify the name of the deployment:
-
-```azurecli
-az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input $INPUT_DATA
-```
-
-# [Python](#tab/sdk)
-
-Use the parameter `deployment_name` to specify the name of the deployment:
-
-```python
-job = ml_client.batch_endpoints.invoke(
- endpoint_name=endpoint.name,
- deployment_name=deployment.name,
- inputs={
- "heart_dataset": input,
- }
-)
-```
-
-# [REST](#tab/rest)
-
-Add the header `azureml-model-deployment` to your request, including the name of the deployment you want to invoke.
-
-__Request__
-
-```http
-POST jobs HTTP/1.1
-Host: <ENDPOINT_URI>
-Authorization: Bearer <TOKEN>
-Content-Type: application/json
-azureml-model-deployment: DEPLOYMENT_NAME
-```
--
-## Configure job properties
-
-You can configure some of the properties in the created job at invocation time.
-
-> [!NOTE]
-> Configuring job properties is only available in batch endpoints with Pipeline component deployments by the moment.
-
-### Configure experiment name
-
-# [Azure CLI](#tab/cli)
-
-Use the argument `--experiment-name` to specify the name of the experiment:
-
-```azurecli
-az ml batch-endpoint invoke --name $ENDPOINT_NAME --experiment-name "my-batch-job-experiment" --input $INPUT_DATA
-```
-
-# [Python](#tab/sdk)
-
-Use the parameter `experiment_name` to specify the name of the experiment:
-
-```python
-job = ml_client.batch_endpoints.invoke(
- endpoint_name=endpoint.name,
- experiment_name="my-batch-job-experiment",
- inputs={
- "heart_dataset": input,
- }
-)
-```
-
-# [REST](#tab/rest)
-
-Use the key `experimentName` in `properties` section to indicate the experiment name:
-
-__Body__
-
-```json
-{
- "properties": {
- "InputData": {
- "heart_dataset": {
- "JobInputType" : "UriFolder",
- "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
- }
- },
- "properties":
- {
- "experimentName": "my-batch-job-experiment"
- }
- }
-}
-```
-
-__Request__
-
-```http
-POST jobs HTTP/1.1
-Host: <ENDPOINT_URI>
-Authorization: Bearer <TOKEN>
-Content-Type: application/json
-```
-- ## Next steps
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
In this case, we want to execute a batch endpoint using the identity of the user
1. Once authenticated, use the following command to run a batch deployment job: ```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
``` # [Python](#tab/sdk)
When working with REST, we recommend invoking batch endpoints using a service pr
1. The simplest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following command: ```azurecli
- az account get-access-token --resource https://ml.azure.com --query "accessToken" --output tsv
+ az account get-access-token --resource https://ml.azure.com \
+ --query "accessToken" \
+ --output tsv
``` 1. Take note of the generated output.
When working with REST, we recommend invoking batch endpoints using a service pr
{ "properties": { "InputData": {
- "mnistinput": {
- "JobInputType" : "UriFolder",
- "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
} } }
In this case, we want to execute a batch endpoint using a service principal alre
1. To authenticate using a service principal, use the following command. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli). ```azurecli
- az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant>
+ az login --service-principal \
+ --tenant <tenant> \
+ -u <app-id> \
+ -p <password-or-cert>
``` 1. Once authenticated, use the following command to run a batch deployment job: ```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/
``` # [Python](#tab/sdk)
In this case, we want to execute a batch endpoint using a service principal alre
``` > [!IMPORTANT]
- > Notice that the resource scope for invoking a batch endpoints (`https://ml.azure.com) is different from the resource scope used to manage them. All management APIs in Azure use the resource scope `https://management.azure.com`, including Azure Machine Learning.
+ > Notice that the resource scope for invoking a batch endpoints (`https://ml.azure.com`) is different from the resource scope used to manage them. All management APIs in Azure use the resource scope `https://management.azure.com`, including Azure Machine Learning.
3. Once authenticated, use the query to run a batch deployment job:
In this case, we want to execute a batch endpoint using a service principal alre
{ "properties": { "InputData": {
- "mnistinput": {
- "JobInputType" : "UriFolder",
- "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
} } }
az login --identity
Once authenticated, use the following command to run a batch deployment job: ```azurecli
-az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
+az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
``` # [Python](#tab/sdk)
To successfully invoke a batch endpoint you need the following explicit actions
"Microsoft.MachineLearningServices/workspaces/batchEndpoints/read", "Microsoft.MachineLearningServices/workspaces/batchEndpoints/write", "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/read",
- "Microsoft.MachineLearningServices/workspaces/batchEndpoints/write",
"Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/write", "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/jobs/write", "Microsoft.MachineLearningServices/workspaces/batchEndpoints/jobs/write",
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
Previously updated : 10/18/2021 Last updated : 02/18/2024
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
-In this article, you will learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning.
+In this article, you'll learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning.
To use ONNX for predictions, you need to:
To use ONNX for predictions, you need to:
[ONNX Runtime](https://onnxruntime.ai/https://docsupdatetracker.net/index.html) is an open-source project that supports cross-platform inference. ONNX Runtime provides APIs across programming languages (including Python, C++, C#, C, Java, and JavaScript). You can use these APIs to perform inference on input images. After you have the model that has been exported to ONNX format, you can use these APIs on any programming language that your project needs.
-In this guide, you'll learn how to use [Python APIs for ONNX Runtime](https://onnxruntime.ai/docs/get-started/with-python.html) to make predictions on images for popular vision tasks. You can use these ONNX exported models across languages.
+In this guide, you learn how to use [Python APIs for ONNX Runtime](https://onnxruntime.ai/docs/get-started/with-python.html) to make predictions on images for popular vision tasks. You can use these ONNX exported models across languages.
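+
+As a quick orientation, here's a minimal sketch of the ONNX Runtime Python calls this article relies on. The model path and input shape are placeholders; the task-specific preprocessing is covered in the later sections:
+
+```python
+import numpy as np
+import onnxruntime
+
+session = onnxruntime.InferenceSession("model.onnx")  # exported AutoML model (placeholder path)
+input_name = session.get_inputs()[0].name
+dummy_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
+outputs = session.run(None, {input_name: dummy_batch})  # None returns all model outputs
+print([output.shape for output in outputs])
+```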
## Prerequisites
onnx_model_path = mlflow_client.download_artifacts(
After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](how-to-prepare-datasets-for-automl-images.md) for each vision task.
-We've trained the models for all vision tasks with their respective datasets to demonstrate ONNX model inference.
+We trained the models for all vision tasks with their respective datasets to demonstrate ONNX model inference.
## Load the labels and ONNX model files
The output is a tuple of `output_names` and predictions. Here, `output_names` an
| Output name | Output shape | Output type | Description | | -- |-|--||
-| `output_names` | `(3*batch_size)` | List of keys | For a batch size of 2, `output_names` will be `['boxes_0', 'labels_0', 'scores_0', 'boxes_1', 'labels_1', 'scores_1']` |
-| `predictions` | `(3*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` will take the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n2_boxes, 4), (n2_boxes), (n2_boxes)]`. Here, values at each index correspond to same index in `output_names`. |
+| `output_names` | `(3*batch_size)` | List of keys | For a batch size of 2, `output_names` is `['boxes_0', 'labels_0', 'scores_0', 'boxes_1', 'labels_1', 'scores_1']` |
+| `predictions` | `(3*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` takes the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n2_boxes, 4), (n2_boxes), (n2_boxes)]`. Here, values at each index correspond to same index in `output_names`. |
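+
+Because the outputs arrive as a flat list (`boxes_0, labels_0, scores_0, boxes_1, ...`), a small hypothetical helper like the following can regroup them per image; use `outputs_per_image=4` for instance segmentation, which also returns masks:
+
+```python
+def group_outputs(output_names, predictions, outputs_per_image=3):
+    """Regroup flat ONNX outputs into one dictionary per image in the batch."""
+    grouped = []
+    for i in range(0, len(output_names), outputs_per_image):
+        grouped.append(dict(zip(output_names[i:i + outputs_per_image],
+                                predictions[i:i + outputs_per_image])))
+    return grouped
+```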
-The following table describes boxes, labels and scores returned for each sample in the batch of images.
+The following table describes boxes, labels, and scores returned for each sample in the batch of images.
| Name | Shape | Type | Description | | -- |-|--||
The input is a preprocessed image, with the shape `(1, 3, 640, 640)` for a batch
| Input | `(batch_size, num_channels, height, width)` | ndarray(float) | Input is a preprocessed image, with the shape `(1, 3, 640, 640)` for a batch size of 1, and a height of 640 and width of 640.| ### Output format
-ONNX model predictions contain multiple outputs. The first output is needed to perform non-max suppression for detections. For ease of use, automated ML displays the output format after the NMS postprocessing step. The output after NMS is a list of boxes, labels, and scores for each sample in the batch.
+ONNX model predictions contain multiple outputs. The first output is needed to perform nonmax suppression for detections. For ease of use, automated ML displays the output format after the NMS postprocessing step. The output after NMS is a list of boxes, labels, and scores for each sample in the batch.
| Output name | Output shape | Output type | Description |
The output is a tuple of `output_names` and predictions. Here, `output_names` an
| Output name | Output shape | Output type | Description | | -- |-|--||
-| `output_names` | `(4*batch_size)` | List of keys | For a batch size of 2, `output_names` will be `['boxes_0', 'labels_0', 'scores_0', 'masks_0', 'boxes_1', 'labels_1', 'scores_1', 'masks_1']` |
-| `predictions` | `(4*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` will take the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n1_boxes, 1, height_onnx, width_onnx), (n2_boxes, 4), (n2_boxes), (n2_boxes), (n2_boxes, 1, height_onnx, width_onnx)]`. Here, values at each index correspond to same index in `output_names`. |
+| `output_names` | `(4*batch_size)` | List of keys | For a batch size of 2, `output_names` is `['boxes_0', 'labels_0', 'scores_0', 'masks_0', 'boxes_1', 'labels_1', 'scores_1', 'masks_1']` |
+| `predictions` | `(4*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` takes the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n1_boxes, 1, height_onnx, width_onnx), (n2_boxes, 4), (n2_boxes), (n2_boxes), (n2_boxes, 1, height_onnx, width_onnx)]`. Here, values at each index correspond to same index in `output_names`. |
| Name | Shape | Type | Description | | -- |-|--||
print(json.dumps(bounding_boxes_batch, indent=1))
# [Multi-class image classification](#tab/multi-class)
-Visualize an input image with labels
+Visualize an input image with labels.
```python import matplotlib.image as mpimg
plt.show()
# [Multi-label image classification](#tab/multi-label)
-Visualize an input image with labels
+Visualize an input image with labels.
```python import matplotlib.image as mpimg
plt.show()
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-Visualize an input image with boxes and labels
+Visualize an input image with boxes and labels.
```python import matplotlib.image as mpimg
plt.show()
# [Object detection with YOLO](#tab/object-detect-yolo)
-Visualize an input image with boxes and labels
+Visualize an input image with boxes and labels.
```python import matplotlib.image as mpimg
plt.show()
# [Instance segmentation](#tab/instance-segmentation)
-Visualize a sample input image with masks and labels
+Visualize a sample input image with masks and labels.
```python import matplotlib.patches as patches
machine-learning How To Secure Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-secure-prompt-flow.md
When you're developing your LLM application using prompt flow, you want a secure
- Related Azure Cognitive Services as such Azure OpenAI, Azure content safety and Azure AI Search, you can use network config to make them as private then using private endpoint to let Azure Machine Learning services communicate with them. - Other non Azure resources such as SerpAPI etc. If you have strict outbound rule, you need add FQDN rule to access them.
+## Options in different network setups
+
+In Azure Machine Learning, you have two options for network isolation: bring your own virtual network, or use a workspace managed virtual network. Learn more about [Secure workspace resources](../how-to-network-isolation-planning.md).
+
+The following table illustrates the options for securing prompt flow in different network setups.
+
+|Ingress|Egress |Compute type in authoring |Compute type in inference |Network options for workspace|
+|-|-|-||--|
+|Public |Public |Serverless (recommended), Compute instance| Managed online endpoint (recommended), K8s online endpoint|Managed (recommended) / Bring your own|
+|Private|Public |Serverless (recommended), Compute instance| Managed online endpoint (recommended), K8s online endpoint|Managed (recommended) / Bring your own|
+|Public |Private|Serverless (recommended), Compute instance| Managed online endpoint |Managed|
+|Private|Private|Serverless (recommended), Compute instance| Managed online endpoint |Managed|
+
+- In a private virtual network scenario, we recommend using a workspace-enabled managed virtual network. It's the easiest way to secure your workspace and related resources.
+- You can also have one workspace for prompt flow authoring with your own virtual network, and another workspace for prompt flow deployment using a managed online endpoint with a workspace managed virtual network.
+- Mixing a managed virtual network and a bring-your-own virtual network in a single workspace isn't supported. Also, because managed online endpoints support only managed virtual networks, you can't deploy prompt flow to a managed online endpoint in a workspace that uses a bring-your-own virtual network.
++ ## Secure prompt flow with workspace managed virtual network Workspace managed virtual network is the recommended way to support network isolation in prompt flow. It provides easy configuration to secure your workspace. After you enable the managed virtual network at the workspace level, resources related to the workspace in the same virtual network use the same network settings. You can also configure the workspace to use private endpoints to access other Azure resources such as Azure OpenAI, Azure content safety, and Azure AI Search. You can also configure FQDN rules to approve outbound traffic to non-Azure resources used by your prompt flow, such as SerpAPI.
Workspace managed virtual network is the recommended way to support network isol
- If you have strict outbound rule, make sure you have open the [Required public internet access](../how-to-secure-workspace-vnet.md#required-public-internet-access). - Add workspace MSI as `Storage File Data Privileged Contributor` to storage account linked with workspace. Please follow step 2 in [Secure prompt flow with workspace managed virtual network](#secure-prompt-flow-with-workspace-managed-virtual-network). - Meanwhile, you can follow [private Azure Cognitive Services](../../ai-services/cognitive-services-virtual-networks.md) to make them as private.-- If you want to deploy prompt flow in workspace which secured by your own virtual network, you can deploy it to AKS cluster which is in the same virtual network. You can follow [Secure Azure Kubernetes Service inferencing environment](../how-to-secure-kubernetes-inferencing-environment.md) to secure your AKS cluster.
+- If you want to deploy prompt flow in a workspace secured by your own virtual network, you can deploy it to an AKS cluster in the same virtual network. You can follow [Secure Azure Kubernetes Service inferencing environment](../how-to-secure-kubernetes-inferencing-environment.md) to secure your AKS cluster. Learn more about [How to deploy prompt flow to an AKS cluster via code](./how-to-deploy-to-code.md).
- You can either create private endpoint to the same virtual network or leverage virtual network peering to make them communicate with each other. ## Known limitations - AI studio don't support bring your own virtual network, it only support workspace managed virtual network.-- Managed online endpoint only supports workspace with managed virtual network. If you want to use your own virtual network, you might need one workspace for prompt flow authoring with your virtual network and another workspace for prompt flow deployment using managed online endpoint with workspace managed virtual network.
+- Managed online endpoints with selected egress support only workspaces with a managed virtual network. If you want to use your own virtual network, you might need one workspace for prompt flow authoring with your virtual network and another workspace for prompt flow deployment using a managed online endpoint with a workspace managed virtual network.
## Next steps
machine-learning Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/llm-tool.md
Prompt flow provides a few different large language model APIs:
> [!NOTE] > We removed the `embedding` option from the LLM tool API setting. You can use an embedding API with the [embedding tool](embedding-tool.md).
+> Only key-based authentication is supported for Azure OpenAI connection.
+> Don't use non-ASCII characters in the resource group name of your Azure OpenAI resource; prompt flow doesn't support them.
## Prerequisites
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
Follow these steps to find Python packages installed in compute instance runtime
:::image type="content" source="../media/faq/list-packages.png" alt-text="Screenshot that shows finding Python packages installed in runtime." lightbox = "../media/faq/list-packages.png":::
+### Runtime start failures using custom environment
+
+#### CI (Compute instance) runtime start failure using custom environment
+
+To use promptflow as the runtime on a compute instance, you need to use the base image provided by promptflow. If you want to add extra packages to the base image, follow [Customize environment with Docker context for runtime](../how-to-customize-environment-runtime.md) to create a new environment, and then use it to create the compute instance runtime.
+
+If you get `UserError: FlowRuntime on compute instance is not ready`, sign in to the terminal of the compute instance and run `journalctl -u c3-progenitor.service` to check the logs.
+
+#### Automatic runtime start failure with requirements.txt or custom base image
+
+Automatic runtime supports using `requirements.txt` or a custom base image in `flow.dag.yaml` to customize the image. We recommend `requirements.txt` for common cases; it uses `pip install -r requirements.txt` to install the packages. If you have dependencies beyond Python packages, follow [Customize environment with Docker context for runtime](../how-to-customize-environment-runtime.md) to build a new image on top of the promptflow base image, and then use it in `flow.dag.yaml`. Learn more in [Update an automatic runtime (preview) on a flow page](../how-to-create-manage-runtime.md#update-an-automatic-runtime-preview-on-a-flow-page).
+
+- You can't use an arbitrary base image to create a runtime; you need to use the base image provided by promptflow.
+- Don't pin the versions of `promptflow` and `promptflow-tools` in `requirements.txt`, because we already include them in the runtime base image. Using an old version of `promptflow` and `promptflow-tools` might cause unexpected behavior.
## Flow run related issues ### How to find the raw inputs and outputs of the LLM tool for further investigation?
In prompt flow, on flow page with successful run and run detail page, you can fi
You may encounter 409 error from Azure OpenAI, it means you have reached the rate limit of Azure OpenAI. You can check the error message in the output section of LLM node. Learn more about [Azure OpenAI rate limit](../../../ai-services/openai/quotas-limits.md).
machine-learning Vector Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/vector-index-lookup-tool.md
The following example is for a JSON format response returned by the tool, which
} } ]- ```+
+## Deploying to an online endpoint
+
+When you deploy a flow containing the vector index lookup tool to an online endpoint, there's an extra step to set up permissions. During deployment through the web pages, you can choose between system-assigned and user-assigned identity types. Either way, use the Azure portal (or similar functionality) to add the **AzureML Data Scientist** role of Azure Machine Learning studio to the identity assigned to the endpoint.
machine-learning Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/transparency-note.md
To improve performance, you can modify the following parameters, depending on th
The Microsoft development team tested the auto-generate prompt variants feature to evaluate harm mitigation and fitness for purpose.
-The testing for harm mitigation showed support for the combination of system prompts and Azure Open AI content management policies in actively safeguarding responses. You can find more opportunities to minimize the risk of harms in [Azure OpenAI Service abuse monitoring](/azure/ai-services/openai/concepts/abuse-monitoring) and [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter).
+The testing for harm mitigation showed support for the combination of system prompts and Azure OpenAI content management policies in actively safeguarding responses. You can find more opportunities to minimize the risk of harms in [Azure OpenAI Service abuse monitoring](/azure/ai-services/openai/concepts/abuse-monitoring) and [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter).
Fitness-for-purpose testing supported the quality of generated prompts from creative purposes (poetry) and chat-bot agents. We caution you against drawing sweeping conclusions, given the breadth of possible base prompts and potential use cases. For your environment, use evaluations that are appropriate to the required use cases, and ensure that a human reviewer is part of the process.
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
-+ Previously updated : 10/21/2021 Last updated : 03/01/2024 #Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
[!INCLUDE [CLI v1](../includes/machine-learning-cli-v1.md)] [!INCLUDE [SDK v1](../includes/machine-learning-sdk-v1.md)] -
-Azure Machine Learning makes it easy to connect to your data in the cloud. It provides an abstraction layer over the underlying storage service, so you can securely access and work with your data without having to write code specific to your storage type. Azure Machine Learning also provides the following data capabilities:
+Azure Machine Learning makes it easy to connect to your data in the cloud. It provides an abstraction layer over the underlying storage service, so that you can securely access and work with your data without the need to write code specific to your storage type. Azure Machine Learning also provides these data capabilities:
* Interoperability with Pandas and Spark DataFrames * Versioning and tracking of data lineage
-* Data labeling
+* Data labeling
* Data drift monitoring
-
-## Data workflow
-When you're ready to use the data in your cloud-based storage solution, we recommend the following data delivery workflow. This workflow assumes you have an [Azure storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) and data in a cloud-based storage service in Azure.
+## Data workflow
-1. Create an [Azure Machine Learning datastore](#connect-to-storage-with-datastores) to store connection information to your Azure storage.
+To use the data in your cloud-based storage solution, we recommend this data delivery workflow. The workflow assumes that you have an [Azure storage account](../../storage/common/storage-account-create.md?tabs=azure-portal), and data in an Azure cloud-based storage service.
-2. From that datastore, create an [Azure Machine Learning dataset](#reference-data-in-storage-with-datasets) to point to a specific file(s) in your underlying storage.
+1. Create an [Azure Machine Learning datastore](#connect-to-storage-with-datastores) to store connection information to your Azure storage
-3. To use that dataset in your machine learning experiment you can either
- * Mount it to your experiment's compute target for model training.
+2. From that datastore, create an [Azure Machine Learning dataset](#reference-data-in-storage-with-datasets) to point to a specific file or files in your underlying storage
- **OR**
+3. To use that dataset in your machine learning experiment, you can either
+ * Mount the dataset to the compute target of your experiment, for model training
- * Consume it directly in Azure Machine Learning solutions like, automated machine learning (automated ML) experiment runs, machine learning pipelines, or the [Azure Machine Learning designer](concept-designer.md).
+ **OR**
-4. Create [dataset monitors](#monitor-model-performance-with-data-drift) for your model output dataset to detect for data drift.
+ * Consume the dataset directly in Azure Machine Learning solutions - for example, automated machine learning (automated ML) experiment runs, machine learning pipelines, or the [Azure Machine Learning designer](concept-designer.md).
-5. If data drift is detected, update your input dataset and retrain your model accordingly.
+4. Create [dataset monitors](#monitor-model-performance-with-data-drift) for your model output dataset to detect data drift
-The following diagram provides a visual demonstration of this recommended workflow.
+5. For detected data drift, update your input dataset and retrain your model accordingly
-![Diagram shows the Azure Storage Service which flows into a datastore, which flows into a dataset.](./media/concept-data/data-concept-diagram.svg)
+This screenshot shows the recommended workflow:
## Connect to storage with datastores
-Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. [Register and create a datastore](../how-to-access-data.md) to easily connect to your storage account, and access the data in your underlying storage service.
+Azure Machine Learning datastores securely host your data storage connection information on Azure, so you don't have to place that information in your scripts. For more information about connecting to a storage account and data access in your underlying storage service, visit [Register and create a datastore](../how-to-access-data.md).
-Supported cloud-based storage services in Azure that can be registered as datastores:
+These supported Azure cloud-based storage services can register as datastores:
-+ Azure Blob Container
-+ Azure File Share
-+ Azure Data Lake
-+ Azure Data Lake Gen2
-+ Azure SQL Database
-+ Azure Database for PostgreSQL
-+ Databricks File System
-+ Azure Database for MySQL
+- Azure Blob Container
+- Azure File Share
+- Azure Data Lake
+- Azure Data Lake Gen2
+- Azure SQL Database
+- Azure Database for PostgreSQL
+- Databricks File System
+- Azure Database for MySQL
>[!TIP]
-> You can create datastores with credential-based authentication for accessing storage services, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace.
+> You can create datastores with credential-based authentication to access storage services, for example a service principal or a shared access signature (SAS) token. Users with *Reader* access to the workspace can access these credentials.
>
-> If this is a concern, [create a datastore that uses identity-based data access](../how-to-identity-based-data-access.md) to connect to storage services.
-
+> If this is a concern, visit [create a datastore that uses identity-based data access](../how-to-identity-based-data-access.md) for more information about connections to storage services.
## Reference data in storage with datasets
-Azure Machine Learning datasets aren't copies of your data. By creating a dataset, you create a reference to the data in its storage service, along with a copy of its metadata.
+Azure Machine Learning datasets aren't copies of your data. Dataset creation itself creates a reference to the data in its storage service, along with a copy of its metadata.
Because datasets are lazily evaluated, and the data remains in its existing location, you
-* Incur no extra storage cost.
-* Don't risk unintentionally changing your original data sources.
-* Improve ML workflow performance speeds.
+- Incur no extra storage cost
+- Don't risk unintentional changes to your original data sources
+- Improve ML workflow performance speeds
-To interact with your data in storage, [create a dataset](how-to-create-register-datasets.md) to package your data into a consumable object for machine learning tasks. Register the dataset to your workspace to share and reuse it across different experiments without data ingestion complexities.
+To interact with your data in storage, [create a dataset](how-to-create-register-datasets.md) to package your data into a consumable object for machine learning tasks. Register the dataset to your workspace, to share and reuse it across different experiments without data ingestion complexities.
-Datasets can be created from local files, public urls, [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/), or Azure storage services via datastores.
+You can create datasets from local files, public urls, [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/), or Azure storage services via datastores.
-There are 2 types of datasets:
+There are two types of datasets:
-+ A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs. If your data is already cleansed and ready to use in training experiments, you can [download or mount files](how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) referenced by FileDatasets to your compute target.
+- A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs. If your data is already cleansed and ready for training experiments, you can [download or mount files](how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) referenced by FileDatasets to your compute target
-+ A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. You can load a TabularDataset into a pandas or Spark DataFrame for further manipulation and cleansing. For a complete list of data formats you can create TabularDatasets from, see the [TabularDatasetFactory class](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory).
+- A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format, by parsing the provided file or list of files. You can load a TabularDataset into a pandas or Spark DataFrame for further manipulation and cleansing. For a complete list of data formats from which you can create TabularDatasets, visit the [TabularDatasetFactory class](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory)
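+
+For example, here's a minimal SDK v1 sketch that creates one dataset of each type from a registered datastore (the datastore name and paths are placeholders):
+
+```python
+from azureml.core import Dataset, Workspace
+
+ws = Workspace.from_config()
+datastore = ws.datastores["workspaceblobstore"]  # default blob datastore (placeholder)
+
+# FileDataset: reference raw files to download or mount at training time
+file_ds = Dataset.File.from_files(path=(datastore, "images/**"))
+
+# TabularDataset: parse delimited files into a tabular representation
+tabular_ds = Dataset.Tabular.from_delimited_files(path=(datastore, "data/heart.csv"))
+df = tabular_ds.to_pandas_dataframe()
+
+# Register the datasets so they can be reused across experiments
+file_ds.register(workspace=ws, name="images-files", create_new_version=True)
+tabular_ds.register(workspace=ws, name="heart-tabular", create_new_version=True)
+```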
-Additional datasets capabilities can be found in the following documentation:
+These resources offer more information about dataset capabilities:
-+ [Version and track](how-to-version-track-datasets.md) dataset lineage.
-+ [Monitor your dataset](how-to-monitor-datasets.md) to help with data drift detection.
+- [Version and track](how-to-version-track-datasets.md) dataset lineage
+- [Monitor your dataset](how-to-monitor-datasets.md) to help with data drift detection
## Work with your data
-With datasets, you can accomplish a number of machine learning tasks through seamless integration with Azure Machine Learning features.
-
-+ Create a [data labeling project](#label-data-with-data-labeling-projects).
-+ Train machine learning models:
- + [automated ML experiments](../how-to-use-automated-ml-for-ml-models.md)
- + the [designer](tutorial-designer-automobile-price-train-score.md#import-data)
- + [notebooks](how-to-train-with-datasets.md)
- + [Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md)
-+ Access datasets for scoring with [batch inference](../tutorial-pipeline-batch-scoring-classification.md) in [machine learning pipelines](how-to-create-machine-learning-pipelines.md).
-+ Set up a dataset monitor for [data drift](#monitor-model-performance-with-data-drift) detection.
-
+With datasets, you can accomplish machine learning tasks through seamless integration with Azure Machine Learning features.
+- Create a [data labeling project](#label-data-with-data-labeling-projects)
+- Train machine learning models:
+ - [automated ML experiments](../how-to-use-automated-ml-for-ml-models.md)
+ - the [designer](tutorial-designer-automobile-price-train-score.md#import-data)
+ - [notebooks](how-to-train-with-datasets.md)
+ - [Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md)
+- Access datasets for scoring with [batch inference](../tutorial-pipeline-batch-scoring-classification.md) in [machine learning pipelines](how-to-create-machine-learning-pipelines.md)
+- Set up a dataset monitor for [data drift](#monitor-model-performance-with-data-drift) detection
## Label data with data labeling projects
-Labeling large amounts of data has often been a headache in machine learning projects. Those with a computer vision component, such as image classification or object detection, generally require thousands of images and corresponding labels.
+Labeling large volumes of data in machine learning projects can become a headache. Projects that involve a computer vision component, such as image classification or object detection, often require thousands of images and corresponding labels.
-Azure Machine Learning gives you a central location to create, manage, and monitor labeling projects. Labeling projects help coordinate the data, labels, and team members, allowing you to more efficiently manage the labeling tasks. Currently supported tasks are image classification, either multi-label or multi-class, and object identification using bounded boxes.
+Azure Machine Learning provides a central location to create, manage, and monitor labeling projects. Labeling projects help coordinate the data, labels, and team members, so that you can more efficiently manage the labeling tasks. Currently supported tasks involve image classification, either multi-label or multi-class, and object identification using bounded boxes.
Create an [image labeling project](../how-to-create-image-labeling-projects.md) or [text labeling project](../how-to-create-text-labeling-projects.md), and output a dataset for use in machine learning experiments. -- ## Monitor model performance with data drift
-In the context of machine learning, data drift is the change in model input data that leads to model performance degradation. It is one of the top reasons model accuracy degrades over time, thus monitoring data drift helps detect model performance issues.
+In the context of machine learning, data drift involves the change in model input data that leads to model performance degradation. It's a major reason that model accuracy degrades over time, and data drift monitoring helps detect model performance issues.
-See the [Create a dataset monitor](how-to-monitor-datasets.md) article, to learn more about how to detect and alert to data drift on new data in a dataset.
+For more information, visit [Create a dataset monitor](how-to-monitor-datasets.md) to learn how to detect and alert to data drift on new data in a dataset.
-## Next steps
+## Next steps
-+ Create a dataset in Azure Machine Learning studio or with the Python SDK [using these steps.](how-to-create-register-datasets.md)
-+ Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
+- [Create a dataset in Azure Machine Learning studio or with the Python SDK](how-to-create-register-datasets.md)
+- Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
mysql How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-request-quota-increase.md
+
+ Title: Request quota increases for Azure Database for MySQL
+description: Request quota increases for Azure Database for MySQL - Flexible Server resources.
+++ Last updated : 02/29/2024+++++
+# Request quota increases for Azure Database for MySQL - Flexible Server
+
+The resources in Azure Database for MySQL - Flexible Server have default quotas/limits. However, your workload might need more quota than the default value. In that case, you must reach out to the Azure Database for MySQL - Flexible Server team to request a quota increase. This article explains how to request a quota increase for Azure Database for MySQL - Flexible Server resources.
+
+## Create a new support request
+
+To request a quota increase, you must create a new support request with your workload details. The Azure Database for MySQL flexible server team then processes your request and approves or denies it. Use the following steps to create a new support request from the Azure portal:
+
+1. Sign into the Azure portal.
+
+1. From the left-hand menu, select **Help + support** and then select **Create a support request**.
+
+1. In the **Problem Description** tab, fill in the following details:
+
+ - For **Summary**, provide a short description of your request, such as your workload, why the default values aren't sufficient, and any error messages you're observing.
+ - For **Issue type**, select **Service and subscription limits (quotas)**
+ - For **Subscription**, select the subscription for which you want to increase the quota.
+ - For **Quota type**, select **Azure Database for MySQL Flexible Server**
+
+ :::image type="content" source="media/how-to-request-quota-increase/request-quota-increase-mysql-flex.png" alt-text="Screenshot of new support request.":::
+
+1. In the **Additional Details** tab, enter the details corresponding to your quota request. The information provided on this tab is used to further assess your issue and help the support engineer troubleshoot the problem.
+1. Fill in the following details in this form:
+
+ - In **Request details**, select **Enter details**, select the relevant **Quota Type**, and provide the requested information for your specific quota request, such as Location, Series, and New Quota.
+
+ - **File upload**: Upload the diagnostic files or any other files that you think are relevant to the support request. To learn more on the file upload guidance, see the [Azure support](../../azure-portal/supportability/how-to-manage-azure-support-request.md#upload-files) article.
+
+ - **Allow collection of advanced diagnostic information?**: Choose **Yes** or **No**.
+
+ - **Severity**: Choose one of the available severity levels based on the business impact.
+
+ - **Preferred contact method**: You can either choose to be contacted over **Email** or by **Phone**.
+
+1. Fill out the remaining details such as your availability, support language, contact information, email, and phone number on the form.
+
+1. Select **Next: Review+Create**. Validate the information provided and select **Create** to create a support request.
+
+The Azure Database for MySQL - Flexible Server support team processes all quota requests in 24-48 hours.
+
+## Related content
+
+- [Create an Azure Database for MySQL - Flexible Server instance in the portal](/azure/mysql/flexible-server/quickstart-create-server-portal)
+- [Service limitations](/azure/mysql/flexible-server/concepts-limitations)
mysql Resolve Capacity Errors Mysql Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/resolve-capacity-errors-mysql-flex.md
+
+ Title: Resolve capacity errors
+description: The article describes how you can resolve capacity errors when deploying or scaling Azure Database for MySQL - Flexible Server.
+++ Last updated : 02/29/2024+++++
+# Resolve capacity errors for Azure Database for MySQL - Flexible Server
+
+The article describes how you can resolve capacity errors when deploying or scaling Azure Database for MySQL - Flexible Server.
+
+## Exceeded quota
+
+If you encounter any of the following errors when attempting to deploy your Azure MySQL - Flexible Server resource, [submit a request to increase your quota](how-to-request-quota-increase.md).
+
+- `Operation could not be completed as it results in exceeding approved {0} Cores quota. Additional details - Current Limit: {1}, Current Usage: {2}, Additional Required: {3}, (Minimum) New Limit Required: {4}.Submit a request for Quota increase by specifying parameters listed in the 'Details' section for deployment to succeed.`
+
+## Subscription access
+
+Your subscription might not have access to create a server in the selected region if your subscription isn't registered with the MySQL resource provider (RP).
+
+If you see any of the following errors, [Register your subscription with the MySQL RP](#register-with-mysql-rp) to resolve it.
+
+- `Your subscription does not have access to create a server in the selected region.`
+
+- `Provisioning is restricted in this region. Please choose a different region. For exceptions to this rule please open a support request with issue type of 'Service and subscription limits'`
+
+- `Location 'region name' is not accepting creation of new Azure Database for MySQL - Flexible servers for the subscription 'subscription id' at this time`
+
+## Enable region
+
+Your subscription might not have access to create a server in the selected region. To resolve this issue, [file a support request to access a region](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+
+If you see the following errors, file a support ticket to enable the specific region:
+- `Subscription 'Subscription name' is not allowed to provision in 'region name`
+- `Subscriptions are restricted from provisioning in this region. Please choose a different region. For exceptions to this rule please open a support request with the Issue type of 'Service and subscription limits.`
+
+## Availability Zone
+
+If you receive the following errors, select a different availability zone.
+
+- `Availability zone '{ID}' is not available for subscription '{Sub ID}' in this region temporarily due to capacity constraints.`
+- `Multi-Zone HA is not supported in this region. Please choose a different region. For exceptions to this rule please open a support request with the Issue type of 'Service and subscription limits'.`
+
+## SKU Not Available
+
+If you encounter the following error, select a different SKU type. SKU availability can differ across regions; either the specific SKU isn't supported in the region, or it's temporarily unavailable.
+
+`Specified SKU is not supported in this region. Please choose a different SKU.`
+
+## Register with MySQL RP
+
+To deploy Azure Database for MySQL - Flexible Server resources, register your subscription with the MySQL resource provider (RP).
+
+You can register your subscription using the Azure portal, [the Azure CLI](/cli/azure/install-azure-cli), or [Azure PowerShell](/powershell/azure/install-azure-powershell).
+
+#### [Azure portal](#tab/portal)
+
+To register your subscription in the Azure portal, follow these steps:
+
+1. In Azure portal, select **More services.**
+
+1. Go to **Subscriptions** and select your subscription.
+
+1. On the **Subscriptions** page, in the left hand pane under **Settings** select **Resource providers.**
+
+1. Enter **MySQL** in the filter to bring up the MySQL related extensions.
+
+1. Select **Register**, **Re-register**, or **Unregister** for the **Microsoft.DBforMySQL** provider, depending on your desired action.
+ :::image type="content" source="media/resolve-capacity-errors-mysql-flex/resource-provider-screen.png" alt-text="Screenshot of register mysql resource provider screen." lightbox="media/resolve-capacity-errors-mysql-flex/resource-provider-screen.png":::
+
+#### [Azure CLI](#tab/azure-cli-b)
+
+To register your subscription using [the Azure CLI](/cli/azure/install-azure-cli), run this command:
+
+```azurecli-interactive
+# Register the MySQL resource provider to your subscription
+az provider register --namespace Microsoft.DBforMySQL
+```
+
+#### [Azure PowerShell](#tab/powershell)
+
+To register your subscription using [Azure PowerShell](/powershell/azure/install-az-ps), run this cmdlet:
+
+```powershell-interactive
+# Register the MySQL resource provider to your subscription
+Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL
+```
+++
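+
+If you manage subscriptions programmatically, a minimal sketch with the `azure-mgmt-resource` Python package (the subscription ID is a placeholder) looks like this:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+# Register the MySQL resource provider and check its registration state
+client.providers.register("Microsoft.DBforMySQL")
+provider = client.providers.get("Microsoft.DBforMySQL")
+print(provider.registration_state)  # Registered (registration can take a few minutes)
+```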
+## Other provisioning issues
+
+If you're still experiencing provisioning issues, open a **Region** access request under the support topic of Azure Database for MySQL - Flexible Server and specify the vCores you want to utilize.
+
+## Azure Program regions
+
+Azure Program offerings (Azure Pass, Imagine, Azure for Students, MPN, BizSpark, BizSpark Plus, Microsoft for Startups / Sponsorship Offers, Microsoft Developer Network (MSDN) / Visual Studio Subscriptions) have access to a limited set of regions.
+
+If your subscription is part of the above offerings and you require access to any of the listed regions, submit an access request. Alternatively, you might opt for an alternate region:
+
+`Australia Central, Australia Central 2, Australia SouthEast, Brazil SouthEast, Canada East, China East, China North, China North 2, France South, Germany North, Japan West, Jio India Central, Jio India West, Korea South, Norway West, South Africa West, South India, Switzerland West, UAE Central, UK West, US DoD Central, US DoD East, US Gov Arizona, US Gov Texas, West Central US, West India.`
+
+## Related content
+
+- [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits)
notification-hubs Firebase Migration Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/firebase-migration-rest.md
+
+ Title: Azure Notification Hubs and the Google Firebase Cloud Messaging (FCM) migration using REST API and the Azure portal
+description: Describes how Azure Notification Hubs addresses the Google GCM to FCM migration using REST APIs.
++++ Last updated : 03/01/2024++
+ms.lastreviewed: 03/01/2024
++
+# Google Firebase Cloud Messaging migration using REST API and the Azure portal
+
+This article describes the core capabilities for the integration of Azure Notification Hubs with Firebase Cloud Messaging (FCM) v1. As a reminder, Google will stop supporting FCM legacy HTTP on June 20, 2024, so you must migrate your applications and notification payloads to the new format before then. All methods of onboarding will be ready for migration by March 1, 2024.
+
+## Concepts for FCM v1
+
+- A new platform type is supported, called **FCM v1**.
+- New APIs, credentials, registrations, and installations are used for FCM v1.
+
+> [!NOTE]
+> The existing FCM platform is referred to as *FCM legacy* in this article.
+
+## Migration steps
+
+The Firebase Cloud Messaging (FCM) legacy API will be deprecated by July 2024. You can begin migrating from the legacy HTTP protocol to FCM v1 on March 1, 2024. You must complete the migration by June 2024. This section describes the steps to migrate from FCM legacy to FCM v1 using the Notification Hubs REST API.
+
+## REST API
+
+The following section describes how to perform the migration using the REST API.
+
+### Step 1: Add FCM v1 credentials to hub
+
+The first step is to add credentials via the Azure portal, a management-plane hub operation, or data-plane hub operation.
+
+#### Create Google service account JSON file
+
+1. In the [Firebase console](https://console.firebase.google.com/), select your project and go to **Project settings**.
+1. Select the **Service accounts** tab, create a service account, and generate a private key from your Google service account.
+1. Select **Generate new private key** to generate a JSON file. Download and open the file. Replace the values for `project_id`, `private_key`, and `client_email`, as these are required for Azure Notification Hubs hub credential updates.
+
+ :::image type="content" source="media/firebase-migration-rest/firebase-project-settings.png" alt-text="Screenshot of Firebase console project settings." lightbox="media/firebase-migration-rest/firebase-project-settings.png":::
+
+ OR
+
+ If you want to create a service account with customized access permission, you can create a service account through the [IAM & Admin > Service Accounts page](https://console.cloud.google.com/iam-admin/serviceaccounts). Go to the page directly by clicking **Manage service account permissions**. You can create a service account that has **Firebase cloud messaging admin access** and use it for your notification hub credential update.
+
+ :::image type="content" source="media/firebase-migration-rest/service-accounts.png" alt-text="Screenshot showing IAM service account settings." lightbox="media/firebase-migration-rest/service-accounts.png":::
+
+#### Option 1: Update FcmV1 credentials via the Azure portal
+
+Go to your notification hub on the Azure portal, and select **Settings > Google (FCM v1)**. Get the **Private Key**, **Project ID**, and **Client Email** values from the service account JSON file acquired from the previous section, and save them for later use.
++
+#### Option 2: Update FcmV1 credentials via management plane hub operation
+
+See the [description of a NotificationHub FcmV1Credential](/rest/api/notificationhubs/notification-hubs/create-or-update?view=rest-notificationhubs-2023-10-01-preview&tabs=HTTP#fcmv1credential).
+
+- Use API version: 2023-10-01-preview
+- **FcmV1CredentialProperties**:
+
+ | Name | Type |
+ |--||
+ | `clientEmail` | string |
+ | `privateKey` | string |
+ | `projectId` | string |
+
+#### Option 3: Update FcmV1 credentials via data plane hub operation
+
+See [Create a notification hub](/rest/api/notificationhubs/create-notification-hub) and [Update a notification hub](/rest/api/notificationhubs/update-notification-hub).
+
+- Use API version: 2015-01
+- Make sure to put **FcmV1Credential** after **GcmCredential**, as the order is important.
+
+For example, the following is the request body:
+
+```xml
+<NotificationHubDescription xmlns:i='http://www.w3.org/2001/XMLSchema-instance'
+    xmlns='http://schemas.microsoft.com/netservices/2010/10/servicebus/connect'>
+    <ApnsCredential>
+        <Properties>
+            <Property>
+                <Name>Endpoint</Name>
+                <Value>{_apnsCredential.Endpoint}</Value>
+            </Property>
+            <Property>
+                <Name>AppId</Name>
+                <Value>{_apnsCredential.AppId}</Value>
+            </Property>
+            <Property>
+                <Name>AppName</Name>
+                <Value>{_apnsCredential.AppName}</Value>
+            </Property>
+            <Property>
+                <Name>KeyId</Name>
+                <Value>{_apnsCredential.KeyId}</Value>
+            </Property>
+            <Property>
+                <Name>Token</Name>
+                <Value>{_apnsCredential.Token}</Value>
+            </Property>
+        </Properties>
+    </ApnsCredential>
+    <WnsCredential>
+        <Properties>
+            <Property>
+                <Name>PackageSid</Name>
+                <Value>{_wnsCredential.PackageSid}</Value>
+            </Property>
+            <Property>
+                <Name>SecretKey</Name>
+                <Value>{_wnsCredential.SecretKey}</Value>
+            </Property>
+        </Properties>
+    </WnsCredential>
+    <GcmCredential>
+        <Properties>
+            <Property>
+                <Name>GoogleApiKey</Name>
+                <Value>{_gcmCredential.GoogleApiKey}</Value>
+            </Property>
+        </Properties>
+    </GcmCredential>
+    <FcmV1Credential>
+        <Properties>
+            <Property>
+                <Name>ProjectId</Name>
+                <Value>{_fcmV1Credential.ProjectId}</Value>
+            </Property>
+            <Property>
+                <Name>PrivateKey</Name>
+                <Value>{_fcmV1Credential.PrivateKey}</Value>
+            </Property>
+            <Property>
+                <Name>ClientEmail</Name>
+                <Value>{_fcmV1Credential.ClientEmail}</Value>
+            </Property>
+        </Properties>
+    </FcmV1Credential>
+</NotificationHubDescription>
+```
+
+### Step 2: Manage registration and installation
+
+For direct send scenarios, proceed directly to step 3. If you're using one of the Azure SDKs, see the [SDKs article](firebase-migration-sdk.md).
+
+#### Option 1: Create FCM v1 registration or update GCM registration to FCM v1
+
+If you have an existing GCM registration, update the registration to **FcmV1Registration**. See [Create or update a registration](/rest/api/notificationhubs/create-update-registration). If you don't have an existing **GcmRegistration**, create a new registration as **FcmV1Registration**. See [Create a registration](/rest/api/notificationhubs/create-registration). The registration request body should appear as in the following example:
+
+```xml
+// FcmV1Registration
+<?xml version="1.0" encoding="utf-8"?>
+<entry xmlns="http://www.w3.org/2005/Atom">
+    <content type="application/xml">
+        <FcmV1RegistrationDescription xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect">
+            <Tags>myTag, myOtherTag</Tags>
+            <FcmV1RegistrationId>{deviceToken}</FcmV1RegistrationId>
+        </FcmV1RegistrationDescription>
+    </content>
+</entry>
+
+// FcmV1TemplateRegistration
+<?xml version="1.0" encoding="utf-8"?>
+<entry xmlns="http://www.w3.org/2005/Atom">
+    <content type="application/xml">
+        <FcmV1TemplateRegistrationDescription xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect">
+            <Tags>myTag, myOtherTag</Tags>
+            <FcmV1RegistrationId>{deviceToken}</FcmV1RegistrationId>
+            <BodyTemplate><![CDATA[ {BodyTemplate}]]></BodyTemplate>
+        </FcmV1TemplateRegistrationDescription>
+    </content>
+</entry>
+```
+
+#### Option 2: Create FCM v1 installation or update GCM installation to FCM v1
+
+See [Create or overwrite an installation](/rest/api/notificationhubs/create-overwrite-installation) and set `platform` to `FCMV1`.
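+
+For orientation, here's an illustrative sketch (not the full installation schema; see the reference linked above) of an FCM v1 installation body, with placeholder values:
+
+```json
+{
+  "installationId": "<installation-id>",
+  "platform": "FCMV1",
+  "pushChannel": "<fcm-device-token>",
+  "tags": ["myTag"]
+}
+```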
+
+### Step 3: Send a push notification
+
+#### Option 1: Debug send
+
+Use this procedure to test notifications prior to option 2, 3, or 4. See [Notification Hubs - Debug Send](/rest/api/notificationhubs/notification-hubs/debug-send?view=rest-notificationhubs-2023-10-01-preview&tabs=HTTP).
+
+> [!NOTE]
+> Use API version: 2023-10-01-preview.
+
+In the header:
+
+| Request header | Value |
+|--||
+| `Content-Type` | `application/json;charset=utf-8` |
+| `ServiceBusNotification-Format` | Set to `fcmV1` or `template` |
+| `ServiceBusNotification-Tags` | {single tag identifier} |
+
+Test a payload that uses [the FCM v1 message structure](https://firebase.google.com/docs/reference/fcm/rest/v1/projects.messages/send) via debug send. FCM v1 introduces significant changes to the structure of the JSON message payload:
+
+1. The entire payload moves under a `message` object.
+1. Android-specific options move to the `android` object, and `time_to_live` is now `ttl` with a string value.
+1. The `data` field now allows only a flat string-to-string mapping.
+
+For more information, see [the FCM reference](https://firebase.google.com/docs/reference/fcm/rest/v1/projects.messages/send).
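+
+For illustration only (the values are placeholders, not part of the reference above), a legacy payload such as the first object below maps to the FCM v1 shape shown in the second:
+
+```json
+{
+  "notification": { "title": "Breaking News", "body": "FcmV1 is ready." },
+  "data": { "message": "Hello" },
+  "time_to_live": 3600
+}
+```
+
+```json
+{
+  "message": {
+    "notification": { "title": "Breaking News", "body": "FcmV1 is ready." },
+    "data": { "message": "Hello" },
+    "android": { "ttl": "3600s" }
+  }
+}
+```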
+
+Alternatively, you can perform a test send (debug send) via the Azure portal:
++
+#### Option 2: Direct send
+
+Perform a [direct send](/rest/api/notificationhubs/direct-send?view=rest-notificationhubs-2023-10-01-preview). In the request header, set `ServiceBusNotification-Format` to `fcmV1`.
+
+#### Option 3: FcmV1 native notification (audience send)
+
+Perform an FcmV1 native notification send. See [Send a Google Cloud Messaging (GCM) native notification](/rest/api/notificationhubs/send-gcm-native-notification?view=rest-notificationhubs-2023-10-01-preview). In the request header, set `ServiceBusNotification-Format` to `fcmV1`. For example, in the request body:
+
+```json
+{
+    "message": {
+        "notification": {
+            "title": "Breaking News",
+            "body": "FcmV1 is ready."
+        },
+        "android": {
+            "data": {
+                "name": "wrench",
+                "mass": "1.3kg",
+                "count": "3"
+            }
+        }
+    }
+}
+```
+
+#### Option 4: template notification
+
+You can test template sends with a new request body following [the new JSON payload structure](https://firebase.google.com/docs/reference/fcm/rest/v1/projects.messages/send). No other changes need to be made. See [Send a template notification](/rest/api/notificationhubs/send-template-notification?view=rest-notificationhubs-2023-10-01-preview).
+
+## Next steps
+
+[Firebase Cloud Messaging migration using Azure SDKs](firebase-migration-sdk.md)
notification-hubs Firebase Migration Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/firebase-migration-sdk.md
+
+ Title: Azure Notification Hubs and the Google Firebase Cloud Messaging (FCM) migration using SDKs
+description: Describes how Azure Notification Hubs addresses the Google Cloud Messaging (GCM) to FCM migration using the Azure SDKs.
++++ Last updated : 03/01/2024++
+ms.lastreviewed: 03/01/2024
++
+# Google Firebase Cloud Messaging migration using Azure SDKs
+
+Google will deprecate the Firebase Cloud Messaging (FCM) legacy API by July 2024. You can begin migrating from the legacy HTTP protocol to FCM v1 on March 1, 2024. You must complete the migration by June 2024. This section describes the steps to migrate from FCM legacy to FCM v1 using the Azure SDKs.
+
+## Prerequisites
+
+To update your FCM credentials, [follow step 1 in the REST API guide](firebase-migration-rest.md#step-1-add-fcm-v1-credentials-to-hub).
+
+## Android SDK
+
+1. Update the SDK version to `2.0.0` in the **build.gradle** file of your application. For example:
+
+ ```gradle
+ // This is not a complete build.gradle file; it only highlights the portions you need to update.
+
+ dependencies {
+ // Ensure the following line is updated in your app/library's "dependencies" section.
+ implementation 'com.microsoft.azure:notification-hubs-android-sdk:2.0.0'
+        // Optionally, use the FCM-optimized variant of the SDK instead:
+ // implementation 'com.microsoft.azure:notification-hubs-android-sdk-fcm:2.0.0'
+ }
+ ```
+
+1. Update the payload template. If you're not using templates, you can skip this step.
+
+ See the [FCM REST reference](https://firebase.google.com/docs/reference/fcm/rest/v1/projects.messages) for the FCM v1 payload structure. For information about migrating from the FCM legacy payload to the FCM v1 payload, see [Update the payload of send requests](https://firebase.google.com/docs/cloud-messaging/migrate-v1#update-the-payload-of-send-requests).
+
+ For example, if you're using registrations:
+
+    ```java
+ NotificationHub hub = new NotificationHub(BuildConfig.hubName, BuildConfig.hubListenConnectionString, context);
+ String template = "{\"message\":{\"android\":{\"data\":{\"message\":\"{'Notification Hub test notification: ' + $(myTextProp)}\"}}}}";
+ hub.registerTemplate(token, "template-name", template);
+ ```
+
+ If you're using installations:
+
+    ```java
+ InstallationTemplate testTemplate = new InstallationTemplate();
+ testTemplate.setBody("{\"message\":{\"android\":{\"data\":{\"message\":\"{'Notification Hub test notification: ' + $(myTextProp)}\"}}}}");
+ NotificationHub.setTemplate("testTemplate", testTemplate);
+ ```
+
+## Server SDKs (Data Plane)
+
+1. Update the SDK package to the version that supports FCM v1, as shown in the following table:
+
+ | SDK GitHub name | SDK package name | Version |
+ |-|--||
+ | azure-notificationhubs-dotnet | Microsoft.Azure.NotificationHubs | 4.2.0 |
+ | azure-notificationhubs-java-backend | com.windowsazure.Notification-Hubs-java-sdk | 1.1.0 |
+ | azure-sdk-for-js | @azure/notification-hubs | 1.1.0 |
+
+ For example, in the **.csproj** file:
+
+ ```xml
+ <PackageReference Include="Microsoft.Azure.NotificationHubs" Version="4.2.0" />
+ ```
+
+1. Add the `FcmV1Credential` to the notification hub. This step is a one-time setup. Unless you have many hubs and want to automate this step, you can use the REST API or the Azure portal to add the FCM v1 credentials:
+
+ ```csharp
+ // Create new notification hub with FCM v1 credentials
+ var hub = new NotificationHubDescription("hubname");
+ hub.FcmV1Credential = new FcmV1Credential("private-key", "project-id", "client-email");
+ hub = await namespaceManager.CreateNotificationHubAsync(hub);
+
+ // Update existing notification hub with FCM v1 credentials
+ var hub = await namespaceManager.GetNotificationHubAsync("hubname", CancellationToken.None);
+ hub.FcmV1Credential = new FcmV1Credential("private-key", "project-id", "client-email");
+ hub = await namespaceManager.UpdateNotificationHubAsync(hub, CancellationToken.None);
+ ```
+
+ ```java
+ // Create new notification hub with FCM V1 credentials
+ NamespaceManager namespaceManager = new NamespaceManager(namespaceConnectionString);
+ NotificationHubDescription hub = new NotificationHubDescription("hubname");
+ hub.setFcmV1Credential(new FcmV1Credential("private-key", "project-id", "client-email"));
+ hub = namespaceManager.createNotificationHub(hub);
+
+ // Updating existing Notification Hub with FCM V1 Credentials
+ NotificationHubDescription hub = namespaceManager.getNotificationHub("hubname");
+ hub.setFcmV1Credential(new FcmV1Credential("private-key", "project-id", "client-email"));
+ hub = namespaceManager.updateNotificationHub(hub);
+ ```
+
+1. Manage registrations and installations. For registrations, use `FcmV1RegistrationDescription` to register FCM v1 devices. For example:
+
+ ```csharp
+ // Create new Registration
+ var deviceToken = "device-token";
+ var tags = new HashSet<string> { "tag1", "tag2" };
+    FcmV1RegistrationDescription registration = await hub.CreateFcmV1NativeRegistrationAsync(deviceToken, tags);
+ ```
+
+ For Java, use `FcmV1Registration` to register FCMv1 devices:
+
+ ```java
+ // Create new registration
+ NotificationHub client = new NotificationHub(connectionString, hubName);
+ FcmV1Registration registration = client.createRegistration(new FcmV1Registration("fcm-device-token"));
+ ```
+
+ For JavaScript, use `createFcmV1RegistrationDescription` to register FCMv1 devices:
+
+ ```javascript
+ // Create FCM V1 registration
+ const context = createClientContext(connectionString, hubName);
+ const registration = createFcmV1RegistrationDescription({
+ fcmV1RegistrationId: "device-token",
+ });
+ const registrationResponse = await createRegistration(context, registration);
+ ```
+
+ For installations, use `NotificationPlatform.FcmV1` as the platform with `Installation`, or use `FcmV1Installation` to create FCM v1 installations:
+
+ ```csharp
+ // Create new installation
+ var installation = new Installation
+ {
+ InstallationId = "installation-id",
+ PushChannel = "device-token",
+ Platform = NotificationPlatform.FcmV1
+ };
+ await hubClient.CreateOrUpdateInstallationAsync(installation);
+
+ // Alternatively, you can use the FcmV1Installation class directly
+ var installation = new FcmV1Installation("installation-id", "device-token");
+ await hubClient.CreateOrUpdateInstallationAsync(installation);
+ ```
+
+ For Java, use `NotificationPlatform.FcmV1` as the platform:
+
+ ```java
+ // Create new installation
+ NotificationHub client = new NotificationHub(connectionString, hubName);
+ client.createOrUpdateInstallation(new Installation("installation-id", NotificationPlatform.FcmV1, "device-token"));
+ ```
+
+ For JavaScript, use `createFcmV1Installation` to create an FCMv1 installation:
+
+ ```javascript
+ // Create FCM V1 installation
+ const context = createClientContext(connectionString, hubName);
+ const installation = createFcmV1Installation({
+ installationId: "installation-id",
+ pushChannel: "device-token",
+ });
+ const result = await createOrUpdateInstallation(context, installation);
+ ```
+
+ Note the following considerations:
+
+ - If the device registration happens on the client app, update the client app first to register under the FCMv1 platform.
+ - If the device registration happens on the server, you can fetch all registrations/installations and update them to FCMv1 on the server.
+
+1. Send the notification to FCMv1. Use `FcmV1Notification` when you send notifications that target FCMv1. For example:
+
+ ```csharp
+ // Send FCM v1 notification
+ var jsonBody = "{\"message\":{\"android\":{\"data\":{\"message\":\"Notification Hub test notification\"}}}}";
+ var n = new FcmV1Notification(jsonBody);
+ NotificationOutcome outcome = await hub.SendNotificationAsync(n, "tag");
+ ```
+
+ ```java
+ // Send FCM V1 Notification
+ NotificationHub client = new NotificationHub(connectionString, hubName);
+ NotificationOutcome outcome = client.sendNotification(new FcmV1Notification("{\"message\":{\"android\":{\"data\":{\"message\":\"Notification Hub test notification\"}}}}"));
+ ```
+
+ ```javascript
+ // Send FCM V1 Notification
+ const context = createClientContext(connectionString, hubName);
+ const messageBody = `{
+ "message": {
+ "android": {
+ "data": {
+ "message": "Notification Hub test notification"
+ }
+ }
+ }
+ }`;
+
+ const notification = createFcmV1Notification({
+ body: messageBody,
+ });
+ const result = await sendNotification(context, notification);
+ ```
+
+## Next steps
+
+[Firebase Cloud Messaging migration using REST API](firebase-migration-rest.md)
notification-hubs Notification Hubs Android Push Notification Google Fcm Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md
mobile-android
ms.devlang: java Previously updated : 06/30/2023 Last updated : 03/01/2024 ms.lastreviewed: 09/11/2019
ms.lastreviewed: 09/11/2019
This tutorial shows you how to use Azure Notification Hubs and the Firebase Cloud Messaging (FCM) SDK version 0.6 to send push notifications to an Android application. In this tutorial, you create a blank Android app that receives push notifications by using Firebase Cloud Messaging (FCM).
+> [!IMPORTANT]
+> Google will stop supporting FCM legacy HTTP on June 20, 2024. For more information, see [Azure Notification Hubs and Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md).
The completed code for this tutorial can be downloaded [from GitHub](https://github.com/Azure/azure-notificationhubs-android/tree/master/FCMTutorialApp).
notification-hubs Notification Hubs Gcm To Fcm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-gcm-to-fcm.md
Title: Azure Notification Hubs and the Google Firebase Cloud Messaging (FCM) migration
-description: Describes how Azure Notification Hubs addresses the Google GCM to FCM migration.
+ Title: Azure Notification Hubs and the Google Firebase Cloud Messaging (FCM) migration using REST APIs and SDKs
+description: Describes how Azure Notification Hubs addresses the Google GCM to FCM migration using either REST APIs or SDKs.
Previously updated : 01/25/2024 Last updated : 03/01/2024
-ms.lastreviewed: 01/25/2024
+ms.lastreviewed: 03/01/2024
# Azure Notification Hubs and Google Firebase Cloud Messaging migration
+The core capabilities for the integration of Azure Notification Hubs with Firebase Cloud Messaging (FCM) v1 are available. As a reminder, Google will stop supporting FCM legacy HTTP on June 20, 2024, so you must migrate your applications and notification payloads to the new format before then.
+## Concepts for FCM v1
+- A new platform type is supported, called **FCM v1**.
+- New APIs, credentials, registrations, and installations are used for FCM v1.
+
+## Migration steps
+
+The Firebase Cloud Messaging (FCM) legacy API will be deprecated by July 2024. You can begin migrating from the legacy HTTP protocol to FCM v1 now. You must complete the migration by June 2024.
+
+- For information about migrating from FCM legacy to FCM v1 using the Azure SDKs, see [Google Firebase Cloud Messaging (FCM) migration using SDKs](firebase-migration-sdk.md).
+- For information about migrating from FCM legacy to FCM v1 using the Azure REST APIs, see [Google Firebase Cloud Messaging (FCM) migration using REST APIs](firebase-migration-rest.md).
+
+## Next steps
+
+- [What is Azure Notification Hubs?](notification-hubs-push-notification-overview.md)
operator-nexus Howto Baremetal Run Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-read.md
The command execution produces an output file containing the results that can be
## Executing a run-read command
-The run-read command executes a read-only command on the specified BMM.
+The run-read command lets you run a command on the BMM that doesn't change anything. Some commands have more
+than one word or require an argument to work; they're defined this way to distinguish them from commands
+that can change things. For example, run-read-command can use `kubectl get` but not `kubectl apply`. When you
+use these commands, put all of the words in the "command" field. For example,
+`{"command":"kubectl get","arguments":["nodes"]}` is correct; `{"command":"kubectl","arguments":["get","nodes"]}`
+is not.
-The current list of supported commands are:
+Also note that some commands begin with `nc-toolbox nc-toolbox-runread` and must be entered as shown.
+`nc-toolbox-runread` is a special container image that includes additional tools that aren't installed on the
+bare metal host, such as `ipmitool` and `racadm`.
+
+The following list shows the commands you can use. Commands shown in *italics* can't take `arguments`; the rest can.
-- `traceroute`-- `ping` - `arp`-- `tcpdump` - `brctl show` - `dmidecode`
+- *`fdisk -l`*
- `host`-- `ip link show`
+- *`hostname`*
+- *`ifconfig -a`*
+- *`ifconfig -s`*
- `ip address show`
+- `ip link show`
- `ip maddress show` - `ip route show` - `journalctl`-- `kubectl logs`-- `kubectl describe`-- `kubectl get` - `kubectl api-resources` - `kubectl api-versions`
+- `kubectl describe`
+- `kubectl get`
+- `kubectl logs`
+- *`mount`*
+- `ping`
+- *`ss`*
+- `tcpdump`
+- `traceroute`
- `uname`
+- *`ulimit -a`*
- `uptime`-- `fdisk -l`-- `hostname`-- `ifconfig -a`-- `ifconfig -s`-- `mount`-- `ss`-- `ulimit -a`
+- `nc-toolbox nc-toolbox-runread ipmitool channel authcap`
+- `nc-toolbox nc-toolbox-runread ipmitool channel info`
+- `nc-toolbox nc-toolbox-runread ipmitool chassis status`
+- `nc-toolbox nc-toolbox-runread ipmitool chassis power status`
+- `nc-toolbox nc-toolbox-runread ipmitool chassis restart cause`
+- `nc-toolbox nc-toolbox-runread ipmitool chassis poh`
+- `nc-toolbox nc-toolbox-runread ipmitool dcmi power get_limit`
+- `nc-toolbox nc-toolbox-runread ipmitool dcmi sensors`
+- `nc-toolbox nc-toolbox-runread ipmitool dcmi asset_tag`
+- `nc-toolbox nc-toolbox-runread ipmitool dcmi get_mc_id_string`
+- `nc-toolbox nc-toolbox-runread ipmitool dcmi thermalpolicy get`
+- `nc-toolbox nc-toolbox-runread ipmitool dcmi get_temp_reading`
+- `nc-toolbox nc-toolbox-runread ipmitool dcmi get_conf_param`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem lcd info`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem lcd status`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem mac list`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem mac get`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem lan get`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem powermonitor powerconsumption`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem powermonitor powerconsumptionhistory`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem powermonitor getpowerbudget`
+- `nc-toolbox nc-toolbox-runread ipmitool delloem vflash info card`
+- `nc-toolbox nc-toolbox-runread ipmitool echo`
+- `nc-toolbox nc-toolbox-runread ipmitool ekanalyzer print`
+- `nc-toolbox nc-toolbox-runread ipmitool ekanalyzer summary`
+- `nc-toolbox nc-toolbox-runread ipmitool fru print`
+- `nc-toolbox nc-toolbox-runread ipmitool fwum info`
+- `nc-toolbox nc-toolbox-runread ipmitool fwum status`
+- `nc-toolbox nc-toolbox-runread ipmitool fwum tracelog`
+- `nc-toolbox nc-toolbox-runread ipmitool gendev list`
+- `nc-toolbox nc-toolbox-runread ipmitool hpm rollbackstatus`
+- `nc-toolbox nc-toolbox-runread ipmitool hpm selftestresult`
+- `nc-toolbox nc-toolbox-runread ipmitool ime help`
+- `nc-toolbox nc-toolbox-runread ipmitool ime info`
+- `nc-toolbox nc-toolbox-runread ipmitool isol info`
+- `nc-toolbox nc-toolbox-runread ipmitool lan print`
+- `nc-toolbox nc-toolbox-runread ipmitool lan alert print`
+- `nc-toolbox nc-toolbox-runread ipmitool lan stats get`
+- `nc-toolbox nc-toolbox-runread ipmitool mc bootparam get`
+- `nc-toolbox nc-toolbox-runread ipmitool mc chassis poh`
+- `nc-toolbox nc-toolbox-runread ipmitool mc chassis policy list`
+- `nc-toolbox nc-toolbox-runread ipmitool mc chassis power status`
+- `nc-toolbox nc-toolbox-runread ipmitool mc chassis status`
+- `nc-toolbox nc-toolbox-runread ipmitool mc getenables`
+- `nc-toolbox nc-toolbox-runread ipmitool mc getsysinfo`
+- `nc-toolbox nc-toolbox-runread ipmitool mc guid`
+- `nc-toolbox nc-toolbox-runread ipmitool mc info`
+- `nc-toolbox nc-toolbox-runread ipmitool mc restart cause`
+- `nc-toolbox nc-toolbox-runread ipmitool mc watchdog get`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc bootparam get`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc chassis poh`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc chassis policy list`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc chassis power status`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc chassis status`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc getenables`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc getsysinfo`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc guid`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc info`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc restart cause`
+- `nc-toolbox nc-toolbox-runread ipmitool bmc watchdog get`
+- `nc-toolbox nc-toolbox-runread ipmitool nm alert get`
+- `nc-toolbox nc-toolbox-runread ipmitool nm capability`
+- `nc-toolbox nc-toolbox-runread ipmitool nm discover`
+- `nc-toolbox nc-toolbox-runread ipmitool nm policy get policy_id`
+- `nc-toolbox nc-toolbox-runread ipmitool nm policy limiting`
+- `nc-toolbox nc-toolbox-runread ipmitool nm statistics`
+- `nc-toolbox nc-toolbox-runread ipmitool nm suspend get`
+- `nc-toolbox nc-toolbox-runread ipmitool nm threshold get`
+- `nc-toolbox nc-toolbox-runread ipmitool pef`
+- `nc-toolbox nc-toolbox-runread ipmitool picmg addrinfo`
+- `nc-toolbox nc-toolbox-runread ipmitool picmg policy get`
+- `nc-toolbox nc-toolbox-runread ipmitool power status`
+- `nc-toolbox nc-toolbox-runread ipmitool sdr elist`
+- `nc-toolbox nc-toolbox-runread ipmitool sdr get`
+- `nc-toolbox nc-toolbox-runread ipmitool sdr info`
+- `nc-toolbox nc-toolbox-runread ipmitool sdr list`
+- `nc-toolbox nc-toolbox-runread ipmitool sdr type`
+- `nc-toolbox nc-toolbox-runread ipmitool sel elist`
+- `nc-toolbox nc-toolbox-runread ipmitool sel get`
+- `nc-toolbox nc-toolbox-runread ipmitool sel info`
+- `nc-toolbox nc-toolbox-runread ipmitool sel list`
+- `nc-toolbox nc-toolbox-runread ipmitool sel time get`
+- `nc-toolbox nc-toolbox-runread ipmitool sensor get`
+- `nc-toolbox nc-toolbox-runread ipmitool sensor list`
+- `nc-toolbox nc-toolbox-runread ipmitool session info`
+- `nc-toolbox nc-toolbox-runread ipmitool sol info`
+- `nc-toolbox nc-toolbox-runread ipmitool sol payload status`
+- `nc-toolbox nc-toolbox-runread ipmitool user list`
+- `nc-toolbox nc-toolbox-runread ipmitool user summary`
+- *`nc-toolbox nc-toolbox-runread racadm arp`*
+- *`nc-toolbox nc-toolbox-runread racadm coredump`*
+- `nc-toolbox nc-toolbox-runread racadm diagnostics`
+- `nc-toolbox nc-toolbox-runread racadm eventfilters get`
+- `nc-toolbox nc-toolbox-runread racadm fcstatistics`
+- `nc-toolbox nc-toolbox-runread racadm get`
+- `nc-toolbox nc-toolbox-runread racadm getconfig`
+- `nc-toolbox nc-toolbox-runread racadm gethostnetworkinterfaces`
+- *`nc-toolbox nc-toolbox-runread racadm getled`*
+- `nc-toolbox nc-toolbox-runread racadm getniccfg`
+- `nc-toolbox nc-toolbox-runread racadm getraclog`
+- `nc-toolbox nc-toolbox-runread racadm getractime`
+- `nc-toolbox nc-toolbox-runread racadm getsel`
+- `nc-toolbox nc-toolbox-runread racadm getsensorinfo`
+- `nc-toolbox nc-toolbox-runread racadm getssninfo`
+- `nc-toolbox nc-toolbox-runread racadm getsvctag`
+- `nc-toolbox nc-toolbox-runread racadm getsysinfo`
+- `nc-toolbox nc-toolbox-runread racadm gettracelog`
+- `nc-toolbox nc-toolbox-runread racadm getversion`
+- `nc-toolbox nc-toolbox-runread racadm hwinventory`
+- *`nc-toolbox nc-toolbox-runread racadm ifconfig`*
+- *`nc-toolbox nc-toolbox-runread racadm inlettemphistory get`*
+- `nc-toolbox nc-toolbox-runread racadm jobqueue view`
+- `nc-toolbox nc-toolbox-runread racadm lclog view`
+- `nc-toolbox nc-toolbox-runread racadm lclog viewconfigresult`
+- `nc-toolbox nc-toolbox-runread racadm license view`
+- *`nc-toolbox nc-toolbox-runread racadm netstat`*
+- `nc-toolbox nc-toolbox-runread racadm nicstatistics`
+- `nc-toolbox nc-toolbox-runread racadm ping`
+- `nc-toolbox nc-toolbox-runread racadm ping6`
+- *`nc-toolbox nc-toolbox-runread racadm racdump`*
+- `nc-toolbox nc-toolbox-runread racadm sslcertview`
+- *`nc-toolbox nc-toolbox-runread racadm swinventory`*
+- *`nc-toolbox nc-toolbox-runread racadm systemconfig getbackupscheduler`*
+- `nc-toolbox nc-toolbox-runread racadm systemperfstatistics` (PeakReset argument NOT allowed)
+- *`nc-toolbox nc-toolbox-runread racadm techsupreport getupdatetime`*
+- `nc-toolbox nc-toolbox-runread racadm traceroute`
+- `nc-toolbox nc-toolbox-runread racadm traceroute6`
+- `nc-toolbox nc-toolbox-runread racadm usercertview`
+- *`nc-toolbox nc-toolbox-runread racadm vflashsd status`*
+- *`nc-toolbox nc-toolbox-runread racadm vflashpartition list`*
+- *`nc-toolbox nc-toolbox-runread racadm vflashpartition status -a`*
The command syntax is:
az networkcloud baremetalmachine run-read-command --name "<machine-name>"
--limit-time-seconds <timeout> \ --commands '[{"command":"<command1>"},{"command":"<command2>","arguments":["<arg1>","<arg2>"]}]' \ --resource-group "<resourceGroupName>" \
- --subscription "<subscription>"
+ --subscription "<subscription>"
```
-These commands don't require `arguments`:
--- `fdisk -l`-- `hostname`-- `ifconfig -a`-- `ifconfig -s`-- `mount`-- `ss`-- `ulimit -a`-
-All other inputs are required.
- Multiple commands can be provided in json format to `--commands` option. For a command with multiple arguments, provide as a list to `arguments` parameter. See [Azure CLI Shorthand](https://github.com/Azure/azure-cli/blob/dev/doc/shorthand_syntax.md) for instructions on constructing the `--commands` structure.
This command runs synchronously. If you wish to skip waiting for the command to
When an optional argument `--output-directory` is provided, the output result is downloaded and extracted to the local directory.
-### This example executes the `hostname` command and a `ping` command.
+### This example executes the `hostname` command and a `ping` command
```azurecli
-az networkcloud baremetalmachine run-read-command --name "bareMetalMachineName" \
+az networkcloud baremetalmachine run-read-command --name "<bareMetalMachineName>" \
--limit-time-seconds 60 \
- --commands '[{"command":"hostname"],"arguments":["198.51.102.1","-c","3"]},{"command":"ping"}]' \
- --resource-group "resourceGroupName" \
- --subscription "<subscription>"
+ --commands '[{"command":"hostname"},{"command":"ping","arguments":["198.51.102.1","-c","3"]}]' \
+ --resource-group "<resourceGroupName>" \
+ --subscription "<subscription>"
```
-In the response, an HTTP status code of 202 is returned as the operation is performed asynchronously.
+### This example executes the `racadm getsysinfo -c` command
+
+```azurecli
+az networkcloud baremetalmachine run-read-command --name "<bareMetalMachineName>" \
+ --limit-time-seconds 60 \
+ --commands '[{"command":"nc-toolbox nc-toolbox-runread racadm getsysinfo","arguments":["-c"]}]' \
+ --resource-group "<resourceGroupName>" \
+ --subscription "<subscription>"
+```
## Checking command status and viewing output
-Sample output looks something as below. It prints the top 4K characters of the result to the screen for convenience and provides a short-lived link to the storage blob containing the command execution result. You can use the link to download the zipped output file (tar.gz).
+Sample output is shown. It prints the top 4,000 characters of the result to the screen for convenience and provides a short-lived link to the storage blob containing the command execution result. You can use the link to download the zipped output file (tar.gz).
```output ====Action Command Output====
postgresql How To Use Pgvector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-use-pgvector.md
Title: Vector search on Azure Database for PostgreSQL
-description: Vector search capabilities for retrieval augmented generation (RAG) on Azure Database for PostgreSQL .
+description: Enable semantic similarity search for retrieval augmented generation (RAG) on Azure Database for PostgreSQL with the pgvector extension.
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
The following table provides a brief description of each built-in role. Click th
> | <a name='azure-connected-machine-onboarding'></a>[Azure Connected Machine Onboarding](./built-in-roles/management-and-governance.md#azure-connected-machine-onboarding) | Can onboard Azure Connected Machines. | b64e21ea-ac4e-4cdf-9dc9-5b892992bee7 | > | <a name='azure-connected-machine-resource-administrator'></a>[Azure Connected Machine Resource Administrator](./built-in-roles/management-and-governance.md#azure-connected-machine-resource-administrator) | Can read, write, delete and re-onboard Azure Connected Machines. | cd570a14-e51a-42ad-bac8-bafd67325302 | > | <a name='azure-connected-machine-resource-manager'></a>[Azure Connected Machine Resource Manager](./built-in-roles/management-and-governance.md#azure-connected-machine-resource-manager) | Custom Role for AzureStackHCI RP to manage hybrid compute machines and hybrid connectivity endpoints in a resource group | f5819b54-e033-4d82-ac66-4fec3cbf3f4c |
-> | <a name='azure-resource-bridge-deployment-role'></a>[Azure Resource Bridge Deployment Role](./built-in-roles/management-and-governance.md#azure-resource-bridge-deployment-role) | Azure Resource Bridge Deployment Role | 7b1f81f9-4196-4058-8aae-762e593270df |
> | <a name='billing-reader'></a>[Billing Reader](./built-in-roles/management-and-governance.md#billing-reader) | Allows read access to billing data | fa23ad8b-c56e-40d8-ac0c-ce449e1d2c64 | > | <a name='blueprint-contributor'></a>[Blueprint Contributor](./built-in-roles/management-and-governance.md#blueprint-contributor) | Can manage blueprint definitions, but not assign them. | 41077137-e803-4205-871c-5a86e6a753b4 | > | <a name='blueprint-operator'></a>[Blueprint Operator](./built-in-roles/management-and-governance.md#blueprint-operator) | Can assign existing published blueprints, but cannot create new blueprints. Note that this only works if the assignment is done with a user-assigned managed identity. | 437d2ced-4a38-4302-8479-ed2bcb43d090 |
The following table provides a brief description of each built-in role. Click th
> [!div class="mx-tableFixed"] > | Built-in role | Description | ID | > | | | |
+> | <a name='azure-resource-bridge-deployment-role'></a>[Azure Resource Bridge Deployment Role](./built-in-roles/hybrid-multicloud.md#azure-resource-bridge-deployment-role) | Azure Resource Bridge Deployment Role | 7b1f81f9-4196-4058-8aae-762e593270df |
> | <a name='azure-stack-hci-administrator'></a>[Azure Stack HCI Administrator](./built-in-roles/hybrid-multicloud.md#azure-stack-hci-administrator) | Grants full access to the cluster and its resources, including the ability to register Azure Stack HCI and assign others as Azure Arc HCI VM Contributor and/or Azure Arc HCI VM Reader | bda0d508-adf1-4af0-9c28-88919fc3ae06 | > | <a name='azure-stack-hci-device-management-role'></a>[Azure Stack HCI Device Management Role](./built-in-roles/hybrid-multicloud.md#azure-stack-hci-device-management-role) | Microsoft.AzureStackHCI Device Management Role | 865ae368-6a45-4bd1-8fbf-0d5151f56fc1 | > | <a name='azure-stack-hci-vm-contributor'></a>[Azure Stack HCI VM Contributor](./built-in-roles/hybrid-multicloud.md#azure-stack-hci-vm-contributor) | Grants permissions to perform all VM actions | 874d1c73-6003-4e60-a13a-cb31ea190a85 |
role-based-access-control Ai Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/ai-machine-learning.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/analytics.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/compute.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/containers.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/databases.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/devops.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/general.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Hybrid Multicloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/hybrid-multicloud.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
This article lists the Azure built-in roles in the Hybrid + multicloud category.
+## Azure Resource Bridge Deployment Role
+
+Azure Resource Bridge Deployment Role
+
+[Learn more](/azure-stack/hci/deploy/deployment-azure-resource-manager-template)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](../permissions/management-and-governance.md#microsoftauthorization)/roleassignments/read | Get information about a role assignment. |
+> | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/Register/Action | Registers the subscription for the Azure Stack HCI resource provider and enables the creation of Azure Stack HCI resources. |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/register/action | Registers the subscription for Appliances resource provider and enables the creation of Appliance. |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/read | Gets an Appliance resource |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/write | Creates or Updates Appliance resource |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/delete | Deletes Appliance resource |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/locations/operationresults/read | Get result of Appliance operation |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/locations/operationsstatus/read | Get result of Appliance operation |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/listClusterUserCredential/action | Get an appliance cluster user credential |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/listKeys/action | Get an appliance cluster customer user keys |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/upgradeGraphs/read | Gets the upgrade graph of Appliance cluster |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/telemetryconfig/read | Get Appliances telemetry config utilized by Appliances CLI |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/operations/read | Gets list of Available Operations for Appliances |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/register/action | Registers the subscription for Custom Location resource provider and enables the creation of Custom Location. |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/read | Gets an Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/write | Creates or Updates Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/delete | Deletes Custom Location resource |
+> | [Microsoft.HybridConnectivity](../permissions/hybrid-multicloud.md#microsofthybridconnectivity)/register/action | Register the subscription for Microsoft.HybridConnectivity |
+> | [Microsoft.Kubernetes](../permissions/hybrid-multicloud.md#microsoftkubernetes)/register/action | Registers Subscription with Microsoft.Kubernetes resource provider |
+> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/register/action | Registers subscription to Microsoft.KubernetesConfiguration resource provider. |
+> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/write | Creates or updates extension resource. |
+> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/read | Gets extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/delete | Deletes extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/operations/read | Gets Async Operation status. |
+> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/namespaces/read | Get Namespace Resource |
+> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/operations/read | Gets available operations of the Microsoft.KubernetesConfiguration resource provider. |
+> | [Microsoft.GuestConfiguration](../permissions/management-and-governance.md#microsoftguestconfiguration)/guestConfigurationAssignments/read | Get guest configuration assignment. |
+> | [Microsoft.HybridContainerService](../permissions/hybrid-multicloud.md#microsofthybridcontainerservice)/register/action | Register the subscription for Microsoft.HybridContainerService |
+> | [Microsoft.HybridContainerService](../permissions/hybrid-multicloud.md#microsofthybridcontainerservice)/kubernetesVersions/read | Lists the supported kubernetes versions from the underlying custom location |
+> | [Microsoft.HybridContainerService](../permissions/hybrid-multicloud.md#microsofthybridcontainerservice)/kubernetesVersions/write | Puts the kubernetes version resource type |
+> | [Microsoft.HybridContainerService](../permissions/hybrid-multicloud.md#microsofthybridcontainerservice)/skus/read | Lists the supported VM SKUs from the underlying custom location |
+> | [Microsoft.HybridContainerService](../permissions/hybrid-multicloud.md#microsofthybridcontainerservice)/skus/write | Puts the VM SKUs resource type |
+> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/StorageContainers/Write | Creates/Updates storage containers resource |
+> | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/StorageContainers/Read | Gets/Lists storage containers resource |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Azure Resource Bridge Deployment Role",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/7b1f81f9-4196-4058-8aae-762e593270df",
+ "name": "7b1f81f9-4196-4058-8aae-762e593270df",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/roleassignments/read",
+ "Microsoft.AzureStackHCI/Register/Action",
+ "Microsoft.ResourceConnector/register/action",
+ "Microsoft.ResourceConnector/appliances/read",
+ "Microsoft.ResourceConnector/appliances/write",
+ "Microsoft.ResourceConnector/appliances/delete",
+ "Microsoft.ResourceConnector/locations/operationresults/read",
+ "Microsoft.ResourceConnector/locations/operationsstatus/read",
+ "Microsoft.ResourceConnector/appliances/listClusterUserCredential/action",
+ "Microsoft.ResourceConnector/appliances/listKeys/action",
+ "Microsoft.ResourceConnector/appliances/upgradeGraphs/read",
+ "Microsoft.ResourceConnector/telemetryconfig/read",
+ "Microsoft.ResourceConnector/operations/read",
+ "Microsoft.ExtendedLocation/register/action",
+ "Microsoft.ExtendedLocation/customLocations/deploy/action",
+ "Microsoft.ExtendedLocation/customLocations/read",
+ "Microsoft.ExtendedLocation/customLocations/write",
+ "Microsoft.ExtendedLocation/customLocations/delete",
+ "Microsoft.HybridConnectivity/register/action",
+ "Microsoft.Kubernetes/register/action",
+ "Microsoft.KubernetesConfiguration/register/action",
+ "Microsoft.KubernetesConfiguration/extensions/write",
+ "Microsoft.KubernetesConfiguration/extensions/read",
+ "Microsoft.KubernetesConfiguration/extensions/delete",
+ "Microsoft.KubernetesConfiguration/extensions/operations/read",
+ "Microsoft.KubernetesConfiguration/namespaces/read",
+ "Microsoft.KubernetesConfiguration/operations/read",
+ "Microsoft.GuestConfiguration/guestConfigurationAssignments/read",
+ "Microsoft.HybridContainerService/register/action",
+ "Microsoft.HybridContainerService/kubernetesVersions/read",
+ "Microsoft.HybridContainerService/kubernetesVersions/write",
+ "Microsoft.HybridContainerService/skus/read",
+ "Microsoft.HybridContainerService/skus/write",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.AzureStackHCI/StorageContainers/Write",
+ "Microsoft.AzureStackHCI/StorageContainers/Read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Resource Bridge Deployment Role",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+ ## Azure Stack HCI Administrator Grants full access to the cluster and its resources, including the ability to register Azure Stack HCI and assign others as Azure Arc HCI VM Contributor and/or Azure Arc HCI VM Reader
Grants full access to the cluster and its resources, including the ability to re
> | [Microsoft.HybridCompute](../permissions/hybrid-multicloud.md#microsofthybridcompute)/licenses/read | Reads any Azure Arc licenses | > | [Microsoft.HybridCompute](../permissions/hybrid-multicloud.md#microsofthybridcompute)/licenses/write | Installs or Updates an Azure Arc licenses | > | [Microsoft.HybridCompute](../permissions/hybrid-multicloud.md#microsofthybridcompute)/licenses/delete | Deletes an Azure Arc licenses |
-> | Microsoft.ResourceConnector/register/action | Registers the subscription for Appliances resource provider and enables the creation of Appliance. |
-> | Microsoft.ResourceConnector/appliances/read | Gets an Appliance resource |
-> | Microsoft.ResourceConnector/appliances/write | Creates or Updates Appliance resource |
-> | Microsoft.ResourceConnector/appliances/delete | Deletes Appliance resource |
-> | Microsoft.ResourceConnector/locations/operationresults/read | Get result of Appliance operation |
-> | Microsoft.ResourceConnector/locations/operationsstatus/read | Get result of Appliance operation |
-> | Microsoft.ResourceConnector/appliances/listClusterUserCredential/action | Get an appliance cluster user credential |
-> | Microsoft.ResourceConnector/appliances/listKeys/action | Get an appliance cluster customer user keys |
-> | Microsoft.ResourceConnector/operations/read | Gets list of Available Operations for Appliances |
-> | Microsoft.ExtendedLocation/register/action | Registers the subscription for Custom Location resource provider and enables the creation of Custom Location. |
-> | Microsoft.ExtendedLocation/customLocations/read | Gets an Custom Location resource |
-> | Microsoft.ExtendedLocation/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
-> | Microsoft.ExtendedLocation/customLocations/write | Creates or Updates Custom Location resource |
-> | Microsoft.ExtendedLocation/customLocations/delete | Deletes Custom Location resource |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/register/action | Registers the subscription for Appliances resource provider and enables the creation of Appliance. |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/read | Gets an Appliance resource |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/write | Creates or Updates Appliance resource |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/delete | Deletes Appliance resource |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/locations/operationresults/read | Get result of Appliance operation |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/locations/operationsstatus/read | Get result of Appliance operation |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/listClusterUserCredential/action | Get an appliance cluster user credential |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/appliances/listKeys/action | Get an appliance cluster customer user keys |
+> | [Microsoft.ResourceConnector](../permissions/hybrid-multicloud.md#microsoftresourceconnector)/operations/read | Gets list of Available Operations for Appliances |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/register/action | Registers the subscription for Custom Location resource provider and enables the creation of Custom Location. |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/read | Gets an Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/write | Creates or Updates Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/delete | Deletes Custom Location resource |
> | Microsoft.EdgeMarketplace/offers/read | Get a Offer | > | Microsoft.EdgeMarketplace/publishers/read | Get a Publisher | > | [Microsoft.Kubernetes](../permissions/hybrid-multicloud.md#microsoftkubernetes)/register/action | Registers Subscription with Microsoft.Kubernetes resource provider |
Grants full access to the cluster and its resources, including the ability to re
> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/StorageContainers/Write | Creates/Updates storage containers resource | > | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/StorageContainers/Read | Gets/Lists storage containers resource |
-> | Microsoft.HybridContainerService/register/action | Register the subscription for Microsoft.HybridContainerService |
+> | [Microsoft.HybridContainerService](../permissions/hybrid-multicloud.md#microsofthybridcontainerservice)/register/action | Register the subscription for Microsoft.HybridContainerService |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Grants permissions to perform all VM actions
> | [Microsoft.HybridCompute](../permissions/hybrid-multicloud.md#microsofthybridcompute)/licenses/read | Reads any Azure Arc licenses | > | [Microsoft.HybridCompute](../permissions/hybrid-multicloud.md#microsofthybridcompute)/licenses/write | Installs or Updates an Azure Arc licenses | > | [Microsoft.HybridCompute](../permissions/hybrid-multicloud.md#microsofthybridcompute)/licenses/delete | Deletes an Azure Arc licenses |
-> | Microsoft.ExtendedLocation/customLocations/Read | Gets an Custom Location resource |
-> | Microsoft.ExtendedLocation/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/Read | Gets an Custom Location resource |
+> | [Microsoft.ExtendedLocation](../permissions/hybrid-multicloud.md#microsoftextendedlocation)/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/read | Gets extension instance resource. | > | **NotActions** | | > | *none* | |
role-based-access-control Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/identity.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/integration.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Internet Of Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/internet-of-things.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Management And Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/management-and-governance.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Custom Role for AzureStackHCI RP to manage hybrid compute machines and hybrid co
} ```
-## Azure Resource Bridge Deployment Role
-
-Azure Resource Bridge Deployment Role
-
-[Learn more](/azure/azure-arc/resource-bridge/overview)
-
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/Register/Action | Registers the subscription for the Azure Stack HCI resource provider and enables the creation of Azure Stack HCI resources. |
-> | Microsoft.ResourceConnector/register/action | Registers the subscription for Appliances resource provider and enables the creation of Appliance. |
-> | Microsoft.ResourceConnector/appliances/read | Gets an Appliance resource |
-> | Microsoft.ResourceConnector/appliances/write | Creates or Updates Appliance resource |
-> | Microsoft.ResourceConnector/appliances/delete | Deletes Appliance resource |
-> | Microsoft.ResourceConnector/locations/operationresults/read | Get result of Appliance operation |
-> | Microsoft.ResourceConnector/locations/operationsstatus/read | Get result of Appliance operation |
-> | Microsoft.ResourceConnector/appliances/listClusterUserCredential/action | Get an appliance cluster user credential |
-> | Microsoft.ResourceConnector/appliances/listKeys/action | Get an appliance cluster customer user keys |
-> | Microsoft.ResourceConnector/appliances/upgradeGraphs/read | Gets the upgrade graph of Appliance cluster |
-> | Microsoft.ResourceConnector/telemetryconfig/read | Get Appliances telemetry config utilized by Appliances CLI |
-> | Microsoft.ResourceConnector/operations/read | Gets list of Available Operations for Appliances |
-> | Microsoft.ExtendedLocation/register/action | Registers the subscription for Custom Location resource provider and enables the creation of Custom Location. |
-> | Microsoft.ExtendedLocation/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
-> | Microsoft.ExtendedLocation/customLocations/read | Gets an Custom Location resource |
-> | Microsoft.ExtendedLocation/customLocations/write | Creates or Updates Custom Location resource |
-> | Microsoft.ExtendedLocation/customLocations/delete | Deletes Custom Location resource |
-> | [Microsoft.HybridConnectivity](../permissions/hybrid-multicloud.md#microsofthybridconnectivity)/register/action | Register the subscription for Microsoft.HybridConnectivity |
-> | [Microsoft.Kubernetes](../permissions/hybrid-multicloud.md#microsoftkubernetes)/register/action | Registers Subscription with Microsoft.Kubernetes resource provider |
-> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/register/action | Registers subscription to Microsoft.KubernetesConfiguration resource provider. |
-> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/write | Creates or updates extension resource. |
-> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/read | Gets extension instance resource. |
-> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/delete | Deletes extension instance resource. |
-> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/extensions/operations/read | Gets Async Operation status. |
-> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/namespaces/read | Get Namespace Resource |
-> | [Microsoft.KubernetesConfiguration](../permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration)/operations/read | Gets available operations of the Microsoft.KubernetesConfiguration resource provider. |
-> | [Microsoft.GuestConfiguration](../permissions/management-and-governance.md#microsoftguestconfiguration)/guestConfigurationAssignments/read | Get guest configuration assignment. |
-> | Microsoft.HybridContainerService/register/action | Register the subscription for Microsoft.HybridContainerService |
-> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/StorageContainers/Write | Creates/Updates storage containers resource |
-> | [Microsoft.AzureStackHCI](../permissions/hybrid-multicloud.md#microsoftazurestackhci)/StorageContainers/Read | Gets/Lists storage containers resource |
-> | **NotActions** | |
-> | *none* | |
-> | **DataActions** | |
-> | *none* | |
-> | **NotDataActions** | |
-> | *none* | |
-
-```json
-{
- "assignableScopes": [
- "/"
- ],
- "description": "Azure Resource Bridge Deployment Role",
- "id": "/providers/Microsoft.Authorization/roleDefinitions/7b1f81f9-4196-4058-8aae-762e593270df",
- "name": "7b1f81f9-4196-4058-8aae-762e593270df",
- "permissions": [
- {
- "actions": [
- "Microsoft.AzureStackHCI/Register/Action",
- "Microsoft.ResourceConnector/register/action",
- "Microsoft.ResourceConnector/appliances/read",
- "Microsoft.ResourceConnector/appliances/write",
- "Microsoft.ResourceConnector/appliances/delete",
- "Microsoft.ResourceConnector/locations/operationresults/read",
- "Microsoft.ResourceConnector/locations/operationsstatus/read",
- "Microsoft.ResourceConnector/appliances/listClusterUserCredential/action",
- "Microsoft.ResourceConnector/appliances/listKeys/action",
- "Microsoft.ResourceConnector/appliances/upgradeGraphs/read",
- "Microsoft.ResourceConnector/telemetryconfig/read",
- "Microsoft.ResourceConnector/operations/read",
- "Microsoft.ExtendedLocation/register/action",
- "Microsoft.ExtendedLocation/customLocations/deploy/action",
- "Microsoft.ExtendedLocation/customLocations/read",
- "Microsoft.ExtendedLocation/customLocations/write",
- "Microsoft.ExtendedLocation/customLocations/delete",
- "Microsoft.HybridConnectivity/register/action",
- "Microsoft.Kubernetes/register/action",
- "Microsoft.KubernetesConfiguration/register/action",
- "Microsoft.KubernetesConfiguration/extensions/write",
- "Microsoft.KubernetesConfiguration/extensions/read",
- "Microsoft.KubernetesConfiguration/extensions/delete",
- "Microsoft.KubernetesConfiguration/extensions/operations/read",
- "Microsoft.KubernetesConfiguration/namespaces/read",
- "Microsoft.KubernetesConfiguration/operations/read",
- "Microsoft.GuestConfiguration/guestConfigurationAssignments/read",
- "Microsoft.HybridContainerService/register/action",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.AzureStackHCI/StorageContainers/Write",
- "Microsoft.AzureStackHCI/StorageContainers/Read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ],
- "roleName": "Azure Resource Bridge Deployment Role",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
-}
-```
- ## Billing Reader Allows read access to billing data
role-based-access-control Mixed Reality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/mixed-reality.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/monitor.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/networking.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/security.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/storage.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Web And Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/web-and-mobile.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Ai Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/ai-machine-learning.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Cognitive Services](/azure/cognitive-services/)
> | Microsoft.CognitiveServices/accounts/ContentSafety/text:detectjailbreak/action | A synchronous API for the analysis of text jailbreak. | > | Microsoft.CognitiveServices/accounts/ContentSafety/text:adaptiveannotate/action | A remote procedure call (RPC) operation. | > | Microsoft.CognitiveServices/accounts/ContentSafety/text:detectungroundedness/action | A synchronous API for the analysis of language model outputs to determine if they align with the information provided by the user or contain fictional content. |
+> | Microsoft.CognitiveServices/accounts/ContentSafety/text:detectinjectionattacks/action | A synchronous API for the analysis of text injection attacks. |
> | Microsoft.CognitiveServices/accounts/ContentSafety/blocklisthitcalls/read | Show blocklist hit request count at different timestamps. | > | Microsoft.CognitiveServices/accounts/ContentSafety/blocklisttopterms/read | List top terms hit in blocklist at different timestamps. | > | Microsoft.CognitiveServices/accounts/ContentSafety/categories/severities/requestcounts/read | List API request count number of a specific category and a specific severity given a time range. Default maxpagesize is 1000. |
Azure service: [Cognitive Services](/azure/cognitive-services/)
> | Microsoft.CognitiveServices/accounts/OpenAI/assistants/threads/runs/write | Create or update assistant thread run. | > | Microsoft.CognitiveServices/accounts/OpenAI/assistants/threads/runs/read | Retrieve assistant thread run. | > | Microsoft.CognitiveServices/accounts/OpenAI/assistants/threads/runs/steps/read | Retrieve assistant thread run step. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/batch-jobs/write | Creates Batch Inference jobs. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/batch-jobs/delete | Deletes Batch Inference jobs. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/batch-jobs/read | Gets information about batch jobs. |
> | Microsoft.CognitiveServices/accounts/OpenAI/deployments/search/action | Search for the most relevant documents using the current engine. | > | Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/action | Create a completion from a chosen model. | > | Microsoft.CognitiveServices/accounts/OpenAI/deployments/read | Gets information about deployments. |
Azure service: [Machine Learning](/azure/machine-learning/)
> | Microsoft.MachineLearningServices/workspaces/connections/write | Creates or updates a Machine Learning Services connection(s) | > | Microsoft.MachineLearningServices/workspaces/connections/delete | Deletes the Machine Learning Services connection(s) | > | Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action | Gets the Machine Learning Services connection with secret values |
+> | Microsoft.MachineLearningServices/workspaces/connections/deployments/read | Gets the Machine Learning Services AzureOpenAI Connection deployment |
+> | Microsoft.MachineLearningServices/workspaces/connections/deployments/write | Creates or Updates the Machine Learning Services AzureOpenAI Connection deployment |
+> | Microsoft.MachineLearningServices/workspaces/connections/deployments/delete | Deletes the Machine Learning Services AzureOpenAI Connection deployment |
+> | Microsoft.MachineLearningServices/workspaces/connections/models/read | Gets the Machine Learning Services AzureOpenAI Connection model |
> | Microsoft.MachineLearningServices/workspaces/data/read | Reads Data container in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/data/write | Writes Data container in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/data/delete | Deletes Data container in Machine Learning Services Workspace(s) |
Azure service: [Machine Learning](/azure/machine-learning/)
> | Microsoft.MachineLearningServices/workspaces/managedstorages/claim/read | Get my claims on data | > | Microsoft.MachineLearningServices/workspaces/managedstorages/claim/write | Update my claims on data | > | Microsoft.MachineLearningServices/workspaces/managedstorages/quota/read | Get my data quota usage |
+> | Microsoft.MachineLearningServices/workspaces/marketplaceSubscriptions/read | Gets the Machine Learning Service Workspaces Marketplace Subscription(s) |
+> | Microsoft.MachineLearningServices/workspaces/marketplaceSubscriptions/write | Creates or Updates the Machine Learning Service Workspaces Marketplace Subscription(s) |
+> | Microsoft.MachineLearningServices/workspaces/marketplaceSubscriptions/delete | Deletes the Machine Learning Service Workspaces Marketplace Subscription(s) |
> | Microsoft.MachineLearningServices/workspaces/metadata/artifacts/read | Gets artifacts in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/metadata/artifacts/write | Creates or updates artifacts in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/metadata/artifacts/delete | Deletes artifacts in Machine Learning Services Workspace(s) |
role-based-access-control Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/analytics.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Data Factory](/azure/data-factory/)
> | Microsoft.DataFactory/factories/querytriggerruns/action | Queries the Trigger Runs. | > | Microsoft.DataFactory/factories/querypipelineruns/action | Queries the Pipeline Runs. | > | Microsoft.DataFactory/factories/querydebugpipelineruns/action | Queries the Debug Pipeline Runs. |
+> | Microsoft.DataFactory/factories/PrivateEndpointConnectionsApproval/action | Approve Private Endpoint Connection. |
> | Microsoft.DataFactory/factories/adfcdcs/read | Reads ADF Change data capture. | > | Microsoft.DataFactory/factories/adfcdcs/delete | Deletes ADF Change data capture. | > | Microsoft.DataFactory/factories/adfcdcs/write | Create or update ADF Change data capture. |
role-based-access-control Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/compute.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Azure Container Apps](/azure/container-apps/)
> | microsoft.app/builds/delete | Delete a Managed Environment's Build | > | microsoft.app/builds/listauthtoken/action | Gets the token used to connect to the build endpoints, such as source code upload or build log streaming. | > | microsoft.app/connectedenvironments/join/action | Allows to create a Container App or Container Apps Job in a Connected Environment |
-> | microsoft.app/connectedenvironments/checknameavailability/action | Check reource name availability for a Connected Environment |
-> | microsoft.app/connectedenvironments/write | Create or update a Connected Environment |
-> | microsoft.app/connectedenvironments/delete | Delete a Connected Environment |
-> | microsoft.app/connectedenvironments/read | Get a Connected Environment |
-> | microsoft.app/connectedenvironments/certificates/write | Create or update a Connected Environment Certificate |
-> | microsoft.app/connectedenvironments/certificates/read | Get a Connected Environment's Certificate |
-> | microsoft.app/connectedenvironments/certificates/delete | Delete a Connected Environment's Certificate |
-> | microsoft.app/connectedenvironments/daprcomponents/write | Create or Update Connected Environment Dapr Component |
-> | microsoft.app/connectedenvironments/daprcomponents/read | Read Connected Environment Dapr Component |
-> | microsoft.app/connectedenvironments/daprcomponents/delete | Delete Connected Environment Dapr Component |
-> | microsoft.app/connectedenvironments/daprcomponents/listsecrets/action | List Secrets of a Dapr Component |
-> | microsoft.app/connectedenvironments/storages/read | Get storage for a Connected Environment. |
-> | microsoft.app/connectedenvironments/storages/write | Create or Update a storage of Connected Environment. |
-> | microsoft.app/connectedenvironments/storages/delete | Delete a storage of Connected Environment. |
> | microsoft.app/containerapp/resiliencypolicies/read | Get App Resiliency Policy | > | microsoft.app/containerapps/write | Create or update a Container App | > | microsoft.app/containerapps/delete | Delete a Container App |
Azure service: [Azure Container Apps](/azure/container-apps/)
> | microsoft.app/jobs/stop/execution/backport/action | Stop a Container Apps Job's specific execution | > | microsoft.app/locations/availablemanagedenvironmentsworkloadprofiletypes/read | Get Available Workload Profile Types in a Region | > | microsoft.app/locations/billingmeters/read | Get Billing Meters in a Region |
-> | microsoft.app/locations/connectedenvironmentoperationresults/read | Get a Connected Environment Long Running Operation Result |
-> | microsoft.app/locations/connectedenvironmentoperationstatuses/read | Get a Connected Environment Long Running Operation Status |
> | microsoft.app/locations/containerappoperationresults/read | Get a Container App Long Running Operation Result | > | microsoft.app/locations/containerappoperationstatuses/read | Get a Container App Long Running Operation Status | > | microsoft.app/locations/containerappsjoboperationresults/read | Get a Container Apps Job Long Running Operation Result |
Azure service: [Azure Container Apps](/azure/container-apps/)
> | microsoft.app/sessionpools/sessions/read | Get a Session | > | **DataAction** | **Description** | > | microsoft.app/sessionpools/interpreters/execute/action | Execute Code |
-> | microsoft.app/sessionpools/interpreters/read | Read interpreter resources |
## Microsoft.AppPlatform
role-based-access-control Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/containers.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/databases.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Azure Database for MySQL](/azure/mysql/)
> | Microsoft.DBforMySQL/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection | > | Microsoft.DBforMySQL/register/action | Register MySQL Resource Provider | > | Microsoft.DBforMySQL/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. |
-> | Microsoft.DBforMySQL/flexibleServers/resetGtid/action | |
> | Microsoft.DBforMySQL/flexibleServers/read | Returns the list of servers or gets the properties for the specified server. | > | Microsoft.DBforMySQL/flexibleServers/write | Creates a server with the specified parameters or updates the properties or tags for the specified server. | > | Microsoft.DBforMySQL/flexibleServers/delete | Deletes an existing server. |
+> | Microsoft.DBforMySQL/flexibleServers/validateEstimateHighAvailability/action | |
+> | Microsoft.DBforMySQL/flexibleServers/getReplicationStatusForMigration/action | Return whether the replication is able to migration. |
+> | Microsoft.DBforMySQL/flexibleServers/resetGtid/action | |
> | Microsoft.DBforMySQL/flexibleServers/checkServerVersionUpgradeAvailability/action | |
+> | Microsoft.DBforMySQL/flexibleServers/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection |
> | Microsoft.DBforMySQL/flexibleServers/backupAndExport/action | Creates a server backup for long term with specific backup name and export it. | > | Microsoft.DBforMySQL/flexibleServers/validateBackup/action | Validate that the server is ready for backup. | > | Microsoft.DBforMySQL/flexibleServers/checkHaReplica/action | |
Azure service: [Azure Database for MySQL](/azure/mysql/)
> | Microsoft.DBforMySQL/flexibleServers/advancedThreatProtectionSettings/write | Update the server's advanced threat protection setting. | > | Microsoft.DBforMySQL/flexibleServers/backups/write | Creates a server backup with specific backup name. | > | Microsoft.DBforMySQL/flexibleServers/backups/read | Returns the list of backups for a server or gets the properties for the specified backup. |
+> | Microsoft.DBforMySQL/flexibleServers/backupsv2/write | |
+> | Microsoft.DBforMySQL/flexibleServers/backupsv2/read | |
> | Microsoft.DBforMySQL/flexibleServers/configurations/read | Returns the list of MySQL server configurations or gets the configurations for the specified server. | > | Microsoft.DBforMySQL/flexibleServers/configurations/write | Updates the configuration of a MySQL server. | > | Microsoft.DBforMySQL/flexibleServers/databases/read | Returns the list of databases for a server or gets the properties for the specified database. |
Azure service: [Azure Database for MySQL](/azure/mysql/)
> | Microsoft.DBforMySQL/flexibleServers/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for MySQL servers | > | Microsoft.DBforMySQL/flexibleServers/providers/Microsoft.Insights/metricDefinitions/read | Return types of metrics that are available for databases | > | Microsoft.DBforMySQL/flexibleServers/replicas/read | Returns the list of read replicas for a MySQL server |
+> | Microsoft.DBforMySQL/flexibleServers/supportedFeatures/read | Return the list of the MySQL Server Supported Features |
> | Microsoft.DBforMySQL/locations/checkVirtualNetworkSubnetUsage/action | Checks the subnet usage for specified delegated virtual network. | > | Microsoft.DBforMySQL/locations/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. | > | Microsoft.DBforMySQL/locations/listMigrations/action | Return the List of MySQL scheduled auto migrations |
Azure service: [Azure Database for PostgreSQL](/azure/postgresql/)
> | Microsoft.DBforPostgreSQL/flexibleServers/testConnectivity/action | | > | Microsoft.DBforPostgreSQL/flexibleServers/startLtrBackup/action | Start long term backup for a server | > | Microsoft.DBforPostgreSQL/flexibleServers/ltrPreBackup/action | Checks if a server is ready for a long term backup |
+> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionsApproval/action | Determines if the user is allowed to approve a private endpoint connection |
> | Microsoft.DBforPostgreSQL/flexibleServers/administrators/read | Return the list of server administrators or gets the properties for the specified server administrator. | > | Microsoft.DBforPostgreSQL/flexibleServers/administrators/delete | Deletes an existing PostgreSQL server administrator. | > | Microsoft.DBforPostgreSQL/flexibleServers/administrators/write | Creates a server administrator with the specified parameters or update the properties or tags for the specified server administrator. |
Azure service: [Azure Database for PostgreSQL](/azure/postgresql/)
> | Microsoft.DBforPostgreSQL/flexibleServers/migrations/read | Gets the properties for the specified migration workflow. | > | Microsoft.DBforPostgreSQL/flexibleServers/migrations/read | List of migration workflows for the specified database server. | > | Microsoft.DBforPostgreSQL/flexibleServers/migrations/write | Update the properties for the specified migration. |
-> | Microsoft.DBforPostgreSQL/flexibleServers/migrations/delete | Deletes an existing migration workflow. |
> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/read | Returns the list of private endpoint connection proxies or gets the properties for the specified private endpoint connection proxy. | > | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/delete | Deletes an existing private endpoint connection proxy resource. | > | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/write | Creates a private endpoint connection proxy with the specified parameters or updates the properties or tags for the specified private endpoint connection proxy | > | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/validate/action | Validates a private endpoint connection create call from NRP side | > | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnections/read | Returns the list of private endpoint connections or gets the properties for the specified private endpoint connection. |
-> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnections/read | |
> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnections/delete | Deletes an existing private endpoint connection | > | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnections/write | Approves or rejects an existing private endpoint connection | > | Microsoft.DBforPostgreSQL/flexibleServers/privateLinkResources/read | Return a list containing private link resource or gets the specified private link resource. |
Azure service: [Azure Database for PostgreSQL](/azure/postgresql/)
> | Microsoft.DBforPostgreSQL/locations/privateEndpointConnectionOperationResults/read | Gets the result for a private endpoint connection operation | > | Microsoft.DBforPostgreSQL/locations/privateEndpointConnectionProxyAzureAsyncOperation/read | Gets the result for a private endpoint connection proxy operation | > | Microsoft.DBforPostgreSQL/locations/privateEndpointConnectionProxyOperationResults/read | Gets the result for a private endpoint connection proxy operation |
+> | Microsoft.DBforPostgreSQL/locations/resourceType/usages/read | Gets the quota usages of a subscription |
> | Microsoft.DBforPostgreSQL/locations/securityAlertPoliciesAzureAsyncOperation/read | Return the list of Server threat detection operation result. | > | Microsoft.DBforPostgreSQL/locations/securityAlertPoliciesOperationResults/read | Return the list of Server threat detection operation result. | > | Microsoft.DBforPostgreSQL/locations/serverKeyAzureAsyncOperation/read | Gets in-progress operations on data encryption server keys |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups/read | Lists the long term retention backups for a database | > | Microsoft.Sql/locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups/delete | Deletes a long term retention backup | > | Microsoft.Sql/locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups/changeAccessTier/action | Change long term retention backup access tier operation. |
-> | Microsoft.Sql/locations/managedDatabaseMoveAzureAsyncOperation/read | Gets Managed Instance database move Azure async operation. |
> | Microsoft.Sql/locations/managedDatabaseMoveOperationResults/read | Gets Managed Instance database move operation result. | > | Microsoft.Sql/locations/managedDatabaseRestoreAzureAsyncOperation/completeRestore/action | Completes managed database restore operation | > | Microsoft.Sql/locations/managedInstanceAdvancedThreatProtectionAzureAsyncOperation/read | Retrieve results of the managed instance Advanced Threat Protection settings write operation |
role-based-access-control Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/devops.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/general.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Hybrid Multicloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/hybrid-multicloud.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Azure Stack HCI](/azure-stack/hci/)
> | Microsoft.AzureStackHCI/VirtualMachineInstances/WACloginAsAdmin/Action | Manage ARC enabled VM resources on HCI via Windows Admin Center as an administrator | > | Microsoft.AzureStackHCI/virtualMachines/WACloginAsAdmin/Action | Manage ARC enabled VM resources on HCI via Windows Admin Center as an administrator |
+## Microsoft.ExtendedLocation
+
+Azure service: [Custom locations](/azure/azure-arc/platform/conceptual-custom-locations)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.ExtendedLocation/register/action | Registers the subscription for Custom Location resource provider and enables the creation of Custom Location. |
+> | Microsoft.ExtendedLocation/unregister/action | UnRegisters the subscription for Custom Location resource provider and disables the creation of Custom Location. |
+> | Microsoft.ExtendedLocation/customLocations/read | Gets a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/write | Creates or Updates Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/delete | Deletes Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/findTargetResourceGroup/action | Evaluate Labels Against Resource Sync Rules to Get Resource Group for Resource Sync |
+> | Microsoft.ExtendedLocation/customLocations/enabledresourcetypes/read | Gets EnabledResourceTypes for a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/resourceSyncRules/read | Gets a Resource Sync Rule resource |
+> | Microsoft.ExtendedLocation/customLocations/resourceSyncRules/write | Creates or Updates a Resource Sync Rule resource |
+> | Microsoft.ExtendedLocation/customLocations/resourceSyncRules/delete | Deletes Resource Sync Rule resource |
+> | Microsoft.ExtendedLocation/locations/operationresults/read | Get result of Custom Location operation |
+> | Microsoft.ExtendedLocation/locations/operationsstatus/read | Get result of Custom Location operation |
+> | Microsoft.ExtendedLocation/operations/read | Gets list of Available Operations for Custom Locations |
+ ## Microsoft.HybridCompute Azure service: [Azure Arc](/azure/azure-arc/)
Azure service: [Azure Arc](/azure/azure-arc/)
> | Microsoft.HybridCompute/licenses/write | Installs or Updates an Azure Arc licenses | > | Microsoft.HybridCompute/licenses/delete | Deletes an Azure Arc licenses | > | Microsoft.HybridCompute/locations/notifyNetworkSecurityPerimeterUpdatesAvailable/action | Updates Network Security Perimeter Profiles |
-> | Microsoft.HybridCompute/locations/machines/extensions/notifyExtension/action | Notifies Microsoft.HybridCompute about extensions updates |
+> | Microsoft.HybridCompute/locations/notifyExtension/action | Notifies Microsoft.HybridCompute about extensions updates |
> | Microsoft.HybridCompute/locations/operationresults/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider | > | Microsoft.HybridCompute/locations/operationstatus/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider | > | Microsoft.HybridCompute/locations/privateLinkScopes/read | Reads the full details of any Azure Arc privateLinkScopes |
Azure service: Microsoft.HybridConnectivity
> | Microsoft.HybridConnectivity/solutionTypes/read | Retrieve the list of available solution types. | > | Microsoft.HybridConnectivity/solutionTypes/read | Retrieve the solution type by provided solution type. |
+## Microsoft.HybridContainerService
+
+Azure service: Microsoft.HybridContainerService
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.HybridContainerService/register/action | Register the subscription for Microsoft.HybridContainerService |
+> | Microsoft.HybridContainerService/unregister/action | Unregister the subscription for Microsoft.HybridContainerService |
+> | Microsoft.HybridContainerService/kubernetesVersions/read | Gets the supported kubernetes versions from the underlying custom location |
+> | Microsoft.HybridContainerService/kubernetesVersions/write | Puts the kubernetes version resource type |
+> | Microsoft.HybridContainerService/kubernetesVersions/delete | Delete the kubernetes versions resource type |
+> | Microsoft.HybridContainerService/kubernetesVersions/read | Lists the supported kubernetes versions from the underlying custom location |
+> | Microsoft.HybridContainerService/Locations/operationStatuses/read | read operationStatuses |
+> | Microsoft.HybridContainerService/Locations/operationStatuses/write | write operationStatuses |
+> | Microsoft.HybridContainerService/Operations/read | read Operations |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/read | Gets the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/write | Creates the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/delete | Deletes the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/read | Gets the Hybrid AKS provisioned cluster instances associated with the connected cluster |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/listUserKubeconfig/action | Lists the AAD user credentials of a provisioned cluster instance used only in direct mode. |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/listAdminKubeconfig/action | Lists the admin credentials of a provisioned cluster instance used only in direct mode. |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/agentPools/read | Gets the agent pool in the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/agentPools/write | Creates the agent pool in the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/agentPools/delete | Deletes the agent pool in the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/agentPools/write | Updates the agent pool in the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/agentPools/read | Gets the agent pools in the Hybrid AKS provisioned cluster instance |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/hybridIdentityMetadata/read | Get the hybrid identity metadata proxy resource. |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/hybridIdentityMetadata/write | Creates the hybrid identity metadata proxy resource that facilitates the managed identity provisioning. |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/hybridIdentityMetadata/delete | Deletes the hybrid identity metadata proxy resource. |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/hybridIdentityMetadata/read | Lists the hybrid identity metadata proxy resource in a provisioned cluster instance. |
+> | Microsoft.HybridContainerService/provisionedClusterInstances/upgradeProfiles/read | read upgradeProfiles |
+> | Microsoft.HybridContainerService/provisionedClusters/read | Gets the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/write | Creates the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/delete | Deletes the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/write | Updates the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/read | Gets the Hybrid AKS provisioned cluster in a resource group |
+> | Microsoft.HybridContainerService/provisionedClusters/read | Gets the Hybrid AKS provisioned cluster in a subscription |
+> | Microsoft.HybridContainerService/provisionedClusters/upgradeNodeImageVersionForEntireCluster/action | Upgrading the node image version of a cluster applies the newest OS and runtime updates to the nodes. |
+> | Microsoft.HybridContainerService/provisionedClusters/listClusterUserCredential/action | Lists the AAD user credentials of a provisioned cluster used only in direct mode. |
+> | Microsoft.HybridContainerService/provisionedClusters/listClusterAdminCredential/action | Lists the admin credentials of a provisioned cluster used only in direct mode. |
+> | Microsoft.HybridContainerService/provisionedClusters/agentPools/read | Gets the agent pool in the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/agentPools/write | Creates the agent pool in the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/agentPools/delete | Deletes the agent pool in the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/agentPools/write | Updates the agent pool in the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/agentPools/read | Gets the agent pools in the Hybrid AKS provisioned cluster |
+> | Microsoft.HybridContainerService/provisionedClusters/hybridIdentityMetadata/read | Get the hybrid identity metadata proxy resource. |
+> | Microsoft.HybridContainerService/provisionedClusters/hybridIdentityMetadata/write | Creates the hybrid identity metadata proxy resource that facilitates the managed identity provisioning. |
+> | Microsoft.HybridContainerService/provisionedClusters/hybridIdentityMetadata/delete | Deletes the hybrid identity metadata proxy resource. |
+> | Microsoft.HybridContainerService/provisionedClusters/hybridIdentityMetadata/read | Lists the hybrid identity metadata proxy resource in a cluster. |
+> | Microsoft.HybridContainerService/provisionedClusters/upgradeProfiles/read | read upgradeProfiles |
+> | Microsoft.HybridContainerService/skus/read | Gets the supported VM skus from the underlying custom location |
+> | Microsoft.HybridContainerService/skus/write | Puts the VM SKUs resource type |
+> | Microsoft.HybridContainerService/skus/delete | Deletes the Vm Sku resource type |
+> | Microsoft.HybridContainerService/skus/read | Lists the supported VM SKUs from the underlying custom location |
+> | Microsoft.HybridContainerService/storageSpaces/read | Gets the Hybrid AKS storage space object |
+> | Microsoft.HybridContainerService/storageSpaces/write | Puts the Hybrid AKS storage object |
+> | Microsoft.HybridContainerService/storageSpaces/delete | Deletes the Hybrid AKS storage object |
+> | Microsoft.HybridContainerService/storageSpaces/write | Patches the Hybrid AKS storage object |
+> | Microsoft.HybridContainerService/storageSpaces/read | List the Hybrid AKS storage object by resource group |
+> | Microsoft.HybridContainerService/storageSpaces/read | List the Hybrid AKS storage object by subscription |
+> | Microsoft.HybridContainerService/virtualNetworks/read | Gets the Hybrid AKS virtual network |
+> | Microsoft.HybridContainerService/virtualNetworks/write | Puts the Hybrid AKS virtual network |
+> | Microsoft.HybridContainerService/virtualNetworks/delete | Deletes the Hybrid AKS virtual network |
+> | Microsoft.HybridContainerService/virtualNetworks/write | Patches the Hybrid AKS virtual network |
+> | Microsoft.HybridContainerService/virtualNetworks/read | Lists the Hybrid AKS virtual networks by resource group |
+> | Microsoft.HybridContainerService/virtualNetworks/read | Lists the Hybrid AKS virtual networks by subscription |
+ ## Microsoft.Kubernetes Azure service: [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview)
Azure service: [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overvi
> | Microsoft.KubernetesConfiguration/sourceControlConfigurations/read | Gets source control configuration. | > | Microsoft.KubernetesConfiguration/sourceControlConfigurations/delete | Deletes source control configuration. |
+## Microsoft.ResourceConnector
+
+Azure service: Microsoft.ResourceConnector
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.ResourceConnector/register/action | Registers the subscription for Appliances resource provider and enables the creation of Appliance. |
+> | Microsoft.ResourceConnector/unregister/action | Unregisters the subscription for Appliances resource provider and disables the creation of Appliance. |
+> | Microsoft.ResourceConnector/appliances/read | Gets an Appliance resource |
+> | Microsoft.ResourceConnector/appliances/write | Creates or Updates Appliance resource |
+> | Microsoft.ResourceConnector/appliances/delete | Deletes Appliance resource |
+> | Microsoft.ResourceConnector/appliances/listClusterUserCredential/action | Get an appliance cluster user credential |
+> | Microsoft.ResourceConnector/appliances/listKeys/action | Get an appliance cluster customer user keys |
+> | Microsoft.ResourceConnector/appliances/upgradeGraphs/read | Gets the upgrade graph of Appliance cluster |
+> | Microsoft.ResourceConnector/locations/operationresults/read | Get result of Appliance operation |
+> | Microsoft.ResourceConnector/locations/operationsstatus/read | Get result of Appliance operation |
+> | Microsoft.ResourceConnector/operations/read | Gets list of Available Operations for Appliances |
+> | Microsoft.ResourceConnector/telemetryconfig/read | Get Appliances telemetry config utilized by Appliances CLI |
+ ## Next steps - [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types)
role-based-access-control Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/identity.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/integration.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Azure App Configuration](/azure/azure-app-configuration/)
> | Microsoft.AppConfiguration/configurationStores/RegenerateKey/action | Regenerates of the API key's for the specified configuration store. | > | Microsoft.AppConfiguration/configurationStores/ListKeyValue/action | Lists a key-value for the specified configuration store. | > | Microsoft.AppConfiguration/configurationStores/PrivateEndpointConnectionsApproval/action | Auto-Approve a private endpoint connection under the specified configuration store. |
+> | Microsoft.AppConfiguration/configurationStores/keyValues/action | Performs an action on an existing key-value from the configuration store. This also grants the ability to read key values. |
> | Microsoft.AppConfiguration/configurationStores/joinPerimeter/action | Determines if a user is allowed to associate an Azure App Configuration with a Network Security Perimeter. | > | Microsoft.AppConfiguration/configurationStores/eventGridFilters/read | Gets the properties of the specified configuration store event grid filter or lists all the configuration store event grid filters under the specified configuration store. | > | Microsoft.AppConfiguration/configurationStores/eventGridFilters/write | Create or update a configuration store event grid filter with the specified parameters. | > | Microsoft.AppConfiguration/configurationStores/eventGridFilters/delete | Deletes a configuration store event grid filter. |
+> | Microsoft.AppConfiguration/configurationStores/keyValues/write | Creates or updates a key-value in the configuration store. |
+> | Microsoft.AppConfiguration/configurationStores/keyValues/delete | Deletes an existing key-value from the configuration store. |
> | Microsoft.AppConfiguration/configurationStores/networkSecurityPerimeterAssociationProxies/read | Get the properties of the specific network security perimeter association proxy or lists all the network security perimeter association proxies under the specified configuration store. | > | Microsoft.AppConfiguration/configurationStores/networkSecurityPerimeterAssociationProxies/write | Create or update a network security perimeter association proxy under the specified configuration store. | > | Microsoft.AppConfiguration/configurationStores/networkSecurityPerimeterAssociationProxies/delete | Delete a network security perimeter association proxy under the specified configuration store. |
Azure service: [Azure API for FHIR](/azure/healthcare-apis/azure-api-for-fhir/)
> | Microsoft.HealthcareApis/workspaces/dicomservices/read | | > | Microsoft.HealthcareApis/workspaces/dicomservices/write | | > | Microsoft.HealthcareApis/workspaces/dicomservices/delete | |
-> | Microsoft.HealthcareApis/workspaces/dicomservices/dicomcasts/read | |
-> | Microsoft.HealthcareApis/workspaces/dicomservices/dicomcasts/write | |
-> | Microsoft.HealthcareApis/workspaces/dicomservices/dicomcasts/delete | |
> | Microsoft.HealthcareApis/workspaces/dicomservices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic settings for the Azure service. | > | Microsoft.HealthcareApis/workspaces/dicomservices/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic settings for the Azure service. | > | Microsoft.HealthcareApis/workspaces/dicomservices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for the Azure service. |
Azure service: [Logic Apps](/azure/logic-apps/)
> | Microsoft.Logic/integrationAccounts/partners/write | Creates or updates the partner in integration account. | > | Microsoft.Logic/integrationAccounts/partners/delete | Deletes the partner in integration account. | > | Microsoft.Logic/integrationAccounts/partners/listContentCallbackUrl/action | Gets the callback URL for partner content in integration account. |
+> | Microsoft.Logic/integrationAccounts/privateEndpointConnectionProxies/read | Gets the Private Endpoint Connection Proxies. |
+> | Microsoft.Logic/integrationAccounts/privateEndpointConnectionProxies/write | Creates or Updates the Private Endpoint Connection Proxies. |
+> | Microsoft.Logic/integrationAccounts/privateEndpointConnectionProxies/delete | Deletes the Private Endpoint Connection Proxies. |
+> | Microsoft.Logic/integrationAccounts/privateEndpointConnectionProxies/validate/action | Validates the Private Endpoint Connection Proxies. |
> | Microsoft.Logic/integrationAccounts/providers/Microsoft.Insights/logDefinitions/read | Reads the Integration Account log definitions. | > | Microsoft.Logic/integrationAccounts/rosettaNetProcessConfigurations/read | Reads the RosettaNet process configuration in integration account. | > | Microsoft.Logic/integrationAccounts/rosettaNetProcessConfigurations/write | Creates or updates the RosettaNet process configuration in integration account. |
role-based-access-control Internet Of Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/internet-of-things.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Management And Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/management-and-governance.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/migration.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Mixed Reality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/mixed-reality.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/monitor.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Azure Monitor](/azure/azure-monitor/)
> | Microsoft.Insights/Metrics/Write | Write metrics | > | Microsoft.Insights/Telemetry/Write | Write telemetry |
-## Microsoft.Monitor
+## microsoft.monitor
Azure service: [Azure Monitor](/azure/azure-monitor/) > [!div class="mx-tableFixed"] > | Action | Description | > | | |
+> | microsoft.monitor/register/action | Registers the subscription for the Microsoft.Monitor resource provider |
+> | microsoft.monitor/unregister/action | Unregisters the subscription for the Microsoft.Monitor resource provider |
> | microsoft.monitor/accounts/read | Read any Monitoring Account | > | microsoft.monitor/accounts/write | Create or Update any Monitoring Account | > | microsoft.monitor/accounts/delete | Delete any Monitoring Account |
role-based-access-control Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/networking.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Azure Private 5G Core](/azure/private-5g-core/)
> | Microsoft.MobileNetwork/mobileNetworks/read | Gets information about the specified mobile network. | > | Microsoft.MobileNetwork/mobileNetworks/write | Creates or updates a mobile network. | > | Microsoft.MobileNetwork/mobileNetworks/delete | Deletes the specified mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/write | Updates mobile network tags. |
+> | Microsoft.MobileNetwork/mobileNetworks/write | Updates mobile network tags and managed identity. |
> | Microsoft.MobileNetwork/mobileNetworks/read | Lists all the mobile networks in a subscription. | > | Microsoft.MobileNetwork/mobileNetworks/read | Lists all the mobile networks in a resource group. | > | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/read | Gets information about the specified data network. |
Azure service: [Azure Private 5G Core](/azure/private-5g-core/)
> | Microsoft.MobileNetwork/packetCoreControlPlanes/rollback/action | Roll back the specified packet core control plane to the previous version, "rollbackVersion". Multiple consecutive rollbacks are not possible. This action may cause a service outage. | > | Microsoft.MobileNetwork/packetCoreControlPlanes/reinstall/action | Reinstall the specified packet core control plane. This action will remove any transaction state from the packet core to return it to a known state. This action will cause a service outage. | > | Microsoft.MobileNetwork/packetCoreControlPlanes/collectDiagnosticsPackage/action | Collect a diagnostics package for the specified packet core control plane. This action will upload the diagnostics to a storage account. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/reinstall/action | Reinstall the specified packet core control plane. This action will try to restore the packet core to the installed state that was disrupted by a transient failure. This action will cause a service outage. |
> | Microsoft.MobileNetwork/packetCoreControlPlanes/read | Gets information about the specified packet core control plane. | > | Microsoft.MobileNetwork/packetCoreControlPlanes/write | Creates or updates a packet core control plane. | > | Microsoft.MobileNetwork/packetCoreControlPlanes/delete | Deletes the specified packet core control plane. |
role-based-access-control Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/security.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Azure service: [Microsoft Sentinel](/azure/sentinel/)
> | Microsoft.SecurityInsights/register/action | Registers the subscription to Azure Sentinel | > | Microsoft.SecurityInsights/unregister/action | Unregisters the subscription from Azure Sentinel | > | Microsoft.SecurityInsights/dataConnectorsCheckRequirements/action | Check user authorization and license |
+> | Microsoft.SecurityInsights/contentTranslators/action | Check a translation of content |
> | Microsoft.SecurityInsights/Aggregations/read | Gets aggregated information | > | Microsoft.SecurityInsights/alertRules/read | Gets the alert rules | > | Microsoft.SecurityInsights/alertRules/write | Updates alert rules |
role-based-access-control Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/storage.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Web And Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/web-and-mobile.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 02/07/2024 Last updated : 03/01/2024
Click the resource provider name in the following list to see the list of permis
> | [Microsoft.AlertsManagement](./permissions/monitor.md#microsoftalertsmanagement) | Analyze all of the alerts in your Log Analytics repository. | [Azure Monitor](/azure/azure-monitor/) | > | [Microsoft.Dashboard](./permissions/monitor.md#microsoftdashboard) | | [Azure Managed Grafana](/azure/managed-grafana/) | > | [Microsoft.Insights](./permissions/monitor.md#microsoftinsights) | Full observability into your applications, infrastructure, and network. | [Azure Monitor](/azure/azure-monitor/) |
-> | [Microsoft.Monitor](./permissions/monitor.md#microsoftmonitor) | | [Azure Monitor](/azure/azure-monitor/) |
+> | [microsoft.monitor](./permissions/monitor.md#microsoftmonitor) | | [Azure Monitor](/azure/azure-monitor/) |
> | [Microsoft.OperationalInsights](./permissions/monitor.md#microsoftoperationalinsights) | | [Azure Monitor](/azure/azure-monitor/) | > | [Microsoft.OperationsManagement](./permissions/monitor.md#microsoftoperationsmanagement) | A simplified management solution for any enterprise. | [Azure Monitor](/azure/azure-monitor/) |
Click the resource provider name in the following list to see the list of permis
> | | | | > | [Microsoft.AzureStack](./permissions/hybrid-multicloud.md#microsoftazurestack) | Build and run innovative hybrid applications across cloud boundaries. | [Azure Stack](/azure-stack/) | > | [Microsoft.AzureStackHCI](./permissions/hybrid-multicloud.md#microsoftazurestackhci) | | [Azure Stack HCI](/azure-stack/hci/) |
+> | [Microsoft.ExtendedLocation](./permissions/hybrid-multicloud.md#microsoftextendedlocation) | | [Custom locations](/azure/azure-arc/platform/conceptual-custom-locations) |
> | [Microsoft.HybridCompute](./permissions/hybrid-multicloud.md#microsofthybridcompute) | | [Azure Arc](/azure/azure-arc/) | > | [Microsoft.HybridConnectivity](./permissions/hybrid-multicloud.md#microsofthybridconnectivity) | | |
+> | [Microsoft.HybridContainerService](./permissions/hybrid-multicloud.md#microsofthybridcontainerservice) | | |
> | [Microsoft.Kubernetes](./permissions/hybrid-multicloud.md#microsoftkubernetes) | | [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview) | > | [Microsoft.KubernetesConfiguration](./permissions/hybrid-multicloud.md#microsoftkubernetesconfiguration) | | [Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview) |
+> | [Microsoft.ResourceConnector](./permissions/hybrid-multicloud.md#microsoftresourceconnector) | | |
## Next steps
search Search Howto Incremental Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-incremental-index.md
- ignite-2023 Previously updated : 02/22/2024 Last updated : 02/29/2024 # Enable caching for incremental enrichment in Azure AI Search > [!IMPORTANT]
-> This feature is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this feature
+> This feature is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this feature.
This article explains how to add caching to an enrichment pipeline so that you can modify downstream enrichment steps without having to rebuild in full every time. By default, a skillset is stateless, and changing any part of its composition requires a full rerun of the indexer. With an [**enrichment cache**](cognitive-search-incremental-indexing-conceptual.md), the indexer can determine which parts of the document tree must be refreshed based on changes detected in the skillset or indexer definitions. Existing processed output is preserved and reused wherever possible.
Azure Storage is used to store cached enrichments. The storage account must be [
Preview APIs or beta Azure SDKs are required for enabling cache on an indexer. The portal does not currently provide an option for caching enrichment.
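For illustration, caching is enabled by adding a `cache` property to the indexer definition and pointing it at the storage account you designate for cached enrichments. The following is a minimal sketch of a preview Create or Update Indexer request body; the indexer, data source, skillset, and index names are placeholders, and the connection string is an assumption you replace with your own.

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-blob-datasource",
  "skillsetName": "my-skillset",
  "targetIndexName": "my-index",
  "cache": {
    "storageConnectionString": "<Azure Storage connection string>",
    "enableReprocessing": true
  }
}
```

In this sketch, `enableReprocessing` set to true allows cached content to be refreshed incrementally when the skillset or indexer definition changes.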
+If you plan to [index blobs](search-howto-indexing-azure-blob-storage.md) and need documents to be removed from both the cache and the index when they're deleted from the data source, enable a [deletion policy](search-howto-index-changed-deleted-blobs.md). Without a deletion policy, deletion from the cache isn't supported.
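As a sketch of how such a policy might look, the following blob data source definition enables native blob soft delete detection. The data source name, container name, and connection string are placeholders, and the other deletion detection strategies described in the linked article can be substituted.

```json
{
  "name": "my-blob-datasource",
  "type": "azureblob",
  "credentials": {
    "connectionString": "<Azure Storage connection string>"
  },
  "container": {
    "name": "my-container"
  },
  "dataDeletionDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.NativeBlobSoftDeleteDeletionDetectionPolicy"
  }
}
```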
++ > [!CAUTION] > If you're using the [SharePoint Online indexer (Preview)](search-howto-index-sharepoint-online.md), you should avoid incremental enrichment. Under certain circumstances, the cache becomes invalid, requiring an [indexer reset and run](search-howto-run-reset-indexers.md), should you choose to reload it.
sentinel Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prerequisites.md
Before deploying Microsoft Sentinel, make sure that your Azure tenant meets the
For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md). -- A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) is required to house all of the data that Microsoft Sentinel will be ingesting and using for its detections, analytics, and other features. For more information, see [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md). Microsoft Sentinel doesn't support Log Analytics workspaces with a resource lock applied.
+- A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) is required to house all of the data that Microsoft Sentinel will be ingesting and using for its detections, analytics, and other features. For more information, see [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md).
+
+- The Log Analytics workspace must not have a resource lock applied, and its pricing tier must be Pay-as-You-Go or a commitment tier. Legacy Log Analytics pricing tiers and resource locks aren't supported when you enable Microsoft Sentinel. For more information about pricing tiers, see [Simplified pricing tiers for Microsoft Sentinel](enroll-simplified-pricing-tier.md#prerequisites).
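For example, a workspace on a supported pricing tier can be declared in an ARM template along the lines of the following sketch. The workspace name, location, API version, and retention value are placeholder assumptions; `PerGB2018` corresponds to the Pay-as-You-Go tier.

```json
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "apiVersion": "2022-10-01",
  "name": "contoso-sentinel-workspace",
  "location": "eastus",
  "properties": {
    "sku": {
      "name": "PerGB2018"
    },
    "retentionInDays": 90
  }
}
```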
- We recommend that when you set up your Microsoft Sentinel workspace, [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) that's dedicated to Microsoft Sentinel and the resources that Microsoft Sentinel uses, including the Log Analytics workspace, any playbooks, workbooks, and so on.
sentinel Sentinel Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solution.md
Title: Monitor Zero Trust (TIC 3.0) security architectures with Microsoft Sentinel description: Install and learn how to use the Microsoft Sentinel Zero Trust (TIC3.0) solution for an automated visualization of Zero Trust principles, cross-walked to the Trusted Internet Connections framework. Last updated 01/09/2023-
service-bus-messaging Message Browsing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-browsing.md
You can specify the maximum number of messages that you want the peek operation
Here are example snippets for peeking all messages in a queue. The sequence number of the last peeked message can be used to track progress and start browsing at the next message.
+### [C#](#tab/csharp)
+
+```csharp
+using Azure.Messaging.ServiceBus;
+
+// Create a Service Bus client for your namespace
+ServiceBusClient client = new ServiceBusClient("NAMESPACECONNECTIONSTRING");
+
+// Create Service Bus receiver for your queue in the namespace
+ServiceBusReceiver receiver = client.CreateReceiver("QUEUENAME");
+
+// Peek operation with max count set to 5
+var peekedMessages = await receiver.PeekMessagesAsync(maxMessages: 5);
+
+// Keep receiving while there are messages in the queue
+while (peekedMessages.Count > 0)
+{
+ int counter = 0; // To get the sequence number of the last peeked message
+ int countPeekedMessages = peekedMessages.Count;
+
+ if (countPeekedMessages > 0)
+ {
+ // For each peeked message, print the message body
+ foreach (ServiceBusReceivedMessage msg in peekedMessages)
+ {
+ Console.WriteLine(msg.Body);
+ counter++;
+ }
+ Console.WriteLine("Peek round complete");
+ Console.WriteLine("");
+ }
+
+ // Start receiving from the message after the last one
+ var fromSeqNum = peekedMessages[counter-1].SequenceNumber + 1;
+ peekedMessages = await receiver.PeekMessagesAsync(maxMessages: 5, fromSequenceNumber: fromSeqNum);
+}
+```
+
+The following sample output is from peeking a queue with 13 messages in it.
+
+```bash
+Message 1
+Message 2
+Message 3
+Message 4
+Message 5
+Peek round complete
+
+Message 6
+Message 7
+Message 8
+Message 9
+Message 10
+Peek round complete
+
+Message 11
+Message 12
+Message 13
+Peek round complete
+```
++
+### [Python](#tab/python)
+ ```python import os from azure.servicebus import ServiceBusClient
with servicebus_client:
print("Receive is done.") ``` ++ ## Next steps Try the samples in the language of your choice to explore Azure Service Bus features.
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-architecture.md
description: Overview of the architecture used when you set up disaster recovery
Previously updated : 03/27/2023 Last updated : 02/29/2024
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 02/14/2024 Last updated : 02/29/2024
Rocky Linux | [See supported versions](#supported-rocky-linux-kernel-versions-fo
**Release** | **Mobility service version** | **Red Hat kernel version** | | | |
-RHEL 9.0 <br> RHEL 9.1 <br> RHEL 9.2 <br> RHEL 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br> 5.14.0-284.13.1.el9_2.x86_64 <br> 5.14.0-284.16.1.el9_2.x86_64 <br> 5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br>5.14.0-284.43.1.el9_2.x86_64 <br>5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br>5.14.0-284.48.1.el9_2.x86_64 <br>5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br>5.14.0-362.8.1.el9_3.x86_64 <br>5.14.0-362.13.1.el9_3.x86_64 |
+RHEL 9.0 <br> RHEL 9.1 <br> RHEL 9.2 <br> RHEL 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-162.22.2.el9_1.x86_64 <br> 5.14.0-162.23.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br> 5.14.0-284.13.1.el9_2.x86_64 <br> 5.14.0-284.16.1.el9_2.x86_64 <br> 5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br>5.14.0-284.43.1.el9_2.x86_64 <br>5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br>5.14.0-284.48.1.el9_2.x86_64 <br>5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br>5.14.0-362.8.1.el9_3.x86_64 <br>5.14.0-362.13.1.el9_3.x86_64 <br> 5.14.0-362.18.1.el9_3.x86_64 |
#### Supported Ubuntu kernel versions for Azure virtual machines
RHEL 9.0 <br> RHEL 9.1 <br> RHEL 9.2 <br> RHEL 9.3 | 9.60 | 5.14.0-70.13.1.el9_
16.04 LTS | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new 16.04 LTS kernels supported in this release. | |||
-18.04 LTS | [9.60]() | 4.15.0-1168-azure <br> 4.15.0-1169-azure <br> 4.15.0-1170-azure <br> 4.15.0-1171-azure <br> 4.15.0-1172-azure <br> 4.15.0-1173-azure <br> 4.15.0-214-generic <br> 4.15.0-216-generic <br> 4.15.0-218-generic <br> 4.15.0-219-generic <br> 4.15.0-220-generic <br> 4.15.0-221-generic <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-1112-azure <br> 5.4.0-1113-azure <br> 5.4.0-1115-azure <br> 5.4.0-1116-azure <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-1120-azure <br> 5.4.0-1121-azure <br> 5.4.0-1122-azure <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-156-generic <br> 5.4.0-159-generic <br> 5.4.0-162-generic <br> 5.4.0-163-generic <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic <br> 5.4.0-167-generic <br> 5.4.0-169-generic <br> 5.4.0-170-generic |
+18.04 LTS | [9.60]() | 4.15.0-1168-azure <br> 4.15.0-1169-azure <br> 4.15.0-1170-azure <br> 4.15.0-1171-azure <br> 4.15.0-1172-azure <br> 4.15.0-1173-azure <br> 4.15.0-214-generic <br> 4.15.0-216-generic <br> 4.15.0-218-generic <br> 4.15.0-219-generic <br> 4.15.0-220-generic <br> 4.15.0-221-generic <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-1112-azure <br> 5.4.0-1113-azure <br> 5.4.0-1115-azure <br> 5.4.0-1116-azure <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-1120-azure <br> 5.4.0-1121-azure <br> 5.4.0-1122-azure <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-156-generic <br> 5.4.0-159-generic <br> 5.4.0-162-generic <br> 5.4.0-163-generic <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic <br> 5.4.0-167-generic <br> 5.4.0-169-generic <br> 5.4.0-170-generic <br> 5.4.0-1123-azure <br> 5.4.0-171-generic |
18.04 LTS | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | No new 18.04 LTS kernels supported in this release. | 18.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | No new 18.04 LTS kernels supported in this release. | 18.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.15.0-1166-azure <br> 4.15.0-1167-azure <br> 4.15.0-212-generic <br> 4.15.0-213-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic | 18.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 5.4.0-1107-azure <br> 5.4.0-147-generic <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 4.15.0-212-generic <br> 4.15.0-1166-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure | |||
-20.04 LTS | [9.60]() | 5.15.0-1054-azure <br> 5.15.0-92-generic <br> 5.4.0-1122-azure <br> 5.4.0-170-generic |
+20.04 LTS | [9.60]() | 5.15.0-1054-azure <br> 5.15.0-92-generic <br> 5.4.0-1122-azure <br> 5.4.0-170-generic <br> 5.15.0-94-generic <br> 5.4.0-1123-azure <br> 5.4.0-171-generic |
20.04 LTS | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 5.15.0-1052-azure <br> 5.15.0-1053-azure <br> 5.15.0-89-generic <br> 5.15.0-91-generic <br> 5.4.0-1120-azure <br> 5.4.0-1121-azure <br> 5.4.0-167-generic <br> 5.4.0-169-generic | 20.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic | 20.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-1112-azure <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-79-generic <br> 5.4.0-156-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.4.0-1116-azure <br> 5.4.0-163-generic <br> 5.15.0-1043-azure <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic | 20.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 5.4.0-147-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.4.0-1107-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | |||
-22.04 LTS |[9.60]()| 5.19.0-1025-azure <br> 5.19.0-1026-azure <br> 5.19.0-1027-azure <br> 5.19.0-41-generic <br> 5.19.0-42-generic <br> 5.19.0-43-generic <br> 5.19.0-45-generic <br> 5.19.0-46-generic <br> 5.19.0-50-generic <br> 6.2.0-1005-azure <br> 6.2.0-1006-azure <br> 6.2.0-1007-azure <br> 6.2.0-1008-azure <br> 6.2.0-1011-azure <br> 6.2.0-1012-azure <br> 6.2.0-1014-azure <br> 6.2.0-1015-azure <br> 6.2.0-1016-azure <br> 6.2.0-1017-azure <br> 6.2.0-1018-azure <br> 6.2.0-25-generic <br> 6.2.0-26-generic <br> 6.2.0-31-generic <br> 6.2.0-32-generic <br> 6.2.0-33-generic <br> 6.2.0-34-generic <br> 6.2.0-35-generic <br> 6.2.0-36-generic <br> 6.2.0-37-generic <br> 6.2.0-39-generic <br> 6.5.0-1007-azure <br> 6.5.0-1009-azure <br> 6.5.0-1010-azure <br> 6.5.0-14-generic <br> 5.15.0-1054-azure <br> 5.15.0-92-generic <br>6.2.0-1019-azure <br>6.5.0-1011-azure <br>6.5.0-15-generic |
+22.04 LTS |[9.60]()| 5.19.0-1025-azure <br> 5.19.0-1026-azure <br> 5.19.0-1027-azure <br> 5.19.0-41-generic <br> 5.19.0-42-generic <br> 5.19.0-43-generic <br> 5.19.0-45-generic <br> 5.19.0-46-generic <br> 5.19.0-50-generic <br> 6.2.0-1005-azure <br> 6.2.0-1006-azure <br> 6.2.0-1007-azure <br> 6.2.0-1008-azure <br> 6.2.0-1011-azure <br> 6.2.0-1012-azure <br> 6.2.0-1014-azure <br> 6.2.0-1015-azure <br> 6.2.0-1016-azure <br> 6.2.0-1017-azure <br> 6.2.0-1018-azure <br> 6.2.0-25-generic <br> 6.2.0-26-generic <br> 6.2.0-31-generic <br> 6.2.0-32-generic <br> 6.2.0-33-generic <br> 6.2.0-34-generic <br> 6.2.0-35-generic <br> 6.2.0-36-generic <br> 6.2.0-37-generic <br> 6.2.0-39-generic <br> 6.5.0-1007-azure <br> 6.5.0-1009-azure <br> 6.5.0-1010-azure <br> 6.5.0-14-generic <br> 5.15.0-1054-azure <br> 5.15.0-92-generic <br>6.2.0-1019-azure <br>6.5.0-1011-azure <br>6.5.0-15-generic <br> 5.15.0-94-generic <br>6.5.0-17-generic|
22.04 LTS | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 5.15.0-1052-azure <br> 5.15.0-1053-azure <br> 5.15.0-76-generic <br> 5.15.0-89-generic <br> 5.15.0-91-generic | 22.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic | 22.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-1044-azure <br> 5.15.0-79-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic |
Debian 10 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azur
Debian 10 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 <br> 4.19.0-25-amd64 <br> 4.19.0-25-cloud-amd64 <br> 5.10.0-0.deb10.24-amd64 <br> 5.10.0-0.deb10.24-cloud-amd64 | Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64 <br> 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 | |||
-Debian 11 | [9.60]()| 5.10.0-27-amd64 <br> 5.10.0-27-cloud-amd64 |
+Debian 11 | [9.60]()| 5.10.0-27-amd64 <br> 5.10.0-27-cloud-amd64 <br> 5.10.0-28-amd64 <br> 5.10.0-28-cloud-amd64 |
Debian 11 | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50)| No new Debian 11 kernels supported in this release. | Debian 11 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d)| 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 | Debian 11 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-24-amd64 <br> 5.10.0-24-cloud-amd64 <br> 5.10.0-25-amd64 <br> 5.10.0-25-cloud-amd64 |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.54](https://support.mi
**Release** | **Mobility service version** | **Red Hat kernel version** | | | |
-Oracle Linux 9.0 <br> Oracle Linux 9.1 <br> Oracle Linux 9.2 <br> Oracle Linux 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br>5.14.0-284.13.1.el9_2.x86_64 <br>5.14.0-284.16.1.el9_2.x86_64 <br>5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br> 5.14.0-284.43.1.el9_2.x86_64 <br> 5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br> 5.14.0-284.48.1.el9_2.x86_64 <br> 5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br> 5.14.0-362.8.1.el9_3.x86_64 <br> 5.14.0-362.13.1.el9_3.x86_64 |
+Oracle Linux 9.0 <br> Oracle Linux 9.1 <br> Oracle Linux 9.2 <br> Oracle Linux 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-162.22.2.el9_1.x86_64 <br> 5.14.0-162.23.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br> 5.14.0-284.13.1.el9_2.x86_64 <br> 5.14.0-284.16.1.el9_2.x86_64 <br> 5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br>5.14.0-284.43.1.el9_2.x86_64 <br>5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br>5.14.0-284.48.1.el9_2.x86_64 <br>5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br>5.14.0-362.8.1.el9_3.x86_64 <br>5.14.0-362.13.1.el9_3.x86_64 <br> 5.14.0-362.18.1.el9_3.x86_64 |
#### Supported Rocky Linux kernel versions for Azure virtual machines
Oracle Linux 9.0 <br> Oracle Linux 9.1 <br> Oracle Linux 9.2 <br> Oracle Linu
**Release** | **Mobility service version** | **Red Hat kernel version** | | | |
-Rocky Linux 9.0 <br> Rocky Linux 9.1 | [9.60]() | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 |
+Rocky Linux 9.0 <br> Rocky Linux 9.1 | [9.60]() | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-162.22.2.el9_1.x86_64 <br> 5.14.0-162.23.1.el9_1.x86_64 |
**Release** | **Mobility service version** | **Kernel version** | | | |
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recove
description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 02/09/2024 Last updated : 02/29/2024
Rocky Linux | [See supported versions](#rocky-linux-server-supported-kernel-vers
**Release** | **Mobility service version** | **Red Hat kernel version** | | | |
-RHEL 9.0 <br> RHEL 9.1 <br> RHEL 9.2 <br> RHEL 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br> 5.14.0-284.13.1.el9_2.x86_64 <br> 5.14.0-284.16.1.el9_2.x86_64 <br> 5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br>5.14.0-284.43.1.el9_2.x86_64 <br>5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br>5.14.0-284.48.1.el9_2.x86_64 <br>5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br>5.14.0-362.8.1.el9_3.x86_64 <br>5.14.0-362.13.1.el9_3.x86_64 |
+RHEL 9.0 <br> RHEL 9.1 <br> RHEL 9.2 <br> RHEL 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-162.22.2.el9_1.x86_64 <br> 5.14.0-162.23.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br> 5.14.0-284.13.1.el9_2.x86_64 <br> 5.14.0-284.16.1.el9_2.x86_64 <br> 5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br>5.14.0-284.43.1.el9_2.x86_64 <br>5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br>5.14.0-284.48.1.el9_2.x86_64 <br>5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br>5.14.0-362.8.1.el9_3.x86_64 <br>5.14.0-362.13.1.el9_3.x86_64 <br> 5.14.0-362.18.1.el9_3.x86_64 |
### Ubuntu kernel versions
RHEL 9.0 <br> RHEL 9.1 <br> RHEL 9.2 <br> RHEL 9.3 | 9.60 | 5.14.0-70.13.1.el9_
||| 16.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50), 9.59, 9.60 | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic | |||
-18.04 LTS | [9.60]() | No new Ubuntu 18.04 kernels supported in this release. |
+18.04 LTS | [9.60]() | 4.15.0-1168-azure <br> 4.15.0-1169-azure <br> 4.15.0-1170-azure <br> 4.15.0-1171-azure <br> 4.15.0-1172-azure <br> 4.15.0-1173-azure <br> 4.15.0-214-generic <br> 4.15.0-216-generic <br> 4.15.0-218-generic <br> 4.15.0-219-generic <br> 4.15.0-220-generic <br> 4.15.0-221-generic <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-1112-azure <br> 5.4.0-1113-azure <br> 5.4.0-1115-azure <br> 5.4.0-1116-azure <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-1120-azure <br> 5.4.0-1121-azure <br> 5.4.0-1122-azure <br> 5.4.0-1123-azure <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-156-generic <br> 5.4.0-159-generic <br> 5.4.0-162-generic <br> 5.4.0-163-generic <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic <br> 5.4.0-167-generic <br> 5.4.0-169-generic <br> 5.4.0-170-generic <br> 5.4.0-1112-azure <br> 5.4.0-1113-azure <br> 5.4.0-1115-azure <br> 5.4.0-1116-azure <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-1120-azure <br> 5.4.0-1121-azure <br> 5.4.0-1122-azure <br> 5.4.0-1123-azure <br> 5.4.0-171-generic |
18.04 LTS | [9.59]() | No new Ubuntu 18.04 kernels supported in this release. | 18.04 LTS | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | No new Ubuntu 18.04 kernels supported in this release| 18.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | No new Ubuntu 18.04 kernels supported in this release| 18.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-1166-azure <br> 4.15.0-1167-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 4.15.0-212-generic <br> 4.15.0-213-generic <br> 5.4.0-1107-azure <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic | 18.04 LTS|[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-1161-azure <br> 4.15.0-1162-azure <br> 4.15.0-204-generic <br> 4.15.0-206-generic <br> 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1101-azure <br> 5.4.0-1103-azure <br> 5.4.0-1104-azure <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-139-generic <br> 5.4.0-144-generic <br> 5.4.0-146-generic | |||
-20.04 LTS | [9.60]() | No new Ubuntu 20.04 kernels supported in this release. |
+20.04 LTS | [9.60]() | 5.15.0-1054-azure <br> 5.15.0-92-generic <br> 5.15.0-94-generic <br> 5.4.0-1122-azure <br>5.4.0-1123-azure <br> 5.4.0-170-generic <br> 5.4.0-171-generic |
20.04 LTS | [9.59]() | No new Ubuntu 20.04 kernels supported in this release. | 20.04 LTS |[9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 5.15.0-89-generic <br> 5.15.0-91-generic <br> 5.4.0-167-generic <br> 5.4.0-169-generic | 20.04 LTS |[9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic | 20.04 LTS|[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.4.0-1107-azure <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-152-generic <br> 5.4.0-153-generic | 20.04 LTS|[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1033-azure <br> 5.15.0-1034-azure <br> 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-60-generic <br> 5.15.0-67-generic <br> 5.15.0-69-generic <br> 5.4.0-1101-azure <br> 5.4.0-1103-azure <br> 5.4.0-1104-azure <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-139-generic <br> 5.4.0-144-generic <br> 5.4.0-146-generic <br> 5.4.0-147-generic | |||
-22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. | [9.60]() | 5.19.0-1025-azure <br> 5.19.0-1026-azure <br> 5.19.0-1027-azure <br> 6.2.0-1005-azure <br> 6.2.0-1006-azure <br> 6.2.0-1007-azure <br> 6.2.0-1008-azure <br> 6.2.0-1011-azure <br> 6.2.0-1012-azure <br> 6.2.0-1014-azure <br> 6.2.0-1015-azure <br> 6.2.0-1016-azure <br> 6.2.0-1017-azure <br> 6.2.0-1018-azure <br> 6.5.0-1007-azure <br> 6.5.0-1009-azure <br> 6.5.0-1010-azure <br> 5.19.0-41-generic <br> 5.19.0-42-generic <br> 5.19.0-43-generic <br> 5.19.0-45-generic <br> 5.19.0-46-generic <br> 5.19.0-50-generic <br> 6.2.0-25-generic <br> 6.2.0-26-generic <br> 6.2.0-31-generic <br> 6.2.0-32-generic <br> 6.2.0-33-generic <br> 6.2.0-34-generic <br> 6.2.0-35-generic <br> 6.2.0-36-generic <br> 6.2.0-37-generic <br> 6.2.0-39-generic <br> 6.5.0-14-generic |
+22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. | [9.60]() | 5.19.0-1025-azure <br> 5.19.0-1026-azure <br> 5.19.0-1027-azure <br> 6.2.0-1005-azure <br> 6.2.0-1006-azure <br> 6.2.0-1007-azure <br> 6.2.0-1008-azure <br> 6.2.0-1011-azure <br> 6.2.0-1012-azure <br> 6.2.0-1014-azure <br> 6.2.0-1015-azure <br> 6.2.0-1016-azure <br> 6.2.0-1017-azure <br> 6.2.0-1018-azure <br> 6.5.0-1007-azure <br> 6.5.0-1009-azure <br> 6.5.0-1010-azure <br> 5.19.0-41-generic <br> 5.19.0-42-generic <br> 5.19.0-43-generic <br> 5.19.0-45-generic <br> 5.19.0-46-generic <br> 5.19.0-50-generic <br> 6.2.0-25-generic <br> 6.2.0-26-generic <br> 6.2.0-31-generic <br> 6.2.0-32-generic <br> 6.2.0-33-generic <br> 6.2.0-34-generic <br> 6.2.0-35-generic <br> 6.2.0-36-generic <br> 6.2.0-37-generic <br> 6.2.0-39-generic <br> 6.5.0-14-generic <br> 5.15.0-1054-azure <br> 5.15.0-92-generic <br> 5.15.0-94-generic <br> 6.2.0-1019-azure <br> 6.5.0-1011-azure <br> 6.5.0-15-generic <br> 6.5.0-17-generic |
22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet.| [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 5.15.0-76-generic <br> 5.15.0-89-generic <br> 5.15.0-91-generic | 22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic | 22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic |
Debian 10 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azur
Debian 10 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64 <br> 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 | Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 5.10.0-0.deb10.21-amd64 <br> 5.10.0-0.deb10.21-cloud-amd64 | |||
-Debian 11 | [9.60]() | 5.10.0-27-amd64 <br> 5.10.0-27-cloud-amd64 |
+Debian 11 | [9.60]() | 5.10.0-27-amd64 <br> 5.10.0-27-cloud-amd64 <br> 5.10.0-28-amd64 <br> 5.10.0-28-cloud-amd64 |
Debian 11 | [9.59]() | No new Debian 11 kernels supported in this release. | Debian 11 | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | No new Debian 11 kernels supported in this release. | Debian 11 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 |
SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.54](https://support.mic
**Release** | **Mobility service version** | **Red Hat kernel version** | | | |
-Oracle Linux 9.0 <br> Oracle Linux 9.1 <br> Oracle Linux 9.2 <br> Oracle Linux 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br>5.14.0-284.13.1.el9_2.x86_64 <br>5.14.0-284.16.1.el9_2.x86_64 <br>5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br> 5.14.0-284.43.1.el9_2.x86_64 <br> 5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br> 5.14.0-284.48.1.el9_2.x86_64 <br> 5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br> 5.14.0-362.8.1.el9_3.x86_64 <br> 5.14.0-362.13.1.el9_3.x86_64 |
+Oracle Linux 9.0 <br> Oracle Linux 9.1 <br> Oracle Linux 9.2 <br> Oracle Linux 9.3 | 9.60 | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-162.22.2.el9_1.x86_64 <br> 5.14.0-162.23.1.el9_1.x86_64 <br> 5.14.0-284.11.1.el9_2.x86_64 <br> 5.14.0-284.13.1.el9_2.x86_64 <br> 5.14.0-284.16.1.el9_2.x86_64 <br> 5.14.0-284.18.1.el9_2.x86_64 <br> 5.14.0-284.23.1.el9_2.x86_64 <br> 5.14.0-284.25.1.el9_2.x86_64 <br> 5.14.0-284.28.1.el9_2.x86_64 <br> 5.14.0-284.30.1.el9_2.x86_64 <br> 5.14.0-284.32.1.el9_2.x86_64 <br> 5.14.0-284.34.1.el9_2.x86_64 <br> 5.14.0-284.36.1.el9_2.x86_64 <br> 5.14.0-284.40.1.el9_2.x86_64 <br> 5.14.0-284.41.1.el9_2.x86_64 <br>5.14.0-284.43.1.el9_2.x86_64 <br>5.14.0-284.44.1.el9_2.x86_64 <br> 5.14.0-284.45.1.el9_2.x86_64 <br>5.14.0-284.48.1.el9_2.x86_64 <br>5.14.0-284.50.1.el9_2.x86_64 <br> 5.14.0-284.52.1.el9_2.x86_64 <br>5.14.0-362.8.1.el9_3.x86_64 <br>5.14.0-362.13.1.el9_3.x86_64 <br> 5.14.0-362.18.1.el9_3.x86_64 |
### Rocky Linux Server supported kernel versions
Oracle Linux 9.0 <br> Oracle Linux 9.1 <br> Oracle Linux 9.2 <br> Oracle Linu
**Release** | **Mobility service version** | **Red Hat kernel version** | | | |
-Rocky Linux 9.0 <br> Rocky Linux 9.1 | [9.60]() | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 |
+Rocky Linux 9.0 <br> Rocky Linux 9.1 | [9.60]() | 5.14.0-70.13.1.el9_0.x86_64 <br> 5.14.0-70.17.1.el9_0.x86_64 <br> 5.14.0-70.22.1.el9_0.x86_64 <br> 5.14.0-70.26.1.el9_0.x86_64 <br> 5.14.0-70.30.1.el9_0.x86_64 <br> 5.14.0-70.36.1.el9_0.x86_64 <br> 5.14.0-70.43.1.el9_0.x86_64 <br> 5.14.0-70.49.1.el9_0.x86_64 <br> 5.14.0-70.50.2.el9_0.x86_64 <br> 5.14.0-70.53.1.el9_0.x86_64 <br> 5.14.0-70.58.1.el9_0.x86_64 <br> 5.14.0-70.64.1.el9_0.x86_64 <br> 5.14.0-70.70.1.el9_0.x86_64 <br> 5.14.0-70.75.1.el9_0.x86_64 <br> 5.14.0-70.80.1.el9_0.x86_64 <br> 5.14.0-70.85.1.el9_0.x86_64 <br> 5.14.0-162.6.1.el9_1.x86_64ΓÇ» <br> 5.14.0-162.12.1.el9_1.x86_64 <br> 5.14.0-162.18.1.el9_1.x86_64 <br> 5.14.0-162.22.2.el9_1.x86_64 <br> 5.14.0-162.23.1.el9_1.x86_64 |
**Release** | **Mobility service version** | **Kernel version** | | | |
spring-apps Quickstart Deploy Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-microservice-apps.md
Last updated 01/19/2023 -+ zone_pivot_groups: spring-apps-tier-selection
spring-apps Quickstart Deploy Restful Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-restful-api-app.md
Last updated 10/02/2023 -+ zone_pivot_groups: spring-apps-enterprise-or-consumption-plan-selection
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-web-app.md
Last updated 07/11/2023-+ zone_pivot_groups: spring-apps-plan-selection
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart.md
Last updated 08/09/2023-+ zone_pivot_groups: spring-apps-plan-selection
spring-apps Troubleshoot Build Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/troubleshoot-build-exit-code.md
This article describes how to troubleshoot build issues with your Azure Spring A
The Azure Spring Apps Enterprise plan uses Tanzu Buildpacks to transform your application source code into images. For more information, see [Tanzu Buildpacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/https://docsupdatetracker.net/index.html).
-When you deploy your app in Azure Spring Apps using the [Azure CLI](/cli/azure/install-azure-cli), you see a build log in the Azure CLI console. If the build fails, Azure Spring Apps displays an exit code and error message in the CLI console indicating why the buildpack execution failed during different phases of the buildpack [lifecycle](https://buildpacks.io/docs/concepts/components/lifecycle/).
+When you deploy your app in Azure Spring Apps using the [Azure CLI](/cli/azure/install-azure-cli), you see a build log in the Azure CLI console. If the build fails, Azure Spring Apps displays an exit code and error message in the CLI console indicating why the buildpack execution failed during different phases of the buildpack [lifecycle](https://buildpacks.io/docs/for-platform-operators/concepts/lifecycle/).
The following list describes some common exit codes:
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
- Title: Migrate from Azure Data Lake Storage Gen1 to Gen2 using the Azure portal-
-description: You can simplify the task of migrating from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2 by using the Azure portal.
---- Previously updated : 10/16/2023---
-# Migrate Azure Data Lake Storage from Gen1 to Gen2 by using the Azure portal
-
-This article shows you how to simplify the migration by using the Azure portal.
-
-> [!NOTE]
-> On **Feb. 29, 2024** Azure Data Lake Storage Gen1 will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/). If you use Azure Data Lake Storage Gen1, make sure to migrate to Azure Data Lake Storage Gen2 prior to that date.
->
-> Since **April 1, 2023** Microsoft has been freezing Data Lake Storage Gen1 accounts that have zero read or write transactions in the last 180 days. If any of your accounts match that profile, please identify which ones you intend to migrate so that they won't be frozen. Contact your Microsoft account team or send a message to [ADLSGen1toGen2MigrationQA@service.microsoft.com](mailto:ADLSGen1toGen2MigrationQA@service.microsoft.com).
-
- You can provide your consent in the Azure portal and then migrate your data and metadata (such as timestamps and ACLs) automatically from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2.
-
-Here's a video that tells you more about it.
-
- :::column span="2":::
- > [!VIDEO https://learn-video.azurefd.net/vod/player?show=inside-azure-for-it&ep=migrate-azure-data-lake-storage-adls-from-gen1-to-gen2-by-using-the-azure-portal]
- :::column-end:::
- :::column span="":::
- &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Chapters**:
- ___
-
- - 00.37 - Introduction
-
- - 01:16 - Preparing for migration
-
- - 07:15 - Copy migration
-
- - 17:40 - Copy vs complete migration
-
- - 19:43 - Complete migration
-
- - 33:15 - Post migration
-<br>
- :::column-end:::
-
-Before you start, be sure to read the general guidance on how to migrate from Gen1 to Gen2 in [Azure Data Lake Storage migration guidelines and patterns](data-lake-storage-migrate-gen1-to-gen2.md).
-
-Your account might not qualify for portal-based migration because of certain constraints. If the **Migrate data** button isn't enabled in the Azure portal for your Gen1 account and you have a support plan, you can [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also get answers from community experts in [Microsoft Q&A](/answers/topics/azure-data-lake-storage.html).
-
-> [!NOTE]
-> For easier reading, this article uses the term *Gen1* to refer to Azure Data Lake Storage Gen1, and the term *Gen2* to refer to Azure Data Lake Storage Gen2.
-
-## Step 1: Create a storage account with Gen2 capabilities
-
-Azure Data Lake Storage Gen2 isn't a dedicated storage account or service type. It's a set of capabilities that you can obtain by enabling the **Hierarchical namespace** feature of an Azure storage account. To create an account that has Gen2 capabilities, see [Create a storage account to use with Azure Data Lake Storage Gen2](create-data-lake-storage-account.md).
-
-As you create the account, make sure to configure settings with the following values.
-
-| Setting | Value |
-|--|--|
-| **Storage account name** | Any name that you want. This name doesn't have to match the name of your Gen1 account and can be in any subscription of your choice. |
-| **Location** | The same region used by the Data Lake Storage Gen1 account |
-| **Replication** | LRS or ZRS |
-| **Minimum TLS version** | 1.0 |
-| **NFS v3** | Disabled |
-| **Hierarchical namespace** | Enabled |
-
-> [!NOTE]
-> The migration tool in the Azure portal doesn't move account settings. Therefore, after you've created the account, you'll have to manually configure settings such as encryption, network firewalls, and data protection.
-
-> [!IMPORTANT]
-> Ensure that you use a fresh, newly created storage account that has no history of use. **Don't** migrate to a previously used account or use an account in which containers have been deleted to make the account empty.
-
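As a sketch of this step, the following creates a Gen2-capable account (hierarchical namespace enabled) with the `azure-mgmt-storage` Python SDK, using the settings from the table above. The subscription, resource group, account name, and region are placeholders; use `Standard_ZRS` if you prefer ZRS, and note that older SDK versions expose `create` instead of `begin_create`.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Kind, Sku, StorageAccountCreateParameters

# Hypothetical subscription, resource group, and account names.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    "<resource-group>",
    "<gen2accountname>",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_LRS"),     # Replication: LRS (or Standard_ZRS)
        kind=Kind.STORAGE_V2,
        location="<same-region-as-gen1>",
        is_hns_enabled=True,              # Hierarchical namespace: Enabled
        minimum_tls_version="TLS1_0",     # Minimum TLS version: 1.0
    ),
)
account = poller.result()
print(account.primary_endpoints.dfs)
```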
-## Step 2: Verify Azure role-based access control (Azure RBAC) role assignments
-
-For Gen2, ensure that the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role has been assigned to your Microsoft Entra user identity in the scope of the storage account, parent resource group, or subscription.
-
-For Gen1, ensure that the [Owner](../../role-based-access-control/built-in-roles.md#owner) role has been assigned to your Microsoft Entra identity in the scope of the Gen1 account, parent resource group, or subscription.
-
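For reference, a role assignment like the one this step describes for the Gen2 account can be created with the `azure-mgmt-authorization` Python SDK, as in this sketch. The scope, principal object ID, and names are placeholders, and `b7e6dc6d-f1e8-4753-8033-0f276bb0955b` is the documented built-in role definition ID for Storage Blob Data Owner; verify it against the built-in roles reference before relying on it.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope of the Gen2-enabled storage account (hypothetical names).
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<gen2accountname>"
)
# Built-in role definition ID for Storage Blob Data Owner.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/b7e6dc6d-f1e8-4753-8033-0f276bb0955b"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<your-entra-user-object-id>",
    ),
)
```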
-## Step 3: Migrate Azure Data Lake Analytics workloads
-
-Azure Data Lake Storage Gen2 doesn't support Azure Data Lake Analytics. Azure Data Lake Analytics [will be retired](https://azure.microsoft.com/updates/migrate-to-azure-synapse-analytics/) on February 29, 2024. If you attempt to use the Azure portal to migrate an Azure Data Lake Storage Gen1 account that is used for Azure Data Lake Analytics, it's possible that you'll break your Azure Data Lake Analytics workloads. You must first [migrate your Azure Data Lake Analytics workloads to Azure Synapse Analytics](../../data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md) or another supported compute platform before attempting to migrate your Gen1 account.
-
-For more information, see [Manage Azure Data Lake Analytics using the Azure portal](../../data-lake-analytics/data-lake-analytics-manage-use-portal.md).
-
-## Step 4: Prepare the Gen1 account
-
-File or directory names with only spaces or tabs, ending with a `.`, containing a `:`, or with multiple consecutive forward slashes (`//`) aren't compatible with Gen2. You need to rename these files or directories before you migrate.
-
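A quick way to find names that need renaming is a small script like the following sketch, which encodes the incompatibility rules listed above; it's illustrative only and assumes you can enumerate your Gen1 paths by other means.

```python
# Gen2-incompatible name patterns called out above (illustrative check).
def is_gen2_incompatible(path: str) -> bool:
    segments = [s for s in path.split("/") if s != ""]
    for segment in segments:
        if segment.strip(" \t") == "":   # name is only spaces or tabs
            return True
        if segment.endswith("."):        # name ends with a dot
            return True
        if ":" in segment:               # name contains a colon
            return True
    return "//" in path                  # multiple consecutive forward slashes

for p in ["ok/folder/file.csv", "bad/folder./file", "bad:name/file", "bad//path", "  /file"]:
    print(p, "->", "rename before migration" if is_gen2_incompatible(p) else "ok")
```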
-For better performance, consider delaying the migration for at least ten days from the time of the last delete operation. In a Gen1 account, deleted files become _soft_ deleted files, and the Garbage Collector won't remove them permanently until seven days have passed, plus a few extra days to process the cleanup. The time it takes for cleanup depends on the number of files. All files, including soft deleted files, are processed during migration, so waiting until the Garbage Collector has permanently removed deleted files can shorten the migration.
-
-## Step 5: Perform the migration
-
-Before you begin, review the two migration options below, and decide whether to only copy data from Gen1 to Gen2 (recommended) or perform a complete migration.
-
-**Option 1: Copy data only (recommended).** In this option, data is copied from Gen1 to Gen2. As the data is being copied, the Gen1 account becomes read-only. After the data is copied, both the Gen1 and Gen2 accounts will be accessible. However, you must update the applications and compute workloads to use the new Gen2 endpoint.
-
-**Option 2: Perform a complete migration.** In this option, data is copied from Gen1 to Gen2. After the data is copied, all the traffic from the Gen1 account will be redirected to the Gen2-enabled account. Redirected requests use the [Gen1 compatibility layer](#gen1-compatibility-layer) to translate Gen1 API calls to Gen2 equivalents. During the migration, the Gen1 account becomes read-only. After the migration is complete, the Gen1 account won't be accessible.
-
-Whichever option you choose, after you've migrated and verified that all your workloads work as expected, you can delete the Gen1 account.
-
-### Option 1: Copy data from Gen1 to Gen2
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) to get started.
-
-2. Locate your Data Lake Storage Gen1 account and display the account overview.
-
-3. Select the **Migrate data** button.
-
- > [!div class="mx-imgBorder"]
- > ![Button to migrate](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-tool.png)
-
-4. Select **Copy data to a new Gen2 account**.
-
- > [!div class="mx-imgBorder"]
- > ![Copy data option](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-data-option.png)
-
-5. Give Microsoft consent to perform the data migration by selecting the checkbox. Then, select the **Apply** button.
-
- > [!div class="mx-imgBorder"]
- > ![Checkbox to provide consent](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-consent.png)
-
- A progress bar appears along with a sub status message. You can use these indicators to gauge the progress of the migration. Because the time to complete each task varies, the progress bar won't advance at a consistent rate. For example, the progress bar might quickly advance to 50 percent, but then take a bit more time to complete the remaining 50 percent.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of progress bar when migrating data.](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-progress.png)
-
- > [!IMPORTANT]
- > While your data is being migrated, your Gen1 account becomes read-only and your Gen2-enabled account is disabled. When the migration is finished, you can read and write to both accounts.
-
- You can stop the migration at any time by selecting the **Stop migration** button.
-
- > [!div class="mx-imgBorder"]
- > ![Stop migration option](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-stop.png)
-
-### Option 2: Perform a complete migration
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) to get started.
-
-2. Locate your Data Lake Storage Gen1 account and display the account overview.
-
-3. Select the **Migrate data** button.
-
- > [!div class="mx-imgBorder"]
- > ![Migrate button](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-tool.png)
-
-4. Select **Complete migration to a new Gen2 account**.
-
- > [!div class="mx-imgBorder"]
- > ![Complete migration option](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-complete-option.png)
-
-5. Give Microsoft consent to perform the data migration by selecting the checkbox. Then, select the **Apply** button.
-
- > [!div class="mx-imgBorder"]
- > ![Consent checkbox](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-consent.png)
-
- A progress bar appears along with a sub status message. You can use these indicators to gauge the progress of the migration. Because the time to complete each task varies, the progress bar won't advance at a consistent rate. For example, the progress bar might quickly advance to 50 percent, but then take a bit more time to complete the remaining 50 percent.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of progress bar when performing a complete migration.](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-progress.png)
-
- > [!IMPORTANT]
- > While your data is being migrated, your Gen1 account becomes read-only and the Gen2-enabled account is disabled.
- >
- > Also, while the Gen1 URI is being redirected, both accounts are disabled.
- >
- > When the migration is finished, your Gen1 account will be disabled. The data in your Gen1 account won't be accessible and will be deleted after 30 days. Your Gen2 account will be available for reads and writes.
-
- You can stop the migration at any time before the URI is redirected by selecting the **Stop migration** button.
-
- > [!div class="mx-imgBorder"]
- > ![Migration stop button](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-stop.png)
-
-## Step 6: Verify that the migration completed
-
-If the migration completes successfully, a container named **gen1** is created in the Gen2-enabled account, and all data from the Gen1 account is copied to this new **gen1** container. To find the data on a path that existed on Gen1, add the prefix **gen1/** to the same path to access it on Gen2. For example, a path that was named 'FolderRoot/FolderChild/FileName.csv' on Gen1 will be available at 'gen1/FolderRoot/FolderChild/FileName.csv' on Gen2. Container names can't be renamed on Gen2, so this **gen1** container can't be renamed after migration. However, the data can be copied to a new container in Gen2 if needed.
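To spot-check the migrated data, you can read a file through the **gen1** container with the `azure-storage-file-datalake` Python SDK, as in this sketch; the account name and file path are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical Gen2-enabled account name.
service = DataLakeServiceClient(
    account_url="https://<gen2accountname>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Migrated data lands in the 'gen1' container, so prefix the old Gen1 path with it.
file_system = service.get_file_system_client("gen1")
file_client = file_system.get_file_client("FolderRoot/FolderChild/FileName.csv")
content = file_client.download_file().readall()
print(len(content), "bytes read from the migrated path")
```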
-
-If the migration doesn't complete successfully, a message appears stating that the migration is stalled due to incompatibilities. This message can appear if the Gen2-enabled account was previously used, or if files and directories in the Gen1 account use incompatible naming conventions. If you'd like assistance with the next step, contact [Microsoft Support](https://go.microsoft.com/fwlink/?linkid=2228816).
-
-Before contacting support, ensure that you're using a fresh, newly created storage account that has no history of use. Avoid migrating to a previously used account or an account in which containers have been deleted to make the account empty. In your Gen1 account, ensure that you rename any file or directory names that contain only spaces or tabs, end with a `.`, contain a `:`, or contain multiple forward slashes (`//`).
-
-## Step 7: Migrate workloads and applications
-
-1. Configure [services in your workloads](./data-lake-storage-supported-azure-services.md) to point to your Gen2 endpoint. For links to articles that help you configure Azure Databricks, HDInsight, and other Azure services to use Gen2, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md).
-
-2. Update applications to use Gen2 APIs. See these guides:
-
- | Environment | Article |
- |--|--|
- |Azure Storage Explorer |[Use Azure Storage Explorer to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-explorer.md)|
- |.NET |[Use .NET to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-dotnet.md)|
- |Java|[Use Java to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-java.md)|
- |Python|[Use Python to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-python.md)|
- |JavaScript (Node.js)|[Use JavaScript SDK in Node.js to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-javascript.md)|
- |REST API |[Azure Data Lake Store REST API](/rest/api/storageservices/data-lake-storage-gen2)|
-
-3. Update scripts to use Data Lake Storage Gen2 [PowerShell cmdlets](data-lake-storage-directory-file-acl-powershell.md), and [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md).
-
-4. Search for URI references that contain the string `adl://` in code files, or in Databricks notebooks, Apache Hive HQL files or any other file used as part of your workloads. Replace these references with the [Gen2 formatted URI](data-lake-storage-introduction-abfs-uri.md) of your new storage account. For example: the Gen1 URI: `adl://mydatalakestore.azuredatalakestore.net/mydirectory/myfile` might become `abfss://myfilesystem@mydatalakestore.dfs.core.windows.net/mydirectory/myfile`.
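If you have many such references, a small helper like the following sketch can rewrite them; it assumes the portal migration's default **gen1** container name, so substitute your own file system name if your data lives elsewhere.

```python
import re

def adl_to_abfss(uri: str, file_system: str = "gen1") -> str:
    """Rewrite a Gen1 adl:// URI as a Gen2 abfss:// URI (illustrative only)."""
    match = re.match(r"adl://(?P<account>[^./]+)\.azuredatalakestore\.net(?P<path>/.*)?$", uri)
    if not match:
        raise ValueError(f"Not a Gen1 URI: {uri}")
    account = match.group("account")
    path = match.group("path") or ""
    return f"abfss://{file_system}@{account}.dfs.core.windows.net{path}"

print(adl_to_abfss("adl://mydatalakestore.azuredatalakestore.net/mydirectory/myfile"))
# abfss://gen1@mydatalakestore.dfs.core.windows.net/mydirectory/myfile
```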
-
-## Gen1 compatibility layer
-
-This layer attempts to provide application compatibility between Gen1 and Gen2 as a convenience during the migration, so that applications can continue using Gen1 APIs to interact with data in the Gen2-enabled account. The layer has limited functionality, so if you use this approach as part of your migration, validate your workloads with test accounts. The compatibility layer runs on the server, so there's nothing to install.
-
-> [!IMPORTANT]
-> Microsoft does not recommend this capability as a replacement for migrating your workloads and applications. Support for the Gen1 compatibility layer will end when Gen1 [is retired on Feb. 29, 2024](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).
-
-To minimize issues with the compatibility layer, make sure that your Gen1 SDKs use the following versions (or later).
-
- | Language | SDK version |
- |--|--|
- | **.NET** | [2.3.9](https://github.com/Azure/azure-data-lake-store-net/blob/master/CHANGELOG.md) |
- | **Java** | [1.1.21](https://github.com/Azure/azure-data-lake-store-jav) |
- | **Python** | [0.0.51](https://github.com/Azure/azure-data-lake-store-python/blob/master/HISTORY.rst) |
-
-The following functionality isn't supported in the compatibility layer.
-- ListStatus API option to ListBefore an entry.
-- ListStatus API with over 4,000 files without a continuation token.
-- Chunk-encoding for append operations.
-- Any API calls that use `https://management.azure.com/` as the Microsoft Entra token audience.
-- File or directory names with only spaces or tabs, ending with a `.`, containing a `:`, or with multiple consecutive forward slashes (`//`).
-
-## Frequently asked questions
-
-#### How long will migration take?
-
-Data and metadata are migrated in parallel. The total time required to complete a migration is equal to whichever of these two processes completes last.
-
-The following table shows the approximate speed of each migration processing task.
-
-> [!NOTE]
-> These time estimates are approximate and can vary. For example, copying a large number of small files can slow performance.
-
-| Processing task | Speed |
-|-||
-| Data copy | 9 TB per hour |
-| Data validation | 9 million files or folders per hour |
-| Metadata copy | 4 million files or folders per hour |
-| Metadata processing | 25 million files or folders per hour |
-| Additional metadata processing (data copy option)<sup>1</sup> | 50 million files or folders per hour |
-
-<sup>1</sup> The additional metadata processing time applies only if you choose the **Copy data to a new Gen2 account** option. This processing time does not apply if you choose the **Complete migration to a new gen2 account** option.
-
-##### Example: Processing a large amount of data and metadata
-
-This example assumes **300 TB** of data and **200 million** data and metadata items.
-
-| Task | Estimated time |
-|--|--|
-| Copy data | 300 TB / 9 TB = 33.33 hours |
-| Validate data | 200 million / 9 million = 22.22 hours|
| **Total data migration time** | **33.33 + 22.22 = 55.55 hours** |
-| Copy metadata | 200 million / 4 million = 50 hours |
-| Metadata processing | 200 million / 25 million = 8 hours |
-| Additional metadata processing - data copy option only | 200 million / 50 million = 4 hours |
-| **Total metadata migration time** | **50 + 8 + 4 = 62 hours** |
-| **Total time to perform a data-only migration** | **62 hours** |
-| **Total time to perform a complete migration**| **62 - 4 = 58 hours** |
-
-##### Example: Processing a small amount of data and metadata
-
-This example assumes **2 TB** of data and **56 thousand** data and metadata items.
-
-| Task | Estimated time |
-|--|--|
-| Copy data | (2 TB / 9 TB) * 60 minutes = 13.3 minutes|
-| Validate data | (56,000 / 9 million) * 3,600 seconds = 22.4 seconds |
-| **Total data migration time** | **13.3 minutes + 22.4 seconds = approximately 14 minutes** |
-| Copy metadata | (56,000 / 4 million) * 3,600 seconds = approximately 51 seconds |
| Metadata processing | (56,000 / 25 million) * 3,600 seconds = approximately 8 seconds |
-| Additional metadata processing - data copy option only | (56,000 / 50 million) * 3,600 seconds = 4 seconds|
-| **Total metadata migration time** | **51 + 8 + 4 = 63 seconds** |
-| **Total time to perform a data-only migration** | **14 minutes** |
-| **Total time to perform a complete migration** | **14 minutes - 4 seconds = 13 minutes and 56 seconds (approximately 14 minutes)** |
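If you want to reuse this arithmetic for your own estimates, the following PowerShell sketch reproduces the calculations above using the approximate speeds from the processing-task table. It's an approximation, not an official sizing tool:

```PowerShell
# Rough migration-time estimate based on the approximate speeds listed above.
$dataTB    = 300          # total data in TB
$itemCount = 200000000    # number of data and metadata items

$dataHours  = ($dataTB / 9) + ($itemCount / 9000000)            # copy + validate data
$metaHours  = ($itemCount / 4000000) + ($itemCount / 25000000)  # copy + process metadata
$extraHours = $itemCount / 50000000                             # additional processing, data copy option only

# Total time is whichever process (data or metadata) finishes last.
"Data-only (copy) migration : {0:N1} hours" -f [Math]::Max($dataHours, $metaHours + $extraHours)
"Complete migration         : {0:N1} hours" -f [Math]::Max($dataHours, $metaHours)
```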
-
-#### How much does the data migration cost?
-
-There's no cost to use the portal-based migration tool. However, you'll be billed for usage of the Azure Data Lake Gen1 and Gen2 services. During the data migration, you'll be billed for the data storage and transactions of the Gen1 account.
-
-Post migration, if you chose the option that copies only data, then you'll be billed for the data storage and transactions for both Azure Data Lake Gen1 and Gen2 accounts. To avoid being billed for the Gen1 account, delete the Gen1 account after you've updated your applications to point to Gen2. If you chose to perform a complete migration, you'll be billed only for the data storage and transactions of the Gen2-enabled account.
-
-#### While providing consent, I encountered the error message *Migration initiation failed*. What should I do next?
-
-Make sure all your Azure Data Lake Analytics accounts are [migrated to Azure Synapse Analytics](../../data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md) or another supported compute platform. After your Azure Data Lake Analytics accounts are migrated, retry the consent. If the issue persists and you have a support plan, you can [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also get answers from community experts in [Microsoft Q&A](/answers/topics/azure-data-lake-storage.html).
-
-#### After the migration completes, can I go back to using the Gen1 account?
-
-If you used [Option 1: Copy data from Gen1 to Gen2](#option-1-copy-data-from-gen1-to-gen2) mentioned above, then both the Gen1 and Gen2 accounts are available for reads and writes post migration. However, if you used [Option 2: Perform a complete migration](#option-2-perform-a-complete-migration), then going back to the Gen1 account isn't supported. In Option 2, after the migration completes, the data in your Gen1 account won't be accessible and will be deleted after 30 days. You can continue to view the Gen1 account in the Azure portal, and when you're ready, you can delete the Gen1 account.
-
-#### I would like to enable Geo-redundant storage (GRS) on the Gen2-enabled account, how do I do that?
-
-After the migration is complete, for both the "Copy data" and "Complete migration" options, you can change the redundancy option to GRS as long as you don't plan to use the application compatibility layer. The compatibility layer doesn't work on accounts that use GRS redundancy.
-
-#### Gen1 doesn't have containers and Gen2 has them - what should I expect?
-
-When the data is copied over to your Gen2-enabled account, a container named **gen1** is automatically created. Container names can't be renamed in Gen2, so after the migration you can copy the data to a new container in Gen2 if needed.
-
-#### What should I consider in terms of migration performance?
-
-When you copy the data over to your Gen2-enabled account, two factors that can affect performance are the number of files and the amount of metadata you have. For example, many small files can affect the performance of the migration.
-
-#### Will WebHDFS File System APIs be supported on the Gen2 account after migration?
-
-The WebHDFS File System APIs of Gen1 will be supported on Gen2, but with certain deviations, and only limited functionality is supported via the compatibility layer. Customers should plan to use Gen2-specific APIs for better performance and features.
-
-#### What happens to my Gen1 account after the retirement date?
-
-The account becomes inaccessible. You won't be able to:
-- Manage the account
-- Access data in the account
-- Receive service updates to Gen1 or Gen1 APIs, SDKs, or client tools
-- Access Gen1 customer support online, by phone, or by email
-
-See [Action required: Switch to Azure Data Lake Storage Gen2 by 29 February 2024](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).
-
-## Next steps
--- Learn about migration in general. For more information, see [Migrate Azure Data Lake Storage from Gen1 to Gen2](data-lake-storage-migrate-gen1-to-gen2.md).
storage Data Lake Storage Migrate Gen1 To Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md
- Title: Guidelines and patterns for migrating Azure Data Lake Storage from Gen1 to Gen2-
-description: Learn how to migrate Azure Data Lake Storage from Gen1 to Gen2, which is built on Azure Blob storage and provides a set of capabilities dedicated to big data analytics.
---- Previously updated : 03/09/2023---
-# Azure Data Lake Storage migration guidelines and patterns
-
-You can migrate your data, workloads, and applications from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2. This article explains the recommended migration approach and covers the different migration patterns and when to use each. For easier reading, this article uses the term *Gen1* to refer to Azure Data Lake Storage Gen1, and the term *Gen2* to refer to Azure Data Lake Storage Gen2.
-
-On **Feb 29, 2024**, Azure Data Lake Storage Gen1 will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/). If you use Azure Data Lake Storage Gen1, make sure to migrate to Azure Data Lake Storage Gen2 prior to that date. This article shows you how to do that.
-
-Azure Data Lake Storage Gen2 is built on [Azure Blob storage](storage-blobs-introduction.md) and provides a set of capabilities dedicated to big data analytics. [Data Lake Storage Gen2](https://azure.microsoft.com/services/storage/data-lake-storage/) combines features from [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml), such as file system semantics, directory and file-level security, and scale, with the low-cost tiered storage and high availability/disaster recovery capabilities of [Azure Blob storage](storage-blobs-introduction.md).
-
-> [!NOTE]
-> Because Gen1 and Gen2 are different services, there is no in-place upgrade experience. To simplify the migration to Gen2 by using the Azure portal, see [Migrate Azure Data Lake Storage from Gen1 to Gen2 by using the Azure portal](data-lake-storage-migrate-gen1-to-gen2-azure-portal.md).
-
-## Recommended approach
-
-To migrate from Gen1 to Gen2, we recommend the following approach.
-
-Step 1: Assess readiness
-
-Step 2: Prepare to migrate
-
-Step 3: Migrate data and application workloads
-
-Step 4: Cutover from Gen1 to Gen2
-
-### Step 1: Assess readiness
-
-1. Learn about the [Data Lake Storage Gen2 offering](https://azure.microsoft.com/services/storage/data-lake-storage/); its benefits, costs, and general architecture.
-
-2. [Compare the capabilities](#gen1-gen2-feature-comparison) of Gen1 with those of Gen2.
-
-3. Review a list of [known issues](data-lake-storage-known-issues.md) to assess any gaps in functionality.
-
-4. Gen2 supports Blob storage features such as [diagnostic logging](../common/storage-analytics-logging.md), [access tiers](access-tiers-overview.md), and [Blob storage lifecycle management policies](./lifecycle-management-overview.md). If you're interested in using any of these features, review the [current level of support](./storage-feature-support-in-storage-accounts.md).
-
-5. Review the current state of [Azure ecosystem support](./data-lake-storage-multi-protocol-access.md) to ensure that Gen2 supports any services that your solutions depend upon.
-
-### Step 2: Prepare to migrate
-
-1. Identify the data sets that you'll migrate.
-
   Take this opportunity to clean up data sets that you no longer use. Unless you plan to migrate all of your data at one time, identify logical groups of data that you can migrate in phases.
-
- Perform an [Ageing Analysis](https://github.com/Azure/adlsgen1togen2migration/tree/main/3-Migrate/Utilities/Ageing%20Analysis) (or similar) on your Gen1 account to identify which files or folders stay in inventory for a long time or are perhaps becoming obsolete.
-
-2. Determine the impact that a migration will have on your business.
-
- For example, consider whether you can afford any downtime while the migration takes place. These considerations can help you to identify a suitable migration pattern, and to choose the most appropriate tools.
-
-3. Create a migration plan.
-
- We recommend these [migration patterns](#migration-patterns). You can choose one of these patterns, combine them together, or design a custom pattern of your own.
-
-### Step 3: Migrate data, workloads, and applications
-
-Migrate data, workloads, and applications by using the pattern that you prefer. We recommend that you validate scenarios incrementally.
-
-1. [Create a storage account](create-data-lake-storage-account.md) and enable the hierarchical namespace feature.
-
-2. Migrate your data.
-
-3. Configure [services in your workloads](./data-lake-storage-supported-azure-services.md) to point to your Gen2 endpoint.
-
- For HDInsight clusters, you can add storage account configuration settings to the %HADOOP_HOME%/conf/core-site.xml file. If you plan to migrate external Hive tables from Gen1 to Gen2, then make sure to add storage account settings to the %HIVE_CONF_DIR%/hive-site.xml file as well.
-
   You can modify the settings in each file by using [Apache Ambari](../../hdinsight/hdinsight-hadoop-manage-ambari.md). To find storage account settings, see [Hadoop Azure Support: ABFS - Azure Data Lake Storage Gen2](https://hadoop.apache.org/docs/stable/hadoop-azure/abfs.html). This example uses the `fs.azure.account.key` setting to enable Shared Key authorization:
-
- ```xml
- <property>
- <name>fs.azure.account.key.abfswales1.dfs.core.windows.net</name>
- <value>your-key-goes-here</value>
- </property>
- ```
-
- For links to articles that help you configure HDInsight, Azure Databricks, and other Azure services to use Gen2, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md).
-
-4. Update applications to use Gen2 APIs. See these guides:
-
-| Environment | Article |
-|--|--|
-|Azure Storage Explorer |[Use Azure Storage Explorer to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-explorer.md)|
-|.NET |[Use .NET to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-dotnet.md)|
-|Java|[Use Java to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-java.md)|
-|Python|[Use Python to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-python.md)|
-|JavaScript (Node.js)|[Use JavaScript SDK in Node.js to manage directories and files in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-javascript.md)|
-|REST API |[Azure Data Lake Store REST API](/rest/api/storageservices/data-lake-storage-gen2)|
-
-5. Update scripts to use Data Lake Storage Gen2 [PowerShell cmdlets](data-lake-storage-directory-file-acl-powershell.md), and [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md).
-
-6. Search for URI references that contain the string `adl://` in code files, or in Databricks notebooks, Apache Hive HQL files or any other file used as part of your workloads. Replace these references with the [Gen2 formatted URI](data-lake-storage-introduction-abfs-uri.md) of your new storage account. For example: the Gen1 URI: `adl://mydatalakestore.azuredatalakestore.net/mydirectory/myfile` might become `abfss://myfilesystem@mydatalakestore.dfs.core.windows.net/mydirectory/myfile`.
-
-7. Configure the security on your account to include [Azure roles](assign-azure-role-data-access.md), [file and folder level security](data-lake-storage-access-control.md), and [Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
-
-### Step 4: Cutover from Gen1 to Gen2
-
-After you're confident that your applications and workloads are stable on Gen2, you can begin using Gen2 to satisfy your business scenarios. Turn off any remaining pipelines that are running on Gen1 and decommission your Gen1 account.
-
-<a id="gen1-gen2-feature-comparison"></a>
-
-## Gen1 vs Gen2 capabilities
-
-This table compares the capabilities of Gen1 with those of Gen2.
-
-|Area |Gen1 |Gen2 |
-||||
-|Data organization|[Hierarchical namespace](data-lake-storage-namespace.md)<br>File and folder support|[Hierarchical namespace](data-lake-storage-namespace.md)<br>Container, file and folder support |
-|Geo-redundancy| [LRS](../common/storage-redundancy.md#locally-redundant-storage)| [LRS](../common/storage-redundancy.md#locally-redundant-storage), [ZRS](../common/storage-redundancy.md#zone-redundant-storage), [GRS](../common/storage-redundancy.md#geo-redundant-storage), [RA-GRS](../common/storage-redundancy.md#read-access-to-data-in-the-secondary-region) |
-|Authentication|[Microsoft Entra managed identity](../../active-directory/managed-identities-azure-resources/overview.md)<br>[Service principals](../../active-directory/develop/app-objects-and-service-principals.md)|[Microsoft Entra managed identity](../../active-directory/managed-identities-azure-resources/overview.md)<br>[Service principals](../../active-directory/develop/app-objects-and-service-principals.md)<br>[Shared Access Key](/rest/api/storageservices/authorize-with-shared-key)|
-|Authorization|Management - [Azure RBAC](../../role-based-access-control/overview.md)<br>Data - [ACLs](data-lake-storage-access-control.md)|Management - [Azure RBAC](../../role-based-access-control/overview.md)<br>Data - [ACLs](data-lake-storage-access-control.md), [Azure RBAC](../../role-based-access-control/overview.md) |
-|Encryption - Data at rest|Server side - with [Microsoft-managed](../common/storage-service-encryption.md?toc=/azure/storage/blobs/toc.json) or [customer-managed](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) keys|Server side - with [Microsoft-managed](../common/storage-service-encryption.md?toc=/azure/storage/blobs/toc.json) or [customer-managed](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) keys|
-|VNET Support|[VNET Integration](../../data-lake-store/data-lake-store-network-security.md)|[Service Endpoints](../common/storage-network-security.md?toc=/azure/storage/blobs/toc.json), [Private Endpoints](../common/storage-private-endpoints.md)|
-|Developer experience|[REST](../../data-lake-store/data-lake-store-data-operations-rest-api.md), [.NET](../../data-lake-store/data-lake-store-data-operations-net-sdk.md), [Java](../../data-lake-store/data-lake-store-get-started-java-sdk.md), [Python](../../data-lake-store/data-lake-store-data-operations-python.md), [PowerShell](../../data-lake-store/data-lake-store-get-started-powershell.md), [Azure CLI](../../data-lake-store/data-lake-store-get-started-cli-2.0.md)|Generally available - [REST](/rest/api/storageservices/data-lake-storage-gen2), [.NET](data-lake-storage-directory-file-acl-dotnet.md), [Java](data-lake-storage-directory-file-acl-java.md), [Python](data-lake-storage-directory-file-acl-python.md)<br>Public preview - [JavaScript](data-lake-storage-directory-file-acl-javascript.md), [PowerShell](data-lake-storage-directory-file-acl-powershell.md), [Azure CLI](data-lake-storage-directory-file-acl-cli.md)|
-|Resource logs|Classic logs<br>[Azure Monitor integrated](../../data-lake-store/data-lake-store-diagnostic-logs.md)|[Classic logs](../common/storage-analytics-logging.md) - Generally available<br>[Azure Monitor integrated](monitor-blob-storage.md) - Preview|
-|Ecosystem|[HDInsight (3.6)](../../data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md), [Azure Databricks (3.1 and above)](https://docs.databricks.com/dat)|
-
-<a id="migration-patterns"></a>
-
-## Gen1 to Gen2 patterns
-
-Choose a migration pattern, and then modify that pattern as needed.
-
-|Migration pattern | Details |
-|||
-|**Lift and Shift**|The simplest pattern. Ideal if your data pipelines can afford downtime.|
-|**Incremental copy**|Similar to *lift and shift*, but with less downtime. Ideal for large amounts of data that take longer to copy.|
-|**Dual pipeline**|Ideal for pipelines that can't afford any downtime.|
-|**Bidirectional sync**|Similar to *dual pipeline*, but with a more phased approach that is suited for more complicated pipelines.|
-
-Let's take a closer look at each pattern.
-
-### Lift and shift pattern
-
-This is the simplest pattern.
-
-1. Stop all writes to Gen1.
-
-2. Move data from Gen1 to Gen2. We recommend using [Azure Data Factory](../../data-factory/connector-azure-data-lake-storage.md) or the [Azure portal](data-lake-storage-migrate-gen1-to-gen2-azure-portal.md). ACLs copy with the data.
-
-3. Point ingest operations and workloads to Gen2.
-
-4. Decommission Gen1.
-
-Check out our sample code for the lift and shift pattern in our [Lift and Shift migration sample](https://github.com/Azure/adlsgen1togen2migration/tree/main/3-Migrate/Lift%20and%20Shift).
-
-> [!div class="mx-imgBorder"]
-> ![lift and shift pattern](./media/data-lake-storage-migrate-gen1-to-gen2/lift-and-shift.png)
-
-#### Considerations for using the lift and shift pattern
-- Cutover from Gen1 to Gen2 for all workloads at the same time.
-- Expect downtime during the migration and the cutover period.
-- Ideal for pipelines that can afford downtime and where all apps can be upgraded at one time.
-
-> [!TIP]
-> Consider using the [Azure portal](data-lake-storage-migrate-gen1-to-gen2-azure-portal.md) to shorten downtime and reduce the number of steps required by you to complete the migration.
-
-### Incremental copy pattern
-
-1. Start moving data from Gen1 to Gen2. We recommend [Azure Data Factory](../../data-factory/connector-azure-data-lake-storage.md). ACLs copy with the data.
-
-2. Incrementally copy new data from Gen1.
-
-3. After all data is copied, stop all writes to Gen1, and point workloads to Gen2.
-
-4. Decommission Gen1.
-
-Check out our sample code for the incremental copy pattern in our [Incremental copy migration sample](https://github.com/Azure/adlsgen1togen2migration/tree/main/3-Migrate/Incremental).
-
-> [!div class="mx-imgBorder"]
-> ![Incremental copy pattern](./media/data-lake-storage-migrate-gen1-to-gen2/incremental-copy.png)
-
-#### Considerations for using the incremental copy pattern:
-- Cutover from Gen1 to Gen2 for all workloads at the same time.
-- Expect downtime during the cutover period only.
-- Ideal for pipelines where all apps are upgraded at one time, but the data copy requires more time.
-
-### Dual pipeline pattern
-
-1. Move data from Gen1 to Gen2. We recommend [Azure Data Factory](../../data-factory/connector-azure-data-lake-storage.md). ACLs copy with the data.
-
-2. Ingest new data to both Gen1 and Gen2.
-
-3. Point workloads to Gen2.
-
-4. Stop all writes to Gen1 and then decommission Gen1.
-
-Check out our sample code for the dual pipeline pattern in our [Dual Pipeline migration sample](https://github.com/Azure/adlsgen1togen2migration/tree/main/3-Migrate/Dual%20pipeline).
-
-> [!div class="mx-imgBorder"]
-> ![Dual pipeline pattern](./media/data-lake-storage-migrate-gen1-to-gen2/dual-pipeline.png)
-
-#### Considerations for using the dual pipeline pattern:
-- Gen1 and Gen2 pipelines run side-by-side.
-- Supports zero downtime.
-- Ideal in situations where your workloads and applications can't afford any downtime, and you can ingest into both storage accounts.
-
-### Bi-directional sync pattern
-
-1. Set up bidirectional replication between Gen1 and Gen2. We recommend [WanDisco](https://docs.wandisco.com/bigdata/wdfusion/adls/). It offers a repair feature for existing data.
-
-2. When all moves are complete, stop all writes to Gen1 and turn off bidirectional replication.
-
-3. Decommission Gen1.
-
-Check out our sample code for the bidirectional sync pattern in our [Bidirectional Sync migration sample](https://github.com/Azure/adlsgen1togen2migration/tree/main/3-Migrate/Bi-directional).
-
-> [!div class="mx-imgBorder"]
-> ![Bidirectional pattern](./media/data-lake-storage-migrate-gen1-to-gen2/bidirectional-sync.png)
-
-#### Considerations for using the bi-directional sync pattern:
-- Ideal for complex scenarios that involve a large number of pipelines and dependencies where a phased approach might make more sense.
-- Migration effort is high, but it provides side-by-side support for Gen1 and Gen2.
-
-## Next steps
-- Learn about the various parts of setting up security for a storage account. For more information, see [Azure Storage security guide](./security-recommendations.md).
-- Optimize the performance for your Data Lake Store. See [Optimize Azure Data Lake Storage Gen2 for performance](./data-lake-storage-best-practices.md).
-- Review the best practices for managing your Data Lake Store. See [Best practices for using Azure Data Lake Storage Gen2](data-lake-storage-best-practices.md).
-
-## See also
-- [Introduction to Azure Data Lake Storage Gen2 (Training module)](/training/modules/introduction-to-azure-data-lake-storage/)
-- [Best practices for using Azure Data Lake Storage Gen2](data-lake-storage-best-practices.md)
-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md)
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Last updated 02/06/2024
ms.devlang: csharp-+ ai-usage: ai-assisted zone_pivot_groups: azure-blob-storage-quickstart-options
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Last updated 02/22/2024
ms.devlang: python-+ ai-usage: ai-assisted zone_pivot_groups: azure-blob-storage-quickstart-options
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Last updated 09/12/2023 -+ # Create a storage account
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
The following table lists services that can access your storage account data if
| Azure Event Grid | `Microsoft.EventGrid/partnerTopics` | Enables access to storage accounts. |
| Azure Event Grid | `Microsoft.EventGrid/systemTopics` | Enables access to storage accounts. |
| Azure Event Grid | `Microsoft.EventGrid/topics` | Enables access to storage accounts. |
+| Microsoft Fabric | `Microsoft.Fabric` | Enables access to storage accounts. |
| Azure Healthcare APIs | `Microsoft.HealthcareApis/services` | Enables access to storage accounts. |
| Azure Healthcare APIs | `Microsoft.HealthcareApis/workspaces` | Enables access to storage accounts. |
| Azure IoT Central | `Microsoft.IoTCentral/IoTApps` | Enables access to storage accounts. |
storage Files Monitoring Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-monitoring-alerts.md
Previously updated : 02/13/2024 Last updated : 03/01/2024
The following table lists some example scenarios to monitor and the proper metri
| File share is throttled. | Metric: Transactions<br>Dimension name: Response type <br>Dimension name: FileShare (premium file share only) |
| File share size is 80% of capacity. | Metric: File Capacity<br>Dimension name: FileShare (premium file share only) |
| File share egress has exceeded 500 GiB in one day. | Metric: Egress<br>Dimension name: FileShare (premium file share only) |
+| File share availability is less than 99.9%. | Metric: Availability<br>Dimension name: FileShare (premium file share only) |
## How to create an alert if a file share is throttled
To create an alert that will notify you if a file share is being throttled, foll
9. Select **Review + create** to create the alert.
-## Create an alert for high server latency
+## How to create an alert for high server latency
To create an alert for high server latency (average), follow these steps.
To create an alert for high server latency (average), follow these steps.
8. Select **Review + create** to create the alert.
+## How to create an alert if the Azure file share availability is less than 99.9%
+
+1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
+
+2. In the **Condition** tab, select the **Availability** metric.
+
+3. In the **Alert logic** section, provide the following:
+ - **Threshold** = **Static**
+ - **Aggregation type** = **Average**
+ - **Operator** = **Less than**
+ - **Threshold value** enter **99.9**
+
+4. In the **Split by dimensions** section:
+ - Select the **Dimension name** drop-down and select **File Share**.
+ - Select the **Dimension values** drop-down and select the file share(s) that you want to alert on.
+
+ > [!NOTE]
+ > If the file share is a standard file share, the **File Share** dimension won't list the file share(s) because per-share metrics aren't available for standard file shares. Availability alerts for standard file shares will be at the storage account level.
+
+5. In the **When to evaluate** section, select the following:
+ - **Check every** = **5 minutes**
+ - **Lookback period** = **1 hour**
+
+6. Click **Next** to go to the **Actions** tab and add an action group (email, SMS, etc.) to the alert. You can select an existing action group or create a new action group.
+
+7. Click **Next** to go to the **Details** tab and fill in the details of the alert such as the alert name, description, and severity.
+
+8. Select **Review + create** to create the alert.
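If you prefer to script the same alert instead of using the portal, the following Az.Monitor sketch creates an equivalent rule. The subscription, resource group, storage account, file share, and action group values are placeholders, and the `fileServices/default` target resource path is an assumption to verify against your environment:

```PowerShell
# Placeholders - replace with your own values.
$targetResourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default"
$actionGroupId    = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>"

# Availability < 99.9%, split by the FileShare dimension (premium file shares only).
$dimension = New-AzMetricAlertRuleV2DimensionSelection -DimensionName "FileShare" -ValuesToInclude "<file-share-name>"
$criteria  = New-AzMetricAlertRuleV2Criteria -MetricName "Availability" -DimensionSelection $dimension `
    -TimeAggregation Average -Operator LessThan -Threshold 99.9

Add-AzMetricAlertRuleV2 -Name "file-share-availability" -ResourceGroupName "<resource-group>" `
    -TargetResourceId $targetResourceId -Condition $criteria -ActionGroupId $actionGroupId `
    -WindowSize 01:00:00 -Frequency 00:05:00 -Severity 2 `
    -Description "Alert when file share availability drops below 99.9%"
```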
+
## Related content

- [Monitor Azure Files](storage-files-monitoring.md)
storage Storage Files Identity Auth Domain Services Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 11/28/2023 Last updated : 03/01/2024 recommendations: false # Enable Microsoft Entra Domain Services authentication on Azure Files+ [!INCLUDE [storage-files-aad-auth-include](../../../includes/storage-files-aad-auth-include.md)] This article focuses on enabling and configuring Microsoft Entra Domain Services (formerly Azure Active Directory Domain Services) for identity-based authentication with Azure file shares. In this authentication scenario, Microsoft Entra credentials and Microsoft Entra Domain Services credentials are the same and can be used interchangeably.
If you're new to Azure Files, we recommend reading our [planning guide](storage-
> Azure Files supports authentication for Microsoft Entra Domain Services with full or partial (scoped) synchronization with Microsoft Entra ID. For environments with scoped synchronization present, administrators should be aware that Azure Files only honors Azure RBAC role assignments granted to principals that are synchronized. Role assignments granted to identities not synchronized from Microsoft Entra ID to Microsoft Entra Domain Services will be ignored by the Azure Files service.

## Applies to
+
| File share type | SMB | NFS |
|-|:-:|:-:|
| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
Follow these steps to grant access to Azure Files resources with Microsoft Entra
The following diagram illustrates the end-to-end workflow for enabling Microsoft Entra Domain Services authentication over SMB for Azure Files.
-![Diagram showing Microsoft Entra ID over SMB for Azure Files workflow](media/storage-files-active-directory-enable/azure-active-directory-over-smb-workflow.png)
<a name='enable-azure-ad-ds-authentication-for-your-account'></a>
Keep in mind that you can enable Microsoft Entra Domain Services authentication
To enable Microsoft Entra Domain Services authentication over SMB with the [Azure portal](https://portal.azure.com), follow these steps:

1. In the Azure portal, go to your existing storage account, or [create a storage account](../common/storage-account-create.md).
-1. In the **File shares** section, select **Active directory: Not Configured**.
+1. Select **Data storage** > **File shares**.
+1. In the **File share settings** section, select **Identity-based access: Not configured**.
- :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png" alt-text="Screenshot of the File shares pane in your storage account, Active directory is highlighted." lightbox="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png":::
+ :::image type="content" source="media/storage-files-identity-auth-domain-services-enable/enable-entra-storage-account-identity.png" alt-text="Screenshot of the file shares pane in your storage account, identity-based access is highlighted." lightbox="media/storage-files-identity-auth-domain-services-enable/enable-entra-storage-account-identity.png":::
-1. Select **Microsoft Entra Domain Services** then enable the feature by ticking the checkbox.
+1. Under **Microsoft Entra Domain Services** select **Set up**, then enable the feature by ticking the checkbox.
1. Select **Save**.
- :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-ds-highlight.png" alt-text="Screenshot of the Active Directory pane, Microsoft Entra Domain Services is enabled." lightbox="media/storage-files-active-directory-enable/files-azure-ad-ds-highlight.png":::
+ :::image type="content" source="media/storage-files-identity-auth-domain-services-enable/entra-domain-services-highlight.png" alt-text="Screenshot of the identity-based access configuration pane, Microsoft Entra Domain Services is enabled as the source." lightbox="media/storage-files-identity-auth-domain-services-enable/entra-domain-services-highlight.png":::
# [PowerShell](#tab/azure-powershell)
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
The following table lists common and recommended alert rules for Azure Files and
|Metric | File share size is 80% of capacity. | File Capacity<br>Dimension name: FileShare (premium file share only) |
|Metric | File share egress exceeds 500 GiB in one day. | Egress<br>Dimension name: FileShare (premium file share only) |
|Metric | High server latency. | Success Server Latency<br>Dimension name: API Name, for example Read and Write API|
+|Metric | File share availability is less than 99.9%. | Availability<br>Dimension name: FileShare (premium file share only) |
For instructions on how to create alerts on throttling, capacity, egress, and high server latency, see [Create monitoring alerts for Azure Files](files-monitoring-alerts.md).
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/automation-powershell.md
Title: Auto-pause an Azure Stream Analytics with PowerShell
-description: This article describes how to auto-pause an Azure Stream Analytics job on a schedule with PowerShell
+ Title: Automatically pause an Azure Stream Analytics job with PowerShell
+description: This article describes how to automatically pause an Azure Stream Analytics job on a schedule by using PowerShell.
Last updated 11/03/2021
-# Auto-pause a job with PowerShell and Azure Functions or Azure Automation
+# Automatically pause a job by using PowerShell and Azure Functions or Azure Automation
-Some applications require a stream processing approach, made easy with [Azure Stream Analytics](./stream-analytics-introduction.md) (ASA), but don't strictly need to run continuously. The reasons are various:
+Some applications require a stream processing approach (such as through [Azure Stream Analytics](./stream-analytics-introduction.md)) but don't strictly need to run continuously. The reasons include:
-- Input data arriving on a schedule (top of the hour...)
+- Input data that arrives on a schedule (for example, top of the hour)
- A sparse or low volume of incoming data (few records per minute)
-- Business processes that benefit from time-windowing capabilities, but are running in batch by essence (Finance or HR...)
-- Demonstrations, prototypes, or tests that involve **long running jobs at low scale**
+- Business processes that benefit from time-windowing capabilities but that run in batch by essence (for example, finance or HR)
+- Demonstrations, prototypes, or tests that involve long-running jobs at low scale
-The benefit of not running these jobs continuously will be **cost savings**, as Stream Analytics jobs are [billed](https://azure.microsoft.com/pricing/details/stream-analytics/) per Streaming Unit **over time.**
+The benefit of not running these jobs continuously is cost savings, because Stream Analytics jobs are [billed](https://azure.microsoft.com/pricing/details/stream-analytics/) per Streaming Unit over time.
-This article will explain how to set up auto-pause for an Azure Stream Analytics job. In it, we configure a task that automatically pauses and resumes a job on a schedule. If we're using the term **pause**, the actual job [state](./job-states.md) is **stopped**, as to avoid any billing.
+This article explains how to set up automatic pausing for an Azure Stream Analytics job. You configure a task that automatically pauses and resumes a job on a schedule. The term *pause* means that the job [state](./job-states.md) is **Stopped** to avoid any billing.
-We'll discuss the overall design first, then go through the required components, and finally discuss some implementation details.
+This article discusses the overall design, the required components, and some implementation details.
> [!NOTE]
-> There are downsides to auto-pausing a job. The main ones being the loss of the low latency / real time capabilities, and the potential risks from allowing the input event backlog to grow unsupervised while a job is paused. Auto-pausing should not be considered for most production scenarios running at scale
+> There are downsides to automatically pausing a job. The main downsides are the loss of low-latency/real-time capabilities and the potential risks from allowing the input event backlog to grow unsupervised while a job is paused. Organizations shouldn't consider automatic pausing for most production scenarios that run at scale.
## Design
-For this example, we want our job to run for N minutes, before pausing it for M minutes. When the job is paused, the input data won't be consumed, accumulating upstream. After the job is started, it will catch-up with that backlog, process the data trickling in, before being shut down again.
+For the example in this article, you want your job to run for *N* minutes before pausing it for *M* minutes. When the job is paused, the input data isn't consumed and accumulates upstream. After the job starts, it catches up with that backlog and processes the data trickling in before it's shut down again.
-![Diagram that illustrates the behavior of the auto-paused job over time](./media/automation/principle.png)
+![Diagram that illustrates the behavior of an automatically paused job over time.](./media/automation/principle.png)
-When running, the task shouldn't stop the job until its metrics are healthy. The metrics of interest will be the input backlog and the [watermark](./stream-analytics-time-handling.md#background-time-concepts). We'll check that both are at their baseline for at least N minutes. This behavior translates to two actions:
+When the job is running, the task shouldn't stop the job until its metrics are healthy. The metrics of interest are the input backlog and the [watermark](./stream-analytics-time-handling.md#background-time-concepts). You'll check that both are at their baseline for at least *N* minutes. This behavior translates to two actions:
-- A stopped job is restarted after M minutes
-- A running job is stopped anytime after N minutes, as soon as its backlog and watermark metrics are healthy
+- A stopped job is restarted after *M* minutes.
+- A running job is stopped anytime after *N* minutes, as soon as its backlog and watermark metrics are healthy.
-![Diagram that shows the possible states of the job](./media/automation/States.png)
+![Diagram that shows the possible states of a job.](./media/automation/States.png)
-As an example, let's consider N = 5 minutes, and M = 10 minutes. With these settings, a job has at least 5 minutes to process all the data received in 15. Potential cost savings are up to 66%.
+As an example, consider that *N* = 5 minutes and *M* = 10 minutes. With these settings, a job has at least 5 minutes to process all the data received in 15. Potential cost savings are up to 66%.
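A quick sanity check of that savings figure, with *N* and *M* expressed in minutes:

```PowerShell
# N minutes running out of every N + M minutes; the paused time is not billed.
$N = 5
$M = 10
"Potential savings: {0:P1}" -f ($M / ($N + $M))   # roughly 66%
```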
-To restart the job, we'll use the `When Last Stopped` [start option](./start-job.md#start-options). This option tells ASA to process all the events that were backlogged upstream since the job was stopped. There are two caveats in this situation. First, the job can't stay stopped longer than the retention period of the input stream. If we only run the job once a day, we need to make sure that the [event hub retention period](/azure/event-hubs/event-hubs-faq#what-is-the-maximum-retention-period-for-events-) is more than one day. Second, the job needs to have been started at least once for the mode `When Last Stopped` to be accepted (else it has literally never been stopped before). So the first run of a job needs to be manual, or we would need to extend the script to cover for that case.
+To restart the job, use the **When Last Stopped** [start option](./start-job.md#start-options). This option tells Stream Analytics to process all the events that were backlogged upstream since the job was stopped.
-The last consideration is to make these actions idempotent. This way, they can be repeated at will with no side effects, for both ease of use and resiliency.
+There are two caveats in this situation. First, the job can't stay stopped longer than the retention period of the input stream. If you run the job only once a day, you need to make sure that the [retention period for events](/azure/event-hubs/event-hubs-faq#what-is-the-maximum-retention-period-for-events-) is more than one day. Second, the job needs to have been started at least once for the mode **When Last Stopped** to be accepted (or else it has literally never been stopped before). So the first run of a job needs to be manual, or you need to extend the script to cover for that case.
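For reference, restarting the job in that mode from PowerShell (using the variables defined in the script later in this article) looks roughly like the following. This assumes that your installed version of the Az.StreamAnalytics module exposes an `-OutputStartMode` parameter that accepts `LastOutputEventTime`; verify the parameter against your module before relying on it:

```PowerShell
# Resume processing from the last output event time (parameter name is an assumption to verify).
Start-AzStreamAnalyticsJob -ResourceGroupName $resourceGroupName -Name $asaJobName -OutputStartMode LastOutputEventTime
```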
+
+The last consideration is to make these actions idempotent. You can then repeat them at will with no side effects, for both ease of use and resiliency.
## Components

### API calls
-We anticipate the need to interact with ASA on the following **aspects**:
+This article anticipates the need to interact with Stream Analytics on the following aspects:
+
+- Get the current job status (Stream Analytics resource management):
+ - If the job is running:
+ - Get the time since the job started (logs).
+ - Get the current metric values (metrics).
+ - If applicable, stop the job (Stream Analytics resource management).
+ - If the job is stopped:
+ - Get the time since the job stopped (logs).
+ - If applicable, start the job (Stream Analytics resource management).
-- **Get the current job status** (*ASA Resource Management*)
- - If running
- - **Get the time since started** (*Logs*)
- - **Get the current metric values** (*Metrics*)
- - If applicable, **stop the job** (*ASA Resource Management*)
- - If stopped
- - **Get the time since stopped** (*Logs*)
- - If applicable, **start the job** (*ASA Resource Management*)
+For Stream Analytics resource management, you can use the [REST API](/rest/api/streamanalytics/), the [.NET SDK](/dotnet/api/microsoft.azure.management.streamanalytics), or one of the CLI libraries ([Azure CLI](/cli/azure/stream-analytics) or [PowerShell](/powershell/module/az.streamanalytics)).
-For *ASA Resource Management*, we can use either the [REST API](/rest/api/streamanalytics/), the [.NET SDK](/dotnet/api/microsoft.azure.management.streamanalytics) or one of the CLI libraries ([Az CLI](/cli/azure/stream-analytics), [PowerShell](/powershell/module/az.streamanalytics)).
+For metrics and logs, everything in Azure is centralized under [Azure Monitor](../azure-monitor/overview.md), with a similar choice of API surfaces. Logs and metrics are always 1 to 3 minutes behind when you're querying the APIs. So setting *N* at 5 usually means the job runs 6 to 8 minutes in reality.
-For *Metrics* and *Logs*, in Azure everything is centralized under [Azure Monitor](../azure-monitor/overview.md), with a similar choice of API surfaces. We have to remember that logs and metrics are always 1 to 3 minutes behind when querying the APIs. So setting N at 5 usually means the job will be running 6 to 8 minutes in reality. Another thing to consider is that metrics are always emitted. When the job is stopped, the API returns empty records. We'll have to clean up the output of our API calls to only look at relevant values.
+Another consideration is that metrics are always emitted. When the job is stopped, the API returns empty records. You have to clean up the output of your API calls to focus on relevant values.
### Scripting language
-For this article, we decided to implement auto-pause in **PowerShell**. The first reason is that [PowerShell](/powershell/scripting/overview) is now cross-platform. It can run on any OS, which makes deployments easier. The second reason is that it takes and returns objects rather than strings. Objects make parsing and processing easy for automation tasks.
+This article implements automatic pausing in [PowerShell](/powershell/scripting/overview). The first reason for this choice is that PowerShell is now cross-platform. It can run on any operating system, which makes deployments easier. The second reason is that it takes and returns objects rather than strings. Objects make parsing and processing easier for automation tasks.
-In PowerShell, we'll use the [Az PowerShell](/powershell/azure/new-azureps-module-az) module, which embarks [Az.Monitor](/powershell/module/az.monitor/) and [Az.StreamAnalytics](/powershell/module/az.streamanalytics/), for everything we need:
+In PowerShell, use the [Az PowerShell](/powershell/azure/new-azureps-module-az) module (which embarks [Az.Monitor](/powershell/module/az.monitor/) and [Az.StreamAnalytics](/powershell/module/az.streamanalytics/)) for everything you need:
- [Get-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/get-azstreamanalyticsjob) for the current job status
-- [Start-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/start-azstreamanalyticsjob) / [Stop-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/stop-azstreamanalyticsjob)
-- [Get-AzMetric](/powershell/module/az.monitor/get-azmetric) with `InputEventsSourcesBacklogged` [(from ASA metrics)](../azure-monitor/essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs)
-- [Get-AzActivityLog](/powershell/module/az.monitor/get-azactivitylog) for event names beginning with `Stop Job`
+- [Start-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/start-azstreamanalyticsjob) or [Stop-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/stop-azstreamanalyticsjob)
+- [Get-AzMetric](/powershell/module/az.monitor/get-azmetric) with `InputEventsSourcesBacklogged` (from [Stream Analytics metrics](../azure-monitor/essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs))
+- [Get-AzActivityLog](/powershell/module/az.monitor/get-azactivitylog) for event names that begin with `Stop Job`
### Hosting service
-To host our PowerShell task, we'll need a service that offers scheduled runs. There are lots of options, but looking at serverless ones:
+To host your PowerShell task, you need a service that offers scheduled runs. There are many options, but here are two serverless ones:
-- [Azure Functions](../azure-functions/functions-overview.md), a serverless compute engine that can run almost any piece of code. Functions offer a [timer trigger](../azure-functions/functions-bindings-timer.md?tabs=csharp) that can run up to every second
-- [Azure Automation](../automation/overview.md), a managed service built for operating cloud workloads and resources. Which fits the bill, but whose minimal schedule interval is 1 hour (less with [workarounds](../automation/shared-resources/schedules.md#schedule-runbooks-to-run-more-frequently)).
+- [Azure Functions](../azure-functions/functions-overview.md), a compute engine that can run almost any piece of code. It offers a [timer trigger](../azure-functions/functions-bindings-timer.md?tabs=csharp) that can run up to every second.
+- [Azure Automation](../automation/overview.md), a managed service for operating cloud workloads and resources. Its purpose is appropriate, but its minimal schedule interval is 1 hour (less with [workarounds](../automation/shared-resources/schedules.md#schedule-runbooks-to-run-more-frequently)).
-If we don't mind the workaround, Azure Automation is the easier way to deploy the task. But to be able to compare, in this article we'll be writing a local script first. Once we have a functioning script, we'll deploy it both in Functions and in an Automation Account.
+If you don't mind the workarounds, Azure Automation is the easier way to deploy the task. But in this article, you write a local script first so you can compare. After you have a functioning script, you deploy it both in Functions and in an Automation account.
### Developer tools
-We highly recommend local development using [VSCode](https://code.visualstudio.com/), both for [Functions](../azure-functions/create-first-function-vs-code-powershell.md) and [ASA](./quick-create-visual-studio-code.md). Using a local IDE allows us to use source control and to easily repeat deployments. But for the sake of brevity, here we'll illustrate the process in the [Azure portal](https://portal.azure.com).
+We highly recommend local development through [Visual Studio Code](https://code.visualstudio.com/), for both [Functions](../azure-functions/create-first-function-vs-code-powershell.md) and [Stream Analytics](./quick-create-visual-studio-code.md). Using a local development environment allows you to use source control and helps you easily repeat deployments. But for the sake of brevity, this article illustrates the process in the [Azure portal](https://portal.azure.com).
## Writing the PowerShell script locally
-The best way to develop the script is locally. PowerShell being cross-platform, the script can be written and tested on any OS. On Windows we can use [Windows Terminal](https://www.microsoft.com/p/windows-terminal/9n0dx20hk701) with [PowerShell 7](/powershell/scripting/install/installing-powershell-on-windows), and [Az PowerShell](/powershell/azure/install-azure-powershell).
+The best way to develop the script is locally. Because PowerShell is cross-platform, you can write the script and test it on any operating system. On Windows, you can use [Windows Terminal](https://www.microsoft.com/p/windows-terminal/9n0dx20hk701) with [PowerShell 7](/powershell/scripting/install/installing-powershell-on-windows) and [Azure PowerShell](/powershell/azure/install-azure-powershell).
-The final script that will be used is available for [Functions](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/run.ps1) (and [Azure Automation](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/runbook.ps1)). It's different than the one explained below, having been wired to the hosting environment (Functions or Automation). We'll discuss that aspect later. First, let's step through a version of it that only **runs locally**.
+The final script that this article uses is available for [Azure Functions](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/run.ps1) and [Azure Automation](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/runbook.ps1). It's different from the following script in that it's wired to the hosting environment (Functions or Automation). This article discusses that aspect later. First, you step through a version of the script that runs only locally.
-This script is purposefully written in a simple form, so it can be understood by all.
+This script is purposefully written in a simple form, so everyone can understand it.
-At the top, we set the required parameters, and check the initial job status:
+At the top, you set the required parameters and check the initial job status:
```PowerShell
At the top, we set the required parameters, and check the initial job status:
$restartThresholdMinute = 10 # This is M
$stopThresholdMinute = 5 # This is N
-$maxInputBacklog = 0 # The amount of backlog we tolerate when stopping the job (in event count, 0 is a good starting point)
-$maxWatermark = 10 # The amount of watermark we tolerate when stopping the job (in seconds, 10 is a good starting point at low SUs)
+$maxInputBacklog = 0 # The amount of backlog you tolerate when stopping the job (in event count, 0 is a good starting point)
+$maxWatermark = 10 # The amount of watermark you tolerate when stopping the job (in seconds, 10 is a good starting point at low Streaming Units)
$subscriptionId = "<Replace with your Subscription Id - not the name>"
$resourceGroupName = "<Replace with your Resource Group Name>"
-$asaJobName = "<Replace with your ASA job name>"
+$asaJobName = "<Replace with your Stream Analytics job name>"
$resourceId = "/subscriptions/$($subscriptionId )/resourceGroups/$($resourceGroupName )/providers/Microsoft.StreamAnalytics/streamingjobs/$($asaJobName)"
-# If not already logged, uncomment and run the 2 following commands
+# If not already logged, uncomment and run the two following commands:
# Connect-AzAccount
# Set-AzContext -SubscriptionId $subscriptionId
-# Check current ASA job status
+# Check current Stream Analytics job status
$currentJobState = Get-AzStreamAnalyticsJob -ResourceGroupName $resourceGroupName -Name $asaJobName | Foreach-Object {$_.JobState}
Write-Output "asaRobotPause - Job $($asaJobName) is $($currentJobState)."
```
-Then if the job is running, we check if the job has been running at least N minutes, its backlog, and its watermark.
+If the job is running, you then check if the job has been running at least *N* minutes. You also check its backlog and its watermark.
```PowerShell
# Switch state
if ($currentJobState -eq "Running")
{
- # First we look up the job start time with Get-AzActivityLog
- ## Get-AzActivityLog issues warnings about deprecation coming in future releases, here we ignore them via -WarningAction Ignore
- ## We check in 1000 record of history, to make sure we're not missing what we're looking for. It may need adjustment for a job that has a lot of logging happening.
- ## There is a bug in Get-AzActivityLog that triggers an error when Select-Object First is in the same pipeline (on the same line). We move it down.
+ # First, look up the job start time with Get-AzActivityLog
+ ## Get-AzActivityLog issues warnings about deprecation coming in future releases. Here you ignore them via -WarningAction Ignore.
+ ## You check in 1,000 records of history, to make sure you're not missing what you're looking for. It might need adjustment for a job that has a lot of logging happening.
+ ## There's a bug in Get-AzActivityLog that triggers an error when Select-Object First is in the same pipeline (on the same line). So you move it down.
    $startTimeStamp = Get-AzActivityLog -ResourceId $resourceId -MaxRecord 1000 -WarningAction Ignore | Where-Object {$_.EventName.Value -like "Start Job*"}
    $startTimeStamp = $startTimeStamp | Select-Object -First 1 | Foreach-Object {$_.EventTimeStamp}
- # Then we gather the current metric values
- ## Get-AzMetric issues warnings about deprecation coming in future releases, here we ignore them via -WarningAction Ignore
+ # Then gather the current metric values
+ ## Get-AzMetric issues warnings about deprecation coming in future releases. Here you ignore them via -WarningAction Ignore.
    $currentBacklog = Get-AzMetric -ResourceId $resourceId -TimeGrain 00:01:00 -MetricName "InputEventsSourcesBacklogged" -DetailedOutput -WarningAction Ignore
    $currentWatermark = Get-AzMetric -ResourceId $resourceId -TimeGrain 00:01:00 -MetricName "OutputWatermarkDelaySeconds" -DetailedOutput -WarningAction Ignore
- # Metric are always lagging 1-3 minutes behind, so grabbing the last N minutes means checking N+3 actually. This may be overly safe and fined tune down per job.
+ # Metrics are always lagging 1-3 minutes behind, so grabbing the last N minutes actually means checking N+3. This might be overly safe and can be fine-tuned down per job.
$Backlog = $currentBacklog.Data |
- Where-Object {$_.Maximum -ge 0} | # We remove the empty records (when the job is stopped or starting)
+ Where-Object {$_.Maximum -ge 0} | # Remove the empty records (when the job is stopped or starting)
Sort-Object -Property Timestamp -Descending |
- Where-Object {$_.Timestamp -ge $startTimeStamp} | # We only keep the records of the latest run
- Select-Object -First $stopThresholdMinute | # We take the last N records
- Measure-Object -Sum Maximum # We sum over those N records
+ Where-Object {$_.Timestamp -ge $startTimeStamp} | # Keep only the records of the latest run
+ Select-Object -First $stopThresholdMinute | # Take the last N records
+ Measure-Object -Sum Maximum # Sum over those N records
$BacklogSum = $Backlog.Sum $Watermark = $currentWatermark.Data |
Sort-Object -Property Timestamp -Descending | Where-Object {$_.Timestamp -ge $startTimeStamp} | Select-Object -First $stopThresholdMinute |
- Measure-Object -Average Maximum # Here we average
- $WatermarkAvg = [int]$Watermark.Average # Rounding the decimal value casting it to integer
+ Measure-Object -Average Maximum # Here you average
+ $WatermarkAvg = [int]$Watermark.Average # Rounding the decimal value and casting it to integer
- # Since we called Get-AzMetric with a TimeGrain of a minute, counting the number of records gives us the duration in minutes
+ # Because you called Get-AzMetric with a TimeGrain of a minute, counting the number of records gives you the duration in minutes
Write-Output "asaRobotPause - Job $($asaJobName) is running since $($startTimeStamp) with a sum of $($BacklogSum) backlogged events, and an average watermark of $($WatermarkAvg) sec, for $($Watermark.Count) minutes." # -le for lesser or equal, -ge for greater or equal
if ($currentJobState -eq "Running")
```
-If the job is stopped, we look in the log when was the last "Stop Job" action:
+If the job is stopped, check the log to find when the last `Stop Job` action happened:
```PowerShell elseif ($currentJobState -eq "Stopped") {
- # First we look up the job start time with Get-AzActivityLog
- ## Get-AzActivityLog issues warnings about deprecation coming in future releases, here we ignore them via -WarningAction Ignore
- ## We check in 1000 record of history, to make sure we're not missing what we're looking for. It may need adjustment for a job that has a lot of logging happening.
- ## There is a bug in Get-AzActivityLog that triggers an error when Select-Object First is in the same pipeline (on the same line). We move it down.
+ # First, look up the job start time with Get-AzActivityLog
+ ## Get-AzActivityLog issues warnings about deprecation coming in future releases. Here you ignore them via -WarningAction Ignore.
+ ## You check in 1,000 records of history, to make sure you're not missing what you're looking for. It might need adjustment for a job that has a lot of logging happening.
+ ## There's a bug in Get-AzActivityLog that triggers an error when Select-Object First is in the same pipeline (on the same line). So you move it down.
$stopTimeStamp = Get-AzActivityLog -ResourceId $resourceId -MaxRecord 1000 -WarningAction Ignore | Where-Object {$_.EventName.Value -like "Stop Job*"} $stopTimeStamp = $stopTimeStamp | Select-Object -First 1 | Foreach-Object {$_.EventTimeStamp}
- # Get-Date returns a local time, we project it to the same time zone (universal) as the result of Get-AzActivityLog that we extracted above
+ # Get-Date returns a local time. You project it to the same time zone (universal) as the result of Get-AzActivityLog that you extracted earlier.
$minutesSinceStopped = ((Get-Date).ToUniversalTime()- $stopTimeStamp).TotalMinutes # -ge for greater or equal
else {
} ```
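The published script then acts on those measurements to decide whether to stop or start the job. The following is a simplified sketch of that decision logic, not the exact code; the script on GitHub adds more guards and logging:

```PowerShell
# Sketch only: stop a drained, healthy job; restart a job that has been stopped long enough
if ($currentJobState -eq "Running" -and
    $BacklogSum -le $maxInputBacklog -and
    $WatermarkAvg -le $maxWatermark -and
    $Watermark.Count -ge $stopThresholdMinute) {
    Write-Output "asaRobotPause - Stopping job $($asaJobName)."
    Stop-AzStreamAnalyticsJob -ResourceGroupName $resourceGroupName -Name $asaJobName | Out-Null
}
elseif ($currentJobState -eq "Stopped" -and $minutesSinceStopped -ge $restartThresholdMinute) {
    Write-Output "asaRobotPause - Restarting job $($asaJobName) from its last output."
    Start-AzStreamAnalyticsJob -ResourceGroupName $resourceGroupName -Name $asaJobName -OutputStartMode LastOutputEventTime | Out-Null
}
```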
-At the end, we log the job completion:
+At the end, log the job completion:
```PowerShell
-# Final ASA job status check
+# Final Stream Analytics job status check
$newJobState = Get-AzStreamAnalyticsJob -ResourceGroupName $resourceGroupName -Name $asaJobName | Foreach-Object {$_.JobState} Write-Output "asaRobotPause - Job $($asaJobName) was $($currentJobState), is now $($newJobState). Job completed." ```
-## Option 1: Hosting the task in Azure Functions
+## Option 1: Host the task in Azure Functions
For reference, the Azure Functions team maintains an exhaustive [PowerShell developer guide](../azure-functions/functions-reference-powershell.md?tabs=portal).
-First we'll need a new **Function App**. A Function App is similar to a solution that can host multiple Functions.
+First, you need a new *function app*. A function app is similar to a solution that can host multiple functions.
-The full procedure is [here](../azure-functions/functions-create-function-app-portal.md#create-a-function-app), but the gist is to go in the [Azure portal](https://portal.azure.com), and create a new Function App with:
+You can get the [full procedure](../azure-functions/functions-create-function-app-portal.md#create-a-function-app), but the gist is to go to the [Azure portal](https://portal.azure.com) and create a new function app with:
- Publish: **Code**
- Runtime: **PowerShell Core**
- Version: **7+**
-Once it's provisioned, let's start with its overall configuration.
+After you provision the function app, start with its overall configuration.
### Managed identity for Azure Functions
-The Function needs permissions to start and stop the ASA job. We'll assign these permissions via a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
+The function needs permissions to start and stop the Stream Analytics job. You assign these permissions by using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-The first step is to enable a **system-assigned managed identity** for the Function, following that [procedure](../app-service/overview-managed-identity.md?tabs=ps%2cportal&toc=/azure/azure-functions/toc.json).
+The first step is to enable a *system-assigned managed identity* for the function, by following [this procedure](../app-service/overview-managed-identity.md?tabs=ps%2cportal&toc=/azure/azure-functions/toc.json).
-Now we can grant the right permissions to that identity on the ASA job we want to auto-pause. For that, in the Portal for the **ASA job** (not the Function one), in **Access control (IAM)**, add a **role assignment** to the role *Contributor* for a member of type *Managed Identity*, selecting the name of the Function above.
+Now you can grant the right permissions to that identity on the Stream Analytics job that you want to automatically pause. For this task, in the portal area for the Stream Analytics job (not the function one), in **Access control (IAM)**, add a role assignment to the role **Contributor** for a member of type **Managed Identity**. Select the name of the function from earlier.
-![Screenshot of IAM settings for the ASA job](./media/automation/function-asa-role.png)
+![Screenshot of access control settings for a Stream Analytics job.](./media/automation/function-asa-role.png)
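If you prefer scripting over the portal, a role assignment along these lines should achieve the same result. This is a sketch that isn't part of the original article; the principal ID and resource ID values are placeholders:

```PowerShell
# Placeholders: the object (principal) ID shown on the function app's Identity page,
# and the resource ID of the Stream Analytics job (same shape as $resourceId earlier)
New-AzRoleAssignment -ObjectId "<principal ID of the function's managed identity>" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.StreamAnalytics/streamingjobs/<asaJobName>"
```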
-In the PowerShell script, we can add a check that ensures the managed identity is set properly (the final script is available [here](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/run.ps1))
+In the PowerShell script, you can add a check to ensure that the managed identity is set properly. (The final script is available on [GitHub](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/run.ps1).)
```PowerShell
-# Check if managed identity has been enabled and granted access to a subscription, resource group, or resource
+# Check if a managed identity has been enabled and granted access to a subscription, resource group, or resource
$AzContext = Get-AzContext -ErrorAction SilentlyContinue if (-not $AzContext.Subscription.Id) {
if (-not $AzContext.Subscription.Id)
```
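The snippet above is truncated in this view. A minimal completion of that check might look like the following; the exact wording in the published script can differ:

```PowerShell
if (-not $AzContext.Subscription.Id) {
    # Fail fast so the failed execution shows up in the logs and can trigger the alert set up later
    throw "asaRobotPause - The managed identity isn't enabled or doesn't have access to the subscription."
}
```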
-We'll also add some logging info to make sure the Function is firing up:
+Add some logging info to make sure that the function is firing up:
```PowerShell
Write-Host "asaRobotPause - PowerShell timer trigger function is starting at tim
### Parameters for Azure Functions
-The best way to pass our parameters to the script in Functions is to use the Function App application settings as [environment variables](../azure-functions/functions-reference-powershell.md?tabs=portal#environment-variables).
+The best way to pass your parameters to the script in Functions is to use the function app's application settings as [environment variables](../azure-functions/functions-reference-powershell.md?tabs=portal#environment-variables).
-To do so, the first step is in the Function App page, to define our parameters as **App Settings** following that [procedure](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings). We'll need:
+The first step is to follow the [procedure](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) to define your parameters as **App Settings** on the page for the function app. You need:
|Name|Value| |-|-|
-|maxInputBacklog|The amount of backlog we tolerate when stopping the job (in event count, 0 is a good starting point)|
-|maxWatermark|The amount of watermark we tolerate when stopping the job (in seconds, 10 is a good starting point at low SUs)|
-|restartThresholdMinute|M: the time (in minutes) until a stopped job is restarted|
-|stopThresholdMinute|N: the time (in minutes) of cool down until a running job is stopped. The input backlog will need to stay at 0 during that time|
-|subscriptionId|The SubscriptionId (not the name) of the ASA job to be auto-paused|
-|resourceGroupName|The Resource Group Name of the ASA job to be auto-paused|
-|asaJobName|The Name of the ASA job to be aut-paused|
+|`maxInputBacklog`|The amount of backlog that you tolerate when stopping the job. The unit is event count; `0` is a good starting point.|
+|`maxWatermark`|The amount of watermark that you tolerate when stopping the job. The unit is seconds; `10` is a good starting point at a low number of streaming units.|
+|`restartThresholdMinute`|*M*: The time (in minutes) until a stopped job is restarted.|
+|`stopThresholdMinute`|*N*: The time (in minutes) of cooldown until a running job is stopped. The input backlog needs to stay at `0` during that time.|
+|`subscriptionId`|The subscription ID (not the name) of the Stream Analytics job to be automatically paused.|
+|`resourceGroupName`|The resource group name of the Stream Analytics job to be automatically paused.|
+|`asaJobName`|The name of the Stream Analytics job to be automatically paused.|
-We'll later need to update our PowerShell script to load the variables accordingly:
+Then, update your PowerShell script to load the variables accordingly:
```PowerShell $maxInputBacklog = $env:maxInputBacklog
$asaJobName = $env:asaJobName
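# The remaining app settings from the table above load the same way (sketch; see the published script for the full version)
$maxWatermark = $env:maxWatermark
$restartThresholdMinute = $env:restartThresholdMinute
$stopThresholdMinute = $env:stopThresholdMinute
$subscriptionId = $env:subscriptionId
$resourceGroupName = $env:resourceGroupName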
### PowerShell module requirements
-The same way we had to install Az PowerShell locally to use the ASA commands (like `Start-AzStreamAnalyticsJob`), we'll need to [add it to the Function App host](../azure-functions/functions-reference-powershell.md?tabs=portal#dependency-management).
+The same way that you had to install Azure PowerShell locally to use the Stream Analytics commands (like `Start-AzStreamAnalyticsJob`), you need to [add it to the function app host](../azure-functions/functions-reference-powershell.md?tabs=portal#dependency-management):
-To do that, we can go in `Functions` > `App files` of the Function App page, select `requirements.psd1`, and uncomment the line `'Az' = '6.*'`. For that change to take effect, the whole app will need to be restarted.
+1. On the page for the function app, under **Functions**, select **App files**, and then select **requirements.psd1**.
+1. Uncomment the line `'Az' = '6.*'`.
+1. To make that change take effect, restart the app.
-![Screenshot of the app files settings for the Function App](./media/automation/function-app-files.png)
+![Screenshot of the app files settings for the function app.](./media/automation/function-app-files.png)
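For reference, after the change, *requirements.psd1* looks roughly like the following. This is a sketch; your file might pin a different major version of the Az module:

```PowerShell
# requirements.psd1 - modules that the Functions host downloads for this app
@{
    'Az' = '6.*'
}
```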
### Creating the function
-Once all that configuration is done, we can create the specific function, inside the Function App, that will run our script.
+After you finish all that configuration, you can create the specific function inside the function app to run the script.
-We'll develop in the portal, a function triggered on a timer (every minute with `0 */1 * * * *`, which [reads](../azure-functions/functions-bindings-timer.md?tabs=csharp#ncrontab-expressions) "*on second 0 of every 1 minute*"):
+In the portal, develop a function that's triggered on a timer. Use the schedule `0 */1 * * * *`, which triggers the function every minute and [reads](../azure-functions/functions-bindings-timer.md?tabs=csharp#ncrontab-expressions) as "on second 0 of every 1 minute."
-![Screenshot of creating a new timer trigger function in the function app](./media/automation/new-function-timer.png)
+![Screenshot of creating a new timer trigger function in a function app.](./media/automation/new-function-timer.png)
-If needed, we can change the timer value in `Integration`, by updating the schedule:
+If necessary, you can change the timer value in **Integration** by updating the schedule.
-![Screenshot of the integration settings of the function](./media/automation/function-timer.png)
+![Screenshot of the integration settings of a function.](./media/automation/function-timer.png)
-Then in `Code + Test`, we can copy our script in `run.ps1` and test it. The full script can be copied from [here](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/run.ps1), the business logic has been moved into a TRY/CATCH statement to generate proper errors if anything fails during processing.
+Then, in **Code + Test**, you can copy your script into *run.ps1* and test it. Or you can copy the full script from [GitHub](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/run.ps1). The business logic was moved into a try/catch statement to generate proper errors if anything fails during processing.
-![Screenshot of Code+Test for the function](./media/automation/function-code.png)
+![Screenshot of the Code+Test pane for the function.](./media/automation/function-code.png)
-We can check that everything runs fine via **Test/Run** in the `Code + Test` pane. We can also look at the `Monitor` pane, but it's always late of a couple of executions.
+You can check that everything runs fine by selecting **Test/Run** on the **Code + Test** pane. You can also check the **Monitor** pane, but it's always late by a couple of executions.
-![Screenshot of the output of a successful run](./media/automation/function-run.png)
+![Screenshot of the output of a successful run.](./media/automation/function-run.png)
### Setting an alert on the function execution
-Finally, we want to be notified via an alert if the function doesn't run successfully. Alerts have a minor cost, but they may prevent more expensive situations.
+Finally, you want to be notified via an alert if the function doesn't run successfully. Alerts have a minor cost, but they might prevent more expensive situations.
-In the **Function App** page, under `Logs`, run the following query that returns all non-successful runs in the last 5 minutes:
+On the page for the function app, under **Logs**, run the following query. It returns all unsuccessful runs in the last 5 minutes.
```SQL requests
requests
| order by failedCount desc ```
-In the query editor, pick `New alert rule`. In the following screen, define the **Measurement** as:
+In the query editor, select **New alert rule**. On the pane that opens, define **Measurement** as:
-- Measure: failedCount-- Aggregation type: Total-- Aggregation granularity: 5 minutes
+- Measure: **failedCount**
+- Aggregation type: **Total**
+- Aggregation granularity: **5 minutes**
-Next set up the **Alert logic** as follows:
+Next, set up **Alert logic** as follows:
-- Operator: Greater than-- Threshold value: 0-- Frequency of evaluation: 5 minutes
+- Operator: **Greater than**
+- Threshold value: **0**
+- Frequency of evaluation: **5 minutes**
-From there, reuse or create a new [action group](../azure-monitor/alerts/action-groups.md?WT.mc_id=Portal-Microsoft_Azure_Monitoring), then complete the configuration.
+From there, reuse or create a new [action group](../azure-monitor/alerts/action-groups.md?WT.mc_id=Portal-Microsoft_Azure_Monitoring). Then complete the configuration.
-To check that the alert was set up properly, we can add `throw "Testing the alert"` anywhere in the PowerShell script, and wait 5 minutes to receive an email.
+To check that you set up the alert properly, you can add `throw "Testing the alert"` anywhere in the PowerShell script and then wait 5 minutes to receive an email.
-## Option 2: Hosting the task in Azure Automation
+## Option 2: Host the task in Azure Automation
-First we'll need a new **Automation Account**. An Automation Account is similar to a solution that can host multiple runbooks.
+First, you need a new Automation account. An Automation account is similar to a solution that can host multiple runbooks.
-The procedure is [here](../automation/quickstarts/create-azure-automation-account-portal.md). Here we can select to use a system-assigned managed identity directly in the `advanced` tab.
+For the procedure, see the [Create an Automation account using the Azure portal](../automation/quickstarts/create-azure-automation-account-portal.md) quickstart. You can choose to use a system-assigned managed identity directly on the **Advanced** tab.
-For reference, the Automation team has a [good tutorial](../automation/learn/powershell-runbook-managed-identity.md) to get started on PowerShell runbooks.
+For reference, the Automation team has a [tutorial](../automation/learn/powershell-runbook-managed-identity.md) for getting started on PowerShell runbooks.
### Parameters for Azure Automation
-With a runbook we can use the classic parameter syntax of PowerShell to pass arguments:
+With a runbook, you can use the classic parameter syntax of PowerShell to pass arguments:
```PowerShell Param(
Param(
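    # Sketch of a possible parameter list, mirroring the settings used in the Functions option.
    # The published runbook on GitHub is authoritative.
    [int]    $restartThresholdMinute,
    [int]    $stopThresholdMinute,
    [int]    $maxInputBacklog,
    [int]    $maxWatermark,
    [string] $subscriptionId,
    [string] $resourceGroupName,
    [string] $asaJobName
)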
### Managed identity for Azure Automation
-The Automation Account should have received a managed identity during provisioning. But if needed, we can enable one using that [procedure](../automation/enable-managed-identity-for-automation.md).
+The Automation account should have received a managed identity during provisioning. But if necessary, you can enable a managed identity by using [this procedure](../automation/enable-managed-identity-for-automation.md).
-Like for the function, we'll need to grant the right permissions on the ASA job we want to auto-pause.
+Like you did for the function, you need to grant the right permissions on the Stream Analytics job that you want to automatically pause.
-For that, in the Portal for the **ASA job** (not the Automation page), in **Access control (IAM)**, add a **role assignment** to the role *Contributor* for a member of type *Managed Identity*, selecting the name of the Automation Account above.
+To grant the permissions, in the portal area for the Stream Analytics job (not the Automation page), in **Access control (IAM)**, add a role assignment to the role **Contributor** for a member of type **Managed Identity**. Select the name of the Automation account from earlier.
-![Screenshot of IAM settings for the ASA job](./media/automation/function-asa-role.png)
+![Screenshot of access control settings for a Stream Analytics job.](./media/automation/function-asa-role.png)
-In the PowerShell script, we can add a check that ensures the managed identity is set properly (the final script is available [here](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/runbook.ps1))
+In the PowerShell script, you can add a check to ensure that the managed identity is set properly. (The final script is available on [GitHub](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/runbook.ps1).)
```PowerShell
-# Ensures you do not inherit an AzContext in your runbook
+# Ensure that you don't inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process | Out-Null
-# Connect using a Managed Service Identity
+# Connect by using a managed service identity
try { $AzureContext = (Connect-AzAccount -Identity).context }
catch{
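    # Sketch of a possible catch body (the published runbook's exact wording may differ):
    # surface a clear error so the failed run is visible in the job logs and alerts
    Write-Error -Message "asaRobotPause - Couldn't connect by using the managed identity: $($_.Exception.Message)"
    throw
}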
### Creating the runbook
-Once the configuration is done, we can create the specific runbook, inside the Automation Account, that will run our script. Here we don't need to add Az PowerShell as a requirement, it's already built in.
+After you finish the configuration, you can create the specific runbook inside the Automation account to run your script. Here, you don't need to add Azure PowerShell as a requirement. It's already built in.
-In the portal, under Process Automation, select `Runbooks`, then select `Create a runbook`, pick `PowerShell` as the runbook type and any version above `7` as the version (at the moment `7.1 (preview)`).
+In the portal, under **Process Automation**, select **Runbooks**. Then select **Create a runbook**, select **PowerShell** as the runbook type, and choose any version above **7** as the version (at the moment, **7.1 (preview)**).
-We can now paste our script and test it. The full script can be copied from [here](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/runbook.ps1), the business logic has been moved into a TRY/CATCH statement to generate proper errors if anything fails during processing.
+You can now paste your script and test it. You can copy the full script from [GitHub](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/Automation/Auto-pause/runbook.ps1). The business logic was moved into a try/catch statement to generate proper errors if anything fails during processing.
-![Screenshot of the runbook script editor in Azure Automation](./media/automation/automation-code.png)
+![Screenshot of the runbook script editor in Azure Automation.](./media/automation/automation-code.png)
-We can check that everything is wired properly in the `Test Pane`.
+You can check that everything is wired properly on the **Test pane**.
-After that we need to `Publish` the job, which will allow us to link the runbook to a schedule. Creating and linking the schedule is a straightforward process that won't be discussed here. Now is a good time to remember that there are [workarounds](../automation/shared-resources/schedules.md#schedule-runbooks-to-run-more-frequently) to achieve schedule intervals under 1 hour.
+After that, you need to publish the runbook (by selecting **Publish**) so that you can link it to a schedule. Creating and linking the schedule is a straightforward process. Now is a good time to remember that there are [workarounds](../automation/shared-resources/schedules.md#schedule-runbooks-to-run-more-frequently) to achieve schedule intervals under 1 hour.
-Finally, we can set up an alert. The first step is to enable logs via the [Diagnostic settings](../azure-monitor/essentials/create-diagnostic-settings.md?tabs=cli) of the Automation Account. The second step is to capture errors via a query like we did for Functions.
+Finally, you can set up an alert. The first step is to enable logs by using the [diagnostic settings](../azure-monitor/essentials/create-diagnostic-settings.md?tabs=cli) of the Automation account. The second step is to capture errors by using a query like you did for Functions.
## Outcome
-Looking at our ASA job, we can see that everything is running as expected in two places.
+In your Stream Analytics job, you can verify that everything is running as expected in two places.
-In the Activity Log:
+Here's the activity log:
-![Screenshot of the logs of the ASA job](./media/automation/asa-logs.png)
+![Screenshot of the logs of the Stream Analytics job.](./media/automation/asa-logs.png)
-And via its Metrics:
+And here are the metrics:
-![Screenshot of the metrics of the ASA job](./media/automation/asa-metrics.png)
+![Screenshot of the metrics of the Stream Analytics job.](./media/automation/asa-metrics.png)
-Once the script is understood, it's straightforward to rework it to extend its scope. It can easily be updated to target a list of jobs instead of a single one. Larger scopes can be defined and processed via tags, resource groups, or even entire subscriptions.
+After you understand the script, reworking it to extend its scope is a straightforward task. You can easily update the script to target a list of jobs instead of a single one. You can define and process larger scopes by using tags, resource groups, or even entire subscriptions.
## Get support
-For further assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics).
+For further assistance, try the [Microsoft Q&A page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics).
## Next steps
-You've learned the basics of using PowerShell to automate the management of Azure Stream Analytics jobs. To learn more, see the following articles:
+You learned the basics of using PowerShell to automate the management of Azure Stream Analytics jobs. To learn more, see the following articles:
- [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)-- [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)-- [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)-- [Azure Stream Analytics Management .NET SDK](/previous-versions/azure/dn889315(v=azure.100))-- [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)-- [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
+- [Analyze fraudulent call data with Stream Analytics and visualize results in a Power BI dashboard](stream-analytics-real-time-fraud-detection.md)
+- [Scale an Azure Stream Analytics job to increase throughput](stream-analytics-scale-jobs.md)
+- [Azure Stream Analytics Query Language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+- [Azure Stream Analytics Management REST API](/rest/api/streamanalytics/)
stream-analytics Stream Analytics How To Configure Azure Machine Learning Endpoints In Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-how-to-configure-azure-machine-learning-endpoints-in-stream-analytics.md
Title: Use Machine Learning Studio (classic) endpoints in Azure Stream Analytics
-description: This article describes how to use Machine Language user defined functions in Azure Stream Analytics.
+description: This article describes how to use Machine Learning user-defined functions in Azure Stream Analytics.
Last updated 06/11/2019
Last updated 06/11/2019
[!INCLUDE [ML Studio (classic) retirement](../../includes/machine-learning-studio-classic-deprecation.md)]
-Stream Analytics supports user-defined functions that call out to Machine Learning Studio (classic) endpoints. REST API support for this feature is detailed in the [Stream Analytics REST API library](/rest/api/streamanalytics/). This article provides supplemental information needed for successful implementation of this capability in Stream Analytics. A tutorial has also been posted and is available [here](stream-analytics-machine-learning-integration-tutorial.md).
+Azure Stream Analytics supports user-defined functions (UDFs) that call out to Azure Machine Learning Studio (classic) endpoints. The [Stream Analytics REST API library](/rest/api/streamanalytics/) describes REST API support for this feature.
+
+This article provides supplemental information that you need for successful implementation of this capability in Stream Analytics. A [tutorial](stream-analytics-machine-learning-integration-tutorial.md) is also available.
## Overview: Machine Learning Studio (classic) terminology
-Microsoft Machine Learning Studio (classic) provides a collaborative, drag-and-drop tool you can use to build, test, and deploy predictive analytics solutions on your data. This tool is called *Machine Learning Studio (classic)*. Studio (classic) is used to interact with the machine learning resources and easily build, test, and iterate on your design. These resources and their definitions are below.
-* **Workspace**: The *workspace* is a container that holds all other machine learning resources together in a container for management and control.
-* **Experiment**: *Experiments* are created by data scientists to utilize datasets and train a machine learning model.
-* **Endpoint**: *Endpoints* are the Studio (classic) object used to take features as input, apply a specified machine learning model and return scored output.
-* **Scoring Webservice**: A *scoring webservice* is a collection of endpoints as mentioned above.
+Machine Learning Studio (classic) provides a collaborative, drag-and-drop tool that you can use to build, test, and deploy predictive analytics solutions on your data. You can use Machine Learning Studio (classic) to interact with these machine learning resources:
+
+* **Workspace**: A container that holds all other machine learning resources together for management and control.
+* **Experiment**: A test that data scientists create to utilize datasets and train a machine learning model.
+* **Endpoint**: An object that you use to take features as input, apply a specified machine learning model, and return scored output.
+* **Scoring web service**: A collection of endpoints.
+
+Each endpoint has APIs for batch execution and synchronous execution. Stream Analytics uses synchronous execution. The specific service is called a [request/response service](../machine-learning/classic/consume-web-services.md) in Machine Learning Studio (classic).
+
+## Machine Learning Studio (classic) resources needed for Stream Analytics jobs
-Each endpoint has apis for batch execution and synchronous execution. Stream Analytics uses synchronous execution. The specific service is named a [Request/Response Service](../machine-learning/classic/consume-web-services.md) in Machine Learning Studio (classic).
+For the purposes of Stream Analytics job processing, a request/response endpoint, an [API key](../machine-learning/classic/consume-web-services.md), and a Swagger definition are all necessary for successful execution. Stream Analytics has an additional endpoint that constructs the URL for a Swagger endpoint, looks up the interface, and returns a default UDF definition to the user.
-## Studio (classic) resources needed for Stream Analytics jobs
-For the purposes of Stream Analytics job processing, a Request/Response endpoint, an [apikey](../machine-learning/classic/consume-web-services.md), and a swagger definition are all necessary for successful execution. Stream Analytics has an additional endpoint that constructs the url for swagger endpoint, looks up the interface and returns a default UDF definition to the user.
+## Configure a Stream Analytics and Machine Learning Studio (classic) UDF via REST API
-## Configure a Stream Analytics and Studio (classic) UDF via REST API
-By using REST APIs you may configure your job to call Studio (classic) functions. The steps are as follows:
+By using REST APIs, you can configure your job to call Machine Learning Studio (classic) functions:
-1. Create a Stream Analytics job
-2. Define an input
-3. Define an output
-4. Create a user-defined function (UDF)
-5. Write a Stream Analytics transformation that calls the UDF
-6. Start the job
+1. Create a Stream Analytics job.
+2. Define an input.
+3. Define an output.
+4. Create a UDF.
+5. Write a Stream Analytics transformation that calls the UDF.
+6. Start the job.
-## Creating a UDF with basic properties
-As an example, the following sample code creates a scalar UDF named *newudf* that binds to an Machine Learning Studio (classic) endpoint. Note that the *endpoint* (service URI) can be found on the API help page for the chosen service and the *apiKey* can be found on the Services main page.
+## Create a UDF with basic properties
+
+As an example, the following sample code creates a scalar UDF named *newudf* that binds to a Machine Learning Studio (classic) endpoint. You can find the `endpoint` value (service URI) on the API help page for the chosen service. You can find the `apiKey` value on the service's main page.
```
- PUT : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.StreamAnalytics/streamingjobs/<streamingjobName>/functions/<udfName>?api-version=<apiVersion>
+PUT : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.StreamAnalytics/streamingjobs/<streamingjobName>/functions/<udfName>?api-version=<apiVersion>
``` Example request body:
Example request body:
} ```
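If you'd rather drive this API from Azure PowerShell than craft raw HTTP calls, a call along the following lines should work. This sketch isn't part of the original article; `$body` stands in for the example request body shown above:

```PowerShell
# Sketch: issue the PUT request shown above by using Invoke-AzRestMethod (Az.Accounts module)
$path = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.StreamAnalytics/streamingjobs/<streamingjobName>/functions/<udfName>?api-version=<apiVersion>"
Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```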
-## Call RetrieveDefaultDefinition endpoint for default UDF
-Once the skeleton UDF is created the complete definition of the UDF is needed. The RetrieveDefaultDefinition endpoint helps you get the default definition for a scalar function that is bound to an Machine Learning Studio (classic) endpoint. The payload below requires you to get the default UDF definition for a scalar function that is bound to a Studio (classic) endpoint. It doesn't specify the actual endpoint as it has already been provided during PUT request. Stream Analytics calls the endpoint provided in the request if it is provided explicitly. Otherwise it uses the one originally referenced. Here the UDF takes a single string parameter (a sentence) and returns a single output of type string which indicates the "sentiment" label for that sentence.
+## Call the RetrieveDefaultDefinition endpoint for the default UDF
+
+After you create the skeleton UDF, you need the complete definition of the UDF. The `RetrieveDefaultDefinition` endpoint helps you get the default definition for a scalar function that's bound to a Machine Learning Studio (classic) endpoint.
+
+The following payload requests the default UDF definition for a scalar function that's bound to a Studio (classic) endpoint. It doesn't specify the actual endpoint, because the `PUT` request already provided it.
+
+Stream Analytics calls the endpoint from the request, if the request explicitly provided an endpoint. Otherwise, Stream Analytics uses the endpoint that was originally referenced. Here, the UDF takes a single string parameter (a sentence) and returns a single output of type `string` that indicates the `Sentiment` label for that sentence.
``` POST : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.StreamAnalytics/streamingjobs/<streamingjobName>/functions/<udfName>/RetrieveDefaultDefinition?api-version=<apiVersion>
Example request body:
} ```
-A sample output of this would look something like below.
+The output of this request looks something like the following example:
```json {
A sample output of this would look something like below.
} ```
-## Patch UDF with the response
-Now the UDF must be patched with the previous response, as shown below.
+## Patch the UDF with the response
+
+Now, you must patch the UDF with the previous response.
``` PATCH : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.StreamAnalytics/streamingjobs/<streamingjobName>/functions/<udfName>?api-version=<apiVersion> ```
-Request Body (Output from RetrieveDefaultDefinition):
+Request body (output from `RetrieveDefaultDefinition`):
```json {
Request Body (Output from RetrieveDefaultDefinition):
} ```
-## Implement Stream Analytics transformation to call the UDF
-Now query the UDF (here named scoreTweet) for every input event and write a response for that event to an output.
+## Implement a Stream Analytics transformation to call the UDF
+
+Query the UDF (here named `scoreTweet`) for every input event, and write a response for that event to an output:
```json {
Now query the UDF (here named scoreTweet) for every input event and write a resp
} ``` - ## Get help
-For further assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics)
+
+For further assistance, try the [Microsoft Q&A page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics).
## Next steps+ * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
-* [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
-* [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
+* [Analyze fraudulent call data with Stream Analytics and visualize results in a Power BI dashboard](stream-analytics-real-time-fraud-detection.md)
+* [Scale an Azure Stream Analytics job to increase throughput](stream-analytics-scale-jobs.md)
+* [Azure Stream Analytics Query Language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Azure Stream Analytics Management REST API](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Real Time Event Processing Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-real-time-event-processing-reference-architecture.md
Title: Real-time event processing using Azure Stream Analytics
-description: This article describes the reference architecture to achieve real-time event processing and analytics using Azure Stream Analytics.
+ Title: Real-time event processing with Azure Stream Analytics
+description: This article describes the reference architecture to achieve real-time event processing and analytics by using Azure Stream Analytics.
Last updated 01/24/2017 # Reference architecture: Real-time event processing with Microsoft Azure Stream Analytics
-The reference architecture for real-time event processing with Azure Stream Analytics is intended to provide a generic blueprint for deploying a real-time platform as a service (PaaS) stream-processing solution with Microsoft Azure.
+
+The reference architecture for real-time event processing with Azure Stream Analytics provides a generic blueprint for deploying a real-time platform as a service (PaaS) stream-processing solution by using Microsoft Azure.
## Summary
-Traditionally, analytics solutions have been based on capabilities such as ETL (extract, transform, load) and data warehousing, where data is stored prior to analysis. Changing requirements, including more rapidly arriving data, are pushing this existing model to the limit. The ability to analyze data within moving streams prior to storage is one solution, and while it is not a new capability, the approach has not been widely adopted across all industry verticals.
-Microsoft Azure provides an extensive catalog of analytics technologies that are capable of supporting an array of different solution scenarios and requirements. Selecting which Azure services to deploy for an end-to-end solution can be a challenge given the breadth of offerings. This paper is designed to describe the capabilities and interoperation of the various Azure services that support an event-streaming solution. It also explains some of the scenarios in which customers can benefit from this type of approach.
+Traditionally, analytics solutions are based on capabilities such as ETL (extract, transform, load) and data warehousing, where data is stored before analysis. Changing requirements, including more rapidly arriving data, are pushing this existing model to the limit.
+
+The ability to analyze data within moving streams before storage is one solution. Although this approach isn't new, it hasn't been widely adopted across industry verticals.
+
+Microsoft Azure provides an extensive catalog of analytics technologies that can support an array of solution scenarios and requirements. Selecting which Azure services to deploy for an end-to-end solution can be a challenge, considering the breadth of offerings.
+
+This reference describes the capabilities and interoperation of the Azure services that support an event-streaming solution. It also explains some of the scenarios in which customers can benefit from this type of approach.
## Contents
-* Executive Summary
-* Introduction to Real-Time Analytics
-* Value Proposition of Real-Time Data in Azure
-* Common Scenarios for Real-Time Analytics
-* Architecture and Components
- * Data Sources
- * Data-Integration Layer
- * Real-time Analytics Layer
- * Data Storage Layer
- * Presentation / Consumption Layer
+
+* Executive summary
+* Introduction to real-time analytics
+* Value proposition of real-time data in Azure
+* Common scenarios for real-time analytics
+* Architecture and components
+ * Data sources
+ * Data integration layer
+ * Real-time analytics layer
+ * Data storage layer
+ * Presentation/consumption layer
* Conclusion **
Microsoft Azure provides an extensive catalog of analytics technologies that are
**Download:** [Real-Time Event Processing with Microsoft Azure Stream Analytics](https://download.microsoft.com/download/6/2/3/623924DE-B083-4561-9624-C1AB62B5F82B/real-time-event-processing-with-microsoft-azure-stream-analytics.pdf) ## Get help
-For further assistance, try the [Microsoft Q&A question page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics)
+
+For further assistance, try the [Microsoft Q&A page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics).
## Next steps+ * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
-* [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
-* [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
+* [Analyze fraudulent call data with Stream Analytics and visualize results in a Power BI dashboard](stream-analytics-real-time-fraud-detection.md)
+* [Scale an Azure Stream Analytics job to increase throughput](stream-analytics-scale-jobs.md)
+* [Azure Stream Analytics Query Language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Azure Stream Analytics Management REST API](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Troubleshoot Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-troubleshoot-input.md
Title: Troubleshooting Inputs for Azure Stream Analytics
+ Title: Troubleshooting inputs for Azure Stream Analytics
description: This article describes techniques to troubleshoot your input connections in Azure Stream Analytics jobs.
Last updated 12/15/2023
# Troubleshoot input connections
-This article describes common issues with Azure Stream Analytics input connections, how to troubleshoot input issues, and how to correct the issues. Many troubleshooting steps require resource logs to be enabled for your Stream Analytics job. If you do not have resource logs enabled, see [Troubleshoot Azure Stream Analytics by using resource logs](stream-analytics-job-diagnostic-logs.md).
+This article describes common problems with Azure Stream Analytics input connections, how to troubleshoot those problems, and how to correct them.
-## Input events not received by job
+Many troubleshooting steps require you to turn on resource logs for your Stream Analytics job. If you don't have resource logs turned on, see [Troubleshoot Azure Stream Analytics by using resource logs](stream-analytics-job-diagnostic-logs.md).
-1. Test your input and output connectivity. Verify connectivity to inputs and outputs by using the **Test Connection** button for each input and output.
+## Job doesn't receive input events
-2. Examine your input data.
+1. Verify your connectivity to inputs and outputs. Use the **Test Connection** button for each input and output.
+
+2. Examine your input data:
+
+ 1. Use the [Sample Data](./stream-analytics-test-query.md) button for each input. Download the input sample data.
- 1. Use the [**Sample Data**](./stream-analytics-test-query.md) button for each input. Download the input sample data.
-
1. Inspect the sample data to understand the schema and [data types](/stream-analytics-query/data-types-azure-stream-analytics).
-
- 1. Check [Event Hub metrics](../event-hubs/event-hubs-metrics-azure-monitor.md) to ensure events are being sent. Message metrics should be greater than zero if Event Hubs is receiving messages.
-3. Ensure that you have selected a time range in the input preview. Choose **Select time range**, and then enter a sample duration before testing your query.
+ 1. Check [Azure Event Hubs metrics](../event-hubs/event-hubs-metrics-azure-monitor.md) to ensure that events are being sent. Message metrics should be greater than zero if Event Hubs is receiving messages.
+
+3. Ensure that you selected a time range in the input preview. Choose **Select time range**, and then enter a sample duration before testing your query.
+
+> [!IMPORTANT]
+> For [Azure Stream Analytics jobs](./run-job-in-virtual-network.md) that aren't network injected, don't rely on the source IP address of connections coming from Stream Analytics in any way. They can be public or private IPs, depending on service infrastructure operations that happen from time to time.
- > [!IMPORTANT]
- > For non-[network injected ASA jobs](./run-job-in-virtual-network.md), please do not rely on source IP address of connections coming from ASA in any way. They can be public or private IPs depending on service infrastructure operations that happen from time to time.
-
+## Malformed input events cause deserialization errors
-## Malformed input events causes deserialization errors
+Deserialization problems happen when the input stream of your Stream Analytics job contains malformed messages. For example, a missing parenthesis or brace in a JSON object, or an incorrect time-stamp format in the time field, can cause a malformed message.
-Deserialization issues are caused when the input stream of your Stream Analytics job contains malformed messages. For example, a malformed message could be caused by a missing parenthesis, or brace, in a JSON object or an incorrect timestamp format in the time field.
-
-When a Stream Analytics job receives a malformed message from an input, it drops the message and notifies you with a warning. A warning symbol is shown on the **Inputs** tile of your Stream Analytics job. The following warning symbol exists as long as the job is in running state:
+When a Stream Analytics job receives a malformed message from an input, it drops the message and notifies you with a warning. A warning symbol appears on the **Inputs** tile of your Stream Analytics job. The warning symbol exists as long as the job is in a running state.
-![Azure Stream Analytics inputs tile](media/stream-analytics-malformed-events/stream-analytics-inputs-tile.png)
+![Screenshot that shows the Inputs tile for Azure Stream Analytics.](media/stream-analytics-malformed-events/stream-analytics-inputs-tile.png)
-Enable resource logs to view the details of the error and the message (payload) that caused the error. There are multiple reasons why deserialization errors can occur. For more information regarding specific deserialization errors, see [Input data errors](data-errors.md#input-data-errors). If resource logs are not enabled, a brief notification will be available in the Azure portal.
+Turn on resource logs to view the details of the error and the message (payload) that caused the error. There are multiple reasons why deserialization errors can occur. For more information about specific deserialization errors, see [Input data errors](data-errors.md#input-data-errors). If resource logs aren't turned on, a brief notification appears in the Azure portal.
-![Input details warning notification](media/stream-analytics-malformed-events/warning-message-with-offset.png)
+![Screenshot that shows a warning notification about input details.](media/stream-analytics-malformed-events/warning-message-with-offset.png)
-In cases where the message payload is greater than 32 KB or is in binary format, run the CheckMalformedEvents.cs code available in the [GitHub samples repository](https://github.com/Azure/azure-stream-analytics/tree/master/Samples/CheckMalformedEventsEH). This code reads the partition ID, offset, and prints the data that's located in that offset.
+If the message payload is greater than 32 KB or is in binary format, run the *CheckMalformedEvents.cs* code available in the [GitHub samples repository](https://github.com/Azure/azure-stream-analytics/tree/master/Samples/CheckMalformedEventsEH). This code reads the partition ID and offset, and prints the data located at that offset.
-Other common reasons that result in input deserialization errors are:
-1. Integer column having a value greater than 9223372036854775807.
-2. Strings instead of array of objects or line separated objects. Valid example : *[{'a':1}]*. Invalid example : *"'a' :1"*.
-3. Using Event Hubs capture blob in Avro format as input in your job.
-4. Having two columns in a single input event that differ only in case. Example: *column1* and *COLUMN1*.
+Other common reasons for input deserialization errors are:
+
+* An integer column that has a value greater than `9223372036854775807`.
+* Strings instead of an array of objects or line-separated objects. Valid example: `[{'a':1}]`. Invalid example: `"'a' :1"`.
+* Using an Event Hubs capture blob in Avro format as input in your job.
+* Having two columns in a single input event that differ only in case. Example: `column1` and `COLUMN1`.
## Partition count changes
-Partition count of Event Hubs can be changed. The Stream Analytics job needs to be stopped and started again if the partition count of the event hub is changed.
-The following errors are shown when the partition count of the event hub is changed when the job is running.
-Microsoft.Streaming.Diagnostics.Exceptions.InputPartitioningChangedException
+The partition count of event hubs can be changed. If the partition count of an event hub is changed, you need to stop and restart the Stream Analytics job.
+
+The following error appears when the partition count of an event hub is changed while the job is running: `Microsoft.Streaming.Diagnostics.Exceptions.InputPartitioningChangedException`.
-## Job exceeds maximum Event Hubs receivers
+## Job exceeds the maximum Event Hubs receivers
-A best practice for using Event Hubs is to use multiple consumer groups for job scalability. The number of readers in the Stream Analytics job for a specific input affects the number of readers in a single consumer group. The precise number of receivers is based on internal implementation details for the scale-out topology logic and is not exposed externally. The number of readers can change when a job is started or during job upgrades.
+A best practice for using Event Hubs is to use multiple consumer groups for job scalability. The number of readers in the Stream Analytics job for a specific input affects the number of readers in a single consumer group.
-The following error messages are shown when the number of receivers exceeds the maximum. The error message includes a list of existing connections made to Event Hubs under a consumer group. The tag `AzureStreamAnalytics` indicates that the connections are from Azure Streaming Service.
+The precise number of receivers is based on internal implementation details for the scale-out topology logic. The number isn't exposed externally. The number of readers can change when a job is started or upgraded.
+
+The following error message appears when the number of receivers exceeds the maximum. The message includes a list of existing connections made to Event Hubs under a consumer group. The tag `AzureStreamAnalytics` indicates that the connections are from an Azure streaming service.
``` The streaming job failed: Stream Analytics job has validation errors: Job will exceed the maximum amount of Event Hubs Receivers.
AzureStreamAnalytics_c4b65e4a-f572-4cfc-b4e2-cf237f43c6f0_1.
``` > [!NOTE]
-> When the number of readers changes during a job upgrade, transient warnings are written to audit logs. Stream Analytics jobs automatically recover from these transient issues.
-
-### Add a consumer group in Event Hubs
+> When the number of readers changes during a job upgrade, transient warnings are written to audit logs. Stream Analytics jobs automatically recover from these transient problems.
To add a new consumer group in your Event Hubs instance, follow these steps: 1. Sign in to the Azure portal.
-2. Locate your Event Hub.
+2. Locate your event hub.
-3. Select **Event Hubs** under the **Entities** heading.
+3. Under the **Entities** heading, select **Event Hubs**.
4. Select the event hub by name.
-5. On the **Event Hubs Instance** page, under the **Entities** heading, select **Consumer groups**. A consumer group with name **$Default** is listed.
+5. On the **Event Hubs Instance** page, under the **Entities** heading, select **Consumer groups**. A consumer group with the name **$Default** is listed.
-6. Select **+ Consumer Group** to add a new consumer group.
+6. Select **+ Consumer Group** to add a new consumer group.
- ![Add a consumer group in Event Hubs](media/stream-analytics-event-hub-consumer-groups/new-eh-consumer-group.png)
+ ![Screenshot that shows the button for adding a consumer group in Event Hubs.](media/stream-analytics-event-hub-consumer-groups/new-eh-consumer-group.png)
-7. When you created the input in the Stream Analytics job to point to the Event Hub, you specified the consumer group there. **$Default** is used when none is specified. Once you create a new consumer group, edit the event hub input in the Stream Analytics job and specify the name of the new consumer group.
+7. When you created the input in the Stream Analytics job to point to the event hub, you specified the consumer group there. Event Hubs uses **$Default** if no consumer group is specified. After you create a consumer group, edit the event hub input in the Stream Analytics job and specify the name of the new consumer group.
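If you'd rather script this step, Azure PowerShell can create the consumer group too. The following is a sketch with placeholder names; parameter names can vary slightly across Az.EventHub versions:

```PowerShell
# Sketch: create a consumer group on the event hub that the Stream Analytics input reads from
New-AzEventHubConsumerGroup -ResourceGroupName "<resource group>" `
    -NamespaceName "<Event Hubs namespace>" `
    -EventHubName "<event hub name>" `
    -Name "<new consumer group name>"
```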
-## Readers per partition exceeds Event Hubs limit
+## Readers per partition exceed the Event Hubs limit
-If your streaming query syntax references the same input event hub resource multiple times, the job engine can use multiple readers per query from that same consumer group. When there are too many references to the same consumer group, the job can exceed the limit of five and thrown an error. In those circumstances, you can further divide by using multiple inputs across multiple consumer groups using the solution described in the following section.
+If your streaming query syntax references the same resource for event hub input multiple times, the job engine can use multiple readers per query from that same consumer group. When there are too many references to the same consumer group, the job can exceed the limit of five and throw an error. In those circumstances, you can spread the references across multiple inputs that use separate consumer groups, as described in the following sections.
-Scenarios in which the number of readers per partition exceeds the Event Hubs limit of five include the following:
+Scenarios in which the number of readers per partition exceeds the Event Hubs limit of five include:
-* Multiple SELECT statements: If you use multiple SELECT statements that refer to **same** event hub input, each SELECT statement causes a new receiver to be created.
+* Multiple `SELECT` statements: If you use multiple `SELECT` statements that refer to the *same* event hub input, each `SELECT` statement causes a new receiver to be created.
-* UNION: When you use a UNION, it's possible to have multiple inputs that refer to the **same** event hub and consumer group.
+* `UNION`: When you use `UNION`, it's possible to have multiple inputs that refer to the *same* event hub and consumer group.
-* SELF JOIN: When you use a SELF JOIN operation, it's possible to refer to the **same** event hub multiple times.
+* `SELF JOIN`: When you use a `SELF JOIN` operation, it's possible to refer to the *same* event hub multiple times.
The following best practices can help mitigate scenarios in which the number of readers per partition exceeds the Event Hubs limit of five. ### Split your query into multiple steps by using a WITH clause
-The WITH clause specifies a temporary named result set that can be referenced by a FROM clause in the query. You define the WITH clause in the execution scope of a single SELECT statement.
+The `WITH` clause specifies a temporary named result set that a `FROM` clause in the query can reference. You define the `WITH` clause in the execution scope of a single `SELECT` statement.
For example, instead of this query:
FROM data
### Ensure that inputs bind to different consumer groups
-For queries in which three or more inputs are connected to the same Event Hubs consumer group, create separate consumer groups. This requires the creation of additional Stream Analytics inputs.
+For queries in which three or more inputs are connected to the same Event Hubs consumer group, create separate consumer groups. This task requires the creation of additional Stream Analytics inputs.
### Create separate inputs with different consumer groups
-You can create separate inputs with different consumer groups for the same Event Hub. The following UNION query is an example where *InputOne* and *InputTwo* refer to the same Event Hubs source. Any query can have separate inputs with different consumer groups. The UNION query is only one example.
+You can create separate inputs with different consumer groups for the same event hub. In the following example of a `UNION` query, *InputOne* and *InputTwo* refer to the same Event Hubs source. Any query can have separate inputs with different consumer groups. The `UNION` query is only one example.
```sql WITH
SELECT foo FROM DataTwo
```
-## Readers per partition exceeds IoT Hub limit
+## Readers per partition exceed the IoT Hub limit
-Stream Analytics jobs use IoT Hub's built-in [Event Hub compatible endpoint](../iot-hub/iot-hub-devguide-messages-read-builtin.md) to connect and read events from IoT Hub. If your read per partition exceeds the limits of IoT Hub, you can use the [solutions for Event Hub](#readers-per-partition-exceeds-event-hubs-limit) to resolve it. You can create a consumer group for the built-in endpoint through IoT Hub portal endpoint session or through the [IoT Hub SDK](/rest/api/iothub/IotHubResource/CreateEventHubConsumerGroup).
+Stream Analytics jobs use the built-in [Event Hubs-compatible endpoint](../iot-hub/iot-hub-devguide-messages-read-builtin.md) in Azure IoT Hub to connect and read events from IoT Hub. If your readers per partition exceed the limits of IoT Hub, you can use the [solutions for Event Hubs](#readers-per-partition-exceed-the-event-hubs-limit) to resolve it. You can create a consumer group for the built-in endpoint through the IoT Hub portal endpoint session or through the [IoT Hub SDK](/rest/api/iothub/IotHubResource/CreateEventHubConsumerGroup).
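The same idea applies to the IoT Hub built-in endpoint. As a sketch, assuming you use the Azure CLI rather than the portal or the SDK, you can add a consumer group with a command like the following (the hub and group names are placeholders):

```bash
# Add a consumer group to the IoT hub's built-in Event Hubs-compatible endpoint
# (names below are placeholders)
az iot hub consumer-group create \
    --hub-name myIoTHub \
    --name asa-job-consumer-group
```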
## Get help
-For further assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics).
+For further assistance, try the [Microsoft Q&A page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics).
## Next steps * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
-* [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
-* [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
+* [Analyze fraudulent call data with Stream Analytics and visualize results in a Power BI dashboard](stream-analytics-real-time-fraud-detection.md)
+* [Scale an Azure Stream Analytics job to increase throughput](stream-analytics-scale-jobs.md)
+* [Azure Stream Analytics Query Language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Azure Stream Analytics Management REST API](/rest/api/streamanalytics/)
synapse-analytics Quickstart Gallery Sample Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-gallery-sample-notebook.md
description: Learn how to use a sample notebook from the Synapse Analytics galle
Previously updated : 06/11/2021-- Last updated : 02/29/2024++
This notebook demonstrates the basic steps used in creating a model: **data impo
1. Open your workspace and select **Learn** from the home page. 1. In the **Knowledge center**, select **Browse gallery**. 1. In the gallery, select **Notebooks**.
-1. Find and select the notebook "Data Exploration and ML Modeling - NYC taxi predict using Spark MLib".
+1. Find and select a notebook from the gallery.
:::image type="content" source="media\quickstart-gallery-sample-notebook\gallery-select-ml-notebook.png" alt-text="Select the machine learning sample notebook in the gallery.":::
This notebook demonstrates the basic steps used in creating a model: **data impo
1. In the **Attach to** menu in the open notebook, select your Apache Spark pool.
-## Run the notebook
-
-The notebook is divided into multiple cells that each perform a specific function.
-You can manually run each cell, running cells sequentially, or select **Run all** to run all the cells.
-
-Here are descriptions for each of the cells in the notebook:
-
-1. Import PySpark functions that the notebook uses.
-1. **Ingest Date** - Ingest data from the Azure Open Dataset **NycTlcYellow** into a local dataframe for processing. The code extracts data within a specific time period - you can modify the start and end dates to get different data.
-1. Downsample the dataset to make development faster. You can modify this step to change the sample size or the sampling seed.
-1. **Exploratory Data Analysis** - Display charts to view the data. This can give you an idea what data prep might be needed before creating the model.
-1. **Data Prep and Featurization** - Filter out outlier data discovered through visualization and create some useful derived variables.
-1. **Data Prep and Featurization Part 2** - Drop unneeded columns and create some additional features.
-1. **Encoding** - Convert string variables to numbers that the Logistic Regression model is expecting.
-1. **Generation of Testing and Training Data Sets** - Split the data into separate testing and training data sets. You can modify the fraction and randomizing seed used to split the data.
-1. **Train the Model** - Train a Logistic Regression model and display its "Area under ROC" metric to see how well the model is working. This step also saves the trained model in case you want to use it elsewhere.
-1. **Evaluate and Visualize** - Plot the model's ROC curve to further evaluate the model.
- ## Save the notebook Save your notebook by selecting **Publish** on the workspace command bar.
synapse-analytics Quickstart Integrate Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md
Previously updated : 12/16/2021 Last updated : 02/29/2024
# Quickstart: Create a new Azure Machine Learning linked service in Synapse > **IMPORTANT, PLEASE NOTE THE BELOW LIMITATIONS:**
-> - **The Azure ML integration is not currently supported in Synapse Workspaces with Data Exfiltration Protection.** If you are **not** using data exfiltration protection and want to connect to Azure ML using private endpoints, you can set up a managed AzureML private endpoint in your Synapse workspace. [Read more about managed private endpoints](../security/how-to-create-managed-private-endpoints.md)
+> - **The Azure Machine Learning integration is not currently supported in Synapse Workspaces with Data Exfiltration Protection.** If you are **not** using data exfiltration protection and want to connect to Azure Machine Learning using private endpoints, you can set up a managed Azure Machine Learning private endpoint in your Synapse workspace. [Read more about managed private endpoints](../security/how-to-create-managed-private-endpoints.md)
> - **AzureML linked service is not supported with self-hosted integration runtimes.** This applies to Synapse workspaces with and without Data Exfiltration Protection.
+> - **The Azure Synapse Spark 3.3 and 3.4 runtimes do not support using the Azure Machine Learning Linked Service to authenticate to the Azure Machine Learning MLFlow tracking URI.** To learn more about the limitations on these runtimes, see [Azure Synapse Runtime for Apache Spark 3.3](../spark/apache-spark-33-runtime.md) and [Azure Synapse Runtime for Apache Spark 3.4](../spark/apache-spark-34-runtime.md).
In this quickstart, you'll link an Azure Synapse Analytics workspace to an Azure Machine Learning workspace. Linking these workspaces allows you to leverage Azure Machine Learning from various experiences in Synapse.
In the following sections, you'll find guidance on how to create an Azure Machin
This section will guide you on how to create an Azure Machine Learning linked service in Azure Synapse, using the [Azure Synapse workspace Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics)
-### Give MSI permission to the Azure ML workspace
+### Give MSI permission to the Azure Machine Learning workspace
1. Navigate to your Azure Machine Learning workspace resource in the Azure portal and select **Access Control** 1. Create a role assignment and add your Synapse workspace Managed Service identity (MSI) as a *contributor* of the Azure Machine Learning workspace. Note that this will require being an owner of the resource group that the Azure Machine Learning workspace belongs to. If you have trouble finding your Synapse workspace MSI, search for the name of the Synapse workspace.
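If you prefer to script the role assignment, the following Azure CLI sketch does the equivalent of the portal steps above. It assumes you have already looked up the object ID of the Synapse workspace MSI and the resource ID of the Azure Machine Learning workspace; both values below are placeholders:

```bash
# Grant the Synapse workspace managed identity Contributor rights
# on the Azure Machine Learning workspace (IDs are placeholders)
az role assignment create \
    --assignee "<synapse-workspace-msi-object-id>" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<aml-workspace-name>"
```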
-### Create an Azure ML linked service
+### Create an Azure Machine Learning linked service
1. In the Synapse workspace where you want to create the new Azure Machine Learning linked service, go to **Manage** > **Linked services**, and create a new linked service with type "Azure Machine Learning".
This step will create a new Service Principal. If you want to use an existing Se
![Assign contributor role](media/quickstart-integrate-azure-machine-learning/quickstart-integrate-azure-machine-learning-createsp-00c.png)
-### Create an Azure ML linked service
+### Create an Azure Machine Learning linked service
1. In the Synapse workspace where you want to create the new Azure Machine Learning linked service, go to **Manage** -> **Linked services**, create a new linked service with type "Azure Machine Learning".
synapse-analytics Apache Spark Data Visualization Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-data-visualization-tutorial.md
Previously updated : 10/20/2020 Last updated : 02/29/2024
Create an Apache Spark Pool by following the [Create an Apache Spark pool tutori
3. Because the raw data is in a Parquet format, you can use the Spark context to pull the file into memory as a DataFrame directly. Create a Spark DataFrame by retrieving the data via the Open Datasets API. Here, we use the Spark DataFrame *schema on read* properties to infer the datatypes and schema. ```python
- from azureml.opendatasets import NycTlcYellow
- from datetime import datetime
- from dateutil import parser
-
- end_date = parser.parse('2018-06-06')
- start_date = parser.parse('2018-05-01')
- nyc_tlc = NycTlcYellow(start_date=start_date, end_date=end_date)
- df = nyc_tlc.to_spark_dataframe()
+ from azureml.opendatasets import NycTlcYellow
+
+ from datetime import datetime
+ from dateutil import parser
+
+ end_date = parser.parse('2018-05-08 00:00:00')
+ start_date = parser.parse('2018-05-01 00:00:00')
+
+ nyc_tlc = NycTlcYellow(start_date=start_date, end_date=end_date)
+ filtered_df = spark.createDataFrame(nyc_tlc.to_pandas_dataframe())
+ ``` 4. After the data is read, we'll want to do some initial filtering to clean the dataset. We might remove unneeded columns and add columns that extract important information. In addition, we'll filter out anomalies within the dataset.
synapse-analytics Apache Spark External Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-external-metastore.md
Last updated 02/15/2022
# Use external Hive Metastore for Synapse Spark Pool
+> [!NOTE]
+> External Hive metastores will no longer be supported in Spark 3.4 and subsequent versions in Synapse.
+ Azure Synapse Analytics allows Apache Spark pools in the same workspace to share a managed HMS (Hive Metastore) compatible metastore as their catalog. When customers want to persist the Hive catalog metadata outside of the workspace, and share catalog objects with other computational engines outside of the workspace, such as HDInsight and Azure Databricks, they can connect to an external Hive Metastore. In this article, you can learn how to connect Synapse Spark to an external Apache Hive Metastore. ## Supported Hive Metastore versions
The feature works with Spark 3.1. The following table shows the supported Hive M
|2.4|Yes|Yes|Yes|Yes|No| |3.1|Yes|Yes|Yes|Yes|Yes| + ## Set up linked service to Hive Metastore > [!NOTE]
synapse-analytics Apache Spark Machine Learning Mllib Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook.md
Previously updated : 02/15/2022 Last updated : 02/29/2024
Because the raw data is in a Parquet format, you can use the Spark context to pu
```python from azureml.opendatasets import NycTlcYellow
- end_date = parser.parse('2018-06-06')
- start_date = parser.parse('2018-05-01')
+ from datetime import datetime
+ from dateutil import parser
+
+ end_date = parser.parse('2018-05-08 00:00:00')
+ start_date = parser.parse('2018-05-01 00:00:00')
+
nyc_tlc = NycTlcYellow(start_date=start_date, end_date=end_date)
- filtered_df = nyc_tlc.to_spark_dataframe()
+ filtered_df = spark.createDataFrame(nyc_tlc.to_pandas_dataframe())
+ ``` 2. The downside to simple filtering is that, from a statistical perspective, it might introduce bias into the data. Another approach is to use the sampling built into Spark.
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
Title: Azure Virtual Desktop for Azure Stack HCI
-description: Learn about using Azure Virtual Desktop for Azure Stack HCI to deploy session hosts where you need them.
+ Title: Azure Virtual Desktop with Azure Stack HCI
+description: Learn about using Azure Virtual Desktop with Azure Stack HCI, enabling you to deploy session hosts where you need them.
Last updated 01/24/2024
-# Azure Virtual Desktop for Azure Stack HCI
+# Azure Virtual Desktop with Azure Stack HCI
> [!IMPORTANT]
-> Azure Virtual Desktop for Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Virtual Desktop with Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-With Azure Virtual Desktop for Azure Stack HCI, you can deploy session hosts for Azure Virtual Desktop where you need them. If you already have an existing on-premises virtual desktop infrastructure (VDI) deployment, Azure Virtual Desktop for Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop on Azure, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
+Using Azure Virtual Desktop with Azure Stack HCI, you can deploy session hosts for Azure Virtual Desktop where you need them. If you already have an existing on-premises virtual desktop infrastructure (VDI) deployment, Azure Virtual Desktop with Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop with your session hosts in Azure, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
-Azure Virtual Desktop for Azure Stack HCI isn't an Azure Arc-enabled service. As such, it's not supported as a standalone service outside of Azure, in a multicloud environment, or on Azure Arc-enabled servers besides Azure Stack HCI virtual machines as described in this article.
+Azure Virtual Desktop service components, such as host pools, workspaces, and application groups, are all deployed in Azure, but you can choose to deploy session hosts on Azure Stack HCI. As Azure Virtual Desktop with Azure Stack HCI isn't an Azure Arc-enabled service, it's not supported as a standalone service outside of Azure, in a multicloud environment, or on other Azure Arc-enabled servers.
## Benefits
-With Azure Virtual Desktop for Azure Stack HCI, you can:
+Using Azure Virtual Desktop with Azure Stack HCI, you can:
- Improve performance for Azure Virtual Desktop users in areas with poor connectivity to the Azure public cloud by giving them session hosts closer to their location.
Finally, users can connect using the same [Remote Desktop clients](users/remote-
## Licensing and pricing
-To run Azure Virtual Desktop on Azure Stack HCI, you need to make sure you're licensed correctly and be aware of the pricing model. There are three components that affect how much it costs to run Azure Virtual Desktop for Azure Stack HCI:
+To run Azure Virtual Desktop with Azure Stack HCI, you need to make sure you're licensed correctly and be aware of the pricing model. There are three components that affect how much it costs to run Azure Virtual Desktop with Azure Stack HCI:
-- **User access rights.** The same licenses that grant access to Azure Virtual Desktop on Azure also apply to Azure Virtual Desktop for Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+- **User access rights.** The same licenses that grant access to Azure Virtual Desktop on Azure also apply to Azure Virtual Desktop with Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
- **Infrastructure costs.** Learn more at [Azure Stack HCI pricing](https://azure.microsoft.com/pricing/details/azure-stack/hci/).
There are different classifications of data for Azure Virtual Desktop, such as c
## Limitations
-Azure Virtual Desktop for Azure Stack HCI has the following limitations:
+Azure Virtual Desktop with Azure Stack HCI has the following limitations:
- You can't use some Azure Virtual Desktop features when session hosts are running on Azure Stack HCI, such as:
Azure Virtual Desktop for Azure Stack HCI has the following limitations:
## Next steps
-To learn how to deploy Azure Virtual Desktop for Azure Stack HCI, see [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md).
+To learn how to deploy Azure Virtual Desktop with Azure Stack HCI, see [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md).
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/licensing.md
Licensing Azure Virtual Desktop works differently for internal and external comm
> [!IMPORTANT] > Per-user access pricing can only be used for external commercial purposes, not internal purposes. Per-user access pricing isn't a way to enable guest user accounts with Azure Virtual Desktop. Check if your Azure Virtual Desktop solution is applicable for per-user access pricing by reviewing [our licensing documentation](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/EAEAS#Documents).
-## Eligible licenses for internal commercial purposes to use Azure Virtual Desktop
+## Eligible licenses to use Azure Virtual Desktop
-If you're providing Azure Virtual Desktop access for internal commercial purposes, you must purchase one of the following eligible licenses for each user that accesses Azure Virtual Desktop. The license you need also depends on whether you're using a Windows client operating system or a Windows Server operating system for your session hosts.
+You must provide an eligible license for each user that accesses Azure Virtual Desktop. The license you need also depends on whether you're using a Windows client operating system or a Windows Server operating system for your session hosts, and whether it's for internal or external commercial purposes. The following table shows the eligible licensing methods for each scenario:
[!INCLUDE [Operating systems and user access rights](includes/include-operating-systems-user-access-rights.md)]
-## Per-user access pricing for external commercial purposes to use Azure Virtual Desktop
+### Per-user access pricing for external commercial purposes to use Azure Virtual Desktop
Per-user access pricing lets you pay for Azure Virtual Desktop access rights for external commercial purposes. You must enroll in per-user access pricing to build a compliant deployment for external users.
Here's a summary of the two types of licenses for Azure Virtual Desktop you can
| Access rights | Internal purposes only. It doesn't grant permission for external commercial purposes, not even identities you create in your own Microsoft Entra tenant. | External commercial purposes only. It doesn't grant access to members of your own organization or contractors for internal business purposes. | | Billing | Licensing channels. | Pay-as-you-go through an Azure meter, billed to an Azure subscription. | | User behavior | Fixed cost per user each month regardless of user behavior. | Cost per user each month depends on user behavior. |
-| Other products | Dependent on the license. | Only includes access rights to Azure Virtual Desktop and [FSlogix](/fslogix/overview-what-is-fslogix).<br /><br />Per-user access pricing only supports Windows Enterprise and Windows Enterprise multi-session client operating systems for session hosts. Windows Server isn't supported with per-user access pricing. |
+| Other products | Dependent on the license. | Only includes access rights to Azure Virtual Desktop and [FSlogix](/fslogix/overview-what-is-fslogix). |
## Next steps
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You need to enter the following identity parameters when deploying session hosts
## Operating systems and licenses
-You have a choice of operating systems (OS) that you can use for session hosts to provide desktops and applications. You can use different operating systems with different host pools to provide flexibility to your users. We support the following 64-bit versions of these operating systems, where supported versions and dates are inline with the [Microsoft Lifecycle Policy](/lifecycle/).
+You have a choice of operating systems (OS) that you can use for session hosts to provide desktops and applications. You can use different operating systems with different host pools to provide flexibility to your users. We support the 64-bit operating systems and SKUs listed in the following table (where supported versions and dates are in line with the [Microsoft Lifecycle Policy](/lifecycle/)), along with the licensing methods applicable for each commercial purpose:
[!INCLUDE [Operating systems and user access rights](includes/include-operating-systems-user-access-rights.md)]
-To learn more, see about licenses you can use, including per-user access pricing, see [Licensing Azure Virtual Desktop](licensing.md).
+To learn more about licenses you can use, including per-user access pricing, see [Licensing Azure Virtual Desktop](licensing.md).
> [!IMPORTANT] > - The following items are not supported:
Consider the following points when managing session hosts:
## Azure regions
-You can deploy session hosts in any Azure region to use with Azure Virtual Desktop. For host pools, workspaces, and application groups, you can deploy them in the following Azure regions:
+You can deploy host pools, workspaces, and application groups in the following Azure regions. This list of regions is where the *metadata* for the host pool can be stored. However, session hosts for the user sessions can be located in any Azure region, and on-premises when using [Azure Virtual Desktop on Azure Stack HCI](azure-stack-hci-overview.md), enabling you to deploy compute resources close to your users. For more information about the types of data and locations, see [Data locations for Azure Virtual Desktop](data-locations.md).
:::row::: :::column:::
You can deploy session hosts in any Azure region to use with Azure Virtual Deskt
:::column-end::: :::row-end:::
-This list of regions is where the *metadata* for the host pool can be stored. However, session hosts can be located in any Azure region, and on-premises when using [Azure Virtual Desktop on Azure Stack HCI](azure-stack-hci-overview.md). For more information about the types of data and locations, see [Data locations for Azure Virtual Desktop](data-locations.md). Azure Virtual Desktop is also available in sovereign clouds, such as [Azure for US Government](https://azure.microsoft.com/explore/global-infrastructure/government/) and [Azure operated by 21Vianet](https://docs.azure.cn/virtual-desktop/) in China.
--
+Azure Virtual Desktop is also available in sovereign clouds, such as [Azure for US Government](https://azure.microsoft.com/explore/global-infrastructure/government/) and [Azure operated by 21Vianet](https://docs.azure.cn/virtual-desktop/) in China.
To learn more about the architecture and resilience of the Azure Virtual Desktop service, see [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Previously updated : 02/01/2023 Last updated : 03/01/2024 # What's new in Azure Virtual Desktop?
Make sure to check back here often to keep up with new updates.
> [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop.
+## February 2024
+
+Here's what changed in February 2024:
+
+### Azure Virtual Desktop for Azure Stack HCI now generally available
+
+Azure Virtual Desktop for Azure Stack HCI extends the capabilities of the Microsoft Cloud to your datacenters. Bringing the benefits of Azure Virtual Desktop and Azure Stack HCI together, organizations can securely run virtualized desktops and apps on-premises in their datacenter and at the edges of their organization. This versatility is especially useful for organizations with data residency and proximity requirements or latency-sensitive workloads.
+
+For more information, see [Azure Virtual Desktop for Azure Stack HCI now available!](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/azure-virtual-desktop-for-azure-stack-hci-now-available/ba-p/4038030)
+
+### Azure Virtual Desktop web client version 2 is now available
+
+The Azure Virtual Desktop web client has now been updated to web client version 2. All users automatically migrate to this new version of the web client to access their resources.
+
+For more information about the new features available in version 2, see [Use features of the Remote Desktop Web client](./users/client-features-web.md).
+ ## January 2024 There were no major releases or new features in January 2024.
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
Use Version 2 for new and existing deployments. The new version is a drop-in rep
| Azure Linux | 2.x | 2.x | | openSUSE | 12.3+ | Not Supported | | Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
-| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+, 9.x+ | 8.6+, 9.x+ |
| Rocky Linux | 9.x+ | 9.x+ | | SLES | 12.x+, 15.x+ | 15.x SP4+ | | Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
virtual-machines Salt Minion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/salt-minion.md
+
+ Title: Salt Minion for Linux or Windows Azure VMs
+description: Install Salt Minion on Linux or Windows VMs using the VM Extension.
+++++ Last updated : 01/24/2024+
+# Install Salt Minion on Linux or Windows VMs using the VM Extension
+
+## Prerequisites
+
+* A Microsoft Azure account with one (or more) Windows or Linux VMs
+* A Salt Master (either on-premises or in a cloud) that can accept connections from Salt minions hosted on Azure
+* The Salt Minion VM Extension requires that the target VM is connected to the internet in order to fetch Salt packages
+
+## Supported platforms
+
+An Azure VM running any of the following supported operating systems:
+
+* Ubuntu 20.04, 22.04 (x86_64)
+* Debian 10, 11 (x86_64)
+* Oracle Linux 7, 8, 9 (x86_64)
+* RHEL 7, 8, 9 (x86_64)
+* Microsoft Windows 10, 11 Pro (x86_64)
+* Microsoft Windows Server 2012 R2, 2016, 2019, 2022 Datacenter (x86_64)
+
+If you want another distro to be supported (assuming Salt [supports](https://docs.saltproject.io/salt/install-guide/en/latest/topics/salt-supported-operating-systems.html) it), you can file an issue on [GitLab](https://gitlab.com/turtletraction-oss/azure-salt-vm-extensions/-/issues).
+
+## Supported Salt Minion versions
+
+* 3006 and up (onedir)
+
+## Extension details
+
+* Publisher name: `turtletraction.oss`
+* Linux extension name: `salt-minion.linux`
+* Windows extension name: `salt-minion.windows`
+
+## Salt Minion settings
+
+* `master_address` - Salt Master address to connect to (`localhost` by default)
+* `minion_id` - Minion ID (hostname by default)
+* `salt_version` - Salt Minion version to install, for example `3006.1` (`latest` by default)
+
+## Install Salt Minion using the Azure portal
+
+1. Select one of your VMs.
+2. In the left menu click **Extensions + applications**.
+3. Click **+ Add**.
+4. In the gallery, type **Salt Minion** in the search bar.
+5. Select the **Salt Minion** tile and click **Next**.
+6. Enter configuration parameters in the provided form (see [Salt Minion settings](#salt-minion-settings)).
+7. Click **Review + create**.
+
+## Install Salt Minion using the Azure CLI
+
+```shell
+az vm extension set --resource-group my-group --vm-name vm-ubuntu22 --name salt-minion.linux --publisher turtletraction.oss --settings '{"master_address": "10.x.x.x"}'
+az vm extension set --resource-group my-group --vm-name vm-windows11 --name salt-minion.windows --publisher turtletraction.oss --settings '{"master_address": "10.x.x.x"}'
+```
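To check that the extension deployed successfully, you can query it afterwards; this is a sketch that reuses the same placeholder resource group and VM names as the install commands above:

```shell
az vm extension show --resource-group my-group --vm-name vm-ubuntu22 --name salt-minion.linux --query provisioningState
```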
+
+To uninstall it:
+
+```shell
+az vm extension delete --resource-group my-group --vm-name vm-ubuntu22 --name salt-minion.linux
+az vm extension delete --resource-group my-group --vm-name vm-windows11 --name salt-minion.windows
+```
+
+## Install Salt Minion using an Azure Resource Manager (ARM) template
+
+```json
+{
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "type": "string"
+ },
+ "master_address": {
+ "type": "string"
+ },
+ "salt_version": {
+ "type": "string"
+ },
+ "minion_id": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "name": "[concat(parameters('vmName'),'/salt-minion.linux')]",
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "location": "[resourceGroup().location]",
+ "apiVersion": "2015-06-15",
+ "properties": {
+ "publisher": "turtletraction.oss",
+ "type": "salt-minion.linux",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "master_address": "[parameters('master_address')]",
+ "salt_version": "[parameters('salt_version')]",
+ "minion_id": "[parameters('minion_id')]"
+ }
+ }
+ }
+ ]
+}
+```
+
+## Install Salt Minion using Terraform
+
+Assuming that you have defined a VM resource in Terraform named `vm_ubuntu`, use something like the following to install the extension on it:
+
+```hcl
+resource "azurerm_virtual_machine_extension" "vmext_ubuntu" {
+ name = "vmext_ubuntu"
+ virtual_machine_id = azurerm_linux_virtual_machine.vm_ubuntu.id
+ publisher = "turtletraction.oss"
+ type = "salt-minion.linux"
+ type_handler_version = "1.0"
+
+ settings = <<SETTINGS
+{
+ "salt_version": "3006.1",
+ "master_address": "x.x.x.x",
+ "minion_id": "ubuntu22"
+}
+SETTINGS
+}
+```
+
+## Support
+
+* For commercial support or assistance with Salt, you can visit the extension creator, [TurtleTraction](https://turtletraction.com/salt-open-support)
+* The source code of this extension is available on [GitLab](https://gitlab.com/turtletraction-oss/azure-salt-vm-extensions/)
+* For Azure related issues, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support
+
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-cli.md
Title: 'Quickstart: Use the Azure CLI to create a Linux VM'
-description: In this quickstart, you learn how to use the Azure CLI to create a Linux virtual machine
-
+ Title: 'Quickstart: Use the Azure CLI to create a Linux Virtual Machine'
+description: Create a Linux virtual machine using the Azure CLI.
+ Previously updated : 06/01/2022-- Last updated : 02/28/2024+++
-# Quickstart: Create a Linux virtual machine with the Azure CLI
+# Create a Linux virtual machine on Azure
-**Applies to:** :heavy_check_mark: Linux VMs
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/?Microsoft_Azure_CloudNative_clientoptimizations=false&feature.canmodifyextensions=true#view/Microsoft_Azure_CloudNative/SubscriptionSelectionPage.ReactView/tutorialKey/CreateLinuxVMAndSSH)
-This quickstart shows you how to use the Azure CLI to deploy a Linux virtual machine (VM) in Azure. The Azure CLI is used to create and manage Azure resources via either the command line or scripts.
+## Define environment variables
-In this tutorial, we will be installing the latest Debian image. To show the VM in action, you'll connect to it using SSH and install the NGINX web server.
+The first step is to define the environment variables.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Launch Azure Cloud Shell
+```bash
+export RANDOM_ID="$(openssl rand -hex 3)"
+export MY_RESOURCE_GROUP_NAME="myVMResourceGroup$RANDOM_ID"
+export REGION=EastUS
+export MY_VM_NAME="myVM$RANDOM_ID"
+export MY_USERNAME=azureuser
+export MY_VM_IMAGE="Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts-gen2:latest"
+```
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+## Log in to Azure using the CLI
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and select **Enter** to run it.
+To run commands in Azure by using the CLI, you first need to sign in with the `az login` command.
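For example (a minimal sketch; by default `az login` opens a browser for interactive sign-in, and `--use-device-code` is an optional alternative for environments without a browser):

```bash
# Sign in interactively; add --use-device-code if a browser isn't available
az login
```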
-If you prefer to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+## Create a resource group
-## Define Environment Variables
-Environment variables are commonly used in Linux to centralize configuration data to improve consistency and maintainability of the system. Create the following environment variables to specify the names of resources that will be created later in this tutorial:
+A resource group is a container for related resources. All resources must be placed in a resource group. The following command creates a resource group with the previously defined $MY_RESOURCE_GROUP_NAME and $REGION parameters.
-```azurecli-interactive
-export RESOURCE_GROUP_NAME=myResourceGroup
-export LOCATION=eastus
-export VM_NAME=myVM
-export VM_IMAGE=debian
-export ADMIN_USERNAME=azureuser
+```bash
+az group create --name $MY_RESOURCE_GROUP_NAME --location $REGION
```
-## Create a resource group
-
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+Results:
-```azurecli-interactive
-az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
+<!-- expected_similarity=0.3 -->
+```json
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myVMResourceGroup",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "myVMResourceGroup",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null,
+ "type": "Microsoft.Resources/resourceGroups"
+}
```
-## Create virtual machine
+## Create the virtual machine
-Create a VM with the [az vm create](/cli/azure/vm) command.
+To create a VM in this resource group, use the `vm create` command. In the following code example, we provide the `--generate-ssh-keys` flag, which causes the CLI to look for an available SSH key in `~/.ssh`. If one is found, it's used. If not, one is generated and stored in `~/.ssh`. We also provide the `--public-ip-sku Standard` flag to ensure that the machine is accessible via a public IP address. Finally, we deploy the latest `Ubuntu 22.04` image.
-The following example creates a VM and adds a user account. The `--generate-ssh-keys` parameter is used to automatically generate an SSH key, and put it in the default key location (*~/.ssh*). To use a specific set of keys instead, use the `--ssh-key-values` option.
+All other values are configured using environment variables.
-```azurecli-interactive
+```bash
az vm create \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $VM_NAME \
- --image $VM_IMAGE \
- --admin-username $ADMIN_USERNAME \
- --generate-ssh-keys \
- --public-ip-sku Standard
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --name $MY_VM_NAME \
+ --image $MY_VM_IMAGE \
+ --admin-username $MY_USERNAME \
+ --assign-identity \
+ --generate-ssh-keys \
+ --public-ip-sku Standard
```
-It takes a few minutes to create the VM and supporting resources. The following example output shows the VM create operation was successful.
-<!--expected_similarity=0.18-->
+Results:
+
+<!-- expected_similarity=0.3 -->
```json { "fqdns": "",
- "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myVMResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
- "macAddress": "00-0D-3A-23-9A-49",
+ "macAddress": "00-0D-3A-10-4F-70",
"powerState": "VM running", "privateIpAddress": "10.0.0.4",
- "publicIpAddress": "40.68.254.142",
- "resourceGroup": "myResourceGroup"
+ "publicIpAddress": "52.147.208.85",
+ "resourceGroup": "myVMResourceGroup",
+ "zones": ""
} ```
-Make a note of the `publicIpAddress` to use later.
+## Enable Azure AD Login for a Linux virtual machine in Azure
-You can retrieve and store the IP address in the variable IP_ADDRESS with the following command:
+The following code example installs the extension that enables Azure AD login for the Linux VM you created earlier. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines.
-```azurecli-interactive
-export IP_ADDRESS=$(az vm show --show-details --resource-group $RESOURCE_GROUP_NAME --name $VM_NAME --query publicIps --output tsv)
+```bash
+az vm extension set \
+ --publisher Microsoft.Azure.ActiveDirectory \
+ --name AADSSHLoginForLinux \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --vm-name $MY_VM_NAME
```
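Optionally, you can confirm that the extension provisioned successfully before connecting; this is a sketch that lists the extensions on the VM you created earlier:

```bash
# List extensions on the VM and show their provisioning state
az vm extension list \
    --resource-group $MY_RESOURCE_GROUP_NAME \
    --vm-name $MY_VM_NAME \
    --query "[].{Name:name, State:provisioningState}" \
    --output table
```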
-Cost information isn't presented during the virtual machine creation process for CLI like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+## Store the IP address of the VM to use for SSH
-## Install web server
+Run the following command to store the IP address of the VM as an environment variable:
-To see your VM in action, install the NGINX web server. Update your package sources and then install the latest NGINX package. The following command uses run-command to run `sudo apt-get update && sudo apt-get install -y nginx` on the VM:
-
-```azurecli-interactive
-az vm run-command invoke \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $VM_NAME \
- --command-id RunShellScript \
- --scripts "sudo apt-get update && sudo apt-get install -y nginx"
+```bash
+export IP_ADDRESS=$(az vm show --show-details --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_VM_NAME --query publicIps --output tsv)
```
-## Open port 80 for web traffic
-
-By default, only SSH connections are opened when you create a Linux VM in Azure. Use [az vm open-port](/cli/azure/vm) to open TCP port 80 for use with the NGINX web server:
-
-```azurecli-interactive
-az vm open-port --port 80 --resource-group $RESOURCE_GROUP_NAME --name $VM_NAME
-```
-
-## View the web server in action
-Use a web browser of your choice to view the default NGINX welcome page. Use the public IP address of your VM as the web address. The following example shows the default NGINX web site:
+## SSH into the VM
-![Screenshot showing the N G I N X default web page.](./media/quick-create-cli/nginix-welcome-page-debian.png)
+<!--## Export the SSH configuration for use with SSH clients that support OpenSSH & SSH into the VM.
+Log in to Azure Linux VMs with Azure AD supports exporting the OpenSSH certificate and configuration. That means you can use any SSH clients that support OpenSSH-based certificates to sign in through Azure AD. The following example exports the configuration for all IP addresses assigned to the VM:-->
-Alternatively, run the following command to see the NGINX welcome page in the terminal
-
-```azurecli-interactive
- curl $IP_ADDRESS
-```
-
-The following example shows the default NGINX web site in the terminal as successful output:
-<!--expected_similarity=0.8-->
-```html
-<!DOCTYPE html>
-<html>
-<head>
-<title>Welcome to nginx!</title>
-<style>
- body {
- width: 35em;
- margin: 0 auto;
- font-family: Tahoma, Verdana, Arial, sans-serif;
- }
-</style>
-</head>
-<body>
-<h1>Welcome to nginx!</h1>
-<p>If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.</p>
-
-<p>For online documentation and support please refer to
-<a href="http://nginx.org/">nginx.org</a>.<br/>
-Commercial support is available at
-<a href="http://nginx.com/">nginx.com</a>.</p>
-
-<p><em>Thank you for using nginx.</em></p>
-</body>
-</html>
+<!--
+```bash
+yes | az ssh config --file ~/.ssh/config --name $MY_VM_NAME --resource-group $MY_RESOURCE_GROUP_NAME
```
+-->
-## Clean up resources
-
-When no longer needed, you can use the [az group delete](/cli/azure/group) command to remove the resource group, VM, and all related resources.
+You can now SSH into the VM by running the following command in your SSH client of choice:
-```azurecli-interactive
-az group delete --name $RESOURCE_GROUP_NAME --no-wait --yes --verbose
+```bash
+ssh -o StrictHostKeyChecking=no $MY_USERNAME@$IP_ADDRESS
```
-## Next steps
-
-In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a basic web server. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
-
+## Next steps
-> [!div class="nextstepaction"]
-> [Azure Linux virtual machine tutorials](./tutorial-manage-vm.md)
+* [Use Cloud-Init to initialize a Linux VM on first boot](tutorial-automate-vm-deployment.md)
+* [Create custom VM images](tutorial-custom-images.md)
+* [Load Balance VMs](../../load-balancer/quickstart-load-balancer-standard-public-cli.md)
virtual-machines Tutorial Lemp Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-lemp-stack.md
+
+ Title: Tutorial - Deploy a LEMP stack using WordPress on a VM
+description: In this tutorial, you learn how to install the LEMP stack, and WordPress, on a Linux virtual machine in Azure.
+++
+ms.devlang: azurecli
++ Last updated : 2/29/2024++
+#Customer intent: As an IT administrator, I want to learn how to install the LEMP stack so that I can quickly prepare a Linux VM to run web applications.
++
+# Tutorial: Install a LEMP stack on an Azure Linux VM
+
+**Applies to:** :heavy_check_mark: Linux VMs
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#view/Microsoft_Azure_CloudNative/SubscriptionSelectionPage.ReactView/tutorialKey/CreateLinuxVMLAMP)
+
+This article walks you through how to deploy an NGINX web server, Azure MySQL Flexible Server, and PHP (the LEMP stack) on an Ubuntu Linux VM in Azure. To see the LEMP server in action, you can optionally install and configure a WordPress site. In this tutorial you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create an Ubuntu VM
+> * Open ports 80 and 443 for web traffic
+> * Install and secure NGINX, Azure MySQL Flexible Server, and PHP
+> * Verify installation and configuration
+> * Install WordPress
+This setup is for quick tests or proof of concept. For more on the LEMP stack, including recommendations for a production environment, see the [Ubuntu documentation](https://help.ubuntu.com/community/ApacheMySQLPHP).
+
+This tutorial uses the CLI within the [Azure Cloud Shell](../../cloud-shell/overview.md), which is constantly updated to the latest version. To open the Cloud Shell, select **Try it** from the top of any code block.
+
+If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.0.30 or later. Find the version by running the `az --version` command. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+## Variable declaration
+
+First we need to define a few variables that help with the configuration of the LEMP workload.
+
+```bash
+export NETWORK_PREFIX="$(($RANDOM % 254 + 1))"
+export RANDOM_ID="$(openssl rand -hex 3)"
+export MY_RESOURCE_GROUP_NAME="myLEMPResourceGroup$RANDOM_ID"
+export REGION="westeurope"
+export MY_VM_NAME="myVM$RANDOM_ID"
+export MY_VM_USERNAME="azureadmin"
+export MY_VM_SIZE='Standard_DS2_v2'
+export MY_VM_IMAGE='Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts-gen2:latest'
+export MY_PUBLIC_IP_NAME="myPublicIP$RANDOM_ID"
+export MY_DNS_LABEL="mydnslabel$RANDOM_ID"
+export MY_NSG_NAME="myNSG$RANDOM_ID"
+export MY_NSG_SSH_RULE="Allow-Access$RANDOM_ID"
+export MY_VM_NIC_NAME="myVMNic$RANDOM_ID"
+export MY_VNET_NAME="myVNet$RANDOM_ID"
+export MY_VNET_PREFIX="10.$NETWORK_PREFIX.0.0/22"
+export MY_SN_NAME="mySN$RANDOM_ID"
+export MY_SN_PREFIX="10.$NETWORK_PREFIX.0.0/24"
+export MY_MYSQL_DB_NAME="mydb$RANDOM_ID"
+export MY_MYSQL_ADMIN_USERNAME="dbadmin$RANDOM_ID"
+export MY_MYSQL_ADMIN_PW="$(openssl rand -base64 32)"
+export MY_MYSQL_SN_NAME="myMySQLSN$RANDOM_ID"
+export MY_WP_ADMIN_PW="$(openssl rand -base64 32)"
+export MY_WP_ADMIN_USER="wpcliadmin"
+export MY_AZURE_USER=$(az account show --query user.name --output tsv)
+export FQDN="${MY_DNS_LABEL}.${REGION}.cloudapp.azure.com"
+```
+
+<!--```bash
+export MY_AZURE_USER_ID=$(az ad user list --filter "mail eq '$MY_AZURE_USER'" --query "[0].id" -o tsv)
+```-->
+
+## Create a resource group
+
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+The following example creates a resource group named `$MY_RESOURCE_GROUP_NAME` in the location specified by the `$REGION` environment variable.
+
+```bash
+az group create \
+ --name $MY_RESOURCE_GROUP_NAME \
+ --location $REGION -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "myLEMPResourceGroupxxxxxx",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null,
+ "type": "Microsoft.Resources/resourceGroups"
+}
+```
+
+## Setup LEMP networking
+
+## Create an Azure Virtual Network
+
+A virtual network is the fundamental building block for private networks in Azure. Azure Virtual Network enables Azure resources like VMs to securely communicate with each other and the internet.
+Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network named `$MY_VNET_NAME` with a subnet named `$MY_SN_NAME` in the `$MY_RESOURCE_GROUP_NAME` resource group.
+
+```bash
+az network vnet create \
+ --name $MY_VNET_NAME \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --location $REGION \
+ --address-prefix $MY_VNET_PREFIX \
+ --subnet-name $MY_SN_NAME \
+ --subnet-prefixes $MY_SN_PREFIX -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "newVNet": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.19.0.0/22"
+ ]
+ },
+ "enableDdosProtection": false,
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/virtualNetworks/myVNetxxxxxx",
+ "location": "eastus",
+ "name": "myVNetxxxxxx",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "subnets": [
+ {
+ "addressPrefix": "10.19.0.0/24",
+ "delegations": [],
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/virtualNetworks/myVNetxxxxxx/subnets/mySNxxxxxx",
+ "name": "mySNxxxxxx",
+ "privateEndpointNetworkPolicies": "Disabled",
+ "privateLinkServiceNetworkPolicies": "Enabled",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "type": "Microsoft.Network/virtualNetworks/subnets"
+ }
+ ],
+ "type": "Microsoft.Network/virtualNetworks",
+ "virtualNetworkPeerings": []
+ }
+}
+```
+
+## Create an Azure Public IP
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a standard zone-redundant public IPv4 address named `$MY_PUBLIC_IP_NAME` in `$MY_RESOURCE_GROUP_NAME`.
+
+>[!NOTE]
+>The following zone options are valid only in regions that support [availability zones](../../reliability/availability-zones-service-support.md).
+```bash
+az network public-ip create \
+ --name $MY_PUBLIC_IP_NAME \
+ --location $REGION \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --dns-name $MY_DNS_LABEL \
+ --sku Standard \
+ --allocation-method static \
+ --version IPv4 \
+ --zone 1 2 3 -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "publicIp": {
+ "ddosSettings": {
+ "protectionMode": "VirtualNetworkInherited"
+ },
+ "dnsSettings": {
+ "domainNameLabel": "mydnslabelxxxxxx",
+ "fqdn": "mydnslabelxxxxxx.eastus.cloudapp.azure.com"
+ },
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/publicIPAddresses/myPublicIPxxxxxx",
+ "idleTimeoutInMinutes": 4,
+ "ipTags": [],
+ "location": "eastus",
+ "name": "myPublicIPxxxxxx",
+ "provisioningState": "Succeeded",
+ "publicIPAddressVersion": "IPv4",
+ "publicIPAllocationMethod": "Static",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "sku": {
+ "name": "Standard",
+ "tier": "Regional"
+ },
+ "type": "Microsoft.Network/publicIPAddresses",
+ "zones": [
+ "1",
+ "2",
+ "3"
+ ]
+ }
+}
+```
+
+## Create an Azure Network Security Group
+
+Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. To learn more about network security groups, see [Network security group overview](../../virtual-network/network-security-groups-overview.md).
+
+```bash
+az network nsg create \
+ --name $MY_NSG_NAME \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --location $REGION -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "NewNSG": {
+ "defaultSecurityRules":
+ {
+ "access": "Allow",
+ "description": "Allow inbound traffic from all VMs in VNET",
+ "destinationAddressPrefix": "VirtualNetwork",
+ "destinationAddressPrefixes": [],
+ "destinationPortRange": "*",
+ "destinationPortRanges": [],
+ "direction": "Inbound",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroup104/providers/Microsoft.Network/networkSecurityGroups/protect-vms/defaultSecurityRules/AllowVnetInBound",
+ "name": "AllowVnetInBound",
+ "priority": 65000,
+ "protocol": "*",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroup104",
+ "sourceAddressPrefix": "VirtualNetwork",
+ "sourceAddressPrefixes": [],
+ "sourcePortRange": "*",
+ "sourcePortRanges": [],
+ "type": "Microsoft.Network/networkSecurityGroups/defaultSecurityRules"
+ },
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroup104/providers/Microsoft.Network/networkSecurityGroups/protect-vms",
+ "location": "eastus",
+ "name": "protect-vms",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroup104",
+ "securityRules": [],
+ "type": "Microsoft.Network/networkSecurityGroups"
+ }
+}
+```
+
+## Create Azure Network Security Group rules
+
+Create a rule to allow connections to the virtual machine on port 22 for SSH and ports 80, 443 for HTTP and HTTPS. An extra rule is created to allow all ports for outbound connections. Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create a network security group rule.
+
+```bash
+az network nsg rule create \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --nsg-name $MY_NSG_NAME \
+ --name $MY_NSG_SSH_RULE \
+ --access Allow \
+ --protocol Tcp \
+ --direction Inbound \
+ --priority 100 \
+ --source-address-prefix '*' \
+ --source-port-range '*' \
+ --destination-address-prefix '*' \
+ --destination-port-range 22 80 443 -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "access": "Allow",
+ "destinationAddressPrefix": "*",
+ "destinationAddressPrefixes": [],
+ "destinationPortRanges": [
+ "22",
+ "80",
+ "443"
+ ],
+ "direction": "Inbound",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/networkSecurityGroups/myNSGNamexxxxxx/securityRules/Allow-Accessxxxxxx",
+ "name": "Allow-Accessxxxxxx",
+ "priority": 100,
+ "protocol": "Tcp",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "sourceAddressPrefix": "*",
+ "sourceAddressPrefixes": [],
+ "sourcePortRange": "*",
+ "sourcePortRanges": [],
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules"
+}
+```
+
+## Create an Azure Network Interface
+
+Use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
+
+```bash
+az network nic create \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --name $MY_VM_NIC_NAME \
+ --location $REGION \
+ --ip-forwarding false \
+ --subnet $MY_SN_NAME \
+ --vnet-name $MY_VNET_NAME \
+ --network-security-group $MY_NSG_NAME \
+ --public-ip-address $MY_PUBLIC_IP_NAME -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "NewNIC": {
+ "auxiliaryMode": "None",
+ "auxiliarySku": "None",
+ "disableTcpStateTracking": false,
+ "dnsSettings": {
+ "appliedDnsServers": [],
+ "dnsServers": []
+ },
+ "enableAcceleratedNetworking": false,
+ "enableIPForwarding": false,
+ "hostedWorkloads": [],
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/networkInterfaces/myVMNicNamexxxxxx",
+ "ipConfigurations": [
+ {
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/networkInterfaces/myVMNicNamexxxxxx/ipConfigurations/ipconfig1",
+ "name": "ipconfig1",
+ "primary": true,
+ "privateIPAddress": "10.19.0.4",
+ "privateIPAddressVersion": "IPv4",
+ "privateIPAllocationMethod": "Dynamic",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "subnet": {
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/virtualNetworks/myVNetxxxxxx/subnets/mySNxxxxxx",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx"
+ },
+ "type": "Microsoft.Network/networkInterfaces/ipConfigurations"
+ }
+ ],
+ "location": "eastus",
+ "name": "myVMNicNamexxxxxx",
+ "networkSecurityGroup": {
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/networkSecurityGroups/myNSGNamexxxxxx",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx"
+ },
+ "nicType": "Standard",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "tapConfigurations": [],
+ "type": "Microsoft.Network/networkInterfaces",
+ "vnetEncryptionSupported": false
+ }
+}
+```
+
+## Cloud-init overview
+
+Cloud-init is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to install packages and write files, or to configure users and security. Because cloud-init runs during the initial boot process, no additional steps or agents are required to apply your configuration.
+
+Cloud-init also works across distributions. For example, you don't use apt-get install or yum install to install a package. Instead, you define a list of packages to install, and cloud-init automatically uses the native package management tool for the distro you select.
+
+We're working with our partners to get cloud-init included and working in the images that they provide to Azure. For detailed information about cloud-init support for each distribution, see [Cloud-init support for VMs in Azure](./using-cloud-init.md).
+
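+As a minimal illustration (not part of this tutorial's flow), the following sketch writes a tiny cloud-config that only updates the package index, upgrades existing packages, and installs nginx; cloud-init resolves the package through the distro's native package manager. The file name `minimal-cloud-init.txt` is hypothetical.
+
+```bash
+# Sketch only: a minimal cloud-config that works across distributions.
+# The file name is illustrative and isn't used elsewhere in this article.
+cat << EOF > minimal-cloud-init.txt
+#cloud-config
+package_update: true
+package_upgrade: true
+packages:
+  - nginx
+EOF
+```
+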
+### Create cloud-init config file
+
+To see cloud-init in action, create a VM that installs a LEMP stack and runs a simple WordPress app secured with an SSL certificate. The following cloud-init configuration installs the required packages, creates the WordPress website, and then initializes and starts the website.
+
+```bash
+cat << EOF > cloud-init.txt
+#cloud-config
+# Install, update, and upgrade packages
+package_upgrade: true
+package_update: true
+package_reboot_if_required: true
+# Install packages
+packages:
+ - vim
+ - certbot
+ - python3-certbot-nginx
+ - bash-completion
+ - nginx
+ - mysql-client
+ - php
+ - php-cli
+ - php-bcmath
+ - php-curl
+ - php-imagick
+ - php-intl
+ - php-json
+ - php-mbstring
+ - php-mysql
+ - php-gd
+ - php-xml
+ - php-xmlrpc
+ - php-zip
+ - php-fpm
+write_files:
+ - owner: www-data:www-data
+ path: /etc/nginx/sites-available/default.conf
+ content: |
+ server {
+ listen 80 default_server;
+ listen [::]:80 default_server;
+ root /var/www/html;
+ server_name $FQDN;
+ }
+ - owner: www-data:www-data
+ path: /etc/nginx/sites-available/$FQDN.conf
+ content: |
+ upstream php {
+ server unix:/run/php/php8.1-fpm.sock;
+ }
+ server {
+ listen 443 ssl http2;
+ listen [::]:443 ssl http2;
+ server_name $FQDN;
+ ssl_certificate /etc/letsencrypt/live/$FQDN/fullchain.pem;
+ ssl_certificate_key /etc/letsencrypt/live/$FQDN/privkey.pem;
+ root /var/www/$FQDN;
+ index index.php;
+ location / {
+ try_files \$uri \$uri/ /index.php?\$args;
+ }
+ location ~ \.php$ {
+ include fastcgi_params;
+ fastcgi_intercept_errors on;
+ fastcgi_pass php;
+ fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
+ }
+ location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
+ expires max;
+ log_not_found off;
+ }
+ location = /favicon.ico {
+ log_not_found off;
+ access_log off;
+ }
+ location = /robots.txt {
+ allow all;
+ log_not_found off;
+ access_log off;
+ }
+ }
+ server {
+ listen 80;
+ listen [::]:80;
+ server_name $FQDN;
+ return 301 https://$FQDN\$request_uri;
+ }
+runcmd:
+ - sed -i 's/;cgi.fix_pathinfo.*/cgi.fix_pathinfo = 1/' /etc/php/8.1/fpm/php.ini
+ - sed -i 's/^max_execution_time \= .*/max_execution_time \= 300/g' /etc/php/8.1/fpm/php.ini
+ - sed -i 's/^upload_max_filesize \= .*/upload_max_filesize \= 64M/g' /etc/php/8.1/fpm/php.ini
+ - sed -i 's/^post_max_size \= .*/post_max_size \= 64M/g' /etc/php/8.1/fpm/php.ini
+ - systemctl restart php8.1-fpm
+ - systemctl restart nginx
+ - certbot --nginx certonly --non-interactive --agree-tos -d $FQDN -m dummy@dummy.com --redirect
+ - ln -s /etc/nginx/sites-available/$FQDN.conf /etc/nginx/sites-enabled/
+ - rm /etc/nginx/sites-enabled/default
+ - systemctl restart nginx
+ - curl --url https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar --output /tmp/wp-cli.phar
+ - mv /tmp/wp-cli.phar /usr/local/bin/wp
+ - chmod +x /usr/local/bin/wp
+ - wp cli update
+ - mkdir -m 0755 -p /var/www/$FQDN
+ - chown -R azureadmin:www-data /var/www/$FQDN
+ - sudo -u azureadmin -i -- wp core download --path=/var/www/$FQDN
+ - sudo -u azureadmin -i -- wp config create --dbhost=$MY_MYSQL_DB_NAME.mysql.database.azure.com --dbname=wp001 --dbuser=$MY_MYSQL_ADMIN_USERNAME --dbpass="$MY_MYSQL_ADMIN_PW" --path=/var/www/$FQDN
+ - sudo -u azureadmin -i -- wp core install --url=$FQDN --title="Azure hosted blog" --admin_user=$MY_WP_ADMIN_USER --admin_password="$MY_WP_ADMIN_PW" --admin_email=$MY_AZURE_USER --path=/var/www/$FQDN
+ - sudo -u azureadmin -i -- wp plugin update --all --path=/var/www/$FQDN
+ - chmod 600 /var/www/$FQDN/wp-config.php
+ - mkdir -p -m 0775 /var/www/$FQDN/wp-content/uploads
+ - chgrp www-data /var/www/$FQDN/wp-content/uploads
+EOF
+```
+
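+Optionally, you can check that the generated file is valid cloud-config before using it. The following is a sketch that assumes cloud-init is installed on the machine where you run it (it isn't required for the rest of this article); older cloud-init releases expose the same check as `cloud-init devel schema`.
+
+```bash
+# Optional sketch: validate cloud-init.txt against the cloud-config schema.
+# Requires a local cloud-init installation (an assumption, not used elsewhere).
+cloud-init schema --config-file cloud-init.txt
+```
+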
+## Create an Azure Private DNS Zone for Azure MySQL Flexible Server
+
+Azure Private DNS Zone integration allows you to resolve private DNS names within the current VNet or any in-region peered VNet where the private DNS zone is linked. Use [az network private-dns zone create](/cli/azure/network/private-dns/zone#az-network-private-dns-zone-create) to create the private DNS zone.
+
+```bash
+az network private-dns zone create \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --name $MY_DNS_LABEL.private.mysql.database.azure.com -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/privateDnsZones/mydnslabelxxxxxx.private.mysql.database.azure.com",
+ "location": "global",
+ "maxNumberOfRecordSets": 25000,
+ "maxNumberOfVirtualNetworkLinks": 1000,
+ "maxNumberOfVirtualNetworkLinksWithRegistration": 100,
+ "name": "mydnslabelxxxxxx.private.mysql.database.azure.com",
+ "numberOfRecordSets": 1,
+ "numberOfVirtualNetworkLinks": 0,
+ "numberOfVirtualNetworkLinksWithRegistration": 0,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "tags": null,
+ "type": "Microsoft.Network/privateDnsZones"
+}
+```
+
+## Create an Azure Database for MySQL - Flexible Server
+
+Azure Database for MySQL - Flexible Server is a managed service that you can use to run, manage, and scale highly available MySQL servers in the cloud. Create a flexible server with the [az mysql flexible-server create](../../mysql/flexible-server/quickstart-create-server-cli.md#create-an-azure-database-for-mysql-flexible-server-instance) command. A server can contain multiple databases. The following command creates a server using service defaults and variable values from your Azure CLI's local environment:
+
+```bash
+az mysql flexible-server create \
+ --admin-password $MY_MYSQL_ADMIN_PW \
+ --admin-user $MY_MYSQL_ADMIN_USERNAME \
+ --auto-scale-iops Disabled \
+ --high-availability Disabled \
+ --iops 500 \
+ --location $REGION \
+ --name $MY_MYSQL_DB_NAME \
+ --database-name wp001 \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --sku-name Standard_B2s \
+ --storage-auto-grow Disabled \
+ --storage-size 20 \
+ --subnet $MY_MYSQL_SN_NAME \
+ --private-dns-zone $MY_DNS_LABEL.private.mysql.database.azure.com \
+ --tier Burstable \
+ --version 8.0.21 \
+ --vnet $MY_VNET_NAME \
+ --yes -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "databaseName": "wp001",
+ "host": "mydbxxxxxx.mysql.database.azure.com",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.DBforMySQL/flexibleServers/mydbxxxxxx",
+ "location": "East US",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "skuname": "Standard_B2s",
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Network/virtualNetworks/myVNetxxxxxx/subnets/myMySQLSNxxxxxx",
+ "username": "dbadminxxxxxx",
+ "version": "8.0.21"
+}
+```
+
+```bash
+echo "Your MySQL user $MY_MYSQL_ADMIN_USERNAME password is: $MY_MYSQL_ADMIN_PW"
+```
+
+The server you created has the following attributes:
+
+* The server name, admin username, admin password, resource group name, and location are already specified in the local context environment of Cloud Shell. They're created in the same location as your resource group and other Azure components.
+* Service defaults for remaining server configurations: compute tier (Burstable), compute size/SKU (Standard_B2s), backup retention period (7 days), and MySQL version (8.0.21)
+* The default connectivity method is Private access (VNet Integration) with a linked virtual network and an auto-generated subnet.
+
+> [!NOTE]
+> The connectivity method cannot be changed after you create the server. For example, if you selected `Private access (VNet Integration)` during creation, you can't change it to `Public access (allowed IP addresses)` afterward. We highly recommend creating a server with Private access to securely access your server using VNet Integration. Learn more about Private access in the [concepts article](../../mysql/flexible-server/concepts-networking-vnet.md).
+
+If you'd like to change any defaults, refer to the Azure CLI [reference documentation](../../mysql/flexible-server/quickstart-create-server-cli.md) for the complete list of configurable CLI parameters; a short example of adjusting one setting follows below.
+
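+The following is a minimal sketch for changing one default after the server exists; it assumes the `az mysql flexible-server update` command's `--backup-retention` parameter, which isn't used elsewhere in this article.
+
+```bash
+# Sketch only: raise the backup retention period from the 7-day default to 14 days
+# on the server created earlier in this article.
+az mysql flexible-server update \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --name $MY_MYSQL_DB_NAME \
+    --backup-retention 14 -o JSON
+```
+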
+## Check the Azure Database for MySQL - Flexible Server status
+
+It takes a few minutes to create the Azure Database for MySQL - Flexible Server and supporting resources.
+
+```bash
+runtime="10 minute";
+endtime=$(date -ud "$runtime" +%s);
+while [[ $(date -u +%s) -le $endtime ]]; do
+ STATUS=$(az mysql flexible-server show -g $MY_RESOURCE_GROUP_NAME -n $MY_MYSQL_DB_NAME --query state -o tsv);
+ echo $STATUS;
+ if [ "$STATUS" == 'Ready' ]; then
+ break;
+ else
+ sleep 10;
+ fi;
+done
+```
+
+## Configure server parameters in Azure Database for MySQL - Flexible Server
+
+You can manage the Azure Database for MySQL - Flexible Server configuration by using server parameters. The server parameters are configured with default and recommended values when you create the server.
+
+Show server parameter details:
+
+Run the [az mysql flexible-server parameter show](../../mysql/flexible-server/how-to-configure-server-parameters-cli.md) command to show details about any particular parameter for the server.
+
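+For example, the following sketch displays the current value of the `require_secure_transport` parameter, which the next step modifies:
+
+```bash
+# Sketch: show details for a single server parameter on the flexible server
+# created earlier in this article.
+az mysql flexible-server parameter show \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    --server-name $MY_MYSQL_DB_NAME \
+    --name require_secure_transport -o JSON
+```
+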
+## Disable Azure Database for MySQL - Flexible Server SSL connection parameter for WordPress integration
+
+Modify a server parameter value:
+
+You can also modify the value of a certain server parameter, which updates the underlying configuration value for the MySQL server engine. To update the server parameter, use the [az mysql flexible-server parameter set](../../mysql/flexible-server/how-to-configure-server-parameters-cli.md#modify-a-server-parameter-value) command.
+
+```bash
+az mysql flexible-server parameter set \
+ -g $MY_RESOURCE_GROUP_NAME \
+ -s $MY_MYSQL_DB_NAME \
+ -n require_secure_transport -v "OFF" -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "allowedValues": "ON,OFF",
+ "currentValue": "OFF",
+ "dataType": "Enumeration",
+ "defaultValue": "ON",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.DBforMySQL/flexibleServers/mydbxxxxxx/configurations/require_secure_transport",
+ "isConfigPendingRestart": "False",
+ "isDynamicConfig": "True",
+ "isReadOnly": "False",
+ "name": "require_secure_transport",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "source": "user-override",
+ "systemData": null,
+ "type": "Microsoft.DBforMySQL/flexibleServers/configurations",
+ "value": "OFF"
+}
+```
+
+## Create an Azure Linux Virtual Machine
+
+The following example creates a VM named `$MY_VM_NAME` and creates SSH keys if they don't already exist in a default key location. The command also sets `$MY_VM_USERNAME` as an administrator user name.
+
+To improve the security of Linux virtual machines in Azure, you can integrate with Azure Active Directory authentication. Now you can use Azure AD as a core authentication platform. You can also SSH into the Linux VM by using Azure AD and OpenSSH certificate-based authentication. This functionality allows organizations to manage access to VMs with Azure role-based access control and Conditional Access policies.
+
+Create a VM with the [az vm create](/cli/azure/vm#az-vm-create) command.
+
+```bash
+az vm create \
+ --name $MY_VM_NAME \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --admin-username $MY_VM_USERNAME \
+ --authentication-type ssh \
+ --assign-identity \
+ --image $MY_VM_IMAGE \
+ --location $REGION \
+ --nic-delete-option Delete \
+ --os-disk-caching ReadOnly \
+ --os-disk-delete-option Delete \
+ --os-disk-size-gb 30 \
+ --size $MY_VM_SIZE \
+ --generate-ssh-keys \
+ --storage-sku Premium_LRS \
+ --nics $MY_VM_NIC_NAME \
+ --custom-data cloud-init.txt -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "fqdns": "mydnslabelxxxxxx.eastus.cloudapp.azure.com",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Compute/virtualMachines/myVMNamexxxxxx",
+ "identity": {
+ "principalId": "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
+ "tenantId": "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz",
+ "type": "SystemAssigned",
+ "userAssignedIdentities": null
+ },
+ "location": "eastus",
+ "macAddress": "60-45-BD-D8-1D-84",
+ "powerState": "VM running",
+ "privateIpAddress": "10.19.0.4",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "zones": ""
+}
+```
+
+## Check the Azure Linux Virtual Machine status
+
+It takes a few minutes to create the VM and supporting resources and for cloud-init to finish applying the configuration. The following loop connects to the VM over SSH and waits until cloud-init reports a status of done. The VM must have a running [VM agent](../extensions/agent-linux.md) to install the VM extension used later in this article.
+
+```bash
+runtime="5 minute";
+endtime=$(date -ud "$runtime" +%s);
+while [[ $(date -u +%s) -le $endtime ]]; do
+ STATUS=$(ssh -o StrictHostKeyChecking=no $MY_VM_USERNAME@$FQDN "cloud-init status --wait");
+ echo $STATUS;
+ if [[ "$STATUS" == *'status: done'* ]]; then
+ break;
+ else
+ sleep 10;
+ fi;
+done
+```
+
+<!--
+## Assign Azure AD RBAC for Azure AD login for Linux Virtual Machine
+The below command uses [az role assignment create](https://learn.microsoft.com/cli/azure/role/assignment#az-role-assignment-create) to assign the `Virtual Machine Administrator Login` role to the VM for your current Azure user.
+```bash
+export MY_RESOURCE_GROUP_ID=$(az group show --resource-group $MY_RESOURCE_GROUP_NAME --query id -o tsv)
+az role assignment create \
+ --role "Virtual Machine Administrator Login" \
+ --assignee $MY_AZURE_USER_ID \
+ --scope $MY_RESOURCE_GROUP_ID -o JSON
+```
+Results:
+<!-- expected_similarity=0.3
+```JSON
+{
+ "condition": null,
+ "conditionVersion": null,
+ "createdBy": null,
+ "createdOn": "2023-09-04T09:29:16.895907+00:00",
+ "delegatedManagedIdentityResourceId": null,
+ "description": null,
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Authorization/roleAssignments/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
+ "name": "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
+ "principalId": "zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz",
+ "principalType": "User",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "roleDefinitionId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Authorization/roleDefinitions/zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz",
+ "scope": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx",
+ "type": "Microsoft.Authorization/roleAssignments",
+ "updatedBy": "wwwwwwww-wwww-wwww-wwww-wwwwwwwwwwww",
+ "updatedOn": "2023-09-04T09:29:17.237445+00:00"
+}
+```
+-->
+
+<!--
+## Export the SSH configuration for use with SSH clients that support OpenSSH
+Login to Azure Linux VMs with Azure AD supports exporting the OpenSSH certificate and configuration. That means you can use any SSH clients that support OpenSSH-based certificates to sign in through Azure AD. The following example exports the configuration for all IP addresses assigned to the VM:
+```bash
+az ssh config --file ~/.ssh/azure-config --name $MY_VM_NAME --resource-group $MY_RESOURCE_GROUP_NAME
+```
+-->
+
+## Enable Azure AD login for a Linux Virtual Machine in Azure
+
+The following command installs the extension that enables Azure AD login for a Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines.
+
+```bash
+az vm extension set \
+ --publisher Microsoft.Azure.ActiveDirectory \
+ --name AADSSHLoginForLinux \
+ --resource-group $MY_RESOURCE_GROUP_NAME \
+ --vm-name $MY_VM_NAME -o JSON
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```JSON
+{
+ "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": null,
+ "forceUpdateTag": null,
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myLEMPResourceGroupxxxxxx/providers/Microsoft.Compute/virtualMachines/myVMNamexxxxxx/extensions/AADSSHLoginForLinux",
+ "instanceView": null,
+ "location": "eastus",
+ "name": "AADSSHLoginForLinux",
+ "protectedSettings": null,
+ "protectedSettingsFromKeyVault": null,
+ "provisioningState": "Succeeded",
+ "publisher": "Microsoft.Azure.ActiveDirectory",
+ "resourceGroup": "myLEMPResourceGroupxxxxxx",
+ "settings": null,
+ "suppressFailures": null,
+ "tags": null,
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "typeHandlerVersion": "1.0",
+ "typePropertiesType": "AADSSHLoginForLinux"
+}
+```
+
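+With the extension installed, you can sign in to the VM with your Azure AD credentials. The following sketch assumes the Azure CLI `ssh` extension (add it with `az extension add --name ssh` if it isn't present) and that your account has a VM login role, such as `Virtual Machine Administrator Login`, on the VM or its resource group.
+
+```bash
+# Sketch only: open an SSH session to the VM using Azure AD authentication.
+# Requires the Azure CLI ssh extension and an appropriate VM login role.
+az ssh vm \
+    --resource-group $MY_RESOURCE_GROUP_NAME \
+    -n $MY_VM_NAME
+```
+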
+## Check and browse your WordPress website
+
+[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure services, such as Virtual Machines, App Service, and Azure Kubernetes Service (AKS); this article runs it on a virtual machine.
+
+This WordPress setup is only for proof of concept. To install the latest WordPress in production with recommended security settings, see the [WordPress documentation](https://codex.wordpress.org/Main_Page).
+
+Validate that the application is running by curling the application URL:
+
+```bash
+runtime="5 minute";
+endtime=$(date -ud "$runtime" +%s);
+while [[ $(date -u +%s) -le $endtime ]]; do
+  if curl -I -s -f $FQDN > /dev/null ; then
+    curl -L -s -f $FQDN 2> /dev/null | head -n 9
+ break
+ else
+ sleep 10
+ fi;
+done
+```
+
+Results:
+
+<!-- expected_similarity=0.3 -->
+```HTML
+<!DOCTYPE html>
+<html lang="en-US">
+<head>
+ <meta charset="UTF-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+<meta name='robots' content='max-image-preview:large' />
+<title>Azure hosted blog</title>
+<link rel="alternate" type="application/rss+xml" title="Azure hosted blog &raquo; Feed" href="https://mydnslabelxxxxxx.eastus.cloudapp.azure.com/?feed=rss2" />
+<link rel="alternate" type="application/rss+xml" title="Azure hosted blog &raquo; Comments Feed" href="https://mydnslabelxxxxxx.eastus.cloudapp.azure.com/?feed=comments-rss2" />
+```
+
+```bash
+echo "You can now visit your web server at https://$FQDN"
+```
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
When you create a virtual network gateway, you specify the gateway SKU that you
## <a name="pricing"></a>Pricing
+You pay for two things: the hourly compute costs for the virtual network gateway, and the egress data transfer from the virtual network gateway. Pricing information can be found on the [Pricing](https://azure.microsoft.com/pricing/details/vpn-gateway) page. For legacy gateway SKU pricing, see the [ExpressRoute pricing page](https://azure.microsoft.com/pricing/details/expressroute) and scroll to the **Virtual Network Gateways** section.
-For more information about gateway SKUs for VPN Gateway, see [Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
+**Virtual network gateway compute costs**<br>Each virtual network gateway has an hourly compute cost. The price is based on the gateway SKU that you specify when you create a virtual network gateway. The cost is for the gateway itself and is in addition to the data transfer that flows through the gateway. Cost of an active-active setup is the same as active-passive. For more information about gateway SKUs for VPN Gateway, see [Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
+
+**Data transfer costs**<br>Data transfer costs are calculated based on egress traffic from the source virtual network gateway.
+
+* If you're sending traffic to your on-premises VPN device, it's charged at the Internet egress data transfer rate.
+* If you're sending traffic between virtual networks in different regions, the pricing is based on the region.
+* If you're sending traffic only between virtual networks in the same region, there are no data transfer costs.
## <a name="new"></a>What's new?